Problem description:

I am working on a vector drawing application (in Java) and I am struggling with the separation between my model classes and the view/controller classes.

Some background:

You can draw different shapes:

rectangles, lines and pie segments

There are 4 tools to manipulate the shapes on the canvas:

scale-tool, move-tool, rotate-tool and morph-tool

For this question the morph tool is the most interesting one:

It lets you change a shape by dragging one of its points, adjusting the other properties as shown in this graphic:

These transformation rules are different for each shape. I think they are part of the model's business logic, but they also need to be exposed to the view/controller (the tool classes) in some way so the tools can apply the correct one.

Additionally, the shapes are internally represented with different values (a rough sketch follows the list below):

- The rectangle is stored as center, width, height, rotation

- The line is stored as start and end point

- The pie segment is stored as center, radius, angle1, angle2
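
Just to illustrate, the internal data could be sketched roughly like this (class and field names are invented here, not actual project code):

    // Rough sketch of the internal data only; class and field names are invented.
    class RectangleShape {
        double centerX, centerY;     // center
        double width, height;
        double rotation;             // rotation angle (unit is an assumption)
    }

    class LineShape {
        double startX, startY;       // start point
        double endX, endY;           // end point
    }

    class PieSegmentShape {
        double centerX, centerY;     // center
        double radius;
        double angle1, angle2;
    }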

I plan to add more shapes in the future, like stars, speech bubbles or arrows, each with their own control points.

I also plan to add more tools in the future, like rotating or scaling groups of shapes.

The control points for each tool are different. E.g. when using the scale tool, you cannot grab the center point, but each scaling control point needs to be associated with one pivot point (or several, so the user can choose).

For simple shapes like rectangle, line and pie, the control points are the same for each instance of the class, but future shapes like a Bezier path or a star (with a configurable spike count) would have a different number of control points per instance.

So the question is: what is a good way to model and implement these control points?

As they are slightly different for each tool and carry some tool/controller-specific data, they belong to the tool/controller in some way. But as they are also specific to each type of shape and carry very important domain logic, they also belong to the model.

I would like to avoid the combinatorial explosion of adding a special type of control point for each tool/shape combination whenever a tool or shape is added.


Update: To give another example: in the future I may have an idea for a new shape I want to support, e.g. the arc. It is similar to the pie segment but looks a bit different and behaves completely differently when dragging the control points.

To implement this I would like to be able to just create an ArcShape class implementing my Shape interface and be done.

Answer:

Basic Considerations

First of all let us make some definitions for simplicity.

Entity is a Domain Model object, which defines all the structure and behaviour, i.e. logic. EntityUI is the graphical control that represents the Entity in the UI.

So basically, for the Shape classes I think the ShapeUI must be aware of the structure of the Shape. The structure mainly consists of the control points, I guess. In other words, having all the information about the control points (maybe vectors in the future), the ShapeUI will be able to draw itself on the UI.

Initial Suggestions

What I would suggest for the Shape classes is that the Shape class defines all the behaviour. The ShapeUI class will be aware of the Shape class and keep a reference to the one it is representing, through which it has access to the control points and can manipulate them, e.g. set their locations. The Observer pattern practically asks to be used in this context: the Shape class plays the Observable role, and the ShapeUI implements Observer and subscribes to the corresponding Shape object.
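
A minimal sketch of that wiring, assuming a hand-rolled listener interface rather than java.util.Observable (all names here are placeholders):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: Shape is observable, ShapeUI observes and redraws on notification.
    interface ShapeListener {
        void shapeChanged(Shape shape);
    }

    abstract class Shape {
        private final List<ShapeListener> listeners = new ArrayList<>();

        void addListener(ShapeListener listener) {
            listeners.add(listener);
        }

        // Subclasses call this after their control-point state has changed.
        protected void notifyChanged() {
            for (ShapeListener listener : listeners) {
                listener.shapeChanged(this);
            }
        }
    }

    class ShapeUI implements ShapeListener {
        private final Shape shape;       // the Entity this view represents

        ShapeUI(Shape shape) {
            this.shape = shape;
            shape.addListener(this);     // subscribe to the model
        }

        @Override
        public void shapeChanged(Shape changed) {
            // repaint this view from the shape's current control points
        }
    }

The ShapeUI forwards user input to the Shape and redraws only when notified, so the view never duplicates the geometry logic.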

So basically, the ShapeUI object will handle all UI operations and will be responsible for updating the Shape parameters, e.g. control point locations. As soon as a location update occurs, the Shape object executes its logic upon the state change and then blindly (without being aware of the ShapeUI) notifies the ShapeUI about the updated state, and the ShapeUI draws the new state. This way you get a loosely coupled model and view.

As for the Tools, my own opinion is that each Tool must know how to manipulate each type of Shape, i.e. the per-shape manipulation logic must be implemented inside the Tool class. For decoupling the view and the model, it is pretty much the same as for the Shape. The ToolUI class handles where the cursor was clicked, which ShapeUI it was clicked on, which control point was clicked, etc. The ToolUI passes this information to the appropriate Tool object, which then applies the logic based on the received parameters.

Handling Different Shape Types

Now when it comes to a Tool treating different Shapes each in its own way, I think the Abstract Factory pattern steps in: each Tool implements an Abstract Factory that provides a manipulation implementation for each type of Shape.
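
One way to read that suggestion, as a sketch only (Manipulator is an invented abstraction, and the shape classes are stubbed here just to keep the example self-contained):

    // Sketch: each Tool acts as an abstract factory of per-shape manipulators.
    class RectangleShape { /* center, width, height, rotation */ }
    class LineShape { /* start and end point */ }
    class PieSegmentShape { /* center, radius, angle1, angle2 */ }

    interface Manipulator {
        // apply this tool's rule when control point 'index' is dragged to (newX, newY)
        void drag(int index, double newX, double newY);
    }

    interface Tool {
        // one factory method per supported shape type
        Manipulator createFor(RectangleShape shape);
        Manipulator createFor(LineShape shape);
        Manipulator createFor(PieSegmentShape shape);
    }

    class MorphTool implements Tool {
        public Manipulator createFor(RectangleShape shape) {
            return (index, x, y) -> { /* morph rule for rectangles */ };
        }
        public Manipulator createFor(LineShape shape) {
            return (index, x, y) -> { /* morph rule for lines */ };
        }
        public Manipulator createFor(PieSegmentShape shape) {
            return (index, x, y) -> { /* morph rule for pie segments */ };
        }
    }

Each concrete Tool then keeps its per-shape manipulation logic bundled in one place.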

Summary

Based on what I suggested, here is the draft Domain Model:

To convey the whole idea behind my suggestions, I am also posting the sequence diagram for a specific use case:

Using ToolUI the user clicks on ShapeUI's ControlPointUI

Answer:

If I understand correctly, here is what we have:

  • different figures which all have control points
  • the UI allows the user to draw figures and drag the control points

My advice here is that what characterizes a figure goes in the Model layer, and that the UI part goes in the View/Controller one.

One step further for the model:

  • figures should implement an interface:

    public interface Figure {
        List<Segment> segments();
        List<ControlPoint> controlPoints();
        void drag(ControlPoint point, Pos newPos);
        void rotate(ControlPoint point, Pos newPos, Pos center); // or rotate(Pos center, double angle);
    }
    
  • Segment is an abstraction that can represent a line segment, an arc or a Bezier curve

  • a ControlPoint is meaningful to its Figure implementation and has a current Pos

    public interface ControlPoint{
        Figure parent();
        void drag(Pos newPos); // unsure if it must exist in both interfaces
        Pos position();
        ToolHint toolHint();
    }
    
  • the ToolHint should be an indication of which tool can use the control point and for which usage - per your requirement, the rotate tool should consider the center as special.

  • a Pos represents x,y coordinates (a minimal sketch of these supporting types follows after this list)
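
The supporting types are only named above; here is a minimal sketch of what they could look like (an assumption, not a prescription):

    import java.util.List;

    // Sketch of the supporting model types referred to in the list above.
    final class Pos {
        final double x, y;
        Pos(double x, double y) { this.x = x; this.y = y; }
    }

    // Which tool(s) may grab a control point, and in which role.
    enum ToolHint { MORPH, MOVE, SCALE_HANDLE, ROTATE_HANDLE, ROTATION_CENTER }

    // A drawable primitive: a line segment, an arc or a Bezier curve.
    // Real implementations would carry curve-specific data as well.
    interface Segment {
        List<Pos> points();   // the defining points, enough for the UI to draw it
    }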

That way the UI does not have to know anything about what the figures actually are.

To draw a Figure, the UI gets the list of Segments, simply draws each Segment independently, and adds a mark at each control point. When a control point is dragged, the UI gives the new position to the Figure and redraws it. It should be able to erase a Figure before redrawing it at its new position, or alternatively (simpler but slower) it could redraw everything on each operation.
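
A sketch of that view-side code, relying only on the Figure, Segment, ControlPoint and Pos types above (drawSegment is a placeholder):

    import java.awt.Graphics2D;

    // Sketch: the view draws segments and handles, and forwards drags to the model.
    class FigureRenderer {
        void paint(Graphics2D g, Figure figure) {
            for (Segment segment : figure.segments()) {
                drawSegment(g, segment);                         // per-kind drawing
            }
            for (ControlPoint cp : figure.controlPoints()) {
                Pos p = cp.position();
                g.fillRect((int) p.x - 3, (int) p.y - 3, 6, 6);  // small square handle
            }
        }

        void onControlPointDragged(ControlPoint cp, Pos newPos) {
            cp.parent().drag(cp, newPos);   // the model applies its own rules,
            // then everything is repainted (the simpler-but-slower variant)
        }

        private void drawSegment(Graphics2D g, Segment segment) {
            // line segment -> g.drawLine, arc -> g.drawArc, Bezier -> a path, etc.
        }
    }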

With the drag method, we can only drag a single control point on a single shape. It is easily extensible, but extensions will have to be added for each tool. For example, I have already added the rotate method that allows rotating a shape by moving one control point around a defined center. You could also add a scale method.

Multiple shapes

If you want to apply a transformation to a set of shapes, you could use a subclass of the rectangle. You build a rectangle with sides parallel to the coordinate axes that contains all the shapes. I recommend adding a method to Figure that returns a (reasonably small) enclosing rectangle with sides parallel to the coordinate axes, to ease the creation of the multi-shape rectangle. Then, when you apply a transformation to the enclosing rectangle, it simply forwards the transformation to all of its elements. But here we come to transformations that cannot be done by dragging control points, because the point that is dragged does not belong to the inner shape.
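
A sketch of that delegation, assuming the rotate(Pos center, double angle) variant mentioned in the interface comment (GroupFigure is an invented name standing in for the rectangle subclass):

    import java.util.List;

    // Sketch: a figure wrapping a selection and forwarding operations to every member.
    class GroupFigure {
        private final List<Figure> members;

        GroupFigure(List<Figure> members) {
            this.members = members;
            // the enclosing axis-aligned rectangle would be computed here from
            // each member's enclosing rectangle
        }

        void rotate(Pos center, double angle) {
            for (Figure member : members) {
                member.rotate(center, angle);    // delegate to every contained figure
            }
        }

        // scaling and translating the group are handled with the affine transform
        // introduced in the next section, forwarded to the members in the same way
    }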

Internal transformations

Until now, I have only dealt with the interface between the UI and the model. But with the multi-shape case, we saw that we need to apply arbitrary affine transformations (translation or scaling of the enclosing rectangle) or a rotation. If we choose to implement rotation as rotate(center, angle), the rotation of an included shape is already covered. So we simply have to implement the affine transformation:

    class AffineTransform {
        private double a, b, c, d;
        /* creators, getters, setters omitted, but we probably need to implement
           one creator per use case */

        Pos transform(Pos pos) {
            // x' = a * x + b, y' = c * y + d  (scale and translate; rotation is
            // handled separately by rotate(center, angle))
            return new Pos(a * pos.x + b, c * pos.y + d);
        }
    }
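
As a usage sketch: a uniform scale by factor s around a fixed center (cx, cy) follows from x' = s*(x - cx) + cx = s*x + cx*(1 - s), so one of the creators could look like this (the four-argument constructor is an assumption, since creators are omitted above):

    // Assumed creator AffineTransform(a, b, c, d); illustration only.
    static AffineTransform scaleAround(double s, double cx, double cy) {
        return new AffineTransform(s, cx * (1 - s), s, cy * (1 - s));
    }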

That way, to apply an affine transformation to a Figure, we just have to implement transform(AffineTransform txform) in a way that simply applies the transformation to all the points defining the structure.

Figure is now :

    public interface Figure {
        List<Segment> segments();
        List<ControlPoint> controlPoints();
        void drag(ControlPoint point, Pos newPos);
        void rotate(Pos center, double angle);
        // void rotate(ControlPoint point, double angle); if ControlPoint does not implement Pos
        Figure getEnclosingRectangle();
        void transform(AffineTransform txform);
    }
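
To see how a concrete figure could satisfy this, here is a sketch of a simple line figure (segment and control-point construction is elided; Pos and AffineTransform are as sketched above):

    import java.util.List;

    // Sketch of a concrete Figure: all geometry changes stay inside the model class.
    class LineFigure implements Figure {
        private Pos start, end;

        LineFigure(Pos start, Pos end) { this.start = start; this.end = end; }

        public List<Segment> segments() {
            return List.of();   // one straight segment from start to end; elided here
        }

        public List<ControlPoint> controlPoints() {
            return List.of();   // one control point per endpoint; elided here
        }

        public void drag(ControlPoint point, Pos newPos) {
            // decide from 'point' which endpoint it represents; simplified to 'start'
            start = newPos;
        }

        public void rotate(Pos center, double angle) {
            start = rotateAround(start, center, angle);
            end = rotateAround(end, center, angle);
        }

        public Figure getEnclosingRectangle() {
            return this;        // a rectangle spanning min/max of both points; elided here
        }

        public void transform(AffineTransform txform) {
            // apply the transformation to all the points defining the structure
            start = txform.transform(start);
            end = txform.transform(end);
        }

        private static Pos rotateAround(Pos p, Pos c, double angle) {
            double dx = p.x - c.x, dy = p.y - c.y;
            double cos = Math.cos(angle), sin = Math.sin(angle);
            return new Pos(c.x + dx * cos - dy * sin, c.y + dx * sin + dy * cos);
        }
    }

A future ArcShape would follow the same pattern without the tools having to change.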

Summary:

These are just general ideas, but they should be the basis for allowing tools to act on arbitrary shapes with low coupling.

Answer:

I would not expect a good design to emerge without getting down to coding and hitting actual problems. But if you don't know where to start, here is my proposal.

interface Shape {
    List<Point> getPoints(ToolsEnum strategy); // you could use a factory here
}

interface Point {
    Shape rotate(int degrees); // or double radians if you like
    Shape translate(int x, int y);
    void setStrategy(TranslationStrategy strategy);
}

interface Origin extends Point {}

interface SidePoint extends Point {}

interface CornerPoint extends Point {}

Then implement the Point interface extensions as inner classes in each concrete shape (a sketch follows after the user-flow list below).

I assume the following user flow:

  1. Tool selected - currentTool inside the controller is set to the appropriate value from the enum.
  2. User selects/hovers over a shape - getPoints is called; depending on the tool, some types of points may be filtered out, e.g. only corner points are returned for morph operations. Appropriate strategies are injected into the exposed points.
  3. As the user drags a point - translate is called and you have the new shape, transformed with the given tool.
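
As a sketch of what "inner classes in each concrete shape" could look like (ToolsEnum values and TranslationStrategy are placeholders, since they are not defined above; the translate arguments are treated here as a drag delta):

    import java.util.ArrayList;
    import java.util.List;

    // Placeholders, only to make the sketch self-contained.
    enum ToolsEnum { MORPH, MOVE, SCALE, ROTATE }
    interface TranslationStrategy { /* e.g. snapping, axis locking, ... */ }

    class RectangleShape implements Shape {
        private double cx, cy, width, height;

        @Override
        public List<Point> getPoints(ToolsEnum tool) {
            List<Point> points = new ArrayList<>();
            if (tool == ToolsEnum.MORPH) {            // only corners for morphing
                points.add(new Corner(+1, +1));
                points.add(new Corner(-1, -1));
            } else if (tool == ToolsEnum.MOVE) {      // only the center for moving
                points.add(new Center());
            }
            // inject the tool-appropriate TranslationStrategy into each point here
            return points;
        }

        // Inner class: knows how dragging this corner reshapes the rectangle.
        private class Corner implements CornerPoint {
            private final int sx, sy;                 // which corner (sign of the offsets)
            private TranslationStrategy strategy;
            Corner(int sx, int sy) { this.sx = sx; this.sy = sy; }
            @Override public Shape translate(int dx, int dy) {
                width += sx * dx;                     // simplistic rule for illustration;
                height += sy * dy;                    // a real morph would also move the center
                return RectangleShape.this;
            }
            @Override public Shape rotate(int degrees) { return RectangleShape.this; }
            @Override public void setStrategy(TranslationStrategy s) { strategy = s; }
        }

        private class Center implements Origin {
            private TranslationStrategy strategy;
            @Override public Shape translate(int dx, int dy) {
                cx += dx; cy += dy;                   // moving the center moves the whole shape
                return RectangleShape.this;
            }
            @Override public Shape rotate(int degrees) { return RectangleShape.this; }
            @Override public void setStrategy(TranslationStrategy s) { strategy = s; }
        }
    }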

Answer:

In principle, it is a good idea to make the model match the drawing interface. So, for example, in Java Swing, rectangles may be drawn with the drawRect method which takes as arguments the x,y of the upper left corner, the width, and the height. So, usually you would want to model a rectangle as { x-UL, y-UL, width, height }.

For arbitrary paths, including arcs, Swing provides the GeneralPath object, which has methods for working with a sequence of points connected either by lines or by quadratic/Bezier curves. To model a GeneralPath you can provide a list of points, a winding rule, and the necessary parameters of either a quadratic curve or a Bezier curve.
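
A short illustration of that API (standard java.awt.geom; the coordinates are arbitrary and the snippet would typically live inside a paintComponent):

    // java.awt.geom.GeneralPath and java.awt.geom.Path2D
    GeneralPath path = new GeneralPath(Path2D.WIND_NON_ZERO);
    path.moveTo(10, 10);
    path.lineTo(60, 10);                     // straight segment
    path.quadTo(80, 40, 60, 70);             // quadratic curve: one control point
    path.curveTo(40, 90, 20, 90, 10, 70);    // cubic Bezier: two control points
    path.closePath();
    // g2d.draw(path);                       // drawn with the component's Graphics2D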
