Intelligent Dragging
Eric Allan Bier
Dept. of EECS
UC Berkeley
Berkeley, CA 94720
Maureen Stone
Xerox PARC
3333 Coyote Hill Rd.
Palo Alto, CA 94304
1. Introduction
Artists drawing technical illustrations often require that precise relationships hold among picture elements. For instance, certain line segments should be horizontal, parallel, or congruent. Interactive illustration systems have traditionally provided techniques such as grids and constraints to facilitate precise positioning. Unfortunately, neither of these techniques is particularly convenient. Grids provide only a small fraction of the desired types of precision. Constraints, while more powerful, require significant time to specify and add extra structure which the user must understand and manipulate. We introduce an interactive technique, intelligent dragging, which is a compromise between the two.
Intelligent dragging uses the familiar idea of snapping the cursor to points and curves using a gravity function. The new idea is that the program automatically constructs a set of gravity sensitive points and curves which are likely to help the user perform the current operation with precision. The user gives the system hints as to what types of precision will be needed. The result is a system with as much power as a ruler, compass, protractor, and T-square drafting set, where very little time is spent performing geometric constructions.
The name "intelligent dragging" is meant to suggest that the program makes intelligent guesses about which gravity sensitive objects to compute, but the user performs every scene modification himself, by dragging scene objects until they snap onto gravity sensitive objects.
Intelligent dragging can be seen as an extension of the idea of gravity sensitive grids (used in illustrators such as Griffin, Gremlin, MacDraw and StarGraphics [???refs???]) or as a diluted form of constraint-based system (such as Sketchpad, ThingLab, and Juno [refs]). As in a grid system, the user snaps a cursor to gravity sensitive objects provided by the system. However, the set of gravity sensitive objects is richer and varies with the current scene and with the operation being performed. As in a constraint-based system, points can be placed to satisfy angle constraints, slope constraints, and distance constraints. However, with intelligent dragging, constraints are solved two at a time, with the user explicitly controlling the placement of each point, and the constraints are forgotten as soon as they are used rather than becoming part of the data structure.
While most of our discussion concerns polygonal shapes, the ideas extend naturally to more general curves. <Perhaps we can include the blurb about curves just before the Conclusion, if the paper isn't already too long.>
Intelligent dragging is a general-purpose technique. It would probably not be successful in a restricted domain, such as VLSI design, where custom-tailored geometric transformations are essential [Magic, ChipNDale].
In Section 2, we describe intelligent dragging in more detail. In particular, we describe the heuristics used to guess a set of gravity-sensitive points and lines which are likely to be helpful. In Section 3, we provide some criteria for judging a geometric precision technique, and compare intelligent dragging to grid and constraint approaches using these criteria. In Section 4, we use examples to illustrate how intelligent dragging works and how it compares to the other techniques.
Intelligent dragging has been implemented as part of the Gargoyle 2D illustrator, running in the Cedar programming environment, on the Xerox Dorado (high-performance bitmap-display personal workstation). The translating, rotating, and scaling operations which we describe can be performed on scenes made of filled polygons and filled cubics in real time.
2. Intelligent Dragging
Intelligent dragging is the combination of these techniques: 1) A "gravity function" is used to snap a cursor to points and curves of interest. 2) The points and curves of interest include not only the objects in the scene, but also points and curves which can be derived from these by a simple rule, e.g. "the circle of radius 25 centered on that vertex," or "the intersection point of those two curves." 3) Objects are translated, rotated, or scaled in a smooth motion, which is coupled to the motion of the same cursor that snaps to points and curves. Hence, all of the motions can be performed with precision.
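To make the first of these concrete, the following is a minimal sketch, in Python, of such a gravity function. The names snap and nearest_point, the point and curve representation, and the 8-unit radius are our own assumptions for illustration; Gargoyle's actual interface (written in Cedar) may differ.

  import math

  GRAVITY_RADIUS = 8.0   # screen distance within which the cursor snaps (assumed value)

  def snap(cursor, points, curves):
      # Return the gravity-sensitive location nearest the cursor, or the cursor
      # itself if nothing lies within the gravity radius.  Points (vertices and
      # intersections) are checked first so that they win over nearby curves.
      best, best_d = None, GRAVITY_RADIUS
      for p in points:
          d = math.dist(cursor, p)
          if d <= best_d:
              best, best_d = p, d
      if best is not None:
          return best
      for curve in curves:                    # alignment lines, circles, scene edges
          q = curve.nearest_point(cursor)     # hypothetical per-curve projection method
          d = math.dist(cursor, q)
          if d < best_d:
              best, best_d = q, d
      return best if best is not None else cursor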
Before defining intelligent dragging further, we consider a simple construction example. Say we wish to construct an equilateral triangle 1 inch on a side with its base at 30 degrees to horizontal. We can do this in these steps:
1) Activate 1 inch circles.
2) Activate lines of slope 30 degrees.
3) Place the lower left vertex. At this point, a 1 inch circle appears centered on the new vertex, and a line sloping at 30 degrees appears which passes through the new vertex.
4) Place the second vertex at one of the intersections of the circle and the 30 degree line. A second circle appears, centered on the second vertex.
5) Place the last vertex at the intersection of the two circles.
6) Invoke a close polygon command, or place a fourth vertex on top of the first.
The whole process takes 6 keystrokes. All but 2 of these would be needed to draw any triangle; the last 2 are the price we pay for precision. The cost is so low because we allow the computer to decide automatically where to place the alignment structures. To decide where those structures should go, we begin with the following observations:
1) The new point A is often placed relative to an existing point B so that one of these constraints is satisfied: a) the distance from A to B is known, b) the slope of segment AB is known, or c) some combination of these.
2) The new point A is often placed relative to an existing segment BC so that one of these constraints is satisfied: the distance from A to BC is known, the angle ABC is known, or some combination of these.
3) Often the same distance, slope, or angle is used over and over.
These observations suggest that we might construct alignment lines and circles based on the points and curves in the scene, after the user has specified some distances, slopes, and angles of interest (from a menu, by typing, or some combination). Distances are shown as circles around a point of interest. Slopes and angles are shown as lines through points of interest. The intersection points of the alignment objects are made extra sensitive to aid in satisfying two constraints at once. Figure n shows the gravity lines for step 4 of the example above. <or maybe we should make a small sequence for the entire example>
With appropriate gravity functions, the user will be able to point to any of the lines, circles, and intersection points. If our heuristics are correct, the next point to be placed will often be a member of this set.
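The alignment intersections that drive the triangle example are ordinary analytic geometry. Below is a sketch, under our own assumptions (points as (x, y) tuples, slopes in degrees, distances in scene units), of the two intersection routines that example needs; Gargoyle's actual code is in Cedar and may differ.

  import math

  def circle_line_intersections(center, r, through, slope_deg):
      # Intersect a circle (center, radius r) with the alignment line of the
      # given slope passing through the point 'through'.
      dx, dy = math.cos(math.radians(slope_deg)), math.sin(math.radians(slope_deg))
      fx, fy = through[0] - center[0], through[1] - center[1]
      b = fx * dx + fy * dy              # line is through + t*(dx, dy); solve |p(t) - center| = r
      c = fx * fx + fy * fy - r * r
      disc = b * b - c
      if disc < 0:
          return []
      root = math.sqrt(disc)
      return [(through[0] + (-b + s) * dx, through[1] + (-b + s) * dy)
              for s in (root, -root)]

  def circle_circle_intersections(c1, r1, c2, r2):
      # Intersect two alignment circles; returns zero, one, or two points.
      d = math.dist(c1, c2)
      if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
          return []
      a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)     # distance from c1 to the chord
      h = math.sqrt(max(r1 * r1 - a * a, 0.0))
      ux, uy = (c2[0] - c1[0]) / d, (c2[1] - c1[1]) / d
      mx, my = c1[0] + a * ux, c1[1] + a * uy
      return [(mx + h * uy, my - h * ux), (mx - h * uy, my + h * ux)]

For step 4 of the example, circle_line_intersections(v1, inch, v1, 30.0) (with, say, 72 units to the inch) yields the two candidate positions for the second vertex, and step 5 uses circle_circle_intersections(v1, inch, v2, inch).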
2.2 Reducing the Number of Alignment Objects
How do we decide which points should generate alignment lines? Displaying alignment lines at every vertex in a scene would soon flood the screen with so many gravity sensitive points and lines that the user would be at a loss to snap his cursor onto the proper one. The utility of intelligent dragging therefore depends on good strategies for reducing the number of alignment objects. These strategies fall into two groups: first, modify the construction rules to make fewer guesses; second, allow the user to be more specific about the type of construction he is trying to perform and the set of objects which are involved. We use both approaches.
2.2.1 Modify the Construction Rules
We should not reduce the set of guesses at random. Instead, we can take advantage of additional information which is available to us. We know what type of operation the user is trying to perform (e.g. placing the first point of a polygon, adding a point to a polygon, reshaping a polygon, or translating, rotating, or scaling a polygon) as soon as the user initiates that command. Furthermore, we often know some of the objects which are involved (e.g. if we are reshaping a polygon, we know which polygon is being reshaped).
We can use both types of information to guess a small number of vertices and segments with which the user is most likely to try to align the changing shape. Call these vertices and segments the TRIGGERS (because we will use them to trigger the construction of alignment lines and circles). To make this guess, we rely on a fourth heuristic:
4) When the user moves one point of a polygon, he will often want to align it with other parts of that same polygon.
Hence, for all operations which modify a polygon, we might use as triggers just the set of vertices and segments of that polygon. However, we cannot stop there because the heuristic is not always true. The user may wish to align a part of one polygon with points from another. In this case, the user must be more specific about his intent, which is the subject of the next sub-section.
2.2.2 Let the User be More Specific
In the scheme described above, we have allowed the user to activate and deactivate whole classes of constraints at once. This certainly gives the user coarse control over the number of alignment objects which are constructed. How can we provide finer control?
We could associate particular alignment lines with particular vertices and segments. However, this would greatly increase the number of things that the user would have to remember about the current state of his session. It would also complicate the implementation.
Instead, we allow the user to be more specific about the regions which are of interest. As described above, our heuristics will usually choose only a few triggers. As a consequence, if the user wants to make alignments involving large portions of the scene, he must indicate those portions explicitly. We have simply attached a flag to each scene object, indicating whether it is "hot" (should be used as a trigger at all times) or "cold" (use as a trigger only if Heuristic 4 recommends it). Actually, the system is allowed to eliminate an object, even when it is hot, if the object is moving during the operation.
Fig. 3 shows how the final set of alignment objects is computed. The operation is used to determine which triggers to guess. These triggers are combined with those made hot by the user to make up the set of active triggers. Next the menus are consulted to see which slopes, distances, and angles are currently of interest. This information alone is used to determine which lines and circles to construct around the triggering vertices and segments.
The vertices and segments themselves are made gravity sensitive as a special case, since we often need to point at the objects in the scene regardless of which alignment objects are to appear. Fig. 3 shows the set of gravity sensitive objects as being separate from the set of active triggers. Originally, we tried to combine the two pathways, thinking of a gravity sensitive vertex or segment as triggering itself. However, this resulted in many alignment lines appearing when we only wanted the objects themselves to be gravity sensitive. The current scheme greatly simplified the code.
When polygons are being repositioned (rather than modified), we cannot use Heuristic 4. Instead, we have decided that all stationary objects in the scene will be gravity sensitive but will not trigger any alignment lines. This approach relies on yet a fifth heuristic:
5) When a polygon is translated, rotated, or scaled in its entirety, the operation is often performed so as to make that polygon touch an existing polygon.
This heuristic is a further justification for treating snapping to scene objects as a special case.
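The following sketch gathers the two pathways of Fig. 3 into a single Python routine. It is only an illustration of the flow of information; all of the names (operation_triggers, menus.active_slopes, line_through, and so on) are hypothetical, and segments are treated like vertices for brevity.

  def gravity_objects(scene, operation, menus):
      # Pathway 1: triggers generate alignment lines and circles.
      triggers = operation_triggers(scene, operation)        # Heuristic 4, or empty when a whole polygon moves (Heuristic 5)
      triggers |= {obj for obj in scene.objects if obj.hot}  # objects the user has made "hot"
      triggers -= set(operation.moving_objects)              # moving objects never trigger, even when hot
      alignments = []
      for t in triggers:
          for slope in menus.active_slopes:
              alignments.append(line_through(t, slope))      # slopes and angles become lines
          for dist in menus.active_distances:
              alignments.append(circle_around(t, dist))      # distances become circles
      # Pathway 2: stationary scene objects are gravity sensitive themselves,
      # but do not trigger any alignment objects.
      scenery = [o for o in scene.objects if o not in operation.moving_objects]
      return alignments + scenery + intersections(alignments + scenery)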
2.3 Transformations Parameterized by Points
One of the unifying ideas in Gargoyle is that a special point (the caret) can be placed with precision using gravity. The implementor of a positioning operation can use the position of the caret as a parameter to a new interactive operation with low overhead. Interactive operations based on the current position of the cursor are already present in commercial illustrators (STAR, MacDraw). Our operations are similar or identical to these in many respects. However, the ability to take advantage of alignment lines in all operations makes new operation types possible. The operations discussed here are translation, rotation, even scaling, uneven scaling, and skimming (coupled translation and rotation).
Each transformation is performed in three steps. First, the objects to be transformed are selected. Second, the caret is placed at an initial position in the scene. Gravity and alignment lines can be used to choose this position with precision. Third, the selected objects are smoothly translated, rotated, or scaled using the displacement between the original position of the caret and the new (constantly updated) position of the caret to determine the current transformation. Like the original caret position, the new caret position can be determined using gravity to snap to alignment lines.
2.3.1 Translation
When performing translation, we simply add the old-caret/new-caret displacement vector to the original position of each selected object to get its new position. In this way, we can move an object by a known displacement (e.g. if we point to one vertex in the scene as our initial position and another as our final position, all selected objects move by the displacement between the two points). We can also move one polygon to touch another by placing the caret at a point on the selected polygon initially, and snapping it to a point on the target polygon at the end.
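As a sketch, under an assumed object representation in which a hypothetical move_by method shifts every control point:

  def translate(selected, caret_start, caret_now):
      # Apply the old-caret/new-caret displacement to every selected object.
      dx, dy = caret_now[0] - caret_start[0], caret_now[1] - caret_start[1]
      for obj in selected:
          obj.move_by(dx, dy)    # hypothetical method; shifts all control points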
2.3.2 Rotation
For rotations, we need to know which point to rotate around. We call this point the anchor. The anchor is placed by positioning the caret in the normal manner and then invoking the Drop Anchor command. Hence, the center of rotation can be placed with precision. During rotation, the transformation is a rotation about the anchor point through an angle equal to the angle through which the caret displacement vector has moved from its initial position. Objects can be rotated through a known angle this way. For example, the user can place the anchor point at the intersection of two alignment lines of known slope. Making the initial selection on the first line and the final selection on the second, the user can rotate an object through the angle made by the two lines. Objects can also be rotated so as to point at other objects. For example, place the anchor point at the base of an arrow, make the initial selection at its tip, and make the final selection on the object you would like the arrow to point to (the target can be far from the arrow since the magnitude of the caret displacement vector is ignored).
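A sketch of the rotation update, assuming the caret displacement vector runs from the anchor to the caret and assuming a hypothetical control_points accessor on each object:

  import math

  def rotate(selected, anchor, caret_start, caret_now):
      # Rotate the selection about the anchor through the angle swept out by
      # the caret displacement vector; the vector's magnitude is ignored.
      a0 = math.atan2(caret_start[1] - anchor[1], caret_start[0] - anchor[0])
      a1 = math.atan2(caret_now[1] - anchor[1], caret_now[0] - anchor[0])
      cos_t, sin_t = math.cos(a1 - a0), math.sin(a1 - a0)
      for obj in selected:
          for p in obj.control_points:
              x, y = p.x - anchor[0], p.y - anchor[1]
              p.x = anchor[0] + x * cos_t - y * sin_t
              p.y = anchor[1] + x * sin_t + y * cos_t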
2.3.3 Even Scaling
The anchor point is used as the center of scaling. The magnitude of the scaling is the ratio of the magnitude of the new caret displacement vector to that of the original caret displacement vector. This transformation can be used to scale one object until one of its dimensions matches a dimension of another object. For instance, the square can be enlarged to fit an edge of the hexagon in Fig. ??.
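A corresponding sketch for even scaling, under the same assumed representation (and assuming the initial caret is not placed on the anchor itself):

  import math

  def even_scale(selected, anchor, caret_start, caret_now):
      # Scale about the anchor by the ratio of caret displacement magnitudes.
      s = math.dist(caret_now, anchor) / math.dist(caret_start, anchor)
      for obj in selected:
          for p in obj.control_points:
              p.x = anchor[0] + s * (p.x - anchor[0])
              p.y = anchor[1] + s * (p.y - anchor[1])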
2.3.4 Uneven Scaling
<We don't have to talk about this I suppose.>
2.3.5 Skimming
Skimming combines rotation and translation. No anchor point is needed. The initial caret position must be placed in the middle of a segment of a selected object; call this point P. The segment's tangent direction is computed. Just as for normal translation, point P moves to stay on the caret. In addition, whenever the caret is moved to touch a new alignment line or scene segment, the selected objects are rotated about point P so that the tangent direction at point P is aligned with the tangent direction of the alignment line or scene segment. This operation is particularly useful when curved objects are used as well as polygons.
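A sketch of one skimming update, under the same assumptions; tangents are unit (dx, dy) vectors, and target_tangent is None when the caret is not on any alignment line or scene segment:

  import math

  def skim(selected, p_start, tangent_start, caret_now, target_tangent):
      # Rotate about P so the tangents align (when the caret is on a line or
      # segment), then translate so that P follows the caret.
      theta = 0.0
      if target_tangent is not None:
          theta = (math.atan2(target_tangent[1], target_tangent[0]) -
                   math.atan2(tangent_start[1], tangent_start[0]))
      cos_t, sin_t = math.cos(theta), math.sin(theta)
      dx, dy = caret_now[0] - p_start[0], caret_now[1] - p_start[1]
      for obj in selected:
          for q in obj.control_points:
              x, y = q.x - p_start[0], q.y - p_start[1]
              q.x = p_start[0] + x * cos_t - y * sin_t + dx
              q.y = p_start[1] + x * sin_t + y * cos_t + dy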
2.4 Measuring
If the user is allowed to measure existing distances, slopes, and angles, then alignments based on absolute relationships can be used to achieve relative relationships. For instance, to make two segments parallel, it suffices to measure the slope of one segment and then adjust the other segment to have the same slope. It is interesting that solving the constraint in this way takes about the same number of keystrokes as with a constraint-based system, where one would point at both segments, choose the parallel constraint, and invoke the constraint solver. Measuring is an essential part of intelligent dragging. Without measuring, two lines could still be made parallel or congruent and so forth, but only by adjusting both lines to have the same known slope or length (which the user would type or select from a menu). With measurement, a line drawn freehand can become a full citizen of the scene, participating in many precise relationships, without needing to be modified. In Gargoyle, the distance between a pair of points, the slope between a pair of points, and the angle made by three points can be measured by pointing with the mouse. <This is a fib. Angles are not yet implemented.>
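The measurements themselves are simple. A sketch, in the same assumed point representation (and, per the note above, the angle routine is what we would implement rather than what exists):

  import math

  def measure_distance(a, b):
      return math.dist(a, b)

  def measure_slope(a, b):
      # Slope of segment AB, in degrees from horizontal.
      return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

  def measure_angle(a, b, c):
      # Angle ABC in degrees (not yet in Gargoyle, per the note above).
      return (measure_slope(b, c) - measure_slope(b, a)) % 360.0

The measured value then simply joins the set of active distances, slopes, or angles, exactly as if it had been typed or chosen from a menu.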
3. What We Want from Geometric Precision
Below, we list seven desirable features of any set of tools which can help us draw tidy illustrations. These features amount to the requirement that the tools give us a good deal of constructive power with a minimum of confusion.
3.1 Handle Most of the Common Alignment Types
We could construct an illustrator with exactly one way to specify precise relationships -- let the user type the coordinates of points. This scheme has all the constructive power the user needs to construct any scene with precision (if the primitives can be parameterized by points). Nonetheless, this scheme is inadequate, because the user will have to make constant reference to scratch paper and pocket calculator, and because the scheme takes no advantage of interactive input devices. The example serves to remind us that, when we provide the user with a set of alignment aids, we are not increasing the set of objects which he can construct with precision; we are simply allowing him to construct the same set in less time, and with less use of scratch paper. For the most part, then, the choice of which aids to provide is a matter of taste. Below, we list a number of alignment relationships which we wish to facilitate. The list is constructed in a number of levels. Each level contains a set of alignments which we felt would be inconvenient to achieve if only the levels above were provided. This list is constructed with Gargoyle in mind, but the relationships mentioned are desirable in any illustrator.
Level 0
 known point coordinates
Level 1
 known distance (between two points)
 known slope (between two points--includes horizontal,
  vertical, 45 degrees)
Level 2
 known angle (made by two pairs of points)
 known fraction of distance (e.g. half way from A to B)
 known line-distance (distance between a point and a line)
Level 3
 known symmetry group (mirror symmetry, cyclic symmetry, or both)

Derived Constraints
 congruent (same distance)
 touching (vertex on vertex, vertex on curve, curve on curve)
  known distance, where distance = 0.
 parallel (same slope, or known angle = 0)
 perpendicular (slopes differ by 90, or known angle = 90 degrees)
 horizontal, vertical (known slope = 0, 90 degrees)
 equiangular (same angle)
 collinear (line-distance = 0)
 known displacement vector (between two points)
  combination of known distance and known slope
 midpoint (known fraction of distance where fraction = 0.5)
Level 1 gives us all of the power of compass, straight-edge, and protractor. However, the user must still calculate a bit if he has a line sloping at 73.4 degrees and wants another line to make an angle of 60 degrees with it. Similarly, it is possible to construct the midpoint of a segment by measuring its length and slope, dividing the length in two, and placing a new point at the halved distance from one endpoint along the measured slope. Level 2 makes it possible to perform these operations easily. Level 3 is added not so much to reduce calculation as to reduce tedium. Objects with symmetry are often required and should be easy to make.
This list of relationships points out some of the weaknesses of grid systems. The strength of grid systems is that, with very little mechanism, they provide most of these common alignments to some extent. The weakness of grid systems is that none of the alignments is provided in full strength. For instance, each of our fundamental constraints is restricted. Angles must have rational tangents, and segments must have rational slopes. Distances (point to point and point to line) will be members of a small discrete set. As a result, such constructions as dropping a perpendicular from a point to a line are not, in general, possible. Of course, grids do work well for Manhattan (strictly horizontal and vertical) geometries.
3.2 The User is in Control
The user has lost control if he cannot accurately predict the outcome of each action. This can be a problem in constraint-based systems. A set of simultaneous non-linear equations almost always has multiple solutions. If the non-linear equations are solved by relaxation, the chances are good that an unexpected solution will be found. Users of Juno are usually not pleased when an unfortunate set of constraints causes the scene to collapse to a point. Even with very sophisticated solution techniques, the solver must either guess a solution or must show all of them. Neither possibility is particularly attractive. With intelligent dragging, each picture change is simple (a translation, rotation, or scaling), and each change is made interactively, reducing surprises.
3.3 Capture Intent and Instill Confidence
If we are trying to construct a 45 degree angle, we would like the system to know that we have succeeded so that it can tell us so. This is a problem in grid systems, where the user might count grid points several times to make sure all was well. Furthermore, we would like to know that our illustration will look right when it is printed. If a line segment looks like it ends on a circle on the screen, will we discover a gap between line and circle on the hardcopy? Part of a user's confidence in the correctness of his picture comes from the feedback he receives as he is making it. After-the-fact confidence can come from the ability to measure. For instance, if a user can measure slopes, distances, and angles, he can verify an illustration which someone else has made. Measuring is also useful during the construction process, as discussed above.
A user may also lose confidence if an operation takes so many steps that he can get lost in the middle. For instance, consider making symmetrical objects with a constraint system. An object can be given mirror symmetry by constraining pairs of points to be equidistant from a point on the mirror axis. Unfortunately, this often requires adding so many constraints that the user can lose track of which ones he has not yet added. Providing a single higher-level symmetry constraint might alleviate this problem. This example suggests that user confidence can be improved by providing operations at the right semantic level. Such operations can be described easily, and carried out atomically.
One of the arguments for using constraint systems is that the constraints are added to the data structure, so more of the user's INTENT is being captured compared to systems which just store geometry. This is a useful notion. However, we argue that constraints represent this intent at too low a level. Even for simple shapes, like rectangles, determining that a set of constraints does properly constrain a shape to be rectangular is time consuming. For more complex shapes, getting the constraints right is like debugging code. It is perhaps for this reason that Borning [ThingLab] envisioned that an expert would define the primitive shapes in ThingLab, which others would use.
3.4 Isotropic
An illustrator must allow objects to be rotated. It should be just as helpful with aligning an object after it has been rotated as before. Likewise, it should be as helpful with lines sloping at 30 degrees or 45 degrees as with lines at 0 degrees or 90 degrees. Both grids and constraints have trouble with this. In a grid system, rotation allows vertices to leave the safety of the grid points. Thereafter, modifications to the rotated object must be done without the help of the grid.
Constraint-based systems also have trouble with rotated objects. Constraints such as horizontal are very convenient for defining shapes. (Try to construct Greg Nelson's block letter A without using the horizontal or vertical constraints.) However, once these constraints have been used, the shape cannot be rotated without either changing the constraints, or violating the constraints. If we simply decide to use the shape, and not use its constraints any more, we have the same situation as with grids -- the object is not editable once it is rotated. If we edit the constraints to make them consistent with the new rotation, we will have to change the constraint network in multiple places. If we design a shape that makes only a single mention of absolute orientation so that we can rotate the object by altering just that one constraint, then designing the shape becomes more difficult.
3.5 Easy Error Recovery
Of course, every good illustrator should be built with an Undo command. However, it is appealing if simple blunders can be undone long after they have been committed. For instance, if you rotate a house with a grid system and later notice that it has no chimney, you would like to unrotate it so the chimney can be put on straight. With some grid systems, this is possible by snapping one vertex onto a grid point, and then rotating a second point onto the grid. This is certainly better than nothing.  Error recovery can be a problem with constraint systems as well. An ill-conditioned set of constraints may collapse two lines together or two points together. Fixing this problem may require the addition of inequality constraints [Beautify paper] as used in Van Wyk's beautifier, or may require starting over.
3.6 Easy To See What Constraints are Active
We do not wish to force the user to remember which constraints have been placed on a particular scene. Ideally, we would like all of the constraints to be visible but unobtrusive. The beauty of grid systems is that the regularity of grid constraints makes this easy to do -- draw the grid points. With constraint-based systems, this is difficult and many approaches have been implemented. Sketchpad [Sutherland] drew the constraint network on the display with the artwork. Juno [Nelson] converts the constraints to text, but the text is hard to correlate with the geometry. ThingLab [Borning] does not show the constraints, but one might argue that the constraints in ThingLab are solved so fast that the constraints are obvious from the motion. In Gargoyle, all gravity sensitive points, lines, and curves are drawn semi-transparently on top of the scene. Since they are more obtrusive than a set of grid points, they are only shown when they are being used.
3.7 Easy to Change the Set of Active Constraints
There are two main reasons why a user may wish to change the set of constraints which bear on his scene. First, the user may change his mind about a relationship (e.g. about whether two lines should be parallel or not). Second, he may wish to view a set of constraints in a different way (e.g. converting two horizontal constraints into a parallel constraint). Constraints should be as easy to add and remove as the graphical objects themselves.
With grid systems, there isn't much to change except for the grid spacing, and this is usually easy.
In a constraint system, adding and removing constraints can be time consuming since we must, in general, specify the type of constraint and the vertices it applies to.
With intelligent dragging, the user turns a class of constraints (e.g. 60 degree lines) on or off. Often, this is all that is required. More complicated operations may require explicitly making some vertices and segments hot or cold as described above. In these cases, changing the active constraints takes about as much time as in a constraint system. The examples in Section 4 should make this more concrete.
3.8 Summary
The table below summarizes the properties of grid, constraint, and intelligent dragging systems.
Table 1:
                           Grids           Constraints           Intelligent Dragging
User in Control            Yes             No                    Yes
Capture Intent             No              Partly                Partly
Handle Common Alignments   No              Yes                   Yes
Isotropic                  No              Often Not             Yes
Error Recovery             If careful      Often Not             Yes
See Active Constraints     Yes (Grid On)   Sketchpad, not Juno   Yes
Change Constraints         Yes (Spacing)   Often Not             Yes
4. Examples
<There is only room in this paper for one or two examples in this section, if they are to be explained carefully. I would like the examples to show that intelligent dragging: 1) is isotropic, 2) uses measuring, 3) makes interesting use of the cursor for rotation, scaling, and skimming, and 4) can make pretty pictures. The other aspects will probably come through no matter what examples are used. Scaling and rotating a square to fit on the side of an n-gon shows isotropy and scaling. The last example should be flashy. We could show a colorful picture that is almost done (I have visions of a house and garden) and use intelligent dragging to finish it. Wouldn't it be great to have the silhouette of a gargoyle in one of the windows?>
5. Conclusion
<If it's all Manhattan, use grids. If you really want to play with the stroke widths of block letter A's, use constraints.>
Grid systems and constraint systems are two extremes in the way the user communicates his intent to the machine. With grids, the user tells the computer nothing of his intent. The computer helps the user by guiding his "pencil" to an agreed-upon set of grid points. The user achieves geometric precision without any understanding on the part of the computer. In constraint systems, the user works hard to tell the computer formally what his intent is. Having done this, the user gives up control while the computer carries out the intent. Intelligent dragging is a compromise. The user describes, in general terms, what relationships will be needed. The computer then sets up a custom-tailored "grid" which is used to help the user guide his pencil.
Intelligent dragging is a bizarre idea. It takes the perfectly sensible notion of constructing shapes with rulers, compasses, and protractors and adds the strange twist that the computer tries to guess where the rulers, compasses, and protractors should be placed.
Intelligent dragging meets many of our precision goals. It has two costs: the time to specify the desired alignments, and the time to reduce the number of active triggers. Will the advantages outweigh the disadvantages? Time will tell. We guess that it will be easy to learn (it is intuitive), and it makes good use of pointing devices and feedback.