\input TR10WideFormat.tex \input GeoMacros
\begfile{SubVor.tex of March 26, 1984 9:05:21 am PST --- Stolfi}
% Quad-edge and Voronoi paper --- Presented in STOC '83
% Reformatted to use TR8TwoColFormat.tex
% Titles and stuff
\pagetitlestuff{
\title{Primitives for the Manipulation of General Subdivisions\cp
and the Computation of Voronoi Diagrams}
\titlerule
\author{Leo Guibas and Jorge Stolfi}
\institution{Xerox PARC and Stanford University}
\titlerule
\abstract{We discuss the following problem: given $n$ points in the plane (the ``sites''), and an arbitrary query point $q$, find the site that is closest to $q$. This problem can be solved by constructing the Voronoi diagram of the given sites, and then locating the query point in one of its regions. We give two algorithms, one that constructs the Voronoi diagram in $O(n\lg n)$ time, and another that inserts a new site in $O(n)$ time. Both are based on the use of the Voronoi dual, the Delaunay triangulation, and are simple enough to be of practical value. The simplicity of both algorithms can be attributed to the separation of the geometrical and topological aspects of the problem, and to the use of two simple but powerful primitives, a geometric predicate and an operator for manipulating the topology of the diagram. The topology is represented by a new data structure for generalized diagrams, that is, embeddings of graphs in two-dimensional manifolds. This structure represents simultaneously an embedding, its dual, and its mirror image. Furthermore, just two operators are sufficient for building and modifying arbitrary diagrams.}
}
\support{The work of Jorge Stolfi, who is on leave from the University of S\~ao Paulo (S\~ao Paulo, Brazil), was partially supported by a grant from Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq).}
\botinsert{\vbox to 1.2 truein{}} % For ACM copyright notice, etc
% macros for quad-edge operations (in math style)
\def\thin{\hskip0.7pt}
\def\sym{\thin\hbox{\it Sym\/}}
\def\flip{\thin\hbox{\it Flip\/}}
\def\rot{\thin\hbox{\it Rot\/}}
\def\onext{\thin\hbox{\it Onext\/}}
\def\oprev{\thin\hbox{\it Oprev\/}}
\def\dnext{\thin\hbox{\it Dnext\/}}
\def\dprev{\thin\hbox{\it Dprev\/}}
\def\lnext{\thin\hbox{\it Lnext\/}}
\def\lprev{\thin\hbox{\it Lprev\/}}
\def\rnext{\thin\hbox{\it Rnext\/}}
\def\rprev{\thin\hbox{\it Rprev\/}}
\def\lef{\thin\hbox{\it Left\/}}
\def\rif{\thin\hbox{\it Right\/}}
\def\org{\thin\hbox{\it Org\/}}
\def\dest{\thin\hbox{\it Dest\/}}
\def\dual{\thin\hbox{\it Dual\/}}
% miscellaneous math macros
\def\S{\sphere}
\def\B{\ball}
\def\Si{\Sigma}
\def\Vs{\Vscr}
\def\as{^\ast}
\def\Es{\Escr}
\def\Ls{\Lscr}
\def\Cs{\Cscr}
\def\Ks{\Kscr}
\def\Ts{\Tscr}
\def\ls{\lscr}
\def\Fs{\Fscr}
% reference macros
\def\Sham{Sh}
\def\Kirki{Ki1}
\def\Kirkii{Ki2}
\def\ShamHo{SH}
\def\GrSi{GS}
\def\Baum{Ba}
\def\Braid{Br}
\def\IyKa{IK}
\def\MuPr{MP}
\def\MaSu{MS}
\def\Knuth{Kn}
\def\Even{Ev}
\def\Lee{Le}
\def\Har{Har}
\def\PH{PH}
\section{0. Introduction}
One of the fundamental data structures of computational geometry is the Voronoi diagram. This diagram arises from consideration of the following natural problem. Let $n$ points in the plane be given, called {\it sites}. We wish to preprocess them into a data structure, so that given a new query point $q$, we can efficiently locate the nearest neighbor of $q$ among the sites. The $n$ sites in fact partition the plane into a collection of $n$ regions, each associated with one of the sites. If region $P$ is associated with site $p$, then $P$ is the locus of all points in the plane closer to $p$ than to any of the other $n-1$ sites. This partition is known as the {\it Voronoi} diagram (or the {\it Dirichlet}, or {\it Thiessen}, tessellation) determined by the given sites.
The closest-site problem can therefore be solved by constructing the Voronoi diagram and then locating the query point in it. Using the best algorithms currently available, the Voronoi diagram of $n$ points can be computed in $O(n\lg n)$ time and stored in $O(n)$ space; these bounds have been shown to be optimal in the worst case [\Sham]. Once we have the Voronoi diagram, we can construct in further linear time a structure with which we can do point location in a planar subdivision in $O(\lg n)$ time [\Kirki].
Shamos [\Sham] first pointed out that the Voronoi diagram can be used as a powerful tool to give efficient algorithms for a wide variety of other geometric problems. Given the Voronoi, we can compute in linear time the closest pair of sites, or the closest neighbor of each site, or the Euclidean minimum spanning tree of the $n$ sites, or the largest point-free circle with center inside their convex hull, etc. Several of these problems are known to require $\Omega(n\lg n)$ time in the worst case, so these Voronoi-based algorithms are asymptotically optimal.
The complexity of the $O(n\lg n)$ Voronoi algorithms that can be found in the literature is a serious barrier to their widespread use. To the authors' knowledge, so far they have never been used in any of the significant practical applications of closest-point problems in statistics, operations research, geography, and other areas. In every case the authors of those programs chose asymptotically slower $O(n^2)$ algorithms, which were much simpler to code and almost certainly faster for the ranges of $n$ of interest. Furthermore, the presentation of Voronoi algorithms in the literature has often been insufficiently precise. Authors typically confine themselves to a very high-level description of their algorithms. As with many geometric problems, difficulties can arise in the implementation when degeneracies occur, for example when three of the given points happen to be cocircular.
In this paper we present a novel way of looking at the standard Voronoi computation techniques, such as the divide and conquer [\ShamHo] and incremental [\GrSi] methods, that results in Voronoi algorithms that are very concise and substantially easier to read, implement and verify. One of the main reasons for this simplicity is that we work with the duals of the Voronoi diagrams, which are known as {\it Delaunay triangulations}, rather than with the diagrams themselves. The other major reason is the clean separation that we are able to make between topological and geometrical aspects of the problem. In sections 6 through 10 we show that the hardest part of constructing a Voronoi diagram or Delaunay triangulation is the determination of its topological structure, that is, the incidence relations between vertices, edges, and faces. Once the topological properties of the diagram are known, the geometrical ones (coordinates, angles, lengths, etc.) can be computed in time linear in the size of the diagram.
Our algorithms are built using essentially two primitives: a geometric predicate, and a topological operator for manipulating the structure of the diagrams. The geometrical primitive, that we call the $\incircle$ test, encapsulates the essential geometric information that determines the topological structure of the Voronoi diagram, and is a powerful tool not only in the coding of the algorithms but also in proving their correctness. As evidence for its importance, we show that it possesses many interesting properties, and can be defined in a number of equivalent ways.
The topological structure of a Voronoi or Delaunay diagram is equivalent to that of a particular embedding of some undirected graph in the Euclidean plane. We have found it convenient to consider such diagrams as being drawn on the sphere rather than on the plane; topologically that is equivalent to augmenting the Euclidean plane by a dummy {\it point at infinity}. This allows us to represent such things as infinite edges and faces in the same way as their finite counterparts. In sections 1 through 5 we will establish the mathematical properties of such embeddings, define a notation for talking about them, and describe a data structure for their representation.
It turns out that the data structure we propose is general enough to allow the representation of undirected graphs embedded in arbitrary two-dimensional manifolds. In fact, it may be seen as a variant of the ``winged edge'' representation for polyhedral surfaces [\Baum]. We show that a single topological operator, which we call {\it Splice}, together with a single primitive for the creation of isolated edges, is sufficient for the construction and modification of arbitrary diagrams. Our data structure has the ability to represent simultaneously and uniformly both the primal, the dual, and the mirror-image diagrams, and to switch arbitrarily from one of these domains to another, in constant time. Finally, the design of the data structure enables us to manipulate its geometrical and topological parameters independently of each other. As it will become clear in the sequel, these properties have the effect of producing programs that are at once simple, elegant, efficient from a practical point of view, and asymptotically optimal in time and space.
Since this paper is quite long, some guidance to the forthcoming sections may be advisable. Section 1 introduces the concept of a simple subdivision of a manifold and discusses some of the conventions we adopt as compared to the extant literature. Section 2 presents the very important ideas of the dual of a subdivision, and the edge algebra associated with a subdivision. The edge algebra is a combinatorial structure on the edges of the subdivision that we claim captures all the topological information associated with the subdivision. Section 3 is more technical and may be omitted on a first reading. It formalizes and then proves the above claim about edge algebras by showing that isomorphism of edge algebras is equivalent to topological homeomorphism between the corresponding subdivisions. In section 4 we present a computer representation for an edge algebra, which is our {\it quad edge} data structure. Section 5 introduces the topological primitives that we use to manipulate this structure and discusses their properties and implementation. Section 6 tailors these primitives to the application at hand, namely the Delaunay/Voronoi computation. Section 7 reviews some properties of the Voronoi/Delaunay subdivision, and section 8 presents our main geometric primitive for their computation, the $\incircle$ test, and its properties. Section 9 presents in detail and proves correct a divide and conquer algorithm for Voronoi computations, and section 10 discusses incremental techniques.
\section{1. Subdivisions}
In this section we will give a precise definition for the informal concept of an embedding of an undirected graph on a surface. Special instances of this concept are sometimes referred to as a subdivision of the plane, a generalized polyhedron, a two-dimensional diagram, or by other similar names. They have been extensively discussed in the solid modeling literature of computer graphics [\Baum, \MaSu]. We want a definition that accurately reflects the topological properties one would intuitively expect of such embeddings (for instance, that every edge is on the boundary of two faces, every face is bounded by a closed chain of edges and vertices, every vertex is surrounded by a cyclical sequence of faces and edges, and so forth) and at the same time is as general as possible and leads to a clean theory and data structure.
We assume the reader is familiar with a few basic concepts of point-set topology, such as topological space, continuity, and homeomorphism [\IyKa]. Two subsets $A$ and $B$ of a topological space $M$ are said to be {\it separable} if some neighborhood of $A$ is disjoint from some neighborhood of $B$; otherwise, they are said to be {\it incident} on each other. A {\it line} of $M$ is a subspace of $M$ homeomorphic to the open interval $\B^1=(0,1)$ of the real line. A {\it disk} of $M$ is a subspace homeomorphic to the open circle of unit radius $\B^2=\rset{x\in\reals^2\,:\, \leftv x\rightv<1}$. Recall that a {\it two-dimensional manifold} is a topological space with the property that every point has an open neighborhood which is a disk (all manifolds in this paper will be two-dimensional).
\definition{1.1}{A {\it subdivision} of a manifold $M$ is a partition $S$ of $M$ into three finite collections of disjoint parts, the {\it vertices}, the {\it edges}, and the {\it faces} (denoted respectively by $\Vscr S$, $\Escr S$, and $\Fscr S$), with the following properties:
\begitems
\itemno{S1.}Every vertex is a point of $M$.
\itemno{S2.}Every edge is a line of $M$.
\itemno{S3.}Every face is a disk of $M$.
\itemno{S4.}The boundary of every face is a closed path of edges and vertices.
\enditems
}
The vertices, edges, and faces of a subdivision are called its {\it elements}. Figure 1.1 shows some examples of subdivisions.
\capfig7cm{Figure 1.1. Examples of subdivisions.}
Condition S4 needs some explanation. We will denote by $\closure{\B}^2$ the {\it closed} circle of unit radius, and by $\S^1$ its circumference. Let us define a {\it simple path} in $\S^1$ as a partition of $\S^1$ into a finite sequence of isolated points and open arcs. The precise meaning of S4 is then the following: for every face $F$ there is a simple path $\pi$ in $\S^1$ and a continuous mapping $\phi_F$ from $\closure{\B}^2$ onto the closure of $F$ that (i) maps homeomorphically $\B^2$ onto $F$, (ii) maps homeomorphically each arc of $\pi$ into an edge of $S$, and (iii) maps each isolated point of $\pi$ to a vertex of $S$.
Condition S4 has a number of important implications. Clearly the points and arcs of $\pi$ must alternate as we go around $\S^1$; if $\alpha$ is the arc between two consecutive points $a$ and $b$ of $\pi$, then its image $\phi_F(\alpha)$ is an edge incident to the points $\phi_F(a)$ and $\phi_F(b)$. Therefore, the images of the elements of $\pi$, taken in the order in which they occur around $\S^1$, constitute a closed, connected path $\pi_F$ of edges and vertices of $S$, whose union is the boundary of $F$. Notice that the bounding path $\pi_F$ need not be simple, since $\phi_F$ may take two or more distinct arcs or points of $\pi$ to the same element of $S$. Therefore the closure of a face may not be homeomorphic to a disk, as figure 1.1 shows.
Since it is impossible to cover a disk with only a finite number of edges and vertices, every edge and every vertex in a subdivision of a manifold must be incident to some face. We conclude that every edge is entirely contained in the boundary of some face, and that it is incident to two (not necessarily distinct) vertices of $S$. These vertices are called the {\it endpoints} of the edge; if they are the same, then the edge is a {\it loop}, and its closure is homeomorphic to the circle $\S^1$.
Since every element of $S$ is in the closure of some face, and since the closed disk $\closure{\B}^2$ is compact, the manifold $M$ is the union of a finite number of compact sets --- and therefore is itself compact. In fact, condition S4 can be replaced by the requirement that $M$ be compact, that the edges be pairwise separable, and that every vertex be incident to some edge. Furthermore, every compact manifold has a subdivision. We will not attempt to prove these statements, since they are too technical for the scope of this paper.
Informally speaking, a compact two-dimensional manifold is a surface that closes upon itself, has no boundary, and in which every infinite sequence has an accumulation point. The sphere, the torus, and the projective plane are such manifolds; the disk, the line segment, the whole plane, and the M\"obius strip are not. The compactness condition is not as restrictive as it may seem; any manifold can be made compact by adding a dummy ``point at infinity'' that is by definition an accumulation point of all sequences with no other accumulation points. This operation transforms the Euclidean plane $\reals^2$ into the {\it extended plane}, which is homeomorphic to the sphere.
\subsection{1.1. Equivalence and connectivity}
\definition{1.2}{Let $S$ and $S\pr$ be two subdivisions of the manifolds $M$ and $M\pr$. An {\it isomorphism} from $S$ to $S\pr$ is a homeomorphism of $M$ onto $M\pr$ that maps each element of $S$ onto an element of $S\pr$. When such a mapping exists, we say that $S$ and $S\pr$ are {\it equivalent}, and we write $S\iso S\pr$.}
Such an isomorphism will perforce map vertices into vertices, faces into faces, edges into edges, and will preserve the incidence relationships among them. A {\it topological property} of subdivisions is a property that is invariant under equivalence. Our goal will be to develop a computer representation that fully captures all topological properties of subdivisions.
The collection of all edges and vertices of a subdivision $S$ constitutes an undirected graph, the {\it graph of $S$}. The graphs of two equivalent subdivisions $S$ and $S\pr$ are obviously isomorphic. The converse is not always true: if $S$ and $S\pr$ have isomorphic graphs, it does not follow that they are equivalent, or that $M$ and $M\pr$ are homeomorphic. Figure 1.2 shows an example. Note that the subdivisions are not equivalent even though there is also a one-to-one correspondence between the faces of $S$ and $S\pr$ with the property that corresponding faces are incident to corresponding edges and vertices. This example shows that the {\it set} of edges and vertices on the boundary of a face is not enough information to characterize its relationship to the rest of the manifold.
\capfig5cm{Figure 1.2. Two non-equivalent subdivisions with isomorphic graphs.}
This fact is the main source of complexity in the theoretical treatment of subdivisions, notably in the proof that our data structure is a consistent representation of a general subdivision. It is possible to define subdivisions in such a way that their topological structure is completely determined by that of their graphs. For example, if the manifold is restricted to be a sphere, and the graph is triply connected [\Har], then the subdivision is determined up to equivalence. However, any set of conditions strong enough to achieve this goal would probably outlaw ``degeneracies'' such as loops, multiple edges with the same endpoints, faces with nonsimple boundaries, and so forth. Subdivisions with such degeneracies are much more common than it may seem: they inevitably arise as intermediate objects in the transformation of a ``well-behaved'' subdivision into another. An even stronger reason for adopting our liberal definition is that it leads to more flexible data structures and simpler atomic operations with weaker preconditions.
On the other hand, we depart from the common solid modeling tradition by insisting that every face be a simple disk, without ``handles'' or ``holes'', even though the whole manifold is allowed to have arbitrary connectivity. The main reason for this requirement is to enable a clean and unambiguous definition of the dual subdivision (see subsection 2.2). One important consequence of this restriction is stated below:
\theorem{1.1}{The graph of a simple subdivision is connected iff the manifold is connected.}
\proof{Since every face is incident to some edge, if the graph is connected then the whole manifold is too. Now assume that the graph is not connected, but the manifold is. Since the faces are pairwise separable, and their addition to the graph makes it connected, some face is incident to two distinct components of the graph. By condition S4 the boundary of that face is connected, a contradiction.}
Therefore, the connected components of the manifold are in one-to-one correspondence with the connected components of the underlying graph.
\section{2. The edge algebra of a subdivision}
In this section we will develop a convenient notation for describing relationships among edges of a subdivision, and a mathematical framework that will justify the choice of our data structure. We will develop first the theory and representation for arbitrary compact manifolds, and then we will show that certain important simplifications can be made in the particular case when the manifold is orientable. For many applications, including the computation of Voronoi diagrams, the only relevant manifold will be the extended plane.
\subsection{2.1. Basic edge functions}
On any disk $D$ of a manifold there are exactly two ways of defining a local ``clockwise'' sense of rotation; these are called the two possible {\it orientations} of $D$. An {\it oriented element} of a subdivision $P$ is an element $x$ of $P$ together with an orientation of a disk containing $x$. There are also exactly two consistent ways of defining a linear order among the points of a line $\lscr$; each of these orderings is called a {\it direction along} $\lscr$. A {\it directed edge} of a subdivision $P$ is an edge of $P$ together with a direction along it. Since directions and orientations can be chosen independently, for every edge of a subdivision there are four directed, oriented edges. Observe that this is true even if the edge is a loop, or is incident twice to the same face of $P$.
For any oriented and directed edge $e$ we can define unambiguously its vertex of {\it origin}, $e\org$, its {\it destination}, $e\dest$, its {\it left face}, $e\lef$, and its {\it right face}, $e\rif$. We define also the {\it flipped} version $e\flip$ of an edge $e$ as being the same unoriented edge taken with {\it opposite orientation} and same direction, as well as the {\it symmetric} of $e$, $e\sym$, as being the same undirected edge with the {\it opposite direction} but the same orientation as $e$. We can picture the orientation and direction of an edge $e$ as a small bug sitting on the surface over the midpoint of the edge and facing along it. Then the operation $e\sym$ corresponds to the bug making a half turn on the same spot, and $e\flip$ corresponds to the bug hanging upside down from the other side of the surface, but still at the same point of the edge and facing the same way.
The elements $e\org$, $e\lef$, $e\rif$, and $e\dest$ are taken by definition with the orientation that agrees locally with that of $e$. More precisely, the orientation of $e\org$ agrees with that of some initial segment of $e$, and that of $e\dest$ agrees with some final segment of $e$. Note that for some loops $e\org$ and $e\dest$ may have opposite orientations, in spite of being the same (unoriented) vertex. Similarly, the orientation of $e\lef$ agrees with $e$ along the ``left margin'' of $e$, and that of $e\rif$ agrees along its ``right margin''. If $e$ is a bridge, it may be the case that $e\lef$ and $e\rif$ have different orientations, in spite of being the same (unoriented) face. By extending our previous notation, we will denote by $VS$, $ES$ and $FS$ the sets of directed and oriented elements of a subdivision $S$. In the rest of this section, unless otherwise specified, all subdivision elements are assumed to be oriented, and directed if edges.
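The four directed, oriented versions of an edge admit a simple concrete encoding. The Python sketch below is our illustration only (anticipating, but not identical to, the data structure of section 4): an edge version is a triple $(i,r,f)$, where $i$ names the undirected edge, $r$ (mod 4) encodes direction (and, later, rotation into the dual), and $f$ is an orientation bit. The bug metaphor then becomes two commuting involutions.

```python
# Hypothetical triple encoding (i, r, f) of a directed, oriented edge:
# i names the undirected edge, r mod 4 encodes its direction (and later
# its rotation into the dual), f is the orientation ("flip") bit.

def flip(e):                          # same edge, opposite orientation
    i, r, f = e
    return (i, r, f ^ 1)

def sym(e):                           # same edge, opposite direction
    i, r, f = e
    return (i, (r + 2) % 4, f)

e = (0, 0, 0)
assert flip(flip(e)) == e             # the bug flips back to its first side
assert sym(sym(e)) == e               # two half turns face it forward again
assert sym(flip(e)) == flip(sym(e))   # the half turn and the flip commute
```

The commuting of $\sym$ and $\flip$ is exactly why an edge has four, not eight, directed and oriented versions.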
A sufficiently small disk $D$ containing the vertex $v=e\org$ can be mapped homeomorphically onto the unit disk $\B^2$ in such a way that $v$ is mapped to the origin, and the intersection of $D$ with every edge incident to $v$ is a ray of $\B^2$. Traversing the boundary of $D$ in the counterclockwise direction (as defined by the orientation of $v$) establishes a cyclical ordering of those edges. If each edge is oriented so as to agree with $v$, and directed away from $v$, we obtain what is called the {\it ring of edges out of $v$}. We can define the {\it next edge with same origin}, $e\onext$, as the one immediately following $e$ (counterclockwise) in this ring (see figure 2.1). Note that if $e$ is a loop it will occur twice in the ring of edges out of $v$. To be precise, both $e$ and an oppositely directed version of it (either $e\sym$ or $e\sym\flip$) will occur once each: since the manifold around $v$ is like a disk, $e$ will occur only once in each circuit, and we will never encounter $e\flip$.
Similarly, given an edge $e$ we define the {\it next counterclockwise edge with same left face}, denoted by $e\lnext$, as being the first edge we encounter after $e$ when moving along the boundary of the face $F=e\lef$, in the counterclockwise sense as determined by the orientation of $F$. The edge $e\lnext$ is oriented and directed so that $e\lnext\lef=F$ (including orientation). The successive images of $e$ under $\lnext$ give precisely the edges of the bounding path $\pi𡤏$ of condition S4 (in one of the two possible orders). As in the case of $\onext$, the edge $e$ appears exactly once in this list, and either $e\sym$ or $e\flip$ (but not $e\sym\flip$) may appear once.
\capfig4.5cm{Figure 2.1. The ring of edges out of a vertex.}
\subsection{2.2. Duality}
The dual of a planar graph $G$ can be defined intuitively as a graph $G^\ast$ obtained from $G$ by interchanging vertices and faces while preserving the incidence relationships. The definition below extends this intuitive concept to arbitrary subdivisions:
\definition{2.1}{Two subdivisions $S$ and $S^\ast$ are said to be {\it dual} of each other if for every directed and oriented edge $e$ of either subdivision there is another edge $e\dual$ of the other such that
\begitems
\itemno{D1.}$(e\dual)\dual=e$
\itemno{D2.}$(e\sym)\dual=(e\dual)\sym$
\itemno{D3.}$(e\flip)\dual=(e\dual)\flip\sym$
\itemno{D4.}$(e\lnext)\dual=(e\dual)\onext^{-1}$
\enditems
}
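Conditions D1--D3 involve no connectivity information, so they can be checked mechanically once the edge functions are encoded. The Python sketch below is our illustration, not part of the paper's machinery; it encodes an edge version as a triple $(i,r,f)$ and uses the relation $e\dual=e\flip\rot$, which is noted later in this section.

```python
# Triple encoding (i, r, f) of the directed, oriented versions of
# undirected edge i (hypothetical; for illustration only).  Rot adds 1
# to r on unflipped edges and subtracts 1 on flipped ones, so that
# Flip Rot Flip Rot is the identity.

def flip(e):
    i, r, f = e
    return (i, r, f ^ 1)

def rot(e):
    i, r, f = e
    return (i, (r + 1) % 4 if f == 0 else (r - 1) % 4, f)

def sym(e):                           # Sym = Rot^2
    return rot(rot(e))

def dual(e):                          # Dual = Flip Rot (see below)
    return rot(flip(e))

for r in range(4):
    for f in range(2):
        e = (0, r, f)
        assert dual(dual(e)) == e                     # D1
        assert dual(sym(e)) == sym(dual(e))           # D2
        assert dual(flip(e)) == sym(flip(dual(e)))    # D3
```

Condition D4 is different in kind: it relates the $\onext$ connectivity of the two subdivisions, and so cannot be verified from the encoding alone.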
Equation D4 states that moving counterclockwise around the left face of $e$ in one subdivision is the same as moving clockwise around the origin of $(e\dual)$ in the other subdivision. To see why, note that the edges on the boundary of the face $F=e\lef$, in counterclockwise order, are $\seq{e\lnext,\penalty-500 e\lnext^2,\penalty-500 \ldots,\penalty-500 e\lnext^m=e}$ for some $m\geq 1$. This path maps through $\dual$ to the sequence $\seq{(e\dual)\onext^{-1},\penalty-500 (e\dual)\onext^{-2},\penalty-500 \ldots, (e\dual)\onext^{-m}\penalty-500=e\dual}$ of all edges coming out of the vertex $v=(e\dual)\org$ of $S^\ast$, in clockwise order around $v$.
We can therefore extend $\dual$ to vertices and faces of the two subdivisions, by defining $(e\lef)\dual=(e\dual)\org$ and $(e\org)\dual=(e\dual)\lef$. Equations D2 and D3 imply that any two edges that differ only in orientation and direction will be mapped to two versions of the same undirected edge. Combining this with the preceding argument we conclude that $\dual$ establishes a correspondence between $ES$ and $ES^\ast$, between $VS$ and $FS^\ast$, and between $FS$ and $VS^\ast$, such that incident elements of $S$ correspond to incident elements of $S^\ast$, and vice versa. It follows that two vertices of one subdivision are connected by an edge whenever (and as many times as) the corresponding faces of the other are incident to a common edge. So, in the particular case when $S$ and $S^\ast$ are subdivisions of the sphere, the graphs of $S$ and $S^\ast$ are duals of each other in the sense of graph theory.
Figure 2.2 shows a subdivision of the extended plane (solid lines) superimposed on its dual (dashed lines). Note that the two subdivisions of figure 2.2 have the property that each undirected edge of one meets (and crosses) only the corresponding dual edge of the other, and that each vertex of one is in the corresponding dual face of the other. When this happens, we say that $S$ and $S^\ast$ are {\it strict duals} of each other. In that case, the dual of an oriented and directed edge $e$ is the edge of the dual subdivision that crosses $e$ from left to right, but taken with orientation {\it opposite} to that of $e$. That is, the dual subdivision should be viewed from the other side of the manifold, or the manifold should be turned inside out. This reflects the correspondence between counterclockwise traversal of $e\lef$ and clockwise traversal of $(e\dual)\org$.
\capfig5.5cm{Figure 2.2. A subdivision of the extended plane (solid lines) and a strict dual (dashed lines).}
This implicit ``flipping'' of the manifold is unavoidable if $S$ and $S^\ast$ are superimposed as strict duals and we insist that $\dual$ be its own inverse. It has the serious drawback of making the calculus of the edge functions much less intuitive. It is therefore preferable to relate the two dual subdivisions by means of the function
$$e\rot=e\flip\dual=e\dual\flip\sym$$
which maps $ES$ to $ES^\ast$ without this implicit ``flipping''. The edge $e\rot$ is called the {\it rotated} version of $e$; it is the undirected dual of $e$, directed from $e\rif$ to $e\lef$, and oriented so that moving counterclockwise around the right face of $e$ corresponds to moving counterclockwise around the origin of $e\rot$. If the two subdivisions are superimposed as strict duals, as in figure 2.2, then we may say that $e\rot$ is $e$ ``rotated $90\deg$ counterclockwise'' around the crossing point. In fact, the only reason for not defining duality in terms of $\rot$ (rather than $\dual$) is that it falls short of being its own inverse: $(e\rot)\rot$ gives $e\sym$ instead of $e$.
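To make this concrete, here is a small Python sketch (the triple encoding $(i,r,f)$, with rotation count $r$ and flip bit $f$, is our own illustration, not prescribed by the paper). $\rot$ advances $r$ by one on unflipped edges and retreats it on flipped ones; the claims about $\rot$ can then be checked directly.

```python
def flip(e):
    i, r, f = e
    return (i, r, f ^ 1)

def rot(e):                           # quarter turn toward the dual,
    i, r, f = e                       # reversed on flipped edges
    return (i, (r + 1) % 4 if f == 0 else (r - 1) % 4, f)

def sym(e):
    i, r, f = e
    return (i, (r + 2) % 4, f)

e = (0, 0, 0)
assert rot(rot(rot(rot(e)))) == e     # four quarter turns: the identity
assert rot(rot(e)) == sym(e) != e     # Rot falls short of being its own
                                      # inverse: e Rot Rot = e Sym, not e
```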
\subsection{2.3. Properties of edge functions}
The functions $\flip$, $\rot$, and $\onext$ satisfy the following properties:
\begitems
\itemno{E1.}$e\rot^4=e$
\itemno{E2.}$e\rot\onext\rot\onext=e$
\itemno{E3.}$e\rot^2\neq e$
\itemno{E4.}$e\in ES$\ \ iff\ \ $e\rot\in ES^\ast$
\itemno{E5.}$e\in ES$\ \ iff\ \ $e\onext\in ES$
\vskip5pt
\itemno{F1.}$e\flip^2=e$
\itemno{F2.}$e\flip\onext\flip\onext=e$
\itemno{F3.}$e\flip\onext^n\neq e$\ \ for any $n$
\itemno{F4.}$e\flip\rot\flip\rot=e$
\itemno{F5.}$e\in ES$ iff $e\flip\in ES$
\enditems
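These axioms can be verified mechanically on a concrete example. The Python sketch below (ours; the triple encoding and the convention that primal versions carry even $r$ are illustrative assumptions) builds the edge algebra of the smallest interesting subdivision of the sphere: two vertices joined by a single edge, with one face, whose dual is a single loop.

```python
from itertools import product

def flip(e):
    i, r, f = e
    return (i, r, f ^ 1)

def rot(e):
    i, r, f = e
    return (i, (r + 1) % 4 if f == 0 else (r - 1) % 4, f)

# Onext on the unflipped versions of quad 0: each endpoint of the primal
# edge sees that edge alone (r = 0 and r = 2 are fixed points), while the
# ring out of the single dual vertex contains the dual loop and its Sym
# (r = 1 and r = 3 swap).  F2 forces the table on flipped versions.
ONEXT0 = {0: 0, 1: 3, 2: 2, 3: 1}
ONEXT0_INV = {b: a for a, b in ONEXT0.items()}

def onext(e):
    i, r, f = e
    return (i, ONEXT0[r], 0) if f == 0 else (i, ONEXT0_INV[r], 1)

def in_primal(e):                     # e is in ES iff r is even
    return e[1] % 2 == 0

for r, f in product(range(4), range(2)):
    e = (0, r, f)
    assert rot(rot(rot(rot(e)))) == e                 # E1
    assert onext(rot(onext(rot(e)))) == e             # E2
    assert rot(rot(e)) != e                           # E3
    assert in_primal(e) != in_primal(rot(e))          # E4
    assert in_primal(e) == in_primal(onext(e))        # E5
    assert flip(flip(e)) == e                         # F1
    assert onext(flip(onext(flip(e)))) == e           # F2
    x = flip(e)
    for _ in range(8):                                # F3: no power of
        assert x != e                                 # Onext maps e Flip
        x = onext(x)                                  # back to e
    assert rot(flip(rot(flip(e)))) == e               # F4
    assert in_primal(e) == in_primal(flip(e))         # F5
```

Note how F3 holds for a trivial reason in this encoding: $\onext$ preserves the flip bit, while $\flip$ toggles it.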
A number of useful properties can be deduced from these, as for example
$$\vbox{\halign{\hskip20pt\lft{$ # $}&\lft{$\null = # $}\cr
e\flip^{-1}&e\flip\cr\vsk4
e\sym&e\rot^2\cr\vsk4
e\rot^{-1}&e\rot^3\cr
\null&e\flip\rot\flip\cr\vsk4
e\dual&e\flip\rot\cr\vsk4
e\onext^{-1}&e\rot\onext\rot\cr
\null&e\flip\onext\flip,\cr
}}$$
and so forth. For added convenience in talking about subdivisions, we introduce some derived functions. By analogy with $e\lnext$ and $e\onext$, for a given $e$ we define the {\it next edge with same right face}, $e\rnext$, and {\it with same destination}, $e\dnext$, as the first edges that we encounter when moving counterclockwise from $e$ around $e\rif$ and $e\dest$, respectively. These functions also satisfy the following equations:
$$\vbox{\halign{\hskip20pt\lft{$ # $}&\lft{$\null= # $}\cr
e\lnext&e\rot^{-1}\onext\rot\cr\vsk4
e\rnext&e\rot\onext\rot^{-1}\cr\vsk4
e\dnext&e\sym\onext\sym\cr
}}$$
The orientation and direction of the returned edges are defined so that $e\lnext\lef=e\lef$, $e\rnext\rif=e\rif$, and $e\dnext\dest=e\dest$. Note that $e\rnext\dest= e\org$, rather than vice versa. By moving {\it clockwise} around a fixed endpoint or face, we get the inverse functions, defined by
$$\vbox{\halign{\hskip20pt\lft{$ # $}&\lft{$\null= # $}&\lft{$\null= # $}\cr
e\oprev&e\onext^{-1}&e\rot\onext\rot\cr\vsk4
e\lprev&e\lnext^{-1}&e\onext\sym\cr\vsk4
e\rprev&e\rnext^{-1}&e\sym\onext\cr\vsk4
e\dprev&e\dnext^{-1}&e\rot^{-1}\onext\rot^{-1}\cr
}}$$
It is important to notice that every function defined so far (except $\flip$) can be expressed as the composition of a constant number of $\rot$ and $\onext$ operations, independently of the size or complexity of the subdivision. Figure 2.3 illustrates these various functions.
\capfig5.5cm{Figure 2.3. The edge functions.}
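Each derived function can thus be programmed once and for all as a short composition of the primitives. The Python sketch below does so, using a hypothetical triple encoding $(i,r,f)$ of the edge versions and, purely as test data, the edge algebra of a single edge on the sphere; none of these names or conventions are prescribed by the paper.

```python
from itertools import product

def flip(e):
    i, r, f = e
    return (i, r, f ^ 1)

def rot(e):
    i, r, f = e
    return (i, (r + 1) % 4 if f == 0 else (r - 1) % 4, f)

def rot_inv(e):                                   # Rot^{-1} = Rot^3
    return rot(rot(rot(e)))

def sym(e):
    return rot(rot(e))

# Test data: the one-edge subdivision of the sphere (an assumption of
# this sketch).  Flipped versions are forced by F2.
ONEXT0 = {0: 0, 1: 3, 2: 2, 3: 1}
ONEXT0_INV = {b: a for a, b in ONEXT0.items()}

def onext(e):
    i, r, f = e
    return (i, ONEXT0[r], 0) if f == 0 else (i, ONEXT0_INV[r], 1)

# Derived functions, each a bounded composition of Rot and Onext:
def lnext(e): return rot(onext(rot_inv(e)))       # e Rot^{-1} Onext Rot
def rnext(e): return rot_inv(onext(rot(e)))       # e Rot Onext Rot^{-1}
def dnext(e): return sym(onext(sym(e)))           # e Sym Onext Sym
def oprev(e): return rot(onext(rot(e)))           # e Rot Onext Rot
def lprev(e): return sym(onext(e))                # e Onext Sym
def rprev(e): return onext(sym(e))                # e Sym Onext
def dprev(e): return rot_inv(onext(rot_inv(e)))   # e Rot^{-1} Onext Rot^{-1}

for r, f in product(range(4), range(2)):
    e = (0, r, f)
    assert oprev(onext(e)) == e and onext(oprev(e)) == e
    assert lprev(lnext(e)) == e and lnext(lprev(e)) == e
    assert rprev(rnext(e)) == e and dprev(dnext(e)) == e
```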
\subsection{2.4. Edge algebras}
\definition{2.2}{An {\it edge algebra} is an abstract algebra $(E, E^\ast, \onext, \rot, \flip)$, where $E$ and $E^\ast$ are arbitrary finite sets, and $\onext$, $\rot$, and $\flip$ are functions on $E$ and $E^\ast$ satisfying properties F1--F5 and E1--E5.}
The axioms imply that $\rot$ is a bijection from $E$ to $E^\ast$ and from $E^\ast$ to $E$. Also $\flip$ and $\onext$ each define permutations acting on $E$ and $E^\ast$ separately. The orbits of $\onext$ in $E$ are in one-to-one correspondence with the vertices of $S$, and therefore also with the faces of $S^\ast$. These orbits are joined by $\flip$ in pairs of opposite orientation. We can therefore define $e\org$ in an edge algebra as the orbit of $e$ under $\onext$, and similarly $e\lef=e\rot^{-1}\org$, $e\rif=e\rot\org$, $e\dest=e\sym\org$. Notice that $e\flip\org$ is the same as $e\org\flip$ (the set obtained by $\flip$ping every element of $e\org$).
Edge algebras are a purely combinatorial abstraction of subdivisions, in the same way that undirected graphs are an abstraction of the subdivision graphs. In the next section we will defend the position that {\it all topological properties of a subdivision $S$ are accurately captured by an edge algebra}. As a consequence this algebra, which is a finite object, can be used as a computer representation of $S$. An edge algebra represents simultaneously a pair of dual subdivisions; as we remarked before, this allows us to express all our edge functions in terms of only three basic primitives, $\flip$, $\rot$, and $\onext$. Other advantages of this primal/dual representation will be encountered later on, and we will see that they are obtained at a negligible cost in storage and time.
\section{3. From edge algebras to subdivisions}
In the present section we will show that the topological properties of an arbitrary subdivision $S$ are accurately and unambiguously represented by its associated edge algebra. The concepts and theorems developed in this section are essential for showing the consistency and completeness of the data structure but are not used in the rest of the paper, so the reader whose interest is mostly practical can skip to section 4. The major idea will be to show that a general subdivision $S$ can be fully characterized by the graph of a standard refinement of $S$, which in turn is closely related to the edge algebra of $S$.
\subsection{3.1. Completions}
\definition{3.1}{Let $S$ and $\Si$ be subdivisions of a manifold $M$. We say that $\Si$ is a {\it completion} of $S$ if it is a refinement of $S$ obtained by adding one vertex $c←e$ on each edge $e$ and one vertex $v←F$ in each face $F$, and then connecting $v←F$ by new edges to every vertex (old or new) on the boundary of $F$.}
The vertices of $\Si$ are called {\it primal}, {\it crossing}, or {\it dual} depending on whether they lie on vertices, edges, or faces of $S$; they are denoted by $\Vscr \Si$, $\Cscr\Si$, and $\Vscr^\ast\Si$, respectively. Every edge of $S$ is split by its crossing vertex into two {\it primal links} of $\Si$; the new edges added in each face are called {\it dual links} if they connect a dual vertex to a crossing point, and {\it skew links} if they connect a dual vertex to a primal one. These links are denoted $\Lscr \Si$, $\Lscr^\ast\Si$, and $\Kscr\Si$, in that order. Figure 3.1 shows a completion of a subdivision of the extended plane.
Definition 3.1 must be understood appropriately in the case of a face $F$ whose bounding path $\pi←F$ is not simple. If $\pi←F$ passes $k$ times through a vertex or crossing point $p$, then $p$ is to be connected to $v←F$ by exactly $k$ new links, and their order around $v←F$ should be the same as the order of the crossings around $\pi←F$. To describe this process precisely, let $\phi←F$ be any continuous function from $\closure{\B}\null^2$ to the closure of $F$ that establishes condition S4. Let $\pi=\seq{u←1, \alpha←1, u←2, \alpha←2, \ldots, u←n, \alpha←n, u←{n+1}=u←1}$ be the path in the circle $\S^1$ that is mapped to $\pi←F$ by $\phi←F$; in each arc $\alpha←i$ there is a point $c←i$ that is mapped to the crossing vertex of the edge $\phi←F(\alpha←i)$. Take $\phi←F((0,0))$ to be the dual vertex $v←F$; connect in $\closure{\B}\null^2$ the origin $(0,0)$ to each $u←i$ and to each $c←i$ by a straight line segment, and let the images of these segments under $\phi←F$ be, respectively, the dual and skew links for the face $F$. Note that the restriction of faces to simple disks is essential for a simple and unambiguous definition of the completion.
\capfig9.5cm{Figure 3.1. A completion of the extended plane, showing primal links (solid), dual links (dashed), and skew links (dotted).}
From the definition, it is clear that every subdivision has at least one refinement which is a completion. Every face of $\Si$ consists of three vertices and three links, one of each kind, and therefore all distinct. An important consequence is that the closure of each face is homeomorphic to (not just the continuous image of) the sector of $\closure{\B}\null^2$ bounded by two rays and an arc $u←ic←i$ or $c←iu←{i+1}$, which in turn is homeomorphic to a disk. In fact, the closure of a face of $\Si$ is homeomorphic to any planar triangle, with each corner mapping to a vertex and each side to a link. For this reason, we will refer to the faces of $\Si$ as {\it triangles}, and denote them by $\Tscr\Si$.
It is also apparent from the definition that every edge of $\Si$ has two distinct endpoints, and is incident to exactly two triangles (which may or may not lie in the same face of $S$). A completion may have more than one link connecting any given pair of vertices, but it has no loops. Every crossing vertex $c𡤎$ is incident to exactly four links, two primal and two dual, and to four distinct triangles. The vertex $c𡤎$ and these eight elements constitute a disk of $M$ that contains the edge $e$. It can be seen also that, given a primal link $\lscr$ and a dual link $\lscr^\ast$ that are incident to the same (crossing) vertex, there is exactly one triangle that is incident to both $\lscr$ and $\lscr^\ast$.
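The incidence counts above yield simple formulas for the size of a completion, and a quick arithmetic check (not in the paper's notation) confirms that the Euler characteristic is preserved: a subdivision with $V$ vertices, $E$ edges, and $F$ faces has a completion with $V+E+F$ vertices, $2E$ primal, $2E$ dual, and $2E$ skew links, and $4E$ triangles.

```python
# Sanity check (illustrative, not from the paper): element counts of a
# completion and preservation of the Euler characteristic V - E + F.
def completion_counts(V, E, F):
    vertices  = V + E + F          # primal + crossing + dual vertices
    links     = 2*E + 2*E + 2*E    # primal + dual + skew links
    triangles = 4 * E              # four triangles around each crossing
    return vertices, links, triangles

def euler(V, E, F):
    return V - E + F

# A single edge, a single loop, and the cube, all on the sphere.
for V, E, F in [(2, 1, 1), (1, 1, 2), (8, 12, 6)]:
    assert euler(*completion_counts(V, E, F)) == euler(V, E, F) == 2
```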
We consider the distinction between primal, dual, and crossing vertices to be an integral part of the description of $\Si$, so $S$ is uniquely determined by it. We call $S$ the {\it primal subdivision} of $\Si$, denoted by $S\Si$. In the same spirit, we say that two completions $\Si←1$ and $\Si←2$ are equivalent if there is a homeomorphism that maps each element of $\Si←1$ to an element of $\Si←2$, takes $\Vscr\Si←1$ to $\Vscr\Si←2$, and takes $\Vscr^\ast\Si←1$ to $\Vscr^\ast\Si←2$. Such a homeomorphism will clearly take $\Cscr\Si←1$, $\Lscr\Si←1$, $\Lscr^\ast\Si←1$, and $\Kscr\Si←1$ to the corresponding components of $\Si←2$.
\subsection{3.2. Existence of duals and algebras}
As it was defined, the edge algebra of a subdivision $S$ seems to depend not only on $S$ itself, but also on the choice of a dual subdivision $S\as$, and of the function $\dual$ (or $\rot$) that connects the two. The first part of our theoretical justification is the proof that such $S\as$ and $\dual$ always exist, and that the edge functions of $S$ and $S\as$ satisfy axioms E1--E5 and F1--F5.
Let $\Si$ be a completion on a manifold $M$. For every crossing $c←e$ of $\Si$, define the {\it dual} of the (unoriented and undirected) edge $e$ of $S\Si$ as the set $e\as=\lscr←1\uni \set{c←e}\uni \lscr←2$, where $\lscr←1, \lscr←2$ are the two dual links incident to $c←e$. Denote by $\Es\as\Si$ the set of all such objects. Define the {\it dual} $F←v\as$ of a primal vertex $v$ as the union of $\set{v}$ and all elements of $\Si$ incident to $v$. Let $\Fs\as\Si$ be the set of all those objects.
\lemma{3.1}{The triplet $S\as\Si=(\Vs\as\Si, \Es\as\Si, \Fs\as\Si)$ is a subdivision of $M$.}
\proof{Besides $v$ itself, the dual $F←v\as$ of a vertex $v$ contains only triangles, primal links, and skew links incident to $v$. Each link of $F←v\as$ is incident to exactly two distinct triangles of $F←v\as$, and conversely each triangle is incident to two distinct links of $F←v\as$, one primal and one skew. Therefore, these links and triangles can be arranged in one or more sequences (without repetitions) $\seq{\lscr←1, t←1, \lscr←2, t←2, \ldots, \lscr←n, t←n, \lscr←{n+1}=\lscr←1}$ where the $t←i$ are triangles, the $\lscr←i$ are alternately primal and skew links, and each $t←i$ is incident to $\lscr←i$ and to $\lscr←{i+1}$. Each such sequence plus $v$ constitutes a disk containing $v$; since $M$ is a manifold, there can be only one such disk.
\pp We conclude that $F←v\as$ is a disk of $M$. Furthermore, it is clear that we can construct a continuous function $\phi$ from the closed ball onto the closure of $F←v\as$ that establishes condition S4. Since a triangle or primal link cannot be incident to two distinct primal vertices, the elements of $\Fs\as\Si$ are pairwise disjoint. Clearly the elements of $\Es\as\Si$ are lines of $M$ that are pairwise disjoint, and also disjoint from the members of $\Fs\as\Si$ and $\Vs\as\Si$. Therefore $S\as\Si$ is a subdivision of $M$.}
\definition{3.2}{Let $\Si$ be a completion. Let $\rot$ be the function from $ES\Si\uni ES\as\Si$ into itself defined as follows: for every edge $e\in ES\Si$, let $e\rot$ be the dual edge $e\as$ of $S\as\Si$, directed so as to cross $e$ from right to left and oriented so as to agree with the orientation of $e$ at the crossing point. Similarly, for each element $e\in ES\as\Si$ let $e\rot$ be the edge of $S\Si$ of which $e$ is the dual, directed and oriented according to the same rules with respect to $e$. The {\it standard edge algebra of $\Si$} is by definition $A\Si=(ES\Si, ES\as\Si, \onext, \rot, \flip)$.}
\theorem{3.2}{The standard edge algebra $A\Si$ of any completion $\Si$ satisfies axioms E1--E5 and F1--F5, and $S\as\Si$ is a (strict) dual of $S\Si$.}
\proof{Each oriented and directed edge $e$ of $ES\Si$ (or $ES\as\Si$) can be represented unambiguously by a pair of links $(e←o, e←r)$, where $e←o$ is the origin half of $e$, and $e←r$ is the dual (or primal) link of $\Si$ that is incident to the crossing vertex of $e$ and lies to its right. Conversely, any pair $(x, y)$ of adjacent links (one primal and one dual) corresponds to a unique edge of $ES\Si$ or $ES\as\Si$.
\pp For any link pair $(x, y)$ of this kind there is a unique triangle $T$ of $\Si$ incident to $x$ and $y$, and a unique triangle $T\pr$ sharing a skew link with $T$. We call the link pair $(r, s)$ the {\it opposite} of the pair $(x, y)$ if $r$ and $s$ are on the boundary of $T\pr$ and are of the same sort (primal/dual) as $x$ and $y$, respectively. Let $x\pr$ denote the other link of the same sort (primal/dual) as $x$ that is incident to the same crossing.
\pp According to this notation, we have $(a, b)\flip=(a, b\pr)$, $(a, b)\rot=(b, a\pr)$, and $(a, b)\onext=(x, y)$ where $(x, y)$ is opposite to $(a, b\pr)$. Now it is easy to check that the algebra $A\Si$ satisfies E1--E5 and F1--F5. For example, let $(x, y)$ be the opposite of $(b, a)$; then $(a, b)$ is the opposite of $(y,x)$, and we have
$$\eqalign{(a, b)\rot\onext\rot\onext
&=(b, a\pr)\onext\rot\onext\cr
\null&=(x, y)\rot\onext\cr
\null&=(y, x\pr)\onext\cr
\null&=(a, b)\cr}$$
and so forth. The function $e\dual=e\flip\rot$ satisfies D1--D4, since these conditions can be proved from E1--E5 and F1--F5. We conclude that $S\Si$ and $S\as\Si$ are (strict) duals of each other.}
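The identities checked in this proof can also be verified mechanically on a concrete algebra. The sketch below (Python, assumed names; purely illustrative) brute-forces the $\rot$/$\onext$ laws on the smallest edge algebra, that of a single edge on the sphere.

```python
# Brute-force check (illustrative, not from the paper) of the rot/onext
# identities on the four-element algebra of a single edge on the sphere;
# elements 0..3 stand for e.Rot^r, and the dual edge is a loop.
rot   = lambda e: (e + 1) % 4
onext = {0: 0, 2: 2, 1: 3, 3: 1}.__getitem__

for e in range(4):
    assert rot(rot(rot(rot(e)))) == e        # e Rot^4 = e
    assert rot(rot(e)) != e                  # e Rot^2 != e
    # e Rot Onext Rot Onext = e, the identity derived above:
    assert onext(rot(onext(rot(e)))) == e
```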
For any subdivision $S$ there is a completion $\Si$ such that $S=S\Si$, and therefore a dual $S\as\Si$ and a valid edge algebra $A\Si$ that describes $S$ (and $S\as\Si$).
\subsection{3.3. Equivalence and isomorphism}
The second part of our argument shows that the edge algebra of a subdivision is determined up to isomorphism, and conversely that the subdivision represented by an edge algebra is unique up to equivalence.
\theorem{3.3}{Let $A←i$ ($i=1, 2$) be an edge algebra for a pair of dual subdivisions $S←i$ and $S←i\as$. If $S←1$ is equivalent to $S←2$, then $A←1$ and $A←2$ are isomorphic algebras.}
\proof{Let $A←i=(ES←i, ES←i\as, \onext←i, \rot←i, \flip←i)$, and let $\eta$ be the homeomorphism between the manifolds of $S←1$ and $S←2$ that establishes their equivalence. An orientation or direction for an element of $S←1$ determines via $\eta$ a unique orientation or direction for the corresponding element in $S←2$, and so $\eta$ is also a one-to-one correspondence between $ES←1$ and $ES←2$. From the definition of $\onext$ we can conclude that $\eta(e\onext←1)=\eta(e)\onext←2$ for all $e\in ES←1$; the same holds for the other edge functions, in particular $\rnext$ and $\flip$.
\pp Let us now define the function $\xi$ from $ES←1\uni ES←1\as$ to $ES←2\uni ES←2\as$ as
$$\xi(e)=\alter{
\eta(e) &if $e\in ES←1$,\cr\vsk4
\eta(e\rot←1^{-1})\rot←2&if $e\in ES←1\as$.\cr}$$
Clearly $\xi$ is one-to-one, for $\rot←i$ is one-to-one from $ES←i$ to $ES←i\as$. Let us now show that $\xi(e\onext←1)=\xi(e)\onext←2$. If $e\in ES←1$ the proof is trivial. If $e\in ES←1\as$, then $e\onext←1\in ES←1\as$, and
$$\eqalign{\xi(e\onext←1)&=\eta(e\onext←1\rot←1^{-1})\rot←2\cr
\null&=\eta(e\rot←1^{-1}\rnext←1\rot←1\rot←1^{-1})\rot←2\cr
\null&=\eta(e\rot←1^{-1}\rnext←1)\rot←2\cr
\null&=\eta(e\rot←1^{-1})\rnext←2\rot←2\hbox{\quad(since $e\rot←1^{-1}\in ES←1$)}\cr
\null&=\eta(e\rot←1^{-1})\rot←2\rot←2^{-1}\rnext←2\rot←2\cr
\null&=\xi(e)\onext←2\cr}$$
The proof for $\xi(e\flip←1)=\xi(e)\flip←2$ is entirely similar, using the identity $e\flip←1=e\rot←1^{-1}\flip←1\rot←1^{-1}$, and finally $\xi(e\rot←1)=\xi(e)\rot←2$ is trivial.}
We say that two completions $\Si←1$ and $\Si←2$ are {\it similar} if there is an isomorphism of the graph of $\Si←1$ to that of $\Si←2$ that takes primal vertices to primal vertices and dual vertices to dual vertices.
\vfill\eject
\lemma{3.4}{Let $\Si←1$ and $\Si←2$ be two completions. If their edge algebras $A\Si←1$ and $A\Si←2$ are isomorphic algebras, then $\Si←1$ and $\Si←2$ are similar.}
\proof{For any completion $\Si$, we establish one-to-one mappings between certain subsets of oriented and directed edges of the algebra $A\Si$ and the primal links, dual links, and vertices of $\Si$ in the following way. To each primal (or dual) link $\lscr$ of $\Si$ there corresponds a unique pair of primal (or dual) elements of $A\Si$ of the form $\set{e, e\flip}$; these elements are the directed and oriented edges of $S\Si$ (or $S\as\Si$) of which $\lscr$ is the ``origin'' half. To each primal vertex of $\Si$ there corresponds an orbit of $A\Si$ under $\onext$ and $\flip$ (i.e., a set of the form $e\org\uni e\org\flip$ for some edge $e$); similarly, to each crossing of $\Si$ there corresponds an orbit of $A\Si$ under $\rot$ and $\flip$. These mappings are one-to-one, and a primal or dual link of $\Si$ is incident to a vertex if and only if the corresponding subsets of $A\Si$ intersect.
\pp We also associate each skew link $s$ of $\Si$ with a set $s\pr$ of the form
$$\vbox{\halign{\rt{$ # $}&\lft{$ # $}\cr
\{&e, e\flip, e\rot^{-1}, e\rot^{-1}\flip,\cr
\null&f, f\flip, f\rot, f\rot\flip\}\cr}}$$
where $f=e\onext$, in the following way. There are exactly two triangles of $\Si$ incident to $s$, each incident also to a primal and to a dual link. We take $s\pr$ to be the union of the four subsets of $A\Si$ that correspond to those four links. It is easy to check that these subsets have the form above, and that $s$ is incident to a primal or dual vertex of $\Si$ if and only if an element of $s\pr$ intersects the orbit corresponding to that vertex. Conversely, every set of the form above determines a unique skew link by this rule.
\pp The isomorphism between $A\Si←1$ and $A\Si←2$ maps those representative subsets of $A\Si←1$ to subsets of $A\Si←2$ having the same form, and therefore it establishes a one-to-one correspondence $\xi$ between the primal (or dual) links and vertices of $\Si←1$ and those of $\Si←2$. Since intersecting subsets are mapped to intersecting subsets, $\xi$ preserves incidence. We conclude that $\Si←1$ and $\Si←2$ are similar.}
\vfill\eject
\lemma{3.5}{If two completions $\Si←1$ and $\Si←2$ are similar, then they are equivalent.}
\proof{Let $\xi$ be the isomorphism between the graphs of $\Si←1$ and $\Si←2$ that establishes their similarity. We will construct from it a homeomorphism $\eta$ between the manifolds of the two completions that establishes their equivalence. First we define $\eta$ on the vertices of $\Si←1$ as being the same as $\xi$. For every link $r$ of $\Si←1$ with endpoints $u$ and $v$, we can always find a homeomorphism $\eta←r$ from the closure of $r$ to that of $\xi(r)$ that takes $u$ to $\xi(u)$ and $v$ to $\xi(v)$; we define $\eta(p)=\eta←r(p)$ for all points $p$ of $r$. Clearly, $\eta$ is a homeomorphism of the graph of $\Si←1$ onto that of $\Si←2$.
\pp Since any pair of adjacent links of which one is primal and the other dual determines a unique triangle, the similarity of the two completions gives also a one-to-one correspondence between their triangles that preserves incidence. For each pair of corresponding triangles $T$ and $T\pr$ there is a homeomorphism $\eta←T$ from the closure of $T$ onto the closure of $T\pr$ that agrees with $\eta$ on the boundary of $T$; this follows readily from the fact that both closures are homeomorphic to closed disks. So $\eta$ and all the $\eta←T$ constitute a finite collection of continuous maps of closed subsets of $M$ into $M\pr$, with the property that any two of them agree on the intersection of their domains. Their union $\eta^*$ is therefore a continuous map from $M$ into $M\pr$. Clearly, $\eta^*$ is one-to-one and onto, so it is a homeomorphism. By construction, it maps elements of $\Si←1$ to elements of $\Si←2$.}
\lemma{3.6}{If two completions $\Si←1$ and $\Si←2$ are equivalent, then so are $S\Si←1$ and $S\Si←2$.}
\proof{Each face of $S\Si←i$ is the union of a dual vertex and all elements of $\Si←i$ that are incident to it. Each edge of $S\Si←i$ is the union of a crossing and all (two) primal links of $\Si←i$ incident to it. The homeomorphism $\eta$ that establishes the equivalence of the two completions preserves incidence and the primal/dual character of links and vertices, so it maps elements of $S\Si←1$ to elements of $S\Si←2$, establishing their equivalence.}
\theorem{3.7}{Let $A←1$ and $A←2$ be edge algebras for two subdivisions $S←1$ and $S←2$. If $A←1$ and $A←2$ are isomorphic, then $S←1$ and $S←2$ are equivalent.}
\proof{Let $\Si←1$ and $\Si←2$ be any two completions of $S←1$ and $S←2$. By theorem 3.3, we have $A←1\iso A\Si←1$ and $A←2\iso A\Si←2$, and therefore $A\Si←1\iso A\Si←2$. Then by lemmas 3.4 and 3.5, the completions $\Si←1$ and $\Si←2$ are equivalent; by lemma 3.6 the same is true of $S←1$ and $S←2$.}
Therefore, the topological structure of a subdivision is completely and uniquely characterized by its edge algebra. Theorems 3.3 and 3.7 also imply that all completions of a subdivision are equivalent, and that two subdivisions are equivalent if and only if their duals are equivalent. Therefore, the dual of a simple subdivision is unique up to equivalence.
\subsection{3.4. Realizability of algebras}
To conclude our theoretical justification, we will show that every edge algebra corresponds to a subdivision of some manifold. This fact is of great practical importance, for it guarantees that any modification to the data structure that leaves axioms E1--E5 and F1--F5 invariant corresponds to a valid operation on manifolds.

\theorem{3.8}{Every edge algebra can be realized by some subdivision.}
\proof{Let $A=(E, E\as, \flip, \rot, \onext)$ be an edge algebra. We will prove the theorem by constructing a completion $\Si$ such that $A\Si$ is isomorphic to $A$. The manifold of $\Si$ is constructed by taking a collection of disjoint closed triangles, which will become the triangles of a completion, and ``pasting'' their edges together as specified by $A$.
\pp Let then $U$ be the set of all {\it unoriented edges} of $A$, that is, the set of all unordered pairs $\set {e, e\flip}$ where $e\in E$. Similarly, let $U^\ast$ denote the unoriented edges of $E^\ast$. We define a {\it corner} of the algebra as being a pair of unoriented edges of the form $\set{\set{e, e\flip}, \set{e\rot, e\rot\flip}}$, where $e$ is an edge. Notice that there are $\leftv E\rightv$ distinct corners in the algebra, and that every unoriented edge belongs to exactly two corners. Let $\Tscr$ be a collection of $\leftv E\rightv$ disjoint closed triangles on the plane, each triangle $T←r$ associated to a unique and distinct corner $r$ of the algebra. Label the three vertices of each triangle with the symbols \prg{V}, \prg{E}, \prg{F}.
\pp For each unoriented edge $u\in U$, take the two corners $r$ and $s$ to which $u$ belongs, and identify homeomorphically the \prg{VE} sides of the two triangles $T←r$ and $T←s$ (matching \prg{V} with \prg{V} and \prg{E} with \prg{E}). That common side minus its two endpoints is the {\it primal link} corresponding to $u$. In the same manner, for every $u^\ast\in U^\ast$ take the two corners $r$ and $s$ intersecting $u^\ast$, and identify the \prg{FE} sides of $T←r$ and $T←s$; the common side will become the {\it dual link} corresponding to $u^\ast$.
\pp Finally, for every corner
$$r=\set{\set{e, e\flip}, \set{e\rot, e\rot\flip}}$$
there is exactly one {\it opposite corner}
$$s=\set{\set{f, f\flip}, \set{f\rot, f\rot\flip}}$$
such that $f=e\rot\onext$ and $e=f\rot\onext$. Identify the \prg{VF} sides of $T←r$ and $T←s$. Call the seam segment a {\it vertex-face link}.
\pp Clearly any point interior to a triangle has a neighborhood homeomorphic to a disk. Every side of every triangle is joined with exactly one side of a distinct triangle, so a point on a link also has a disk-like neighborhood. Now consider a vertex $v$ of some triangle, and all other points that have been identified with it; they all have the same label, by construction. An \prg{E} type vertex $v$ belongs to exactly four triangles, corresponding to the corners
$$\set{\set{e\rot^k, e\rot^k\flip}, \set{e\rot^k\rot, e\rot^k\rot\flip}}$$
for $0\leq k<4$ and some edge $e$. Each triangle is pasted to the next one by a primal or dual link incident at $v$, so as to form a quadrilateral with center $v$. A \prg{V} or \prg{F} type vertex $v$ is common to $2n$ triangles (for some $n\geq 1$) corresponding to the corners $$\set{\set{e←k, e←k\flip}, \set{e←k\rot, e←k\rot\flip}}\hbox{\rm \ and}$$ $$\set{\set{e←k\flip, e←k}, \set{e←k\flip\rot, e←k\flip\rot\flip}},$$ where $e←k=e\onext^k$ for some edge $e$ and $0\leq k<n$. These triangles are pasted alternately by vertex-face links and primal or dual links, so as to form a $2n$-sided polygon around $v$. In all cases, the vertex $v$ has a disk-like neighborhood.
\pp We conclude that the triangles $\Tscr$ pasted as above constitute a manifold. The links, the triangle interiors, and the identified vertices obviously define a completion $\Si$ of this manifold, and $A\Si$ is isomorphic to $A$.}
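The counting claims in this construction can be checked on a small example. The sketch below (Python, assumed names, not the paper's code) enumerates the corners of the smallest edge algebra with flip, that of a single edge on the sphere; elements are pairs $(r, f)$ standing for $e\rot^r\flip^f$.

```python
# Sketch (assumed names): the corners of the algebra of a single edge on
# the sphere, with flip.  Elements are pairs (r, f) for e.Rot^r.Flip^f.
from collections import Counter

def rot(x):
    r, f = x
    return ((r + 1 + 2 * f) % 4, f)   # Rot reverses sense on flipped edges

def flip(x):
    r, f = x
    return (r, f ^ 1)

E      = [(0, 0), (0, 1), (2, 0), (2, 1)]    # the primal elements
E_star = [(1, 0), (1, 1), (3, 0), (3, 1)]    # the dual elements

def unoriented(e):                 # the unoriented edge {e, e.Flip}
    return frozenset({e, flip(e)})

corners = {frozenset({unoriented(e), unoriented(rot(e))}) for e in E + E_star}

# There are |E| distinct corners, and every unoriented edge lies in
# exactly two of them, as the proof asserts.
assert len(corners) == len(E)
assert all(n == 2 for n in Counter(u for c in corners for u in c).values())
```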
\section{4. The quad edge data structure}
% macros for quad-edge operations ( in programs)
\def\psym{\hbox{\fx .Sym}}
\def\pflip{\hbox{\fx .Flip}}
\def\prot{\hbox{\fx .Rot}}
\def\ponext{\hbox{\fx .Onext}}
\def\poprev{\hbox{\fx .Oprev}}
\def\pdnext{\hbox{\fx .Dnext}}
\def\pdprev{\hbox{\fx .Dprev}}
\def\plnext{\hbox{\fx .Lnext}}
\def\plprev{\hbox{\fx .Lprev}}
\def\prnext{\hbox{\fx .Rnext}}
\def\prprev{\hbox{\fx .Rprev}}
\def\plef{\hbox{\fx .Left}}
\def\prif{\hbox{\fx .Right}}
\def\pnext{\hbox{\fx .Next}}
\def\pdata{\hbox{\fx .Data}}
\def\porg{\hbox{\fx .Org}}
\def\pdest{\hbox{\fx .Dest}}
We represent a subdivision $S$ (and simultaneously a dual subdivision $S^\ast$) by means of the {\it quad edge data structure}, which is a natural computer implementation of the corresponding edge algebra. The edges of the algebra can be partitioned into groups of eight: each group consists of the four oriented and directed versions of an undirected edge of $S$, plus the four versions of its dual edge. The group containing a particular edge $e$ is therefore the orbit of $e$ under the subalgebra generated by $\rot$ and $\flip$. To build the data structure, we select arbitrarily a {\it canonical representative} in each group. Then any edge $e$ can be written as $\bar e\rot^r\flip^f$, where $r\in\set{0, 1, 2, 3}$, $f\in\set{0,1}$, and $\bar e$ is the canonical representative of the group to which $e$ belongs.
The group of edges containing $e$ is represented in the data structure by one {\it edge record} \prg{e}, divided into four parts \prg{e[0]} through \prg{e[3]}. Part \prg{e[$r$]} corresponds to the edge $\bar e\rot^r$. See figure 4.1a.
A generic edge $e=\bar e\rot^r\flip^f$ is represented by the triplet $(\prg{e}, r, f)$, called an {\it edge reference}. We may think of this triplet as a pointer to the ``quarter-record'' \prg{e[$r$]}, plus a bit $f$ that tells whether we should look at it from ``above'' or from ``below''.
Each part \prg{e[$r$]} of an edge record contains two fields, $\prg{Data}$ and $\prg{Next}$. The \prg{Data} field is used to hold geometrical and other non-topological information about the edge $\bar e\rot^r$. This field neither affects nor is affected by the topological operations that we will describe, so its contents and format are entirely dependent on the application.
The \prg{Next} field of \prg{e[$r$]} contains a reference to the edge $\bar e\rot^r\onext$. Given an arbitrary edge reference $(\prg{e}, r, f)$, the three basic edge functions $\rot$, $\flip$, and $\onext$ are given by the formulas
$$\hskip20pt minus20pt
\vcenter{\halign{
\lft{$ # $}&\lft{$\null= # $}\cr\vsk3
(\prg{e}, r, f)\rot&(\prg{e}, r+1+2f, f)\cr\vsk3
(\prg{e}, r, f)\flip&(\prg{e}, r, f+1)\cr\vsk3
(\prg{e}, r, f)\onext&(\prg{e[$r+f$]\pnext})\rot^f \flip^f\cr
}}\eqno(1)$$
where the $r$ and $f$ components are computed modulo 4 and modulo 2, respectively. In the first expression above, note that $r+1+2f$ is congruent modulo 4 to $r+1$ if $f=0$, and $r-1$ if $f=1$; this corresponds to saying that rotating $e$ $90\deg$ counterclockwise, as seen from one side of the manifold, is the same as rotating it $90\deg$ clockwise as seen from the other side. Similarly, the third expression implies that
$$\hskip20pt minus20pt
\vbox{\halign{
\lft{$ # $}&\lft{$ # $}\cr
(\prg{e}, r, 0)\onext&
\null=\prg{e[$r$]\pnext}, \hbox{\quad and}\cr\vsk6
(\prg{e}, r, 1)\onext&
\null=(\prg{e[$r+1$]\pnext})\rot\flip\cr\vsk3
\null&\null=(\prg{e}, r, 0)\rot\onext\rot\flip\cr\vsk3
\null&\null=(\prg{e}, r, 0)\oprev\flip\cr
}}$$
i.e., moving counterclockwise around a vertex is the same as moving clockwise on the other side of the manifold. From these formulas it follows also that
$$\vbox{\halign{
\hskip20pt\lft{$ # $}&\lft{$ # $}\cr
(\prg{e}, r, f)\sym&\null=(\prg{e}, r+2, f)\cr\vsk3
(\prg{e}, r, f)\rot^{-1}&\null=(\prg{e}, r+3+2f, f)\cr\vsk3
(\prg{e}, r, f)\oprev&\null\cr\vsk3
\null&\null\hskip-20pt=(\prg{e[$r+1- f$]\pnext})
\rot^{1-f} \flip^f, \cr
}}$$
and so forth.
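These formulas translate directly into code. The sketch below (Python, with assumed names; the paper itself gives no implementation in this notation) represents an edge reference as a triplet (record, $r$, $f$) and builds by hand the record of a single edge on the sphere in order to exercise the identities above.

```python
# Sketch (assumed Python names): edge references (record, r, f), with Rot,
# Flip, and Onext computed by formulas (1).
class EdgeRef:
    def __init__(self, record, r, f):
        self.record, self.r, self.f = record, r % 4, f % 2

    def __eq__(self, other):
        return (self.record is other.record
                and (self.r, self.f) == (other.r, other.f))

    def rot(self):  return EdgeRef(self.record, self.r + 1 + 2 * self.f, self.f)
    def irot(self): return EdgeRef(self.record, self.r + 3 + 2 * self.f, self.f)
    def flip(self): return EdgeRef(self.record, self.r, self.f + 1)
    def sym(self):  return EdgeRef(self.record, self.r + 2, self.f)

    def onext(self):
        e = self.record[(self.r + self.f) % 4]   # the Next field of part r+f
        return e.rot().flip() if self.f else e   # then apply Rot^f Flip^f

# The record of a single edge on the sphere: e.Onext = e, and the dual
# edge is a loop, so e.Rot.Onext = e.Rot^-1.
q = [None] * 4
q[0], q[1], q[2], q[3] = (EdgeRef(q, 0, 0), EdgeRef(q, 3, 0),
                          EdgeRef(q, 2, 0), EdgeRef(q, 1, 0))

refs = [EdgeRef(q, r, f) for r in range(4) for f in range(2)]
assert all(x.rot().rot().rot().rot() == x for x in refs)     # Rot^4 = id
assert all(x.flip().flip() == x for x in refs)               # Flip^2 = id
assert all(x.flip().rot().flip() == x.irot() for x in refs)  # Flip Rot Flip = Rot^-1
assert all(x.rot().onext().rot().onext() == x for x in refs)
```

The last assertion checks the identity $e\rot\onext\rot\onext=e$ on all eight references of the group, including the flipped ones.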
The record of an arbitrary edge belongs to four circular lists, corresponding to the two vertices and the two faces incident to the edge. Figure 4.1 illustrates a portion of a subdivision and its quad edge data structure. Note however that to traverse those lists we have to use the $\onext$ function, not just the \prg{Next} pointers. Consider for example the situation depicted in figure 4.2, where the canonical representative of edge $a$ has orientation opposite to that of the others.
\capfig4cm{(a) Edge record showing \prg{Next} links.}
\capfig4cm{(b) A subdivision of the sphere.}
\capfig4cm{(c) The data structure for the subdivision (b).}
\capfig0cm{Figure 4.1. An edge record (a), a subdivision of the sphere (b), and the corresponding data structure (c).}
The quad edge data structure contains no separate records for vertices or faces; a vertex is implicitly defined as a ring of edges, and the standard way to refer to it is to specify one of its outgoing edges. This has the added advantage of specifying a reference point on its edge ring, which is frequently necessary when using the vertex as a parameter to topological operations. Similarly, the standard way of referring to a connected component of the edge structure is by giving one of its directed edges. In this way, we are also specifying one of the two dual subdivisions, and a ``starting place'' and ``starting direction'' on it. Therefore a subdivision referred to by the edge $e$ can be ``instantaneously'' transformed into its dual by taking $e\rot$.
\capfig7.5cm{Figure 4.2. An $\onext$ ring with canonical representatives on both sides of the manifold.}
\subsection{4.1. Simplifications for orientable manifolds}
In many applications, including the Voronoi and Delaunay algorithms that we are going to discuss, all manifolds to be handled are orientable. This means we can assign a specific orientation to each edge, vertex and face of the subdivision so that any two incident elements have compatible orientations. Therefore the elements of the edge algebra can be partitioned into two sets, each closed under $\rot$ and $\onext$, and each the image of the other under $\flip$. Then we don't need the $f$ bit in edge references, and the formulas simplify to
$$\vbox{\halign{
\hskip20pt\lft{$ # $}&\lft{$\null= # $}\cr
(\prg{e}, r)\rot&(\prg{e}, r+1)\cr\vsk3
(\prg{e}, r)\onext&\prg{e[$r$]\pnext}\cr\vsk6
(\prg{e}, r)\sym&(\prg{e}, r+2)\cr\vsk3
(\prg{e}, r)\rot^{-1}&(\prg{e}, r+3)\cr\vsk3
(\prg{e}, r)\oprev&(\prg{e[$r+1$]\pnext})\rot,\cr
}}$$
and so forth.
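In code, the simplification amounts to dropping the $f$ bit everywhere. The sketch below (Python, assumed names, not from the paper) gives the $(\prg{e}, r)$ reference arithmetic over a hand-built record of a single edge on the sphere.

```python
# Sketch (assumed names): simplified references (record, r) for orientable
# manifolds; no flip bit is needed and part r's Next field gives Onext.
class Edge:
    def __init__(self, record, r):
        self.record, self.r = record, r % 4

    def __eq__(self, other):
        return self.record is other.record and self.r == other.r

    def rot(self):   return Edge(self.record, self.r + 1)
    def sym(self):   return Edge(self.record, self.r + 2)
    def irot(self):  return Edge(self.record, self.r + 3)
    def onext(self): return self.record[self.r]
    def oprev(self): return self.record[(self.r + 1) % 4].rot()
    def lnext(self): return self.irot().onext().rot()

# A single edge on the sphere: e.Onext = e, dual is a loop.
q = [None] * 4
q[0], q[1], q[2], q[3] = Edge(q, 0), Edge(q, 3), Edge(q, 2), Edge(q, 1)
e = q[0]
assert e.onext() == e and e.oprev() == e
assert e.rot().onext() == e.irot()
assert e.lnext() == e.sym()
```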
We can represent a simple subdivision (without its dual) by a ``simple edge algebra'' that has only $\onext$ and $\sym$ as the primitive operators. Then we can get $\dnext$, $\lprev$ and $\rprev$ in constant time, but not their inverses. However, this may be adequate for some applications. We save two pointers (and perhaps two data fields) in each edge record. Note that this optimization cannot be used with $\flip$.
\subsection{4.2. Additional comments on the data structure}
The storage space required by the quad edge data structure, including the \prg{Data} fields, is $\leftv EP\rightv\times\null$(8 record pointers + 12 bits). The simplification for orientable manifolds reduces those 12 bits to 8. This compares favorably with the winged-edge representation [\Baum] and with the Muller-Preparata variant [\MuPr]. Indeed, all three representations use essentially the same pointers: each edge is connected to the four ``immediately adjacent'' ones ($\onext$, $\oprev$, $\dnext$, $\dprev$), and the four \prg{Data} fields of our structure may be seen as corresponding to the vertex and face links of theirs.
Compared with the two versions mentioned above, the quad edge data structure has the advantage of allowing uniform access to the dual and mirror-image subdivisions. As we shall see, this capability allows us to cut in half the number of primitive and derived operations, since these usually come in pairs whose members are ``duals'' of each other. As an illustration of the flexibility of the quad edge structure, consider the problem of constructing a diagram which is a cube joined to an octahedron: we can construct two cubes (calling the same procedure twice) and join one to the dual of the other.
The systematic enumeration of all edges in a (connected) subdivision is a straightforward programming exercise, given an auxiliary stack of size $O(\leftv EP\rightv)$ and a boolean mark bit on each directed edge [\Knuth]. With a few more bits per edge, we can do away with the stack entirely [\Even]. A slight modification of those algorithms can be used to enumerate the vertices of the subdivision, in the sense of visiting exactly one edge out of every vertex. If we take the dual subdivision, we get an enumeration of the faces. In all cases the running time is linear in the number of edges. Recall also that from Euler's relation it follows that the number of vertices, edges, and faces of a subdivision are linearly related.
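One way of carrying out that enumeration is sketched below (Python, assumed names, not the paper's code): an explicit stack and a set of marks, following $\onext$ and $\rot$ from a starting edge; edges are (record, $r$) pairs, and the sample record is that of a single edge on the sphere.

```python
# Sketch (assumed names): enumerating all directed edges of a connected
# structure, using an explicit stack and a mark per directed edge.
def all_edges(start, onext, rot):
    seen, stack, out = set(), [start], []
    while stack:
        e = stack.pop()
        key = (id(e[0]), e[1])        # mark directed edges by record identity
        if key in seen:
            continue
        seen.add(key)
        out.append(e)
        stack.append(onext(e))        # same origin ring
        stack.append(rot(e))          # cross over to the dual
    return out

def rot(e):
    q, r = e
    return (q, (r + 1) % 4)

def onext(e):
    q, r = e
    return q[r]                       # the Next field of part r

# A single edge on the sphere, as a hand-built record:
q = [None] * 4
q[0], q[1], q[2], q[3] = (q, 0), (q, 3), (q, 2), (q, 1)

edges = all_edges((q, 0), onext, rot)
assert len(edges) == 4    # e, e.Sym, and the two directions of the dual
```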
\section{5. Basic topological operators}
Perhaps the main advantage of the quad edge data structure is that the construction and modification of arbitrary diagrams can be effected by as few as two basic topological operators, in contrast to the half-dozen or more required by the previous versions [\Braid, \MaSu].
The first operator is denoted by \prg{$e$ \pgets\ MakeEdge[]}. It takes no parameters, and returns an edge $e$ of a newly-created data structure representing a subdivision of the sphere (see fig. 5.1).
\capfig4.5cm{Figure 5.1. The result of \prg{MakeEdge}.}
Apart from orientation and direction, $e$ will be the only edge of the subdivision, and will not be a loop; we have $e\org\neq e\dest$, $e\lef=e\rif$, $e\lnext=e\rnext=e\sym$, and $e\onext=e\oprev=e$. To construct a loop, we may use \prg{$e$ \pgets\ MakeEdge[]\prot}; then we will have $e\org=e\dest$, $e\lef\neq e\rif$, $e\lnext=e\rnext=e$, and $e\onext=e\oprev=e\sym$.
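The stated identities can be checked directly on a toy implementation. The sketch below (Python, assumed names; the paper defines \prg{MakeEdge} abstractly, not in this notation) builds the one four-part record of \prg{MakeEdge} in the simplified orientable representation.

```python
# Sketch (assumed Python names): MakeEdge as one four-part record whose
# Next fields encode a single edge on the sphere.
class Edge:
    def __init__(self, record, r):
        self.record, self.r = record, r % 4
    def __eq__(self, other):
        return self.record is other.record and self.r == other.r
    def rot(self):   return Edge(self.record, self.r + 1)
    def irot(self):  return Edge(self.record, self.r + 3)
    def sym(self):   return Edge(self.record, self.r + 2)
    def onext(self): return self.record[self.r]
    def oprev(self): return self.record[(self.r + 1) % 4].rot()
    def lnext(self): return self.irot().onext().rot()
    def rnext(self): return self.rot().onext().irot()

def make_edge():
    q = [None] * 4
    q[0], q[1], q[2], q[3] = Edge(q, 0), Edge(q, 3), Edge(q, 2), Edge(q, 1)
    return q[0]

e = make_edge()                # a non-loop edge
assert e.onext() == e and e.oprev() == e
assert e.lnext() == e.sym() and e.rnext() == e.sym()

l = make_edge().rot()          # a loop, as suggested in the text
assert l.onext() == l.sym() and l.oprev() == l.sym()
assert l.lnext() == l and l.rnext() == l
```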
The second operator is denoted by \prg{Splice[$a$, $b$]} and takes as parameters two edges $a$ and $b$, returning no value. This operation affects the two edge rings $a\org$ and $b\org$, and, independently, the two edge rings $a\lef$ and $b\lef$. In each case,
\begitems
\item if the two rings are distinct, \prg{Splice} will combine them into one;
\item if the two are exactly the same ring, \prg{Splice} will break it in two separate pieces;
\item if the two are the same ring taken with opposite orientations, \prg{Splice} will $\flip$ (and reverse the order of) a segment of that ring.
\enditems
The parameters $a$ and $b$ determine the place where the edge rings will be cut and joined. For the rings $a\org$ and $b\org$, the cuts will occur immediately {\it after} $a$ and $b$ (in counterclockwise order); for the rings $a\lef$ and $b\lef$, the cut will occur immediately {\it before} $a\rot$ and $b\rot$. Figure 5.2a illustrates this process for one of the simplest cases, when $a$ and $b$ have the same origin and distinct left faces. In this case \prg{Splice[$a$, $b$]} splits the common origin of $a$ and $b$ in two separate vertices, and joins their left faces. If the origins are distinct and the left faces are the same, the effect will be precisely the opposite: the vertices are joined and the left faces are split. Indeed, \prg{Splice} is its own inverse: if we perform \prg{Splice[$a$, $b$]} twice in a row we will get back the same subdivision.
\capfig7.5cm{\null\hfil$a\org = b\org$, $a\lef\neq b\lef$}
\capfig7.5cm{\null\hfil$a\org \neq b\org$, $a\lef= b\lef$}
\capfig0cm{Figure 5.2a. The effect of \prg{Splice}: trading a vertex for a face.}
Figure 5.2b illustrates the effect of \prg{Splice[$a$, $b$]} in the case where $a$ and $b$ have distinct left faces and distinct origins. In this case, \prg{Splice} will either join two components into a single one, or add an extra ``handle'' to the manifold, depending on whether $a$ and $b$ are in the same component or not. The quad-edge data structure contains no mechanism to distinguish between these two cases, or to keep track automatically of the components and connectivity of the manifold. There seems to be no general way of doing this at a bounded cost per operation; on the other hand, in many applications this problem is trivial or straightforward, so it is best to solve this problem independently for each case. Figure 5.2b also illustrates the case when both the origins and the left faces coincide.
\capfig7.5cm{\null\hfil$a\org \neq b\org$, $a\lef\neq b\lef$}
\capfig7.5cm{\null\hfil$a\org = b\org$, $a\lef= b\lef$}
\capfig0cm{Figure 5.2b. The effect of \prg{Splice}: Changing the connectivity of the manifold.}
In the edge algebra, the $\org$ and $\lef$ rings of an edge $e$ are the orbits under $\onext$ of $e$ and $e\onext\rot$, respectively. The effect of \prg{Splice} can be described as the construction of a new edge algebra $A\pr=(E, E^\ast, \onext\pr, \rot, \flip)$ from an existing algebra $A=(E, E^\ast, \onext, \rot, \flip)$, where $\onext\pr$ is obtained from $\onext$ by redefining some of its values. The modifications needed to obtain the effect described above are actually quite simple. If we let $\alpha=a\onext\rot$ and $\beta=b\onext\rot$, basically all we have to do is to interchange the values of $a\onext$ with $b\onext$ and $\alpha\onext$ with $\beta\onext$. The apparently complex behavior of \prg{Splice} now can be recognized as the familiar effect of interchanging the \prg{next} links of two circular list nodes [\Knuth].
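To make the circular-list analogy concrete, here is a small illustrative fragment (in Python, and not part of the paper's programs; the names are ours). Exchanging the \prg{next} links of two circular-list nodes merges two rings into one, or splits one ring in two, which is exactly what \prg{Splice} does to the $\onext$ rings:

```python
class Node:
    """A node in a circular singly-linked list (a 'ring')."""
    def __init__(self, label):
        self.label = label
        self.next = self  # a fresh node is a ring of one element

def make_ring(labels):
    """Build a ring whose nodes carry the given labels, in order."""
    nodes = [Node(x) for x in labels]
    for u, v in zip(nodes, nodes[1:] + nodes[:1]):
        u.next = v
    return nodes

def ring_of(node):
    """Collect the labels of the ring containing `node`."""
    out, cur = [node.label], node.next
    while cur is not node:
        out.append(cur.label)
        cur = cur.next
    return out

def swap_next(a, b):
    """The list-level core of Splice: exchange two next links."""
    a.next, b.next = b.next, a.next

# Two disjoint rings: 1->2->3->1 and 4->5->4.
r1 = make_ring([1, 2, 3])
r2 = make_ring([4, 5])
a, b = r1[0], r2[0]

swap_next(a, b)                 # distinct rings: merge into one
print(ring_of(a))               # [1, 5, 4, 2, 3]

swap_next(a, b)                 # the operation is its own inverse: split again
print(ring_of(a), ring_of(b))   # [1, 2, 3] [4, 5]
```

Applying `swap_next` to two nodes of the *same* ring splits it in two, mirroring the vertex-splitting behavior of \prg{Splice}.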
As one may well expect, to preserve the validity of the axioms F1--F5 and E1--E5 we may have to make some additional changes to the $\onext$ function. For example, whenever we redefine $e\onext\pr$ to be some edge $f$, we must also redefine $e\flip{(\onext\pr)}^{-1}$ to be $f\flip$, or, equivalently, $f\flip\onext\pr$ to be $e\flip$. So, \prg{Splice[$a$, $b$]} must perform at least the following changes in the function $\onext$:
\vfill\eject
$$\vcenter{\halign{\hfil\rt{$ # $}&\lft{$\null= # $}\cr
a\onext\pr&b\onext\cr
b\onext\pr&a\onext\cr\vsk3
\alpha\onext\pr&\beta\onext\cr
\beta\onext\pr&\alpha\onext\cr\vsk6}
\halign{\hfil\rt{$ # $}&\lft{$\null= # $}\cr
(b\onext\flip)\onext\pr&a\flip\cr
(a\onext\flip)\onext\pr&b\flip\cr\vsk3
(\beta\onext\flip)\onext\pr&\alpha\flip\cr
(\alpha\onext\flip)\onext\pr&\beta\flip\cr
}}\eqno(2)$$
Note that these equations reduce to $\onext\pr=\onext$ if $b=a$. Since $a\onext\pr=b\onext$, to satisfy axiom E5 we must have $a\in E$ iff $b\onext\in E$, which is equivalent to $a\in E$ iff $b\in E$. We will take this as a precondition for the validity of \prg{Splice[$a$, $b$]}: the effect of this operation is not defined if $a$ is a primal edge and $b$ is dual, or vice-versa\foot\dag{Note that if $a$ and $b$ lie in distinct subalgebras $A_a$ and $A_b$ of $A$, then the union of $A_a$ and the dual of $A_b$ is also a valid edge algebra. So, in practice we can always perform \prg{Splice[$a$, $b$]} when $a$ and $b$ lie in disjoint data structures.}. Another problematic situation is when $b=a\onext\flip$: according to equations (2) we would have $a\onext\pr=a\onext\flip\onext=a\flip$, which contradicts F3. In this particular case, it is more convenient to define the effect of \prg{Splice[$a$, $b$]} as being null, i.e. $\onext\pr=\onext$. It turns out that, with only these two exceptions, the equations above always define a valid edge algebra:
\theorem{5.1}{If $A$ is an edge algebra, $a$ and $b$ are both primal or both dual, and $b\neq a\onext\flip$, then the algebra $A\pr$ obtained by performing the operation \prg{Splice[$a$, $b$]} on $A$ is also an edge algebra.}
\proof{Since \prg{Splice} does not affect $\flip$ and $\rot$, all axioms except F2, F3, E2 and E5 are automatically satisfied by $A\pr$.
\pp Since $a$ and $b$ are both primal or both dual, the same is true of $\alpha$ and $\beta$, $a\onext\flip$ and $b\onext\flip$, and $\alpha\onext\flip$ and $\beta\onext\flip$. Thus the assignments corresponding to \prg{Splice[$a$, $b$]} will not destroy E5.
\pp Now let us show that E2 holds in $A\pr$, i.e. $e\rot\onext\pr\rot\onext\pr=e$. Let $X$ be the set of edges whose $\onext$ has been changed, i.e.
$$\halign{\rt{$ # $}&\lft{$ # $}\cr
X={\left\{\right.}\,&a, b,\cr\vsk3
\null& \alpha, \beta,\cr\vsk3
\null& a\onext\flip, b\onext\flip,\cr\vsk3
\null& \alpha\onext\flip, \beta\onext\flip\,{\left.\right\}}}.\cr}$$
First, if $e\rot\not\in X$, then $e\rot\onext\rot\not\in X\onext\rot=X$, and so
$$\eqalign{e\rot\onext\pr\rot\onext\pr&=e\rot\onext\rot\onext\pr\cr
\null&=(e\rot\onext\rot)\onext\cr
\null&=e.\cr}$$
\pp Now assume $e\rot\in X$. Notice that \prg{Splice[$a$, $b$]} does exactly the same thing as \prg{Splice[$b$, $a$]}, \prg{Splice[$\alpha$, $\beta$]}, and \prg{Splice[$a\onext\flip$, $b\onext\flip$]}, so without loss of generality we can assume $e\rot=a$. Then
$$\eqalign{e\rot\onext\pr\rot\onext\pr&=a\onext\pr\rot\onext\pr\cr
\null&=b\onext\rot\onext\pr\cr
\null&=\beta\onext\pr\cr
\null&=\alpha\onext\cr
\null&=a\onext\rot\onext\cr
\null&=e\rot\onext\rot\onext\cr
\null&=e.\cr}$$
\pp In a similar way we can prove F2. To conclude, let us prove F3, or $e\flip{(\onext\pr)}^n\neq e$ for all $n$. In other words, we have to show that $\flip$ always takes an $\onext\pr$ orbit to a different $\onext\pr$ orbit. It suffices to show this for the orbits of elements of $X$; in fact the symmetry of \prg{Splice} implies it is sufficient to show this for the orbit of $a$.
\pp Let $a\org=\seq{a_1\; a_2\; \ldots\; a_{m-1}\; a_m(=a)}$ be the orbit of $a$ under the original $\onext$. The orbit of $a\flip$ under $\onext$ is then $a\flip\org=\seq{a_m\pr\; a_{m-1}\pr\; \ldots\; a_2\pr\; a_1\pr}$, where $a_i\pr=a_i\flip$ for all $i$. These two orbits are disjoint; they cannot contain any of the edges $\alpha$, $\beta$, $\alpha\onext\flip$, or $\beta\onext\flip$, because they are in the dual subdivision. Furthermore, one contains $b$ if and only if the other contains $b\flip$. There are then only three cases to consider (see figure 5.3):
\capfig9cm{\null\hskip20pt Case 1.\hfil Case 2.\hskip20pt\hfilneg}
\capfig6cm{\null\hfil Case 3.}
\capfig0cm{Figure 5.3. The effect of \prg{Splice} on the $\onext$ orbits.}
\pp Case 1: the edge $b$ is neither in $a\org$ nor in $a\flip\org$. Then let $b\org=\seq{b_1\; b_2\; \ldots\; b_{n-1}\; b_n(=b)}$ and $b\flip\org=\seq{b_n\pr\;b_{n-1}\pr\; \ldots\; b_2\pr\;b_1\pr}$. According to (2), we will have $a_m\onext\pr=b_1$, $b_n\onext\pr=a_1$, $a_1\pr\onext\pr=b_n\pr$, $b_1\pr\onext\pr=a_m\pr$. Therefore, the orbits of $a$ and $a\flip$ under $\onext\pr$ will be $a\org\pr=\seq{a_1\; a_2\; \ldots\; a_{m-1}\; a_m(=a)\; b_1\; b_2\; \ldots\; b_{n-1}\; b_n(=b)}$ and $a\flip\org\pr=\seq{b_n\pr\;b_{n-1}\pr\; \ldots\; b_2\pr\;b_1\pr\;a_m\pr\; a_{m-1}\pr\; \ldots\; a_2\pr\; a_1\pr}$.
\pp Case 2: the edge $b$ occurs in $a\org$. Then $b=a_i$ for some $i$, $1\leq i\leq m$. After \prg{Splice} is executed we will have $a_m\onext\pr=a_{i+1}$, $a_i\onext\pr=a_1$, $a_1\pr\onext\pr=a_i\pr$, and $a_{i+1}\pr\onext\pr=a_m\pr$. If $i=m$ (i.e., if $a=b$) then $\onext\pr=\onext$ and we are done. If $i\neq m$ then under $\onext\pr$ the elements of $a\org$ and $a\flip\org$ will be split in the four orbits $b\org\pr=\seq{a_1\;a_2\;\ldots\;a_i}$, $a\org\pr=\seq{a_{i+1}\; a_{i+2}\; \ldots\;a_m}$, $a\flip\org\pr=\seq{a_m\pr\; a_{m-1}\pr\; \ldots\; a_{i+1}\pr}$, and $b\flip\org\pr=\seq{a_i\pr\;a_{i-1}\pr\;\ldots\;a_1\pr}$.
\pp Case 3: the edge $b$ occurs in $a\flip\org$. Since $b\neq a\onext\flip=a_1\pr$, we have $b=a_i\pr$ for some $i$, $2\leq i\leq m$. After \prg{Splice} is executed we will have $a_m\onext\pr=a_{i-1}\pr$, $a_i\pr\onext\pr=a_1$, $a_1\pr\onext\pr=a_i$, and $a_{i-1}\onext\pr=a_m\pr$. Then the orbits of $\onext\pr$ containing those elements will be $a\org\pr=\seq{a_{i-1}\pr\; a_{i-2}\pr\; \ldots\; a_1\pr\; a_i\; a_{i+1}\;\ldots\;a_{m-1}\;a_m}$ and $a\flip\org\pr=\seq{a_m\pr\;a_{m-1}\pr\;\ldots\;a_{i+1}\pr\;a_{i}\pr\;a_1\; a_2\;\ldots\;a_{i-1}}$.
\pp In all three cases, the orbits of $e$ and $e\flip$ under $\onext\pr$ will be disjoint, for all edges $e$.}
The proof of theorem 5.1 gives a precise description of the effect of \prg{Splice} on the edge rings. In particular, the discussion for case 3 helps in the understanding of figure 5.3. In that case the effect of \prg{Splice} is to add or remove a ``cross cap'' to the manifold.
In terms of the data structure, the \prg{Splice} operation is even simpler. The identities $$a\onext\flip=a\onext\rot\flip\rot=\alpha\flip\rot$$ and $$\alpha\onext\flip=a\onext\rot\onext\flip=a\flip\rot$$ allow us to rewrite (2) as
$$\vcenter{\halign{\hskip20pt \rt{$ # $}&\lft{$\,\gets\, # $}& \hskip0.4cm \rt{$ # $}&\lft{$\,\gets\, # $}\cr
a\onext&b\onext&(a\flip\rot)\onext&\beta\flip\cr\vsk3
b\onext&a\onext&(b\flip\rot)\onext&\alpha\flip\cr\vsk6
\alpha\onext&\beta\onext&
(\alpha\flip\rot)\onext&b\flip\cr\vsk3
\beta\onext&\alpha\onext&
(\beta\flip\rot)\onext&a\flip\cr
}}\eqno(3)$$
Only one of the two assignments in each line of (3) is meaningful. The reason is that only one of the receiving $\onext$ fields actually exists in the structure; the value of the other is determined implicitly from existing links by the equations (1). If the $f$ bit of $a$ is 0, then $a\onext$ exists, and \prg{Splice} writes $b\onext$ into it. Otherwise $a\flip\rot$ has $f=0$, and we can assign $b\flip\rot\onext$ to $a\flip\rot\onext$. The same applies to $b$, $\alpha$ and $\beta$. Note that these assignments are simultaneous, that is, all right-hand sides are computed before any value is assigned to the left-hand sides. In addition, these assignments should be preceded by a test of whether $b=a\onext\flip$, in which case they should not be executed at all. Note however that there is no need to check for $a=b$.
Further reductions in the code of \prg{Splice} occur in the case of orientable manifolds, when we can use the simplified data structure without $\flip$ and the $f$ bits. In that case, the meaningful assignments are precisely those in the left column of (3), and the test for $b=a\onext\flip$ is meaningless.

\capfig7cm{\null\hfil$a\org = b\org\flip$, $a\lef= b\lef$}
\capfig7cm{\null\hfil$a\org = b\org\flip$, $a\lef= b\lef$\cp
$c\lef = b\lef\flip$, $c\org= b\org$}
\capfig7cm{\null\hfil$c\lef = b\lef\flip$, $c\org\neq b\org$.}
\capfig0cm{Figure 5.2c. The effect of \prg{Splice}: Adding or removing a cross-cap.}
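For the simplified orientable structure without $\flip$, \prg{MakeEdge} and \prg{Splice} fit in a few lines. The following Python sketch (ours, not part of the paper's programs; the names are illustrative) represents each quad-edge record as four directed edges linked cyclically by $\rot$, and implements exactly the left column of (3):

```python
class Edge:
    """One of the four directed edges (e, e Rot, e Sym, e Rot^-1) of a record."""
    def __init__(self):
        self.onext = self
        self.rot = None

    @property
    def sym(self):            # same edge, opposite direction: Rot applied twice
        return self.rot.rot

def make_edge():
    """MakeEdge: a single edge of a subdivision of the sphere (fig. 5.1):
    two distinct endpoints, one face; the dual edge is therefore a loop."""
    q = [Edge() for _ in range(4)]
    for i in range(4):
        q[i].rot = q[(i + 1) % 4]
    q[0].onext, q[2].onext = q[0], q[2]   # e and e Sym: origin rings of one edge
    q[1].onext, q[3].onext = q[3], q[1]   # dual loop: eOnext = eSym
    return q[0]

def splice(a, b):
    """Splice[a, b] for orientable manifolds: exchange onext links in the
    origin rings of a and b and, independently, in the dual rings
    (the left column of the assignments (3))."""
    alpha = a.onext.rot
    beta = b.onext.rot
    a.onext, b.onext = b.onext, a.onext
    alpha.onext, beta.onext = beta.onext, alpha.onext

e = make_edge()
assert e.onext is e and e.sym.onext is e.sym   # e is not a loop
assert e.rot.onext is e.rot.sym                # the dual edge is a loop

f = make_edge()
splice(e, f)                # merge the two origin rings
assert e.onext is f and f.onext is e
splice(e, f)                # Splice is its own inverse
assert e.onext is e and f.onext is f
```

The two `assert` groups at the end check the ring structure stated in section 5 for \prg{MakeEdge}, and the self-inverse property of \prg{Splice}.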
As an example of the use of \prg{Splice}, consider the operation \prg{SwapDiagonal[$e$]} that we will be using later on: given an edge $e$ whose left and right faces are triangles, the problem is to delete $e$ and connect the other two vertices of the quadrilateral thus formed (see fig. 5.4).
This is equivalent to
{\prog
\ \ \ a \pgets\ e\poprev;
\ \ \ b \pgets\ e\psym\poprev;
\ \ \ Splice[e, a]; Splice[e\psym, b];
\ \ \ Splice[e, a\plnext]; Splice[e\psym, b\plnext]
\endprog}
The first pair of \prg{Splice}s disconnects $e$ from the edge structure, and leaves it as the single edge of a separate spherical component. The last two \prg{Splice}s connect $e$ again at the required position.
\capfig5cm{Figure 5.4. The effect of \prg{SwapDiagonal[$e$]}.}
\theorem{5.2}{An arbitrary subdivision $P$ can be transformed into a collection of $\leftv EP\rightv$ isolated edges by the application of at most $2\leftv EP\rightv$ \prg{Splice} operations.}
\proof{Let $e$ be an arbitrary edge of $P$. Then the operations \prg{Splice[$e$, $e\oprev$]} and \prg{Splice[$e\sym$, $e\sym\oprev$]} will have the effect of removing $e$ from $P$ and placing it as an isolated edge on a separate manifold homeomorphic to the sphere. By repeating this for every edge the theorem follows.}
From this theorem and from the fact that \prg{Splice} is its own inverse, we can conclude that any simple subdivision $P$ can be constructed, in $O(\leftv EP\rightv)$ time and space, by using only the \prg{Splice} and \prg{MakeEdge} operations.
The \prg{Data} links are not affected by (and do not affect) the \prg{MakeEdge} and \prg{Splice} operations; if used at all, they can be set and updated, at any time after the edge is created, by plain assignment statements. Since they carry no topological information, there is no need to forbid or restrict assignments to them. Usually each application imposes geometrical or other constraints on the \prg{Data} fields that may be affected by changes in the topology. Some care is required when enforcing those constraints; for example, the operation of joining two vertices may change the geometrical parameters of a large number of edges and faces, and updating all the corresponding \prg{Data} fields every time may be too expensive. However, even in such applications it is frequently the case that we can defer those updates until they are really needed (so that their cost can be amortized over a large number of \prg{Splice}s), or initialize the \prg{Data} links right from the beginning with the values they must have in the final structure.
\section{6. Topological operators for Delaunay diagrams}
In the Voronoi/Delaunay algorithms described further on, all edge variables refer to edges of the Delaunay diagram. The \prg{Data} field for a Delaunay edge $e$ points to a record containing the coordinates of its origin $e\org$, which is one of the sites; accordingly, we will use $e\porg$ as a synonym of $e\pdata$ in those algorithms. For convenience, we will also use $e\pdest$ instead of $e\psym\porg$. We emphasize again that these \prg{Dest} and \prg{Org} fields carry no topological meaning, and are not updated by the \prg{Splice} operation per se. The endpoints of the dual edges (Voronoi vertices) are neither computed nor used by the algorithms; if desired, they can be easily added to the structure, either during its construction or after it. The fields $e\prot\pdata$ and $e\prot^{-1}\pdata$ are not used.
The topological manipulations performed by the algorithms on the Delaunay/Voronoi diagrams can be reduced to a pair of higher-level topological operators, defined here in terms of \prg{Splice} and \prg{MakeEdge}. The operation \prg{$e$ \pgets\ Connect[$a$, $b$, \hbox{\it side}]} will add a new edge $e$ connecting the destination of $a$ to the origin of $b$, across either $a\lef$ or $a\rif$ depending on the \hbox{\it side} flag. For added convenience, it will also set the \prg{Org} and \prg{Dest} fields of the new edge to $a\pdest$ and $b\porg$, respectively.
{\prog
\ \ \ e \pgets\ MakeEdge[];
\ \ \ e\porg \pgets\ a\pdest;
\ \ \ e\pdest \pgets\ b\porg;
\ \ \ Splice[e, IF Side=left THEN a\plnext ELSE a\psym];
\ \ \ Splice[e\psym, IF Side=left THEN b ELSE b\poprev]
\endprog}
The operation \prg{DeleteEdge[$e$]} will disconnect the edge $e$ from the rest of the structure (this may cause the rest of the structure to fall apart in two separate components). In a sense, \prg{DeleteEdge} is the inverse of \prg{Connect}. It is equivalent to
{\prog
\ \ \ Splice[e, e\poprev];
\ \ \ Splice[e\psym, e\psym\poprev]
\endprog}
\section{7. Voronoi and Delaunay -- some basic facts}
In this section we recapitulate some of the most important properties of Voronoi diagrams and their duals, with an emphasis on the results we will need later on. For a fuller treatment of these topics the reader should consult [\Sham], [\Lee], or [\ShamHo].
If we are given only two sites, then the associated Voronoi regions are simply the two (open) half-planes delimited by the bisector of the two sites. More generally, when $n$ sites are given, the region associated with a particular site $p$ will be the intersection of all half-planes containing $p$ and delimited by the bisectors between $p$ and the other sites. It follows that the Voronoi regions are (possibly unbounded) convex polygons, whose edges are portions of inter-site bisectors, and whose vertices are circumcenters of triangles defined by three of the sites. An example Voronoi diagram for a small collection of sites is shown in fig. 7.1.
As mentioned above, most of the time we will actually deal with a dual of the Voronoi subdivision, commonly called the Delaunay triangulation. As the name implies, this is a planar subdivision whose vertices are the $n$ sites and whose edges are straight line segments that connect every pair of sites associated with regions that share a common edge. If no four of the sites happen to be cocircular, then this dual is in fact a triangulation and, in any case, additional edges can be introduced to make it one. Although our algorithms can handle the degeneracies implied by four cocircular points, in this section only we will assume that such degeneracies do not arise, so we can speak unambiguously of the Delaunay triangulation of the $n$ sites. If $S$ denotes our collection of sites, then $D(S)$ will denote the Delaunay triangulation of $S$.
We say that a circle is {\it point-free} if none of the given sites is contained in its interior. It is known that the circumcircle of each Delaunay triangle is point-free. Actually the converse is also true: if the circumcircle of a triangle defined by three of the sites is point-free, then that triangle is a Delaunay triangle. And if it is possible to find a point-free circle passing through two sites, then these sites will be connected by a Delaunay edge. For a discussion of these facts see [\Lee]. We will speak of these circles as {\it witnesses} to the ``Delaunayhood'' of the corresponding triangle or edge respectively. For completeness we also state here a couple of other obvious properties of the Delaunay triangulation which will be useful in the sequel.
\capfig9cm{Figure 7.1. The Voronoi diagram (solid) and the Delaunay diagram (dashed).}
\lemma{7.1}{Each triangulation of $n$ points of which $k$ lie on the convex hull has $2(n-1)-k$ triangles and $3(n-1)-k$ edges.}
\proof{Let $t$ be the number of triangles and $e$ the number of edges. By counting edges we have $3t+k = 2e$, and from Euler's relation $n-e+(t+1) = 2$. We can solve this system of two equations and obtain the lemma.}
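The counting argument of lemma 7.1 can be checked mechanically. The fragment below (an illustrative Python sketch, not part of the paper) plugs the closed forms for $t$ and $e$ back into the two defining equations:

```python
def triangulation_counts(n, k):
    """Solve 3t + k = 2e and n - e + (t + 1) = 2 for t and e.
    (Every triangle has 3 edges; every edge lies on 2 triangles
    except the k hull edges, which lie on 1; plus Euler's relation,
    with t + 1 counting the outer face.)"""
    t = 2 * (n - 1) - k
    e = 3 * (n - 1) - k
    assert 3 * t + k == 2 * e        # edge/triangle incidences
    assert n - e + (t + 1) == 2      # Euler's relation
    return t, e

# A triangle with one interior point: n = 4, k = 3,
# giving 3 triangles and 6 edges.
print(triangulation_counts(4, 3))   # (3, 6)
```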
\lemma{7.2}{Each convex hull edge is a Delaunay edge.}
\proof{The half-plane supporting the edge and not containing the convex hull is a degenerate circle witness to the Delaunayhood of the edge.}
\lemma{7.3}{If $S$ and $T$ are two point sets then each Delaunay edge $e \in D(S\union T)$, both of whose endpoints are in $S$, is also a Delaunay edge in $D(S)$.}
In other words, the addition of extra points does not add new edges between old points.
\proof{There is a point-free circle through $e$ in $S\union T$, and therefore {\it a fortiori} in $S$.}
\section{8. The InCircle test}
We now proceed to define the main geometric primitive we will use for Delaunay computations. This test is applied to four distinct points $A$, $B$, $C$, and $D$ in the plane. See fig. 8.1.
\capfig4.5cm{Figure 8.1. The $\incircle$ test.}
\definition{8.1}{The predicate $\incircle(A,B,C,D)$ is defined to be true if and only if point $D$ lies interior to the region of the plane bounded by the circle passing through $A$, $B$, and $C$ and lying to the left of that circle when the latter is traversed in the direction from $A$ to $B$ to $C$.}
In particular this implies that $D$ should be inside the circle $ABC$ if the points $A$, $B$, and $C$ define a counterclockwise oriented triangle, and {\it outside}, if they define a clockwise oriented one. (In case $A$, $B$, and $C$ are collinear we interpret the line as a circle by adding a point at infinity). If $A$, $B$, $C$, and $D$ are cocircular, then our predicate returns false. Notice that the test is equivalent to asking whether $\angle ABC + \angle CDA >\angle BCD + \angle DAB$. Another equivalent form of it is given below, based on the coordinates of the points.
\lemma{8.1}{The test $\incircle(A,B,C,D)$ is equivalent to
$$\Dscr(A,B,C,D) = \detmat{
\matrix4{
x_A&y_A&x_A^2+y_A^2&1\cr
\noalign{\vskip 2.5pt}
x_B&y_B&x_B^2+y_B^2&1\cr
\noalign{\vskip 2.5pt}
x_C&y_C&x_C^2+y_C^2&1\cr
\noalign{\vskip 2.5pt}
x_D&y_D&x_D^2+y_D^2&1\cr}} > 0.$$}
\proof{We consider the following mapping from points in the plane to points in space:
$$\lambda:\ (x,y)\mapsto(x,y,x^2+y^2),$$
which lifts each point on the $x,y$-plane onto the paraboloid of revolution $z=x^2+y^2$. See fig. 8.2 for an illustration. We first show that $A$, $B$, $C$, and $D$ are cocircular if and only if $\lambda(A)$, $\lambda(B)$, $\lambda(C)$, and $\lambda(D)$ are coplanar, a rather amazing fact.
\capfig9cm{Figure 8.2. The quadratic map for computing $\incircle$.}
\pp Suppose first that $A$, $B$, $C$, and $D$ are cocircular. If we have the degenerate case where they are collinear, then $\Dscr(A,B,C,D)$ is zero, as we can see by expanding it by the third column. But $\Dscr(A,B,C,D)$ is also the (signed) volume of the tetrahedron defined by $\lambda(A)$, $\lambda(B)$, $\lambda(C)$, and $\lambda(D)$. Since the volume is zero, the points must be coplanar. Otherwise let $(p,q)$ denote the center and $r$ the radius of the circle passing through the points $A$, $B$, $C$, $D$. We must have
$$(x_A-p)^2+(y_A-q)^2=r^2,$$
or equivalently
$$-2p\cdot x_A-2q\cdot y_A+1\cdot (x_A^2+y_A^2)+(p^2+q^2-r^2)\cdot 1=0.\eqno(1)$$
This relation also holds for points $B$, $C$, and $D$, and therefore we have a linear dependence among the columns of the determinant $\Dscr(A,B,C,D)$, which implies that its value is zero. So again we can conclude that $\lambda(A)$, $\lambda(B)$, $\lambda(C)$, and $\lambda(D)$ are coplanar.
\pp Now conversely suppose that $\lambda(A)$, $\lambda(B)$, $\lambda(C)$, and $\lambda(D)$ are coplanar. If all of $A$, $B$, $C$, and $D$ are collinear, then we are done. So suppose, without loss of generality, that $A$, $B$, and $C$ are not collinear. As above, let $(p,q)$ denote the center and $r$ the radius of the circumcircle of triangle $ABC$. Then $A$, $B$, and $C$ satisfy equation (1) above. Since $A$, $B$, and $C$ are {\it not} collinear, the corresponding three rows of $\Dscr(A,B,C,D)$ are linearly independent. But all four rows are linearly dependent, since the determinant is zero. So the last row can be expressed as a linear combination of the first three, and therefore point $D$ satisfies (1) as well, i.e., it is on the circle $ABC$.
\pp The above result shows that planar sections of the paraboloid of revolution $z=x^2+y^2$ project onto circles in the $x,y$-plane. The paraboloid is a surface that is convex upwards, and therefore, in a section of it with a plane, the part below the plane projects to the interior of the corresponding circle in the $x,y$-plane, and the part above the plane to the exterior. From this and the standard right-handed orientation convention for the sign of volumes the lemma follows. Notice that this establishes an interesting correspondence between circular queries in the plane and half-space queries in 3-space.}
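Lemma 8.1 translates directly into code. Below is an illustrative Python version (ours, not the paper's), which evaluates the sign of $\Dscr(A,B,C,D)$ after the standard reduction to a $3\times3$ determinant of coordinate differences, a column-operation argument that leaves the determinant unchanged:

```python
def in_circle(a, b, c, d):
    """InCircle(A,B,C,D): true iff D lies to the left of the circle
    through A, B, C traversed in the order A, B, C.  Computed as the
    sign of the determinant D(A,B,C,D) of lemma 8.1, reduced to a 3x3
    determinant of differences relative to D."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
         - (bx * bx + by * by) * (ax * cy - ay * cx)
         + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0

A, B, C = (0, 0), (1, 0), (0, 1)        # counterclockwise triangle;
                                        # circumcircle center (1/2, 1/2)
print(in_circle(A, B, C, (0.9, 0.9)))   # True: D inside the circle
print(in_circle(A, B, C, (2, 2)))       # False: D outside
print(in_circle(A, B, C, (1, 1)))       # False: four cocircular points
print(in_circle(C, B, A, (0.9, 0.9)))   # False: clockwise order flips the sign
```

The last line illustrates the transposition property of lemma 8.2: swapping two arguments negates the determinant.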
As a side note we remark that Ptolemy's theorem in Euclidean plane geometry does not lead to a useful implementation of the $\incircle$ test, as we always have
$$AB\times CD+BC\times AD \geq BD\times AC,$$
with equality only when the four points are cocircular. In fact, the quantity one obtains by rendering $AB\times CD+BC\times AD - BD\times AC$ radical-free is essentially the square of the determinant $\Dscr(A,B,C,D)$ above.
The $\incircle$ test possesses several interesting properties:
\lemma{8.2}{If $A$, $B$, $C$, $D$ are any four non-cocircular points in the plane, then transposing any adjacent pair in the predicate $\incircle(A,B,C,D)$ will change the value of the predicate from true to false, or vice versa. In particular, the boolean sequence
$$\halign{\ctr{$ # $}\cr
\incircle(A,B,C,D), \incircle(B,C,D,A)\cr
\incircle(C,D,A,B),\incircle(D,A,B,C)\cr}$$
is either T(rue), F(alse), T, F, or F, T, F, T.}
\proof{This is obvious from the properties of determinants.}
The last two lemmas show that in $\incircle(A,B,C,D)$ all four points play a symmetric role, even though from the definition $D$ seems to be special. If $A$, $B$, $C$, $D$ define a counterclockwise oriented quadrilateral and $\incircle(A,B,C,D)$ is true, then $\incircle(C,B,A,D)$ is false, so reversing the orientation flips the value of the predicate. Whenever we consider applying this test to a (rooted) quadrilateral in the sequel, we will take the quadrilateral to be counterclockwise oriented.
For a convex quadrilateral $ABCD$ there are two ways to triangulate it: we can draw either diagonal $AC$ or diagonal $BD$. By lemma 7.2 the sides $AB$, $BC$, $CD$, and $DA$ are always Delaunay edges. If $\incircle(A,B,C,D)$ is false, then circle $ABC$ is point-free, so the diagonal $AC$ completes the Delaunay triangulation for the four points $A$, $B$, $C$, $D$ (else the diagonal $BD$ does). More generally, for any collection of points, all the convex hull edges are Delaunay edges. For each non-hull Delaunay edge $BD$, if the two triangles $ABD$ and $BDC$ on its two sides form a convex quadrilateral, then $\incircle(A,B,C,D)$ is true. A converse of this also holds [\Lee]. This is commonly expressed in terms of the ``swapping rule,'' an implementation of which was presented in section 5. This rule states that if there exists a convex quadrilateral $ABCD$ that has been triangulated with diagonal $AC$, but $\incircle(A,B,C,D)$ is true, then this diagonal should be erased and diagonal $BD$ should be drawn. Success of the swapping rule on an edge ($AC$ in the previous sentence) is a witness for non-Delaunayhood. The above converse then states that a triangulation stationary under the swapping rule is the Delaunay triangulation. In fact, the swapping rule provides a new necessary and sufficient condition for the Delaunayhood of an edge, namely that the rule should fail for any quadrilateral defined by the endpoints of the edge and any two other points, one on each side.
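The swapping rule on a single convex quadrilateral can be sketched as follows (an illustrative Python fragment of our own; `delaunay_diagonal` is not a routine from the paper):

```python
def in_circle(a, b, c, d):
    # sign of the InCircle determinant of lemma 8.1, reduced to 3x3
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - by * cx)
          - (bx * bx + by * by) * (ax * cy - ay * cx)
          + (cx * cx + cy * cy) * (ax * by - ay * bx)) > 0

def delaunay_diagonal(a, b, c, d):
    """For a counterclockwise convex quadrilateral ABCD, pick the
    diagonal that the swapping rule leaves alone: AC if circle ABC
    is free of D, otherwise BD."""
    return "BD" if in_circle(a, b, c, d) else "AC"

A, B, C = (0, 0), (2, 0), (2, 1)
print(delaunay_diagonal(A, B, C, (0, 3)))      # AC: D outside circle ABC
print(delaunay_diagonal(A, B, C, (0.5, 1.3)))  # BD: D inside circle ABC
```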
A useful intuition about the $\incircle$ test comes from the following observation. Given two points $X$ and $Y$ in the plane, the set of circles passing through $X$ and $Y$ forms a one-parameter family. For example, we can think of these circles as the family $\left\{C_t\right\}$, where $t$ denotes the signed distance of the center of such a circle along the bisector of $XY$, measured from the midpoint of $XY$. Thus $C_{-\infty}$ denotes the circle corresponding to the half-plane to the left of the line $XY$, $C_0$ denotes the circle with diameter $XY$, and $C_{+\infty}$ denotes the circle corresponding to the half-plane to the right of $XY$. Note that the portion of these circles to the left of $XY$ strictly decreases (by proper inclusion) as $t$ increases, while the portion to the right of $XY$ strictly increases. See fig. 8.3. The next lemma uses this intuition to derive another property of the Delaunay triangulation that will be useful to us in the next section.
\capfig7cm{Figure 8.3. The set of circles passing through two given points.}
\lemma{8.3}{Let $A$ be a site and consider any line $\lscr$ through $A$. For convenience of terminology, assume that $\lscr$ is horizontal. Let $N_1, N_2, \ldotss, N_k$ denote the Delaunay neighbors of $A$ lying above $\lscr$, listed in counterclockwise order. If $X$ is any point of $\lscr$ to the right of $A$, let $\Gamma_i$ denote the portion of the circumcircle of $AXN_i$ that is above $\lscr$. Then the sequence $\left\{\Gamma_i\right\}$ is unimodal, in the sense that there is some $j$, $1\leq j\leq k$, such that for $1\leq i<j$ we have $\Gamma_i\supset\Gamma_{i+1}$, while for $j\leq i<k$ we have $\Gamma_i\subset\Gamma_{i+1}$.}
\proof{We first show that if $X_i$ denotes the rightmost intersection of the circumcircle of triangle $AN_iN_{i+1}$, $i = 1, 2, \ldotss, k-1$, with $\lscr$, then the sequence of points $\left\{X_i\right\}$ moves monotonically to the left.
\pp Consider, as in fig. 8.4, three consecutive Delaunay neighbors $N_i$, $N_{i+1}$, and $N_{i+2}$ of $A$. The point $N_{i+2}$ is outside the circle $AN_iN_{i+1}$ and points $N_i$ and $N_{i+2}$ are on opposite sides of $AN_{i+1}$. So to get to the circle $AN_{i+1}N_{i+2}$ from $AN_iN_{i+1}$, while always passing through $A$ and $N_{i+1}$, we must expand on the side of $N_{i+2}$, and therefore we must contract on the side of $N_i$, where $X_i$ and $X_{i+1}$ also lie. This proves our assertion.
\capfig5cm{Figure 8.4. A property of the neighbors of $A$.}
\pp To prove the lemma now, note that as long as the $X_i$ are to the right of $X$, then $X$ is inside the circumcircle of $AN_iN_{i+1}$, or equivalently, $N_{i+1}$ is inside the circumcircle of $AN_iX$. After the $X_i$ move to the left of $X$, then $N_{i+1}$ is outside the circumcircle of $AN_iX$. Thus the $\Gamma_i$ behave as stated.}
We will refer to lemma 8.3 as the {\it unimodality lemma}. Another useful property of the Delaunay triangulation that can be derived with the help of the $\incircle$ test is:
\lemma{8.4}{Let $A$ be a site and let $X$ and $Y$ denote any two distinct Delaunay neighbors of $A$. Then all other Delaunay neighbors of $A$ that lie inside the convex angle $\angle XAY$ are inside the circumcircle of triangle $AXY$, and all neighbors lying outside the angle are outside that same circle.}
\proof{Without loss of generality let us assume that in the convex angle $\angle XAY$ the ray $AY$ is counterclockwise of $AX$. Suppose the lemma is false inside the convex angle $\angle XAY$. Let $N$ be the Delaunay neighbor of $A$ outside the circle $AXY$ which is the first to be encountered as we sweep this angle in counterclockwise fashion. Let also $M$ denote the predecessor of $N$, as in fig. 8.5. Note that $N$ is also outside the circle $AMY$ since $X$, $M$, and $N$ are all on the same side of $AY$. Therefore $Y$ is inside the circle $AMN$, contradicting Delaunayhood. A similar argument proves the lemma outside $\angle XAY$.}
\capfig5cm{Figure 8.5. A property of the Delaunay diagram.}
\section{9. The divide and conquer algorithm.}
% Variables of the program
\def\lcand{\hbox{\it lcand}}
\def\rcand{\hbox{\it rcand}}
\def\basel{\hbox{\it basel}}
\def\cross{\hbox{\it cross}}
\def\ldi{\hbox{\it ldi}}
\def\rdi{\hbox{\it rdi}}
In this section we use the tools we have developed so far to describe, analyze, and prove correct a divide and conquer algorithm for computing the Delaunay triangulation of $n$ points in the plane. Topologically the quad-edge data structure gives us the dual for free, so by associating some relevant geometric information with our face nodes, e.g., the coordinates of the corresponding Voronoi vertices, we are simultaneously computing the Voronoi diagram as well. Our algorithm is the dual of that proposed by Shamos and Hoey [\ShamHo] and, like theirs, runs in time $O(n \lg n)$ and uses linear storage. In [\Lee] D. T. Lee proposes a dual algorithm which is similar to but more complex than ours. Unlike both of these, our algorithm need not compute straight line intersections unless the coordinates of Voronoi vertices are needed. The only geometric primitives used by our algorithm are the $\incircle$ test and the predicate $\CCW(A,B,C)$, which is true if the points $A$, $B$, and $C$ form a counterclockwise oriented triangle. The latter is a standard tool in geometric algorithms, and is equivalent to reporting on which side of a line a point lies. It is equivalent to $\incircle(A,B,C,D)$, for $D$ chosen as the barycenter of triangle $ABC$.
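The two primitives just named are both determinant signs, and their relationship is easy to verify numerically. The sketch below (illustrative Python, not the paper's code) implements $\CCW$ as a $2\times2$ determinant and checks its equivalence to $\incircle$ with the barycenter as fourth argument:

```python
def ccw(a, b, c):
    """CCW(A,B,C): true iff A, B, C form a counterclockwise triangle
    (sign of the 2x2 determinant of B-A and C-A)."""
    return ((b[0] - a[0]) * (c[1] - a[1])
          - (b[1] - a[1]) * (c[0] - a[0])) > 0

def in_circle(a, b, c, d):
    # the InCircle determinant of section 8, reduced to 3x3
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - by * cx)
          - (bx * bx + by * by) * (ax * cy - ay * cx)
          + (cx * cx + cy * cy) * (ax * by - ay * bx)) > 0

def barycenter(a, b, c):
    return ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)

# CCW(A,B,C) agrees with InCircle(A,B,C,barycenter(A,B,C)):
for tri in [((0, 0), (1, 0), (0, 1)),    # counterclockwise
            ((0, 0), (0, 1), (1, 0)),    # clockwise
            ((0, 0), (1, 1), (2, 2))]:   # collinear
    assert ccw(*tri) == in_circle(*tri, barycenter(*tri))
```

The barycenter always lies inside the triangle, hence inside its circumcircle, so the sign of $\incircle$ is then governed solely by the orientation of $A$, $B$, $C$; in the collinear case both predicates are false.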
As one might expect, in the divide and conquer algorithm we start by partitioning our points into two halves, the left half ($L$) and the right half ($R$), which are separated in the $x$-coordinate. We next recursively compute the Delaunay triangulation of each half. Finally we need to marry the two half triangulations into the Delaunay triangulation of the whole set.
We now elaborate on this brief description in stages. First of all it is advantageous to start out by sorting our points in increasing $x$-coordinate. When there are ties we resolve them by sorting in increasing $y$-coordinate and throwing away duplicate points. This makes all future splittings constant time operations. After splitting in the middle and recursively triangulating $L$ and $R$, we must consider the merge step. Note that this may involve deleting some $L-L$ or $R-R$ edges and will certainly require adding some $L-R$ or {\cross} edges. By lemma 7.3, however, no new $L-L$ or $R-R$ edges will be added.
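The preprocessing step just described can be sketched as follows (the helper name is our own). Python tuples compare lexicographically, so a plain sort orders by $x$ and breaks ties by $y$, and building a set first discards duplicate points.

```python
def preprocess(sites):
    # sort by x, resolve ties by y, and throw away duplicates
    return sorted(set(sites))

sites = preprocess([(2, 1), (0, 3), (2, 1), (0, 0), (5, 2)])
half = len(sites) // 2
L, R = sites[:half], sites[half:]   # the x-separated halves
```

In the paper's setting each split is just index arithmetic on the sorted array, which is what makes all future splittings constant-time operations.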
What is the structure of the cross edges? All these edges must cross a line parallel to the $y$ axis and placed at the splitting $x$ value. This establishes a linear ordering of the cross edges, so we can talk about successive cross edges, the bottom-most cross edge, etc. The algorithm we are about to present will produce the cross edges incrementally, in ascending $y$-order. See fig. 9.1.
\capfig8.5cm{Figure 9.1. The structure of the $L$--$R$ edges.}
\lemma{9.1}{Any two cross edges adjacent in the $y$-ordering share a common vertex. The third side of the triangle they define is either an $L-L$ or an $R-R$ edge.}
\proof{Any two consecutive intersections of a triangulation with a straight line must belong to the same triangular face. Therefore the two cross edges in question have one endpoint in common, and the third side of the triangle is fully to one side or the other of the vertical divider.}
Lemma 9.1 has the following important consequence. Let us call the current cross edge the base, and write its directed variant going from right to left as {\basel}. The successor to {\basel} will either be an edge going from the left end-point of {\basel} to one of the $R$-neighbors of the right end-point lying ``above'' {\basel}, or symmetrically, it will be an edge from the right end-point of {\basel} to one of the $L$-neighbors of the left end-point lying ``above'' {\basel}. In the program below edges from the left end-point of {\basel} to its candidate $L$-neighbors will be held in the variable {\lcand}, and their symmetric counterparts in {\rcand}. See fig. 9.2.
\capfig9cm{Figure 9.2. The variables \prg{lcand}, \prg{rcand}, and \prg{basel}.}
\vfill
{\def\textwidth{\pagewidth}
\capfig11cm{Figure 9.3. The rising bubble.}
}
\eject
We can intuitively view what happens by imagining that a circular bubble weaves its way in the space between $L$ and $R$ and in so doing gives us the cross edges. Inductively we have a point-free circle circumscribing the triangle defined by {\basel} and the previous cross edge. Consider continuously transforming this circle into other circles having {\basel} as a chord, but lying further into the half-plane above {\basel}. As we remarked, there is only a single degree of freedom, as the center of the circle is constrained to lie on the bisector of {\basel}. See fig. 9.3. Our circles will be point-free for a while, but unless {\basel} is the upper common tangent of $L$ and $R$, at some point the circumference of our transforming circle will encounter a new point, belonging either to $L$ or $R$, giving rise to a new triangle with a point-free circumcircle. The new $L-R$ edge of this triangle is the next cross edge determined by the body of the main loop below.
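The $\incircle$ test that detects when the transforming circle first hits a new point can be realized, in one standard formulation, as the sign of the determinant with rows $(x, y, x^2+y^2, 1)$. The Python sketch below (names and representation are ours) reduces it to a $3\times3$ determinant by translating the query point to the origin.

```python
def in_circle(a, b, c, d):
    """True iff d lies strictly inside the circle through a, b, c,
    where a, b, c are given in counterclockwise order.  This is the
    sign of the 4x4 determinant with rows (x, y, x^2 + y^2, 1),
    computed after translating d to the origin."""
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    return (ax * (by * cw - bw * cy)
            - ay * (bx * cw - bw * cx)
            + aw * (bx * cy - by * cx)) > 0
```

For instance, the circle through $(0,0)$, $(1,0)$, $(0,1)$ contains $(0.5, 0.5)$ but not $(2,2)$; with floating-point coordinates a robust implementation would of course need more care with the sign of a nearly-zero determinant.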
In more detail, edge {\lcand} is computed so as to have as destination the first $L$ point to be encountered in this process, and {\rcand} the first $R$ point. A final test chooses the point among these two that would be encountered first. We start the cross edge iteration by computing the lower common tangent of $L$ and $R$, which defines the first cross edge. The edge {\ldi} is the clockwise convex hull edge out of the rightmost $L$ point, and {\rdi} is the counterclockwise convex hull edge out of the leftmost $R$ point. We assume these are returned by the recursive calls of the triangulation procedure to $L$ and $R$. In the program below prose in $\{\}$ is comments; certain portions of code that are just symmetric variants of others shown are elided and are indicated by ``\prg{. . .}''.
\vfill\eject
{\def\progwidth{\pagewidth}
\prog
\pcomm{create a first cross edge ({\basel}) by computing the lower common tangent of $L$ and $R$}
DO WHILE CCW[ldi\porg, ldi\pdest, rdi\porg] DO ldi ← ldi\plnext OD;
IF CCW[rdi\pdest, rdi\porg, ldi\porg] THEN rdi ← rdi.Rprev ELSE EXIT;
OD;
\null
basel \pgets Connect[rdi\psym, ldi, left];
\null
\pcomm{initialize {\lcand} to the counterclockwise first edge incident to {\basel$\dest$} and above {\basel}}
\pcomm{symmetrically for {\rcand}}
lcand \pgets basel\psym\ponext; rcand \pgets basel\poprev;
\null
\pcomm{let \prg{ValidL} and \prg{ValidR} be procedures which test whether {\lcand} and {\rcand} are above {\basel}}
ValidL[l] : CCW[basel\porg, basel\pdest, l\pdest];
ValidR[r] : CCW[basel\porg, basel\pdest, r\pdest];
\null
\pcomm{this is the merge iteration}
DO
\pcomm{advance {\lcand} counterclockwise, deleting the old {\lcand} edge, until the $\incircle$ test fails}
IF ValidL[lcand] AND ValidL[lcand\ponext] THEN
WHILE InCircle[lcand\pdest, lcand\ponext\pdest, lcand\porg, basel\porg] DO
t ← lcand\ponext;
DeleteEdge[lcand];
lcand ← t;
OD;
\null
\pcomm{symmetrically, advance {\rcand} clockwise}
. . .
\null
\pcomm{if both {\lcand} and {\rcand} are invalid then {\basel} is the upper common tangent}
IF NOT ValidL[lcand] AND NOT ValidR[rcand] THEN EXIT;
\null
\pcomm{connect to either {\lcand} or {\rcand}}
\pcomm{if both are valid, then choose the appropriate one using the $\incircle$ test}
\pcomm{if {\lcand} is the winner then make the next cross edge close the triangle with {\lcand} and {\basel},}
\pcomm{else make the next cross edge close the triangle with {\rcand} and {\basel}}
\pcomm{advance {\basel} and reinitialize {\lcand} or {\rcand}}
IF NOT ValidL[lcand] OR
(ValidR[rcand] AND InCircle[lcand\pdest, lcand\porg, rcand\porg, rcand\pdest])
THEN BEGIN \pcomm{connect to the right}
basel \pgets Connect[rcand, basel\psym, left];
rcand \pgets basel\psym\plnext;
END
ELSE . . . \pcomm{connect to the left}
OD
\endprog}
\vfill
We now elaborate on the above program. Recall first that the number of vertices, edges, and faces in a triangulation are all linearly related. Also, the lower common tangent computation takes linear time as either {\ldi} or {\rdi} advances at each step. What about the cost of the {\lcand} computation? We can account for this loop by charging every iteration to the edge being deleted. Similarly, iterations of the {\rcand} loop can be charged to deleted edges. The rest of the body of the main loop requires constant time and may be charged to the $L-L$ or $R-R$ edge closing the next triangle. This shows that the overall cost of the merge pass is linear in the size of $L$ and $R$.
We now formally state the lemmas that prove the correctness of the above algorithm.
\lemma{9.2}{When the {\lcand} iteration stops, the circumcircle of the triangle defined by {\lcand} and {\basel} is free of other $L$ points. Furthermore all deleted edges are not in the final Delaunay triangulation.}
\eject\null\vfill
\proof{Let $A$ and $X$ denote respectively the left and right endpoints of {\basel}, and, as in lemma 8.3, let $N←i$, $1\le i\le k$, denote the Delaunay neighbors of $A$ in $L$ that are above {\basel}. We prove our lemma inductively on the sequence of cross edges discovered.
\pp Assuming that the algorithm has worked correctly up to now, then the circumcircle of the triangle defined by {\basel} and the previous cross edge is the circumcircle of a Delaunay triangle and therefore it is point-free. As in our earlier discussion, suppose that we ``push'' this circle upwards while always making it pass through $A$ and $X$. At some point it will encounter a new $L$ point $N$ for the first time. This point will be above {\basel}. We wish to show that $N$ is the point that the {\lcand} iteration will discover.
\eject
\pp To see this note first of all that $N$ must be a Delaunay neighbor of $A$, since our circle is a Delaunayhood witness. The {\lcand} iteration proceeds until the first time $X$ falls outside the circle $AN←iN←{i+1}$, as shown in the code. By the unimodality lemma, this gives us the neighbor $N=N←i$ which determines the circle $AXN←i$ with the smallest extent above {\basel}. This circle now forms the basis for the next iteration through the main loop.
\pp The inclusion of $X$ in the circle $AN←mN←{m+1}$ for $m = 1, 2, \ldotss, i-1$ establishes that $AN←m$ is not a Delaunay edge in the final triangulation, since each circle with $AN←m$ as chord contains either $N←{m+1}$ or $X$ in its interior. Thus the edge deletions performed are valid. It is easy to check that the cross edges drawn, together with the remaining edges of $L$ and $R$ define a triangulation of the $n$ sites.
\pp This is essentially the proof, modulo expending some care on certain boundary cases. We must first of all show that the {\lcand} iteration, if started at all, never gets to neighbors on the bottom side of {\basel}. For if the iteration was invoked, then there must be at least two neighbors above {\basel}. We claim that the moment {$\lcand\ \onext$} crosses to the bottom of {\basel}, the {\fx WHILE} $\incircle$ test fails. See fig. 9.4.
\capfig6.5cm{Figure 9.4. End of \prg{lcand} loop.}
\pp In this case we had that $X$ was inside the circle $AN←{k-1}N←k$. If the angle $\angle N←kAN←{k+1}$ is convex then $X$ is obviously outside the circle $AN←kN←{k+1}$, so the $\incircle$ test fails. If the angle $\angle N←kAN←{k+1}$ is concave, then two things happen. First of all the circle $AN←kN←{k+1}$ reverses orientation, and secondly, since $N←{k+1}$ is outside the circle $AN←{k-1}N←k$ but on the same side of $AN←k$ as $N←{k-1}$ and $X$, $X$ now falls inside the circle $AN←kN←{k+1}$. So in this case the $\incircle$ test fails also.
\pp Finally note that the main loop starts with our ascending circle being the halfplane below the lower outer common tangent of $L$ and $R$, and ends with that circle being the halfplane above the upper such tangent.}
A symmetric statement holds for the {\rcand} iteration. Furthermore, the final choice between {\lcand} and {\rcand} is again done by the $\incircle$ test, namely {\rcand} will be chosen if \prg{InCircle[lcand\pdest, lcand\porg, rcand\porg, rcand\pdest]} is true. It now follows that:
\lemma{9.3}{The triangle defined by {\basel} and the next cross edge as computed by the above algorithm has a circumcircle free of both $L$ and $R$ points, and thus is a Delaunay triangle. Furthermore, after the completion of the cross edge iteration, all remaining $L$ or $R$ triangles are also Delaunay triangles for the full configuration.}
\proof{The first statement is obvious from the above discussion. The second statement is most easily proven by showing that each remaining $L-L$ or $R-R$ edge is stationary under the swapping rule. The only possible suspect edges are those bordering triangles containing cross edges. It follows that these edges were {\lcand} or {\rcand} at some point in the algorithm and were not deleted. So the $\incircle$ test of the inner iterations failed there, and this establishes that these edges are safe under the swapping rule.}
These lemmas complete the proof that the above algorithm correctly computes the Delaunay triangulation. The algorithm will work with cocircular sites and other degenerate cases. When both {\lcand} and {\rcand} are equally good, it arbitrarily favors {\rcand}. In practice floating-point errors in the computation of the $\incircle$ test usually interfere with any effort to handle degenerate cases in a consistent way.
It is worthwhile to comment on a way to view this algorithm as operating in three-dimensional space, on the lifted images of our sites under the map $\lambda$ discussed in the proof of lemma 8.1. These lifted images are points on a convex surface, and therefore they define a convex polyhedron, corresponding to their convex hull. The discussion in the proof of lemma 8.1 has also established that the ``downwards'' looking faces of this polyhedron are in a one-to-one correspondence with the Delaunay faces of the sites. The upwards facing faces of the polyhedron correspond to triangles of our collection of sites whose circumcircles enclose all the sites. These are of course the faces of the dual of the {\it furthest-point} Voronoi diagram [\Sham] for our collection of sites. Note that when our cross edge iteration ends, the ascending circle is the half-plane above the upper common tangent of $L$ and $R$. The half-plane below this tangent is a circle containing all the sites. If we now let this circle contract until it hits the first site, while always having as chord the last cross edge, we will have produced the next cross edge of the dual of the furthest-point Voronoi diagram. In fact, if our cross edge iteration is left to continue, but by using the furthest-point versions of $L$ and $R$ and reversing the sense of the $\incircle$ test, it will compute all the cross edges of this dual and will cycle back to the lower common tangent of $L$ and $R$.
From the above discussion it is apparent that our divide-and-conquer algorithm is computing the convex hull of the lifted images of the sites. It is in fact exactly the Preparata-Hong [\PH] algorithm for computing the convex hull of $n$ points in three dimensions. If the $\incircle$ test is replaced by a ``negative volume'' test, as obtained by substituting in $\Dscr(A,B,C,D)$ the third column by the $z$ coordinates of the points, then the code given above implements the Preparata-Hong algorithm! The reader may have fun verifying that our expanding circles passing through a chord become rotating supporting planes around the lifted image of the chord. Thus we are computing the ``sleeve'' discussed in [\PH].
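The correspondence just described is easy to check numerically. The Python sketch below (helper names are ours) lifts sites by $\lambda$ and evaluates the signed volume determinant of the lifted tetrahedron; with the first three points counterclockwise in the plane, the determinant is negative exactly when the fourth point lies inside their circumcircle, i.e. exactly when its lifted image lies below the plane of the other three lifted images.

```python
def lift(p):
    # the map lambda of lemma 8.1: (x, y) -> (x, y, x^2 + y^2)
    return (p[0], p[1], p[0] ** 2 + p[1] ** 2)

def lifted_volume(a, b, c, d):
    """Signed volume determinant of the tetrahedron on the lifted
    images of a, b, c, d: det of the rows lift(b)-lift(a),
    lift(c)-lift(a), lift(d)-lift(a)."""
    u, v, w = (tuple(x - y for x, y in zip(lift(p), lift(a)))
               for p in (b, c, d))
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))
```

For the counterclockwise triangle $(0,0)$, $(1,0)$, $(0,1)$, the interior point $(0.5, 0.5)$ of its circumcircle gives a negative volume, while the exterior point $(2,2)$ gives a positive one.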
\section{10. Incremental techniques}
The algorithm of the previous section assumes that all points are known at the beginning of time. For many applications we are interested in dynamic algorithms, which allow us to update our diagrams as new points are added or deleted. The machinery we have developed so far also yields simple and elegant insertion algorithms. Deletions present problems.
It will simplify our discussion to assume that our points are always within some large convex polygon (say a triangle) whose vertices are considered to be among the given points. Thus the entire interior of this polygon will have been triangulated, and when a new point is given, we can be sure that it will fall within one of the existing triangular faces. The algorithm for updating the Delaunay triangulation starts out by locating the new point in one of the triangular faces of the subdivision. This can be done by Kirkpatrick's method in time $O(\lg n)$ [\Kirki]. Our concern is with how the triangulation is to be updated.
\lemma{10.1}{The edges connecting the new point to vertices of its enclosing triangle are Delaunay edges.}
\proof{From fig. 10.1 it is clear that we can always find a circle contained inside the circumcircle of a triangle $ABC$, passing through one of $A$, $B$, or $C$ and through a specified point $X$ interior to that circumcircle. Such a circle is point-free.}
\capfig6cm{Figure 10.1. A circle proving the ``Delaunayhood'' of $XC$.}
However, the edges of the enclosing triangle are themselves suspect. We can remove suspect edges by the swapping rule discussed in section 8. This gives rise to a very simple algorithm, given in high-level form below.
{\prog
place the three edges of the enclosing triangle
on a stack in ccw order;
\null
WHILE the stack is nonempty DO
BEGIN
\pcomm{each edge on the stack forms a triangle with the new point}
remove an edge from the stack;
consider the quadrilateral whose diagonal
is this edge;
if this edge passes the swapping rule test,
then forget about it,
else apply the swapping rule and stack,
in ccw order, the two edges not
incident to the new point;
OD;
\endprog}
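The loop above can be sketched compactly if we trade efficiency for brevity: the Python sketch below uses a brute-force point location in place of Kirkpatrick's structure and a plain set of vertex triples in place of the quad-edge representation. All names are our own, the new point is assumed to lie strictly inside some triangle, and degenerate positions are not handled.

```python
def ccw(a, b, c):
    # True iff a, b, c are in counterclockwise order
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0

def in_circle(a, b, c, d):
    # a, b, c counterclockwise; True iff d is strictly inside circle abc
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    return (ax * (by * cw - bw * cy)
            - ay * (bx * cw - bw * cx)
            + aw * (bx * cy - by * cx)) > 0

def insert_site(tris, pts, x):
    # tris: set of frozensets of point indices; pts: coordinates; x: new index
    # locate the enclosing triangle by brute force
    for t in tris:
        a, b, c = t
        if not ccw(pts[a], pts[b], pts[c]):
            b, c = c, b
        if (ccw(pts[a], pts[b], pts[x]) and ccw(pts[b], pts[c], pts[x])
                and ccw(pts[c], pts[a], pts[x])):
            break
    # connect x to the corners and stack the triangle's edges in ccw order
    tris.remove(t)
    tris |= {frozenset((x, a, b)), frozenset((x, b, c)), frozenset((x, c, a))}
    stack = [(a, b), (b, c), (c, a)]
    while stack:
        p, q = stack.pop()
        # the triangle across edge pq from x, if any
        other = [tt for tt in tris if p in tt and q in tt and x not in tt]
        if not other:
            continue                 # pq is on the outer boundary: forget it
        m = next(v for v in other[0] if v not in (p, q))
        pp, qq = (p, q) if ccw(pts[p], pts[q], pts[m]) else (q, p)
        if in_circle(pts[pp], pts[qq], pts[m], pts[x]):
            # swap rule: replace diagonal pq of quadrilateral xpmq by xm
            tris -= {frozenset((x, p, q)), other[0]}
            tris |= {frozenset((x, p, m)), frozenset((x, m, q))}
            stack += [(p, m), (m, q)]
        # else: the edge passes the test and is forgotten
    return tris
```

Starting from a single enclosing triangle and inserting sites one by one reproduces the behavior argued for in the lemmas below; a serious implementation would of course keep the quad-edge structure and the $O(\lg n)$ location step.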
At any one time the set of triangles with the new point $X$ as a vertex form a star-shaped polygon around $X$. This is because the polygon can only get larger with each application of the swap rule on a bounding edge, and if it does grow, then the two new stacked sides are in the angular sector with vertex $X$ and delimited by the deleted edge. Furthermore, because we stack edges in a consistent order, the stacked edges always form a contiguous section of the perimeter of that polygon, with the rest of the perimeter consisting of the forgotten edges.
\lemma{10.2}{All edges incident to the new point introduced during applications of the swap rule in the above algorithm are Delaunay edges. So is each forgotten edge.}
\proof{Let $X$ be the new point and suppose that the swap rule was just applied to quadrilateral $XLMN$ where it replaced diagonal $LN$ with diagonal $XM$, as in fig. 10.2. Then $X$ must be interior to the circle $LMN$ and by the same argument as in the proof of lemma 10.1 it follows that $XM$ is a Delaunay edge. If the swap rule failed on $XLMN$, then $X$ is outside the circle $LMN$, and so edge $LN$ is Delaunay and can be forgotten.}
\lemma{10.3}{No edge is ever stacked twice.}
\proof{One of two things can happen at each application of the swapping rule. If the swap succeeds, then the polygon grows past the old bounding edge, so that edge can never be on its boundary again. If it fails, then that bounding edge is frozen and the algorithm will never again touch any edge in the angular sector with vertex $X$ and delimited by that edge.}
\capfig8cm{Figure 10.2. Proving correctness of the incremental method.}
\lemma{10.4}{When the above loop terminates the swap test fails everywhere in the triangulation.}
\proof{From lemma 10.2 we know that all edges leading to the new point $X$ and introduced during the above algorithm are Delaunay edges. The only other edges bounding new triangles are the edges of the star-shaped polygon discussed before lemma 10.2. At the termination of the loop the stack is empty and so all the bounding edges have been forgotten at some point. Thus there are no suspect edges left.}
The above lemmas imply that this loop correctly computes the final Delaunay triangulation, and that it does so in at most $O(n)$ time. More precisely, if $k$ edges need to be added or deleted to update the Delaunay structure, then our algorithm works in time $O(k)$, since every edge the algorithm adds must be present in the updated structure, and every edge it deletes cannot remain in it.
We saw that the forgotten edges will always define a path around the new point, which will eventually close. If $e$ is an edge on the path, then the next edge is $e\ \lnext\ \oprev$. This observation gives us a way to implement the above algorithm without any auxiliary storage, such as the stack, by a method that is similar to that used in the previous section. Here {\basel} starts out (arbitrarily) as one of the edges connecting the new point to one of the vertices of its enclosing triangle. By convention we think of the new point as being the left end point of {\basel}. Then {\lcand} will always be invalid (non-existent, in fact) and we will consider for {\rcand} the various edges out of the other endpoint of {\basel}. The successively found {\rcand}$\,$s, as we proceed counterclockwise around the new point, will correspond to the forgotten edges of the previous algorithm. Note, however, that the incremental algorithm ``connects ahead'' after each deletion, while the algorithm of the previous section would connect all cross edges in strict counterclockwise order.
Something quite general is happening here. We have a method with which, given any two Delaunay triangulations $L$ and $R$ (not necessarily linearly separated), and a cross edge between them, we are able to find the next cross edge (if one exists) on a specified side of the original one. Thus we have another way to look at Kirkpatrick's [\Kirkii] linear time merge of two arbitrary Voronoi subdivisions.
The above arguments show that it is possible to insert a new site into the Delaunay structure in total time $O(k)$, if $k$ updates need to be made. Unfortunately we know of no $O(k)$ algorithm for handling the deletion of a site that leaves an untriangulated face of $k$ sides. Our best algorithm has asymptotic complexity $O(k\lg k)$, which in the worst case $k=\Theta(n)$ is as bad as rebuilding the subdivision from scratch. We do not know of a linear algorithm even if we assume that the deletion of the site leaves a convex face. We regard the handling of deletions as the major open problem in this area.
\section{11. Conclusions.}
In this paper we have presented a new data structure for planar subdivisions which simultaneously represents the subdivision, its dual, and its mirror image. Our quad-edge structure is both general (it works for subdivisions on any two-dimensional manifold) and space efficient. We have shown that two topological operations, both simple to implement, suffice to build and dismantle any such structure.
We have also shown how, by using the quad-edge structure and the $\incircle$ primitive, we can get compact and efficient Voronoi/Delaunay algorithms. The $\incircle$ test is shown to be of value both for implementing and reasoning about such algorithms. The code for these algorithms is sufficiently simple that we have practically given all of it in this paper.
\section{12. References.}
\ref [\Baum] {Baumgart, B. G. {\it A Polyhedron Representation for Computer Vision.} AFIPS Conference Proceedings, Vol. 44, 1975 National Computer Conference, pp. 589--596.}
\ref [\Braid] {Braid, I. C., Hillyard, R. C., and Stroud, I. A. {\it Stepwise construction of Polyhedra in Geometric Modeling}; in Brodlie, K. W. (ed.), {\it Mathematical Methods in Computer Graphics and Design}. Academic Press, London, 1980, pp. 123--141.}
\ref [\Even] {Even, S. {\it Algorithmic Combinatorics}. Macmillan, N. Y., 1973.}
\ref [\GrSi] {Green, P. J., and Sibson, R. {\it Computing Dirichlet Tessellations in the Plane.} Computer Journal, Vol. 21, no. 2, 1977, pp. 168--173.}
\ref [\Har] {Harary, F. {\it Graph Theory}. Addison-Wesley, 1972, p. 105.}
\ref [\IyKa] {Iyanaga, S., and Kawada, Y. {\it Encyclopedic Dictionary of Mathematics} (2nd. ed.). MIT Press, Cambridge, Mass., 1968.}
\ref [\Kirki] {Kirkpatrick, D. {\it Optimal Search in Planar Subdivisions}. Technical Report 81-13, Department of Computer Science, University of British Columbia, 1981.}
\ref [\Kirkii] {Kirkpatrick, D. {\it Efficient Computation of Continuous Skeletons}. Proceedings of the 20th Annual Symposium on Foundations of Computer Science, 1979, pp. 18--27.}
\ref [\Knuth] {Knuth, D. E. {\it The Art of Computer Programming}, Vol. I: Fundamental Algorithms (2nd. ed.). Addison-Wesley, 1975.}
\ref [\Lee] {Lee, D. T. {\it Proximity and Reachability in the Plane}. Report R-831, Coordinated Science Laboratory, University of Illinois, Urbana, 1978.}
\ref [\MaSu] {Mantyla, M. J., and Sulonen, R. {\it GWB: a Solid Modeler with Euler Operators}. IEEE Computer Graphics and Applications, Vol. 2 N. 5 (September 1982) pp. 17--31.}
\ref [\MuPr] {Muller, D. E., and Preparata, F. P. {\it Finding the Intersection of Two Convex Polyhedra}. Theoretical Computer Science Vol. 7, 1978, pp. 217--236.}
\ref [\PH] {Preparata, F. P., and Hong, S. J. {\it Convex Hulls of Finite Sets of Points in Two and Three Dimensions}. CACM, Vol. 20, 1977, pp. 87--93.}
\ref [\Sham] {Shamos, M. I. {\it Computational Geometry}. Ph. D. dissertation, Yale University, 1977.}
\ref [\ShamHo] {Shamos, M. I., and Hoey, D. {\it Closest-Point Problems}. 16th Annual Symposium on Foundations of Computer Science, 1975, pp. 151--162.}
\vskip 24pt plus 2pt minus 2pt
\hrule
\vskip 4pt
\vfill\eject\end