\input CS445Hdr.tex
\def\rtitle{$\null$}
\def\ltitle{$\null$}
\def\file{DLNA.tex of November 18, 1982 10:47 AM - Guibas}
\def\header{Primitives for Voronoi diagrams}
\def\InCircle{\hbox{\it InCircle}}
\def\lcand{\hbox{\it lcand}}
\def\rcand{\hbox{\it rcand}}
\def\basel{\hbox{\it basel}}
\def\CCW{\hbox{\it CCW}}
\def\cross{\hbox{\it cross}}
\def\ldi{\hbox{\it ldi}}
\def\rdi{\hbox{\it rdi}}
\def\quadedge{quad edge}

\def\sym{\,\hbox{\it Sym}\,}
\def\flip{\,\hbox{\it Flip}\,}
\def\rot{\,\hbox{\it Rot}\,}
\def\onext{\,\hbox{\it Onext}\,}
\def\oprev{\,\hbox{\it Oprev}\,}
\def\dnext{\,\hbox{\it Dnext}\,}
\def\dprev{\,\hbox{\it Dprev}\,}
\def\lnext{\,\hbox{\it Lnext}\,}
\def\lprev{\,\hbox{\it Lprev}\,}
\def\rnext{\,\hbox{\it Rnext}\,}
\def\rprev{\,\hbox{\it Rprev}\,}
\def\lef{\,\hbox{\it Left}\,}
\def\rif{\,\hbox{\it Right}\,}
\def\org{\,\hbox{\it Org}\,}
\def\dest{\,\hbox{\it Dest}\,}

\def\psym{\hbox{\fx .Sym}}
\def\pflip{\hbox{\fx .Flip}}
\def\prot{\hbox{\fx .Rot}}
\def\ponext{\hbox{\fx .Onext}}
\def\poprev{\hbox{\fx .Oprev}}
\def\pdnext{\hbox{\fx .Dnext}}
\def\pdprev{\hbox{\fx .Dprev}}
\def\plnext{\hbox{\fx .Lnext}}
\def\plprev{\hbox{\fx .Lprev}}
\def\prnext{\hbox{\fx .Rnext}}
\def\prprev{\hbox{\fx .Rprev}}
\def\plef{\hbox{\fx .Left}}
\def\prif{\hbox{\fx .Right}}
\def\pnext{\hbox{\fx .Next}}
\def\pdata{\hbox{\fx .Data}}
\def\porg{\hbox{\fx .Org}}
\def\pdest{\hbox{\fx .Dest}}

\def\defer#1{}

$\null$\par
\vskip 14pt

\ctrline{\hbf Primitives for the Manipulation of Planar Subdivisions} \vskip 2pt
\ctrline{\hbf and the Computation of Voronoi Diagrams}
\vskip 4pt
\ctrline{\vbox{\hrule width 3truein}}
\vskip 4pt
\ctrline{\bf Extended Abstract}
\vskip 0pt
\ctrline{\vbox{\hrule width 3truein}}
\vskip 10pt
\ctrline{\it Leo J. Guibas and Jorge Stolfi}
\vskip 2pt
\ctrline{\it Xerox PARC and Stanford University}
\vskip 10pt

%reference macros

\def\Sham{Sh}
\def\Kirk{Kir}
\def\ShamHo{ShHo}
\def \GrSi{GrSi}
\def\Baum{Bau}
\def\Braid{Bra}
\def\IyKa{IyKa}
\def\MuPr{MuPr}
\def\MaSu{MaSu}
\def\Knuth{Kn}
\def\Even{Ev}
\def\Lee{Lee}
\def\Har{Har}

\ctrline{\hbox par 350pt{We discuss the following problem: given $n$ points in the plane (the ``sites''), and an arbitrary query point $q$, find the site that is closest to $q$. This problem can be solved by constructing the Voronoi diagram of the given sites, and then locating the query point in one of its regions. We give two algorithms, one that constructs the Voronoi diagram in $O(n\lg n)$ time, and another that inserts a new site in $O(n)$ time. Both are based on the use of the Voronoi dual, the Delaunay triangulation, and are simple enough to be of practical value. The simplicity of both algorithms can be attributed to the separation of the geometrical and topological aspects of the problem, and to the use of two simple but powerful primitives, a geometric predicate and an operator for manipulating the topology of the diagram. The topology is represented by a new data structure for generalized diagrams, that is, embeddings of graphs in two-dimensional manifolds. This structure represents simultaneously an embedding, its dual, and its mirror-image. Furthermore, just two operators are sufficient for building and modifying arbitrary diagrams.}}\par
\sect{0. Introduction.}
One of the fundamental data structures of computational geometry is the Voronoi diagram. This diagram arises from consideration of the following natural problem. Let $n$ points in the plane be given, called {\it sites}. We wish to preprocess them into a data structure, so that given a new query point $q$, we can efficiently locate the nearest neighbor of $q$ among the sites. The $n$ sites in fact partition the plane into a collection of $n$ regions, each associated with one of the sites. If region $P$ is associated with site $p$, then $P$ is the locus of all points in the plane closer to $p$ than to any of the other $n-1$ sites. This partition is known as the {\it Voronoi diagram} (or the {\it Dirichlet tessellation}) determined by the given sites.
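For concreteness, the defining property above already yields a brute-force answer to the query: scan all $n$ sites and keep the closest one. The sketch below (in Python; the function name is our own) is the $O(n)$-per-query baseline that the Voronoi-based approach improves upon.

```python
def nearest_site(sites, q):
    """Return the site p minimizing the distance from q to p; q then lies
    in (or on the boundary of) the Voronoi region of p."""
    return min(sites, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

sites = [(0, 0), (4, 0), (0, 4), (5, 5)]
# (1, 1) is closer to (0, 0) than to any other site, so it lies in
# the Voronoi region of (0, 0).
assert nearest_site(sites, (1, 1)) == (0, 0)
assert nearest_site(sites, (4, 1)) == (4, 0)
```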

The closest site problem can therefore be solved by constructing the Voronoi diagram, and then locating the query point in it. Using the best algorithms currently available, the Voronoi diagram of $n$ points can be computed in $O(n\lg n)$ time and stored in $O(n)$ space; these bounds have been shown to be optimal in the worst case [\Sham]. Once we have the Voronoi diagram, we can construct in additional linear time a structure with which we can do point location in a planar subdivision in $O(\lg n)$ time [\Kirk].

Shamos [\Sham] first pointed out that the Voronoi diagram can be used as a powerful tool to give efficient algorithms for a wide variety of other geometric problems. Given the Voronoi diagram, we can compute in linear time the closest pair of sites, or the closest neighbor of each site, or the Euclidean minimum spanning tree of the $n$ sites, or the largest point-free circle with center inside their convex hull, etc. Several of these problems are known to require $\Omega(n\lg n)$ time in the worst case, so these Voronoi-based algorithms are asymptotically optimal.

The complexity of the $O(n\lg n)$ Voronoi algorithms that can be found in the literature is a serious barrier to their widespread utilization. To the authors' knowledge, they have so far never been used in any of the significant practical applications of closest-point problems in statistics, operations research, geography, and other areas. In every case the authors of those programs chose to use asymptotically slower $O(n↑2)$ algorithms, which were much simpler to code and almost certainly faster for the ranges of $n$ of practical interest. Furthermore, the presentation of Voronoi algorithms in the literature has often been insufficiently precise. Authors typically confine themselves to a very high-level description of their algorithms. As with many geometric problems, difficulties can arise in the implementation when degeneracies occur, as for example when three of the given points happen to be
collinear.\par
In this paper we present a novel way of looking at the standard Voronoi computation techniques, such as the divide and conquer [\ShamHo] and incremental [\GrSi] methods, that results in Voronoi algorithms that are very concise and substantially easier to read, implement and verify. One of the main reasons for this simplicity is that we work with the duals of the Voronoi diagrams, which are known as {\it Delaunay triangulations}, rather than with the diagrams themselves. The other major reason is the clean separation that we are able to make between topological and geometrical aspects of the problem. In sections 6 through 10 we show that the hardest part of constructing a Voronoi diagram or Delaunay triangulation is the determination of its topological structure, that is, the incidence relations between vertices, edges, and faces. Once the topological properties of the diagram are known, the geometrical ones (coordinates, angles, lengths, etc.) can be computed in time linear in the size of the diagram.

Our algorithms are built using essentially two primitives: a geometric predicate, and a topological operator for manipulating the structure of the diagrams. The geometrical primitive, which we call the {\InCircle} test, encapsulates the essential geometric information that determines the topological structure of the Voronoi diagram, and is a powerful tool not only in the coding of the algorithms but also in proving their correctness. As evidence for its importance, we show that it possesses many interesting properties, and can be defined in a number of equivalent ways.\par
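The precise definition of {\InCircle} is given later in the paper; for orientation only, we sketch here the determinant test commonly used to realize such a predicate (the concrete formulation below is an assumption of ours, not a definition from this section). Exact integer arithmetic keeps the degenerate (cocircular) case unambiguous.

```python
def in_circle(a, b, c, d):
    """Sign of the incircle determinant (a common realization of such a
    predicate, assumed here for illustration): for a, b, c in
    counterclockwise order, the result is > 0 iff d lies strictly inside
    the circle through a, b, c, and = 0 iff the four points are cocircular."""
    adx, ady = a[0] - d[0], a[1] - d[1]
    bdx, bdy = b[0] - d[0], b[1] - d[1]
    cdx, cdy = c[0] - d[0], c[1] - d[1]
    ad2 = adx * adx + ady * ady
    bd2 = bdx * bdx + bdy * bdy
    cd2 = cdx * cdx + cdy * cdy
    # 3x3 determinant of the points translated by d and "lifted" by
    # their squared length.
    return (adx * (bdy * cd2 - bd2 * cdy)
          - ady * (bdx * cd2 - bd2 * cdx)
          + ad2 * (bdx * cdy - bdy * cdx))

a, b, c = (0, 0), (2, 0), (0, 2)        # counterclockwise triangle
assert in_circle(a, b, c, (1, 1)) > 0   # circumcenter: strictly inside
assert in_circle(a, b, c, (2, 2)) == 0  # cocircular: the degenerate case
assert in_circle(a, b, c, (10, 10)) < 0 # far outside
```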
The topological structure of a Voronoi or Delaunay diagram is equivalent to that of a particular embedding of some undirected graph in the Euclidean plane. We have found it convenient to consider such diagrams as being drawn on the sphere rather than on the plane; topologically that is equivalent to augmenting the Euclidean plane by a dummy {\it point at infinity}. This allows us to represent such things as infinite edges and faces in the same way as their finite counterparts. In sections 1 through 5 we will establish the mathematical properties of such embeddings, define a notation for talking about them, and describe a data structure for their representation.\par

It turns out that the data structure we propose is general enough to allow the representation of undirected graphs embedded in arbitrary two-dimensional manifolds. In fact, it may be seen as a variant of the ``winged edge'' representation for polyhedral surfaces [\Baum]. We show that a single topological operator, which we call {\it Splice}, together with a single primitive for the creation of isolated edges, is sufficient for the construction and modification of arbitrary diagrams. Our data structure has the ability to represent simultaneously and uniformly the primal, the dual, and the mirror-image diagrams, and to switch arbitrarily from one of these domains to another, in constant time. Finally, the design of the data structure enables us to manipulate its geometrical and topological parameters independently of each other. As will become clear in the sequel, these properties have the effect of producing programs that are at once simple, elegant, efficient from a practical point of view, and asymptotically optimal in time and space.\par
\sect{1. Simple subdivisions.}

In this paper, all manifolds [\IyKa] are assumed to be two-dimensional and compact. Two connected subsets $A$ and $B$ of a manifold $M$ are said to be {\it incident} on each other iff their union is connected; otherwise, they are said to be {\it separable}. A {\it disk} of a manifold $M$ is a subspace of $M$ homeomorphic to the open unit ball of $R↑2$; a {\it line} of $M$ is a subspace homeomorphic to the open unit segment of $R$.\par
\definition{1.1}{A {\it simple subdivision} $P$ of a manifold $M$ is a partition of $M$ into three finite collections of disjoint parts, the {\it vertices}, the {\it edges}, and the {\it faces} (denoted respectively by $VP, EP, FP$), with the following properties:\par

\thal{\halign{\hskip 20pt #\quad|\lft{#}\cr
P1.|every vertex is a single point;\cr\va
P2.|every face is a disk of $M$;\cr\va
P3.|every edge is a line of $M$;\cr\va
P4.|the edges are pairwise separable;\cr\va
P5.|every vertex is incident to some edge.\cr}}}
It can be shown that the concept of simple subdivision essentially captures the intuitive idea of a generalized ``polyhedron'' or ``two-dimensional diagram'', i.e. a specific embedding of an undirected graph on a manifold $M$. See, for example, the solid diagram in fig. 2.1. In particular, we can show that every edge is incident to two (not necessarily distinct) vertices and two (not necessarily distinct) faces, that every face is bounded by a ring of edges and vertices, and so forth.\par
The edges, vertices and faces of a subdivision are called its {\it elements}. Two subdivisions, $P$ of $M$ and $P\pr$ of $M\pr$, are {\it equivalent} if there is a homeomorphism of $M$ onto $M\pr$ that takes each element of $P$ into one of $P\pr$. Such a homeomorphism will perforce map vertices into vertices, faces into faces, edges into edges, and will preserve the incidence relationships among them.\par

We now mention a couple of fine points regarding our definition. A complete treatment of these is deferred to the full paper.
We do not require the underlying graph to be triply connected, and therefore [\Har] several non-equivalent subdivisions may have the same underlying graph (even if the manifold is restricted to be a sphere). We have found that this liberal definition allows simpler atomic operations, with weaker preconditions. Note also that we allow loops and multiple edges with the same endpoints.

On the other hand, we insist that every face be a simple disk, implying that its boundary must be a single ring of edges and vertices (where one or more edges may be traversed twice). The manifold as a whole, however, may have arbitrary connectivity. The main reason for imposing this restriction is that the dual subdivision (see below) is not sufficiently well-behaved unless the faces are simple disks. The following is an important consequence of this restriction:
\par
\theorem{1.1}{The underlying undirected graph of a simple subdivision is connected iff the manifold is connected.}\par
Therefore, the connected components of the manifold are in one-to-one correspondence with the connected components of the underlying graph.\par

\sect{2. Dual subdivisions.}

\definition{2.1}{Two simple subdivisions $P↓1$ and $P↓2$ of a manifold $M$ are said to be {\it dual} to each other iff the following conditions are satisfied:\par
\thal{\halign{\hskip20pt #\quad|\lft{#}\hfill\cr
D1.|Every face of $P↓1$ contains exactly one vertex of $P↓2$; and vice-versa.\cr\va
D2.|Every edge of $P↓1$ meets exactly one edge of $P↓2$, and that at a single point;\cr
$\null$|and vice-versa.\cr}}}

So, there is a one-to-one correspondence between $EP↓1$ and $EP↓2$, between $VP↓1$ and $FP↓2$, and between $FP↓1$ and $VP↓2$. It can be shown that two elements of $P↓1$ are incident to each other iff the corresponding ones of $P↓2$ are. Two vertices of $P↓1$ are connected by an edge whenever (and as many times as) the corresponding faces of $P↓2$ are incident to a common edge. We note that\par
\theorem{2.1}{Every simple subdivision has a dual.}

\theorem{2.2}{If $P↓1$ and $Q↓1$ are respectively duals of $P↓2$ and $Q↓2$, then $P↓1$ is equivalent to $Q↓1$ iff $P↓2$ is equivalent to $Q↓2$.}

Therefore, the dual of a simple subdivision is unique up to equivalence. Fig. 2.1 below shows a subdivision of the extended plane (solid edges) and its dual (dotted edges). Arrowheads denote edges incident to the point at infinity.\par
\definition {2.2}{A pair of simple subdivisions $\set{P↓1, P↓2}$ of $M$ which are duals of each other is called a {\it double subdivision} of $M$.}

\sect{3. The edge algebra of a subdivision.}

In this section, we will develop a convenient notation for describing relationships among edges of a subdivision, and a mathematical framework that will justify the choice of our data structure. For lack of space, we are forced to restrict the following discussion to subdivisions of {\it orientable} manifolds (the sphere is the only case relevant for our application). The full paper shows how to extend the same theoretical framework and data structures to the general case. The definitions that follow refer to a fixed double subdivision $\set{P↓1, P↓2}$ of an orientable manifold $M$, and to a specific orientation of the latter (so that at each point of $M$ we have consistently defined the meaning of a ``counterclockwise'' or ``clockwise'' orientation).

There are exactly two consistent ways of defining a linear order among the points of a line $\lscr$; each of these orderings is called a {\it direction along} $\lscr$. An edge $e$ of a subdivision, together with a specific direction along it, is called a {\it directed edge}; for every edge of a subdivision (including loops) there are two directed edges. If $e$ is a directed edge (either of $P1̌$ or of $P2̌$), we can define unambiguously its vertex of {\it origin}, $e\org$, its {\it destination}, $e\dest$, its {\it left face}, $e\lef$, and its {\it right face}, $e\rif$. In the rest of the paper, unless otherwise specified, all edges are assumed to be directed.\par
The {\it rotated} version $e\rot$ of an edge $e$ is defined as being the corresponding edge of the dual subdivision, directed so as to cross $e$ from right to left. We may say that $e\rot$ is $e$ ``rotated 90\deg counterclockwise'' around the crossing point.\par
The set of all edges with the same origin $v=e\org$ can be cyclically ordered in such a way that a closed counterclockwise curve around $v$ crosses each of them exactly once, in that order. This cyclically ordered set is called the {\it ring of edges out of $v$}, and we can define the {\it next edge with same origin}, $e\onext$, as the one immediately following $e$ (counterclockwise) in this ring.\par
The functions $\rot$ and $\onext$ satisfy the following properties:\par
\thal{\halign{\hskip20pt #\quad\quad|\lft{#}\hfill\cr
E1.|$e\rot↑4=e$\cr
E2.|$e\rot\onext\rot\onext=e$\cr
E3.|$e\rot↑2\neq e$\cr
E4.|$e\rot\onext↑n\neq e$\ \ for any $n$\cr
}}\par
A number of useful properties can be deduced from these, as for example $e\rot↑{-1}=e\rot↑3$, $e\onext↑{-1}=e\rot\onext\rot$, and so forth. For added convenience in talking about subdivisions, we introduce some derived functions. The edge $e\rot↑2$ is called the {\it symmetric} of $e$, denoted by $e\sym$; it is the same undirected edge taken with the opposite direction. Also, given $e$ we define the {\it next counterclockwise edge with same left face}, $e\lnext$, {\it with same right face}, $e\rnext$, and {\it with same destination}, $e\dnext$, by the following equations:\par
\thal{\halign{\hskip20pt\lft{$ # $}|\quad=\quad\lft{$ # $}\cr
e\lnext|e\rot↑{-1}\onext\rot\cr
e\rnext|e\rot\onext\rot↑{-1}\cr
e\dnext|e\sym\onext\sym\cr
}}\par
The names of these functions reflect the fact that they give the first edge encountered after $e$ when moving counterclockwise around its left face, right face, and destination, respectively. The direction of the returned edges is defined so that $e\lnext\lef=e\lef$, $e\rnext\rif=e\rif$, and $e\dnext\dest=e\dest$ (note that $e\rnext\dest= e\org$, rather than vice-versa). By moving clockwise around a fixed endpoint or face, we get the inverse functions, defined by\par
\thal{\halign{\hskip20pt\lft{$ # $}|\quad=\quad\lft{$ # $}|\quad=\quad\lft{$ # $}\cr
e\oprev|e\onext↑{-1}|e\rot\onext\rot\cr
e\lprev|e\lnext↑{-1}|e\onext\sym\cr
e\rprev|e\rnext↑{-1}|e\sym\onext\cr
e\dprev|e\dnext↑{-1}|e\rot↑{-1}\onext\rot↑{-1}\cr
}}\par
It is important to notice that any of the functions defined so far can be expressed as the composition of a constant number of $\rot$ and $\onext$ operations, independently of the size or complexity of the subdivision. Figure 3.1 illustrates these various functions.\par
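As a concrete check, the smallest edge algebra (the four directed edges $e\rot↑r$ of a one-edge subdivision of the sphere) can be realized by the integers 0 to 3, and the axioms and derived identities verified by direct computation. The sketch below is ours; the encoding is a hypothetical one, not taken from the paper.

```python
# The four directed edges e, e Rot, e Sym, e Rot^-1, encoded as r = 0..3.
ONEXT = [0, 3, 2, 1]          # e Onext = e; the dual edge is a loop

def rot(e):   return (e + 1) % 4
def onext(e): return ONEXT[e]
def sym(e):   return rot(rot(e))                      # e Rot^2
def lnext(e): return rot(onext(rot(rot(rot(e)))))     # e Rot^-1 Onext Rot
def oprev(e): return rot(onext(rot(e)))               # e Onext^-1

for e in range(4):
    assert rot(rot(rot(rot(e)))) == e                 # E1
    assert onext(rot(onext(rot(e)))) == e             # E2
    assert sym(e) != e                                # E3
    f = rot(e)                                        # E4: e Rot Onext^n != e
    for _ in range(4):
        assert f != e
        f = onext(f)
    # derived identity: Onext^-1 = Rot Onext Rot
    assert onext(oprev(e)) == e == oprev(onext(e))
```

Every derived function above is a composition of at most three applications of `rot` and `onext`, independent of the size of the subdivision, as claimed.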
\definition{3.1}{An {\it edge algebra} is an abstract algebra $(E, \rot, \onext)$, where $E$ is an arbitrary finite set, and $\rot$ and $\onext$ are functions from $E$ into $E$ that satisfy the properties E1-E4.}

The definitions of $\sym$, $\lnext$, $\rnext$, etc., and their inverses can be applied to arbitrary edge algebras. Any double subdivision determines a unique edge algebra, in the obvious way. The two theorems below show that edge algebras are essentially an abstraction of double subdivisions:\par
\theorem{3.1}{Every edge algebra can be realized by some double subdivision.}
The extension to non-orientable manifolds requires the introduction in the edge algebra of a third fundamental operation, $\flip$ (that ``moves to the other side'' of the manifold, reversing the meaning of ``clockwise'' and ``counterclockwise''), with its associated axioms. With this extension it is possible to obtain in constant time the mirror image of a subdivision. For these extended edge algebras theorem 3.1 still holds, and in addition the following is true:\par

\theorem{3.2}{Two double subdivisions $P$ and $Q$ are equivalent iff their edge algebras are isomorphic (as algebras).}

The axioms of an edge algebra imply that $E$ can be partitioned into two sets $\set{E↓1, E↓2}$, such that $\rot$ takes one into the other, and each is closed under $\onext$. Each subset $E↓i$ corresponds to the edges of one member $P↓i$ of a double subdivision. The orbits of $\onext$ in $E↓1$ are in one-to-one correspondence with the vertices of $P↓1$, and therefore also with the faces of $P↓2$. We can therefore define $e\org$ in an edge algebra as the orbit of $e$ under $\onext$, and similarly $e\lef=e\rot↑{-1}\org$, $e\rif=e\rot\org$, $e\dest=e\sym\org$.\par
\sect{4. The {\quadedge} data structure.}

We represent a simple subdivision $P↓1$ (and simultaneously its dual $P↓2$) by means of the {\it {\quadedge} data structure}, which is a natural computer implementation of the corresponding edge algebra. The edges of the algebra can be partitioned into groups of four, each group consisting of an undirected edge of $P↓1$ and its dual, each in its two possible directions. Each of these groups is represented in the data structure by one {\it edge record}; by convention, a particular one of the four edges is selected to be the {\it standard edge} for the group. Any edge $e$ can then be written as $\bar e\rot↑r$, where $\bar e$ is the standard edge of its group and $0\leq r < 4$.\par
In the data structure, $e$ is represented by an {\it edge reference}: the pair \prg{[e, $r$]}, where \prg{e} is a pointer to the appropriate edge record. This record is divided into four parts, \prg{e[0]} through \prg{e[3]}, each containing two fields \prg{e[$r$]\pdata} and \prg{e[$r$]\pnext}. The \prg{Data} field is used to hold geometrical and other non-topological information about the edge $\bar e\rot↑r$. This field neither affects nor is affected by the topological operations that we will describe, so its contents and format are entirely dependent on the application.\par

The \prg{Next} field of \prg{e[$r$]} contains a reference to the edge $\bar e\rot↑r\onext$. Given an arbitrary edge reference \prg{[e, $r$]}, the two basic edge functions $\rot$ and $\onext$ are given by the formulas

\thal{\halign{\hskip20pt\lft{$ # $}|\quad=\quad\lft{$ # $}\cr
\prg{[e, $r$]}\rot|\prg{[e, $r+1$]}\cr
\prg{[e, $r$]}\onext|\prg{e[$r$]\pnext}\cr
}}

where the $r$ component is computed modulo 4. It follows that

\thal{\halign{\hskip20pt\lft{$ # $}|\quad=\quad\lft{$ # $}\cr
\prg{[e, $r$]}\sym|\prg{[e, $r+2$]}\cr
\prg{[e, $r$]}\rot↑{-1}|\prg{[e, $r+3$]}\cr
\prg{[e, $r$]}\oprev|(\prg{e[$r+1$]\pnext})\rot,\cr
}}

and so forth.\par
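These formulas translate directly into executable form. The sketch below (in Python; the record layout, field names, and tuple encoding of \prg{[e, $r$]} are our own choices, and the \prg{Data} fields are left unused) builds one edge record and checks the formulas above on it.

```python
class EdgeRecord:
    """One group of four directed edges e Rot^r, r = 0..3."""
    def __init__(self):
        self.next = [None] * 4   # next[r] holds the reference (e Rot^r) Onext
        self.data = [None] * 4   # non-topological information, unused here

def make_edge():
    """An isolated edge of a spherical subdivision: e Onext = e,
    and the dual edge (parts r = 1, 3) is a loop."""
    q = EdgeRecord()
    q.next = [(q, 0), (q, 3), (q, 2), (q, 1)]
    return (q, 0)

# An edge reference is a pair (record, r); r is taken modulo 4.
def rot(e):   return (e[0], (e[1] + 1) % 4)   # [e, r] Rot   = [e, r+1]
def sym(e):   return (e[0], (e[1] + 2) % 4)   # [e, r] Sym   = [e, r+2]
def onext(e): return e[0].next[e[1]]          # [e, r] Onext = e[r].Next
def oprev(e): return rot(onext(rot(e)))       # [e, r] Oprev = (e[r+1].Next) Rot

e = make_edge()
assert rot(rot(rot(rot(e)))) == e             # E1: four rotations
assert onext(e) == e == oprev(e)              # a one-edge origin ring
assert onext(rot(e)) == rot(rot(rot(e)))      # the dual edge is a loop
```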
The record of an arbitrary edge belongs to four circular lists, corresponding to its two endpoints and its two faces. Figure 4.1 illustrates a portion of a subdivision and its {\quadedge} data structure.\par
The {\quadedge} data structure contains no separate records for vertices or faces; a vertex is implicitly defined as a ring of edges, and the standard way to refer to it is to specify one of the edges out of it. This has the added advantage of specifying a reference point on its edge ring, which is frequently necessary when using the vertex as a parameter to topological operations. Similarly, the standard way of referring to a connected component of the edge structure is by giving one of its directed edges. In this way, we are also specifying one of the two dual subdivisions, and a ``starting place'' and ``starting direction'' on it. Therefore a subdivision referred to by the edge $e$ can be ``instantaneously'' transformed into its dual by taking $e\rot$.\par
The storage space required by the {\quadedge} data structure, including the \prg{Data} fields, is (8 record pointers + 8 bits)$\null\times \leftv EP\rightv$. This compares favorably with the winged-edge representation [\Baum] and with the Muller-Preparata variant [\MuPr]. Indeed, all three representations use essentially the same pointers: each edge is connected to the four ``immediately adjacent'' ones ($\onext$, $\oprev$, $\dnext$, $\dprev$), and the four \prg{Data} fields of our structure may be seen as corresponding to the vertex and face links of theirs.\par
Compared with the two versions mentioned above, the {\quadedge} data structure has the advantage of allowing access to the dual subdivision practically for free. As we shall see, this capability allows us to cut in half the number of primitive and derived operations, since these usually come in pairs whose members are ``duals'' of each other. As an illustration of the flexibility of the {\quadedge} structure, consider the problem of constructing a diagram which is a cube joined to an octahedron: we can construct two cubes (calling the same procedure twice) and join one to the dual of the other.\par
The systematic enumeration of all edges in a (connected) subdivision is a straightforward programming exercise, given an auxiliary stack of size $O(\leftv EP\rightv)$ and a boolean mark bit on each directed edge [\Knuth]. With a few more bits per edge, we can do away with the stack entirely [\Even]. A slight modification of those algorithms can be used to enumerate the vertices of the subdivision, in the sense of visiting exactly one edge out of every vertex. If we take the dual subdivision, we get an enumeration of the faces. In all cases the running time is linear in the number of edges.\par
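A minimal version of this enumeration, using an explicit stack and a set of marks over directed edges, is sketched below (over our own tuple encoding of the structure; helper names are ours). Pushing the $\onext$ and $\sym$ neighbors of each visited edge reaches every directed edge of the component while never crossing into the dual.

```python
class EdgeRecord:
    def __init__(self):
        self.next = [None] * 4

def make_edge():
    q = EdgeRecord()
    q.next = [(q, 0), (q, 3), (q, 2), (q, 1)]
    return (q, 0)

def rot(e):   return (e[0], (e[1] + 1) % 4)
def sym(e):   return (e[0], (e[1] + 2) % 4)
def onext(e): return e[0].next[e[1]]

def splice(a, b):
    # exchange a Onext with b Onext and a' Onext with b' Onext,
    # where a' = a Onext Rot and b' = b Onext Rot
    alpha, beta = rot(onext(a)), rot(onext(b))
    a[0].next[a[1]], b[0].next[b[1]] = onext(b), onext(a)
    alpha[0].next[alpha[1]], beta[0].next[beta[1]] = onext(beta), onext(alpha)

def all_edges(start):
    """Every directed edge of start's component that lies in the same
    subdivision as start (Onext and Sym stay within one subdivision)."""
    seen, stack = set(), [start]
    while stack:
        e = stack.pop()
        if e not in seen:
            seen.add(e)
            stack.append(onext(e))   # rest of the ring out of Org(e)
            stack.append(sym(e))     # the opposite direction of e
    return seen

a, b = make_edge(), make_edge()
splice(a, b)                         # one component with two edges
assert len(all_edges(a)) == 4        # two undirected edges, both directions
```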
\sect{5. Basic topological operators.}

Perhaps the main advantage of the {\quadedge} data structure is that the construction and modification of arbitrary diagrams can be effected by as few as two basic topological operators, in contrast to the half-dozen or more required by the previous versions [\Braid,
\MaSu].\par
The first operator is denoted by \prg{$e$ \pgets\ MakeEdge[]}. It takes no parameters, and returns an edge $e$ of a newly created data structure representing a subdivision of the sphere (see fig. 5.1).\par
Apart from direction, $e$ will be the only edge of the subdivision, and will not be a loop; we have $e\org\neq e\dest$, $e\lef=e\rif$, $e\lnext=e\rnext=e\sym$, and $e\onext=e\oprev=e$. To construct a loop, we may use \prg{$e$ \pgets\ MakeEdge[]\prot}; then we will have $e\org=e\dest$, $e\lef\neq e\rif$, $e\lnext=e\rnext=e$, and $e\onext=e\oprev=e\sym$.\par
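These identities can be checked mechanically on the four directed edges of \prg{MakeEdge}'s subdivision (a sketch of ours; the integer encoding of $e\rot↑r$ as $r$ is a hypothetical one, not from the paper).

```python
# Encode the four directed edges e Rot^r of MakeEdge[]'s subdivision as 0..3.
ONEXT = [0, 3, 2, 1]                  # e Onext = e; the dual edge is a loop

def rot(e):   return (e + 1) % 4
def onext(e): return ONEXT[e]
def sym(e):   return rot(rot(e))
def lnext(e): return rot(onext(rot(rot(rot(e)))))   # Rot^-1 Onext Rot
def rnext(e): return rot(rot(rot(onext(rot(e)))))   # Rot Onext Rot^-1
def oprev(e): return rot(onext(rot(e)))             # Onext^-1

e = 0                                 # e <- MakeEdge[]
assert lnext(e) == rnext(e) == sym(e)
assert onext(e) == oprev(e) == e

loop = rot(e)                         # e <- MakeEdge[].Rot: a loop
assert lnext(loop) == rnext(loop) == loop
assert onext(loop) == oprev(loop) == sym(loop)
```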
The second operator is denoted by \prg{Splice[$a$, $b$]} and takes as parameters two edges $a$ and $b$, returning no value. This operation affects the two edge rings $a\org$ and $b\org$, and, independently, the two edge rings $a\lef$ and $b\lef$. In each case,

\indent\bu\ if the two rings are distinct, they will be combined into one;

\indent\bu\ if the two are the same ring, this ring will be separated into two pieces.\par
The parameters $a$ and $b$ determine the place where the edge rings will be cut and joined. For the rings $a\org$ and $b\org$, the cuts will occur immediately {\it after} $a$ and $b$ (in counterclockwise order); for the rings $a\lef$ and $b\lef$, the cut will occur immediately {\it before} $a\rot$ and $b\rot$. Figure 5.2.a illustrates this process for the case when the origins are distinct and the left faces are the same (or vice-versa).\par
In terms of the data structure, this operation is actually quite simple. If we let $a\pr=a\onext\rot$ and $b\pr=b\onext\rot$, the effect of \prg{Splice[$a$, $b$]} is to exchange $a\onext$ with $b\onext$ and $a\pr\onext$ with $b\pr\onext$. The values of $\rot$, \prg{Data}, and of all other $\onext$s, are not affected by this operation. The apparently complex behavior of \prg{Splice} now can be recognized as the familiar effect of switching the \prg{next} links of two circular list nodes [\Knuth].\par
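In executable form (over our own tuple encoding of edge references; names ours), the two exchanges read as follows; the checks confirm that one \prg{Splice} combines two one-edge origin rings and that a second application undoes the first.

```python
class EdgeRecord:
    def __init__(self):
        self.next = [None] * 4

def make_edge():
    q = EdgeRecord()
    q.next = [(q, 0), (q, 3), (q, 2), (q, 1)]
    return (q, 0)

def rot(e):   return (e[0], (e[1] + 1) % 4)
def onext(e): return e[0].next[e[1]]

def splice(a, b):
    """Exchange a Onext with b Onext, and a' Onext with b' Onext,
    where a' = a Onext Rot and b' = b Onext Rot."""
    alpha, beta = rot(onext(a)), rot(onext(b))
    a[0].next[a[1]], b[0].next[b[1]] = onext(b), onext(a)
    alpha[0].next[alpha[1]], beta[0].next[beta[1]] = onext(beta), onext(alpha)

a, b = make_edge(), make_edge()
splice(a, b)                          # combine the two origin rings
assert onext(a) == b and onext(b) == a
assert onext(rot(a)) == rot(rot(rot(b)))   # the dual (face) rings merged too
splice(a, b)                          # Splice is its own inverse
assert onext(a) == a and onext(b) == b
```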
Figure 5.2.b illustrates the effect of \prg{Splice[$a$, $b$]} in the case where $a$ and $b$ have distinct left faces and distinct origins. In this case, \prg{Splice} will either join two components into a single one, or add an extra ``handle'' to the manifold, depending on whether $a$ and $b$ are in the same component or not. The same figure, read in reverse, also illustrates the case when both left faces and origins are the same. Note that the operation is a no-op if $b=a$.\par
The operation is not defined if $a$ and $b$ are in the same component but one belongs to $E↓1$ and the other to $E↓2$.
Provided this restriction is met, it is not hard to prove that \prg{Splice} preserves the edge algebra axioms, and therefore produces a valid data structure.\par
As an example of the use of \prg{Splice}, consider the operation \prg{SwapDiagonal[$e$]} that we will be using later on: given an edge $e$ whose left and right faces are triangles, the problem is to delete $e$ and connect the other two vertices of the quadrilateral thus formed (see fig 5.3).
This is equivalent to\par

{\prog
\ \ \ \ \ \ \ \ a \pgets\ e\poprev;
\ \ \ \ \ \ \ \ b \pgets\ e\psym\poprev;
\ \ \ \ \ \ \ \ Splice[e, a]; Splice[e\psym, b];
\ \ \ \ \ \ \ \ Splice[e, a\plnext]; Splice[e\psym, b\plnext]
\endprog}\par
The first pair of \prg{Splice}s disconnects $e$ from the edge structure, and leaves it as the single edge of a separate spherical component. The last two \prg{Splice}s connect $e$ again at the required position.\par
\theorem{5.1}{An arbitrary subdivision $P$ can be transformed into a collection of $\leftv EP\rightv$ isolated edges by the application of at most $2\leftv EP\rightv$ \prg{Splice} operations.}\par
An important property of \prg{Splice} is that it is its own inverse, i.e. performing \prg{Splice[$a$, $b$]} twice in a row leaves the data structure completely unchanged. From this fact and the theorem above, we can conclude that any subdivision $\set{P↓1, P↓2}$ can be constructed, in $O(\leftv EP\rightv)$ time and space, by using only the \prg{Splice} and \prg{MakeEdge} operations. This remains true in the non-orientable case as well.\par
We have no mechanism for automatically keeping track of the components and connectivity of the manifold. There seems to be no general way of doing this at a bounded cost per operation; on the other hand, in many applications the problem is trivial or straightforward, so it is best solved independently in each case.\par
The \prg{Data} links are not affected by (and do not affect) the \prg{MakeEdge} and \prg{Splice} operations; if used at all, they can be set and updated, at any time after the edge is created, by plain assignment statements. Since they carry no topological information, there is no need to forbid or restrict assignments to them.\par
Usually each application imposes geometrical or other constraints on the \prg{Data} fields that may be affected by changes in the topology. Some care is required when enforcing those constraints; for example, the operation of joining two vertices may change the geometrical parameters of a large number of edges and faces, and updating all the corresponding \prg{Data} fields every time may be too expensive. However, even in such applications it is frequently the case that we can defer those updates until they are really needed (so that their cost can be amortized over a large number of \prg{Splice}s), or initialize the \prg{Data} links right from the beginning with the values they must have in the final structure.\par
\sect{6. Topological operators for Delaunay diagrams.}

In the Voronoi/Delaunay algorithms described further on, all edge variables refer to edges of the Delaunay diagram. The \prg{Data} field for a Delaunay edge $e$ points to a record containing the coordinates of its origin $e\org$, which is one of the sites; accordingly, we will use $e\porg$ as a synonym of $e\pdata$ in those algorithms. For convenience, we will also use $e\pdest$ instead of $e\psym\porg$. We emphasize again that these \prg{Dest} and \prg{Org} fields carry no topological meaning, and are not updated by the \prg{Splice} operation per se. The endpoints of the dual edges (Voronoi vertices) are neither computed nor used by the algorithms; if desired, they can be easily added to the structure, either during its construction or after it. The fields $e\prot\pdata$ and $e\prot↑{-1}\pdata$ are not used.\par
The topological manipulations performed by the algorithms on the Delaunay/Voronoi diagrams can be reduced to a pair of higher-level topological operators, defined here in terms of \prg{Splice} and \prg{MakeEdge}. The operation \prg{$e$ \pgets\ Connect[$a$, $b$, Side]} will add a new edge $e$ connecting the destination of $a$ to the origin of $b$, across either $a\lef$ or $a\rif$ depending on the \prg{Side} flag. For added convenience, it will also set the \prg{Org} and \prg{Dest} fields of the new edge to $a\pdest$ and $b\porg$, respectively.\par
{\prog
\ \ \ \ \ \ \ \ e \pgets\ MakeEdge[];
\ \ \ \ \ \ \ \ e\porg \pgets\ a\pdest;
\ \ \ \ \ \ \ \ e\pdest \pgets\ b\porg;
\ \ \ \ \ \ \ \ Splice[e, IF Side=left THEN a\plnext ELSE a\psym];
\ \ \ \ \ \ \ \ Splice[e\psym, IF Side=left THEN b ELSE b\poprev]
\endprog}\par
The operation \prg{DeleteEdge[$e$]} will disconnect the edge $e$ from the rest of the structure (this may cause the rest of the structure to fall apart into two separate components). In a sense, \prg{DeleteEdge} is the inverse of \prg{Connect}. It is equivalent to\par
{\prog
\ \ \ \ \ \ \ \ Splice[e, e\poprev];
\ \ \ \ \ \ \ \ Splice[e\psym, e\psym\poprev]
\endprog}\par
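These operators are easy to exercise concretely. The following is a minimal Python sketch of \prg{MakeEdge}, \prg{Splice}, and the derived \prg{Connect} and \prg{DeleteEdge}; the array-of-four-parts representation and all identifiers are our own implementation choices, not prescribed by this abstract.

```python
class Edge:
    """One directed edge; four parts (e, e.rot, e.sym, e.invrot) share a quad."""
    def __init__(self):
        self.quad = None   # the 4-tuple of parts this edge belongs to
        self.index = 0     # this part's position within the quad
        self.onext = self  # next edge counterclockwise around the origin
        self.data = None   # geometric payload, e.g. Org coordinates

    @property
    def rot(self):         # the dual edge, "rotated" 90 degrees
        return self.quad[(self.index + 1) % 4]

    @property
    def sym(self):         # the same edge, directed the other way
        return self.quad[(self.index + 2) % 4]

    @property
    def invrot(self):
        return self.quad[(self.index + 3) % 4]

    @property
    def lnext(self):       # next edge counterclockwise around the left face
        return self.invrot.onext.rot

    @property
    def oprev(self):       # previous edge around the origin
        return self.rot.onext.rot

def make_edge():
    """A fresh edge with distinct endpoints and coinciding left and right faces."""
    q = tuple(Edge() for _ in range(4))
    for i, e in enumerate(q):
        e.quad, e.index = q, i
    q[1].onext, q[3].onext = q[3], q[1]   # the two dual parts form a loop
    return q[0]

def splice(a, b):
    """Exchange the Onext rings of a and b (and, dually, of their left faces)."""
    alpha, beta = a.onext.rot, b.onext.rot
    a.onext, b.onext = b.onext, a.onext
    alpha.onext, beta.onext = beta.onext, alpha.onext

def connect(a, b, side='left'):
    """Add an edge from a.Dest to b.Org across a.Left or a.Right (section 6)."""
    e = make_edge()
    e.data, e.sym.data = a.sym.data, b.data   # Org and Dest, as plain data fields
    splice(e, a.lnext if side == 'left' else a.sym)
    splice(e.sym, b if side == 'left' else b.oprev)
    return e

def delete_edge(e):
    """Disconnect e: Splice[e, e.Oprev]; Splice[e.Sym, e.Sym.Oprev]."""
    splice(e, e.oprev)
    splice(e.sym, e.sym.oprev)
```

Since \prg{Splice} is its own inverse, \prg{DeleteEdge} exactly undoes the two splices performed by \prg{Connect}, which is the sense in which the two operators are inverses.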

\sect{7. Voronoi and Delaunay -- some basic facts.}
In this section we recapitulate some of the most important properties of Voronoi diagrams and their duals, with an emphasis on the results we will need later on. For a fuller treatment of these topics the reader should consult [\Sham], [\Lee], or [\ShamHo].\par
If we are given only two sites, then the associated Voronoi regions are simply the two (open) half-planes defined by the bisector of the two sites. More generally, when $n$ sites are given, the region associated with a particular site $p$ will be the intersection of all half-planes containing $p$ and defined by the bisectors between $p$ and the other points. It follows that the Voronoi regions are (possibly unbounded) convex polygons, whose edges are portions of inter-site bisectors, and whose vertices are circumcenters of triangles defined by three of the sites. An example Voronoi diagram for a small collection of sites is shown in fig. 7.1.\par
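The half-plane characterization can be checked computationally: a query point lies in the (open) region of site $p$ exactly when it is on $p$'s side of every bisector, i.e. when $p$ is the strictly nearest site. A small Python sketch (the site coordinates and names are our own illustrative choices):

```python
def dist2(p, q):
    """Squared Euclidean distance."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def in_voronoi_region(q, p, sites):
    """True iff q lies in the open Voronoi region of site p, i.e. iff q is on
    p's side of the bisector of p and s for every other site s."""
    return all(dist2(q, p) < dist2(q, s) for s in sites if s != p)
```

Points equidistant from two nearest sites lie on a Voronoi edge, which belongs to neither open region; the strict inequality reflects that.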
As mentioned above, most of the time we will actually deal with the dual of the Voronoi subdivision, commonly called the Delaunay triangulation. As the name implies, this is a planar subdivision whose vertices are the $n$ sites and whose (straight line) edges connect every pair of sites associated with regions that share a common edge. If no four of the sites happen to be cocircular, then this dual is in fact a triangulation and, in any case, additional edges can be introduced to make it one. Although our algorithms can handle the degeneracies implied by four cocircular points, in this section only we will assume that such degeneracies do not arise, so we can speak unambiguously of the Delaunay triangulation of the $n$ sites. If $S$ denotes our collection of sites, then $D(S)$ will denote the Delaunay triangulation of $S$.\par
We say that a circle is {\it point-free} if none of the given sites is contained in its interior. It is known that the circumcircle of each Delaunay triangle is point-free and that this property completely characterizes the Delaunay triangulation [\Lee]. Actually a more local characterization is also possible. If the circumcircle of a triangle defined by three of the sites is point-free, then that triangle is a Delaunay triangle. And if it is possible to find a point-free circle passing through two sites, then these sites will be connected by a Delaunay edge [\Lee]. We will speak of these circles as {\it witnesses} to the ``Delaunayhood'' of the corresponding triangles or edges, respectively. For completeness we also state here a couple of other obvious properties of the Delaunay triangulation which will be useful in the sequel.\par
\lemma{7.1}{Each triangulation of $n$ points of which $k$ lie on the convex hull has $2(n-1)-k$ triangles and $3(n-1)-k$ edges.}\par
\lemma{7.2}{Each convex hull edge is a Delaunay edge.}\par
\lemma{7.3}{If $S$ and $T$ are two point sets and $S \subset T$, then each Delaunay edge $e \in D(T)$, both of whose endpoints are in $S$, is also a Delaunay edge in $D(S)$.}\par
In other words, the addition of extra points does not add new edges between old points.\par
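The counts of Lemma 7.1 are easy to verify on small instances; a quick sketch (the two example configurations below are ours):

```python
def triangulation_counts(n, k):
    """Triangles and edges of any triangulation of n points, k of them on the
    convex hull, per Lemma 7.1: 2(n-1)-k triangles and 3(n-1)-k edges."""
    return 2 * (n - 1) - k, 3 * (n - 1) - k

# a square split by one diagonal: n = 4 points, all k = 4 on the hull
# a triangle with one interior point joined to all three corners: n = 4, k = 3
```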
\sect{8. The InCircle test -- implications for Delaunay.}
We now proceed to define the main geometric primitive we will use for Delaunay computations. This test is applied to four distinct points $A$, $B$, $C$, and $D$ in the plane. See fig. 8.1.\par
\definition{8.1}{The predicate $\InCircle(A,B,C,D)$ is defined to be true if and only if the point $D$ lies interior to the region of the plane that is bounded by the circle passing through $A$, $B$, and $C$ and that lies to the left of that circle when the latter is traversed in the direction from $A$ to $B$ to $C$.}\par
In particular this implies that $D$ should be inside the circle $ABC$ if the points $A$, $B$, and $C$ define a counterclockwise oriented triangle, and {\it outside}, if they define a clockwise oriented one. (In case $A$, $B$, and $C$ are collinear we interpret the line as a circle by adding the point at infinity). If $A$, $B$, $C$, and $D$ are cocircular, then our predicate returns false. Notice that the test is equivalent to asking whether $\angle ABC + \angle CDA >\angle BCD + \angle DAB$. Another equivalent form of it follows from Ptolemy’s theorem in Euclidean plane geometry, namely
$$AB\times CD+BC\times AD > BD\times AC.$$\par
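Numerically, the predicate reduces to the sign of a determinant; the following Python sketch uses the standard $3{\times}3$ form, a formulation we supply rather than one given in this abstract. Cocircular points make the determinant vanish, so the predicate returns false, as required.

```python
def in_circle(a, b, c, d):
    """InCircle(A,B,C,D) of Definition 8.1: a positive determinant means D is
    to the left of the circle through A, B, C traversed from A to B to C
    (inside for a counterclockwise triangle, outside for a clockwise one)."""
    adx, ady = a[0] - d[0], a[1] - d[1]
    bdx, bdy = b[0] - d[0], b[1] - d[1]
    cdx, cdy = c[0] - d[0], c[1] - d[1]
    det = ((adx * adx + ady * ady) * (bdx * cdy - bdy * cdx)
         + (bdx * bdx + bdy * bdy) * (cdx * ady - cdy * adx)
         + (cdx * cdx + cdy * cdy) * (adx * bdy - ady * bdx))
    return det > 0   # det == 0 (cocircular or collinear) yields False
```

Transposing any adjacent pair of arguments swaps two rows of the determinant and so flips its sign, which is exactly the alternation stated in Lemma 8.1 below.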
The {\InCircle} test possesses several interesting properties:\par
\lemma{8.1}{If $A$, $B$, $C$, $D$ are any four non-cocircular points in the plane, then transposing any adjacent pair in the predicate $\InCircle(A,B,C,D)$ will change the value of the predicate from true to false, or vice versa. In particular, the boolean sequence $\InCircle(A,B,C,D), \InCircle(B,C,D,A), \InCircle(C,D,A,B), \InCircle(D,A,B,C)$ is either T(rue), F(alse), T, F, or F, T, F, T.}\par
This lemma shows that in $\InCircle(A,B,C,D)$ all four points play a symmetric role, even though from the definition $D$ seems to be special. If $A$, $B$, $C$, $D$ define a counterclockwise oriented quadrilateral and $\InCircle(A,B,C,D)$ is true, then $\InCircle(C,B,A,D)$ is false, so reversing the orientation flips the value of the predicate. Whenever we consider applying this test to a (rooted) quadrilateral in the sequel, we will take the quadrilateral to be counterclockwise oriented. The next lemma shows that {\InCircle} satisfies a certain transitivity property that will be useful in some of our proofs later. See fig. 8.2.
\lemma{8.2}{If $A$, $B$, $C$, $D$, $E$ are such that $DABC$, $EACD$, and $EABC$ are simple quadrilaterals, then $\InCircle(D,A,B,C)$ and $\InCircle(E,A,C,D)$ imply $\InCircle(E,A,B,C)$.}\par
For a convex quadrilateral $ABCD$ there are two ways to triangulate it: we can draw either diagonal $AC$ or diagonal $BD$. The sides $AB$, $BC$, $CD$, and $DA$ are always Delaunay edges. If $\InCircle(A,B,C,D)$ is true, then (by lemma 8.1) the diagonal $BD$ completes the Delaunay triangulation for the four points $A$, $B$, $C$, $D$ (else the diagonal $AC$ does). More generally, for any collection of points, all the convex hull edges are Delaunay edges. For each Delaunay edge $BD$, if the two triangles $ABD$ and $BDC$ on its two sides form a convex quadrilateral, then $\InCircle(A,B,C,D)$ is true. A converse of this also holds [\Lee]. This is commonly expressed in terms of the ``swapping rule'', an implementation of which was presented in section 5. This rule states that if there exists a convex quadrilateral $ABCD$ that has been triangulated with diagonal $AC$, but $\InCircle(A,B,C,D)$ is true, then this diagonal should be erased and diagonal $BD$ should be drawn. Success of the swapping rule on an edge ($AC$ in the previous sentence) is a witness for non-Delaunayhood. The above converse then states that a triangulation stationary under the swapping rule is the Delaunay triangulation. In fact, the swapping rule provides a new necessary and sufficient condition for the Delaunayhood of an edge, namely that the rule fails for every quadrilateral defined by the endpoints of the edge and any two other points, one on each side.\par
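On a single convex quadrilateral the swapping rule can be sketched as follows; the \prg{in\_circle} helper is the usual determinant evaluation of the predicate, and all the names here are ours.

```python
def in_circle(a, b, c, d):
    # sign of the standard InCircle determinant (Definition 8.1)
    adx, ady = a[0] - d[0], a[1] - d[1]
    bdx, bdy = b[0] - d[0], b[1] - d[1]
    cdx, cdy = c[0] - d[0], c[1] - d[1]
    return ((adx * adx + ady * ady) * (bdx * cdy - bdy * cdx)
          + (bdx * bdx + bdy * bdy) * (cdx * ady - cdy * adx)
          + (cdx * cdx + cdy * cdy) * (adx * bdy - ady * bdx)) > 0

def delaunay_diagonal(a, b, c, d):
    """For a convex quadrilateral ABCD (counterclockwise) triangulated with
    diagonal AC: swap to BD exactly when InCircle(A,B,C,D) holds."""
    return (b, d) if in_circle(a, b, c, d) else (a, c)
```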
\sect{9. The divide and conquer algorithm.}
In this section we use the tools we have developed so far to describe, analyze, and prove correct a divide and conquer algorithm for computing the Delaunay triangulation of $n$ points in the plane. Topologically the quad-edge data structure gives us the dual for free, so by associating some relevant geometric information with our face nodes, e.g., the coordinates of the corresponding Voronoi vertices, we are simultaneously computing the Voronoi diagram as well. Our algorithm is the dual of that proposed by Shamos and Hoey [\ShamHo] and, like theirs, runs in time $O(n \lg n)$ and uses linear storage. In [\Lee] D. T. Lee proposes a dual algorithm which is similar to but more complex than ours. Unlike both of these, our algorithm need not compute straight line intersections unless the coordinates of Voronoi vertices are needed. The only geometric primitives used by our algorithm are the {\InCircle} test and the predicate $\CCW(A,B,C)$, which is true if the points $A$, $B$, and $C$ form a counterclockwise oriented triangle. The latter is a standard tool in geometric algorithms, and is equivalent to reporting on which side of a line a point lies. It is also equivalent to $\InCircle(A,B,C,D)$, for $D$ chosen as the barycenter of triangle $ABC$.\par
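$\CCW$ is likewise the sign of a determinant. The sketch below (our formulation, with the \prg{in\_circle} helper repeated from section 8) also lets one check the barycenter equivalence just mentioned:

```python
def ccw(a, b, c):
    """True iff A, B, C form a counterclockwise oriented triangle:
    the cross product of B - A and C - A is positive."""
    return ((b[0] - a[0]) * (c[1] - a[1])
          - (b[1] - a[1]) * (c[0] - a[0])) > 0

def in_circle(a, b, c, d):
    # sign of the standard InCircle determinant (Definition 8.1)
    adx, ady = a[0] - d[0], a[1] - d[1]
    bdx, bdy = b[0] - d[0], b[1] - d[1]
    cdx, cdy = c[0] - d[0], c[1] - d[1]
    return ((adx * adx + ady * ady) * (bdx * cdy - bdy * cdx)
          + (bdx * bdx + bdy * bdy) * (cdx * ady - cdy * adx)
          + (cdx * cdx + cdy * cdy) * (adx * bdy - ady * bdx)) > 0

def barycenter(a, b, c):
    return ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)
```

The equivalence holds because the barycenter is always interior to the triangle, hence interior to its circumcircle, and Definition 8.1 then reports true exactly for counterclockwise $A$, $B$, $C$.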
As one might expect, in the divide and conquer algorithm we start by partitioning our points into two halves, the left half ($L$) and the right half ($R$), which are separated in the $x$-coordinate. We next recursively compute the Delaunay triangulation of each half. Finally we need to marry the two half triangulations into the Delaunay triangulation of the whole set.\par
We now elaborate on this brief description in stages. First of all it is advantageous to start out by sorting our points in increasing $x$-coordinate. When there are ties we resolve them by sorting in increasing $y$-coordinate and throwing away duplicate points. This makes all future splittings constant time operations. After splitting in the middle and recursively triangulating $L$ and $R$, we must consider the merge step. Note that this may involve deleting some $L-L$ or $R-R$ edges and will certainly require adding some $L-R$ or {\cross} edges. By lemma 7.3, however, no new $L-L$ or $R-R$ edges will be added.\par
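The preprocessing just described is straightforward; a Python sketch, with representation choices (tuples, list slices) our own:

```python
def preprocess(points):
    """Sort by increasing x, breaking ties by increasing y, and drop
    duplicates; tuples compare lexicographically, which is exactly this order."""
    return sorted(set(points))

def split(pts):
    """Constant-time split of a sorted point list into the left and right halves."""
    mid = len(pts) // 2
    return pts[:mid], pts[mid:]
```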
What is the structure of the cross edges? All these edges must cross a line parallel to the $y$ axis and placed at the splitting $x$ value. This establishes a linear ordering of the cross edges, so we can talk about successive cross edges, the bottom-most cross edge, etc. The algorithm we are about to present will produce the cross edges incrementally, in ascending $y$-order. See fig. 9.1.\par
\lemma{9.1}{Any two cross edges adjacent in the $y$-ordering share a common vertex. The third side of the triangle they define is either an $L-L$ or an $R-R$ edge.}\par
Lemma 9.1 has the following important consequence. Let us call the current cross edge the base, and write its directed variant going from right to left as {\basel}. The successor to {\basel} will either be an edge going from the left end-point of {\basel} to one of the $R$-neighbors of the right end-point lying ``above'' {\basel}, or, symmetrically, it will be an edge from the right end-point of {\basel} to one of the $L$-neighbors of the left end-point lying ``above'' {\basel}. In the program below edges from the left end-point of {\basel} to its candidate $L$-neighbors will be held in the variable {\lcand}, and their symmetric counterparts in {\rcand}. See fig. 9.2.\par
We can intuitively view what happens by imagining that a circular bubble weaves its way in the space between $L$ and $R$ and in so doing gives us the cross edges. Inductively we have a point-free circle circumscribing the triangle defined by {\basel} and the previous cross edge. Consider continuously transforming this circle into other circles having {\basel} as a chord, but lying further into the half-plane above {\basel}. There is only a single degree of freedom, as the center of the circle is constrained to lie on the bisector of {\basel}. See fig. 9.3. Our circles will be point-free for a while, but unless {\basel} is the upper common tangent of $L$ and $R$, at some point the circumference of our transforming circle will encounter a new point, belonging either to $L$ or $R$, giving rise to a new triangle with a point-free circumcircle. The new $L-R$ edge of this triangle is the next cross edge determined by the body of the main loop below.\par
In more detail, edge {\lcand} is computed so as to have as destination the first $L$ point to be encountered in this process, and {\rcand} the first $R$ point. A final test chooses the point among these two that would be encountered first. We start the cross edge iteration by computing the lower common tangent of $L$ and $R$, which defines the first cross edge. The edge {\ldi} is the clockwise convex hull edge out of the rightmost $L$ point, and symmetrically for {\rdi}. We assume these are returned by the recursive calls on $L$ and $R$. In the program below, prose in $\{\}$ is comments; certain portions of code that are just symmetric variants of others shown are elided and indicated by ``\prg{. . .}''.\par
\vskip0pt plus1000pt\eject
{\prog
$\{${\rm create a first cross edge ({\basel}) by computing the lower common tangent of $L$ and $R$ }$\}$
DO
\ \ WHILE CCW[ldi\porg, ldi\pdest, rdi\porg] DO ldi \pgets ldi\plnext OD;
\ \ IF CCW[rdi\pdest, rdi\porg, ldi\porg] THEN rdi \pgets rdi\prprev ELSE EXIT;
\ \ OD;
\null
basel \pgets Connect[rdi\psym, ldi, left];
\null
$\{${\rm initialize {\lcand} to the counterclockwise first edge incident to {\basel$\dest$} and above {\basel} }$\}$
$\{${\rm symmetrically for {\rcand} }$\}$
lcand \pgets basel\psym\ponext; rcand \pgets basel\poprev;
\null
$\{${\rm let \prg{ValidL} and \prg{ValidR} be procedures which test whether {\lcand} and {\rcand} are above {\basel} }$\}$
ValidL : CCW[basel\porg, basel\pdest, lcand\pdest];
ValidR : CCW[basel\porg, basel\pdest, rcand\pdest];
\null
$\{${\rm this is the merge iteration }$\}$
DO
\ \ $\{${\rm advance {\lcand} counterclockwise, deleting the old {\lcand} edge, until the {\InCircle} test fails }$\}$
\ \ WHILE ValidL AND
\ \ \ \ InCircle[lcand\pdest, lcand\ponext\pdest, lcand\porg, basel\porg] DO
\ \ \ \ t \pgets lcand\ponext;
\ \ \ \ DeleteEdge[lcand];
\ \ \ \ lcand \pgets t;
\ \ \ \ OD;
\null
\ \ $\{${\rm symmetrically, advance {\rcand} clockwise }$\}$
\ \ . . .
\null
\ \ $\{${\rm if both {\lcand} and {\rcand} are invalid then {\basel} is the upper common tangent }$\}$
\ \ IF NOT ValidL AND NOT ValidR THEN EXIT;
\null
\ \ $\{${\rm connect to either {\lcand} or {\rcand} }$\}$
\ \ $\{${\rm if both are valid, then choose the appropriate one using the {\InCircle} test }$\}$
\ \ $\{${\rm if {\lcand} is the winner then make the next cross edge close the triangle with {\lcand} and {\basel}, }$\}$
\ \ $\{${\rm else make the next cross edge close the triangle with {\rcand} and {\basel} }$\}$
\ \ $\{${\rm advance {\basel} and reinitialize {\lcand} or {\rcand} }$\}$
\ \ IF NOT ValidL
\ \ \ \ OR (ValidR AND InCircle[lcand\pdest, lcand\porg, rcand\porg, rcand\pdest])
\ \ \ \ THEN BEGIN $\{${\rm connect to the right }$\}$
\ \ \ \ basel \pgets Connect[rcand, basel\psym, left];
\ \ \ \ rcand \pgets basel\psym\plnext;
\ \ \ \ END
\ \ ELSE . . . $\{${\rm connect to the left }$\}$
\ \ OD
\endprog}
We now elaborate on the above program. Recall first that the number of vertices, edges, and faces in a triangulation are all linearly related. Also, the lower common tangent computation takes linear time, as either {\ldi} or {\rdi} advances at each step. What about the cost of the {\lcand} computation? We can account for this loop by charging every iteration to the edge being deleted. Similarly, iterations of the {\rcand} loop can be charged to deleted edges. The rest of the body of the main loop requires constant time and may be charged to the $L-L$ or $R-R$ edge closing the next triangle. This shows that the overall cost of the merge pass is linear in the size of $L$ and $R$.\par
We now formally state the lemmas that prove the correctness of the above algorithm.\par
\lemma{9.2}{When the {\lcand} iteration stops, the circumcircle of the triangle defined by {\lcand} and {\basel} is free of other $L$ points.}\par
A symmetric statement holds for the {\rcand} iteration. Furthermore, the final choice between {\lcand} and {\rcand} is again done by the {\InCircle} test, namely {\rcand} will be chosen if \prg{InCircle[lcand\pdest, lcand\porg, rcand\porg, rcand\pdest]} is true. It can then be shown that:\par
\lemma{9.3}{The triangle defined by {\basel} and the next cross edge as computed by the above algorithm has a circumcircle free of both $L$ and $R$ points, and thus is a Delaunay triangle.}\par
\sect{10. Incremental techniques.}
The algorithm of the previous section assumes that all points are known at the beginning of time. For many applications we are interested in dynamic algorithms, which allow us to update our diagrams as new points are added or deleted. The machinery we have developed so far also yields simple and elegant update algorithms. These will be discussed in the full paper.\par
\sect{11. Conclusions.}\par
In this paper we have presented a new data structure for planar subdivisions which simultaneously represents the subdivision, its dual, and its mirror image. Our {\quadedge} structure is both general (it works for subdivisions on any two-dimensional manifold) and space efficient. We have shown that two topological operations, both simple to implement, suffice to build and dismantle any such structure.\par
We have also shown how, by using the {\quadedge} structure and the {\InCircle} primitive, we can get compact and efficient Voronoi/Delaunay algorithms. The {\InCircle} test is shown to be of value both for implementing and reasoning about such algorithms. The code for these algorithms is sufficiently simple that we have practically given all of it in this abstract.\par
\sect{12. References.}\par
\ref [\Baum] {Baumgart, B. G. {\it A Polyhedron Representation for Computer Vision}. AFIPS Conf. Proc., Vol. 44, 1975 NCC, pp. 589--596.}
\ref [\Braid] {Braid, I. C., Hillyard, R. C., and Stroud, I. A. {\it Stepwise construction of Polyhedra in Geometric Modeling}; in Brodlie, K. W. (ed.), {\it Mathematical Methods in Computer Graphics and Design}. Academic Press, London, 1980, pp. 123--141.}
\ref [\Even] {Even, S. {\it Algorithmic Combinatorics}. Macmillan, N. Y., 1973.}
\ref [\GrSi] {Green, P. J., and Sibson, R. {\it Computing Dirichlet Tessellations in the Plane.} Computer Journal, Vol. 21, no. 2, 1977, pp. 168--173.}
\ref [\Har] {Harary, F. {\it Graph Theory}. Addison-Wesley, 1972, p. 105.}
\ref [\IyKa] {Iyanaga, S., and Kawada, Y. {\it Encyclopedic Dictionary of Mathematics} (2nd. ed.). MIT Press, Cambridge, Mass., 1968.}
\ref [\Kirk] {Kirkpatrick, D. {\it Optimal Search in Planar Subdivisions}. Technical Report 81-13, Department of Computer Science, University of British Columbia, 1981.}
\ref [\Knuth] {Knuth, D. E. {\it The Art of Computer Programming}, Vol. I: Fundamental Algorithms (2nd. ed.). Addison-Wesley, 1975.}
\ref [\Lee] {Lee, D. T. {\it Proximity and Reachability in the Plane}. Report R-831, Coordinated Science Laboratory, University of Illinois, Urbana, 1978.}
\ref [\MaSu] {Mantyla, M. J., and Sulonen, R. {\it GWB: a Solid Modeler with Euler Operators}. IEEE Computer Graphics and Applications, Vol. 2, No. 5 (September 1982), pp. 17--31.}
\ref [\MuPr] {Muller, D. E., and Preparata, F. P. {\it Finding the Intersection of Two Convex Polyhedra}. Theoretical Computer Science Vol. 7 (1978), pp. 217--236.}
\ref [\Sham] {Shamos, M. I. {\it Computational Geometry}. Ph. D. dissertation, Yale University, 1977.}

\ref [\ShamHo] {Shamos, M. I., and Hoey, D. {\it Closest-Point Problems}. 16-th Annual Symposium on Foundations of Computer Science, 1975, pp. 151--162.}
\vskip 24pt plus 2pt minus 2pt
\hrule
\vskip 4pt
\vfill\eject\end