\input PLocHdr.tex
\begfile{PLoc.tex of October 6, 1983 4:35 pm --- Stolfi}
\inputtty % your chance to \def\printfilestamp{F}
% THINGS TO FIX:
% figure numbering; lemma/theorem numbering
% revise symbols for \Hor, \G, etc.
\paper{Optimal Point Location in a Monotone Subdivision}
\def\paptitle{Point location}
\author{Herbert Edelsbrunner, Leo J. Guibas, and Jorge Stolfi}
\institution{University of Graz, Xerox PARC, and Stanford University}
\def\AuEde{AE}
\def\BenMa{BM}
\def\BiPre{BP}
\def\ChazOne{C1}
\def\ChazTwo{C2}
\def\DoLi{DL}
\def\EHH{EHH}
\def\EdMa{EM}
\def\EOS{EOS}
\def\GuiSto{GS}
\def\Har{H}
\def\Kir{Ki}
\def\Knu{Kn}
\def\LeePre{LP}
\def\LiTa{LT}
\def\Pre{P}
\def\PreSup{PS}
\def\Sha{S}
\def\Tar{Tar}
\def\Wi{W}
\def\Thei{1}
\def\Theii{2}
\def\Theiii{3}
\def\Theiv{4}
\def\Thev{5}
\def\Thevi{6}
\def\Thevii{7}
\def\Theviii{8}
\def\Theix{9}
\def\Thex{10}
\def\Thexi{11}
\def\Thexii{12}
\def\proj#1{\Pi #1}
\def\below{\lsls}
\def\above{\grgr}
\def\justbelow{\prec}
\def\justabove{\succ}
\def\regionindex{\mathop{\#}}
\def\regionabove{\mathop{\char a\char b\char o\char v\char e}}
\def\regionbelow{\mathop{\char b\char e\char l\char o\char w}}
\def\LCA{\mathop{\char l\char c\char a}}
\def\MSB{\mathop{\char m\char s\char b}}
\def\REV{\mathop{\char r\char e\char v}}
% TREE, DAG, AND QUAD-EDGE OPERATORS
\def\father{\mathop{\char f\char a\char t\char h\char e\char r}}
\def\edge{\mathop{\char e\char d\char g\char e}}
\def\upptr{\mathop{\char u\char p}}
\def\downptr{\mathop{\char d\char o\char w\char n}}
\def\chain{\mathop{\char c\char h\char a\char i\char n}}
\def\xval{\mathop{\char x\char v\char a\char l}}
\def\leftptr{\mathop{\char l\char e\char f\char t}}
\def\rightptr{\mathop{\char r\char i\char g\char h\char t}}
\def\region{\mathop{\char r\char e\char g\char i\char o\char n}}
\def\lastin{\mathop{\char l\char a\char s\char t\>\char i\char n}}
\def\lastout{\mathop{\char l\char a\char s\char t\>\char o\char u\char t}}
\def\firstin{\mathop{\char f\char i\char r\char s\char t\>\char i\char n}}
\def\firstout{\mathop{\char f\char i\char r\char s\char t\>\char o\char u\char t}}
\def\forward{\mathop{\char f\char o\char r\char w\char a\char r\char d}}
\def\base{\mathop{\char b\char a\char s\char e}}
\def\onext#1{\mathop{\char o\char n\char e\char x\char t}(#1)}
\def\oprev#1{\mathop{\char o\char p\char r\char e\char v}(#1)}
\def\dnext#1{\mathop{\char d\char n\char e\char x\char t}(#1)}
\def\rot#1{\mathop{\char r\char o\char t}(#1)}
\def\org#1{\mathop{\char o\char r\char g}(#1)}
\def\dest#1{\mathop{\char d\char e\char s\char t}(#1)}
\def\sym#1{\mathop{\char s\char y\char m}(#1)}
\def\T{\hbox{\bi T\/}}
\def\Hor{S}
\def\Ver{{S^\ast}}
\def\G{G}
\def\H{H}
\def\eat#1{} % eats next token or group
\def\f#1{\hbox{\hskip1pt {\fx #1}\hskip1pt}}
\def\point#1#2{\vskip \refvskip pt\hbox to \textwidth pt{\hskip\refindent pt\hbox par \refbodywidth pt{\hbox to 0.0in {$\hskip -\refindent pt \hbox{#1} \hskip 0pt plus 1000pt minus 1000pt$}#2}}\par}
\support{The work of H.\ Edelsbrunner was supported by the Austrian Fonds zur F{\cmr\"\rm o}rderung der wissenschaftlichen Forschung. J.\ Stolfi, who is on leave from the University of S{\cmr\s\rm a}o Paulo, was partially supported by a grant from Conselho Nacional de Desenvolvimento Cient{\cmr\'\i}fico e Tecnol{\cmr\'\rm o}gico (CNPq).}
\abstract{Point location, often known in graphics as ``hit detection'', is one of the fundamental problems of computational geometry. In a point location query we want to identify which of a given collection of geometric objects contains a particular point. Let $\Sscr$ denote a subdivision of the Euclidean plane into monotone regions by a straight-line graph of $m$ edges. In this paper we exhibit a substantial refinement of the technique of Lee and Preparata for locating a point in $\Sscr$ based on separating chains. The new data structure, called a {\it layered dag}, can be built in $O(m)$ time, uses $O(m)$ storage, and makes possible point location in $O(\log m)$ time. Unlike previous structures that attain these optimal bounds, the layered dag can be implemented in a simple and practical way, and is extensible to subdivisions with edges more general than straight-line segments.}
\abstract{{\bf Keywords:} point location, monotone polygons, planar graphs, partial order, graph traversal, computational geometry.}
\section{1. Introduction}
{\it Point location} is one of the fundamental problems in computational geometry. In the two-dimensional case, we are given a subdivision $\Sscr$ of the plane into two or more regions, and then asked to determine which of those regions contains a given query point $p$. If the same subdivision is to be used for a large number of queries, as is often the case, we can reduce the total cost of this task by preprocessing the subdivision into a data structure suitable for the search. We will measure the performance of a proposed solution to this problem by three quantities: the preprocessing cost $P$, the storage cost $S$, and the query cost $Q$.
Optimal solutions for this search problem have been known since Lipton and Tarjan [\LiTa] and Kirkpatrick [\Kir]. For a subdivision $\Sscr$ with $m$ edges these optimal solutions simultaneously attain $S = O(m)$, $Q = O(\log m)$, and, under certain assumptions, also $P = O(m)$. The Lipton-Tarjan method is based on their graph separator theorem, and so far has been only of theoretical interest. Kirkpatrick's method, which consists of building a hierarchy of coarser and coarser subdivisions, is implementable, but still the implied constants are so large as to make current implementations of little practical interest. In addition, neither of these techniques seems to extend in a natural way to non-straight-line subdivisions.
Historically Dobkin and Lipton [\DoLi] were the first to achieve $O(\log n)$ query time, while using $O(n^2)$ space. Their method was subsequently refined by Preparata [\Pre] so that $O(n\log n)$ space suffices. Later Bilardi and Preparata [\BiPre] again gave a refinement which achieves $O(n)$ space in the expected case, while retaining $O(n\log n)$ space and $O(\log n)$ query time in the worst case. These solutions are applicable to non-straight-line subdivisions and seem to admit of efficient implementations.
A substantially different approach was initiated by Shamos [\Sha] and led to the well-known point location paper of Lee and Preparata [\LeePre], based on the construction of separating chains. This data structure attains $P=O(m\log m)$, $S=O(m)$, and $Q=O(\log^2 m)$. The constants of proportionality in these expressions are quite small, and the algorithms, particularly the query one, are simpler than those of Kirkpatrick. While Kirkpatrick's algorithm requires the regions to be triangulated, that of Lee and Preparata works for regions of a fairly general shape (monotone polygons, which include the convex ones). Recently Chazelle [\ChazTwo] described a modification of this solution as a special instance of a general method for ``searching in history''. His structure needs the same amount of space and time. These techniques again are applicable to non-straight-line subdivisions, although this possibility was not examined.
In a separate development, Edelsbrunner and Maurer [\EdMa] came up with a space-optimal solution which works for general subdivisions, and even for sets of arbitrary non-overlapping regions. The query time is $Q=O(\log^3 n)$, but it can be improved to $Q=O(\log^2 n)$ for rectangular subdivisions, where the structure becomes especially simple. For this reason a generalization to rectangular point location in higher dimensions has also been achieved; see Edelsbrunner, Haring, and Hilbert [\EHH].
The purpose of this paper is to show an elegant modification to the separating chain method of Lee and Preparata that yields a new optimal point location algorithm for monotone subdivisions of the plane; the algorithm is based on a new data structure called the {\it layered dag}. In this new structure the separating chains built into a binary tree by Lee and Preparata are refined so that (1) once a point has been discriminated against a chain, it can be discriminated against a child of that chain with constant extra effort, and (2) the overall storage only doubles. The layered dag simultaneously attains $S = O(m)$ and $Q = O(\log m)$. An additional insight allows us to build this dag (as well as the original Lee-Preparata tree) in only $O(m)$ time. Not only is this new structure optimal with respect to all of preprocessing, space, and query costs, but in fact a simple implementation, with small constants of proportionality, is possible. Like its Lee-Preparata predecessor, it also extends to non-straight-line subdivisions.
In the organization of the paper we have adopted the policy that each new data structure is first introduced by how it is to be used, and then by how it is to be constructed. We have placed emphasis throughout on implementation considerations, as well as the underlying theory. Section 2 describes the basic notions surrounding monotone polygons and subdivisions. Section 3 shows how non-monotone subdivisions can be made monotone. Sections 4, 5, and 6 introduce a partial ordering of the regions and its use in getting a complete family of separators. Section 7 reviews the Lee-Preparata structure, while sections 8 and 9 introduce the layered dag, and explain its use in point location and its construction. Section 10 gives an algorithm for constructing a complete family of separators based on a traversal of the subdivision. Two key implementation issues are taken up in sections 11 and 12; these may be omitted on a first reading. Section 11 describes some bit-twiddling trickery used to give us linear preprocessing time, while section 12 discusses how subdivisions can be traversed without auxiliary storage. Finally section 13 contains some further applications of the layered dag to problems in computational geometry.
\section{2. Monotone polygons and subdivisions}
An {\it interval} is a convex subset of a straight line, i.e., the whole line, a ray, a segment, a single point, or the empty set. An interval is {\it proper} if it contains more than one point, and is {\it closed} or {\it open} depending on whether or not it contains its endpoints (if any). A subset of the plane is said to be {\it $y$-monotone} if its intersection with any line parallel to the $y$ axis is a single interval (possibly empty). Every convex polygon is $y$-monotone, but the converse is obviously false. For any direction $d$ the concept of a $d$-monotone polygon can similarly be defined. Throughout this paper the regions, polygons, etc., we will be dealing with will all be monotone in the {\it same} direction. Without loss of generality we can then assume that they are all monotone in $y$. From now on the term ``monotone'' will be used as a synonym for ``$y$-monotone''.
We define a {\it polygon} as an open and simply connected subset of the plane whose boundary can be decomposed into a finite number of points ({\it vertices}) and open intervals ({\it edges}). Note that, according to this definition, polygons may have infinite extent. A {\it subdivision} is a partition of the plane into a finite number of disjoint polygons ({\it regions}), edges, and vertices; these parts are collectively called the {\it elements} of the subdivision. It is not hard to see that every edge of a subdivision is on the boundary between two regions (not necessarily distinct), that every vertex is incident to some edge, that every finite endpoint of an edge is a vertex, and that every region (unless there is only one) has some edge on its boundary. From these facts and from Euler's theorem, we can conclude that a subdivision with $m$ edges has $O(m)$ vertices and regions, thus justifying our use of $m$ as the measure of problem size.
A subdivision is said to be {\it monotone} if all its regions are monotone and it has no vertical edges. The last condition is a technical one: it is imposed only to simplify the proofs and algorithms and can be removed with some care. Figure 1 illustrates a monotone subdivision (arrowheads denote vertices at infinity).
With minor caveats and modifications, monotone subdivisions include many interesting subcases, such as triangulations, subdivisions of the plane into convex pieces, and so forth. The lemma below shows that monotone subdivisions are precisely the ``regular planar straight-line graphs'' as defined by Lee and Preparata [\LeePre]:
\lemma{\Thei}{A subdivision with no vertical edges is monotone if and only if every vertex is incident to at least two distinct edges, one at its left and one at its right.}
\proof{First, assume there is a vertex $v$ such that all edges incident to it lie to the right of $v$. Let $R$ be the region that is first encountered by moving down from $v$; it must also be the same region we encounter first when moving up from $v$. Then $R$ is not monotone, for its intersection with the vertical line passing through $v$ consists of at least two distinct intervals separated by $v$. A similar argument holds if all edges incident to $v$ lie to its left.
\pp For the converse, assume $R$ is a region that is not monotone, and let $\lscr$ be a vertical line whose intersection with $R$ consists of two or more components, as in figure 2. Since $R$ is connected, any two components $I1$, $I2$ of the intersection are connected by a simple path $\pi$ in $R$. We can always choose $I1$, $I2$, and $\pi$ so that $I1$ is above $I2$, and $\pi$ does not meet $\lscr$ except at its endpoints $p1\in I1$ and $p2\in I2$. Then $\pi$ and the interval $[p1, p2]$ delimit a closed and bounded region $S$ of the plane.
\pp Suppose $\pi$ lies to the left of $\lscr$. The boundary of $R$ must enter $S$ at the lower endpoint of $I1$ (and also at the upper endpoint of $I2$). Since the boundary cannot meet the path $\pi$, there must be some vertex of the subdivision in the interior of $S$. Let $v$ be the {\it leftmost} such vertex; clearly all edges incident to $v$ lie to the right of it, and we have proved the lemma. A similar argument holds if $\pi$ lies to the right of $\lscr$.}
Therefore, we can check whether any subdivision is monotone by enumerating all the edges incident to each vertex, and checking whether they satisfy lemma {\Thei}. Each edge will be examined at most twice, and each vertex once, so this algorithm runs in $O(m)$ time. In fact, it is possible to determine within this time bound whether there is any rotation of the subdivision that makes it monotone, as was shown by Preparata and Supowit [\PreSup].
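The check of lemma {\Thei} is easy to code. The following is a minimal sketch, under representation assumptions of our own (not the paper's): the subdivision is given as a list of $(x, y)$ vertices and a list of vertex-index pairs for its finite edges, with no vertical edges. It returns the irregular vertices, so the input is monotone exactly when the result is empty.

```python
def irregular_vertices(vertices, edges):
    """Indices of the vertices violating the condition of lemma 1,
    i.e. lacking an incident edge on their left or on their right.
    `vertices` are (x, y) pairs; `edges` are vertex-index pairs (u, w).
    Vertical edges are assumed absent."""
    left = [False] * len(vertices)   # vertex has an edge going left?
    right = [False] * len(vertices)  # vertex has an edge going right?
    for u, w in edges:
        if vertices[u][0] > vertices[w][0]:
            u, w = w, u              # orient the edge left-to-right
        right[u] = True              # w lies to the right of u
        left[w] = True               # u lies to the left of w
    return [v for v in range(len(vertices))
            if not (left[v] and right[v])]
```

Each edge is examined once and each vertex once, so the test runs in $O(m)$ time as claimed. On a plain triangle, for instance, the leftmost and rightmost vertices are reported as irregular.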
\section{3. Regularization}
Lee and Preparata [\LeePre] have shown that an arbitrary subdivision with $m$ edges can be transformed into a monotone subdivision having $\leq 2m$ edges in $O(m\log m)$ time, a process they termed {\it regularization}. (They define a vertex as being {\it regular} if it satisfies the condition of lemma {\Thei}.) For the sake of completeness we include here an algorithm for regularization attaining these bounds. We believe that our explanation is more complete and at the same time more succinct than that of [\LeePre]. The problem of whether regularization can be done in time faster than $O(m\log m)$ remains open.
The task of the regularization process is to add additional edges to the subdivision so as to make each vertex regular. These new edges must maintain the ``subdivisionhood'' of our structure: they must connect existing vertices, may not intersect other edges except at endpoints, and will therefore subdivide existing regions into subregions. An example of a non-monotone subdivision and a possible regularization of it is shown in figure 3.
The algorithm we propose in this section is based on the {\it sweep-line} paradigm, which has been used extensively in computational geometry. We imagine that a vertical line sweeps the plane from left to right. We call {\it active} those vertices, edges, and regions currently intersecting the sweeping line. The active edges have an obvious vertical ordering, except possibly at a vertex. In fact this ordering can change only when the sweep-line passes through a vertex. At that instant some edges may end and others may start. Each interval of the sweep-line between successive intersections by edges can be assigned a vertex we call its {\it generator}, which represents the last vertex that gave rise to this interval. Formally this is defined as the last vertex swept over that happened to lie in the interval between the two edges in question. This vertex may be a left endpoint of one of the two edges, as figure 4 shows, or it may be an irregular vertex with no right-going edge, as figure 5 shows.
The regularization algorithm starts out by sorting all the vertices of the subdivision $\Sscr$ according to their $x$-coordinate. Conceptually we imagine that the sweep line jumps from one corridor between successive vertices to the next, because within a corridor neither the ordering of the active edges nor the generators of any intervals can change. In practice we maintain a balanced tree of all the active edges, ordered according to the $y$-coordinate where they intersect the sweeping line. As we pass each vertex $v$, the list of active edges is updated by standard balanced-tree insertion and deletion operations. It is also straightforward to see how to update the generators as we pass through the vertex $v$. The new vertex simply becomes the generator for each interval bounded by an edge starting at $v$. If, however, $v$ has no right-going edges, it becomes the generator for its immediately enclosing interval. The regularization algorithm can now be stated as follows: when a vertex $v$ is encountered that has no left-going edges, connect it by an edge to the generator of the interval that $v$ happened to fall in. This satisfies all the left-irregular vertices, and a symmetric pass of the sweep line from right to left will satisfy the right-irregular vertices. We omit here the actual code for this algorithm as it is quite simple and not in the mainstream of the results of this paper.
\lemma{\Theii}{The edges added in the regularization process connect vertices and do not intersect any previously existing edges. Thus the result of regularization is a new monotone subdivision.}
\proof{Consider the sweep-line when it has just arrived at the left-irregular vertex $v$, as in figure 6. Let $a$ and $b$ denote the edges immediately above and below $v$ in the vertical ordering, and let $w$ denote the generator of the interval $[a,b]$. We claim that the straight-line segment from $v$ to $w$ does not intersect any other edge of the subdivision. This follows because the hatched region in figure 6 is free of other vertices or edges of the subdivision, by the definition of the generator $w$.}
The dominant term in the complexity of this regularization process is the sorting of the vertices and maintaining the balanced tree; thus the cost in time is $O(m\log m)$. We have not dealt with the case of vertical edges, but the required modifications to the above algorithm are not difficult.
\section{4. The vertical ordering}
We denote by $\proj{A}$ the orthogonal projection of a set $A$ on the $x$-axis. The projection $\proj{R}$ of a monotone polygon is an open interval of the $x$-axis, whose endpoints, if any, are the projections of at most two {\it extremal vertices} of $R$. The {\it frontier} of $R$ is the boundary of $R$ minus its extremal vertices. This frontier consists of two disjoint pieces, the {\it top} and the {\it bottom} of the region $R$. Each is either empty, or a connected polygonal line, which may extend to infinity in one or both directions.
Given two subsets $A$ and $B$ of the plane, we say that $A$ is above $B$ ($A\above B$), and $B$ is below $A$ ($B\below A$), if for every pair of vertically aligned points $(x, ya)$ of $A$ and $(x, yb)$ of $B$ we have $ya \geq yb$, with strict inequality holding at least once. Some of these concepts are illustrated in figure 7. For the rest of this section, we will restrict $\above$ to the elements of a fixed monotone subdivision $\Sscr$, with $n$ regions and $m$ edges. Then $\above$ has the following properties:
\lemma{\Theiii}{For any two elements $A$ and $B$ of a monotone subdivision, $A\above B$ if and only if $A\neq B$ and there is some vertical line segment with lower endpoint in $B$ and upper endpoint in $A$.}
\proof{The ``only if'' part is obvious from the definition. For the converse, assume there is such a segment with endpoints $(x, ya)\in A$ and $(x, yb)\in B$, $ya>yb$, but suppose $A\not\above B$. Then there must be two other points $(x\pr, ya\pr)\in A$ and $(x\pr, yb\pr)\in B$ such that $ya\pr<yb\pr$. Let's first suppose $x=x\pr$: either we have $ya\pr<yb<ya$, or $yb<ya\pr<yb\pr$. Since all elements of a monotone subdivision are monotone, a vertical segment with both endpoints in the same element is entirely contained in that element. Therefore in the first case we conclude that $(x, yb)\in A$, and in the second that $(x\pr, ya\pr)\in B$, contradicting the fact that $A\inte B=\empty$.
\pp Now suppose $x\neq x\pr$. Since $A$ and $B$ are connected, there is a simple path $\pia$ in $A$ that connects $(x, ya)$ to $(x\pr, ya\pr)$, and an analogous one $\pib$ for $B$; see figure 8. There are pieces $\gammaa$ of $\pia$ and $\gammab$ of $\pib$ that connect the vertical line at $x$ to the vertical line at $x\pr$ while staying always between the two. Since $A$ and $B$ are monotone, the endpoint $pa$ of $\gammaa$ at $x$ must be above the endpoint $pb$ of $\gammab$ at $x$; for the same argument, the endpoints $pa\pr$ and $pb\pr$ of the two paths at $x\pr$ must be in the opposite order. We conclude the two paths must cross at some point $q$; but then $q\in A\inte B$, a contradiction.}
\lemma{\Theiv}{Any two elements $A$ and $B$ of a monotone subdivision satisfy exactly one of $A=B$, $A\below B$, $A\above B$, or $\proj{A}\inte\proj{B}=\empty$.}
\proof{Immediate from the definition and lemma {\Theiii}.}
If three elements $A$, $B$ and $C$ are intercepted by a common vertical line, then from $A\below B$ and $B\below C$ we can conclude $A\below C$. Therefore, the relation $\below$ is transitive when restricted to all the elements intercepted by a given vertical line; indeed, in that case it is a linear order, as figure 9 illustrates. In general transitivity does not hold; however, the following is true:
\lemma{\Thev}{The relation $\below$ is acyclic.}
\proof{Suppose that is not true. Let $E0 \below E1 \below E2 \below \ldots \below Ek=E0$ be a cycle of $\below$ with minimum length, where each $Ei$ is an element of the subdivision $\Sscr$. From lemma {\Theiv}, it follows immediately that $k>2$.
\pp The $x$-projections of any two consecutive elements $Ei$ and $E{i+1}$ in this cycle must have some abscissa $xi$ in common. If for some $i$ we had $\proj{E{i}}\subset \proj{E{i-1}}$, then the vertical line through $xi$ would meet $E{i-1}$, $Ei$, and $E{i+1}$; transitivity would hold, and we would have $E{i-1}\below E{i+1}$. See figure 10. But then we could omit $Ei$ and still get a cycle, contradicting the assumption that $k$ is minimum. For analogous reasons, we cannot have $\proj{E{i}}\subset \proj{E{i+1}}$.
\pp Let $Ei$ be one of the ``leftmost'' cycle elements, i.e., such that none of the intervals $\proj{Ej}$ contains a point to the left of $\proj{Ei}$. The projections $\proj{E{i-1}}$ and $\proj{E{i+1}}$ of its neighbors must both meet $\proj{E{i}}$, and, by what we have just said, extend beyond its right endpoint. But then there would again be a vertical line meeting all three elements, implying $E{i-1}\below E{i+1}$, so once more $k$ would not be minimum. We conclude that no such cycle exists.}
We say that a region $A$ is {\it immediately above} a region $B$ ($A\justabove B$) if $A\above B$ and the frontiers of the two regions have at least one common edge; see figure 11. Clearly, $\justabove$ implies $\above$, but the converse is not true in general. The following result may then seem somewhat surprising:
\lemma{\Thevi}{The transitive closure of $\below$ is the same as that of $\justbelow$.}
\proof{Clearly, since $\justbelow$ implies $\below$, the same will be true of their transitive closures, so we only have to prove the other direction.
\pp If $B\below A$, for every $x\in \proj{A}\inte\proj{B}$ there is a vertical segment with abscissa $x$, lower endpoint in $B$, and upper endpoint in $A$. We can always choose $x$ so that the segment passes through no vertex of $\Sscr$. The regions $B, C1, C2, \ldots, Ck, A$ intercepted by this segment, from bottom to top, clearly satisfy $B\justbelow C1\justbelow C2 \justbelow \ldotss \justbelow Ck\justbelow A$. We conclude that $\below$, and therefore its transitive closure, is contained in the transitive closure of $\justbelow$.}
\section{5. Separators}
A {\it separator} for $\Sscr$ is a polygonal line $s$, consisting of vertices and edges of $\Sscr$, with the property that it meets every vertical line at exactly one point. Since $s$ extends from $x=-\infty$ to $x=+\infty$, any element of $\Sscr$ that is not part of $s$ is either above or below it. See figure 12. The elements of $s$ have pairwise disjoint projections on the $x$-axis, and so can be ordered from left to right; the first and last elements are infinite edges.
A {\it complete family of separators} for a monotone subdivision $\Sscr$ with $n$ regions is a sequence of $n-1$ distinct separators $s1\below s2 \below \ldotss \below s{n-1}$. It is clear that there must be at least one region between any two consecutive separators, and also one below $s1$ and one above $s{n-1}$. If a region is below a separator $si$, it must also be below any separator $sj$ with $j>i$. We can conclude that, if $\Sscr$ admits a complete family of separators, its regions can be enumerated as $R0, R1, \ldots, R{n-1}$ in such a way that $Ri\below sj$ if and only if $i< j$; in particular,
$$R0\below s1 \below R1 \below s2 \below \ldots \below s{n-1} \below R{n-1}.\eqno(5.1)$$
Given a complete family of separators and an enumeration of the regions as in (5.1), let us denote by $\regionindex R$ the index of a region $R$ in the enumeration. Then $s{\regionindex R}\below R\below s{\regionindex R + 1}$. Also, if we let $\regionabove(e)$ be the region above the edge or vertex $e$, and $\regionbelow(e)$ the one below it, then the following property holds:
\lemma{\Thevii}{If $i=\regionindex{\regionbelow(e)}$ and $j=\regionindex{\regionabove(e)}$, then the separators containing $e$ will be exactly $s{i+1}, s{i+2}, \ldots, sj$.}
\proof{The edge $e$ will belong to a separator $sk$ if and only if the latter lies above $Ri$ and below $Rj$, i.e., if and only if $i< k\leq j$.}
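In code, lemma {\Thevii} amounts to a single line. The function below (the name is ours) maps the two region indices adjacent to an edge to the indices of the separators containing it:

```python
def separators_containing(e_below, e_above):
    """Lemma 7: if i = #(region below e) and j = #(region above e),
    the separators containing edge e are exactly s_{i+1}, ..., s_j."""
    return list(range(e_below + 1, e_above + 1))
```

For example, an edge with region 1 below it and region 4 above it lies on $s2$, $s3$, and $s4$, and on no other separator.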
\section{6. Existence of a complete family of separators}
It is a well-known result [\Knu] that any acyclic relation over a finite set can be extended to a linear (or total) order. Therefore, by using lemma {\Thev} we can conclude that there is an enumeration $R0, R1, \ldots, R{n-1}$ of the regions of $\Sscr$ that is compatible with $\below$, i.e. such that $Ri\below Rj$ implies $i<j$. Furthermore, any enumeration $R0, R1, \ldots, R{n-1}$ of the regions that is compatible with $\justbelow$ is also compatible with $\below$, and vice-versa.
Since a region with nonempty bottom frontier is immediately above some other region, and therefore not minimal under $\justbelow$, the first region in any such enumeration has no bottom frontier, and extends infinitely in the ``down'' direction. Since vertical edges are not allowed, its $x$-projection is the whole $x$-axis. Similarly, the last region $R{n-1}$ is the only one with no top frontier, and also projects onto the whole $x$-axis. Therefore, we always have $R0\below Ri\below R{n-1}$ for $0<i<n-1$.
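A compatible enumeration can be produced by any topological-sorting algorithm applied to $\justbelow$. Below is a sketch using Kahn's method; the adjacency format (a map from each region to the set of regions immediately above it) is our assumption, and the paper itself obtains the separators directly by a traversal of the subdivision in section 10.

```python
from collections import deque

def enumerate_regions(n, just_above):
    """Enumerate regions R0, ..., R{n-1} compatibly with the
    'immediately below' relation: Ri immediately below Rj implies i < j.
    `just_above[r]` is the set of regions immediately above region r."""
    indeg = [0] * n                  # number of regions immediately below r
    for r in range(n):
        for s in just_above.get(r, ()):
            indeg[s] += 1
    queue = deque(r for r in range(n) if indeg[r] == 0)
    order = []
    while queue:
        r = queue.popleft()
        order.append(r)
        for s in sorted(just_above.get(r, ())):  # sorted: deterministic output
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return order
```

Lemma {\Thev} (acyclicity of $\below$) guarantees that the queue never empties before all $n$ regions have been output.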
We are ready to prove the main result of this section:
\theorem{\Theviii}{Every monotone subdivision $\Sscr$ admits a complete family of separators.}
\proof{Let $R0, R1, \ldotss, R{n-1}$ be a linear ordering of the regions of $\Sscr$ that is compatible with the $\below$ relation, i.e. $Ri\below Rj$ only if $i<j$. For $i=1, 2, \ldots, n-1$, let $si$ be the ``boundary'' between the regions with indices $\leq i$ and those with indices $>i$. That is, a vertex or edge $e$ belongs to $si$ if and only if it is on the frontiers of two regions $Rk$ and $Rj$ with $k< i \leq j$. See figure 12.
\pp Now consider any vertical line $\lscr$, and let $R{i1}, R{i2}, \ldots, R{iq}$ be the regions it meets, from bottom to top. Since $\lscr$ meets $R0$ and $R{n-1}$, and $R{i1}\below R{i2}\below \ldots\below R{iq}$, we will have $0=i1<i2<\ldots<iq=n-1$. Therefore, there is exactly one point on $\lscr$ that is on the frontier between a region with index $< i$ and a region with index $\geq i$, that is, on $si$. Furthermore, the intersection of $\lscr$ with $si$ will be equal to or above that with $s{i-1}$, and for some such lines (those that meet $R{i-1}$) the two intersections will be distinct. So, we have $s1\below s2\below \ldots\below s{n-1}$.
\pp Clearly, the elements of $si$ have disjoint $x$-projections, and therefore can be ordered from left to right; they must be alternately edges and vertices, the first and last being infinite edges. To prove that $si$ is a separator, it remains only to show that $si$ is connected; if that were not the case, we would have some vertex $v$ of $si$ that is distinct from, but has the same $x$-coordinate as, an endpoint $u$ of an adjacent edge $e$ of $si$; see figure 13. But then we would have $u\below R\below v$ (or vice-versa), for some region $R$; and therefore $e\below R\below v$ (or vice-versa), contradicting the construction of $si$. Therefore, each $si$ is a separator, and $s1, s2, \ldots, s{n-1}$ is a complete family of them.}
\section{7. A point location algorithm}
Later on we will tackle the problem of how to efficiently compute a complete family of separators for a monotone subdivision. Let's therefore assume for now that we have such a family $s1, s2, \ldots, s{n-1}$, with a corresponding enumeration of the regions $R0, R1, \ldots, R{n-1}$ satisfying (5.1); we will show next how they can be used to determine, in time $O(\log n\log m)$, the unique element of $\Sscr$ that contains a given point $p$.
The algorithm we will describe is essentially that of Lee and Preparata [\LeePre], and uses two levels of binary search. The inner loop takes a separator $si$ (as a linear array of edges and vertices, sorted by $x$-coordinate), and determines by binary search an edge or vertex $e$ of $si$ whose $x$-projection contains the $x$-coordinate of $p$. By testing $p$ against $e$, we will know whether $p$ is above $si$ or below $si$ (or, possibly, on $e$ itself, in which case the search terminates). The outer loop performs binary search on $i$, so as to locate $p$ between two consecutive separators $si$ and $s{i+1}$, that is to say in a region $Ri$.
Besides the separators, the algorithm assumes the index $\regionindex R$ can be obtained for any given region $R$, and similarly the adjacent regions $\regionabove(e)$ and $\regionbelow(e)$ can be obtained from an edge $e$, all in constant time. We will see that the construction of these tables is part of the process of constructing the family of separators. The search algorithm uses these tables to reduce substantially the storage requirements (and also speed up the search a little bit).
Let $\T$ be the infinite, complete binary search tree with internal nodes $1, 2, 3, \ldots$ and leaves $0, 1, 2, 3, \ldots$, as in figure 14. The tree $\T$ is used as a ``flowchart'' for the outer loop of the search algorithm, with the convention that each internal node $i$ represents a test of $p$ against the separator $si$, and each leaf $j$ represents the output ``$p$ is in $Rj$''. While reading the algorithm it should be borne in mind that ``left'' in the tree corresponds to ``down'' in the subdivision. The left and right children of an internal node $u$ of $\T$ will be denoted by $\lscr(u)$ and $r(u)$, respectively. We let $\LCA(i, j)$ be the {\it lowest common ancestor} in $\T$ of the leaves $i$ and $j$, that is, the root of the smallest subtree of $\T$ that contains both $i$ and $j$.
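The $\LCA$ function admits a constant-time implementation by the bit manipulations deferred to section 11. As a sketch (our formulation, under the structure of $\T$ just described): the internal node $v$ whose binary representation ends in exactly $t$ zeros spans the leaves $v-2^t$ through $v+2^t-1$, so the lowest common ancestor of leaves $i<j$ is obtained by clearing the bits of $j$ below the highest bit in which $i$ and $j$ differ.

```python
def lca(i, j):
    """Lowest common ancestor in T of the leaves i and j, 0 <= i < j."""
    h = (i ^ j).bit_length() - 1   # highest bit position where i, j differ
    return (j >> h) << h           # common prefix of i and j, a 1, then 0s
```

For instance, leaves 2 and 5 first differ at bit position 2, so their lowest common ancestor is the internal node 4, which spans leaves 0 through 7.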
When testing $p$ against a separator, we adopt the convention that each edge contains its right endpoint but not its left. This is unambiguous since there are no vertical edges. If the algorithm detects that $p$ lies on some edge $e$ during a discrimination against a separator, it can terminate the search and, by comparing $p$ with the right endpoint of $e$, determine if our point is a vertex of the subdivision.
\def\loc{{\char l\char o\char c}}
\algorithm{1}{Point location in a monotone subdivision.}{
\comm{The output of the algorithm is a reference, stored in the variable $\loc$, to the vertex, edge, or region of $\Sscr$ containing $p$.}
\begblock
\step {\sno1 Set $i\gets 0$, $j\gets n-1$.}
\step {\sno2 While $i<j$, do:}
\begblock
\comm {At this point we know $p$ is above the separator
$s_i$ and below $s_{j+1}$ (whenever those separators exist).
That is, $p$ is in one of the regions
$R_i, R_{i+1}, \ldots, R_j$, or in some edge or vertex
between those two separators. At each iteration of this loop,
either $i$ increases or $j$ decreases, and their lowest common
ancestor in $\T$ moves down at least by one level.}
\step {\sno3 Set $\lscr\gets \LCA(i, j)$.}
\step {\sno4 Find (by binary search) an edge
$e$ in $s_\lscr$ such that $\proj{p}\in \proj{e}$.
Let $a=\regionindex{\regionabove(e)}$,
$b=\regionindex{\regionbelow(e)}$.}
\comm {By lemma {\Thevii}, $e$ belongs also to
the separators $s_{b+1}, s_{b+2}, \ldots, s_{a}$.
Therefore, by testing $p$ against $e$ we
conclude it is either below $s_{b+1}$ or above $s_{a}$.
\step {\sno5 If $p$ is below $e$, set $j\gets b$;
if $p$ is above $e$, set $i\gets a$; else set $\loc\gets e$ and
terminate the search.}
\comm {We can now test if $p$ is the right endpoint of $e$.}
\endblock
\step {\sno6 Set $\loc\gets R_i$ and terminate the search.}
\endblock}
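To make the two-level search concrete, here is a small executable sketch of algorithm 1. It is ours, not the paper's code, and it uses a deliberately simplified representation: each separator $s_\lscr$ is stored in full as a list of non-vertical edges $(x_l, x_r, y_l, y_r, a, b)$ sorted by $x$, where $a$ and $b$ are the indices of the regions above and below the edge; $\LCA$ is computed by the bit formula derived in section 11.

```python
def msb(k):
    """2**floor(lg k), the most significant bit of k > 0."""
    return 1 << (k.bit_length() - 1)

def lca(i, j):
    """Lowest common ancestor of leaves i < j in the tree T."""
    return j & ~(msb(i ^ j) - 1)

def locate(sep, n, p):
    """Return the index of the region containing p, or ('edge', l, k)
    if p lies on edge k of separator s_l.  sep[l] is the full list of
    edges (xl, xr, yl, yr, above, below) of s_l, sorted by x."""
    px, py = p
    i, j = 0, n - 1
    while i < j:
        l = lca(i, j)
        chain = sep[l]
        # Inner binary search: the edge of s_l whose x-projection
        # contains px (edges own their right endpoint).
        lo, hi = 0, len(chain) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if px > chain[mid][1]:
                lo = mid + 1
            else:
                hi = mid
        xl, xr, yl, yr, above, below = chain[lo]
        ey = yl + (yr - yl) * (px - xl) / (xr - xl)  # edge's y at px
        if py > ey:
            i = above        # p is above s_above
        elif py < ey:
            j = below        # p is below s_{below+1}
        else:
            return ('edge', l, lo)
    return i

# Toy subdivision: three stacked regions separated by the horizontal
# lines y = 1 and y = 2, each separator split into two edges at x = 0.
sep = {1: [(-10, 0, 1, 1, 1, 0), (0, 10, 1, 1, 1, 0)],
       2: [(-10, 0, 2, 2, 2, 1), (0, 10, 2, 2, 2, 1)]}
```

On this toy subdivision, `locate(sep, 3, (1, 1.5))` reports region 1, and a query point lying on a separator is reported as the edge containing it.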
If we were to represent each separator independently as a linear array with all of its edges and vertices, we would have to store $n-1$ chains, whose average length can be as large as $\Theta(n)$ for some classes of subdivisions. So, the storage requirement of this basic method is $\Theta(n^2)$ in the worst case. However, after $p$ has been tested against the edge $e$ in step 5, $i$ and $j$ are updated in such a way that we will never again look at the edges of any other separator that contains $e$. Therefore, if an edge $e$ is in the common frontier of regions $R_i$ and $R_j$, it needs to be stored only once, in $s_\lscr$, where $\lscr$ is the lowest common ancestor of $i$ and $j$. By lemma {\Thevii} any other chain containing $e$ will be a descendant of $s_\lscr$, so $e$ has been stored with the {\it first} chain containing it that would be encountered in a search down the tree.
Note that only those edges assigned to $s_\lscr$ according to the above rule are actually stored in such a structure. In general these will form a proper subset of all the original edges of $s_\lscr$, so between successive stored edges of $s_\lscr$ there may be {\it gaps}. See figure 15. The ordered list of edges stored with separator $s_\lscr$ in $\T$ will be termed the {\it chain} $c_\lscr$. The total storage required by this method is clearly only $O(m)$; its implementation is explained in detail in section 10. An important tool in this process is an efficient way to compute the function $\LCA(i,j)$ in $O(1)$ time, which is presented in section 11.
The binary search along each separator $s_i$ can be performed in $O(\log m)$ time, if the edges of $s_i$ are stored in a linear array or balanced binary search tree sorted from left to right. At the first iteration, the internal node $\LCA(0,n-1)$ of $\T$ is at level $\lceil \lg n\rceil$; at each iteration it descends at least one level, so we have $O(\log n)$ iterations, and a total time bound of $O(\log n\log m)=O(\log^2 m)$. This bound is tight: figure 16 shows a subdivision with $m$ edges that realizes it. This example has ${\sqrt m}+1$ regions and a family of $\sqrt m$ disjoint separators with $\sqrt m$ edges each.
\section{8. A faster point location method}
Our hope in obtaining a faster algorithm for point location comes from the fact that there is some obvious information loss in the method of section 7. Specifically, when we discriminate a point against a chain $c_u$, we must localize it in the $x$ coordinate to within an edge or a gap of $c_u$. Yet when we continue the search down some child of $u$, we start this localization process all over again. It would be nice if each edge or gap in a chain pointed to the place on each child chain where the $x$ search on that child will finish. The trouble is that an edge or gap of the parent can cover many edges or gaps of the child.
The novel idea in the technique we present in this section is to refine the above chains so that the localization in $x$ of a point with respect to one chain allows us to do the same localization in its children with only {\it constant} extra effort. In its ultimate form, this idea leads to breaking up each chain at the $x$ coordinates of all vertices of the subdivision. A bit of thought will convince the reader, however, that such a fine refinement will require quadratic storage for its representation. Instead, we describe below a refinement that produces for each chain $c_u$ a list $L_u$ of $x$-values, defining a partitioning of the $x$ axis into {\it $x$-intervals}. Each such interval of $L_u$ overlaps the $x$-projection of exactly one edge or gap of $c_u$ and {\it at most two} $x$-intervals of each of the lists $L_{\lscr(u)}$ and $L_{r(u)}$. As we will see, this last condition is compatible with keeping the overall storage linear.
A convenient way to represent the lists $L_u$ and their interconnections is by means of a data structure we call the {\it layered dag}. The ``leaf'' nodes of the dag correspond to the regions of $\Sscr$. The internal nodes correspond to tests of three kinds: {\it $x$-tests}, {\it edge tests}, and {\it gap tests}. A list $L_u$ is represented in the dag by a collection of such nodes: each $x$-value of $L_u$ gives rise to an $x$-test, and each interval between successive $x$-values to an edge or gap test. See figure 17. An $x$-test node $t$ contains the corresponding $x$-value of $L_u$ ($\xval(t)$) and two pointers, $\leftptr(t)$ and $\rightptr(t)$, to the adjacent edge or gap nodes of $L_u$. An edge or gap test node $t$ contains two links $\downptr(t)$ and $\upptr(t)$ to appropriate nodes of $L_{\lscr(u)}$ and $L_{r(u)}$. In addition, an edge test contains a reference $\edge(t)$ to the edge of $c_u$ whose projection covers the $x$-interval represented by $t$. A gap test node contains instead the chain number $\chain(t)=u$. The various types of nodes present are illustrated in figure 18.
Let us define more precisely the meaning of the links $\downptr(t)$ and $\upptr(t)$. The properties of the refined lists ensure that the $x$-interval of $L_u$ corresponding to an edge or gap test $t$ covers either one $x$-interval $I$ of $L_{\lscr(u)}$, or two such intervals $I_1$, $I_2$ separated by some element $x_k$ of $L_{\lscr(u)}$. In the first case, $\downptr(t)$ points to the edge or gap test of $L_{\lscr(u)}$ corresponding to the interval $I$; in the second case, $\downptr(t)$ points to the $x$-test corresponding to the separating abscissa $x_k$. Similarly, the link $\upptr(t)$ points to a node of $L_{r(u)}$ defined in an analogous manner. In the special case when $r(u)$ and $\lscr(u)$ are leaves of $\T$, we let $\downptr(t)$ and $\upptr(t)$ point to the corresponding region nodes.
The layered dag then consists of the test nodes for all lists $L_1, L_2, \ldots, L_{n-1}$, linked together in this fashion. We can use this dag to simulate algorithm 1; the main difference is that each time we move from a separator to one of its children, the $\downptr$ or $\upptr$ pointers (possibly together with an $x$-test) allow us to locate $x_p$ in the new chain in $O(1)$ time. As before, we maintain an interval $(i,j]$ of chains between which our point is known to lie (i.e., the point is above the chain $s_i$ and below chain $s_{j+1}$). This interval is updated after each edge test exactly as in the previous algorithm, and is used during a gap test. When we come to a gap test we know that the chain number of the gap is not interior to the interval in question. Since our interval, if the search has not yet terminated, is not a single chain, we have an unambiguous test as to whether we are above or below the chain of the gap.
We should remark here that, in contrast to algorithm 1 where gap tests are completely bypassed, algorithm 2 is forced to deal with them, since it descends the layered dag only one level at a time. With considerably more effort, using special properties of our region numbering, it is possible to excise the gap tests from the layered dag, thus rendering the maintenance of the interval $(i,j]$ superfluous. We will not pursue these remarks further here.
We have now presented enough details about the structure of the layered dag that we can give the code for the point-location algorithm. The layered dag contains a distinguished node, the {\it root} node, where the point location search begins. This node is the root of a balanced tree of $x$-tests whose leaves are the edge tests corresponding to the list for the root node of $\T$.
\algorithm{2}{Fast point location in a monotone subdivision.}{
\begblock
\comm{This algorithm takes as input the root node {\it root} of the layered dag and the number $n$ of regions. Its output, as in algorithm 1, is placed in the variable $\loc$.}
\step {\sno1 Set $i\gets 0$, $j\gets n-1$, $c\gets \hbox{\it root}$.}
\step {\sno2 While $c$ is not a leaf node of the layered dag do:}
\begblock
\comm {At this point we know $p$ is above the separator
$s_i$ and below $s_{j+1}$ (whenever those separators exist).
That is, $p$ is in one of the regions
$R_i, R_{i+1}, \ldots, R_j$, or in some edge or vertex
between those two separators.}
\step {\sno3 If $c$ is an edge test then let $e\gets \edge(c)$ and do:}
\begblock
\comm {The point $p$ is known to have an $x$ coordinate within $\proj{e}$.}
\step {\sno4 If $p$ is on $e$, then set $\loc\gets e$ and
terminate the algorithm.}
\comm{As in algorithm 1, we can now test if $p$ is the right
endpoint of $e$.}
\step {\sno5 Otherwise, if $p$ is above $e$ then
$c\gets \upptr(c)$ and $i\gets \regionindex{\regionabove(e)}$, else
$c\gets \downptr(c)$ and $j\gets \regionindex{\regionbelow(e)}$.}
\endblock
\step {\sno6 Else if $c$ is an $x$-test then do:}
\begblock
\comm {The following $x$-test helps route us to the appropriate
edge of the next chain we need to test against.}
\step {\sno7 If $x_p\leq \xval(c)$ then $c\gets \leftptr(c)$
else $c\gets \rightptr(c)$.}
\endblock
\step {\sno8 Else if $c$ is a gap test then do:}
\begblock
\comm {We have already compared $p$ against the appropriate edge of the
chain of the gap test. We just need to reconstruct how that comparison
went.}
\step {\sno9 If $j<\chain(c)$ then $c\gets \downptr(c)$
else $c\gets \upptr(c)$.}
\endblock
\step {\sno{10} Loop back to step 2.}
\endblock
\step {\sno{11} Set $\loc\gets \region(c)$ and terminate the search.}
\endblock}
\section{9. Computing the layered dag}
Now that we understand how the layered dag is to be used, we proceed to describe how it is to be constructed. Our starting point will be the tree $\T$ and the chains $c_u$ defined in section 7; recall that $c_u$ consists of those edges of $s_u$ that do not belong to any ancestor of $s_u$ (that is, to any separator whose index is an ancestor of $u$ in $\T$). Also, note that to each subtree of $\T$ rooted at a node $u$ there corresponds a ``partial subdivision'' consisting of a range of regions and the edges and vertices between them. The separator $s_u$ splits this partial subdivision into two others, each having half the regions; the gaps of $c_u$ correspond to edges of $s_u$ that lie on the upper or lower boundary of the partial subdivision. See figure 15.
Our construction of the layered dag proceeds from the bottom up and happens simultaneously with the refinement of the chains $c_u$. We first describe how the $x$ values in $L_u$ are obtained. Note that we already have at our disposal three sorted lists of $x$ values: those corresponding to $L_{\lscr(u)}$, to $L_{r(u)}$, and also to the endpoints of the edges stored with the chain $c_u$ associated with node $u$ in the chain tree. The $x$ values in $L_u$ are a merge of those in $c_u$, and {\it every other one} of those present in each of $L_{\lscr(u)}$ and $L_{r(u)}$. By convention, if $u$ is a terminal node of the chain tree (so it corresponds to a region), or $u\geq n$, then $L_u$ is empty. We imagine now that, in a bottom-up fashion, this is done for every node $u$ of the chain tree. The propagation of every other value from the children to the father constitutes the chain refinement we mentioned in section 8. This refinement has two desirable properties.
\lemma{\Theix}{An interval between two successive $x$ values in $L_u$ overlaps at most two intervals of $L_{\lscr(u)}$, and at most two intervals of $L_{r(u)}$.}
\proof{By construction.}
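The construction, and the property it guarantees, can also be checked mechanically. The sketch below is ours, not the paper's: lists are plain sorted Python lists of $x$ values, and the propagated values are, arbitrarily, the odd-indexed ones of each child list. It builds $L_u$ from $c_u$, $L_{\lscr(u)}$, and $L_{r(u)}$, then verifies on random inputs that every interval of $L_u$ has at most one value of each child list in its interior, which is exactly the overlap bound of the lemma.

```python
import random

def refine(chain_xs, left_xs, right_xs):
    """Build L_u: merge the x-values of c_u with every other x-value
    of each child list (here, the odd-indexed ones)."""
    return sorted(set(chain_xs) | set(left_xs[1::2]) | set(right_xs[1::2]))

def max_interior(parent, child):
    """Largest number of child values strictly inside one interval of parent."""
    bounds = [float('-inf')] + parent + [float('inf')]
    return max(sum(a < x < b for x in child)
               for a, b in zip(bounds, bounds[1:]))

random.seed(1)
for _ in range(200):
    c = sorted(random.sample(range(1000), random.randrange(20)))
    l = sorted(random.sample(range(1000), random.randrange(20)))
    r = sorted(random.sample(range(1000), random.randrange(20)))
    u = refine(c, l, r)
    # At most one child value interior to each interval of L_u, hence
    # each interval overlaps at most two intervals of that child list.
    assert max_interior(u, l) <= 1 and max_interior(u, r) <= 1
```

The reason the check never fails is the one in the proof: between any two skipped (even-indexed) child values there lies a propagated one, which is present in $L_u$ and therefore cannot be interior to one of its intervals.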
\lemma{\Thex}{The total number of $x$ values in the lists $L_u$, summed over all $u$, is at most $4m$.}
\proof{If $a_u$ denotes the number of edges in $c_u$, then clearly
$$\sum_{u \in \T}a_u = m,$$
since each edge of the subdivision occurs in exactly one chain $c_u$. Let $b_u$ denote the number of $x$ values in $L_u$, and $A_u$ (resp. $B_u$) denote the sum of $a_v$ (resp. $b_v$) over all nodes $v$ in the subtree rooted at $u$. To prove the lemma it then suffices to show that
$$B_{r}\leq 4A_{r} = 4m$$
for the root node $r=\LCA(0,n-1)$.
\pp As an inductive hypothesis now assume that
$$B_v+b_v\leq 4A_v\eqno(9.1)$$
for $v=\lscr(u)$ or $v=r(u)$. This hypothesis is trivially true for the leaves of $\T$. Observe now that
$$B_u = B_{\lscr(u)} + B_{r(u)} + b_u,$$
and that
$$b_u\leq 2a_u + (b_{\lscr(u)}+b_{r(u)})/2,$$
since each edge in $c_u$ contributes at most two $x$-values to $L_u$, and each child list contributes only every other one of its values.
Combining these two facts with the inductive hypothesis (9.1) yields
$$B_u + b_u \leq (B_{\lscr(u)}+b_{\lscr(u)}) + (B_{r(u)}+b_{r(u)}) + 4a_u
\leq 4A_{\lscr(u)}+4A_{r(u)}+4a_u = 4A_u,$$
which proves the same bound for $u$. This concludes the proof of the lemma.}
Intuitively, half of the $x$ values of $c_u$ propagate to the father chain, a quarter to the grandfather, an eighth to the great-grandfather, and so on. Although this argument is not strictly correct, it illustrates why we can expect the combined length of the $L$ lists to remain linear in $m$, and in fact just twice as much as the total number of $x$ values in the chains $c_u$.
The construction of the $x$-test, edge test, and gap test nodes representing the list $L_u$ in the layered dag is now straightforward, as is the setting of the $\leftptr$ and $\rightptr$ fields of the $x$-tests. The $\downptr$ links of $L_u$ can be set according to the rules stated in section 8, by traversing simultaneously the two lists $L_u$ and $L_{\lscr(u)}$ from left to right. The $\upptr$ pointers are set analogously. In fact, it is possible to construct $L_u$ and link it to the rest of the dag in a single simultaneous traversal of $c_u$, $L_{\lscr(u)}$, and $L_{r(u)}$. This bottom-up process terminates with the construction of the root list $L_r$, where $r=\LCA(0,n-1)$. As a final step we produce a tree of $x$-test nodes corresponding in a natural way to a binary search of $L_r$ based on the $x$ coordinate of our test point. The leaf pointers of that tree are made to point to the appropriate edge test nodes of $L_r$. The resulting dag, together with a pointer to the root of this tree, is the layered dag we want. The global structure of the algorithm is given below. Due to lack of space we omit the code for the inner calls.
\algorithm{3}{Construction of the layered dag.}{
\begblock
\comm {In the code below it is convenient to assume that the leaves of the
chain tree are numbered by the half-integers $1/2, 3/2, 5/2, \ldots,$ which
exceed by $1/2$ the leaf numbers we have used up to now. This makes the numbering
consistent with that of the internal nodes and avoids consideration of special
cases.}
\step {\sno1 $s\gets 1/2$, $b\gets 1$, $z\gets n-1/2$.}
\step {\sno2 For $k\gets 1$ to $\lceil\lg n\rceil$ do:}
\begblock
\comm {At this point we are about to build the lists
corresponding to all nodes in level $k$ by combining
successive pairs of lists from level $k-1$.
The first node of level $k-1$ is $s$, the last one is $z$,
while $b$ is the distance between successive nodes.}
\step {\sno3 Set $\lscr\gets s$ and $u\gets 2s$.}
\step {\sno 4 While $\lscr\leq z$ do:}
\begblock
\comm {Now $u$ is a node of level $k$, and $\lscr$
is its left child.}
\step {\sno5 Let $r\gets \lscr+b$. Construct the list $L_u$ from
the lists $L_\lscr$, $L_r$, and the chain $c_u$.}
\step {\sno6 Set $\lscr\gets \lscr+2b$, $u\gets u+2b$.
Loop back to step 4.}
\endblock
\step {\sno7 Set $z\gets u$, $s\gets 2s$ and $b\gets 2b$.
Loop back to step 2.}
\endblock
\step {\sno8 Set $r\gets s$ and construct the tree of
$x$-tests corresponding to the root list $L_r$.}
\endblock}
Note that several different nodes of $L_u$ may point to the same node of a child. An example is shown in figure 19. Thus the resulting dags are not trees. We remark that the structure built by Kirkpatrick [\Kir] also corresponds to a dag. This ``sharing'' of subtrees seems to be an essential feature of algorithms for point location that simultaneously attain optimal space and query time.
As a way to cut down on the number of links, we may consider storing the nodes of each list $L_u$ in consecutive memory locations, in their natural left-to-right order. This option allows the initial location of $x_p$ in $L_r$ to be carried out by standard binary search. Also, in the construction of $L_u$ this representation allows the lists $L_{\lscr(u)}$ and $L_{r(u)}$ to be scanned without any auxiliary pointers or tables. Finally, this sequential allocation obviates the need for the $\leftptr$ and $\rightptr$ links of $x$-tests, a fact that may reduce the total storage used by pointers in the dag by about one third.
In either case, the initial location of $x_p$ in $L_{r}$ can be found in $O(\log m)$ time. After that, algorithm 2 obviously executes exactly $\lceil \lg n\rceil$ edge or gap tests (one at each level of $\T$), and at most that many $x$-tests. So the total query time $Q$ is $O(\log m)$. A list $L_u$ with $k$ $x$-values is represented in the layered dag by $k$ $x$-tests and $k+1$ edge/gap tests and, as we have seen, it can be constructed in $O(k)$ time. Using lemma {\Thex} we conclude that the layered dag contains at most $4m$ $x$-tests and $4m+n-1$ edge and gap tests, and can be built in total time $O(m+n)$ from the chain tree $\T$. In summary, we have shown that
\theorem{\Thexi}{Assuming that the chain tree for a subdivision with $m$ edges is given, the layered dag data structure can be constructed in $O(m)$ time, takes $O(m)$ space, and allows point location to be done in $O(\log m)$ time.}
\section{10. Constructing a complete family of separators}
We proceed now to the description of an efficient algorithm for constructing the tree $\T$ representing a complete family of separators for a given monotone subdivision $\Sscr$. As suggested by the proof of theorem {\Theviii}, the first pass of this algorithm enumerates the regions in a linear order compatible with $\below$. A second pass enumerates the edges and vertices of $\Sscr$, in such a way that the edges and vertices of each separator are visited in order of increasing $x$-coordinate (even though this may not be true for elements belonging to different separators). Therefore, by just appending each visited element to its appropriate separator, we will get the sorted lists required in algorithms 1 and 3.
If we add two dummy vertices at $x=\pm \infty$, and orient every edge of $\Sscr$ from right to left, we obtain a planar embedding of a directed, acyclic graph $\Hor$ with exactly one source and one sink (solid lines in figure 20). The enumeration we need in the second pass is basically a {\it compatible traversal} of this graph, in which we visit a vertex only after visiting all its outgoing edges, and we visit an edge only after visiting its destination. This is a form of topological sorting, as discussed in [\Knu].
Consider now the dual graph $\Ver$ whose vertices are the regions of $\Sscr$, and where for each edge $e$ of $\Sscr$ there is an edge $e\pr$ from $\regionabove(e)$ to $\regionbelow(e)$. By what we have seen in the previous section, $\Ver$ too is acyclic and has exactly one source and one sink. We can draw $\Ver$ on top of $\Hor$ (dotted lines in figure 20) in such a way that each edge $e\pr$ of it crosses only the corresponding edge $e$ of $\Hor$, and that exactly once (going down). Therefore, $\Ver$ is planar, and corresponds to the topological dual of $\Hor$. It turns out that $\Ver$ represents for the first pass what $\Hor$ does for the second: a post-order traversal of $\Ver$ will visit the regions in an order compatible with $\justbelow$ and $\below$.
Therefore, both passes reduce to a generic graph traversal algorithm, applied first to $\Ver$ and then to $\Hor$. In the first pass, as each region $R$ is visited, we store in a table (or in a field of the region's record) its serial index $\regionindex R$ in the enumeration. In the second pass, as each edge or vertex $e$ is visited, we obtain the indices $a=\regionindex{\regionabove (e)}$ and $b=\regionindex{\regionbelow (e)}$ of the regions immediately above and below it, and append $e$ to the separating chain $c_i$ where $i=\LCA(a, b)$. With an appropriate representation for subdivisions, it is possible to accomplish this graph traversal without any auxiliary storage. This topic is discussed in section 12.
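In miniature, the chain-assignment step of the second pass looks as follows. This is our own sketch under toy conventions (edges are arbitrary labels tagged with the indices of the regions below and above them; $\LCA$ is the bit formula of section 11, which expects the smaller index first):

```python
def msb(k):
    # 2**floor(lg k), the most significant bit of k > 0
    return 1 << (k.bit_length() - 1)

def lca(i, j):
    # lowest common ancestor of leaves i < j in T
    return j & ~(msb(i ^ j) - 1)

def assign_chains(edges):
    """Append each visited edge to the chain c_i, i = lca(b, a), where
    b < a are the indices of the regions below and above the edge."""
    chains = {}
    for label, b, a in edges:
        chains.setdefault(lca(b, a), []).append(label)
    return chains

# Toy input: two horizontal separators bounding regions 0, 1, 2;
# each separator consists of two edges.
edges = [('e1', 0, 1), ('e2', 0, 1), ('e3', 1, 2), ('e4', 1, 2)]
```

Because the second pass visits the elements of each separator in left-to-right order, simply appending each edge yields the sorted chains required by algorithms 1 and 3.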
\section{11. The lowest common ancestor function}
The problem of computing the function $\LCA(i, j)$, the lowest common ancestor of two leaves $i$ and $j$ of $\T$, is quite interesting by itself. This function has a simple interpretation in terms of the binary representations of $i$ and $j$: if we have
$$i=(a_{\nu-1}\,a_{\nu-2}\,\ldots\, a_{k+1}\,\f0\,a\pr_{k-1}\,\ldots\, a\pr_1\,a\pr_0)_2$$
and
$$\null\,j=(a_{\nu-1}\,a_{\nu-2}\,\ldots\, a_{k+1}\,\f1\,a\prr_{k-1}\,\ldots\, a\prr_1\,a\prr_0)_2,$$
where $\nu=\lceil\lg n\rceil$ is the number of bits needed to represent any number from $0$ through $n-1$, then
$$\LCA(i,j)=(a_{\nu-1}\,a_{\nu-2}\,\ldots\, a_{k+1}\,\f1\,\f0\,\ldots\,\f0\,\f0)_2.$$
Loosely speaking, $\LCA(i, j)$ is the longest common prefix of $i$ and $j$, followed by $\f1$ and padded with zeros. From the recursive formula below
$$\LCA(i, j)=\alter{$j$|if $j=i+1$,\cr\vc
$2\LCA(\lfloor i/2\rfloor,\lfloor j/2\rfloor)$|otherwise\cr}$$ we get a straightforward ``bottom-up'' algorithm requiring $O(\log n)$ iterations in the worst case. A ``top-down'' version is also not difficult to obtain and was used by Lee and Preparata in their original paper. In order to guarantee that the preprocessing algorithm of section 10 is linear, however, we must exhibit a way to compute $\LCA$ in $O(1)$ time. A solution to this problem, in a much more general setting, has been published by Dov Harel [\Har]. The regular structure of $\T$ allows us to greatly simplify the problem.
An efficient formula for computing $\LCA(i, j)$ is based on the function $\MSB(k)=\LCA(0,k)=2^{\lfloor\lg k\rfloor}$, the {\it most significant bit of $k$}. The values of this function for $k=1, 2, \ldots, n-1$ are $1, 2, 2, 4, 4, 4, 4, 8, 8, \ldots, 2^{\nu-1}$; these values can be easily precomputed in $O(n)$ time and stored in a table with $n-1$ entries, so we can compute $\MSB(k)$ in $O(1)$ time. Then we can express the $\LCA$ function as
$$\LCA(i,j)=j\land\lnot(\MSB(i\xor j)-1)$$
where $\xor$, $\land$, and $\lnot$ are the boolean operations of bitwise ``exclusive or'', ``and'', and complement\foot\dag{We assume these boolean operations can be computed in $O(1)$ time, like addition and subtraction. We feel their inclusion in the model is justified, since their theoretical complexity is no greater than that of addition, and most real machines have such instructions. In fact, some machines even have instructions for computing $\lfloor\lg k\rfloor$ from $k$ (``find first $\f1$ bit'') and $2^q$ from $q$ (``shift left $q$ bits''), in a single execution cycle. Under a more rigorous $\log$-cost complexity model, all time and space bounds in this paper and in the main references should be multiplied by an extra $\log m$ factor.}. For use in this formula, it is preferable to tabulate $\MSB(k)-1$ instead of $\MSB(k)$.
Another way of computing $\LCA$ is based on the bit-reversal function $\REV(k)$, which reverses the order of the last $\nu$ bits in the binary expansion of $k$. For example, for $n=16$ and $k=0, 1, \ldots, 15$, the values of $\REV(k)$ are $0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15$. Using this function we get the formula $$\LCA(i,j)=\REV(x\xor (x-1))\land j,$$where $x=\REV(i\xor j)$.
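Both closed-form expressions can be cross-checked against the recursive definition. The following sketch (ours, not the paper's code) does so exhaustively for $\nu=6$, tabulating $\MSB(k)-1$ as suggested above:

```python
def lca_rec(i, j):
    """Recursive definition: LCA(i,j) = j if j = i+1, else 2*LCA(i//2, j//2)."""
    return j if j == i + 1 else 2 * lca_rec(i // 2, j // 2)

NU = 6            # nu bits; leaves are 0 .. 2**NU - 1
N = 1 << NU

# MSB1[k] = MSB(k) - 1 for k = 1 .. N-1, precomputed once.
MSB1 = [None] + [(1 << (k.bit_length() - 1)) - 1 for k in range(1, N)]

def lca_msb(i, j):
    # LCA(i,j) = j AND NOT (MSB(i XOR j) - 1)
    return j & ~MSB1[i ^ j]

def rev(k):
    """Reverse the last NU bits of k."""
    return int(format(k, '0%db' % NU)[::-1], 2)

def lca_rev(i, j):
    # LCA(i,j) = REV(x XOR (x-1)) AND j, where x = REV(i XOR j)
    x = rev(i ^ j)
    return rev(x ^ (x - 1)) & j

# The three formulas agree on every pair of leaves i < j.
assert all(lca_rec(i, j) == lca_msb(i, j) == lca_rev(i, j)
           for i in range(N) for j in range(i + 1, N))
```

For instance, the leaves $8=(001000)_2$ and $15=(001111)_2$ share the prefix $001$, and all three formulas return $(001100)_2=12$.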
\section{12. Compatible traversal of an acyclic graph}
The input to the compatible graph traversal routine we mentioned in section 10 is a planar embedding $\G$ of a directed, acyclic, and connected graph with exactly one sink and one source, both on the exterior face of $\G$. See figure 21. Such an embedding defines a ``counterclockwise'' circular ordering of the edges incident to any given vertex $u$. In this ordering all outgoing edges occur in consecutive positions, and the same is true of the incoming ones. To see why, consider any two edges $e_1, e_2$ entering $u$, and any two edges $g_1, g_2$ leaving $u$. Let $\pi_1,\pi_2$ be two paths from the source vertex of $\G$ that end with the edges $e_1$ and $e_2$, and let $\sigma_1, \sigma_2$ be two paths to the sink vertex that begin with $g_1, g_2$. See figure 22. Since $\G$ is acyclic, both $\pi_1$ and $\pi_2$ are disjoint from $\sigma_1$ and $\sigma_2$ (except for $u$ itself). Now, the paths $\pi_1$ and $\pi_2$ together divide the plane into two (or more) regions. If the two pairs of edges were interleaved, at least one of the paths $\sigma_1$ and $\sigma_2$ would have to cross $\pi_1$ or $\pi_2$, since they start in different regions but have a common destination. This proves the above assertion.
If $u$ has both incoming and outgoing edges, this result establishes a {\it linear} order for each class with well-defined ``first'' and ``last'' elements, which will be denoted by $\firstin(u)$, $\lastin(u)$, $\firstout(u)$, and $\lastout(u)$. See figure 23. To make this definition meaningful also for the source and sink of $\G$, we will introduce a dummy edge $\base(\G)$ that connects the sink back to the source across the exterior face of $\G$. We may consider the resulting graph $\G\pr$ as embedded on the upper half of a sphere, with $\base(\G)$ running across the bottom half, as in figure 24. In the graph $\G\pr$, the first and last outgoing edges of the source of $\G$ will be those incident to the exterior face of $\G$. A similar statement applies to the sink of $\G$.
The {\it post-order traversal} of $\G\pr$ is defined as an enumeration (or {\it visitation}) of every vertex and edge of $\G$, in such a way that
\begitems
\itemno{(i)} an edge is visited only after its destination,
\itemno{(ii)} a vertex is visited only after all its outgoing edges, and
\itemno{(iii)} edges with same origin are visited in counterclockwise order.
\enditems
This is clearly a compatible traversal as defined in section 10. The post-order enumeration is unique, and is a particular case of the general depth-first graph traversal described by Tarjan [\Tar]. The recursive algorithm below takes as parameter a vertex $u$, which on the first call should be the source of $\G$. The variable $e$ is local to each invocation of the procedure.
\algorithm{4}{Post-order traversal of an acyclic planar graph with one source and one sink {\rm (recursive version)}.}{
\begblock
\step{\sno1 If $u$ is not the sink of $\G$, then, for
each directed edge $e$ out of $u$ (in counterclockwise order
from $\firstout(u)$ to $\lastout(u)$) do}
\begblock
\step {\sno2 If the destination $v$ of $e$
has not been visited yet, then}
\begblock
\step{\sno3 apply the algorithm recursively
to the vertex $v$.}
\endblock
\step {\sno4 Visit the edge $e$.}
\endblock
\step{\sno5 Visit $u$.}
\step{\sno6 Return to the caller.}
\endblock
}
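For concreteness, here is a direct transcription of algorithm 4 into executable form (ours, under the assumption that the embedding is given simply as counterclockwise-ordered adjacency lists):

```python
def post_order(out_edges, source, sink):
    """Algorithm 4 sketch.  out_edges[u] lists the directed edges (u, v)
    out of u in counterclockwise order, from firstout(u) to lastout(u).
    Returns the sequence of visits."""
    visits, visited = [], set()
    def go(u):
        if u != sink:
            for e in out_edges[u]:          # step 1: ccw order
                if e[1] not in visited:     # step 2: destination unvisited?
                    go(e[1])                # step 3: recurse on it
                visits.append(('edge', e))  # step 4: visit e (after its dest.)
        visits.append(('vertex', u))        # step 5: visit u (after its edges)
        visited.add(u)
    go(source)
    return visits

# A small test DAG: source 'a', sink 'd', assumed ccw edge orders.
G = {'a': [('a', 'b'), ('a', 'c')],
     'b': [('b', 'd')],
     'c': [('c', 'd')],
     'd': []}
seq = post_order(G, 'a', 'd')
```

In the resulting sequence every edge appears after its destination, and every vertex after all of its outgoing edges, as conditions (i)--(iii) require.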
The auxiliary storage required by Algorithm 4 consists of the implicit recursion stack (to hold the arguments and local variables of the various recursive calls) and one mark bit per vertex (to distinguish the nodes that have already been visited). We claim that algorithm 4 visits every edge and every vertex of $\G$ exactly once, in post-order. In the rest of this section we will prove this claim, and show that both the stack and the mark bits are superfluous, provided the data structure used to represent the subdivision is rich enough.
The procedure below is a ``mechanical'' translation of algorithm 4 to eliminate the use of recursion. It uses an explicit stack $Q$ to save the current value of the variable $e$ when simulating recursive calls (the value of $u$ need not be saved, since it is always the origin of $e$).
\algorithm{5}{Post-order traversal of an acyclic planar graph with one source and one sink {\rm (iterative version)}.}{
\begblock
\step{\sno1 Set $Q\gets\empty$, $u\gets\null$source of $\G$,
$e\gets\firstout(u)$.}
\step{\sno2 Repeat the following steps:}
\begblock
\comm{At this point $u$ is unvisited, and $e$ is the first
unvisited edge out of $u$.}
\step{\sno3 While $e\neq\base(\G)$
and the destination $v$ of $e$ has not been visited yet,}
\begblock
\step{\sno4 \sna{Simulate recursive call} Set $u\gets v$,
push $e$ onto the stack $Q$,
and set $e\gets\firstout(u)$.}
\endblock
\step{\sno5 If $e\neq\base(\G)$, visit $e$.}
\step{\sno6 While $e=\lastout(u)$ do}
\begblock
\step{\sno7 Visit $u$.}
\step{\sno{8} \sna{Simulate return} If $Q$ is empty,
the algorithm terminates. Else,}
\begblock
\step{\sno{9} Pop the topmost edge of $Q$, assign it to $e$.}
\step{\sno{10} Set $u$ to the origin of $e$.}
\endblock
\step{\sno{11} Visit $e$.}
\endblock
\step{\sno{12} \sna{Advance $e$\eat} Set $e$ to the next edge
out of $u$.}
\endblock
\endblock
}
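The stack mechanics of algorithm 5 can likewise be exercised directly. In the sketch below (ours), the embedding is again given as counterclockwise adjacency lists, the sink's only outgoing edge is the dummy $\base(\G)$, $\firstout$ and $\lastout$ are the first and last entries of each list, and the comments refer to the step numbers of algorithm 5.

```python
BASE = ('base',)   # the dummy edge base(G) from the sink back to the source

def post_order_iter(out_edges, source, sink):
    """Algorithm 5 sketch with an explicit stack Q.  out_edges[u] lists
    the edges (u, v) out of u in ccw order; out_edges[sink] is [BASE]."""
    visits, visited, Q = [], set(), []
    u = source
    e = out_edges[u][0]                      # step 1: firstout(u)
    while True:                              # step 2
        while e is not BASE and e[1] not in visited:
            u = e[1]                         # steps 3-4: simulate a call
            Q.append(e)
            e = out_edges[u][0]
        if e is not BASE:                    # step 5
            visits.append(('edge', e))
        while e == out_edges[u][-1]:         # step 6: e = lastout(u)
            visits.append(('vertex', u))     # step 7
            visited.add(u)
            if not Q:                        # step 8: simulate a return
                return visits
            e = Q.pop()                      # step 9
            u = e[0]                         # step 10
            visits.append(('edge', e))       # step 11
        e = out_edges[u][out_edges[u].index(e) + 1]   # step 12: next edge

G = {'a': [('a', 'b'), ('a', 'c')],
     'b': [('b', 'd')],
     'c': [('c', 'd')],
     'd': [BASE]}
seq = post_order_iter(G, 'a', 'd')
```

On this graph the iterative version produces the same visit sequence as the recursive one, and the edges that pass through $Q$ (here $(a,b)$, $(b,d)$, $(a,c)$) form the spanning tree $\H$ discussed below.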
We make the following claims about the behavior of algorithm 5:
\begitems
\itemno{(I)} Before every step, the edges in the stack $Q$ form a path in $\G$ from the source to $u$;
\itemno{(II)} an edge is stacked onto $Q$ (step 4) only if its destination is still unvisited;
\itemno{(III)} an edge is unstacked (step 9) only after its destination has been visited;
\itemno{(IV)} an edge is visited only after its destination has been visited;
\itemno{(V)} a vertex is visited only after its last outgoing edge has been visited;
\itemno{(VI)} every edge is stacked at most once;
\itemno{(VII)} for any given vertex, at most one incoming edge ever gets stacked; and
\itemno{(VIII)} an edge is visited only after the previous edge with same origin (if any) is visited.
\enditems
Assertion (I) is easily proved by induction on the number of executed steps. Assertions (II)--(V) are then obvious; (VI) and (VII) follow from (II), (III), and the acyclicity of $\G$. In order to prove (VIII) consider two consecutive edges $e_1, e_2$ of $\G$ with same origin $u_1$. Suppose algorithm 5 visits $e_2$ at some time $t$, either in step 5 or step {11}. This can only happen if $e_2$ was assigned to $e$ in step 12 at some earlier time $t\pr$ (and possibly stacked/unstacked since then). The edge $e_1$ then must have been the last edge visited before $t\pr$ (in step 5 or {11}). Claim (VIII) follows. From (IV), (V) and (VIII) we conclude that all vertices and edges of $\G$ are visited, and conditions (i)--(iii) are satisfied.
Algorithm 5 defines a subgraph $\H$ of $\G$, consisting of all the edges that ever get stacked onto $Q$. Every vertex of $\G$ is reachable from the source via a directed path in $\H$ (the contents of $Q$ at any instant when the variable $u$ is that vertex), and has at most one incoming edge in $\H$. It follows that $\H$ is a spanning oriented tree of $\G$, as illustrated in figure 25. We remark that the order in which the vertices of $\G$ are visited corresponds to the post-order traversal of the tree $\H$, as defined by Knuth [\Knu]. We will now show that the only edge of $\H$ (if any) incident to a vertex $u$ is $\lastin(u)$. More precisely, the following lemma holds:
\lemma{\Thexii}{Before any step of algorithm 5, if the stack $Q$ is not empty, its topmost edge is $\lastin(u)$; otherwise $u$ is the source of $\G$.}
\proof{The second part of the lemma is obvious, since the contents of $Q$ is always a path in $\G$ from the source to $u$. Let us then assume $Q$ is not empty. Let $u1$ be the current value of $u$, $\pi$ be the path in $Q$, and $e1$ be the last edge of $\pi$ (i.e., the top of $Q$). See figure 26.
\pp Suppose $e2$ is an edge distinct from $e1$ but having the same destination $u1$. From assertions (II) and (III) above we conclude that $e1$ and $e2$ have yet to be visited. By the time $e2$ is visited by algorithm 5, the variable $u$ will contain the origin of $e2$; the contents of $Q$ at that time, plus the edge $e2$, will define another directed path $\pi\pr$ from the source to $u1$. Since $\G$ is acyclic, the path $\pi\pr$ neither contains nor is contained in $\pi$; in fact, because of property (VII) the paths $\pi$ and $\pi\pr$ must diverge exactly once, at some vertex $w\neq u1$, and converge exactly once at $u1$. Therefore all edges of $\pi$ between $w$ and $u1$ must be unstacked (and visited) before $e2$ is visited. In particular, the edge $\alpha$ through which $\pi$ leaves $w$ is visited before the corresponding edge $\alpha\pr$ of $\pi\pr$, and therefore $\alpha\pr$ must follow $\alpha$ in counterclockwise order around $w$.
\pp Let $R$ be the complement of the outer face of $\G$, and let $\sigma$ be any directed path from $u1$ to the sink of $\G$. The concatenation of $\pi$ and $\sigma$ divides $R$ in two (not necessarily connected) regions, which we call {\it left} and {\it right} (with the direction of $\pi\sigma$ being taken as ``forwards''). The path $\pi\pr$ cannot cross $\sigma$ (otherwise the two would give rise to a cycle), and its first edge $\alpha\pr$ lies in the left region; therefore, after leaving $w$ the path $\pi\pr$ must lie entirely in the left region, and in particular $e2$ precedes $e1$ in the counterclockwise order around $u1$. By repeating this argument for all possible edges $e2\neq e1$ into $u1$, we conclude that $e1=\lastin(u)$.}
Therefore, instead of retrieving $e$ from the stack in step 9, we can simply set $e\gets \lastin(u)$. Note also that the test ``$e\neq\base(\G)$ and $v$ has not been visited yet'' in step 3 is evaluated and yields true if and only if $e$ is pushed onto $Q$ by the following step, so we can replace that test by the condition ``$e\neq\base(\G)$ and $e=\lastin(v)$''. These observations enable us to dispense with the recursion stack and the ``visited'' bits on the vertices.
A representation for the embedded graph $\G$ that allows the efficient computation of $\firstout$ and its companions is the {\it quad-edge} data structure [\GuiSto]. This representation has the important advantage of encoding simultaneously both $\G$ and its dual embedding, in precisely the same format, at a negligible extra cost in storage. This allows the post-order traversals of both $\Hor$ and $\Ver$ to be performed by the same procedure, applied to the same data structure.
The quad-edge structure by itself can only represent an {\it undirected} embedded graph, such as the undirected subdivision $\Sscr$. However, every edge $\bar e$ of $\Sscr$ is represented by two distinct records in the structure, corresponding to the two possible orientations of $\bar e$. Therefore, to refer to the edge $\bar e$ of the structure we must actually refer to a specific {\it directed} version of $\bar e$. Given such a directed edge $e$, the quad-edge data structure gives immediate access to:
\begitems
\item $\org{e}$, the origin vertex of $e$,
\item $\dest{e}$, the destination vertex of $e$,
\item $\onext{e}$, the next counterclockwise directed edge with the same origin,
\item $\dnext{e}$, the next counterclockwise directed edge with the same destination, and
\item $\sym{e}$, the same edge directed the other way.
\enditems
We also have immediate access to the dual $\rot{e}$ of the edge $e$, directed from the right face of $e$ to its left face, in the dual embedded graph.
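The edge algebra behind these operators can be sketched in a few lines (a hypothetical Python rendering, not the authors' implementation; the encoding of each directed or dual edge as an integer $4q+r$, with $r$ the rotation count, follows [\GuiSto], but the class and method names are ours):

```python
class QuadEdge:
    """Minimal quad-edge algebra: undirected edge q yields four directed
    versions 4*q + r (r = 0..3); rot is pure index arithmetic, and only
    the onext pointers are stored -- sym and dnext are derived."""
    def __init__(self):
        self.onext = []                       # one slot per directed edge

    def rot(self, e): return (e & ~3) | ((e + 1) & 3)   # dual, turned left
    def sym(self, e): return self.rot(self.rot(e))      # same edge, reversed
    def dnext(self, e): return self.sym(self.onext[self.sym(e)])

    def make_edge(self):
        """A fresh edge: its own origin rings; its dual is a loop."""
        e = len(self.onext)
        self.onext += [e, e + 3, e + 2, e + 1]
        return e

    def splice(self, a, b):
        """Guibas-Stolfi splice: swap the origin rings of a and b and,
        simultaneously, the corresponding rings of their duals."""
        alpha, beta = self.rot(self.onext[a]), self.rot(self.onext[b])
        self.onext[a], self.onext[b] = self.onext[b], self.onext[a]
        self.onext[alpha], self.onext[beta] = self.onext[beta], self.onext[alpha]

q = QuadEdge()
a, b = q.make_edge(), q.make_edge()
q.splice(a, b)       # a and b now share their origin vertex
```

After the splice, $\onext{a}=b$ and $\onext{b}=a$, and the identities $\sym{\sym{e}}=e$ and $\rot{}^4=\hbox{identity}$ hold by construction; on a fresh, unspliced edge $c$ we get $\dnext{c}=c$, as expected of an isolated edge.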
A {\it directed} graph $\G$, such as $\Hor$ or $\Ver$, can be represented by the quad-edge encoding of the corresponding undirected graph, plus a predicate $\forward(e,\G)$ that tells whether the directed edge $e$ of the structure is actually an edge of $\G$. Clearly, $e$ is in $\G$ if and only if $\sym{e}$ is not in $\G$, so $\forward(e,\G)\eqv\lnot\forward(\sym{e}, \G)$. In our case, $\forward(e,\Hor)$ is simply the test of whether the $x$-coordinate of $\dest{e}$ is smaller than that of $\org{e}$. Similarly, in the dual graph $\Ver$ the predicate $\forward(e,\Ver)$ tests whether the region $\dest{e}$ is immediately below the region $\org{e}$. This turns out to be the same as $\lnot\forward(\rot{e},\Hor)$. The dummy edges that we must add to $\Hor$ and $\Ver$ are the only exception: we have $\rot{\base(\Ver)}=\base(\Hor)$, and yet both are $\forward$. For both graphs, we also have
$$\eqalign{
e=\lastin(\dest{e})\quad\eqv\quad\forward(e, \G)
\and \lnot\forward(\dnext{e}, \G),\cr
e=\lastout(\org{e})\quad\eqv\quad\forward(e, \G)
\and \lnot\forward(\onext{e}, \G),\ \hbox{and}\cr
e=\lastin(u)\quad\implies\quad\sym{\dnext{e}}=\firstout(u).\cr}$$
These identities allow us to remove also the calls to $\firstout$ and its companions from algorithm 5. Algorithm 6 below incorporates all these modifications.
\algorithm{6}{post-order traversal of an acyclic planar graph with one source and one sink {\rm (with $O(1)$ auxiliary storage)}.}{
\begblock
\step{\sno1 Set $u\gets\null$source of $\G$ and
$e\gets\sym{\dnext{\base(\G)}}$.}
\step{\sno2 Repeat the following steps:}
\begblock
\comm{At this point $u$ is unvisited,
$\forward(e, \G)$ is true, and $e$ is the first
unvisited edge out of $u$.}
\step{\sno3 While $e\neq\base(\G)\and \lnot\forward(\dnext{e}, \G)$,
do}
\begblock
\comm{Here $e$ is the last edge into its destination $v$, so
$v$ has not been visited yet.}
\step{\sno4 \sna{Recursive call} Set $u\gets \dest{e}$,
and set $e\gets\sym{\dnext{e}}$.}
\comm{This sets $e$ to $\firstout(u)$.}
\endblock
\comm{Now $e$ is the dummy edge, or its destination
has already been visited.}
\step{\sno5 If $e\neq\base(\G)$, visit $e$.}
\step{\sno6 While $\lnot\forward(\onext{e}, \G)$ do}
\begblock
\comm{Here $e$ has been visited and is $\lastout(u)$.}
\step{\sno7 Visit $u$.}
\step{\sno{8} \sna{Recursive return} If $u$ is the source of $\G$,
the algorithm terminates. Else,}
\begblock
\step{\sno{9} \sna{Compute $\lastin(u)$\eat}
Set $e\gets\sym{\onext{e}}$. Then while
$\forward(\dnext{e},\G)$, set $e\gets\dnext{e}$.}
\step{\sno{10} Set $u\gets\org{e}$.}
\endblock
\step{\sno{11} Visit $e$.}
\endblock
\step{\sno{12} \sna{Advance $e$\eat} Set $e\gets\onext{e}$.}
\endblock
\endblock
}
The equivalence between algorithms 5 and 6 is straightforward. It is also easy to show that the latter runs in $O(m)$ time for a subdivision with $m$ edges. Every edge of the graph is assigned to $e$ at most twice: once in the enumeration of the edges out of a vertex $v$ (step 12), and once in the determination of $\lastin(v)$ (step 9).
\section{13. Conclusions and applications}
We have introduced a new data structure, the layered dag, which solves the point location problem for a monotone subdivision of the plane in optimal time and space. The main idea has been to refine the chains introduced by Lee and Preparata and connect the refined chains by links. The latter concept originates with Willard [\Wi] and has found frequent application since then. The layered dag can be built from standard subdivision representations in linear time, as follows. We use the graph traversal algorithm of section 12 to enumerate the regions of the subdivision in a way compatible with the vertical ordering presented in section 4. Another traversal of the subdivision then allows us to build the chain tree representing a complete family of separators, as in section 10. Finally, the layered dag is built from the chain tree, as explained in section 9. The point location algorithm using this structure has been given in section 8. Compared to previous optimal solutions, the advantage of the layered dag is that
\begitems
\item it admits of a simple implementation and would definitely be the method of choice in practice, and
\item it can be extended to subdivisions with non-straight-line edges.
\enditems
We will not discuss here in detail how to generalize our method to non-straight-line subdivisions. Certain requirements for such a generalization are clear: we must be able to break up edges into monotone pieces, to introduce the additional edges required by regularization, and to test on which side of (a monotone segment of) an edge a point lies. Our time bounds are maintained as long as we can, in constant time:
\begitems
\itemno{(i)} cut an edge into monotone pieces,
\itemno{(ii)} add a monotone regularization edge between two existing monotone edges, and
\itemno{(iii)} test if a point $p$ is above or below a monotone edge $e$.
\enditems
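For circular edges, for instance, requirement (iii) amounts to one square root and one comparison (a hypothetical Python sketch under the assumption that each monotone piece is a portion of the upper or lower half of its circle; the function name and parameters are ours):

```python
import math

def above_arc(px, py, cx, cy, r, upper):
    """Requirement (iii): is the point p = (px, py) above an x-monotone
    circular arc, i.e. a piece of the upper (upper=True) or lower half
    of the circle (x-cx)^2 + (y-cy)^2 = r^2?
    Assumes cx - r <= px <= cx + r, so p projects onto the arc."""
    half = math.sqrt(r * r - (px - cx) ** 2)
    arc_y = cy + half if upper else cy - half
    return py > arc_y

# unit circle at the origin, upper half:
above_arc(0.0, 1.5, 0.0, 0.0, 1.0, True)   # True: above the top of the arc
above_arc(0.5, 0.0, 0.0, 0.0, 1.0, True)   # False: below the upper arc
```

Requirement (i) is met analogously by splitting each circular edge at the extreme abscissae $c_x\pm r$ of its circle.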
The layered dag also yields improved solutions for several other problems in computational geometry. All of these problems are reduced to the subdivision search problem treated earlier. Some examples follow.
Let $U$ denote a set of $n$ points in the Euclidean plane and let $r$ denote a fixed positive real number. The {\it fixed-radius near neighbors} problem asks for all points in $U$ whose distance to a query point $p$ is smaller than $r$. This problem has been reduced to a subdivision search in Bentley and Maurer [\BenMa] using regions with circular edges. Together with the method of Chazelle [\ChazOne], the layered dag yields a solution which requires $O(n^2)$ space and $O(t + \log n)$ query time, where $t$ denotes the number of points to be reported. This improves the $O(n^2\log n)$ space bound which is obtained when the data structure in [\Pre] is used in the search.
Subdivisions with circular edges also occur in the {\it weighted Voronoi diagram} of a point set [\AuEde]. There, each point $p$ in a finite set $U$ has an associated positive weight $w(p)$, and the region $R(p) = \rset{x \relv d(x,p)/w(p) \leq d(x,q)/w(q), \hbox{for all } q\in U}$. The layered dag offers the first optimal method for locating a point in the diagram defined by these regions.
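The defining test for these regions is direct (a hypothetical Python sketch of region membership only, not of the diagram construction; the names are ours):

```python
import math

def in_region(x, p, U, w):
    """x lies in R(p) iff d(x,p)/w(p) <= d(x,q)/w(q) for all q in U."""
    d = math.dist                     # Euclidean distance (Python >= 3.8)
    return all(d(x, p) / w[p] <= d(x, q) / w[q] for q in U)

# two weighted sites: the heavier site s1 claims more of the plane
s0, s1 = (0.0, 0.0), (4.0, 0.0)
U = [s0, s1]
w = {s0: 1.0, s1: 3.0}
# at the midpoint (2,0): d/w is 2/1 = 2 for s0 but 2/3 for s1,
# so the midpoint belongs to R(s1), not R(s0)
```

Note that with unequal weights the bisector $d(x,p)/w(p) = d(x,q)/w(q)$ is a circle (an Apollonius circle), which is why circular edges arise here.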
Finally, certain problems related to windowing a two-dimensional picture given as a collection of line segments have been reduced to subdivision search by Edelsbrunner, Overmars, and Seidel [\EOS]. The layered dag provides a way to extend their methods to more general curves without losing efficiency.
\section{14. References}
\ref[\AuEde] {\refauth{Aurenhammer, F., and Edelsbrunner, H.}
\reftit{An optimal algorithm for constructing the weighted Voronoi diagram in the plane.}
\refpub{Rep. F109, Inst. for Inf. Proc., Techn. Univ. of Graz, Austria (1983).}}
\ref[\BenMa] {\refauth{Bentley, J. L., and Maurer, H. A.}
\reftit{A note on Euclidean near neighbor searching in the plane.}
\refpub{Information Processing Letters, vol. 8 (1979), 133--136.}}
\ref[\BiPre] {\refauth{Bilardi, G., and Preparata, F. P.}
\reftit{Probabilistic analysis of a new geometric searching technique.}
\refpub{Manuscript, Dept. of EECS, Univ. of Ill., Urbana (1982).}}
\ref[\ChazOne] {\refauth{Chazelle, B. M.}
\reftit{An improved algorithm for the fixed-radius neighbor problem.}
\refpub{Information Processing Letters, vol. 16 (1983), 193--198.}}
\ref[\ChazTwo] {\refauth{Chazelle, B. M.}
\reftit{How to search in history.}
\refpub{Rep. CS-83-08, Dept. of Comp. Sci., Brown Univ., Providence, RI (1983).}}
\ref[\DoLi] {\refauth{Dobkin, D. P., and Lipton, R. J.}
\reftit{Multidimensional searching problems.}
\refpub{SIAM Journal on Computing, vol. 5 (1976), 181--186.}}
\ref[\EHH] {\refauth{Edelsbrunner, H., Haring, G., and Hilbert, D.}
\reftit{Rectangular point location in $d$ dimensions with applications.}
\refpub{Manuscript.}}
\ref[\EdMa] {\refauth{Edelsbrunner, H., and Maurer, H. A.}
\reftit{A space-optimal solution of general region location.}
\refpub{Theoretical Computer Science, vol. 16 (1981), 329--336.}}
\ref[\EOS] {\refauth{Edelsbrunner, H., Overmars, M. H., and Seidel, R.}
\reftit{Some methods of computational geometry applied to computer graphics.}
\refpub{Manuscript, Inst. for Inf. Proc., Techn. Univ. of Graz, Austria (1983).}}
\ref [\GuiSto] {\refauth{Guibas, L. J., and Stolfi, J.}
\reftit{Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams.}
\refpub{Proc. 15th Annual ACM Symp. on Theory of Computing (1983), 221--234.}}
\ref [\Har] {\refauth{Harel, D.}
\reftit{A linear time algorithm for the lowest common ancestor problem.}
\refpub{Proc. 21st IEEE Symp. on Foundations of Computer Science (1980), 308--319.}}
\ref [\Kir] {\refauth{Kirkpatrick, D. G.}
\reftit{Optimal search in planar subdivisions.}
\refpub{SIAM Journal on Computing, vol. 12 (1983), 28--35.}}
\ref [\Knu] {\refauth{Knuth, D. E.}
\reftit{The Art of Computer Programming{\rm, Vol. 1: } Fundamental Algorithms {\rm (2nd. ed.)}}
\refpub{Addison-Wesley, 1975.}}
\ref [\LeePre] {\refauth{Lee, D. T., and Preparata, F. P.}
\reftit{Location of a point in a planar subdivision and its applications.}
\refpub{SIAM Journal on Computing, vol. 6 (1977), 594--606.}}
\ref[\LiTa] {\refauth{Lipton, R. J., and Tarjan, R. E.}
\reftit{Applications of a planar separator theorem.}
\refpub{Proc. 18th IEEE Symp. on Foundations of Computer Science (1977), 162--170.}}
\ref[\Pre] {\refauth{Preparata, F. P.}
\reftit{A new approach to planar point location.}
\refpub{SIAM Journal on Computing, vol. 10 (1981), 473--482.}}
\ref [\PreSup] {\refauth{Preparata, F. P., and Supowit, K. J.}
\reftit{Testing a simple polygon for monotonicity.}
\refpub{Information Processing Letters, vol. 12 (1981), 161--164.}}
\ref[\Sha] {\refauth{Shamos, M. I.}
\reftit{Geometric complexity.}
\refpub{Proc. 7th ACM Symp. on Theory of Computing (1975), 224--233.}}
\ref[\Tar] {\refauth{Tarjan, R. E.}
\reftit{Depth first search and linear graph algorithms.}
\refpub{SIAM Journal on Computing, vol. 1 no. 2 (1972), 146--160.}}
\ref[\Wi] {\refauth{Willard, D. E.}
\reftit{New data structures for orthogonal queries.}
\refpub{To appear in SIAM Journal on Computing.}}
\vfill\eject
\section{Appendix: Possible exercises}
\itemno{1.} For tree $\T$: $\father(x) \gets [y \gets (x \oplus (x-1)) + 1; \father \gets (x \or 2*y) \and \lnot y]$
\itemno{2.} do more carefully the curved edge version
\itemno{3.} find a way to remove the gap tests: show that if in the region numbering we always resolve ambiguities by $y$ dominance, then no gap in a chain can switch from being a left child to a right child more than once; so compute these break values, break the gaps, and remove the gap tests.
\itemno{4.} Use the Edelsbrunner non-unique numbering and show that it uses the smallest number of labels.
\itemno{5.} The $\LCA$ formula of section 11 becomes slightly faster if we reverse the bit sequences of all indices in algorithms 1 and 2. That is, we use the variables $i\pr=\REV(i)$, $j\pr=\REV(j)$, and $\lscr\pr=\REV(\lscr)$ instead of $i$, $j$, and $\lscr$. Then in step 3 we can compute $\lscr\pr\gets\REV(\LCA(i, j))$ by doing
$$\eqalign{x|\gets i\pr\xor j\pr\cr
\lscr\pr|\gets (x\xor (x-1))\land j\pr.\cr}$$
\par\vfill\eject\end