\input PapWHdr.tex
\begfile{MonSep.tex of March 18, 1983 4:15 PM --- Stolfi}
\inputtty % your chance to \def\printfilestamp{F}
\paper{Finding Separators in a Monotone Subdivision}
\def\paptitle{Finding separators}
\author{Leo J. Guibas and Jorge Stolfi}
\institution{Xerox PARC and Stanford University}
\support{The work of Jorge Stolfi, who is on leave from the University of S{\cmr\s\rm a}o Paulo (S{\cmr\s\rm a}o Paulo, Brazil), was partially supported by a grant from Conselho Nacional de Desenvolvimento Cient{\cmr\'\i}fico e Tecnol{\cmr\'\rm o}gico (CNPq).}
\abstract{The Lee-Preparata algorithm for point location in a monotone subdivision requires a suitable family of separating chains. We show how to construct such a family in $O(m)$ time for a subdivision with $m$ edges.}
\abstract{{\bf Keywords:} point location, monotone polygons, planar graphs, partial order, graph traversal, computational geometry.}
\def\Kir{Ki}
\def\LeePre{LP}
\def\GuiSto{GS}
\def\Knu{Kn}
\def\Har{H}
\def\PreSup{PS}
\def\Thei{1}
\def\Theii{2}
\def\Theiii{3}
\def\Theiv{4}
\def\Thev{5}
\def\Thevi{6}
\def\Thevii{7}
{\it Point location} is one of the fundamental problems of computational geometry. In the two-dimensional case, we are given a subdivision of the plane into two or more polygonal regions, and then asked to determine which of those regions contains a given query point $p$. If the same subdivision is going to be used for a large number of queries, as is often the case, we can reduce the total cost of this task by preprocessing the subdivision into a data structure suitable for the search.
The algorithm that has the best asymptotic running time so far is one due to D. Kirkpatrick [\Kir], which for a subdivision with $m$ edges and triangular regions uses $O(m)$ storage, $O(m)$ preprocessing time, and thereafter can answer each query in $O(\log m)$ time. However, for problems of moderate size its cost may be greater than that of asymptotically slower but simpler methods.
One of these algorithms, due to D. T. Lee and F. Preparata [\LeePre], requires $O(m)$ storage, $O(m\log m)$ preprocessing time, and $O(\log↑2 m)$ time per query. The constants of proportionality in these expressions are quite small, and the algorithms, particularly the query one, are simpler than those of Kirkpatrick. While Kirkpatrick's algorithm requires the regions to be decomposed into triangles, that of Lee and Preparata works for regions of a fairly general shape (monotone polygons, which include the convex ones).
We present here an improved $O(m)$ preprocessing algorithm for Lee and Preparata's method, and also describe in detail the implementation of their point location algorithm, adapted to our data structures and notation. As usual, the $O(m)$ time bound is derived under the unit cost model, in which it is assumed that a number with $O(\log m)$ bits can be represented in $O(1)$ storage, and any arithmetic operation on such numbers can be performed in $O(1)$ time. Under the more rigorous $\log$-cost complexity model, all time and space bounds in this paper and in the main references cited above should be multiplied by an extra $\log m$ factor.
\section{1. Monotone polygons and subdivisions}
\def\proj#1{\Pi #1}
An {\it interval} is a convex subset of a straight line, i.e. the whole line, a ray, a segment, a single point, or the empty set. An interval is {\it proper} if it contains more than one point, and is {\it closed} or {\it open} depending on whether or not it contains its endpoints (if any). A subset of the plane is said to be {\it monotone} if its intersection with any vertical line is a single interval (possibly empty). Every convex polygon is monotone, but the converse is obviously false.
We define a {\it polygon} as an open and simply connected subset of the plane whose boundary can be decomposed into a finite number of points ({\it vertices}) and open intervals ({\it edges}). Note that according to these definitions polygons may have infinite extent. A {\it subdivision} is a partition of the plane into a finite number of disjoint polygons ({\it regions}), edges, and vertices; these parts are collectively called the {\it elements} of the subdivision. It is not hard to see that every edge of a subdivision is on the boundary between two regions (not necessarily distinct), that every vertex is incident to some edge, that every endpoint of an edge is a vertex, and that every region (unless there is only one) has some edge on its boundary. From these facts and from Euler's theorem we conclude that a subdivision with $m$ edges has $O(m)$ vertices and regions, justifying our use of $m$ as the measure of problem complexity.
A subdivision is said to be {\it monotone} if all its regions are monotone and it has no vertical edges. The last condition is not essential: it is imposed only to simplify the proofs and algorithms. The figure below illustrates a monotone subdivision (arrowheads denote vertices at infinity).\fig4
With minor caveats and modifications, monotone subdivisions include many interesting subcases, like triangulations, subdivisions of the plane into convex pieces, and so forth. It is not hard to see that in a monotone subdivision every edge is incident to exactly two distinct regions. The lemma below shows that monotone subdivisions are precisely the ``regular planar straight-line graphs'' as defined by Lee and Preparata [\LeePre]:
\lemma{\Thei}{A subdivision with no vertical edges is monotone if and only if every vertex is incident to at least two distinct edges, one at its left and one at its right.}
\proof{First, assume there is a vertex $v$ such that all edges incident to it lie at the right of $v$. Let $R$ be the region that is first encountered by moving down from $v$; it must also be the same region we encounter first when moving up from $v$. Then $R$ is not monotone, for its intersection with the vertical line passing through $v$ consists of at least two distinct intervals separated by $v$. A similar argument holds if all edges incident to $v$ lie to its left.
\mfig{3.9}
\pp For the converse, assume $R$ is a region that is not monotone, and let $\lscr$ be a vertical line whose intersection with $R$ consists of two or more components. Since $R$ is connected, any two components $I↓1$, $I↓2$ of the intersection are connected by a simple path $\pi$ in $R$. We can always choose $I↓1$, $I↓2$ and $\pi$ so that $I↓1$ is above $I↓2$, and $\pi$ does not meet $\lscr$ except at its endpoints $p↓1\in I↓1$ and $p↓2\in I↓2$. Then $\pi$ and the interval $[p↓1\span p↓2]$ delimit a closed and bounded region $S$ of the plane.
\endmfig
\pp Suppose $\pi$ lies to the left of $\lscr$. The boundary of $R$ must enter $S$ at the lower endpoint of $I↓1$ (and also at the upper endpoint of $I↓2$). Since the boundary cannot meet the path $\pi$, there must be some vertex of the subdivision in the interior of $S$. Let $v$ be the leftmost such vertex; clearly all edges incident to $v$ lie to the right of it, and we have proved the lemma. A similar argument holds if $\pi$ lies to the right of $\lscr$.}
Therefore, we can check whether any subdivision is monotone by enumerating all the edges incident to each vertex, and checking whether they satisfy the condition of lemma {\Thei}. Each edge will be examined at most twice, and each vertex once, so this algorithm runs in time $O(m)$ for a subdivision with $m$ edges. In fact, it is possible to determine within this time bound whether there is any rotation of the subdivision that makes it monotone, as was shown by Preparata and Supowit [\PreSup]. Lee and Preparata [\LeePre] have shown also that an arbitrary subdivision with $m$ edges can be transformed into a monotone subdivision having $\leq 2m$ edges in $O(m\log m)$ time.
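The check just described can be sketched as follows (a minimal illustration, not the authors' code; the input format, a map from each finite vertex to the $x$-displacements of its incident edges, is our own assumption, and vertices at infinity are simply omitted from the map):

```python
def satisfies_lemma_1(incident):
    """Check the condition of Lemma 1 in O(m) time.

    `incident` maps each (finite) vertex to the list of x-displacements
    of its incident edges, i.e. the other endpoint's x-coordinate minus
    the vertex's own.  The subdivision is monotone iff no edge is
    vertical and every vertex has an edge on each side."""
    for v, dxs in incident.items():
        if any(dx == 0 for dx in dxs):
            return False        # vertical edge: not allowed
        if not any(dx < 0 for dx in dxs) or not any(dx > 0 for dx in dxs):
            return False        # all incident edges on one side of v
    return True
```

For instance, a vertex all of whose incident edges extend to its right makes the test fail, exactly as in the first half of the proof above.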
\section{2. The vertical ordering}
\def\below{\lsls}
\def\above{\grgr}
\def\adjbel{\prec}
\def\adjabo{\succ}
\def\S{\hbox{\bf S}}
\mfig4
We denote by $\proj{A}$ the projection of a set $A$ on the $x$-axis. The projection $\proj{R}$ of a monotone polygon on the $x$-axis is an open interval, whose endpoints, if any, are the projections of at most two {\it extremal vertices} of $R$. The {\it frontier} of $R$ is the boundary of $R$ minus its extremal vertices. This frontier consists of two disjoint pieces, the {\it top} and the {\it bottom} of the region $R$. Each is either empty, or a connected polygonal line, which may extend to infinity in one or both directions.
\endmfig
\mfig4
Given two subsets $A$ and $B$ of the plane, we say that $A$ is above $B$ ($A\above B$), and $B$ is below $A$, iff for every pair of points $(x, y↓a) \in A$ and $(x, y↓b)\in B$ we have $y↓a \geq y↓b$, with strict inequality holding at least once. For the rest of this section, we will restrict $\below$ to the elements of a fixed monotone subdivision $\S$, with $n$ regions and $m$ edges. Then $\below$ has the following properties:
\endmfig
\lemma{\Theii}{For any two elements $A$ and $B$, $A\above B$ if and only if $A\neq B$ and there is some vertical line segment with lower endpoint in $B$ and upper endpoint in $A$.}
\proof{The ``only if'' part is obvious from the definition. For the converse, assume there is such a segment with endpoints $(x, y↓a)\in A$ and $(x, y↓b)\in B$, $y↓a>y↓b$, but suppose $A\not\above B$. Then there would be two other points $(x\pr, y↓a\pr)\in A$ and $(x\pr, y↓b\pr)\in B$ such that $y↓a\pr<y↓b\pr$. Let us first suppose $x=x\pr$: either we have $y↓a\pr<y↓b<y↓a$, or $y↓b<y↓a\pr<y↓b\pr$. Since all elements of a monotone subdivision are monotone, a vertical segment with both endpoints in the same element is entirely contained in that element. Therefore in the first case we conclude that $(x, y↓b)\in A$, and in the second that $(x\pr, y↓a\pr)\in B$, contradicting the fact that $A\inte B=\ety$.
\mfig5
\pp Now suppose $x\neq x\pr$. Since $A$ and $B$ are connected, there is a simple path $\pi↓a$ in $A$ that connects $(x, y↓a)$ to $(x\pr, y↓a\pr)$, and an analogous one $\pi↓b$ for $B$. There are pieces $\gamma↓a$ of $\pi↓a$ and $\gamma↓b$ of $\pi↓b$ that connect the vertical line at $x$ to the vertical line at $x\pr$ while staying always between the two. Since $A$ and $B$ are monotone, the endpoint $p↓a$ of $\gamma↓a$ at $x$ must be above the endpoint $p↓b$ of $\gamma↓b$ at $x$; by the same argument, the endpoints $p↓a\pr$ and $p↓b\pr$ of the two paths at $x\pr$ must be in the opposite order. We conclude the two paths must cross at some point $q$; but then $q\in A\inte B$, a contradiction.}
\endmfig
\lemma{\Theiii}{Any two elements $A$ and $B$ satisfy exactly one of $A=B$, $A\below B$, $A\above B$, or $\proj{A}\inte\proj{B}=\ety$.}
\proof{Immediate from the definition and lemma {\Theii}.}
\mfig4
If three elements $A$, $B$ and $C$ are intercepted by a common vertical line, then from $A\below B$ and $B\below C$ we can conclude $A\below C$. Therefore, the relation $\below$ is transitive when restricted to all the elements intercepted by a given vertical line; indeed, in that case it is a linear order. In general transitivity does not hold; however, the following is true:
\endmfig
\lemma{\Theiv}{The relation $\below$ is acyclic.}
\proof{Suppose that is not true. Let $E↓0 \below E↓1 \below E↓2 \below \ldots \below E↓k=E↓0$ be a cycle of $\below$ with minimum length, where each $E↓i$ is an element of the subdivision $\S$. From lemma {\Theiii}, it follows immediately that $k>2$.
\pp The $x$-projections of any two consecutive elements $E↓i$ and $E↓{i+1}$ in this cycle must have some abscissa $x↓i$ in common. If for some $i$ we had $\proj{E↓i}\subset \proj{E↓{i-1}}$, then the vertical line through $x↓i$ would meet $E↓{i-1}$, $E↓i$, and $E↓{i+1}$; transitivity would hold, and we would have $E↓{i-1}\below E↓{i+1}$.\fig3 But then we could omit $E↓i$ and still get a cycle, contradicting the assumption that $k$ is minimum. For analogous reasons, we cannot have $\proj{E↓i}\subset \proj{E↓{i+1}}$.
\pp Let $E↓i$ be one of the ``leftmost'' cycle elements, i.e. such that none of the intervals $\proj{E↓j}$ contains a point to the left of $\proj{E↓i}$. The projections $\proj{E↓{i-1}}$ and $\proj{E↓{i+1}}$ of its neighbors must both meet $\proj{E↓i}$, and, by what we have just said, extend beyond its right endpoint. But then there would again be a vertical line meeting all three elements, implying $E↓{i-1}\below E↓{i+1}$ and contradicting the minimality of $k$. We conclude that no such cycle exists.}
\mfig4
We say that a region $A$ is {\it immediately above} a region $B$ ($A\adjabo B$) if $A\above B$ and the frontiers of the two regions have at least one common edge. Clearly, $\adjabo$ implies $\above$, but the converse is not true in general. The following result may then seem somewhat surprising:
\endmfig
\lemma{\Thev}{The transitive closure of $\below$ is the same as that of $\adjbel$.}
\proof{Clearly, since $\adjbel$ implies $\below$, the same will be true of their transitive closures, so we only have to prove the other direction.
\pp If $B\below A$, for every $x\in \proj{A}\inte\proj{B}$ there is a vertical segment with abscissa $x$, lower endpoint in $B$, and upper endpoint in $A$. We can always choose $x$ so that the segment passes through no vertex of $\S$. The regions $B, C↓1, C↓2, \ldots, C↓k, A$ intercepted by this segment, from bottom to top, clearly satisfy $B\adjbel C↓1\adjbel C↓2 \adjbel \ldotss \adjbel C↓k\adjbel A$. We conclude that $\below$, and therefore its transitive closure, imply the transitive closure of $\adjbel$.}

\section{3. Separators}
\def\index#1{\# #1}
\def\aboreg#1{\mathop{\char a\char b\char o\char v\char e}(#1)}
\def\beloreg#1{\mathop{\char b\char e\char l\char o\char w}(#1)}
A {\it separator} for $\S$ is a polygonal line $s$, consisting of vertices and edges of $\S$, with the property that it meets every vertical line at exactly one point. Since $s$ extends from $x=-\infty$ to $x=+\infty$, any element of $\S$ that is not part of $s$ is either above or below it.\fig3
The elements of $s$ have pairwise disjoint projections on the $x$-axis, and so can be ordered from left to right; the first and last elements are infinite edges.
A {\it complete family of separators} for a monotone subdivision $\S$ with $n$ regions is a sequence of $n-1$ distinct separators $s↓1\below s↓2 \below \ldotss \below s↓{n-1}$. It is clear that there must be at least one region between any two consecutive separators, and also one below $s↓1$ and one above $s↓{n-1}$. If a region is below a separator $s↓i$, it must also be below any separator $s↓j$ with $j>i$. We can conclude that, if $\S$ admits a complete family of separators, its regions can be enumerated as $R↓0, R↓1, \ldots, R↓{n-1}$ in such a way that $R↓i\below s↓j$ if and only if $i< j$; in particular,
$$R↓0\below s↓1 \below R↓1 \below s↓2 \below \ldots \below s↓{n-1} \below R↓{n-1}.\eqno(1)$$
Given a complete family of separators and an enumeration of the regions as in (1), let us denote by $\index{R}$ the index of a region $R$ in the enumeration. Then $s↓{\index R}\below R\below s↓{\index R + 1}$. Also, if we let $\aboreg{e}$ be the region above the edge or vertex $e$, and $\beloreg e$ the one below it, then the following property holds:
\lemma{\Thevi}{If $i=\index{\beloreg e}$ and $j=\index{\aboreg e}$, then the separators containing $e$ will be exactly $s↓{i+1}, s↓{i+2}, \ldots, s↓j$.}
\proof{The edge $e$ will belong to a separator $s↓k$ if and only if the latter lies above $R↓i$ and below $R↓j$, i.e. if and only if $i< k\leq j$.}

\vskip0pt plus50pt
\section{4. The point location algorithm}
\def\T{\hbox{\bi T\/}}
\def\LCA{\mathop{\char l\char c\char a}}
\def\MSB{\mathop{\char m\char s\char b}}
\def\REV{\mathop{\char r\char e\char v}}
Later on we will show that every monotone subdivision admits a complete family of separators, and we will give efficient algorithms for constructing it. Let us therefore assume we have such a family $s↓1, s↓2, \ldots, s↓{n-1}$, with a corresponding enumeration of the regions $R↓0, R↓1, \ldots, R↓{n-1}$ satisfying (1); we will show next how they can be used to determine, in time $O(\log n\log m)$, the unique element of $\S$ that contains a given point $p$.
The algorithm we will describe is essentially that of Lee and Preparata [\LeePre], and uses two levels of binary search. The inner loop takes a separator $s↓i$ (as a linear array of edges and vertices, sorted by $x$-coordinate), and determines by binary search an edge or vertex $e$ of $s↓i$ whose $x$-projection contains the abscissa of $p$. By testing $p$ against $e$, we will know whether $p$ is above $s↓i$ or below $s↓i$ (or, possibly, on $e$ itself, in which case the search terminates). The outer loop performs binary search on $i$, so as to locate $p$ between two consecutive separators $s↓i$ and $s↓{i+1}$, that is to say in the region $R↓i$.
Besides the separators, the algorithm assumes the index $\index R$ can be obtained for any given region $R$, and similarly the adjacent regions $\aboreg e$ and $\beloreg e$ can be obtained from an edge $e$, all in constant time. We will see that the construction of these tables is part of the process of constructing the family of separators. The search algorithm uses these tables to reduce substantially the storage requirements (and also to speed up the search a little).
Let $\T$ be the infinite, complete binary search tree with internal nodes $1, 2, 3, \ldots$ and leaves $0, 1, 2, 3, \ldots$ (see figure).\fig6 We define $\LCA(i, j)$ to be the {\it lowest common ancestor} in $\T$ of the leaves $i$ and $j$, that is, the root of the smallest subtree of $\T$ that contains both $i$ and $j$. This tree is used as a ``flowchart'' for the outer loop of the search algorithm, with the convention that each internal node $i$ represents a test of $p$ against the separator $s↓i$, and each leaf $j$ represents the output ``$p$ is in $R↓j$''.
\algorithm{1}{Point location in a monotone subdivision.}{
\begblock
\step {\sno1 Set $i\gets 0$, $j\gets n-1$.}
\step {\sno2 While $i<j$, do:}
\begblock
\comm {At this point we know $p$ is above the separator
$s↓i$ and below $s↓{j+1}$ (whenever those separators exist).
That is, $p$ is in one of the regions
$R↓i, R↓{i+1}, \ldots, R↓j$, or in some edge or vertex
between those two separators. At each iteration of this loop,
either $i$ increases or $j$ decreases, and their lowest common
ancestor in $\T$ moves down at least one level.}
\step {\sno3 Set $\lscr\gets \LCA(i, j)$.}
\step {\sno4 Find (by binary search) an edge
or vertex $e$ in $s↓\lscr$ such that $\proj{p}\in \proj{e}$.
Let $a=\index{\aboreg e}$, $b=\index{\beloreg e}$.}
\comm {By lemma {\Thevi}, $e$ belongs also to
the separators $s↓{b+1}, s↓{b+2}, \ldots, s↓a$.
Therefore, by testing $p$ against $e$ we
conclude it is either below $s↓{b+1}$ or above $s↓a$.}
\step {\sno5 If $p$ is below $e$, set $j\gets b$;
if $p$ is above $e$, set $i\gets a$; else
terminate the search ($p\in e$).}
\endblock
\step {\sno6 Terminate the search ($s↓i\below p\below s↓{i+1}$, i.e., $p\in R↓i$).}
\endblock}
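For concreteness, Algorithm 1 can be transcribed into executable form as follows. This is a sketch only: the data layout (a table of separator slices sorted by $x$, accessor functions for the adjacent region indices, and an evaluator giving an element's height at a given abscissa) is our own invention, not the paper's actual structure.

```python
from bisect import bisect_right

def lca(i, j):
    """Lowest common ancestor of leaves i < j of T: keep the bits of j
    above the highest bit in which i and j differ, clear the rest."""
    k = i ^ j
    while k & (k - 1):
        k &= k - 1              # clear low bits until only the top one remains
    return j & ~(k - 1)

def locate(p, n, sep, above, below, y_of):
    """Algorithm 1.  sep[l] lists the elements stored with separator s_l
    as (x_left, x_right, data) triples sorted by x; above(e), below(e)
    give the indices of the adjacent regions; y_of(e, x) evaluates the
    height of element e at abscissa x."""
    x, y = p
    i, j = 0, n - 1
    while i < j:                # invariant: p is above s_i, below s_{j+1}
        l = lca(i, j)
        elems = sep[l]
        k = bisect_right([xl for (xl, xr, d) in elems], x) - 1
        e = elems[k]            # the element whose projection contains x
        ye = y_of(e, x)
        if y < ye:
            j = below(e)        # p is below e, hence below s_{below(e)+1}
        elif y > ye:
            i = above(e)        # p is above e, hence above s_{above(e)}
        else:
            return ('on', e)
    return ('in', i)            # p lies in region R_i
```

On the degenerate subdivision of the plane into $n$ horizontal strips (separator $s↓l$ being the line $y=l$), this reduces to an ordinary binary search on the $y$-coordinate.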
If we were to represent each separator independently as a linear array with all of its edges and vertices, we would have to store $n-1$ chains, whose average length can be as large as $\Theta(n)$ for some classes of subdivisions. So, the storage requirement of this basic method is $\Theta(n↑2)$ in the worst case. However, after $p$ has been tested against the edge $e$ in step 5, $i$ and $j$ are updated in such a way that we will never again look at the edges of any other separator that contains $e$. Therefore, if an edge $e$ appears in some separator $s↓r$, it need not be included in any separator that is a proper descendant of $s↓r$ in $\T$.
But this in turn implies that each edge or vertex $e$ has to be stored only once; for, if $e$ belongs to the distinct separators $s↓r$ and $s↓t$, by lemma {\Thevi} it belongs also to any separator in between the two, and therefore also to the root of the smallest subtree of $\T$ that contains both. In fact, if $e$ is in the common frontier of regions $R↓i$ and $R↓j$, it needs to be stored only in $s↓\lscr$, where $\lscr$ is the lowest common ancestor of $i$ and $j$. The total storage required by this method is therefore only $O(m)$.
\mfig{5.6}
The binary search along each separator $s↓i$ can be performed in $O(\log m)$ time, if the edges of $s↓i$ are stored in a linear array or balanced binary search tree sorted from left to right. As we shall see, $\LCA(i,j)$ can be computed quite easily in $O(\log n)$ (actually, in $O(1)$) operations; therefore, each iteration of the outer loop requires $O(\log m)$ time. At the first iteration, the internal node $\LCA(i,j)$ of $\T$ is at level $\lceil \log n\rceil$; at each iteration it descends at least one level, so we have $O(\log n)$ iterations, and a total time bound of $O(\log n\log m)=O(\log↑2 m)$. This bound is tight: the figure at right shows a subdivision with $m$ edges that realizes it. This example has ${\sqrt m}+1$ regions and a family of $\sqrt m$ disjoint separators with $\sqrt m$ edges each.
\endmfig

\section{5. The lowest common ancestor function}
\def\r{\nu}
\def\f#1{\hbox{\hskip1pt {\fx #1}\hskip1pt}}
The problem of computing the function $\LCA(i, j)$, the lowest common ancestor of two leaves $i$ and $j$ of $\T$, is quite interesting in itself. This function has a simple interpretation in terms of the binary representations of $i$ and $j$: if we have
$$i=(a↓{\r-1}\,a↓{\r-2}\,\ldots\, a↓{k+1}\,\f0\,a\pr↓{k-1}\,\ldots\, a\pr↓1\,a\pr↓0)↓2$$
and
$$\null\,j=(a↓{\r-1}\,a↓{\r-2}\,\ldots\, a↓{k+1}\,\f1\,a\prr↓{k-1}\,\ldots\, a\prr↓1\,a\prr↓0)↓2,$$
where $\r=\lceil\lg n\rceil$ is the number of bits needed to represent any number from $0$ through $n-1$, then
$$\LCA(i,j)=(a↓{\r-1}\,a↓{\r-2}\,\ldots\, a↓{k+1}\,\f1\,\f0\,\ldots\,\f0\,\f0)↓2.$$
Loosely speaking, $\LCA(i, j)$ is the longest common prefix of $i$ and $j$, followed by $\f1$ and padded with zeros. From the recursive formula below
$$\LCA(i, j)=\alter{$j$|if $j=i+1$,\cr\vc
$2\LCA(\lfloor i/2\rfloor,\lfloor j/2\rfloor)$|otherwise\cr}$$ we get a straightforward ``bottom-up'' algorithm requiring $O(\log n)$ iterations in the worst case.
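The recursion unrolls into a short loop (a sketch with names of our own choosing): halve both leaf numbers until they become consecutive, then shift the answer back up by the number of levels climbed.

```python
def lca_bottom_up(i, j):
    """Bottom-up LCA of leaves i < j of T, from the recursive formula:
    LCA(i, j) = j if j = i + 1, and 2 * LCA(i // 2, j // 2) otherwise."""
    shift = 0
    while j != i + 1:
        i, j = i // 2, j // 2   # move both leaves one level up the tree
        shift += 1
    return j << shift           # multiply back by 2**shift
```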
A ``top-down'' version can be obtained by starting at the node $\lscr\gets \LCA(0, n-1)=2↑{\r-1}$, and proceeding down the tree $\T$ one level at a time. During the descent, we keep track of the difference $u$ between $\lscr$ and its parent; at the beginning $u$ is also equal to $2↑{\r-1}$. At each step, we first divide $u$ by two; then the left child of $\lscr$ is $\lscr-u$, and the right child is $\lscr+u$. We descend to the left when $\lscr>j$, to the right when $\lscr\leq i$, and we stop when $i<\lscr\leq j$. The starting value $2↑{\r-1}$ for $u$ and $\lscr$ is a constant that has to be computed only once, as part of the preprocessing algorithm. In the case of algorithm 1, the values of $\lscr$ and $u$ resulting from one call to $\LCA$ can be used as starting values for the next call; i.e., the evaluations of $\LCA$ can be computed incrementally, so their {\it total} cost is $O(\log n)$. This is precisely the version used by Lee and Preparata in their original paper.
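The top-down descent can be sketched as follows (non-incremental for clarity: each call restarts at the root; the names are ours). We stop only once $i<\lscr\leq j$, since a node with $\lscr\leq i$ would correspond to a separator already known to lie below $p$.

```python
def lca_top_down(i, j, n):
    """Top-down LCA of leaves i < j of T, for leaf numbers 0 .. n-1.
    Starts at the root 2**(r-1) and walks down one level per step."""
    r = max(n - 1, 1).bit_length()   # bits needed to write 0 .. n-1
    l = u = 1 << (r - 1)             # u is the gap between l and its parent
    while not (i < l <= j):
        u //= 2
        l = l - u if l > j else l + u   # left child l-u, right child l+u
    return l
```

In the incremental use suggested in the text, $\lscr$ and $u$ would be kept between calls instead of being reset to the root.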
This is more than enough to give an $O(\log↑2 n)$ time bound for algorithm 1. Since step 4 takes $O(\log n)$ time, the overall time bound will not be reduced by any further improvements in the computation of $\LCA$. However, for the preprocessing algorithm the evaluation of $\LCA$ turns out to be an asymptotic bottleneck, so we will spend the rest of this section showing how it can be computed in $O(1)$ time. A solution for this problem, in a much more general setting, has been published by Dov Harel [\Har]. The regular structure of $\T$ allows us to greatly simplify the problem.
An efficient formula for computing $\LCA(i, j)$ is based on the function $\MSB(k)=\LCA(0,k)=2↑{\lfloor\lg k\rfloor}$, the {\it most significant bit of $k$}. The values of this function for $k=1, 2, \ldots, n-1$ are $1, 2, 2, 4, 4, 4, 4, 8, 8, \ldots, 2↑{\r-1}$; these values can be easily precomputed in $O(n)$ time and stored in a table with $n-1$ entries, so we can compute $\MSB(k)$ in $O(1)$ time. Then we can express the $\LCA$ function as
$$\LCA(i,j)=j\land\lnot(\MSB(i\xor j)-1)\eqno(2)$$
where $\xor$, $\land$, and $\lnot$ are the boolean operations of bitwise ``exclusive or'', conjunction, and complement\foot1{We assume that those boolean operations can be computed in $O(1)$ time, like addition and subtraction. We feel their inclusion in the model is justified, since their theoretical complexity is no greater than that of addition, and most real machines have such instructions.\ftpar
In fact, some machines even have instructions for computing $\lfloor\lg k\rfloor$ from $k$ (``find first $\f1$ bit'') and $2↑q$ from $q$ (``shift left $q$ bits''), in a single execution cycle.}. For use in this formula, it is preferable to tabulate $\MSB(k)-1$ instead of $\MSB(k)$.
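The table-driven evaluation can be sketched as follows (illustrative names; the table is built once during preprocessing):

```python
def msb_table(n):
    """msb[k] = largest power of two not exceeding k, for k = 1 .. n-1,
    built in O(n) time; entry 0 is unused."""
    msb = [0] * n
    for k in range(1, n):
        # k is a power of two exactly when k & (k - 1) == 0
        msb[k] = k if k & (k - 1) == 0 else msb[k - 1]
    return msb

def lca_msb(i, j, msb):
    """Formula (2): clear in j every bit below the highest bit in which
    i and j differ; that bit itself (set in j, since i < j) survives."""
    return j & ~(msb[i ^ j] - 1)
```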
Another way of computing $\LCA$ is based on the bit-reversal function $\REV(k)$, which reverses the order of the last $\r$ bits in the binary expansion of $k$. For example, for $n=16$ and $k=0, 1, \ldots, 15$, the values of $\REV(k)$ are $0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15$. Using this function we get the formula $$\LCA(i,j)=\REV(x\xor (x-1))\land j,$$where $x=\REV(i)\xor\REV(j)$. This formula becomes slightly faster than (2) if we reverse the bit sequences of all indices in algorithm 1. That is, we use the variables $i\pr=\REV(i)$, $j\pr=\REV(j)$, and $\lscr\pr=\REV(\lscr)$ instead of $i$, $j$, and $\lscr$. Then in step 3 we can compute $\lscr\pr\gets\REV(\LCA(i, j))$ by doing
$$\eqalign{x|\gets i\pr\xor j\pr\cr
\lscr\pr|\gets (x\xor (x-1))\land j\pr.\cr}$$
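In code (a sketch; the bit reversal itself is written naively here, whereas on the machines mentioned above it would be a table lookup or a single instruction):

```python
def rev(k, r):
    """Reverse the low r bits of k."""
    x = 0
    for _ in range(r):
        x = (x << 1) | (k & 1)
        k >>= 1
    return x

def lca_rev(ip, jp):
    """Reversed-index LCA: given ip = rev(i) and jp = rev(j), return
    rev(LCA(i, j)).  x ^ (x - 1) isolates the lowest set bit of x
    together with all bits below it; in the reversed encoding this
    plays the role of the prefix mask ~(MSB(i xor j) - 1) of (2)."""
    x = ip ^ jp
    return (x ^ (x - 1)) & jp
```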

\section{6. Existence of a complete family of separators}
It is a well-known result [\Knu] that any acyclic relation over a finite set can be extended to a linear (or total) order. Therefore, by using lemma {\Theiv} we can conclude that there is an enumeration $R↓0, R↓1, \ldots, R↓{n-1}$ of the regions of $\S$ that is compatible with $\below$, i.e. such that $R↓i\below R↓j$ implies $i<j$. Furthermore, any enumeration $R↓0, R↓1, \ldots, R↓{n-1}$ of the regions that is compatible with $\adjbel$ is also compatible with $\below$, and vice-versa.
Since a region with nonempty bottom frontier is immediately above some other region, and therefore not minimal under $\adjbel$, the first region in any such enumeration has no bottom frontier, and extends infinitely in the ``down'' direction. Since vertical edges are not allowed, its $x$-projection is the whole $x$-axis. Similarly, the last region $R↓{n-1}$ is the only one with no top frontier, and also projects onto the whole $x$-axis. Therefore, we always have $R↓0\below R↓i\below R↓{n-1}$ for $0<i<n-1$.
We are ready to prove the main result of this section:
\theorem{\Thevii}{Every monotone subdivision $\S$ admits a complete family of separators.}
\proof{Let $R↓0, R↓1, \ldotss, R↓{n-1}$ be a linear ordering of the regions of $\S$ that is compatible with the $\below$ relation, i.e. $R↓i\below R↓j$ only if $i<j$. For $i=1, 2, \ldots, n-1$, let $s↓i$ be the ``boundary'' between the regions with indices $< i$ and those with indices $\geq i$. That is, a vertex or edge $e$ belongs to $s↓i$ if and only if it is on the frontiers of two regions $R↓k$ and $R↓j$ with $k< i \leq j$.\fig3
\pp Now consider any vertical line $\lscr$, and let $R↓{i↓1}, R↓{i↓2}, \ldots, R↓{i↓q}$ be the regions it meets, from bottom to top. Since $\lscr$ meets $R↓0$ and $R↓{n-1}$, and $R↓{i↓1}\below R↓{i↓2}\below \ldots\below R↓{i↓q}$, we will have $0=i↓1<i↓2<\ldots<i↓q=n-1$. Therefore, there is exactly one point on $\lscr$ that is on the frontier between a region with index $< i$ and a region with index $\geq i$, that is, on $s↓i$. Furthermore, the intersection of $\lscr$ with $s↓i$ will be equal to or above that with $s↓{i-1}$, and for some such lines (those that meet $R↓{i-1}$) the two intersections will be distinct. So, we have $s↓1\below s↓2\below \ldots\below s↓{n-1}$.
\mfig5
\pp Clearly, the elements of $s↓i$ have disjoint $x$-projections, and therefore can be ordered from left to right; they must be alternately edges and vertices, the first and last being infinite edges. To prove that $s↓i$ is a separator, it remains only to show that $s↓i$ is connected; if that were not the case, there would be some vertex $v$ of $s↓i$ that is distinct from, but has the same $x$-coordinate as, an endpoint $u$ of an adjacent edge $e$ of $s↓i$. But then we would have $u\below R\below v$ (or vice-versa), for some region $R$; and therefore $e\below R\below v$ (or vice-versa), contradicting the construction of $s↓i$. Therefore, each $s↓i$ is a separator, and $s↓1, s↓2, \ldots, s↓{n-1}$ is a complete family of them.}
\endmfig

\section {7. Constructing a complete family of separators}
\def\Z{\hskip2pt }
\def\lastin#1{\mathop{\char l\char i\char n}(#1)}
\def\lastout#1{\mathop{\char l\char o\char u\char t}(#1)}
\def\firstin#1{\mathop{\char f\char i\char n}(#1)}
\def\firstout#1{\mathop{\char f\char o\char u\char t}(#1)}
\def\onext#1{\mathop{\char o\char n\char e\char x\char t}(#1)}
\def\oprev#1{\mathop{\char o\char p\char r\char e\char v}(#1)}
\def\dnext#1{\mathop{\char d\char n\char e\char x\char t}(#1)}
\def\good#1{\mathop{\char f\char o\char r\char w\char a\char r\char d}(#1)}
\def\org#1{\mathop{\char o\char r\char g}(#1)}
\def\dest#1{\mathop{\char d\char e\char s\char t}(#1)}
\def\sym#1{\mathop{\char s\char y\char m}(#1)}
\def\etab{\mathop{edge}}
\def\bsep{\mathop{sep}}
\def\Hor{S}
\def\Ver{{S↑\ast}}
\def\root{g}
\def\G{G}
We proceed now to the description of an efficient algorithm for constructing a complete family of separators for a given monotone subdivision $\S$. As suggested by the proof of theorem {\Thevii}, the first pass of our algorithm enumerates the regions in a linear order compatible with $\below$. A second pass enumerates the edges and vertices of $\S$, in such a way that the edges and vertices of each separator are visited in order of increasing $x$-coordinate (even though this may not be true for elements belonging to different separators). Therefore, by just appending each visited element to its appropriate separator, we will get the sorted lists required by the binary search in algorithm 1.
If we add two dummy vertices at $x=\pm \infty$, and orient every edge of $\S$ from right to left, we obtain a planar embedding of a directed, acyclic graph $\Hor$ with exactly one source and one sink (solid lines in the figure below). The enumeration we need in the second pass is basically a {\it post-order traversal} of this graph, in which we visit a vertex only after visiting all its outgoing edges, and we visit an edge only after visiting its destination.\fig5
Consider now the dual graph $\Ver$ whose vertices are the regions of $\S$, and where for each edge $e$ of $\S$ there is an edge $e\pr$ from $\aboreg e$ to $\beloreg e$. By what we have seen in the previous section, $\Ver$ too is acyclic and has exactly one source and one sink. We can draw $\Ver$ on top of $\Hor$ (dotted lines in the figure above) in such a way that each edge $e\pr$ of it crosses only the corresponding edge $e$ of $\Hor$, and that exactly once (going down). Therefore, $\Ver$ is planar, and corresponds to the topological dual of $\Hor$. It turns out that $\Ver$ represents for the first pass what $\Hor$ does for the second: a post-order traversal of $\Ver$ will visit the regions in an order compatible with $\adjbel$ and $\below$.
\mfig4
Therefore, both passes reduce to a generic graph traversal algorithm, applied first to $\Ver$ and then to $\Hor$. In the first pass, as each region $R$ is visited, we store in a table (or in a field of the region's record) its serial index $\index{R}$ in the enumeration. In the second pass, as each edge or vertex $e$ is visited, we obtain the indices $a=\index{\aboreg e}$ and $b=\index{\beloreg e}$ of the regions immediately above and below it, and append $e$ to the separator $s↓i$ where $i={\LCA(a, b)}$. Since algorithm 1 discriminates the query point against a separator by binary search, each separator must be organized as a balanced binary search tree, rather than a linked list. The construction of such a tree in linear time is an elementary programming exercise, considering that the edges are given in the correct order.
\endmfig
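The linear-time tree construction mentioned above can be sketched as follows (an illustration with our own names, not the paper’s code): since the edges of a separator are produced already in left-to-right order, we take the middle edge as the root and recurse on the two halves, spending constant time per edge.

```python
class Node:
    """One node of the balanced search tree built for a separator."""
    def __init__(self, edge, left=None, right=None):
        self.edge, self.left, self.right = edge, left, right

def build_balanced(edges, lo=0, hi=None):
    """Balanced binary search tree over edges[lo:hi], assumed to be
    already sorted left to right; one node is created per edge, so the
    whole construction takes O(n) time."""
    if hi is None:
        hi = len(edges)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    return Node(edges[mid],
                build_balanced(edges, lo, mid),
                build_balanced(edges, mid + 1, hi))
```

The resulting tree has height $\lceil\log (n+1)\rceil$, as required by the binary search of algorithm 1.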
Alternatively, we can represent all separators by a single linear array of edges $\etab$ with $m$ entries, and an auxiliary table $\bsep$ of $n$ entries, with the convention that separator $s↓i$ consists of the consecutive edges $\etab[j]$ with $\bsep[i]\leq j<\bsep[i+1]$.\fig4
This representation is more space-effective than the linked tree structure described above. However, its construction presents an additional problem: we must know the total number of edges of separator $i$ before we can store any edge of separator $i+1$. One solution is to build each separator temporarily as a linked list, and convert those lists to the linear array format after the second traversal is complete. Another solution is to compute the size of each separator during the first traversal: as soon as each region $R$ is visited (and numbered), we enumerate the edges on its bottom frontier, and for each such edge $e$ we add one to the size of the separator number $\LCA(\index{R}, \index{\beloreg{e}})$. These sizes allow us to construct the $\bsep$ table at the end of the first pass, and therefore to fill the $\etab$ array ``on the fly'' during the second pass.
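The size-counting scheme can be illustrated by the following sketch (hypothetical names, not the paper’s code): `sizes[i]` is the number of edges assigned to separator $i$, as counted during the first pass, and `assignments` is the stream of (separator index, edge) pairs produced by the second pass, each separator’s edges arriving in left-to-right order.

```python
def build_tables(sizes, assignments):
    """Pack all separators into one array etab, with bsep[i] the position
    of the first edge of separator i, so that separator i occupies the
    slice etab[bsep[i]:bsep[i+1]]."""
    n = len(sizes)
    bsep = [0] * (n + 1)
    for i in range(n):                    # prefix sums of the sizes
        bsep[i + 1] = bsep[i] + sizes[i]
    etab = [None] * bsep[n]
    fill = bsep[:n]                       # next free slot of each separator
    for i, e in assignments:              # second pass: fill "on the fly"
        etab[fill[i]] = e
        fill[i] += 1
    return bsep, etab
```

In this sketch $\bsep$ carries one extra sentinel entry $\bsep[n]=m$, so that the convention $\bsep[i]\leq j<\bsep[i+1]$ also covers the last separator.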

\section{8. Post-order traversal of an acyclic graph}
\mfig8
The post-order graph traversal routine takes as input a planar embedding $\G$ of a directed, acyclic graph, with exactly one sink and one source that lie on the same face of $\G$. A high-level recursive description of this routine is given below. The procedure takes as parameter a vertex $v$, which on the first call should be the source of $\G$.
\algorithm{2}{Compatible traversal of an acyclic planar graph with one source and one sink.}{
\begblock
\step{\sno1 For each edge $e$ out of $v$,
in counterclockwise order from first to last, do}
\begblock
\step {\sno2 Let $u$ be the destination of $e$.
If $u$ has not been visited yet, apply recursively
algorithm 2 to the vertex $u$.}
\step {\sno3 Visit the edge $e$.}
\endblock
\step{\sno4 Visit $v$.}
\endblock
}
\endmfig
It is clear that algorithm 2 will visit every edge and every vertex of the graph once, in the correct order. This algorithm can easily be implemented with the use of a stack of size $O(n)$ (to hold the arguments and local variables of the various recursive calls) and one mark bit per vertex (to distinguish the nodes that have already been visited). In the remainder of this section, we will show that both the stack and the mark bits are superfluous, provided the data structure used to represent the subdivision is rich enough.
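As a concrete (hypothetical) rendering of algorithm 2, the following sketch takes the embedding as adjacency lists, with the edges out of each vertex already listed in counterclockwise order from first to last, and uses the recursion stack plus one mark per vertex; the function name and representation are ours.

```python
def postorder(adj, source):
    """Post-order traversal of an acyclic graph with one source and one
    sink.  adj[v] lists the destinations of the edges out of v, in
    counterclockwise order from first to last.  Returns the sequence of
    visits: an edge appears as a pair (v, u), a vertex as itself."""
    order = []
    seen = set()                  # one mark bit per vertex
    def recurse(v):
        seen.add(v)               # in an acyclic graph this is equivalent
                                  # to marking v when it is visited (step 4)
        for u in adj[v]:          # step 1: ccw order, first to last
            if u not in seen:     # step 2: recurse into unvisited destination
                recurse(u)
            order.append((v, u))  # step 3: visit the edge
        order.append(v)           # step 4: visit the vertex
    recurse(source)
    return order
```

Applied to $\Ver$ and $\Hor$, this visits every edge after its destination and every vertex after all its outgoing edges, as the two passes require.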
\mfig4
In the given embedding of $\G$, there is a definite ``counterclockwise’’ circular ordering of the edges incident to each vertex. In this ordering all outgoing edges occur in consecutive positions, and the same is true of the incoming ones. Therefore, whenever a vertex $v$ has both incoming and outgoing edges, there is a well-defined linear order among its incoming edges, and likewise for the outgoing ones. We can speak of the ``first’’ and ``last’’ edges of each class, which we denote by $\firstin{v}$, $\lastin{v}$, $\firstout{v}$, and $\lastout{v}$.
\endmfig
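As a small illustration (with our own encoding, not the paper’s data structure), suppose each directed edge is a pair (name, forward?) with $\sym{}$ flipping the flag and $\good{e}$ true when $e$ is oriented as in $\G$, and assume the incoming edges are numbered so that $\sym{\firstin{v}}$ immediately follows $\lastout{v}$ in the counterclockwise ring; the four distinguished edges can then be read off the ring directly.

```python
def sym(e):
    """Reverse a directed edge, represented as (name, forward?)."""
    name, fwd = e
    return (name, not fwd)

def first_last(ring, good):
    """ring: all directed edges with origin v, in counterclockwise order,
    starting anywhere.  Assumes v has both incoming and outgoing edges.
    Returns (firstout, lastout, firstin, lastin)."""
    k = len(ring)
    # rotate the ring so that the block of outgoing edges starts at index 0
    j = next(i for i in range(k) if good(ring[i]) and not good(ring[i - 1]))
    ring = ring[j:] + ring[:j]
    outs = [e for e in ring if good(e)]           # firstout .. lastout
    ins = [sym(e) for e in ring if not good(e)]   # firstin .. lastin
    return outs[0], outs[-1], ins[0], ins[-1]
```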
Note that for the source and sink of $\G$ these concepts are in general undefined or ambiguous, since there is no intrinsic, topological way to distinguish the ``first'' edges into or out of those vertices. To make the concepts above meaningful for all vertices of $\G$, we must specify explicitly the first edge out of the source and the first one into the sink.
\mfig4
We will do this by augmenting $\G$ with a ``dummy'' edge $\root$ that connects the source directly to the sink, to obtain a new graph $\G\pr$. By definition, $\root$ is the first edge out of the source, and the first one into the sink. We may consider $\G\pr$ as being $\G$ embedded on the upper half of a sphere, with $\root$ running across the bottom half. Note that in this interpretation $\Hor\pr$ and $\Ver\pr$ are still the dual of each other (except for the direction of their dummy edges).
\endmfig
We say that an outgoing edge $e$ lies to the {\it left} of another edge $f$ out of the same vertex $v$ if it follows $f$ in the counterclockwise order around $v$. By extension, we say that a path $\alpha$ lies to the {\it left} of another path $\beta$ with the same endpoints if, at every vertex $v$ where the two paths diverge, $\alpha$ exits through an edge that {\it follows} the one taken by $\beta$ in the counterclockwise order around $v$. This happens if and only if at every vertex $v$ where the two paths converge, the path $\alpha$ enters through an edge that {\it precedes} the edge used by $\beta$ in the counterclockwise order around $v$.
To see why, extend $\alpha$ in $\G$ by a path $\sigma$ from its destination to the sink of $\G$. Then the concatenation of $\alpha$, $\sigma$, and the reverse of the dummy edge $\root$ is a simple closed path that divides the sphere into two ``hemispheres'', which we call {\it left} and {\it right} as seen by an observer moving along $\alpha$. The path $\beta$ cannot enter the right hemisphere, and every edge in the right hemisphere that terminates in a vertex $v$ of $\alpha$ follows the corresponding edge of $\alpha$ around $v$.\fig4
Now consider the state of the computation of algorithm 2 just before step 2. The $i$-th entry from the bottom of the stack contains a vertex $v↓i$ and an edge $e↓i$ out of it; the destination of $e↓i$ is the vertex $v↓{i+1}$ stored in the next entry, or the current $v$ if there is no such entry. Therefore, the contents of the stack plus the current vertex $v$ and the current edge $e$ constitute a path $\pi$ from the source of $\G$ to the destination $u$ of $e$.
Suppose the vertex $u$ is examined twice; from the structure of the algorithm, the paths $\pi$ and $\pi\pr$ corresponding to the two occasions must diverge only once, and converge only once at $u$. Because of the order in which the edges out of each vertex are examined, the first path $\pi$ must lie to the left of the second. Therefore the last edge $e$ of $\pi$ follows the last edge $e\pr$ of $\pi\pr$ in the counterclockwise order around $u$. We conclude that the first time $u$ is examined, the edge $e$ is the last edge into $u$, i.e. $\lastin{u}$. On that occasion algorithm 2 is applied recursively to $u$; because the graph $\G$ is acyclic, $u$ will not be examined again until it is visited, upon exit from that call.
In conclusion, the vertex $u$ in step 2 will not have been visited yet if and only if $e=\lastin{u}$. If this is the case, then $v$ and $e$ will be stacked, and the algorithm will be called again with $v=u$. It follows that after step 4, the edge that is on top of the stack is precisely $\lastin{v}$. These two observations allow us to rewrite algorithm 2 using only a bounded amount of extra storage, provided we can compute $\lastin{v}$ easily. A data structure for representing subdivisions for which this is the case is the quad-edge data structure [\GuiSto]. This data structure is tailored for embeddings of undirected graphs, and it allows us to refer to every edge of the subdivision in either of its two directions, only one of which corresponds to a directed edge of $\G\pr$. The traversal routine must also be given a predicate $\good{e}$ that, for each directed edge $e$ of the data structure, tells whether it is an edge of $\G$. If not, then the symmetric of $e$, $\sym{e}$, is an edge of $\G$. Therefore, the test $e=\lastin{u}$ is equivalent to $e=\root\or(\good{e} \and \lnot\good{\dnext{e}})$. We can rewrite algorithm 2 as follows:
\eject
\algorithm{3}{Post-order traversal of an acyclic graph in $O(1)$ space.}{
\begblock
\step{\sno1 Set $e\gets {\root}$, and visit $\dest{e}$.}
\step{\sno2 Repeat the following steps:}
\begblock
\comm{At this point we know that $e$ is an edge out of $v$ that has not been visited yet, but $\dest{e}$ and all edges out of $v$ before $e$ have been visited.}
\step{\sno3 Visit $e$. Set $e\gets\onext{e}$. If $e=\root$, then stop.}
\step{\sno4 If $\good{e}$, then}
\begblock
\step{\sno5 Set $u\gets \dest{e}$. If $\lnot\good{\dnext{e}}$, then set $v\gets u$ and $e\gets\sym{e}$.}
\endblock
\step{\nono Else}
\begblock
\comm{At this point $e=\sym{\firstin{v}}$, i.e. we have exhausted all edges out of $v$.}
\step{\sno6 Visit $v$, and set $e\gets\sym{e}$.}
\step{\sno7 While $\good{\dnext{e}}$, set $e\gets \dnext{e}$ and repeat this step.}
\comm{At this point $e=\lastin{v}$.}
\step{\sno8 Set $v\gets\org{e}$.}
\endblock
\endblock
\endblock
}
It is not hard to see that algorithm 3 implements algorithm 2 with the modifications discussed above. It is also easy to show that it runs in $O(m)$ time for a subdivision with $m$ edges. Every edge of the graph is assigned to $e$ at most twice: once in the enumeration of the edges out of a vertex $v$ (step 3), and once in the determination of $\lastin{v}$ (step 7).
\vfill\eject

\section {REFERENCES}
\def\refindent{35}
\ref [\GuiSto] {\refauth{L. J. Guibas and J. Stolfi}
\reftit{Primitives for the manipulation of planar subdivisions and the computation of Voronoi diagrams.}
\refpub{To appear in Proc. 15th Annual ACM Symp. on Theory of Computing (Apr. 1983)}}
\ref [\Har] {\refauth{D. Harel}
\reftit{A linear time algorithm for the lowest common ancestor problem.}
\refpub{Proc. 21st IEEE Symp. on Foundations of Computer Science (Oct. 1980), 308--319}}
\ref [\Kir] {\refauth{D. G. Kirkpatrick}
\reftit{Optimal search in planar subdivisions.}
\refpub{Tech. Rep. 81-13, Dept. of Comp. Science, Univ. of British Columbia, 1981}}
\ref [\Knu] {\refauth{D. E. Knuth}
\reftit{The Art of Computer Programming{\rm, Vol. 1: } Fundamental Algorithms {\rm (2nd. ed.)}}
\refpub{Addison-Wesley, 1975}}
\ref [\LeePre] {\refauth{D. T. Lee and F. Preparata}
\reftit{Location of a point in a planar subdivision and its applications.}
\refpub{SIAM Journal on Computing Vol. 6, No. 3 (Sep. 1977), 594--606}}
\ref [\PreSup] {\refauth{F. P. Preparata and K. J. Supowit}
\reftit{Testing a simple polygon for monotonicity.}
\refpub{Information Processing Letters Vol. 12 (1981), 161--164}}
\par\vfill\eject\end