\input PapWHdr.tex
\begfile{PLocExtras.tex of September 23, 1983 4:05 pm --- Stolfi}

% Additions/changes to PLoc.tex
% See also some garbage 

\inputtty % your chance to \def\printfilestamp{F}
\def\srm{\:p}
\def\omit#1{}

% things to fix:
% figure numbering; lemma/theorem numbering
% eliminate useless macros

\paper{Optimal Point Location in a Monotone Subdivision}
\def\paptitle{Point location}

\author{Herbert Edelsbr{\cmr\"\it u}nner, Leo J. Guibas, and Jorge Stolfi}
\institution{University of Graz, Xerox PARC, and Stanford University}

\def\AuEde{AE}
\def\BenMa{BM}
\def\BiPre{BP}
\def\ChazOne{C1}
\def\ChazTwo{C2}
\def\DoLi{DL}
\def\EHH{EHH}
\def\EdMa{EM}
\def\EOS{EOS}
\def\GuiSto{GS}
\def\Har{H}
\def\Kir{Ki}
\def\Knu{Kn}
\def\LeePre{LP}
\def\LiTa{LT}
\def\Pre{P}
\def\PreSup{PS}
\def\Sha{S}
\def\Wi{W}

\def\Thei{1}
\def\Theii{2}
\def\Theiii{3}
\def\Theiv{4}
\def\Thev{5}
\def\Thevi{6}
\def\Thevii{7}
\def\Theviii{8}
\def\Theix{9}
\def\Thex{10}
\def\Thexi{11}
\def\Thexii{12}

\def\proj#1{\Pi #1}

\def\below{\lsls}
\def\above{\grgr}
\def\adjbel{\prec}
\def\adjabo{\succ}

\def\index#1{\# #1}
\def\aboreg#1{\mathop{\char a\char b\char o\char v\char e}(#1)}
\def\beloreg#1{\mathop{\char b\char e\char l\char o\char w}(#1)}

\def\T{\hbox{\bi T\/}}
\def\LCA{\mathop{\char l\char c\char a}}
\def\MSB{\mathop{\char m\char s\char b}}
\def\REV{\mathop{\char r\char e\char v}}
\def\loc{\mathop{\char l\char o\char c}}

\def\leaf{\mathop{\char l\char e\char a\char f}}
\def\edgetest{\mathop{\char e\char d\char g\char e\char t\char e\char s\char t}}
\def\edge{\mathop{\char e\char d\char g\char e}}
\def\up{\mathop{\char u\char p}}
\def\down{\mathop{\char d\char o\char w\char n}}
\def\chain{\mathop{\char c\char h\char a\char i\char n}}
\def\xtest{\mathop{\char x\char t\char e\char s\char t}}
\def\xcoord{\mathop{\char x\char c\char o\char o\char r\char d}}
\def\xval{\mathop{\char x\char v\char a\char l}}
\def\lt{\mathop{\char l\char e\char f\char t}}
\def\rt{\mathop{\char r\char i\char g\char h\char t}}
\def\gaptest{\mathop{\char g\char a\char p\char t\char e\char s\char t}}

\def\Z{\hskip2pt }
\def\lastin#1{\mathop{\char l\char i\char n}(#1)}
\def\lastout#1{\mathop{\char l\char o\char u\char t}(#1)}
\def\firstin#1{\mathop{\char f\char i\char n}(#1)}
\def\firstout#1{\mathop{\char f\char o\char u\char t}(#1)}
\def\onext#1{\mathop{\char o\char n\char e\char x\char t}(#1)}
\def\oprev#1{\mathop{\char o\char p\char r\char e\char v}(#1)}
\def\dnext#1{\mathop{\char d\char n\char e\char x\char t}(#1)}
\def\good#1{\mathop{\char f\char o\char r\char w\char a\char r\char d}(#1)}
\def\org#1{\mathop{\char o\char r\char g}(#1)}
\def\dest#1{\mathop{\char d\char e\char s\char t}(#1)}
\def\sym#1{\mathop{\char s\char y\char m}(#1)}

\def\etab{\mathop{edge}}
\def\bsep{\mathop{sep}}

\def\Hor{S}
\def\Ver{{S↑\ast}}
\def\root{g}
\def\G{G}

\def\r{\nu}
\def\f#1{\hbox{\hskip1pt {\fx #1}\hskip1pt}}

\def\point#1#2{\vskip \refvskip pt\hbox to \textwidth pt{\hskip\refindent pt\hbox par \refbodywidth pt{\hbox to 0.0in {$\hskip -\refindent pt \hbox{#1} \hskip 0pt plus 1000pt minus 1000pt$}#2}}\par}

\def\refindent{35}

{\bf SUGGESTED CHANGES OF July 12, 1983 7:23 am --- Stolfi}

\section{3. Regularization}

\xx{Last sentence of third paragraph:}

This vertex may be a left endpoint of one of the two edges, or it may be an irregular vertex with no right-going edge. See figure 4.

\section{6. Existence of a complete family of separators}

\xx{The reference to figure 13 should be to figure 12 in the old numbering, i.e. to figure 11 of new numbering.}

\section{7. A point location algorithm}

\xx{Fourth paragraph: break in two before "We make the convention that..."}

Let $\T$ be the infinite, complete binary search tree with internal nodes $1, 2, 3, \ldots$ and leaves $0, 1, 2, 3, \ldots$, as in figure 15. The tree $\T$ is used as a ``flowchart'' for the outer loop of the search algorithm, with the convention that each internal node $i$ represents a test of $p$ against the separator $si$, and each leaf $j$ represents the output ``$p$ is in $Rj$''. While reading the algorithm it should be borne in mind that ``left'' in the tree corresponds to ``down'' in the subdivision. The left and right children of an internal node $u$ of $\T$ will be denoted by $\lscr(u)$ and $r(u)$, respectively. We let $\LCA(i, j)$ be the {\it lowest common ancestor} in $\T$ of the leaves $i$ and $j$, that is, the root of the smallest subtree of $\T$ that contains both $i$ and $j$.  
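For concreteness (assuming the arrangement of figure 15, in which internal node $u$ lies between leaves $u-1$ and $u$ in symmetric order), we have for instance
$$\LCA(2,5)=4 \qquad\hbox{and}\qquad \LCA(4,5)=5,$$
since leaf 2 lies in the left subtree of node 4 while leaf 5 lies in its right subtree, and node 5 has the leaves 4 and 5 as its two children.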

When testing $p$ against a separator, we proceed as if each edge included its right endpoint but not its left, so the vertices of the separator need not be explicitly stored or tested against. This is justified since there are no vertical edges. Under this convention, if the algorithm decides that $p$ lies on some edge $e$ of the separator, we must still check whether our point is really on $e$, or is its right endpoint.

\xx{Algorithm 1: The following is a version that does not skip levels, so it is more similar to algorithm 2. Now that I have seen it I think it looks awful, but so be it... If you don't use it, remember to fix small typos in algorithm 2.}

\algorithm{1}{Point location in a monotone subdivision.}{
\begblock
  \comm{The input of this algorithm is the query point $p$
    and a complete family of separators as described above.
    The output is the vertex, edge, or region $\loc$ of
    $\Sscr$ containing $p$.}
  \step {\sno1 Set $i\gets 0$, $j\gets n-1$, $u\gets\LCA(i,j)$.}
  \step {\sno2 While $i<u$ or $u<j$, do:}
  \begblock
    \comm {At this point we know $p$ is above the separator
     $s{i}$ and below $s{j+1}$ (whenever those separators exist).
     That is, $p$ is in one of the regions
     $Ri, R{i+1}, \ldots, Rj$, or in some edge or vertex
     between those two separators. The node $u$ is some common ancestor of
     the leaves $i$ through $j$ of $\T$, which moves down one level at
     each iteration of this loop. The next step tests $p$ against
     $su$, if it is worthwhile to do so.}
    \step {\sno3 If $i<u\leq j$, then}
    \begblock
      \step {\sno4 Find (by binary search) an edge
       $e$ in $su$ such that $xp\in \proj{e}$.
       Let $a=\index{\aboreg e}$, $b=\index{\beloreg e}$.}
      \comm {By lemma {\Thevii}, $e$ belongs also to
       the separators $s{b+1}, s{b+2}, \ldots, s{a}$.
       Therefore, by testing $p$ against $e$ we
       conclude it is either below $s{b+1}$ or above $s{a}$.}
      \step {\sno5 If $p$ is below $e$, set $j\gets b$;
       if $p$ is above $e$, set $i\gets a$. Otherwise, $p$ is on $e$
       or on its right endpoint; set $\loc$ accordingly, and
       terminate the algorithm.}
     \endblock
    \comm {Now $u$ is outside the interval $(i,j]$; move down one level.}
    \step{\sno6 If $j<u$, set $u\gets \lscr(u)$; if $u\leq i$,
      set $u\gets r(u)$.}
  \endblock
  \step {\sno7 Set $\loc\gets Ru$ and terminate the search.}
\endblock}
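
To illustrate the flow of control, consider a hypothetical run with $n=8$ (assuming the arrangement of figure 15, under which $\LCA(0,7)=4$, $r(4)=6$, and $r(6)=7$). The search starts with $i=0$, $j=7$, and $u=4$. Suppose the edge $e$ found in step 4 has $a=6$ and $b=3$, and that $p$ turns out to lie above $e$; then step 5 sets $i\gets 6$, and step 6 moves down to $u\gets r(4)=6$. On the next iteration the test of step 3 fails (since $i=u=6$), so step 6 moves down again to $u\gets r(6)=7$; the following iteration tests $p$ against $s7$, and the search then terminates in $R6$ or $R7$, or on an edge of $s7$ or its right endpoint.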

\xx{First paragraph after algorithm 1: insert reference to new figure.}

...so between successive stored edges of $s\lscr$ there may be {\it gaps}. See figure 15a. We will...

\xx{Consider breaking this paragraph in two.}

\section{8. A faster point location method}

\xx{Second paragraph: does complete propagation really give quadratic storage? or just $n\log n$?}

\xx{Alternative to the third and fourth paragraphs:}

More precisely, this refinement produces for each chain $cu$ a list $Lu$ of $x$-values, such that each interval between consecutive $x$-values of $Lu$ overlaps the $x$-projection of exactly one edge or gap of $cu$. Furthermore, each $x$-interval of $Lu$ overlaps at most two $x$-intervals of the lists $L{\lscr(u)}$ and $L{r(u)}$.
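As a small hypothetical illustration, if $Lu=(x1, x2, x3)$ with $x1<x2<x3$, its $x$-intervals are $(-\infty, x1]$, $(x1, x2]$, $(x2, x3]$, and $(x3, +\infty)$; the first property says that each of these four intervals meets the projection of exactly one edge or gap of $cu$, and the second says that each of them meets at most two $x$-intervals of $L{\lscr(u)}$ and at most two of $L{r(u)}$.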

The point location algorithm then proceeds as follows. Suppose we are about to test $p$ against the separator $su$, and we have located $xp$ in some $x$-interval of $Lu$\foot\dag{As in the previous section, we adopt here also the convention that an edge contains its right endpoint but not its left one. This requires that we treat the $x$-intervals of $Lu$ in the same manner.}. If that interval corresponds to an edge $e$ of $cu$, we only have to test $p$ against that edge. If it corresponds to a gap of $cu$, that means the edges of $su$ that would fit in that gap belong to some chain higher up in the tree; but then we have already tested $p$ against those edges before reaching node $u$, and therefore we have either $i\leq j<u$ (meaning we are below $su$) or $u\leq i\leq j$ (meaning we are above $su$). In either case, constant time suffices to decide whether we are above or below $su$. Now suppose the result is ``below'', and so the next step is to test $p$ against the separator $s{\lscr(u)}$. Thanks to the second property, constant time now suffices to locate $xp$ in $L{\lscr(u)}$. All that is required is that, in the preprocessing phase, we connect by pointers each interval of $L{u}$ to the overlapping intervals of the children lists $L{\lscr(u)}$ and $L{r(u)}$. 

A convenient way to represent the lists $Lu$ and their interconnections is by means of a data structure we call the {\it layered dag}. The ``leaf'' nodes of the dag correspond to the regions of $\Sscr$. The internal nodes correspond to tests of three kinds: {\it $x$-tests}, {\it edge tests}, and {\it gap tests}. A list $Lu$ is represented in the dag by a collection of such nodes: each $x$-value of $Lu$ is represented by an $x$-test, and each interval between successive $x$-values by an edge or gap test. See figure 16a. 

An $x$-test node $t$ contains the corresponding $x$-value of $Lu$ ($\xval(t)$) and two pointers, $\lt(t)$ and $\rt(t)$, to the adjacent  edge or gap nodes of $Lu$. An edge or gap test node $t$ contains two links $\down(t)$ and $\up(t)$ to appropriate nodes of $L{\lscr(u)}$ and $L{r(u)}$. In addition, an edge test contains a pointer $\edge(t)$ to the edge of $cu$ whose projection covers the $x$-interval represented by $t$. A gap test node contains instead the chain number $\chain(t)=u$. The various types of nodes present are illustrated in figure 17.
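For reference, the fields carried by the three kinds of test nodes are therefore
$$\eqalign{&\hbox{$x$-test: }\xval(t),\ \lt(t),\ \rt(t);\cr
&\hbox{edge test: }\edge(t),\ \down(t),\ \up(t);\cr
&\hbox{gap test: }\chain(t),\ \down(t),\ \up(t).\cr}$$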

Let us define more precisely the meaning of the links $\down(t)$ and $\up(t)$. The properties of the refined lists ensure that the $x$-interval of $Lu$ corresponding to an edge or gap test $t$ covers either one $x$-interval $I$ of $L{\lscr(u)}$, or two such intervals separated by some element $xk$ of $L{\lscr(u)}$. In the first case, $\down(t)$ points to the edge or gap test of $L{\lscr(u)}$ corresponding to the interval $I$; in the second case, $\down(t)$ points to the $x$-test corresponding to the separating abscissa $xk$. Similarly, the link $\up(t)$ points to a node of $L{r(u)}$, defined in precisely the same way. In the special case when $\lscr(u)$ and $r(u)$ are leaves of $\T$, we let $\down(t)$ and $\up(t)$ point to the corresponding region nodes.

The layered dag then consists of the test nodes for all lists $L1, L2, \ldots, L{n-1}$, linked together in this fashion. We can use this dag to simulate algorithm 1; the main difference is that each time we move from a separator to one of its children, the $\down$ or $\up$ pointers (possibly followed by discrimination against an $x$-test) allow us to locate $xp$ in the new chain, in $O(1)$ time. As before, we maintain an interval $(i,j]$ of chains in which our point is known to lie (i.e., the point is above the chain $si$ and below chain $s{j+1}$). This interval is updated after each edge test exactly as in the previous algorithm, and is used during a gap test. When we come to a gap test we know that the chain number of the gap is not interior to the interval in question. Since our interval, if the search has not yet terminated, is not a single chain, we have an unambiguous test as to whether we are above or below the chain of the gap. The point location algorithm then proceeds as follows:

\algorithm{2}{Fast point location in a monotone subdivision.}{
\begblock
  \comm{This algorithm takes as input the point $p$ and
    the layered dag as described above. Its output, as in
    algorithm 1, is the element $\loc$ of $\Sscr$ that contains $p$.}
  \step {\sno1 Set $i\gets 0$, $j\gets n-1$, $u\gets \LCA(i,j)$.}
  \step {\sno2 Locate $xp$ by binary search in some $x$-interval of
    the list $Lu$. Let $t$ be the edge or gap test of $Lu$
    corresponding to that interval.}
  \step {\sno3 While $t$ is not a leaf node of the layered dag do:}
  \begblock
    \comm {At this point we know $p$ is above the separator
      $s{i}$ and below $s{j+1}$ (whenever those separators exist).
      That is, $p$ is in one of the regions
      $Ri, R{i+1}, \ldots, Rj$, or in some edge or vertex
      between those two separators. The integer $u$ is a common ancestor
      in $\T$ of the leaves $i$ through $j$, which goes down by one
      level at each iteration of this loop. The variable $t$ points to an
      edge or gap test corresponding to an $x$-interval of
      $Lu$ that is known to contain $xp$.
      The next step tests $p$ against the separator $su$,
      if it is worthwhile to do so.}
    \step {\sno4 If $i<u\leq j$, then}
    \begblock
      \comm{Here we know that $t$ is an edge test.
        If it were a gap test, then the edge of $su$ that
        overlaps $xp$ would be stored in some chain higher up in the tree,
        and we would have tested $p$ against it on our way down
        the tree $\T$. But then the interval $(i, j]$ would record the
        outcome of this comparison, and the test of step 4 would
        have failed.}
      \step {\sno5 Let $e=\edge(t)$.
       Let $a=\index{\aboreg e}$, $b=\index{\beloreg e}$.}
      \comm {By lemma {\Thevii}, $e$ belongs also to
       the separators $s{b+1}, s{b+2}, \ldots, s{a}$.
       Therefore, by testing $p$ against $e$ we
       conclude it is either below $s{b+1}$ or above $s{a}$.}
      \step {\sno6 If $p$ is below $e$, set $j\gets b$;
       if $p$ is above $e$, set $i\gets a$. Otherwise, $p$ is on $e$
       or on its right endpoint; set $\loc$ accordingly, and
       terminate the algorithm.}
     \endblock
    \comm {Now $u$ is outside the interval $(i,j]$; move down one level.}
    \step{\sno7 If $j<u$, set $u\gets \lscr(u)$ and $t\gets\down(t)$.
      If not, we must have $u\leq i$;
      set $u\gets r(u)$ and $t\gets\up(t)$.}
    \comm{Now $t$ may be an $x$-test that we must discriminate $xp$ against.}
    \step{\sno8 If $t$ is an $x$-test node, then
      set $t\gets \lt(t)$ if $xp\leq \xval(t)$,
      or $t\gets\rt(t)$ if $xp>\xval(t)$.}
  \endblock
  \step {\sno9 Set $\loc\gets t$ and terminate the search.}
\endblock}

\section{9. Computing the layered dag}

Now that we understand how the layered dag is to be used, we proceed to describe how it is to be constructed. Our starting point will be the tree $\T$ and the chains $cu$ defined in section 7; recall that $cu$ consists of those edges of $su$ that do not belong to any ancestor of $su$ (that is, to any separator whose index is an ancestor of $u$ in $\T$). Also, note that to the subtree of $\T$ rooted at $u$ there corresponds a ``partial subdivision'' consisting of a range of regions and the edges and vertices between them. The separator $su$ splits this partial subdivision into two others, each having half the regions; the gaps of $cu$ correspond to edges of $su$ that lie on the upper or lower boundary of the partial subdivision. See figure 17a.

\xx{Perhaps the second part of the paragraph above should go somewhere else}

Our construction of the layered dag proceeds from the bottom up and happens simultaneously with the refinement of the chains $cu$. More precisely, $Lu$ is computed from $L{\lscr(u)}$, $L{r(u)}$, and $cu$.  The $x$ values in $Lu$ are the $x$-projections of the endpoints of all edges in $cu$, plus {\it every other one} of the $x$ values present in each of $L{\lscr(u)}$ and $L{r(u)}$. By convention, if $\lscr(u)$ is a leaf of $\T$, $L{\lscr(u)}$ has no $x$ values; the same caveat applies to $L{r(u)}$. We must also compute the lists $Lu$ for all internal nodes $u$ that lead to some leaf, and so it may happen that $u\geq n$. In such cases we must take $L{r(u)}$ and $cu$ as being empty, so $Lu$ will be just a sequence of gap tests separated by every other $x$ value of $L{\lscr(u)}$. \xx{Smooth this comment out!} The propagation of every other value from the children to the father constitutes the chain refinement we had mentioned in section 8. This refinement has two desirable properties.

\lemma{\Theix}{An interval between two successive $x$ values of $Lu$ overlaps at most two intervals of $L{\lscr(u)}$, and at most two intervals of $L{r(u)}$.}

\proof{By construction.}
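
To make the construction concrete, here is a small hypothetical example. Suppose $L{\lscr(u)}=(1,3,5,7)$, $L{r(u)}=(2,6)$, and the edges of $cu$ have endpoints with abscissas 4 and 9. Taking every other value of each child list (say the second, fourth, and so on) together with the endpoint abscissas of $cu$ gives $Lu=(3,4,6,7,9)$. One can check that every interval determined by $Lu$ overlaps at most two intervals of $L{\lscr(u)}$ and at most two intervals of $L{r(u)}$, as lemma {\Theix} asserts.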

\lemma{\Thex}{The total number of $x$ values in the lists $Lu$, summed over all $u$, is at most $4m$.}

\proof{Let $au$ denote the number of edges in $cu$. Clearly
$$\sum_{u\in\T} au = m,$$
since each edge of the subdivision occurs in exactly one chain $cu$. Now let $bu$ denote the number of $x$ values in $Lu$, let $Au$ denote the sum of $av$ over all nodes $v$ in the subtree rooted at $u$, and define $Bu$ analogously. To prove the lemma it then suffices to show that
$$B{root}\leq 4A{root}$$
for the root node $root=\LCA(0,n-1)$.

\pp As an inductive hypothesis now assume that
$$Bv+bv\leq 4Av\eqno(9.1)$$
for $v=\lscr(u)$ or $v=r(u)$. This hypothesis is trivially true for the leaves of $\T$. Observe now that
$$Bu = B{\lscr(u)} +  B{r(u)} + bu,$$
and that
$$bu\leq 2au + (b{\lscr(u)}+b{r(u)})/2,$$
since each edge in $cu$ contributes at most two $x$-values to $Lu$.
Combining these two relations with the inductive hypothesis (9.1), applied to both children, yields
$$Bu + bu = B{\lscr(u)} + B{r(u)} + 2bu
  \leq (B{\lscr(u)}+b{\lscr(u)}) + (B{r(u)}+b{r(u)}) + 4au
  \leq 4A{\lscr(u)} + 4A{r(u)} + 4au = 4Au$$
(recall that $Au = A{\lscr(u)}+A{r(u)}+au$), which establishes the same inequality for $u$. This concludes the proof of the lemma.}

Intuitively, half of the $x$ values of $cu$ propagate to the father chain, a quarter to the grandfather, an eighth to the great-grandfather, and so on. Although this argument is not strictly correct, it illustrates why we can expect the combined length of the $L$ lists to remain linear in $m$, and in fact to be at most twice the total number of $x$ values in the chains $cu$.
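This heuristic count can be carried out explicitly: if an abscissa contributed one entry to its own list, half an entry to the father's list, a quarter to the grandfather's, and so on, its total contribution would be at most
$$1 + 1/2 + 1/4 + \cdots = 2;$$
since the chains $cu$ contain at most $2m$ endpoint abscissas in all (two per edge), the lists $Lu$ would then total at most $4m$ values, in agreement with lemma {\Thex}.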

The construction of the $x$-tests, edge tests, and gap tests representing the list $Lu$ is straightforward, as is the setting of the $\lt$ and $\rt$ fields of the $x$-tests. The $\down$ links of $Lu$ can be set according to the rules stated in section 8, by traversing simultaneously the two lists $Lu$ and $L{\lscr(u)}$ from left to right. The $\up$ pointers are set analogously. In fact, it is possible to construct $Lu$ and link it to the rest of the dag in a single simultaneous traversal of $cu$, $L{\lscr(u)}$, and $L{r(u)}$. 

Note that several different nodes of $Lu$ may point to the same node of a child. An example is shown in figure 19. Thus the resulting structures are truly dags, not trees. We remark that the structure built by Kirkpatrick [\Kir] also corresponds to a dag. This ``sharing'' of subtrees seems to be an essential feature of algorithms for point location that simultaneously attain optimal space and query time.

This bottom-up process terminates with the construction of the ``root chain'' $L{root}$, where $root=\LCA(0,n-1)$. Algorithm 2 begins by locating $xp$ among the $x$-values of this chain. By appropriately extending the layered dag, we can use it to guide this initial search as well. More specifically, we build a tree of $x$-test nodes corresponding in a natural way to a binary search of $L{root}$ based on the $x$ coordinate of our test point, and having for leaves the edge tests of $L{root}$.

A convenient alternative is to store the nodes of each list $Lu$ in consecutive memory locations, in their natural left-to-right order. This option allows the initial location of $xp$ in $L{root}$ to be carried out by standard binary search. Also, in the construction of $Lu$ this representation allows the lists $L{\lscr(u)}$ and $L{r(u)}$ to be scanned without any auxiliary pointers or tables. Finally, observe that sequential allocation obviates the need for the $\lt$ and $\rt$ pointers of $x$-tests, which may reduce the total storage used by the dag by more than one third.

In either case, the initial location of $xp$ in $L{root}$ can be done in $O(\lg n)$ time. After that, algorithm 2 obviously executes exactly $\lceil \lg n\rceil$ edge or gap tests (one at each level of $\T$), and at most that many $x$-tests. So, the total query time $Q$ is $O(\lg n)$.
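In symbols, counting one unit of time per edge, gap, or $x$-test and $O(\lg n)$ for the initial location of $xp$ in $L{root}$,
$$Q \leq O(\lg n) + \lceil\lg n\rceil + \lceil\lg n\rceil = O(\lg n).$$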

The global structure of the dag construction algorithm is given below.

\algorithm{3}{Construction of the layered dag.}{
\begblock
  \step {\sno1 $s\gets 0$, $b\gets 1$, $z\gets n-1$.}
  \step {\sno2 For $k\gets 1$ to $\lceil\lg n\rceil$ do:}
  \begblock
    \comm {At this point we are about to build the lists
      corresponding to all nodes in level $k$ by combining
      successive pairs of lists from level $k-1$.
      The first node of level $k-1$ is $s$, the last one is $z$,
      and $b$ is the distance between successive nodes.}
    \step {\sno3 Set $\lscr\gets s$ and $u\gets 2s$.}
    \step {\sno 4 While $\lscr\leq z$ do:}
    \begblock
      \comm {Now $u$ is a node of level $k$, and $\lscr$
        is its left child.}
      \step {\sno5 Let $r\gets \lscr+b$. Construct the list $Lu$ from
        the lists $L\lscr$, $Lr$, and the chain $cu$.}
      \step {\sno6 Set $\lscr\gets \lscr+2b$, $u\gets u+2b$.
        Loop back to step 4.}
    \endblock
    \step {\sno7 Set $z\gets u$, $s\gets 2s$ and $b\gets 2b$.
      Loop back to step 2.}
  \endblock
\endblock}

Because of space considerations we omit the actual code. A list $Lu$ with $k$ $x$-values is represented in the layered dag by $k$ $x$-tests and $k+1$ edge/gap tests, and as we have seen it can be constructed in $O(k)$ time. Using lemma {\Thex}, we conclude that the layered dag contains at most $4m$ $x$-tests and $4m+n-1$ edge and gap tests \xx{Fix this bound! take into account the extra chains $Lu$ with $u\geq n$ created by algorithm 3}, and can be constructed in total time $O(m)+O(n)$. In summary, we have shown that

\theorem{\Thexi}{The layered dag data structure for a subdivision with $m$ edges can be constructed in $O(m)$ time, takes $O(m)$ space, and allows point location to be done in $O(\log m)$ time.\xx{Move to conclusions}}

\section{10. Constructing a complete family of separators}

\xx{Delete reference to figure 21 in fourth paragraph}

\xx{Delete last two paragraphs}

\section{11. The lowest common ancestor function}

\xx{Should comment on computing $\lscr(u)$ and $r(u)$ for algorithms 1 (revised) and 2. The simplest solution is to have them in a table, but they are easily computed from the top down by keeping track of the difference $u-\lscr(u)=r(u)-u$.}
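
One concrete possibility (a sketch, assuming the numbering of section 7, in which internal node $u$ lies between leaves $u-1$ and $u$): for leaves $i<j$ the lowest common ancestor is obtained by clearing in $j$ all binary digits below the highest-order digit in which $i$ and $j$ differ, that is,
$$\LCA(i,j) = 2^h\lfloor j/2^h\rfloor, \qquad \hbox{where } h=\MSB(i\oplus j)$$
and $i\oplus j$ denotes the bitwise exclusive or of $i$ and $j$. As for the children, the difference $d=u-\lscr(u)=r(u)-u$ halves each time the search moves from a node to one of its children, so it can be maintained by a single shift per level; at the bottom internal level (odd $u$) the two children are the leaves $u-1$ and $u$ themselves.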

\section{12. Post-order traversal of an acyclic graph}

\xx{Fix first paragraph:}

The post-order graph traversal routine takes as input a planar embedding $\G$ of a directed, acyclic graph with exactly one sink and one source, both on the outer face of $\G$. See figure 22a. A high-level recursive description of this routine is given below. The procedure takes as parameter a vertex $v$, which on the first call should be the source of $\G$.

\algorithm{4}{Compatible traversal of an acyclic planar graph with one source and one sink.}{
  \begblock
     \step{\sno1 For each directed edge $e$ out of $v$,
      in counterclockwise order from first to last, do}
     \begblock
       \step {\sno2 If the destination $\dest{e}$ of $e$
        has not been visited yet, apply recursively
        algorithm 4 to $\dest{e}$.}
       \step {\sno3 Visit the edge $e$.}
     \endblock
     \step{\sno4 Visit $v$ and return.}
   \endblock
}

Algorithm 4 visits every edge and every vertex of $G$ exactly once, in the correct order. The auxiliary storage required by algorithm 4 reduces to a stack of $m$ entries (to hold the arguments and local variables of the various recursive calls) and one mark bit per vertex (to distinguish the nodes that have already been visited). In the rest of this section we will outline the formal proof of these assertions, and show that both the stack and the mark bits are superfluous, provided the data structure used to represent the subdivision is rich enough.

In the given embedding of $\G$, there is a definite ``counterclockwise'' circular ordering of the edges incident to each vertex. In this ordering all outgoing edges occur in consecutive positions, and the same is true of the incoming ones. This establishes a {\it linear} order for each class of edges, with well-defined first and last elements (denoted by $\firstin{v}$, $\lastin{v}$, $\firstout{v}$, and $\lastout{v}$). See figure 23. The source of $G$ is somewhat special: its first and last outgoing edges are distinguished by being incident to the outer face. A similar caveat applies to the sink of $G$.

Now consider the subgraph $H$ of $G$ consisting of the edges $\lastin{v}$ for all vertices $v$ except the source, as illustrated in figure 24a. This subgraph has $\leftv VG\rightv-1$ edges and contains no cycles, not even undirected ones (since at most one edge of $H$ enters each vertex, an undirected cycle of $H$ would have to be a directed cycle of $G$); therefore $H$ is an oriented tree rooted at the source of $G$. The counterclockwise order of the edges out of any vertex $v$ extends naturally to the subtrees of $H$ hanging from $v$. We claim that the order in which the vertices of $G$ are visited by algorithm 4 corresponds to a post-order traversal of the tree $H$, in the usual definition [\Kn]. This is a consequence of the next lemma:

\lemma{\Thexii}{In the post-order traversal of $H$, the destination of any edge $e$ of $G$ occurs before the origin of $e$.}

\proof{If $\dest{e}$ is a descendant of $\org{e}$ in $H$, the lemma follows trivially from the definition of post-order traversal of trees. If that is not the case, consider the path $\pi$ in $H$ that goes from the root to $\dest{e}$, and any path $\sigma$ in $G$ from $\dest{e}$ to the sink of $G$. See figure 25a. Let $R$ be the complement of the outer face of $G$. The concatenation of $\pi$ and $\sigma$ divides $R$ into two regions; if we call the direction of $\pi\sigma$ ``down'', then $e$ must lie in the ``right'' region. The origin of $e$ can lie neither on $\sigma$ nor on $\pi$, so it must also lie in the right region.

\pp Now consider path $\pi\pr$ in $H$ from the root to $\org{e}$. This path must diverge from $\pi$ exactly once, at some vertex $w$, and ultimately it must enter the right region. Since $G$ is acyclic, $\pi\pr$ cannot cross $\sigma$; therefore, after leaving $w$ the path $\pi\pr$ must lie entirely in the right region. In particular, among the edges going out of $w$, the one used by $\pi$ must precede the one used by $\pi\pr$. We conclude that in the post-order traversal of $H$, the subtree rooted at $w$ containing $\dest{e}$ will be traversed before the one containing $\org{e}$.}

Each time we enter algorithm 4, the recursion stack contains a sequence of previous values of $v$ and $e$ that were placed there by the suspended executions of step 2. We claim this sequence defines a path in $H$ from the source vertex to $v$. Also, the vertices visited so far will be exactly those preceding $v$ in the post-order traversal of $H$, except for the descendants of $v$ in the tree $H$. These invariants can easily be proved true, using lemma {\Thexii} and induction on the number of vertices already visited. From them follow our previous claims about algorithm 4. 

Moreover, the first invariant above implies that the last edge $e$ pushed onto the stack is always the (only) edge of $H$ with destination $v$, that is, $\lastin{v}$. It implies also that the condition ``$\dest{e}$ has not been visited yet'' in step 2 can be replaced by the test ``$e$ is an edge of $H$'', that is, $e=\lastin{\dest{e}}$. So, if we can compute $\lastin{v}$ easily for any vertex $v$, we can rewrite algorithm 4 to use neither the recursion stack nor any ``mark'' fields on the vertices.

One representation for graph embeddings that satisfies this requirement is the {\it quad-edge} data structure [\GuiSto]. This data structure has the important advantage of simultaneously representing the dual embedding in precisely the same format, at a negligible extra cost in storage. This allows the post-order traversals of both $\Hor$ and $\Ver$ to be performed by the same procedure, applied to the same data structure: the quad-edge representation of the subdivision $\Sscr$.

The quad-edge structure is tailored for {\it undirected} graphs: it allows us to refer to any edge of $\Sscr$ in either of its two directions. For a given directed edge $e$ in the quad-edge data structure, we have immediate access to $\org{e}$, the origin vertex of $e$; $\dest{e}$, the destination vertex of $e$; $\onext{e}$, the next counterclockwise directed edge with the same origin; $\dnext{e}$, the next counterclockwise directed edge with the same destination; and $\sym{e}$, the same edge directed the other way. Also, $\rot{e}$ denotes the dual of the edge $e$, directed from the right face of $e$ to its left face.

To specify the {\it directed} graph $G$ that is input to the post-order traversal routine, we must also give a predicate $\good{e,G}$ that tells whether the directed edge $e$ of the structure is an edge of $G$. If not, then $\sym{e}$ is an edge of $\G$. In particular, for the second pass of the separator construction we have $G=\Hor$, so this predicate is simply the test that $\dest{e}$ is to the left of $\org{e}$. As for the first pass, when $e$ is a directed edge of the dual subdivision and $G=\Ver$, we have $\good{e, G}$ if and only if the region $\dest{e}$ is above the region $\org{e}$, which turns out to be the same as $\good{\rot{e}, \Hor}$.
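
In symbols, writing $\xcoord(v)$ for the $x$-coordinate of a vertex $v$, the predicate for the second pass reads
$$\good{e,\Hor} \iff \xcoord(\dest{e}) < \xcoord(\org{e}).$$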

In any case, the test $e=\lastin{\dest{e}}$ is equivalent to $\good{e, G} \and \lnot\good{\dnext{e}, G}$. In order for this correspondence to hold also for the sink of $G$, we will introduce a dummy edge $\root$ that connects the sink back to the source. We may consider the resulting graph $G\pr$ as embedded on the upper half of a sphere, with $\root$ running across the bottom half\foot\dag{This extension of the plane to a sphere is also required by the definition of the quad-edge data structure.}. The rewritten version of algorithm 4, below, then takes such a graph $G\pr$ (represented by the quad-edge data structure and the predicate $\good{e, G}$) and visits the acyclic graph $G=G\pr-\root$ in post-order:

\algorithm{5}{Post-order traversal of an acyclic graph in $O(1)$ space.}{
\begblock
  \step{\sno1 Set $e\gets \root$, and $v\gets \org{\root}$.}
  \step{\sno2 Repeat the following steps:}
    \begblock
      \comm{At this point we know that either $e=\root$ and $v$
        is the  sink of $G$, or $e$ is an edge out of $v$ that
        has just been visited.}
      \step{\sno3 Set $e\gets\onext{e}$.}
      \step{\sno4 If $\good{e, G}$, then}
        \begblock
          \comm{Here $e$ is the next unvisited edge out of $v$.
            If its destination has not been visited yet,
            set $v$ to it and $e$ to its first outgoing edge. This
            simulates the recursive call of algorithm 4, step 2.} 
          \step{\sno5 Set $u\gets \dest{e}$.
            If $\lnot\good{\dnext{e}, G}$,
          then set $v\gets u$ and $e\gets\sym{\dnext{e}}$.}
        \endblock
      \step{\nono Else}
        \begblock
          \comm{At this point $e=\sym{\firstin{v}}$, i.e.,
            we have exhausted all edges out of $v$.
            We then visit $v$, locate the edge $\lastin{v}$
            that we used to enter $v$, and backtrack through it.
            This simulates step 4 of algorithm 4.}
          \step{\sno6 Visit $v$, and set $e\gets\sym{e}$.
            If $e=\root$, terminate the algorithm.} 
          \step{\sno7 While $\good{\dnext{e}, G}$, set $e\gets \dnext{e}$
          and repeat this step.}
          \comm{At this point $e=\lastin{v}$.}
          \step{\sno8 Set $v\gets\org{e}$.}
        \endblock
      \step{\sno9 Visit $e$ and loop back to step 2.}
    \endblock
\endblock
}

The equivalence between algorithms 4 and 5 is straightforward. It is also easy to show that the latter runs in $O(m)$ time for a subdivision with $m$ edges. Every edge of the graph is assigned to $e$ at most twice: once in the enumeration of the edges out of a vertex $v$ (step 3), and once in the determination of $\lastin{v}$ (step 7).  

\par\vfill\eject\end

