\input CS445Hdr
\def\file{CS445N7B.tex of June 22, 1982 1:38 PM}
\def\header{7. Decomposition problems}
\def\title{Lecture 16}
\sect{7.4. Optimal decomposition of a simple polygon in convex pieces.}

Algorithms 7.1 and 7.2 allow us to decompose any simple polygon $P$
of $n$ vertices into $n-2$ triangles.
For some applications, a decomposition into convex pieces other
than triangles may be acceptable, provided that
the number of pieces is kept as low as possible.
The {\it minimum} number $c(P)$ of convex polygons into which $P$
can be decomposed may be considerably less than $n-2$,
being for example 1 when $P$ itself is convex.\par

The value of $c(P)$ depends on whether the vertices of the
pieces are restricted to be those of $P$ or are allowed to be
interior to it. In the second case, the ``new'' vertices are sometimes
called {\it Steiner points}. A polygon for which the introduction of
Steiner points results in a decomposition with
fewer pieces is shown below:\fig4\par

The optimal convex decomposition using Steiner points
can be found in $O(n↑3)$ time, as was shown by D. Dobkin and
B. Chazelle in 1979.
For the optimal decomposition without Steiner points, the fastest algorithm
known is due to D. Greene (1981), which runs in $O(n↑4)$ time
in the worst case\foot1{This is a somewhat unusual situation,
since several ``easy'' problems (e.g., the determination of
a minimum spanning tree in the plane) become much harder,
even $NP$-complete, if Steiner points are allowed.}. We will
spend the rest of this section discussing Greene's algorithm
in some detail.\par

Let $p1̌, p2̌, \ldotss, pň$ be the vertices
of $P$ in counterclockwise order.
We will use the ordered pair $(i,j)$ as a synonym for the segment
$pǐpǰ$, which is assumed to be oriented from $pǐ$ to $pǰ$.
To simplify the wording of this section, we will extend the meaning
of the word {\it edge} so as to include not only the original edges of
$P$ but also its diagonals.\par

An edge $pǐpǰ$ is said to be {\it proper} iff $i<j$ and $pǐpǰ$
lies entirely in $P$ (on its boundary or in its interior).
If $(i,j)$ is a proper edge, we also say that $pǐ$ is visible from $pǰ$, and
vice-versa. Any convex subdivision of $P$ without Steiner points
is uniquely determined by a set of
non-intersecting proper diagonal edges (or {\it cuts}). \par

An optimal decomposition of $P$ into convex pieces can
be found by enumerating all subsets of the proper diagonals,
in order of increasing size,
and taking the first subset that determines only convex pieces
and whose edges do not cross.
Unfortunately, a polygon with $n$ vertices may require $\Theta(n)$
cuts, and therefore $2↑{\Theta(n)}$ subsets will have
to be considered. Clearly, exhaustive search is out of the question.\par

Greene's algorithm uses the technique of dynamic programming
to avoid this combinatorial explosion. Dynamic programming
is a general method for solving optimization problems which satisfy what may
be called the {\it principle of optimality}: the best solution to the whole
problem can be decomposed in such a way that each part is the
best solution to a similar but smaller subproblem.
The principle of optimality
allows us to build the global solution by attacking the smallest
subproblems independently of each other, and then repeatedly combining
or expanding their solutions.\par

\digress{The classical example of dynamic programming is the
determination of the shortest path $\pi$ between
two points $u,v$ in a graph: for any other
point $w$ along $\pi$, the shortest path from $u$ to $w$
is a prefix of $\pi$. We can find $\pi$
(and the shortest paths from $u$ to all vertices of the graph)
by starting at $u$ with paths of zero edges
and extending them simultaneously at each
iteration to previously unvisited points.\par

\dp After the $k$th iteration we will have the solution
for the ``$k$th subproblem'', which asks for the best paths
from $u$ to all vertices that can be reached in $k$ steps.
The principle of optimality applies, and the next
iteration will expand these paths into the solution of
the $(k+1)$th subproblem.}
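The layered expansion described in this digression can be sketched in a few lines of modern Python; the graph, its adjacency lists, and all names below are invented for the illustration:

```python
from collections import deque

def shortest_paths(adj, u):
    """Breadth-first expansion: after the k-th iteration we hold the
    cheapest (fewest-edge) paths from u to every vertex reachable in
    at most k steps, each stored as an explicit vertex list."""
    path = {u: [u]}            # best path found so far, keyed by vertex
    frontier = deque([u])
    while frontier:
        w = frontier.popleft()
        for v in adj[w]:
            if v not in path:  # previously unvisited: extend the path to v
                path[v] = path[w] + [v]
                frontier.append(v)
    return path

# Toy graph; note the prefix property: the path to 'd' extends the path to 'c'.
adj = {'u': ['a', 'c'], 'a': ['b'], 'c': ['d'], 'b': [], 'd': []}
paths = shortest_paths(adj, 'u')
```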

If $(i,j)$ is a proper edge, we will denote by $P{̌ij}$
the polygon $\langle pǐ, p{̌i+1}, \ldotss, pǰ \rangle$; this
polygon will be referred
to as the {\it ear} of $P$ defined by the edge $(i,j)$.
Let's further define the {\it cost} of that edge as
being $c{̌ij} = c(P{̌ij})$, the minimum number
of pieces in any convex decomposition of its ear $P{̌ij}$.
If $j=i+1$ (i.e., $pǐpǰ$ is one of the original edges of $P$), then
the cost $c{̌ij}$ is zero by definition. It is also convenient to let
$c{̌ij}=+\infty$ whenever $(i,j)$ is not a proper edge.
According to these definitions, $P{̌1n}=P$, and
therefore $c(P)=c{̌1n}$.\par

For example, in the polygon below we have $c{̌3,9}=4$. The dotted
lines give an optimal decomposition of the ear $P{̌3,9}$ (hatched).\fig4\par

Let's consider an optimal convex decomposition of $P$, and let $(i,j)$
be one of its cuts. The corresponding
ear $P{̌ij}$ is composed of a
convex {\it root piece} $K{̌ij}$, adjacent to $pǐpǰ$,
with zero or more ``sub-ears'' attached
to it. Since each sub-ear can be decomposed independently of the others,
each of them must be subdivided
in the optimal fashion; it follows that $c(P{̌ij})$
will be one plus the total cost of decomposing the sub-ears into convex pieces.
This is equivalent to saying that $c{̌ij}$ is one plus the
sum of the costs of the other edges of $K{̌ij}$.\par

This property is clearly an instance of the principle of optimality, and is one
of the key ideas on which Greene's algorithm is based. Suppose we
have already computed the cost of all proper edges in the ear $P{̌ij}$, except that of the
cut $(i,j)$ itself; then we can determine the latter, $c{̌ij}$, by finding the
convex polygon $K{̌ij}$, inscribed in $P{̌ij}$ and adjacent to
$pǰpǐ$, whose total edge cost (excluding
$(i,j)$ itself) is minimum.\par

This task is not trivial, since the number of candidates for $K{̌ij}$
may be exponential. However, the principle of optimality
(albeit in a weaker form)
applies also to the determination of the cheapest polygon $K{̌ij}$.\par

Let's define a {\it convex path from $pǐ$ to $pǰ$} as being a sequence
of vertices of $P$, with increasing indices, which determines
a convex polygon $K$ entirely contained in $P{̌ij}$ and adjacent to
the proper edge $(i,j)$.\fig5
A convex path $\pi$ from $pǐ$ to $pǰ$, together with the
edge $(i, j)$, uniquely defines the root piece $K{̌\pi}$ of some convex decomposition of $P{̌ij}$.\par

A pair of consecutive vertices in a convex path $\pi$ determines a (proper) edge
of $\pi$. The {\it cost} $c(\pi)$ of the path is the sum
of the costs of its edges, i.e.
the cost of the sub-ears of $P{̌ij}$ that hang from $K{̌\pi}$.
It follows that the cost of a proper diagonal $(i,j)$ is one more than the
cost of the cheapest convex path $\pi$ from $pǐ$ to $pǰ$, and $K{̌\pi}$ is
the root piece of an optimal decomposition of $P{̌ij}$.\par

If it were not for the convexity constraints, we could compute the
cheapest path $\pi{̌m}$ from $pǐ$ to each $pm̌$ ($i\leq m\leq j$)
by a straightforward application of the classical
shortest-path algorithm. We set $\piǐ\gets\langle pǐ\rangle$, and then,
for $m=i+1,i+2,\ldotss,j$, we find the $k$ in the range
$i \leq k < m$, with $(k,m)\neq (i,j)$, that minimizes the total cost
$c(\pi{̌m}) = c(\pi{̌k}) + c{̌km}$, and set
$\pi{̌m}\gets\pi{̌k}\cdot\langle pm̌\rangle$.
The cost $c{̌ij}$ of the edge $(i,j)$ would then be $c(\pi{̌j})+1$.\par
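Ignoring convexity for a moment, this computation is only a few lines. The sketch below (ours, in Python) assumes the costs are given as a matrix `c`, with `float('inf')` marking improper edges; as the next paragraph explains, it is {\it not} a correct solution of the real problem:

```python
INF = float('inf')

def naive_cheapest_path(c, i, j):
    """The straightforward DP described above, ignoring convexity:
    best[m] is the cost of the cheapest path from vertex i to vertex m
    using edges (k, m) with k < m, but never the edge (i, j) itself."""
    best = [INF] * (j + 1)
    best[i] = 0
    for m in range(i + 1, j + 1):
        best[m] = min(best[k] + c[k][m]
                      for k in range(i, m) if (k, m) != (i, j))
    return best[j]             # c_ij would then be best[j] + 1

# A convex quadrilateral ear: the three original edges cost 0 and the
# two short diagonals cost 1 (illustrative numbers).
c = [[INF] * 4 for _ in range(4)]
for k in range(3):
    c[k][k + 1] = 0
c[0][2] = c[1][3] = 1
cost = naive_cheapest_path(c, 0, 3)    # the path 0-1-2-3 has total cost 0
```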

Unfortunately, this simple algorithm does not work.
The best {\it convex} path from $pǐ$ to
$pm̌$ is not necessarily $\pi{̌k}\cdot\langle pm̌ \rangle$,
since $\pi{̌k}$, although convex by itself, may make concave
angles with the edges $(k,m)$ or $(i,m)$.
Therefore, it is sometimes the case that the best path
from $pǐ$ to $pm̌$ gets to its penultimate vertex $pǩ$
by some path other than $\pi{̌k}$.\fig4\par

Greene's algorithm takes care of the convexity
constraints by eliminating some
of the intermediate vertices $pm̌$ from consideration, and
by keeping track of several partial paths for each $m$.
This increases both storage and time requirements by
a factor of $\Theta(n)$.\par

While computing the cost of an edge $(i,j)$,
Greene’s algorithm restricts its attention to convex paths that are
{\it compatible} with $(i,j)$, i.e., whose vertices are all visible
from $pǐ$ and $pǰ$. Furthermore, for each proper
edge $(k,m)$ (where $i\leq k<m\leq j$),
it computes and saves the cheapest convex path $\pi{̌km}$ from
$pǐ$ that is compatible with $(i,j)$ and ends with
the edge $(k,m)$. This path is either
the edge $(k,m)$ itself, if $k=i$, or is computed by
appending $(k,m)$ to $\pi{̌rk}$, where $r$ is such that
$i\leq r<k$, the angle $přpǩpm̌$ is convex, and the
cost $c(\pi{̌rk})$ is minimum. This of course applies
only if $pm̌$ is visible from $pǐ$ and $pǰ$,
if there is at least one such path $\pi{̌rk}$, and if the
edge $(k,m)$ is proper; otherwise, there is no compatible convex path from
$pǐ$ ending with $(k,m)$.\par

\digress{This computation can still be seen as equivalent
to the determination of the
cheapest path from $pǐ$ to $pǰ$ in some graph $G$.
The nodes of $G$ are the pairs $(k,m)$ where $i\leq k<m\leq j$,
and both $pǩ$ and $pm̌$ are visible from $pǐ$ and $pǰ$. The
edges of $G$ are the triplets $(r,k,m)$ (connecting nodes $(r,k)$ and $(k,m)$)
such that $i\leq r<k<m\leq j$,
the angle $přpǩpm̌$ is convex,
and $př$, $pǩ$, $pm̌$, $pǐ$, $pǰ$ are all visible from each other.
To each node $(k,m)$ (except $(i,j)$) is assigned the cost $c{̌km}$. Then the
cheapest convex path from $pǐ$ to $pǰ$ in $P$
compatible with $(i,j)$
will correspond to the cheapest non-trivial cycle
of $G$ that starts with node $(i,j)$.}

In order to apply this algorithm, we need the costs of all edges
$(i,m)$, $(k,m)$ and $(k,j)$ for $i<k<m<j$. This requirement can be met
by enumerating the pairs $(i,j)$ in increasing order of $j-i$, so that
all required costs are known by the time the computation of $c{̌ij}$ is
undertaken.\par
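This ordering is produced by two nested loops over the span $d=j-i$, as in steps 1--2 of Algorithm 7.3 below; a small Python check (names ours):

```python
def edge_order(n):
    """Enumerate the vertex pairs (i, j), 1-based as in the text, in
    increasing order of the span d = j - i, so that every shorter edge
    is costed before any longer one."""
    return [(i, i + d) for d in range(2, n) for i in range(1, n - d + 1)]

order = edge_order(6)
# order == [(1,3), (2,4), (3,5), (4,6), (1,4), (2,5), (3,6), (1,5), (2,6), (1,6)]
```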

Given the costs $c{̌km}$ and
$c(\pi{̌rk})$ for all relevant $r$,
the computation of $\pi{̌km}$ for fixed $k$ and $m$
requires only $O(n)$ time; therefore, the computation of each
cost $c{̌ij}$ can be done in $O(n↑3)$ time,
and the total time will be $O(n↑5)$.\par

Greene was able to reduce this cost to $O(n↑4)$ by
taking advantage of the fact that, for a fixed $k$, the angles
of the {\it proper} edges $pǩpm̌$ are strictly increasing with $m$,
and the same is true of the proper edges $přpǩ$ when $r$ increases.\fig4
It follows that any convex path from $pǐ$ to $pǩ$ that can be extended to
$pm̌$ can also be extended to $p{̌m\pr}$, if $m\pr>m$ and
$(k,m\pr)$ is a proper edge.\par

Suppose that when we computed $\pi{̌km}$
we concluded that the cheapest among the convex paths
$\pi{̌rk}$ that could be extended by the edge $(k,m)$ was $\pi↑*{̌k}$.
Then, in order to compute $\pi{̌km\pr}$, where $m\pr>m$,
we have to consider only $\pi↑*{̌k}$ and the paths $\pi{̌rk}$
such that $př$ lies in the angle opposite to $pm̌pǩp{̌m\pr}$.\fig4
Thanks to this fact, the computation of all paths $\pi{̌km}$ for a
fixed $k$ takes only $O(n)$ operations, instead of $O(n↑2)$, bringing the total
time down to $O(n↑4)$.\par

For each pair $(i,j)$, we need $\Theta(n↑2)$ space to represent the
partial paths $\pi{̌km}$\foot1{Although there are
$\Theta(n↑2)$ paths to be stored,
with average length $\Theta(n)$, each path can be represented as a
vertex and a pointer to some other path in the table,
so the total space is
still $\Theta(n↑2)$.}. After $c{̌ij}$ has been computed, we can discard all
those paths except the cheapest of those ending at $pǰ$,
which defines the optimal decomposition of the ear $P{̌ij}$;
the total space requirement would then be $\Theta(n↑3)$.\par

By using a two-pass algorithm, the space requirement
can be reduced to $\Theta(n↑2)$.
The first phase, which proceeds ``from the bottom up'',
computes all costs $c{̌ij}$, discarding all paths after
processing each edge. The second phase, ``from the top down'',
recomputes only those paths
that actually define the optimal decomposition.
Phase 1 requires $O(n↑2)$ storage to compute the cost of each
edge $(i,j)$, but that storage can be reused to
compute the cost of other edges. Both phases take $O(n↑3)$ time.\par

Algorithm 7.3 uses two auxiliary procedures, given below as algorithms
7.3B and 7.3D. Both receive as parameters a proper cut
$(i,j)$; they assume all edge costs
$c{̌km}$ for $i\leq k< m\leq j$, except perhaps for $c{̌ij}$ itself,
have been previously computed.
Algorithm 7.3B computes the cheapest
convex path $\beta$ from $pǐ$ to $pǰ$, and its cost $b$. If no such
path exists (i.e., if $(i,j)$ is not a proper edge), it returns a dummy
path $\Lambda$ with infinite cost.
Algorithm 7.3D, which is recursive and calls 7.3B, produces a list of the
convex polygons that constitute an optimal decomposition of the
ear $P{̌ij}$ into convex pieces.\par

\algorithm{7.3}{Optimal convex decomposition of a simple polygon.}{
\step 1. [Initialize.] {For $i=1,2,\ldotss, n-1$, set $c{̌i,i+1}\gets 0$.
Set $d\gets 2$.}
\step 2. [Phase 1.] {For $i=1,2,\ldotss,n-d$, set $j\gets i+d$
and perform step 3. Set $d\gets d+1$; if $d<n$ repeat step 2,
otherwise go to step 4.}
\step 3. [Compute cost of edge $(i,j)$.] {If $(i,j)$ is not a proper edge,
set $c{̌ij}\gets +\infty$. Otherwise, compute by algorithm
7.3B the best convex path $\beta$ from $pǐ$ to $pǰ$,
and its cost $b$, and set $c{̌ij}\gets b+1$.}
\step 4. [Phase 2.] {Apply algorithm 7.3D to the pair $(1,n)$,
producing the optimal decomposition of $P{̌1n}=P$ into convex pieces.}}
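Phase 1 can be transliterated almost directly. The Python sketch below substitutes the naive, convexity-free path computation for algorithm 7.3B, so it follows the control structure of steps 1--3 but is exact only when the convexity constraints never bind (e.g., for a convex polygon, where every edge is proper); `is_proper` is a caller-supplied predicate, and all names are ours:

```python
INF = float('inf')

def min_convex_pieces(n, is_proper):
    """Steps 1-3 of (a simplified) algorithm 7.3: fill the cost table
    c[i][j] in increasing order of j - i and return c[1][n] = c(P).
    Vertices are numbered 1..n; the convexity constraints enforced by
    algorithm 7.3B are NOT enforced here."""
    c = [[INF] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n):
        c[i][i + 1] = 0                 # step 1: original edges cost zero
    for d in range(2, n):               # step 2: increasing span d = j - i
        for i in range(1, n - d + 1):
            j = i + d
            if not is_proper(i, j):
                continue                # step 3: c[i][j] stays +infinity
            best = [INF] * (j + 1)      # cheapest-path costs from vertex i
            best[i] = 0
            for m in range(i + 1, j + 1):
                best[m] = min(best[k] + c[k][m]
                              for k in range(i, m) if (k, m) != (i, j))
            c[i][j] = best[j] + 1       # one root piece plus its sub-ears
    return c[1][n]

# In a convex polygon every vertex sees every other, and c(P) = 1:
pieces = min_convex_pieces(8, lambda i, j: True)
```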

Algorithm 7.3B constructs two auxiliary tables: $\pi{̌km}$, containing,
for each pair $(k,m)$ with $i\leq k<m\leq j$,
the cheapest convex path from $pǐ$
that is compatible with $(i,j)$ and
ends with the edge $(k,m)$; and $t{̌km}$, which gives the cost of
that path. It also builds three auxiliary vectors $rǩ$, $\pi↑*ǩ$
and $t↑*ǩ$,
whose contents depend on the current value of the variable $m$.
For each $k$ such that $i<k<m$ and
$pǩ$ is visible from $pǐ$, $pǰ$ and $pm̌$,
$\pi↑*ǩ$ is the best convex path from
$pǐ$ to $pǩ$ that can be extended by the edge $(k,m)$,
$t↑*ǩ$ is its cost, and
$rǩ$ is the smallest index such that $(rǩ,k)$ is proper
and makes a concave angle with $(k,m)$.\par

\algorithm{7.3B}{Best convex paths from $pǐ$ to $pǰ$.}{
\step B1. [Initialize.] {Set $m\gets i+1$.}
\step B2. [Advance $m$.] {If $m=j$ go to step B7.
If $pm̌$ is visible from $pǐ$ and $pǰ$,
set $\pi{̌im}\gets \langle i,m \rangle$
and $t{̌im}\gets c{̌im}$;
else set $\pi{̌im}\gets \Lambda$, $t{̌im}\gets +\infty$. Set $rm̌\gets i$,
$\pi↑*m̌\gets\Lambda$, and $t↑*m̌\gets \infty$. Set $m\gets m+1$, $k\gets i+1$.}
\step B3. [Check for improper edges.] {If $pǩ$ is invisible from
$pǐ$, $pǰ$, or $pm̌$,
set $\pi{̌km}\gets \Lambda$, $t{̌km}\gets +\infty$ and go to step B6.}
\step B4. [Advance $rǩ$.] {If $rǩ\geq k$, or $(rǩ,k)$ is proper and
the angle between $(rǩ,k)$ and $(k,m)$ is concave,
go to step B5. If $t{̌rǩk} < t↑*ǩ$, set $\pi↑*ǩ\gets \pi{̌rǩk}$ and
$t↑*ǩ\gets t{̌rǩk}$. Set $rǩ\gets rǩ+1$ and return to step B4.}
\step B5. [Best path ending with $(k,m)$.] {If $\pi↑*ǩ=\Lambda$ then
set $\pi{̌km}\gets \Lambda$ and $t{̌km}\gets +\infty$;
else set $\pi{̌km}\gets \pi↑*ǩ\cdot \langle m\rangle$ and
$t{̌km}\gets t↑*ǩ + c{̌km}$.}
\step B6. [Increment $k$.] {If $k=m$, go back to step B2. Otherwise, set
$k\gets k+1$ and return to step B3.}
\step B7. [Best path to $pǰ$.] {Find $k$ such that $i<k< j$ and
$t{̌kj}$ is minimum; set $\beta\gets \pi{̌kj}$,
$b\gets t{̌kj}$ and return.} }

It is easy to check that, for each call of algorithm 7.3B, no step
will be performed more than $O(n↑2)$ times. The only case for which this
is not immediately obvious is step B4; to show that it too is reached
at most $O(n↑2)$ times, it is enough to notice that
the variable $rǩ$ is initialized once with $i$ in step B2,
is incremented by one every time the test of step B4 fails, and
its value cannot become greater than $k$. Since there are
$O(n)$ such variables, the
operation $rǩ\gets rǩ+1$ is executed at most $O(n↑2)$ times,
implying the same is true for step B4.\par

Algorithm 7.3D decomposes the ear $P{̌ij}$ by recomputing the best
convex path from $pǐ$ to $pǰ$, and calling itself recursively on the
sub-ears that lie to the right of it.\par

\algorithm{7.3D}{Decompose the ear $P{̌ij}$.}{
\step D1. [Find root piece.] {Compute the best convex path $\beta$
from $pǐ$ to $pǰ$, using algorithm 7.3B. Output the convex polygon
whose vertices are those of $\beta$.}
\step D2. [Enumerate sub-ears.] {Apply recursively algorithm 7.3D to
each cut $(k,m)$ belonging to the path $\beta$.}}
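The recursion of 7.3D, separated from the path computation (which we pass in as a function), looks as follows in Python; `best_path(i, j)` is assumed to return the vertex sequence of the cheapest convex path from $pǐ$ to $pǰ$, and the concrete paths in the example are invented:

```python
def decompose_ear(i, j, best_path, out=None):
    """Algorithm 7.3D, schematically: emit the root piece of the ear
    (i, j), then recurse on every path edge that is a cut (a diagonal,
    i.e. not an original polygon edge)."""
    if out is None:
        out = []
    beta = best_path(i, j)              # step D1: root piece = path + edge (i, j)
    out.append(beta)                    # the convex polygon on beta's vertices
    for k, m in zip(beta, beta[1:]):    # step D2: enumerate the sub-ears
        if m > k + 1:                   # (k, m) is a cut; decompose its ear
            decompose_ear(k, m, best_path, out)
    return out

# Hypothetical best paths for a 6-vertex ear: the root piece is the
# triangle 1-3-6, with sub-ears hanging from the cuts (1,3) and (3,6).
paths = {(1, 6): [1, 3, 6], (1, 3): [1, 2, 3], (3, 6): [3, 4, 5, 6]}
pieces = decompose_ear(1, 6, lambda i, j: paths[(i, j)])
# pieces == [[1, 3, 6], [1, 2, 3], [3, 4, 5, 6]]
```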

Algorithm 7.3 and its routines use a pre-computed
two-dimensional table to test whether a given edge $(i,j)$
is proper or not. This table can be constructed in $O(n↑2)$
time, since, for each vertex $pǐ$, we can find in
$O(n)$ time the set of vertices $pǰ$ that are visible from it
(see homework problem 27).
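For reference, here is the straightforward $O(n)$-per-pair diagonal test (so the whole table would cost $O(n↑3)$ this way, not the $O(n↑2)$ of problem 27); a Python sketch with 0-based indices, which ignores degenerate collinear contacts and whose names are all ours:

```python
def area2(a, b, c):
    """Twice the signed area of triangle abc (> 0 iff a, b, c turn left)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_cone(poly, i, j):
    """Does the segment from vertex i toward vertex j start locally
    inside the counterclockwise polygon at vertex i?"""
    n = len(poly)
    a, b, c, d = poly[i-1], poly[i], poly[(i+1) % n], poly[j]
    if area2(a, b, c) >= 0:      # convex corner at i
        return area2(b, d, a) > 0 and area2(d, b, c) > 0
    # reflex corner at i
    return not (area2(b, d, c) >= 0 and area2(d, b, a) >= 0)

def is_proper(poly, i, j):
    """Does the segment i--j lie entirely in the counterclockwise simple
    polygon `poly`?  O(n) per pair; collinear contacts are not handled."""
    n = len(poly)
    if (i + 1) % n == j or (j + 1) % n == i:
        return True              # an original edge of the polygon
    if not (in_cone(poly, i, j) and in_cone(poly, j, i)):
        return False
    r, s = poly[i], poly[j]
    for k in range(n):           # any proper crossing with a boundary edge?
        k2 = (k + 1) % n
        if len({k, k2, i, j}) < 4:
            continue             # skip edges incident to i or j
        p, q = poly[k], poly[k2]
        if area2(r, s, p) * area2(r, s, q) < 0 and \
           area2(p, q, r) * area2(p, q, s) < 0:
            return False
    return True

# An L-shaped (counterclockwise) hexagon with one reflex vertex at (1, 1):
poly = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
ok = is_proper(poly, 0, 3)       # the diagonal into the reflex corner
```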

A further improvement in the running time of Greene’s algorithm
comes from the observation that,
if $pǐ$ and $pǰ$ are convex corners of $P$, the
cut $(i,j)$ cannot belong to an optimal convex
decomposition of $P$. Therefore, all such edges (and the paths
containing them) can be ignored in the algorithm. If $P$ has
$v$ concave corners, this improvement reduces the
running time of the algorithm to $O(n↑2v↑2)$.\par

\vfill\eject
\sect{7.5. Near-optimal convex decomposition.}

Theoretically, algorithm 7.3 is quite important, for it shows that the
optimal convex decomposition of a polygon can be found
in polynomial time. However, its $\Theta(n↑4)$ asymptotic running time
makes it far too slow for practical use.
In most applications, an expensive algorithm that gives exact solutions
is less valuable than a cheap one that gives reasonably accurate
approximations.\par

We will describe next an entirely different algorithm, also due to Greene
(1981), that runs in $O(n\log n)$ time and finds a decomposition of a
monotone polygon $P$ in at most $4c(P)-3$ convex pieces, i.e., with at most
four times as many cuts as the absolute minimum.
Used in conjunction with algorithm 7.2,
it gives an efficient method for the convex decomposition of simple polygons.\par

The method is very similar to the triangulation algorithm of
Preparata and Garey (7.1),
since it too examines the vertices $p1̌, p2̌, \ldotss, pň$ from top to bottom (assuming the given polygon is monotone along the
$y$ axis), removing the convex pieces
in such a way that, after each cut, the figure that remains to
be decomposed is still a monotone polygon. However, the
fact that the pieces are convex polygons instead of
triangles forces us to use more complex data structures
and to consider many more special cases.\par

When vertex $pǐ$ is going to be considered, the remaining polygon $P*$
is represented by a vertex $r$, and three auxiliary vertex lists, the
{\it stack} $S=\langle s1̌, s2̌, \ldotss, sǩ\rangle$, the {\it alternate chain}
$A=\langle a1̌, a2̌, \ldotss, a\̌lscr\rangle$, and the
{\it extension chain} $X=\langle x1̌, x2̌, \ldotss,
xm̌\rangle$. The vertex $r$ and the lists $S$, $X$,
and $A$ satisfy the following {\it chain invariants}:\par

\indent\bu\quad from top to bottom, the vertices of the residual
polygon $P*$ that are
above $pǐ$ are $\langle r,a1̌, a2̌, \ldotss, a\̌lscr\rangle$ on one
side, and $\langle r, s1̌, s2̌, \ldotss, sǩ, x1̌, x2̌, \ldotss, xm̌\rangle$
on the other;\par

\indent\bu\quad the angles at $r$, at $a1̌, a2̌, \ldotss,
a\̌lscr$, and at $x1̌, x2̌, \ldotss, xm̌$ are all convex;\par

\indent\bu\quad the angles at $s1̌, s2̌, \ldotss, sǩ$ are all concave;\par

\indent\bu\quad the segment $ra\̌lscr$ is entirely in $P$.\fig7\par

In the above statements, as well as in the algorithm below, the ``angle at $p$'' means the internal angle of the remaining polygon $P*$ at the
vertex $p$; this angle may change when $P*$ changes.
Note that the point
$r$ can be either above or below $s1̌$. \par

The left and right boundaries of $P*$ are called the {\it stack/extension
side} and the {\it alternate side}, depending on the current location
of the corresponding chains (which may switch sides many times
during the execution of
the algorithm). The next point to be considered, $pǐ$, will
be adjacent to $r$ or $a\̌lscr$ if it lies on the alternate side, and
to $r$, $xm̌$, or $sǩ$ if it lies on the stack/extension
side\foot1{Note that these definitions are ambiguous when
$i=n$, or when $\lscr-1=m=k=0$.}.\par

The chains $S$, $A$ and $X$ are actually double
queues, with insertions and deletions happening at both ends. The
operation ``delete $s1̌$'' means ``set $sǰ\gets s{̌j+1}$ for
$j=1, 2, \ldotss, k-1$, and set $k\gets k-1$''; the reverse operation
``insert $y$ at the front of $S$''
means ``set $s{̌j+1}\gets sǰ$ for $j=k, k-1, \ldotss, 2$,
set $s1̌\gets y$, and set $k\gets k+1$''.
As we know, $S$ can easily be implemented in such a way that
both operations (and the similar ones at the rear end) require
only $O(1)$ time.\par
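In a modern implementation the chains map directly onto double-ended queues; for instance, with Python's `collections.deque` (our choice of container, not part of the original):

```python
from collections import deque

S = deque([3, 4, 5])   # the stack <s1, s2, s3>, with s1 at the front
S.popleft()            # "delete s1"                  -- O(1)
S.appendleft(2)        # "insert 2 at the front of S" -- O(1)
S.append(6)            # insertion at the rear end    -- O(1)
# S is now deque([2, 4, 5, 6])
```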

Greene's method is given below as two
sub-algorithms (7.4A and 7.4X) which are called by algorithm 7.4
to process each vertex $pǐ$ of $P$. It is assumed that
$n\geq 3$.

\algorithm{7.4}{Efficient decomposition of a monotone polygon in
convex pieces.}{
\step 1. [Initialize chains.] {Set $r\gets p1̌$, $\lscr\gets k\gets
m\gets 0$, $i\gets 2$.}
\step 2. [Process points.] {If $pǐ$ is on the stack/extension side of $P*$, or
$k=m=\lscr=0$, perform algorithm 7.4X; else perform algorithm 7.4A.
Set $i\gets i+1$; if $i<n$, repeat step 2.}
\step 3. [Finish the decomposition.] {If $k=0$, output the polygon
$\langle r, a1̌, a2̌, \ldotss, a\̌lscr, xm̌, x{̌m-1}, \ldotss, x1̌\rangle$
and stop. Otherwise, output the piece $\langle r, a1̌, a2̌, \ldotss, a\̌lscr, s1̌\rangle$, set $r\gets s1̌$, $\lscr\gets 0$, delete $s1̌$ from
$S$ and repeat step 3.\fig4}}

Routine 7.4A tries to append the next point $pǐ$ to the
alternate chain; in order to do so without violating the
chain invariants, it may have to remove one or more convex pieces
from the part of $P*$ that is above $pǐ$. In this process,
the chains may become empty and switch sides, so that
$pǐ$ may end up appended to the stack. On entry to 7.4A,
at least one of the chains must be nonempty.

\algorithm{7.4A}{Process point on alternate side.}{
\step A1. [Check diagonal $rpǐ$.] {If $rpǐ$ is inside $P*$,
go to step A3. Otherwise, we must have
$\lscr>0, k>0$; output the polygon $\langle r, a1̌, a2̌,
\ldotss, a\̌lscr, s1̌\rangle$. If $a\̌lscr$ is below $s1̌$, then set
$a1̌\gets a\̌lscr$, $r\gets s1̌$, $\lscr \gets 1$, delete $s1̌$ from
$S$ and go back to step A1; else set $r\gets a\̌lscr$ and $\lscr\gets 0$.\fig3}
\step A2. [Check angle at $s1̌$.] {If $k=0$ or the angle at $s1̌$ is concave, go
to step A3. If $k=1$, insert $s1̌$ at the front of the extension chain $X$,
delete it from $S$, and go to step A3. Otherwise, output the triangle
$\langle r, s1̌, s2̌\rangle$, delete $s1̌$ from $S$, and repeat step
A2.\fig3}
\step A3. [Check angle at $pǐ$.] {If the angle at $pǐ$ is convex,
append $pǐ$ to the alternate chain $A$ and return to algorithm 7.4.
If the angle is concave and $k=m=\lscr=0$, append $pǐ$ to the
stack $S$ and return to algorithm 7.4. Otherwise, continue to step A4.\fig3}
\step A4. [Remove piece from alternate side.] {If $k=0$ then output
the piece $\langle r, a1̌, a2̌, \ldotss,\penalty-100
a\̌lscr, pǐ, \penalty-100 xm̌, x{̌m-1},
\ldotss, x1̌\rangle$, set $r\gets xm̌$ and $\lscr\gets m\gets 0$. If $k>0$ then output $\langle r, a1̌, a2̌, \ldotss,
a\̌lscr, pǐ, s1̌\rangle$, set $r\gets s1̌$, $\lscr\gets 0$, and delete $s1̌$
from $S$. Go back to step A3.}}\fig3

It is not hard to check that the chain invariants are satisfied
at the beginning of each step of 7.4A, except in step A2, where the angle
at $s1̌$ may be convex. It is also easy to verify that the condition
$k=m=\lscr=0$ may happen only once, just before the
last execution of step A3, and, therefore, the chains
do not switch sides until that moment.\par

To prove that the pieces produced by algorithm 7.4A are entirely
contained in $P*$, we have to show that the corresponding cuts
are proper diagonals of $P*$. For example, let's consider the
cut $a\̌lscr s1̌$ in step A1, and find out where it may cross
the boundary of $P*$. By the convexity of $A$, we see that
$a\̌lscr s1̌$ cannot cross any edge on the alternate side between $r$
and $a\̌lscr$.
By the concavity of $S$, and the hypothesis that $a\̌lscr$ can see $r$,
we conclude that it cannot cross any edge between $r$ and $x1̌$
on the stack/extension side. Finally, since the angle at $xm̌$
is convex, and the next point $pǰ$ on the stack/extension side has to
be below $pǐ$, we can rule out also the edges between $x1̌$
and $pǰ$.\fig3

Similar arguments show that the cuts $rs2̌$ (in step A2),
$pǐs1̌$, and $pǐxm̌$ (in step A4) are interior to $P*$,
and that the pieces defined by them
are convex.

The other routine, 7.4X, is quite similar to 7.4A and
is called when the point $pǐ$ is on the stack/extension
side of $P*$, or when all three chains are empty.

\algorithm{7.4X}{Process point on stack/extension side.}{
\step X1. [Check angle at $pǐ$.] {If the angle at $pǐ$ is convex,
append $pǐ$ to the extension chain $X$ and return to algorithm 7.4;
if the angle is concave and $m=0$, append $pǐ$ to the stack $S$
and return to algorithm 7.4. Otherwise, continue to step X2.\fig3}
\step X2. [Remove piece from extension side.] {If $k=0$, output the
polygon $\langle r, a1̌, a2̌, \ldotss,\penalty-100
a\̌lscr, pǐ,\penalty-100 xm̌, x{̌m-1},
\ldotss, x1̌\rangle$, set $r\gets a\̌lscr$, $\lscr\gets m\gets 0$,
and return to step X1.
Otherwise, output the polygon $\langle sǩ, x1̌, x2̌, \ldotss, xm̌,
pǐ \rangle$ and set $m\gets 0$.\fig3}
\step X3. [Check angle at $sǩ$.] {If the angle at $sǩ$ is convex, set
$m\gets 1$, $xm̌\gets sǩ$, and delete $sǩ$ from $S$.
In any case, return to step X1.}}

The correctness of 7.4X can be established in essentially the same way
as that of the previous routine. The chain invariants hold
at the beginning of every step, except that the angle at $sǩ$ may be convex in step X3. From the convexity
of the angle at $a\̌lscr$ and the fact that the next point $pǰ$ on the
alternate side is below $pǐ$, we conclude that
$a\̌lscr pǐ$ and $pǐsǩ$ in step X2 are proper cuts,
and the pieces defined by them are convex.\fig3\par

It is easy to check that in each iteration of any loop in
Greene's algorithm at least one vertex is either added to or deleted
from $\Qscr=S\un A\un X\un \set{r}$. Since each vertex is
inserted at most once in $\Qscr$, we conclude that algorithm 7.4
runs in $O(n)$ time.\par

To count the number of pieces, let's use an
accounting scheme in which every new cut is charged
to one of its endpoints. If one of the endpoints is an element
$sǐ$ of $S$, and the cut makes the angle at $sǐ$ become
convex, the cut is charged to $sǐ$. The cut
$a\̌lscr s1̌$ in step A1 is charged to $s1̌$, even though
the angle at $s1̌$ may remain concave; in such case, the
cut $a\̌lscr s1̌$ is said to be {\it extraordinary}.
In all other cases, the cut is
incident to the next vertex $pǐ$ and is charged to it.
For example, in the second case of step X2
the cut $pǐsǩ$ is charged
to $pǐ$ if the angle at $sǩ$ remains concave, and to $sǩ$ if
it becomes convex (so that $sǩ$ is moved to $X$ in step X3).\par

A particular vertex of $P$ can be inserted in $S$ at most once (either in step A3 or in step X1), and is deleted from
$S$ whenever its angle becomes convex. Therefore, if we do not count extraordinary cuts, every vertex can be charged at most twice: once while it is the next vertex $pǐ$,
and once when it is about to leave $S$.\par

In addition, we claim that a vertex can be charged at most
one extraordinary cut. For the cut $a\̌lscr s1̌$ in step A1
to be extraordinary, $a\̌lscr$ must be above $s1̌$, so among
the vertices of $P*$, $s1̌$
must be at least third from the top. After the cut is made,
$a\̌lscr$ will be the top vertex, and $s1̌$ will be
second from top, so it cannot receive any other
extraordinary charge.\fig4\par

A point $p$ is inserted in $S$ only if the current angle
at $p$
is concave. Furthermore, when a cut is charged to $pǐ$
instead of a point of $S$, the current
angle at $pǐ$ is also concave. Since the
angles of $P*$ never increase, we conclude that a point is charged
only if it was a concave vertex of the original polygon $P$.\par

Moreover, we claim that a vertex $v$ can be charged
an extraordinary cut only if it is an {\it obnoxious vertex}
of $P$, that is, the angle at $v$ is concave,
and it cannot be made convex by a single cut.
This last condition is equivalent to saying that there is some
edge on the boundary of $P$ so close to $v$ that
it intercepts all rays out of $v$ that might split that
corner into two convex angles.\fig3\par

We see that a vertex $s1̌$ that receives an
extraordinary charge is an obnoxious vertex of $P*$
from the fact that the angle at $s1̌$ remains concave
after the cut, and that the latter is
performed only if $r pǐ$ is not inside $P*$.
The angle at $s1̌$ in the original polygon $P$
was at least as narrow as
the present one; also, when step A1 is reached,
no cuts with endpoint $pǐ$ have yet been made,
so $pǐ a\̌lscr$ is on the perimeter of $P$.
This implies $s1̌$ was an obnoxious vertex of the
original polygon, too.\par

In a convex decomposition of $P$, every concave
vertex must
be incident to at least one cut, and every obnoxious one
to at least two. On the other hand, a cut is incident to
exactly two vertices, so, if $P$ has $v$ concave vertices,
$x$ of which are obnoxious, the best convex
decomposition has at least $(v+x)/2$ cuts
and $(v+x)/2+1$ pieces.\par

By our accounting scheme, algorithm 7.4 will generate at most
$2v+x$ cuts. The number of pieces generated will be
$$\eqalign{2v+x+1|\leq (2v+2x+4)-3\cr
\null|=4((v+x)/2+1)-3\cr
\null|\leq 4c(P)-3.\cr}$$


The performance of algorithm 7.4 rarely approaches this upper bound.
Actually, the worst case known so far is given by the figure below:
the number of cuts generated by the approximate
algorithm (a) is only
${3\over2}v-1$, less than three times that of the
optimum solution (b).\fig5\par

\vfill\eject\end