\input CS445Hdr
% This is CS445N4C of February 18, 1982 9:10 AM
\setcount0 29
\def\header{4. Convex hull algorithms}
\def\title{Lecture 9}


\sect {4.10. The half-hull technique.}

Let $P$ be a set of points in the plane, and $H$ its convex hull.
We can define the {\it upper half} $H↑U$ of $H$ as being
the hull of $P\un\leftset(0,-\infty)\rightset$. Similarly, we define
the {\it lower half} $H↑L$ as being the hull of
$P\un\leftset(0,\infty)\rightset$.\fig4
It is easy to show that $H=H↑U\inte H↑L$, and
that $H↑U$ and $H↑L$ have at most two vertices in common (the
leftmost and rightmost points of $H$, if they are unique).
The vertices of $H$, in counterclockwise order,
are the finite vertices of $H↑L$ in order of increasing $x$, followed
by those of $H↑U$ in order of decreasing $x$ (with any
common vertices deleted).\par
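
As a concrete illustration (the Python sketch below is an addition to these
notes, with invented helper names, and is not any of the algorithms discussed
in this chapter), the finite vertices of $H↑L$ and $H↑U$ can be obtained by a
single scan of the points sorted by $x$, and then concatenated into the
counterclockwise vertex list of $H$:

def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive iff o -> a -> b is a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def half_hull(points):
    # One half-hull by a single scan: discard the last chain vertex while it
    # fails to make a left turn.  Fed the points by increasing x this yields
    # the lower half; fed them by decreasing x, the upper half.
    chain = []
    for p in points:
        while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
            chain.pop()
        chain.append(p)
    return chain

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = half_hull(pts)          # finite vertices of H^L, increasing x
    upper = half_hull(pts[::-1])    # finite vertices of H^U, decreasing x
    # Concatenate, deleting one copy of each common (extreme) vertex.
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# prints [(0, 0), (2, 0), (2, 2), (0, 2)], the hull in counterclockwise order

Note that the same scan produces either half, depending only on the order in
which the points are supplied.\par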

This technique is rather common in convex hull algorithms
(cf. problem 8 and algorithm 4.7). Its main advantage is that it allows
replacing the cyclic ordering of the hull vertices by the linear ordering
of their $x$-coordinates in each half.\par

Furthermore, algorithms based on the split-and-merge principle usually
involve the determination of common tangents to the hulls
$H↓1$ and $H↓2$ of the two subproblems. If the split is
done by means of a vertical separating line, one of the
tangents will connect the upper halves $H↓1↑U$ and $H↓2↑U$, and the other
will connect $H↓1↑L$ and $H↓2↑L$. By processing the upper and lower
halves independently of each other,
we have to worry about only one tangent at a time; this usually
makes for simpler code.\par

Splitting the hull into ``upper'' and ``lower'' parts seems to introduce
an arbitrary privileged direction in an otherwise ``isotropic'' problem,
a situation that is unpleasant to most mathematical minds.
It seems that in most algorithms this technique is
not essential, and we can almost always produce versions
that work with the entire hull at once,
keeping its vertices sorted in cyclic order. However, the monotonic half-hull
technique is usually found to produce dramatic simplifications in the
algorithms, particularly at the level of the final code. It should also
be noted that many algorithms based on cyclic ordering of the vertices
usually have a privileged ``starting point'' or ``reference direction''
(cf. Graham's algorithm, 4.2), so they are
not entirely ``isotropic'' either.\par

\digress{The processing of the upper and lower halves is so similar that
the same procedures can be used for both; all we need
is an extra boolean parameter that, when set,
causes all point coordinates to be multiplied by $-1$ before
being used. This is equivalent to a 180\deg rotation of the
problem, which of course makes the lower hull become
the upper one.\par

\dp To be sure, we do not even need this special parameter if
we keep separate data structures for each half hull,
and if the points themselves are rotated before being included
in one of them. For example, if $\prg{Insert($x$,$y$,$H↑U$)}$
updates the upper
half to take into account the insertion of point $(x, y)$, then
$\prg{Insert($-x$,$-y$,$H↑L$)}$ will do the same for the
lower half. With this trick, counterclockwise traversal of $H$
can be done by scanning $H↑L$ and $H↑U$ in the {\it same
order}, namely that of decreasing $x$ (see the sketch following
this digression).\par

\dp Programmers whose aesthetic sense is troubled by the fact
that $H↑U$ and $H↑L$ may or may not have common endpoints,
depending on whether $\hull(P)$ has vertical edges or not, may
consider ``skewing'' the infinity points a little bit. By this we mean
that a vertical edge of $H$ is considered to be a proper edge of
$H↑U$ if it is on the right of $H$, and a proper edge of
$H↑L$ if it is on the left.\fig4
In this way, $H↑U$ and $H↑L$ will always have two common vertices;
said another way, the edges of $H$ are the (disjoint) union of the
finite edges of $H↑L$ and $H↑U$. Of course, some programmers will
find {\it this} trick awful, and may prefer to stick to the original
definition, or to include each vertical edge of $H$ in {\it both} half-hulls.}
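
To make the last remarks concrete, here is a Python sketch of the negation
trick (our own illustration, reusing the helpers of the previous sketch; the
roles are mirrored here, in that the scan produces lower halves, so it is the
upper half that is obtained from the negated points):

def hull_ccw(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = half_hull(pts)          # finite vertices of H^L, increasing x
    # Rotate the problem by 180 degrees: the lower half of the negated point
    # set is the upper half of the original one, traversed by decreasing x.
    upper = [(-x, -y) for (x, y) in half_hull(sorted((-x, -y) for (x, y) in pts))]
    return lower[:-1] + upper[:-1]  # delete one copy of the shared extreme vertices

Both half-hulls are produced by scanning their respective (stored) point
lists in the same direction, exactly as remarked above.\par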

\sect{4.11. Dynamic algorithms.}

The on-line computation of the convex hull of a set
whose points are being added and deleted in arbitrary sequence
(known as the {\it dynamic} convex hull problem)
is significantly harder than the corresponding problem in
which only insertions are allowed.\par

To begin with, the deletion of a point from the set can cause an
``interior'' point to become part of the convex hull. The on-line
algorithms we studied in sections 4.4 and 4.5 are designed to
throw away interior points; if we were to use them in dynamic
situations, we should arrange for interior points to be stored away
somewhere, and to be examined again when a point of the current
hull is deleted.\par

The naive solution would be to store interior points in an unstructured
list or array, and to apply the on-line insertion algorithm
to each of them whenever a hull vertex is deleted. However, if
the current point set has $n$ elements, we can have up to
$n-3$ interior points (and, as we know, $O(n)$ points on the average),
so the deletion of a single vertex may take $O(n\log n)$ time. This
shows that the dynamic convex hull problem requires
special data structures and algorithms.\par

In what follows we will give an overview of a dynamic
convex hull algorithm due to M. Overmars
and J. van Leeuwen (1979). This algorithm breaks the
hull into its upper and lower halves, as described in the previous section,
storing and processing each half independently.\par

From here on, we will be concerned only with the lower half $H↑L$
of $H=\hull(P)$. Both the current set of points $P$ and its lower hull $H↑L$ are
represented by a single tree-like data structure,
whose leaves are the points of $P$,
and whose internal nodes correspond to line segments joining them.\par

More specifically, if $P$ has currently two or more elements,
it is partitioned by a vertical line with $x=\bar x$ into
a left half $P↓1$ and a right half $P↓2$, of roughly equal sizes.
The left and right subtrees of the data structure represent the sets
$P↓1$ and $P↓2$, and their respective lower hulls $H↓1↑L$ and $H↓2↑L$.
As we said before, the hulls of $P↓1$ and $P↓2$ have two common tangents,
and exactly one of them touches the lower halves of both. Therefore,
the lower hull $H↑L$ is given (in order
of increasing $x$-coordinates) by a ``prefix'' of $H↓1↑L$,
followed by the segment $ab$ of this common tangent, followed by a
``suffix'' of $H↓2↑L$:\fig5
The root of the tree contains only the connecting edge $(a, b)$,
and the separating abscissa $\bar x$.
The subtrees corresponding to
$P↓1$ and $P↓2$ are recursively defined in the same manner, until
we get to single points (the leaves of the data structure).\par
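
To fix ideas, the following Python sketch shows one possible concrete form of
this structure (our own illustration; the class and field names are invented,
and the representation is deliberately simplified):

class Leaf:
    def __init__(self, point):
        self.point = point           # a point of P, as an (x, y) pair

class Node:
    def __init__(self, left, right, xbar, bridge):
        self.left = left             # subtree for P1 (points with x < xbar)
        self.right = right           # subtree for P2 (points with x >= xbar)
        self.xbar = xbar             # separating abscissa
        self.bridge = bridge         # connecting edge (a, b), a in P1, b in P2

def lower_hull(node):
    # Recover the lower hull represented by `node`, in order of increasing x:
    # a prefix of the left hull, the bridge, and a suffix of the right hull.
    if isinstance(node, Leaf):
        return [node.point]
    a, b = node.bridge
    left, right = lower_hull(node.left), lower_hull(node.right)
    return left[:left.index(a) + 1] + right[right.index(b):]

Of course, recovering a hull explicitly in this way takes time proportional
to its size; the efficiency of the updates comes from never having to do so,
as the following paragraphs explain.\par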

Assuming that the subtrees corresponding to $P↓1$ and $P↓2$
have been constructed, all that remains is to determine
the connecting or ``root'' edge $(a, b)$.
We could use algorithm 4.5 for this purpose, were it not for
its relative inefficiency. If $P↓1$ and $P↓2$ have respectively
$n↓1$ and $n↓2$ points, that algorithm
would take $\Omega(n↓1+n↓2)=\Omega(n)$ time in the worst
case. Since this computation has to be repeated for each edge
in the data structure, the total time would be prohibitive.\par

Fortunately, we can find $(a, b)$ in just $O(\log n)↑2$ time, by
a double binary search on the two hulls.
Following the definitions in section 4.5,
we can check that if $b$ is a fixed point
on $H↓2↑L$, and $a$ scans $H↓1↑L$ in order of increasing
$x$, the segment $ab$ will change from internal to supporting and
then to concave,
with respect to $H↓1↑L$. Therefore, by a simple binary
search we can find in time $O(\log n↓1)$ the point $a=f(b)$
such that $ab$ is tangent to the left hull at $a$.
If we repeat this process for each vertex $b$ of $H↓2↑L$, in order of
increasing $x$, we will notice a similar ``monotone'' behavior:
the edge $(b,f(b))$ will change from
concave to supporting and then to internal at $b$, with
respect to $H↓2↑L$.\fig4
Therefore, by doing a binary search on $b$ we can find the common tangent
to both hulls in $O(\log n↓2)$ probes. The total time will be
$O(\log n↓1)\cdot O(\log n↓2)=O(\log n)↑2$.\par
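
In Python, this double binary search could be sketched as follows (our own
code, assuming points in general position and taking both hulls as explicit
vertex lists in order of increasing $x$; as explained next, the actual
algorithm performs the same search inside the tree structure itself):

def cross(o, p, q):
    # z-component of (p - o) x (q - o); negative iff q lies below (to the
    # right of) the left-to-right directed line from o through p.
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

def tangent_from_point(L, b):
    # Inner search: the index a such that the segment from L[a] to b supports
    # L from below.  While L[mid+1] lies strictly below the line from L[mid]
    # to b, the segment is still "internal" and the tangent vertex is further
    # to the right.
    lo, hi = 0, len(L) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if cross(L[mid], b, L[mid + 1]) < 0:
            lo = mid + 1
        else:
            hi = mid
    return lo

def lower_common_tangent(L, R):
    # Outer search on the vertex b of R.  While the vertex after R[mid] still
    # lies strictly below the line through L[f(R[mid])] and R[mid], the
    # common tangent touches R further to the right.
    lo, hi = 0, len(R) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        a = tangent_from_point(L, R[mid])
        if cross(L[a], R[mid], R[mid + 1]) < 0:
            lo = mid + 1
        else:
            hi = mid
    return tangent_from_point(L, R[lo]), lo   # indices of the tangent's endpoints

L = [(0, 0), (1, -1), (2, 0)]        # lower hull of the left subproblem
R = [(4, 0), (5, -2), (6, 0)]        # lower hull of the right subproblem
print(lower_common_tangent(L, R))    # prints (1, 1): the tangent joins (1,-1) to (5,-2)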

In order to perform each binary search in $O(\log n)$ probes,
we would need $H↓1↑L$ and $H↓2↑L$ to be stored as a pair of linear
arrays or binary search trees. Fortunately, the tree structure
we are using to represent the set $P$ can also be used
to guide this search, since the vertices on the
hull are a subsequence of the list of all points sorted by $x$-coordinate.
This situation is somewhat analogous to that of the real-time algorithm
discussed in section 4.5. The search is made a little
more complex by the fact that the tree contains all points of $P$,
not just those on the hull; but this turns out not to affect
the final bound of $O(\log n)↑2$.\par

Note that the space required to represent this tree structure is
proportional to the total number of vertices in the current set:
because of the way the edges are defined, they constitute a connected,
acyclic graph on $n$ points (i.e., a tree), which, as we know, has
exactly $n-1$ edges.\par

To insert a new point $(x↑*, y↑*)$, we proceed at first as if we were
inserting a new element in a standard binary search tree:
we go down either the left or the right subtree, depending on whether
$x↑*<\bar x$ or not, until we get to an empty subtree.
The latter is then replaced by the new leaf $(x↑*, y↑*)$.
After this ``downwards'' phase, we return to the
root by the same path, and at each
node along it we recompute the corresponding edge,
i.e., the common tangent to the hulls represented by its
two subtrees.\par
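
Combining the previous sketches, the insertion procedure might be written as
follows (again a simplified illustration of our own: it recomputes each
connecting edge from explicitly recovered hulls and does no rebalancing, so
it does not attain the time bounds derived below):

def insert(node, point):
    # Returns the root of the subtree obtained by inserting `point` (assumed
    # to have a distinct abscissa) into the subtree rooted at `node`.
    if node is None:
        return Leaf(point)
    if isinstance(node, Leaf):
        lo, hi = sorted([node.point, point])
        left, right, xbar = Leaf(lo), Leaf(hi), hi[0]    # split the leaf in two
    elif point[0] < node.xbar:
        left, right, xbar = insert(node.left, point), node.right, node.xbar
    else:
        left, right, xbar = node.left, insert(node.right, point), node.xbar
    # "Upwards" phase: recompute the connecting edge of the rebuilt node.
    L, R = lower_hull(left), lower_hull(right)
    i, j = lower_common_tangent(L, R)
    return Node(left, right, xbar, (L[i], R[j]))

root = None
for p in [(0, 0), (1, -1), (2, 0), (4, 0), (5, -2), (6, 0)]:
    root = insert(root, p)
print(lower_hull(root))              # prints [(0, 0), (1, -1), (5, -2), (6, 0)]

In the real structure the hulls of the subtrees are never listed explicitly;
each connecting edge is found by the $O(\log n)↑2$ search described above,
guided by the tree itself.\par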

The deletion of a point proceeds in a totally similar manner:
the corresponding leaf is deleted, and the common tangents
corresponding to each node on the way back to the root
are recalculated.\par

Assuming the structure can be kept balanced in
the face of arbitrary insert and delete operations, the path to
any leaf will contain $O(\log n)$ nodes. Since the processing
of each node takes $O(\log n)↑2$ time, each insertion and deletion
will require $O(\log n)↑3$ time (not including the time spent rebalancing
the updated tree).\par

The structure can be kept balanced in much the same way as
AVL binary search trees are. During the ``upwards'' pass
of an insertion or deletion, before updating each node we check whether
the heights of its subtrees differ by at most one. If that is not the case,
the situation is corrected by a ``rotation'' of the node and a few of its
closest descendants, which involves shuffling the subtrees of
those nodes and recomputing their connecting edges. Since we have
$O(\log n)$ rotations\foot1{This is the worst case for rebalancing
after {\it deletion} of a point. It can be shown that
at most one rotation is needed to rebalance the tree
after an insertion.}, each rotation affects $O(1)$ nodes, and the
recomputation of each edge takes $O(\log n)↑2$ time, the
total cost of an insertion or deletion (including rebalancing)
is still $O(\log n)↑3$.\par

These bounds are not optimal, since it is known that the common tangent
can be computed in $O(\log n)$ time, rather than $O(\log n)↑2$.
This requires even more complex data structures than those
described above, and therefore such algorithms will not be discussed here.

\vfill\eject\end