I had to give my Dealer this week, so I couldn't read the "business" part of your paper yet. However, I have read the "formalities" sections (1 to 3) and have a lot to say about them. Please forgive me; I fear I have become more of a mathematician than a computergraphician, and therefore I have got rather strong opinions about such vital matters as the location of superscripts and the order of terms in a sum.
Moreover, I am myself working on the "formalities" section of the computational geometry book, and in particular on homogeneous coordinates. I can give you a copy of my scribblings if you wish, but there isn't much in them: I only got so far as to define points, lines, planes, etc.; no transformations yet, not even vectors. In fact, by reading your paper and trying to write my comments I came to understand those concepts a lot better (the differences between vectors and points, for example). Many of the comments below I wrote mostly for my own benefit, to get things straightened out in my mind (possibly for use in the book).
Here are my suggestions/comments/ramblings, section by section. The notation 3.2/4/1-3 means section 3.2, paragraphs 4, line 1 to 3.
Sections 1 and 2
I believe these two sections would be better if combined into a single "Introduction". Section 2 could go after 1./2.
Paragraphs 1./3 and 1./4 should probably be transposed.
1./3/1: ...schemes, --> ...schemes.
1./4/6-7: ... their surfaces relatively ... better: ...their surfaces relative to each other ...
2./2-3: what is the difference between 1 and 2? Isn't item 2 relative to B, too?
Section 3
*** WARNING: READING THE NONSENSE BELOW MAY BE HAZARDOUS TO
YOUR MENTAL HEALTH ***
Section 3 apparently suffers from a kind of "chicken and egg" dilemma:
Who comes first, points or coordinates? In other words, which of the views of "space" are you taking as the starting point:
The "Cartesian" view:
Space is simply R3.
There is a natural WORLD frame: position (0,0,0), vectors (1,0,0), (0,1,0), (0,0,1).
A point, a vector, and the coordinates of a point wrt WORLD are the same thing.
There is a notion of "true" orthogonality (without reference to a frame): xX+yY+zZ=0.
As a consequence you can compare non-parallel distances, and talk of a
"rotation", also without reference to a specific frame.
There is a natural unit of distance: dist((000),(100))=1.
The "Classical Physics" or "Euclidean" view:
Space is physical space, or anything "isomorphic" to it.
Points are locations in this space.
Vectors are the same as translations: you can add (compose) two vectors,
or add (apply) a vector to a point, or subtract two points and get a vector,
but not add two points, or talk of the "tip of a vector".
There is no distinguished origin or direction: we may think of the WORLD frame as
being chosen arbitrarily by the user.
SolidViews knows about and manipulates only the coordinates of points and vectors and
frames wrt some frame (ultimately, WORLD), but the points and frames
themselves are physical objects external to SolidViews.
There is a notion of "true" orthogonality, so e.g. you may ask the user to choose
an orthogonal basis to be the WORLD. This also implies one can compare lengths
even of non-parallel segments, so you can ask the user to pick WORLD with
axes of equal length. You can also talk of a "rotation" without reference to a specific frame.
There is no "absolute" unit of length, however, so you cannot ask the user to take
WORLD with "unit length" vectors. Length and distance are always relative to some
frame (see below).
The "Linear Algebra" ("Einstenian?") view:
Same as "classical physiscs", except that there is no notion of "true" orthogonality:
You cannot even ask the user to choose WORLD orthogonal.
There are probably many other viewpoints you can think of, but my impression is that
most of the time geometers/mathematicians/physicists/computergraphicians are talking
about one of these. The Cartesian view is more economical and simpler in many ways,
but the Euclidean view seems to be the most natural for non-mathematicians
(You seem to be assuming this view, except for a couple of places.)

Of course which one you pick makes no real difference from
the practical or mathematical point of view, but
it determines the language you are allowed to use
in the "basic notions" section. For example, in the Cartesian viewpoint you define WORLD as being the frame <(000),(001),...>; in the Euclidean and Einstenian views you
say WORLD is some arbitrary frame chosen by the user, and (in the Euclidean view only) you may require it to be orthogonal and with axes of equal length. Also (to be really nit-picking) in these two viewpoints the "coordinates of a vector" must be defined independently of the "coordinates of a point", since vectors and points are very different beasts.
PS: If this seems nonsense to you, don't worry: it seems nonsense to me, too, but I
hoped that by writing it up I could see things more clearly (I still don't).
*** END OF conscious NONSENSE ***
My preferred way of looking at frames and coordinates, according to the "Euclidean" viewpoint and your current notation (slightly extended), would be something like this:
Begin ----------------------------------------------------------------------------------
A coordinate frame G consists of a point Go of three-space (the reference point or origin of the frame), and three linearly independent vectors Gx, Gy, Gz. It can be visualized as three lines (the x,y, and z coordinate axes of G) passing through Go and parallel to those vectors.
Let p be a point of three-space. The x, y, and z coordinates of p in the frame G are the real numbers Gpx, Gpy, Gpz such that p = Gpx Gx + Gpy Gy + Gpz Gz + Go (the bold + is the "physical" operation of composing two vectors (=translations), or applying a vector (=translation) to a point of space). We will write these coordinates in homogeneous form, that is, in the form of a 1x4 array [Gpx Gpy Gpz 1], that will be denoted by Gp.
The coordinates of a vector v in the frame G are the three real numbers Gvx, Gvy, Gvz such that v = Gvx Gx + Gvy Gy + Gvz Gz (note: without Go). In homogeneous form, the coordinates of a vector are written as the 1x4 array Gv = [Gvx Gvy Gvz 0] (note: last element is 0, not 1).
To obtain the coordinates of the point p and of the vector v in a different frame H, we post-multiply Gp and Gv by the 4x4 array
        | H(Gx)x  H(Gx)y  H(Gx)z  0 |
   HG = | H(Gy)x  H(Gy)y  H(Gy)z  0 |
        | H(Gz)x  H(Gz)y  H(Gz)z  0 |
        | H(Go)x  H(Go)y  H(Go)z  1 |
where H(Gx)y (as defined in the previous paragraphs) is the y-coordinate of the vector Gx in the frame H, and so forth. That is, the rows of HG are the homogeneous coordinates of Gx, Gy, Gz, and Go in the frame H. The array HG is called the matrix of G in the frame H.
Proof that the array multiplication really does what we claim: by carrying it out for a point p, we get the 1x4 array
[Gpx H(Gx)x + Gpy H(Gy)x + Gpz H(Gz)x + H(Go)x,
 Gpx H(Gx)y + Gpy H(Gy)y + Gpz H(Gz)y + H(Go)y,
 Gpx H(Gx)z + Gpy H(Gy)z + Gpz H(Gz)z + H(Go)z,  1]
We must check that
(Gpx H(Gx)x + Gpy H(Gy)x + Gpz H(Gz)x + H(Go)x) Hx
+ (Gpx H(Gx)y + Gpy H(Gy)y + Gpz H(Gz)y + H(Go)y) Hy
+ (Gpx H(Gx)z + Gpy H(Gy)z + Gpz H(Gz)z + H(Go)z) Hz + Ho = p
This is just a matter of expanding the expression above and using the identities below (from the definitions):
H(Gx)x Hx + H(Gx)y Hy + H(Gx)z Hz = Gx,
H(Gy)x Hx + H(Gy)y Hy + H(Gy)z Hz = Gy,
H(Gz)x Hx + H(Gz)y Hy + H(Gz)z Hz = Gz,
H(Go)x Hx + H(Go)y Hy + H(Go)z Hz + Ho = Go,
and finally
Gpx Gx + Gpy Gy + Gpz Gz + Go = p.
We have proved our claim, namely Gp HG = Hp.

Similarly, to obtain the matrix HK of an arbitrary frame K in the frame H, we post-multiply GK by the matrix HG:
GK HG = HK
(We can easily prove this by considering the effect of post-multiplying separately each row of GK by HG.)
Most of the matrix products we will consider are of the form Ap BA CB DC... or BA CB DC..., where the superscript of each term is the frame represented by the following term. For succinctness, we will omit the superscripts in such products, and write just pABC and ABC, respectively.
The length of a vector v relative to a frame G is by definition the real number
sqrt[(Gvx)^2 + (Gvy)^2 + (Gvz)^2]. The distance between two points p, q rel. to G is the length (rel. to G) of the vector p - q. Two vectors u, v are said to be orthogonal relative to G if Gux Gvx + Guy Gvy + Guz Gvz = 0. A transformation of space into itself that keeps the origin of G fixed and preserves all distances between points (rel. to G) is a rotation relative to G.
An affine transformation is a mapping of three-space into itself that preserves parallelism (and therefore collinearity, ratios of areas, ratios of lengths along the same direction, etc.) We denote by pF the image of a point p under an affine map F. The image of a vector v under F is defined as being the vector vF such that (p+v)F = pF + vF for all points p. The image of a frame H under a mapping F results by applying F to the origin and vectors of H. The coordinates of pF or vF in a given frame H can be computed by post-multiplying Hp or Hv, respectively, by the 4x4 array
        | H(HxF)x  H(HxF)y  H(HxF)z  0 |
        | H(HyF)x  H(HyF)y  H(HyF)z  0 |
        | H(HzF)x  H(HzF)y  H(HzF)z  0 |
        | H(HoF)x  H(HoF)y  H(HoF)z  1 |
where H(HxF)y (as usual) is the y-coordinate of the vector HxF in the frame H, and so forth. That is, the rows of this matrix are the homogeneous coordinates of HxF, HyF, HzF, and HoF in the frame H itself. This matrix is precisely the matrix that describes the frame HF in the frame H, that is, H(HF). Conversely, every pair of frames G, H defines an affine map that takes H into G, and whose matrix is simply HG. MORE STUFF TO COME ...
End ----------------------------------------------------------------------------------
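PS. To convince myself that the formulas above hang together, I scribbled the little sketch below (Python with numpy; the names are mine and purely illustrative). It builds the matrix HG from the coordinates of G's vectors and origin in H, and checks both Gp HG = Hp and the point/vector distinction:

    import numpy as np

    def frame_matrix(H_Gx, H_Gy, H_Gz, H_Go):
        # Rows: homogeneous coordinates, in frame H, of the vectors
        # Gx, Gy, Gz (trailing 0) and of the origin Go (trailing 1).
        return np.array([list(H_Gx) + [0.0],
                         list(H_Gy) + [0.0],
                         list(H_Gz) + [0.0],
                         list(H_Go) + [1.0]])

    # Frame G seen from frame H: origin at (1,2,3), axes turned 90 deg about z.
    HG = frame_matrix((0, 1, 0), (-1, 0, 0), (0, 0, 1), (1, 2, 3))

    Gp = np.array([1.0, 0.0, 0.0, 1.0])   # a point p with G-coordinates (1,0,0)
    Gv = np.array([1.0, 0.0, 0.0, 0.0])   # a vector v with the same coordinates
    print(Gp @ HG)   # [1. 3. 3. 1.] -- the point is carried along by Go
    print(Gv @ HG)   # [0. 1. 0. 0.] -- the vector only rotates, as it should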
Other observations/suggestions not included in the above:
3.2/1: move this to the very end of "Terms and Notations" section (until then, introduce each new variable in full: "Let q be another point, H be another frame...").
3.2/2: (Since we will...) I have trouble understanding this; it seems a rather ambiguous notation.
3.2/3: I guess the pre-superscript convention is a carry-over from Paul's notation. Since you do array products in reverse order, it would be more consistent to use post-superscripts for the reference frame. The formulas
Hp = Gp HG and HK = GK HG would become
pH = pG GH and KH = KG GH
which are easier to memorize and to vocalize. Also, the vectors Gx etc. I defined above would be better denoted by xG, so as to give p = pGx xG + pGy yG + pGz zG + oG. Don't you agree that the aesthetic pleasure you will get contemplating those equations is worth the trouble of transposing those few thousand superscripts in your paper? True, you will also have to fix Cedar.Style to make sub+superscripts like pHx work, but that will not take longer than a month or two...

3.2/4: I would try to avoid using "vector" as a synonym of "1xN array", and reserve "vector" for the physical/geometrical concept (=translation). "Row vector" and "column vector" are slightly less ambiguous, but still disturbing. (Perhaps use just "column" and "row"? yich!) Also, it seems to be superfluous to point out the notational differences between your paper and Paul's, except where it really matters (like the switch from column to rows). Also "is assumed be a displacement from the coordinate frame to its left" is confusing; better say "denotes the coordinates of p in the frame to its left".
3.3/1: Are all frames orthogonal (with respect to WORLD)? Is WORLD orthogonal? (if in the Euclidean view: are its vectors of same length?)
3.3/1, 3.3/last, and others: It would be better to collect all references to the way SolidViews uses frames into a new (sub?)section, separate from the "pure math" sections. The WORLD frame should be defined there, too. Other points that should be moved to such section:
--- Each node in the hierarchy has an associated frame
--- The frame G of a node is defined by its matrix HG in the frame H of the parent node.
--- Therefore, if the frame H is replaced by HF, where F is any affine map, G automatically becomes GF
--- Leaf objects (spheres, cubes, etc) are defined in terms of coordinates relative to the object's frame, so if the frame is scaled, rotated, distorted the object suffers the same changes.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
WELL, this is enough rambling for now. Please see if you can find something useful in all this trash (Good luck...)

See you tomorrow (maybe)

jorge
\section{\Ch8.\Sec2. Additional geometric tools}
\digress{{\bf Note:} This section is here mainly for historical reasons. Logically, it belongs to much earlier in the book.}
\note{There is much duplication in what follows, particularly between the 2D and 3D sections. They probably should be developed in parallel. Some of the plane properties could go to exercises.}
\subsection{\Ch8.\Sbsec{\Sec2.1}. The Cartesian plane}
The algorithmic character of computational geometry forces us to be explicit about ``sidedness'' tests and constraints that are normally taken for granted in geometrical discussions. For example, in many algorithms it is necessary to test on which side of a line a given point lies, whether a point lies ``above'' another, or whether an angle is clockwise or counterclockwise.
In classical plane geometry there is no direct way to distinguish the two half-planes determined by a line, or to distinguish a positive (counterclockwise) turn from a negative one. The classical Euclidean plane has no ``viewing side''; that is, all geometrical properties of an object are unaffected by reflection around a line, which is equivalent to ``flipping the plane over''\foot\dag{Moreover, the Euclidean plane has no distinguished point, direction, or unit of length: the properties of an object are unaffected by a rigid rotation or translation, or by a uniform change of scale.}. Sidedness relations have to be introduced indirectly. For example, in Euclidean geometry it is meaningful to say whether two points are on the same side of a line, or on opposite sides of it; therefore, if we fix a ``reference'' point $r$, then given any line $\lscr$ not passing through $r$ we can distinguish the side of $\lscr$ that contains $r$ from the one that does not.
In computational geometry, however, we work on the {\it Cartesian plane} $\reals^2$, which has a richer structure than the Euclidean one. Specifically, in the Cartesian plane there is a distinguished point (the origin), a distinguished unit of length (the distance between $(0,0)$ and $(0,1)$), a distinguished ``East'' direction (that of the positive $x$-axis) and a distinguished ``North'' direction orthogonal to it (that of the positive $y$-axis). We can therefore speak meaningfully of a horizontal segment, of the half-plane above a given line, of the extremal point of a set in the $+x$ direction, and so on.
\subsection{\Ch8.\Sbsec{\Sec2.2}. Orientation of triangles}
In particular, the East and North directions allow us to distinguish the ``clockwise'' and ``counterclockwise'' senses of rotation. Informally speaking, we are moving {\it clockwise} around a point $p$ if we are going in the general South direction with the line of motion passing East of $p$, or if we are going Eastwards with the line of motion passing North of $p$, and so forth. A triangle $abc$ is {\it clockwise oriented} (written $\CW(a,b,c)$) if we move that way around $c$ when going from $a$ to $b$. We write $\CCW(a,b,c)$ if the triangle $abc$ has the opposite ({\it counterclockwise}) orientation.\figspace3cm
These intuitive concepts are neatly captured by a simple algebraic formula:
\definition{\Def{36}}{Three points $a$, $b$, and $c$ of $\reals^2$ are said to be {\it positively oriented}, written $\posor(a,b,c)$, if the determinant
$$\Delta(a,b,c)=\detmat{\matrix3{
xa&ya&1\cr
xb&yb&1\cr
xc&yc&1\cr}}\eqno(\Eq{36})$$
is positive. If $\Delta(a,b,c)$ is negative, we say $a,b,$ and $c$ are {\it negatively oriented}, and we write $\negor(a,b,c)$.}
It is a rather well-known fact that $\CCW(a,b,c)\eqv\posor(a,b,c)$, and $\CW(a,b,c)\eqv\negor(a,b,c)$ for any triangle $abc$. However, considering the importance of these predicates for geometrical algorithms, it is worth spending a few lines to show this equivalence.
Let us assume first that the side $ab$ is not vertical, that is, $xa\neq xb$. The line of motion from $a$ to $b$ will intersect the vertical line passing through $c$ at the ordinate
$$y\ast=ya + {{xa-xc}\over{xa-xb}}(yb-ya).$$\figspace3cm
For the triangle $abc$ to be counterclockwise oriented we must have $y\ast>yc$ when $xa>xb$, and $y\ast<yc$ when $xa<xb$. This is equivalent to the condition
$$\left(ya-yc +{{xa-xc}\over{xa-xb}}(yb-ya)\right)(xa-xb)>0,$$
that is,
$$(ya-yc)(xa-xb) + (xa-xc)(yb-ya) >0.\eqno(\Eq{36A})$$
If $ab$ is vertical, we must have $ya\neq yb$. The line $ab$ cuts the horizontal line through $c$ at the abscissa $xa=xb$. For $abc$ to be counterclockwise oriented, we must have $xa>xc$ if $yb>ya$, and $xa<xc$ if $yb<ya$; that is, $ (xa-xc)(yb-ya) >0$. Since $(xa-xb)=0$, this condition is equivalent to inequality (\Eq{36A}). But that comes out to be
$$xayb-yaxb+xbyc-ybxc+xcya-ycxa>0,$$
which is obtained from (\Eq{36}) by expanding the determinant. We conclude that $\CCW(a,b,c)\eqv\posor(a,b,c)$. An entirely similar argument shows that $\CW\eqv\negor$.
The predicates $\posor$ and $\negor$ are defined for any three points $a,b,c$, even if they are collinear or coincident. From the theory of determinants we have
$$\eqalignno{\Delta(a,b,c)&=\detmat{\matrix3{
xa-xc&ya-yc&0\cr\vsk3
xb-xc&yb-yc&0\cr\vsk3
xc&yc&1\cr}}\cr\vsk6
&=\detmat{\matrix2{
xa-xc&ya-yc\cr\vsk3
xb-xc&yb-yc\cr}}&(\Eq{37A})\cr\vsk6
&=(xa-xc)(yb-yc)-(ya-yc)(xb-xc).&(\Eq{38A})\cr}$$
Now let $u=a-c=(xa-xc,ya-yc)$ and $v=b-c=(xb-xc, yb-yc)$. From (\Eq{37A}), we see that
$$\Delta(a,b,c)=\detmat{\matrix2{
xu&yu\cr
xv&yv\cr}}.\eqno(\Eq{39A})$$
The points $a,b,c$ are collinear if and only if $u$ is a multiple of $v$, or vice-versa. This is precisely the condition for the determinant above to be zero. So, {\it the three points are collinear if and only if $\posor(a,b,c)$ and $\negor(a,b,c)$ are both false}\foot\dag{Note that this includes the case where two or more of the points are coincident.}. In those cases, we will define $\CCW(a,b,c)$ and $\CW(a,b,c)$ to be both false. With this convention, $\CCW$ and $\CW$ become identical to $\posor$ and $\negor$.
Equation (\Eq{38A}) gives an efficient formula for computing $\CCW(a,b,c)$, that requires only five subtractions and two multiplications.
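As an illustration, (\Eq{38A}) could be transcribed into code as follows. This is only a sketch (in Python, with names of our own choosing), and it assumes exact arithmetic; see the note on numerical stability below.

    def delta(a, b, c):
        # The reduced determinant (Eq. 38A):
        # five subtractions and two multiplications.
        (xa, ya), (xb, yb), (xc, yc) = a, b, c
        return (xa - xc)*(yb - yc) - (ya - yc)*(xb - xc)

    def ccw(a, b, c): return delta(a, b, c) > 0
    def cw(a, b, c):  return delta(a, b, c) < 0

    # Collinear or coincident points make both predicates False:
    print(ccw((0, 0), (1, 0), (0, 1)))                               # True
    print(ccw((0, 0), (1, 1), (2, 2)), cw((0, 0), (1, 1), (2, 2)))   # False False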
\note{discuss numerical stability}
\note{comment on other meanings of $\CCW$: positive area, positive angle, ccw as seen from an interior point, etc. Say that the affine maps that preserve $\CCW$ are those with positive determinant. In fact, (\Eq{39A}) is the determinant of the affine map that takes $(0,1),(1,0),(0,0)$ to $a,b,c$, respectively.}

Note that the order of the points $a,b,$ and $c$ is important. Intuitively, we expect the orientation of the triangle $bac$ to be opposite to that of $abc$. Indeed, by the elementary properties of determinants we have
$$\Delta(b,a,c)=\detmat{\matrix3{
xb&yb&1\cr
xa&ya&1\cr
xc&yc&1\cr}}=-\detmat{\matrix3{
xa&ya&1\cr
xb&yb&1\cr
xc&yc&1\cr}}=-\Delta(a,b,c),$$
so we conclude $\CCW(b,a,c)\eqv\CW(a,b,c)$. In general, we have the following:
\theorem{\Th{49A}}{Let $a\pr$, $b\pr$, and $c\pr$ be any permutation of the three points $a, b$, and $c$. Then $\posor(a\pr, b\pr, c\pr)$ is equivalent to $\posor(a,b,c)$ or $\negor(a,b,c)$, depending on whether the permutation is even or odd, respectively. That is, $$\twoline{\posor(a,b,c)\eqv\posor(b,c,a)\eqv\posor(c,a,b)}{2pt} {\eqv\negor(c,b,a)\eqv\negor(b,a,c)\eqv\negor(a,c,b)}$$}
\subsection{\Ch8.\Sbsec{\Sec2.3}. Oriented lines}
In the Cartesian plane $\reals^2$, we define an {\it oriented line} as a classical (non-oriented) line $\lscr$ endowed with a {\it normal direction} $\hat \lscr$, a unit vector perpendicular to $\lscr$. This enables us to distinguish the {\it left side} of $\lscr$, the half-plane of $\lscr$ into which $\hat\lscr$ points, from its {\it right side}.
\note{Defining oriented lines in the 2SP requires care, since some lines have no normals, and the normal at the bottom is different from that on top. Consider defining them as a line plus a distinction of which is the positive side. This has the advantage of being generalizable to circles as well. The definition of (tangent) direction vector then becomes a problem.}
An oriented line $\lscr$ is uniquely determined by its normal vector and any point on $\lscr$. The line can also be specified by a {\it characteristic function}, a function $f$ from $\reals^2$ to $\reals$ that is zero on $\lscr$, positive on its left side, and negative on its right side. An important example is the {\it signed distance} from $\lscr$ to an arbitrary point $p=(x,y)$, denoted by $\sdist(\lscr, p)$, which is the usual Euclidean distance between the two (measured perpendicularly to $\lscr$) taken with plus or minus sign depending on whether $p$ lies on the left or right side of $\lscr$.
\note{Consider defining characteristic functions so that they are positive on the right (and normals to be right-pointing), and the 1's column in $\Delta$ to be the first one. Note that this last suggestion inverts the meaning of $\posor$ for 3-space but not for 2-space, and so would make $\incircle$ agree with $\negor$ instead of $\posor$.}
Let $\lscr$ be the oriented line with normal vector $\hat\lscr=(X, Y)$ and passing through the point $q=(xq,yq)$. For an arbitrary point $p=(x,y)$, the signed distance $\sdist(\lscr, p)$ is the component $\hat\lscr\cdot(p-q)$ of $p-q$ in the normal direction $\hat\lscr$.\figspace3cm
This gives
$$\eqalign{\sdist(\lscr, p)&=X(x-xq) + Y(y-yq)\cr
&=Xx + Yy - (Xxq+Yyq)\cr}\eqno(\Eq{91A})$$
This shows that any oriented line has a characteristic function of the form $f(x,y)=X x+Y y+W$, that is, a first-degree polynomial in $x$ and $y$.
Conversely, any first-degree polynomial $f(x,y)=X x+Y y+W$ such that $X$ and $Y$ are not both zero is a characteristic function for some oriented line $\lscr$. This line has unit normal vector
$$\hat\lscr={1\over{\sqrt{X^2+Y^2}}}\,(X, Y),\eqno(\Eq{92A})$$
and passes through the point
$$\lscr0=-{W\over{X^2+Y^2}}\,(X, Y).\eqno(\Eq{93A})$$
This can be easily verified by substituting these values back into (\Eq{91A}): we get
$$\sdist(\lscr, p)={{Xx+Yy+W}\over{\sqrt{X^2+Y^2}}}.\eqno(\Eq{94A})$$
Since this has the same sign as $f(x,y)=Xx+Yy+W$, we conclude that the latter too is a characteristic function for $\lscr$. The line $\lscr$ is unique, and will be denoted by $\plc{X,Y\hom W}$.
\note{The justification for this notation is by analogy with homogeneous coordinates of a point, $[x,y\hom w]$.}
When $X^2+Y^2=1$, the coefficients are said to be {\it normalized}, and the formulas above simplify to
$$\hat\lscr=(X, Y),$$
$$\lscr0=(-WX, -WY),$$
and
$$\sdist(\lscr, p)=Xx+Yy+W.$$
The point $\lscr0$ given by the formulas above is the point on $\lscr=\plc{X,Y\hom W}$ that is closest to the origin. The signed distance from $\lscr$ to the origin is
$$\sdist(\lscr, O)={W\over{\sqrt{X^2+Y^2}}}\eqno(\Eq{95A})$$
or, if the coefficients are normalized,
$$\sdist(\lscr, O)=W.$$

Clearly, $\plc{X, Y\hom W}$ and $\plc{X\pr, Y\pr\hom W\pr}$ denote the same oriented line if and only if there is a {\it positive} real number $\alpha$ such that $X\pr=\alpha X$, $Y\pr=\alpha Y$, and $W\pr=\alpha W$. It follows that every oriented line has a unique set of normalized coefficients. The oriented line $\plc{-X, -Y\hom -W}$ passes through the same points as $\plc{X, Y\hom W}$, but has the opposite orientation: the left side of one is the right side of the other, and vice-versa.
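Normalization and the signed distance formula (\Eq{94A}) might be transcribed as follows (a Python sketch with names of our own choosing; floating-point rounding is ignored here):

    from math import hypot

    def normalize(line):
        # Scale <X, Y : W> by a positive factor so that X^2 + Y^2 = 1;
        # this denotes the same oriented line (X and Y not both zero).
        X, Y, W = line
        n = hypot(X, Y)
        return (X/n, Y/n, W/n)

    def sdist(line, p):
        # Signed distance (Eq. 94A): positive on the left side of the
        # line, negative on the right, zero on the line itself.
        X, Y, W = line
        x, y = p
        return (X*x + Y*y + W) / hypot(X, Y)

    print(normalize((0.0, 2.0, 4.0)))            # (0.0, 1.0, 2.0)
    # The x-axis oriented with normal (0,1): its left side is y > 0.
    print(sdist((0.0, 1.0, 0.0), (3.0, 2.0)))    # 2.0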
\footprob{\Prob{\Sbsec{\Sec2.3}.1}}{When and where does the line $\lscr=\plc{X, Y\hom W}$ intersect the two coordinate axes?}
\footprob{\Prob{\Sbsec{\Sec2.3}.2}}{What is the relationship between $\plc{X, Y\hom W}$ and $\plc{X, Y\hom -W}$?}
\note{Develop the analogous theory for points in homogenous coordinates}
\subsection{\Ch8.\Sbsec{\Sec2.4}. The oriented line determined by two points}
\definition{\Def{37A}}{If $a,b$ are two distinct points of $\reals^2$, we define the {\it oriented line from $a$ to $b$}, or simply the {\it line $ab$}, as being the (classical) line through those two points, oriented so that a point $c$ is on its left (resp. right) side if and only if $a,b,$ and $c$ are positively (resp. negatively) oriented.}
To show that this line is always well defined, let us analyze how the predicate $\posor(a,b,c)$ depends on the point $c$, when $a$ and $b$ are two (fixed) distinct points. If we expand the determinant (\Eq{36}) of definition \Def{36} by its last row, we get
$$\Delta(a,b,c)=Xxc+Yyc+W,\eqno(\Eq{51A})$$
where
$$X=+\detmat{\matrix2{
ya&1\cr
yb&1\cr}}=(ya-yb),$$
$$Y=-\detmat{\matrix2{
xa&1\cr
xb&1\cr}}=-(xa-xb),$$
and
$$W=+\detmat{\matrix2{
xa&ya\cr
xb&yb\cr}}=xayb-xbya.$$
Note that $X$, $Y$, and $W$ do not depend on the point $c$. Expression (\Eq{51A}) is zero when $c=a$ or $c=b$, since the determinant then has two equal rows. On the other hand, since $a$ and $b$ are distinct, there is some point $c$ that is not collinear with them, and for this point $\Delta(a,b,c)$ is nonzero. It follows that not both of $X$ and $Y$ in (\Eq{51A}) are zero, so $\plc{X, Y\hom W}$ is a valid oriented line with characteristic function $f(x,y)=Xx+Yy+W=\Delta(a,b,(x,y))$. By definition \Def{36}, this line has the required properties. The only other oriented line through $a$ and $b$ is the opposite of this one; its left and right sides are reversed, and therefore it does not satisfy definition \Def{37A}. This shows the line $ab$ exists and is unique.
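The cofactors above translate directly into a procedure for computing the line $ab$ (again a sketch in Python, with hypothetical names):

    def line_through(a, b):
        # Coefficients <X, Y : W> of the oriented line ab (Eq. 51A);
        # its characteristic function is f(x, y) = Delta(a, b, (x, y)).
        (xa, ya), (xb, yb) = a, b
        return (ya - yb, xb - xa, xa*yb - xb*ya)

    # Example: the line from (0,0) to (1,0) has (0,1) on its left side:
    X, Y, W = line_through((0, 0), (1, 0))
    print(X*0 + Y*1 + W > 0)   # True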
\subsection{\Ch8.\Sbsec{\Sec2.5}. Orientation of tetrahedra}
In this book we will be concerned mostly with computational geometry on the plane. As a general rule, problems in three and higher dimensional spaces will be discussed quite informally, and only if they are a natural extension of their planar counterparts. However, it is often the case that a purely planar problem can be reduced, via some appropriate mapping of its input data, to a simpler geometrical problem in a space of higher dimension. For example, a line segment $pq$ on the plane can be viewed as a point $(xp, yp, xq, yq)$ in $\reals^4$. Such reductions justify the development of basic tools for computational geometry in spaces of higher dimension.
The $\posor$ and $\negor$ predicates can be generalized to higher-dimensional spaces in a most natural way:
\definition{\Def{37}}{We say that $k+1$ points $a0, a1, \ldots, ak$ of $\reals^k$ (in that order) are {\it positively oriented} (denoted by $\posor(a0, a1, \ldots, ak)$) if the determinant
$$\Delta(a0, a1, \ldots, ak)=\detmat{\matrix5{
x{01}&x{02}&\ldots&x{0k}&1\cr\vsk3
x{11}&x{12}&\ldots&x{1k}&1\cr\vsk3
\vdots&\vdots&&\vdots&\vdots\cr\vsk3
x{k1}&x{k2}&\ldots&x{kk}&1\cr}}\eqno(\Eq{38})$$
is positive. If $\Delta$ is negative, we say the points are {\it negatively oriented} (written as $\negor(a0, a1, \ldots, ak)$).}
In particular, given four points $a,b,c,$ and $d$ in three-dimensional space, we have
$$\Delta(a,b,c,d)=\detmat{\matrix4{
xa&ya&za&1\cr\vsk3
xb&yb&zb&1\cr\vsk3
xc&yc&zc&1\cr\vsk3
xd&yd&zd&1\cr}}\eqno(\Eq{38B})$$
The predicates $\posor(a,b,c,d)$ and $\negor(a,b,c,d)$ are both false if $\Delta(a,b,c,d)$ is zero. Subtracting the last row of (\Eq{38B}) from the first three, we get
$$\Delta(a,b,c,d)=\detmat{\matrix3{
xa-xd&ya-yd&za-zd\cr\vsk3
xb-xd&yb-yd&zb-zd\cr\vsk3
xc-xd&yc-yd&zc-zd\cr}}.\eqno(\Eq{37B})$$
This determinant is zero if and only if the three vectors $a-d$, $b-d$ and $c-d$ are linearly dependent, that is, the four points $a,b,c,d$ lie on the same plane. This includes the cases where two or more of the points are coincident, or three of them are collinear.
\note{comment on other meanings of $\CCW$: the three vectors $a-d$, $b-d$, and $c-d$, in that order, constitute a right-handed basis. That is, the determinant (\Eq{37B}) of the linear transformation mapping the standard basis into those three vectors is positive.}
We remark in passing that the volume of the tetrahedron with vertices $a$, $b$, $c$, and $d$ is ${1\over6}\leftv\Delta(a,b,c,d)\rightv$.
The value of $\Delta(a,b,c,d)$ can be computed by expanding (\Eq{37B}), along any row or column, into a linear combination of three $2\times 2$ determinants; this requires 9 multiplications and 14 additions/subtractions in total.
\note{Gaussian decomposition of (\Eq{37B}) does not help at this point: requires 7 multiplications, 3 divisions, and 14 additions/subtractions. It gets better at higher dimensions, of course.}
\note{Remark on best ways to compute $\posor$; give code?}
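One possible sketch of such code (in Python, with names of our own choosing; exact integer arithmetic is assumed, so this is not yet a robust implementation):

    def delta3d(a, b, c, d):
        # Determinant (Eq. 37B), expanded by the first row: 9 subtractions
        # to form the entries, then 9 multiplications and 5 further
        # additions/subtractions (14 additions/subtractions in total).
        m = [[a[i] - d[i] for i in range(3)],
             [b[i] - d[i] for i in range(3)],
             [c[i] - d[i] for i in range(3)]]
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

    def posor3(a, b, c, d): return delta3d(a, b, c, d) > 0
    def negor3(a, b, c, d): return delta3d(a, b, c, d) < 0

    # (0,0,1) is on the right side of the plane through
    # (0,0,0), (1,0,0), (0,1,0) under this convention:
    print(negor3((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))   # True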
Theorem \Th{49A} clearly generalizes to arbitrary dimensions. Let $a0\pr, a1\pr, \ldots, ak\pr$ be a permutation of $a0, a1, \ldots, ak$; then $\posor(a0\pr, a1\pr, \ldots, ak\pr)$ is equivalent to $\posor(a0, a1, \ldots, ak)$ or $\negor(a0, a1, \ldots, ak)$, depending on whether the permutation is even or odd, respectively. In particular, $$\posor(a,b,c,d)\eqv\negor(b,c,d,a)\eqv\posor(c,d,a,b)\eqv\negor(d,a,b,c).$$
\subsection{\Ch8.\Sbsec{\Sec2.6}. Oriented planes}
The concept of oriented line extends quite naturally to Euclidean spaces of more than two dimensions. In the three-dimensional space $\reals^3$, we define an {\it oriented plane} $\pi$ as a plane (in the classical sense) endowed with a {\it normal direction} $\hat \pi$, a unit vector perpendicular to it. We are therefore able to distinguish the {\it left half-space} of $\pi$, into which $\hat \pi$ points, from its {\it right half-space}. Like a line, an oriented plane $\pi$ is uniquely determined by its normal vector and a point on $\pi$.
A {\it characteristic function} for $\pi$ is a function from $\reals^3$ to $\reals$ that is zero, positive, or negative depending on whether the argument is on $\pi$, on its left half-space, or on its right half-space, respectively. The signed distance from $\pi$ to a point $p$ is defined exactly as for oriented lines, and it too is given by $\sdist(\pi, p)=\hat\pi\cdot(p-q)$ where $q$ is any point on $\pi$.\figspace4cm
If $\hat\pi=(X,Y,Z)$ and $p=(x,y,z)$, we have
$$\sdist(\pi, p)=Xx + Yy +Zz - (Xxq+Yyq+Zzq),\eqno(\Eq{91B})$$
proving that every oriented plane has a characteristic function that is a first-degree polynomial in $x$, $y$ and $z$. Conversely, any such polynomial $f(x,y,z)=Xx+Yy+Zz+W$ (with $X,Y,$ and $Z$ not all zero) is a characteristic function for a unique oriented plane, denoted by $\plc{X,Y,Z\hom W}$. Its unit normal vector is
$$\hat\pi={1\over{\sqrt{X^2+Y^2+Z^2}}}\,(X, Y, Z),\eqno(\Eq{92B})$$
and it passes through the point
$$\pi0=-{W\over{X^2+Y^2+Z^2}}\,(X, Y, Z).\eqno(\Eq{93B})$$
The coefficients are said to be {\it normalized} if $X^2+Y^2+Z^2=1$.
As in the case of lines, the planes $\plc{X, Y, Z\hom W}$ and $\plc{X\pr, Y\pr, Z\pr\hom W\pr}$ denote the same oriented plane if and only if there is a {\it positive} real number $\alpha$ such that $X\pr=\alpha X$, $Y\pr=\alpha Y$, $Z\pr=\alpha Z$, and $W\pr=\alpha W$. It follows that every oriented plane has a unique set of normalized coefficients. The oriented plane $\plc{-X, -Y, -Z\hom -W}$ passes through the same points as $\plc{X, Y, Z\hom W}$, but has the opposite orientation: the left half-space of one is the right half-space of the other, and vice-versa.
\footprob{\Prob{\Sbsec{\Sec2.6}.1}}{When and where does the plane $\pi=\plc{X, Y, Z\hom W}$ intersect the three coordinate axes?}
\footprob{\Prob{\Sbsec{\Sec2.6}.2}}{What is the relationship between $\plc{X, Y, Z\hom W}$ and $\plc{X, Y, Z\hom -W}$?}
\footprob{\Prob{\Sbsec{\Sec2.6}.3}}{Generalize oriented plane and related concepts to higher dimensions.}
\note{Develop the analogous theory for points in homogenous coordinates}
\subsection{\Ch8.\Sbsec{\Sec2.7}. The oriented plane through three given points}
The oriented plane through three given points is also defined by a generalization of definition \Def{37A}:
\definition{\Def{37B}}{If $a,b, c$ are three non-collinear points of $\reals^3$, we define the {\it oriented plane through $a$, $b$ and $c$}, or simply the {\it plane $abc$}, as being the (classical) plane through those three points, oriented so that a point $d$ is on its left (resp. right) half-space if and only if $a,b, c,$ and $d$ are positively (resp. negatively) oriented.}
It is easy to prove that this plane always exists and is unique. Its characteristic function is
$$\Delta(a,b,c,d)=Xxd+Yyd+Zzd+W,\eqno(\Eq{51B})$$
(as a function of $d$), where
$$X=-\detmat{\matrix3{
ya&za&1\cr
yb&zb&1\cr
yc&zc&1\cr}},$$
$$Y=+\detmat{\matrix3{
xa&za&1\cr
xb&zb&1\cr
xc&zc&1\cr}},$$
$$Z=-\detmat{\matrix3{
xa&ya&1\cr
xb&yb&1\cr
xc&yc&1\cr}},$$
and
$$W=+\detmat{\matrix3{
xa&ya&za\cr
xb&yb&zb\cr
xc&yc&zc\cr}}.$$
This plane is such that the triangle $abc$ is seen as counterclockwise oriented to an observer anywhere on its right side.
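These cofactors, too, are straightforward to transcribe (a Python sketch under the same assumptions as before; the names are ours):

    def det3(m):
        # Expansion of a 3x3 determinant by its first row.
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

    def plane_through(a, b, c):
        # Coefficients <X, Y, Z : W> of the oriented plane abc, from the
        # cofactors of the last row of Delta(a, b, c, d) (Eq. 51B).
        (xa, ya, za), (xb, yb, zb), (xc, yc, zc) = a, b, c
        X = -det3([[ya, za, 1], [yb, zb, 1], [yc, zc, 1]])
        Y =  det3([[xa, za, 1], [xb, zb, 1], [xc, zc, 1]])
        Z = -det3([[xa, ya, 1], [xb, yb, 1], [xc, yc, 1]])
        W =  det3([[xa, ya, za], [xb, yb, zb], [xc, yc, zc]])
        return (X, Y, Z, W)

    # The plane through (0,0,0), (1,0,0), (0,1,0) is <0,0,-1 : 0>,
    # whose left half-space is z < 0:
    print(plane_through((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # (0, 0, -1, 0)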
\subsection{\Ch8.\Sbsec{\Sec2.8}. Oriented circles}
\note{Shouldn't the orientation of circles be given by a normal vector? Worry about this in the 2SP.}
\note{find a uniform way to define orientation of lines, planes and circles.}
The test of whether a point is inside or outside a given circle plays a central role in many geometrical algorithms. In such algorithms, it is often convenient to treat straight lines (with the addition of the point at infinity) as circles with infinite radius. Topologically, the two concepts are equivalent: like a circle, a straight line divides the inversion plane ($\reals^2$ plus the point at infinity $\omega$) into two simply connected regions. However, there is no natural criterion for calling one of these regions the ``interior'' of the line, and the other the ``exterior''. We will resolve this ambiguity by specifying an orientation for the line, and substitute the notions of ``left side'' and ``right side'' for ``interior'' and ``exterior''. Mathematical cleanliness then requires that we treat circles in the same manner. This justifies the following definitions:
\definition{\Def4}{An {\it oriented circle} $C$ is a circle on the extended plane endowed with a direction of traversal. The two regions of the extended plane bounded by $C$ are called the {\it left} and {\it right sides} of $C$, with respect to the direction of traversal of $C$.}
As we said before, an oriented straight line (with the point at infinity included) is a special case of oriented circle; its left and right sides are simply its left and right half-planes. For an oriented circle $C$ of finite radius, the left side is the interior or the exterior of the circle, depending on whether $C$ is oriented counter- or clockwise, respectively.
Like oriented lines and planes, an oriented circle $C$ can also be defined by a characteristic function, which is zero on $C$, positive on its left side, and negative on its right side.
\theorem{\Th{81}}{Every oriented circle $C$ has a characteristic function of the form
$$f(x,y)=Xx+Yy+Z(x^2+y^2)+W.\eqno(\Eq{78})$$}
\proof{If $C$ is a counterclockwise oriented circle with finite radius $r$ and center $c=(xc, yc)$, we can test whether $p=(x,y)$ lies inside or outside $C$ by comparing $\dist(c,p)=\sqrt{(xc-x)^2+(yc-y)^2}$ against $r$. Since neither quantity is negative, the same results will be obtained if we compare $(\dist(c,p))^2$ with $r^2$. So, the expression
$$\eqalignno{&r^2-[(xc-x)^2+(yc-y)^2]&(\Eq{79})\cr
&\null=2xcx+2ycy-(x^2+y^2)+(r^2-xc^2-yc^2)&(\Eq{80})\cr}$$
is a characteristic function for $C$, and the theorem then holds with $X=2xc$, $Y=2yc$, $Z=-1$, and $W=r^2-xc^2-yc^2$.
\pp If $C$ is {\it clockwise} oriented, the left side is the exterior, and the right side is the interior. The sign reversal of (\Eq{80}) will then be a characteristic function for $C$. The theorem is satisfied by taking $X=-2xc$, $Y=-2yc$, $Z=1$, and $W=-(r^2-xc^2-yc^2)$.
\pp If the radius is infinite, the circle is an oriented straight line, which as we know has a characteristic function of the form $Xx+Yy+W$. This satisfies the theorem if we take $Z=0$.}
The ``converse'' of this theorem is also true: for any quadruple of coefficients $X, Y, Z$, and $W$ such that $X^2+Y^2-4ZW >0$, there is an oriented circle whose characteristic function is (\Eq{78}). If $Z=0$, the circle is simply the oriented line $\plc{X, Y\hom W}$. Otherwise it is a finite circle whose center $c$ and radius $r$ are given by
$$c=\left(-{X\over{2Z}}, -{Y\over {2Z}}\right)\eqno(\Eq{101})$$
and
$$r=\sqrt{xc^2+yc^2-W/Z}={1\over{\leftv 2Z\rightv}}\sqrt{X^2+Y^2-4ZW}.\eqno(\Eq{102})$$
The circle will be oriented counter- or clockwise depending on whether $Z$ is negative or positive.
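The conversion from coefficients to center, radius, and orientation (equations (\Eq{101}) and (\Eq{102})) might be coded as follows (a sketch; the name circle_of is ours):

    from math import sqrt

    def circle_of(X, Y, Z, W):
        # Assumes X^2 + Y^2 - 4*Z*W > 0 and Z != 0 (a finite circle).
        center = (-X/(2*Z), -Y/(2*Z))
        radius = sqrt(X*X + Y*Y - 4*Z*W) / abs(2*Z)
        counterclockwise = Z < 0
        return center, radius, counterclockwise

    # The unit circle, counterclockwise: f(x,y) = -(x^2 + y^2) + 1
    # gives center (0,0), radius 1, counterclockwise = True.
    print(circle_of(0, 0, -1, 1))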
\digress{The condition $X^2+Y^2-4WZ>0$ is equivalent to the requirement that $f(x,y)=Xx+Yy+Z(x^2+y^2)+W$ assumes both positive and negative values. If $X^2+Y^2-4ZW <0$, $f$ has everywhere the same (non-zero) sign as $Z$: the ``circle'' it represents has no boundary, and the whole plane lies on one side of it. When $X^2+Y^2-4ZW =0$ (and $Z\neq0$), $f$ is zero at the point $c$ of (\Eq{101}) and has the sign of $Z$ everywhere else.}
\note{Should this stuff go into chapter \Ch2? A sketchier version of it is already there.}
\note{Should we accept circles of radius zero?}
\note{Remark that (\Eq{79}) is faster to compute than (\Eq{80}) if we know the center and radius. Equation (\Eq{80}) is valuable because of the $\lambda$-mapping, and it also gives a more efficient formula for the case when $C$ is determined by three given points.}
\subsection{\Ch8.\Sbsec{\Sec2.9}. The quadratic mapping}
These results have a very nice geometrical interpretation. Consider the function $\lambda$ that lifts every point $(x, y)$ of the plane onto the point $(x, y, x^2+y^2)$ on the paraboloid of revolution $\Lambda$ defined by the equation $z=x^2+y^2$. This mapping has an amazing property: circles and straight lines on the $(x,y)$-plane are mapped to planar sections of $\Lambda$, and vice-versa!\figspace5cm
\note{Should we use $\lambda(x,y)=(x,y,-x^2-y^2)$? This would lead to $\incircle(a,b,c,d)=\negor(\lambda(a),\lambda(b),\lambda(c),\lambda(d))$, instead of $\posor$.}
This fact follows directly from theorem \Th{81}: if the characteristic function of $C$ is $Xx+Yy+Z(x^2+y^2)+W$, then $(x,y)$ lies on $C$ if and only if its image $(x,y,x^2+y^2)$ lies on the plane $\pi(C)=\plc{X,Y, Z\hom W}$. Furthermore, the point $p$ is to the left of $C$ if and only if its image $\lambda(p)$ is to the left of $\pi(C)$. In particular, if $C$ is a counterclockwise oriented circle of finite radius, the plane $\pi(C)$ is oriented downwards ($Z$ is negative), so $p$ is to the left of $C$ if and only if its image is below $\pi(C)$.
Conversely, if a plane $\plc{X,Y,Z\hom W}$ cuts the paraboloid $\Lambda$, any point $(x,y,z)$ on the intersection will satisfy both $z=x^2+y^2$ and $Xx+Yy+Zz+W=0$; the projection of that point will then satisfy $f(x,y)=Xx+Yy+Z(x^2+y^2)+W=0$. Moreover, by assumption there will be points of $\Lambda$ on both sides of $\plc{X,Y,Z\hom W}$, so $f$ takes both positive and negative values. It follows that $X^2+Y^2-4ZW>0$, and therefore $f$ is a valid characteristic function for some circle.
\digress{These results can be generalized quite easily to other algebraic curves. For example, general conic sections (which include circles, ellipses, parabolas and hyperbolas as special cases) have characteristic functions of the form $Xx+Yy+Ax^2+By^2+Cxy+W$. Therefore, the function $\xi:(x,y)\mapsto(x,y,x^2,y^2,xy)$ maps the problem of locating a point against a conic into that of locating a point of $\reals^5$ against a four-dimensional hyperplane of $\reals^5$. If the term $Cxy$ is omitted, the characteristic function describes an ellipse or hyperbola with main axes parallel to the coordinate axes.
\dp One can also extend this to higher dimensions: by the mapping $\lambda:(x,y,z)\mapsto(x,y,z,x^2+y^2+z^2)$, point location against a sphere is reduced to point location against a three-dimensional hyperplane of $\reals^4$.}
\footprob{\Prob{\Sbsec{\Sec2.9}.1}}{Study the problem of finding the minimum spanning ellipse in this light. What if we constrain the main axes of the ellipse to be parallel to the coordinate axes?}
\subsection{\Ch8.\Sbsec{\Sec2.10}. The oriented circle through three points}
The quadratic mapping makes the following result almost self-evident:
\theorem{\Th{41}}{Given any three distinct points $a, b, c$ on the plane, there is a unique circle that passes through them.}
\proof{Consider the images $\lambda(a), \lambda(b), \lambda(c)$ of $a,b,c$. These three points are distinct and, since $\Lambda$ is a strictly convex surface, they are also non-collinear. There exists then a unique plane $\pi=\lambda(a)\lambda(b)\lambda(c)$ passing through those three points. The intersection of this plane with $\Lambda$ is a circle that passes through $a,b,c$. Any other circle also passing through $a,b,c$ would correspond to a plane passing through $\lambda(a),\lambda(b),\lambda(c)$ but distinct from $\pi$, a contradiction.}
The circle passing through $a,b,c$ can be oriented in two distinct ways. If we go once around the circle starting from $a$, we will encounter the three points either in the order $a,b,c$ or in the order $a,c,b$, depending on which way we go. This motivates the following definition:
\definition{\Def{29}}{The {\it (oriented) circle $abc$}, where $a,b,c$ are distinct points of the plane, is the classical (unoriented) circle passing through them, with the direction of traversal from $a$ through $b$ to $c$.}
If the three points determine a counterclockwise oriented triangle, then the circle $C=abc$ is oriented the same way. Under the quadratic mapping, $C$ maps to the intersection of $\Lambda$ with the plane $\pi=\lambda(a)\lambda(b)\lambda(c)$. The left side (that is, the interior) of $C$ maps to the part of $\Lambda$ that lies below $\pi$. According to the comments we made before, that is the {\it left} side of $\pi$.\figspace5cm
The oriented circle $acb$ passes through the same points as $C$, but has the opposite orientation. In general, if $a\pr, b\pr, c\pr$ is a permutation of the points $a,b,c$, the circle $a\pr b\pr c\pr$ will be oriented the same way as $C$ if the permutation is even, and the opposite way if it is odd. The orientations of the planes $\pi\pr=\lambda(a\pr)\lambda(b\pr)\lambda(c\pr)$ and $\pi$ are related in precisely the same way. We conclude that the observation made in the preceding paragraph is true for {\it any} three points $a,b,c$, namely the left side of the circle $abc$ maps to the part of $\Lambda$ that is on the {\it left} side of the plane $\pi$.
Note that the proof of theorem \Th{41} still carries through if the three points $a,b,c$ are collinear; in that case, the plane $\pi$ is vertical, and the circle through $a,b,c$ degenerates to the line supporting them. The discussion above also carries through: the left side of the oriented line $abc$ maps to the part of $\Lambda$ that lies on the left side of $\pi$.
\proof{Let $p1, p2, p3$ be three sufficiently small pigs, and $\hbox{\bi W\/}$ a wolf, assumed to be simultaneously big and bad.}
\subsection{\Ch8.\Sbsec{\Sec2.11}. The $\incircle$ test}
We are now ready to define our geometric predicate, which applies to four points $a, b, c, d$ of the plane:
\definition{\Def5}{The predicates $\incircle(a, b, c, d)$ and $\outcircle(a,b,c,d)$ are true if the point $d$ lies respectively to the left and to the right of the oriented circle $abc$. Both predicates are false if two or more of $a,b,$ and $c$ coincide, or if $d$ is on the circle $abc$.}
In particular, when $a, b, c$ are counterclockwise oriented, $\incircle(a, b, c, d)$ is true if and only if $d$ lies inside the circle $abc$, and $\outcircle(a,b,c,d)$ is true if and only if $d$ is outside. When $a,b,c$ are clockwise oriented, these meanings are reversed: $\incircle$ is true {\it outside} the circle $abc$, and $\outcircle$ is true inside.\figspace4cm
At first sight this predicate seems awkward to use in practice, since its meaning depends on the ordering of the points $a$, $b$, and $c$. Actually, this is not so: in most uses of this predicate in the algorithms we are going to see, we know beforehand that $abc$ is a counterclockwise oriented triangle, so the intuitive interpretation suggested by its name is correct. Moreover, definition \Def5 handles the degenerate situations (where $a$, $b$, and $c$ are collinear) better than the simpler, intuitive interpretation.
According to what we discussed in the previous section, $\incircle(a,b,c,d)$ tests whether $\lambda(d)$ lies on the left side of the oriented plane $\lambda(a)\lambda(b)\lambda(c)$. Together with definition \Def{37B}, this proves the next lemma:
\lemma{\Th{40B}}{For any points $a,b,c,d$ on the plane,
$$\eqalign{
\incircle(a, b, c, d)&=\posor(\lambda(a),\lambda(b), \lambda(c),\lambda(d)),\cr
\outcircle(a, b, c, d)&=\negor(\lambda(a),\lambda(b), \lambda(c),\lambda(d)).\cr}\eqno(\Eq{103})$$}
From this equivalence we immediately derive a somewhat surprising fact: if the points $a,b,c,d$ are not on the same circle, then transposing any adjacent pair in the predicate $\incircle(a,b,c,d)$ will change the value of the predicate from true to false, or vice versa. In general, permuting the arguments of $\incircle(a,b,c,d)$ will either preserve or reverse its value, depending on whether the permutation has even or odd parity. The same of course is true of $\outcircle(a,b,c,d)$. In particular, we have
$$\incircle(a,b,c,d)\eqv\outcircle(b,c,d,a) \eqv\incircle(c,d,a,b)\eqv\outcircle(d,a,b,c).$$
We can see that in $\incircle(a,b,c,d)$ all four points play a symmetric role, even though from the definition $d$ seems to be special.
From lemma \Th{40B} we also conclude the following:
\theorem{\Th{43}}{The predicate $\incircle(a, b, c, d)$ is true if and only if
$$\Delta(\lambda(a), \lambda(b), \lambda(c), \lambda(d))=
\detmat{\matrix4{
xa&ya&xa^2+ya^2&1\cr\vsk3
xb&yb&xb^2+yb^2&1\cr\vsk3
xc&yc&xc^2+yc^2&1\cr\vsk3
xd&yd&xd^2+yd^2&1\cr}}\eqno(\Eq7)$$
is positive. The predicate $\outcircle(a, b, c, d)$ is true if and only if this determinant is negative.}
This theorem gives a reasonably efficient formula for computing the $\incircle$ predicate.
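For example, reducing (\Eq7) to a $3\times3$ determinant as in the first hint of the problem below, one obtains the following sketch (in Python, with names of our own choosing; exact arithmetic is assumed):

    def incircle_det(a, b, c, d):
        # Determinant (Eq. 7), reduced to 3x3 by subtracting the last row
        # from the first three and expanding by the last column.
        def row(p):
            return (p[0] - d[0], p[1] - d[1],
                    (p[0]**2 + p[1]**2) - (d[0]**2 + d[1]**2))
        (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = row(a), row(b), row(c)
        return (ax*(by*cw - bw*cy) - ay*(bx*cw - bw*cx) + aw*(bx*cy - by*cx))

    def incircle(a, b, c, d):  return incircle_det(a, b, c, d) > 0
    def outcircle(a, b, c, d): return incircle_det(a, b, c, d) < 0

    # (0,0) lies inside the counterclockwise circle through (1,0),(0,1),(-1,0):
    print(incircle((1, 0), (0, 1), (-1, 0), (0, 0)))   # True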
\footprob{\Prob{\Sbsec{\Sec2.11}.1}}{Find the most efficient way to compute the $\incircle$ test. {\it Hint\/}: Nine subtractions reduce the determinant (\Eq7) to a $3\times 3$ determinant. Note that we need only the sign of $\Delta$, not its absolute value. {\it Another hint\/}: Consider expanding (\Eq7) by the third column; who knows, it may be better.}
\footprob{\Prob{\Sbsec{\Sec2.11}.2}}{Study the numerical stability of the determinant (\Eq7), and develop ``numerically robust'' algorithms for computing it.}
\subsection{\Ch8.\Sbsec{\Sec2.12}. Properties of the $\incircle$ test}
The $\incircle$ predicate has an alternative geometrical interpretation that is in some situations easier to use:
\lemma{\Th{20C}}{Let $a,b,c,d$ be in counterclockwise order the four distinct vertices of a simple quadrilateral $Q$ (not necessarily convex). Let $\hat a, \hat b, \hat c, \hat d$ be the corresponding internal angles of $Q$. Then the following conditions are equivalent:
\tp 1. $\incircle(a,b,c,d)$;
\tp 2. $\hat a+\hat c < 180\deg$;
\tp 3. $\hat b+\hat d > 180\deg$;
\tp 4. $\hat a+\hat c < \hat b+\hat d$.}
\proof{Conditions 2, 3 and 4 are obviously equivalent, since the interior angles of any simple quadrilateral add to exactly $360\deg$. Let us show that 1 and 3 are equivalent.\figspace3cm
\pp Because of our assumptions on $Q$, the circle $C=\circle(a,b,c)$ is counterclockwise oriented. The arcs $cda$ and $abc$, respectively subtended on $C$ by the angles $\hat b$ and $\hat d$, add up to $360\deg$. As we know from elementary geometry, the measure of $\hat b$ is exactly half of the arc $cda$. Also, the angle $\hat d$ will be more than half of the arc $abc$ if and only if $d$ is inside the circle $C$\foot\dag{Note that this holds even if $Q$ is concave at $d$, that is, if $b$ and $d$ lie on the same side of the line $ac$.}. We conclude that $\hat b+\hat d >{1\over2}360\deg=180\deg$ if and only if $d$ is to the left of $\circle(a,b,c)$, that is, if and only if $\incircle(a,b,c,d)$.}
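The following small numerical experiment (a Python sketch; the helper angle_at is ours, and is valid only for non-reflex angles) illustrates lemma \Th{20C} on a convex quadrilateral:

    from math import acos, degrees, hypot

    def angle_at(prev, v, nxt):
        # Interior angle at vertex v, in degrees (non-reflex angles only).
        ux, uy = prev[0] - v[0], prev[1] - v[1]
        wx, wy = nxt[0] - v[0], nxt[1] - v[1]
        return degrees(acos((ux*wx + uy*wy) /
                            (hypot(ux, uy) * hypot(wx, wy))))

    # A counterclockwise quadrilateral with d inside the circle abc:
    a, b, c, d = (1, 0), (0, 1), (-1, 0), (0, -0.5)
    ahat = angle_at(d, a, b)
    chat = angle_at(b, c, d)
    print(ahat + chat)   # about 143.1 degrees, less than 180,
                         # in agreement with incircle(a, b, c, d)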
\digress{Another way of testing whether four points are cocircular is given by Ptolemy's theorem, which states that $a,b,c,d$ lie on the same circle (in that order) if and only if
$$\bar{ab}\times\bar{cd}+\bar{bc}\times\bar{ad} = \bar{bd}\times\bar {ac}.$$
This theorem however cannot be used to decide whether $d$ is inside or outside the circle $abc$, as we always have
$$\bar{ab}\times\bar{cd}+\bar{bc}\times\bar{ad}\geq \bar{bd}\times\bar {ac},$$
with equality only when the four points are cocircular. In fact, the quantity one obtains by rendering $\bar{ab}\times\bar{cd}+\bar{bc}\times\bar{ad} -\bar{bd}\times\bar {ac}$ radical-free is exactly the square of the determinant $\Delta(\lambda(a),\lambda(b),\lambda(c),\lambda(d))$ above. \note{Check this again. Is the square really there from the beginning, or is it introduced in the elimination of radicals?}}
\note{Historical note: augment.} The quadratic mapping as a means of transforming circles in the plane into planes in three-space is a rather old idea. The basic principles of the $\incircle$ test appear in a classic paper by Jung (do they???). However, it is not as well known as it should be, and in fact it has been independently rediscovered many times.