$${di\over ds} = -{k+1\over k} + {1 \over k s} $$
$$ i(s) = -{k+1\over k}s + {1 \over k} \log s + c $$
where $c$ is a constant of integration determined by the initial condition $i(1 - \epsilon) = \epsilon$. For large $n$, $\epsilon$ goes to zero, giving:
$$ c = {k+1\over k} $$
and a solution of
$$ i(s) = {k+1\over k}(1 - s) + {1 \over k} \log s $$

\noindent Several variations on the basic rumor-mongering scheme are possible:

\item{1.} {\bf Blind vs. Feedback.} The rumor example used feedback from the recipient: a sender loses interest only if the recipient already knows the rumor. A blind variation loses interest with probability $1/k$ regardless of the recipient. Analysis and simulations indicate that, while the number of sites missing an update under blind removal still falls off exponentially with $k$, $k$ must be at least one larger to achieve performance equivalent to the feedback mechanism. Column C of Table 1 shows a blind variation.

\item{2.} {\bf Counter vs. Coin.} Instead of losing interest with probability $1/k$, we can use a counter, so that we lose interest only after $k$ unnecessary contacts. The counter improves both the percentage of sites removed and the time to converge, at the expense of keeping extra state for elements on the infective lists. Using counters, it is also possible to bound the rumor message traffic rigorously. Suppose $m$ sites are removed at the end of the epidemic. Then $m-1$ messages were necessary, and each of the $m$ sites sent $k$ unnecessary messages before it was removed, so the traffic was at least $(k+1)m - 1$. On the other hand, each unnecessary message was sent by a site that, within the last $k$ cycles, either sent a necessary message or newly discovered the update. This gives an upper bound of $k(2m-1)$ on the unnecessary traffic.

\item{3.} {\bf Push vs. Pull.} So far we have assumed that all sites randomly choose destination sites and {\sl push} infective updates to them. The {\sl pull} vs. {\sl push} distinction already made for anti-entropy can also be applied to rumor mongering. Under certain assumptions described below (i.e.
no connection limit) the {\sl pull} technique results in more sites removed at the end of an epidemic; {\sl push}, however, has the advantage that it quiesces once the databases are consistent.

\item{4.} {\bf Minimization.} It is also possible to use the counters of both parties in an exchange to make the removal decision. The idea is to use a push and a pull together; if both sites already know the update, then only the site with the smaller counter is incremented. This requires sending the counters over the network, but it results in the most complete removal of sites we have seen so far.

\noindent We have simulated the above variations, paying attention to two other parameters:

\item{5.} {\bf Connection Limit.} Under the {\sl push} model, for example, a site can become the recipient of more than one push in a single cycle. If there is a connection limit, then the excess pushes are dropped. If the cycle time (each site attempts one push per cycle) is short, then a connection limit is a necessary and realistic assumption. The connection limit affects the {\sl push} and {\sl pull} mechanisms differently: {\sl pull} gets significantly worse and, surprisingly, {\sl push} gets significantly better. Without a connection limit, {\sl pull} is very effective. Since {\sl push} works best with a connection limit of $1$, it seems worthwhile to enforce this limit even if more connections are possible in a cycle.

\item{6.} {\bf Hunt Limit.} If a connection is rejected because of the connection limit, the choosing site can hunt for other sites. Hunting is relatively inexpensive and seems to improve all connection-limited cases. In the extreme case, a connection limit of $1$ with an infinite hunt limit results in a complete permutation: push and pull become equivalent, and the percentage removed is very good.
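The basic mechanism and the blind-vs.-feedback variation can be illustrated with a small simulation. The sketch below is not the paper's simulator: it assumes synchronous cycles, the push model with coin (probability-$1/k$) removal, and no connection limit; the function name `rumor_simulation` and its parameters are illustrative.

```python
import random

def rumor_simulation(n=1000, k=2, feedback=True, seed=0):
    """Push rumor mongering with coin (probability 1/k) removal.

    Synchronous cycles: every infective site pushes the rumor to one
    uniformly chosen site per cycle.  With feedback, the sender flips
    the 1/k coin only when the recipient already knew the rumor; the
    blind variant flips it on every push.  Returns the residue: the
    fraction of sites still susceptible when the rumor dies out.
    """
    rng = random.Random(seed)
    SUSCEPTIBLE, INFECTIVE, REMOVED = range(3)
    state = [SUSCEPTIBLE] * n
    state[0] = INFECTIVE            # one site starts with the update
    while INFECTIVE in state:
        # Snapshot senders so sites infected this cycle start next cycle.
        senders = [i for i, st in enumerate(state) if st == INFECTIVE]
        for i in senders:
            j = rng.randrange(n)    # random partner (self-pushes allowed; negligible for large n)
            recipient_knew = state[j] != SUSCEPTIBLE
            if not recipient_knew:
                state[j] = INFECTIVE
            # Coin removal: on every push if blind, only on feedback otherwise.
            if (recipient_knew or not feedback) and rng.random() < 1.0 / k:
                state[i] = REMOVED
    return state.count(SUSCEPTIBLE) / n
```

Setting $i(s) = 0$ with $c = (k+1)/k$ in the solution above gives the residue implicitly as $\log s = -(k+1)(1-s)$, about $0.20$ for $k=1$ and well under $1\%$ for $k=4$ in the mean-field limit; comparing `rumor_simulation(k=1)` against `rumor_simulation(k=4)` should reflect that exponential falloff, and `feedback=False` exhibits the weaker blind variant of item 1.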