How HotBits Works


This document describes the operation of the original 1996 first-generation HotBits generator. It is for historical reference only; please see the third-generation HotBits “how it works” document if you are interested in the operation of the generator currently in service at Fourmilab.

Note: this document uses HTML superscript and subscript tags introduced by Netscape in Netscape Navigator 2.0. If your browser doesn't support this feature, the following number: “10¹⁰” will appear as “1010” instead of “ten to the tenth”. You shouldn't have any trouble figuring out the meaning from context.

Tunnelling to Freedom

Face it, folks: Nature is a lazy Mother. If there's any way at all a physical system (subatomic particle, nucleus, atom, molecule, star, or galaxy) can reduce its energy without violating a law of physics, quantum mechanics tells us it will. What it doesn't tell us is when. Why is this, and how can we exploit this physical principle to generate random numbers?

Nuclear Decay the Beta Way

Consider the atoms of Krypton-85 that make up the HotBits radiation source. Due to details of how the atomic nucleus is structured which we thankfully don't need to get into here, it turns out that if one of the neutrons in the nucleus were to turn into a proton, the resulting Rubidium-85 nucleus would have less binding energy. Now a neutron just can't turn into a proton willy-nilly: that would violate the law of charge conservation since a neutron with a charge of 0 would be changing into a proton with a charge of +1. Physicists believe charge conservation is never violated: even a black hole bears the net charge of all the particles it has devoured. But there's a way around this—if the neutron changes into a proton and simultaneously spits out an electron, the charge before and after is the same; before we had a neutron with a charge of 0, afterward a proton with a charge of +1 and an electron with a charge of −1. +1 + −1 = 0: the books balance! In the world of atomic physics, this is called beta decay and the electron that flies out of the nucleus a beta particle.

(A beta particle is an electron, pure and simple, and all electrons are absolutely identical. The reason an electron which happens to begin its career by being shot out of an atomic nucleus as opposed to, say, boiled out of the hot metal filament in the other end of your computer monitor, is called a “beta particle” is historical. It took a while for physicists to figure out that “beta rays” and electrons were one and the same thing, and by that time the name had stuck.)

Anyway, we can write the formula for the beta decay of Krypton-85 as:

Kr-85 → Rb-85 + e⁻ + gamma

The Krypton-85 nucleus (the 85 means there are a total of 85 protons and neutrons in the atom) spontaneously turns into a nucleus of the element Rubidium which still has a sum of 85 protons and neutrons, and a beta particle (electron) flies out, resulting in no net difference in charge. In this case, a gamma ray is also emitted, carrying away some of the energy. “Gamma rays” turn out to be nothing other than photons—particles of light, just carrying a lot more energy than visible light. They're called “gamma rays” instead of “photons” for the same reason beta particles aren't just called electrons. Nuclear reactions release a lot of energy: photons of visible light have an energy between 1 and 10 electron volts. The electrons in your computer monitor or television have energies between 10,000 and 20,000 electron volts (the high voltage needed to impart this energy to them is why it's a poor idea to stick your hand inside a television set). By comparison, the photon that flies out of decaying Krypton-85 has 513,990 electron volts (514 keV) of energy, and the electron 687,000 electron volts (687 keV). Particles with this kind of energy behave differently than the kind we usually encounter, which is why it took a while for physicists to figure out they really were just very energetic photons and electrons. This also explains why “nuclear radiation” is more dangerous than daylight and why nuclear bombs make so big a bang compared to the same amount of dynamite.

First Uncertainty Bank: Energy Loans for Needy Nuclei

Now you'd think that given the chance to reduce its energy by more than a million electron volts, a Krypton-85 nucleus would be just itching to heave that electron out the door. But there's a catch. Even though the final result of emitting the electron reduces the energy of the nucleus, the process of emitting it requires more energy than the nucleus has lying around. Think of the poor Krypton nucleus as being trapped on a ledge partway up a hillside, with a small bump between it and the long slope down into the valley below.

If it manages somehow to get the energy to make it over the little bump to the right, it can slide all the way down to the bottom and turn into Rubidium, but otherwise it's stuck where it is. If quantum mechanics did not govern the universe, the Krypton-85 nucleus would be stable. But, of course, without quantum mechanics atoms wouldn't be stable, so neither you nor I nor anything else made of atoms would exist, so despite all its complexity, fuzziness, uncertainty, and spooky action-at-a-distance, quantum mechanics is probably a Good Thing. However, I must note that quantum mechanics also permits Microsoft Windows to exist.

What our Krypton atom stuck on its energy ledge needs is a loan of energy to escape. Once over the hill, it will gladly repay its debt with the ample energy it releases as it skids down the slope into the valley of Rubidium. We could loan the energy to the nucleus by hitting it with a gamma ray, but thanks to the uncertainty principle of Heisenberg, that isn't necessary! The nucleus can, in effect, borrow the energy from the vacuum, momentarily violating the law of conservation of energy, and then, from the energy released by the decay, repay the loan before the conservation cops arrive.

Heisenberg's uncertainty principle provides, described in very broad brushstrokes to avoid getting bogged down in detail, that the more precisely any given physical quantity (the position of a particle, for example) is measured, the more uncertain a complementary quantity (momentum, in the case of position) becomes. The same uncertainty relation applies to time and energy. You can measure the energy of a system as precisely as you like, but there is a minimum time required to measure its energy to a given precision. Conversely, the energy of a system can be said to fluctuate to an increasing extent as you observe it over shorter and shorter intervals.

On the scale of atoms and subatomic particles, the results of this uncertainty have profound effects. For the uncertainty of energy at very short time intervals means there is a nonzero probability that, at a given instant, the Krypton-85 nucleus will have enough energy to surmount the hill that is confining it. Once pushed over the edge, the energy released pays back the uncertainty principle's “energy loan” in a time less than would be required to measure the momentary non-conservation of energy. One can also view the confined Krypton nucleus as “tunnelling through” the barrier confining it—in fact, this process is called “quantum tunnelling”.

But even though the energy loan which triggers the beta decay is not detectable, the decay that results most definitely is and, being impossible in the absence of the uncertainty principle, demonstrates its essential role in nuclear and atomic physics. Note that once the Krypton nucleus beta decays into Rubidium-85, it finds itself at the bottom of a valley with steep walls on either side. There is no place to tunnel to—it is trapped since the energy it would need to jump back up to the Krypton-85 level could be “borrowed” only for an interval less than the time needed to transform a proton into a neutron. As a consequence, Rubidium-85 is a stable isotope—it is not radioactive. We could, however, give it the energy required to jump it back onto the Krypton-85 ledge. If we bombard it with energetic electrons (beta particles) in a particle accelerator or nuclear reactor, occasionally an electron will strike a Rubidium nucleus with sufficient energy to convert a proton into a neutron—reversing the arrow in the beta decay equation, transforming it back into Krypton-85 through the process of inverse beta decay. Once transformed, it is, of course, doomed to eventually tunnel its way back to the bottom of the valley.

Physicists: please excuse my glossing over details such as the weak interaction, W bosons, u and d quarks, cross sections, etc. etc. and the very sloppy description of the uncertainty principle. I'm afraid if I go into any more detail, I'll lose the entire audience before we get to the good stuff—half life and the no-hidden-variables nature of quantum theory.

Get a (Half-)Life

Rubidium is stable because the energy valley that contains it is so deep it can't borrow the energy needed to tunnel out for long enough to complete the process. The barrier confining Krypton-85 is sufficiently high that a given nucleus has only a 50% chance of tunnelling through in a period of 10.73 years—eternity on the time scale of most nuclear events. This is called its half-life, since if you start out with a given large number of Krypton-85 nuclei, every 10.73 years you'll find that, on the average, half of the number present at the start of the period have decayed into Rubidium. What happens if the barrier is higher or lower, as is the case for other nuclei prone to beta decay? Well, if the barrier is lower, it means less energy needs to be borrowed to surmount it, and as a result the energy can be borrowed for a longer multiple of the time needed to “do the deal”. Consequently, the probability of the nucleus decaying in a given period of time is increased or, in other words, the half-life is decreased. The nucleus with a lower barrier will be more radioactive. Sodium-35 perches precariously on a ledge with a tiny barrier compared to the one that confines Krypton-85. As a result, its half-life is only 1.5 milliseconds—one and a half thousandths of a second. On the other hand, Indium-115 has an energy barrier so high that you have to wait 4.4×10¹⁴ (440 million million) years for half the nuclei in a sample to decay. It kind of takes your breath away to discover a mundane physical process which occurs at rates varying over 24 orders of magnitude—from about a thousand times a second to a thousand times the age of the universe, but many things about quantum mechanics take your breath away once you invest the effort to appreciate (if not understand) them.
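
The arithmetic behind these figures is simple exponential decay: after a time t, the fraction of nuclei that have not yet decayed is (1/2) raised to the power t divided by the half-life. Here is a small illustrative C program (not part of the HotBits software, just a sketch of that formula) which prints the surviving fraction of a sample of Krypton-85 after one half-life, two half-lives, and a century:

    /* Illustrative sketch, not HotBits code: the fraction of nuclei
       surviving after time t is (1/2)^(t / half-life). */
    #include <stdio.h>
    #include <math.h>

    static double fraction_remaining(double years, double half_life_years)
    {
        return pow(0.5, years / half_life_years);
    }

    int main(void)
    {
        double kr85 = 10.73;    /* Krypton-85 half-life in years */

        printf("After 10.73 years: %.3f\n", fraction_remaining(10.73, kr85));
        printf("After 21.46 years: %.3f\n", fraction_remaining(21.46, kr85));
        printf("After 100   years: %.5f\n", fraction_remaining(100.0, kr85));
        return 0;
    }

Run, it shows that half the sample survives one half-life, a quarter survives two, and after a century only a fraction of a percent of the original nuclei remain.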

What's interesting, and ultimately useful in our quest for random numbers, is that even though we're absolutely certain that if we start out with, say, 100 million atoms of Krypton-85, 10.73 years later we'll have about 50 million, 10.73 years after that 25 million, and so on, there is no way even in principle to predict when a given atom of Krypton-85 will decay into Rubidium. We can say that it has a fifty/fifty chance of doing so in the next 10.73 years, but that's all we can say. Ever since physicists realised how weird some of the implications of quantum mechanics were, appeals have been made to “hidden variables” to restore some of the sense of order on which classical physics was based. For example, suppose there's a little alarm clock inside the Krypton-85 nucleus which, when it rings, causes the electron to shoot out. Even if we had no way to look at the dial of the clock, it's reassuring to believe it's there—it would mean that even though our measurements show the universe to be, at the most fundamental level, random, that's merely because we can't probe the ultimate innards of the clockwork to expose its hidden deterministic destiny.

But hidden variables aren't the way our universe works—it really is random, right down to its gnarly, subatomic roots. In 1964, the physicist John Bell proved a theorem which showed hidden variable (little clock in the nucleus) theories inconsistent with the foundations of quantum mechanics. In 1982, Alain Aspect and his colleagues performed an experiment to test Bell's theoretical result and discovered, to nobody's surprise, that the predictions of quantum theory were correct: the randomness is inherent—not due to limitations in our ability to make measurements. So, given a Krypton-85 nucleus, there is no way whatsoever to predict when it will decay. If we have a large number of them, we can be confident half will decay in 10.73 years; but if we have a single atom, pinned in a laser ion trap, all we can say is that there's even odds it will decay sometime in the next 10.73 years, but as to precisely when we're fundamentally quantum clueless. The only way to know when a given Krypton-85 nucleus decays is after the fact—by detecting the ejecta. A Krypton-85 nucleus which has “beat the reaper” by surviving a century, during which time only one in a thousand of its litter-mates haven't taken the plunge and turned into Rubidium, has precisely the same chance of surviving another hundred years as a newly-minted Krypton-85, fresh from the reactor core.
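
This “no memory” property follows directly from the survival arithmetic: the fraction surviving t plus s years, divided by the fraction surviving t years, is just the fraction surviving s years, no matter how large t already is. A tiny check in C (again purely an illustration, using the same formula as the sketch above):

    /* Illustrative check of the memoryless property of radioactive decay. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double half_life = 10.73;                   /* Krypton-85, in years */
        double p100 = pow(0.5, 100.0 / half_life);  /* survive 100 years */
        double p200 = pow(0.5, 200.0 / half_life);  /* survive 200 years */

        printf("P(fresh nucleus survives 100 years)     = %g\n", p100);
        printf("P(survives 200 | already survived 100)  = %g\n", p200 / p100);
        return 0;
    }

The two printed probabilities are identical: having survived a century gives a nucleus no head start and no handicap for the next one.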

Bit from It

This inherent randomness in decay time has profound implications, which we will now exploit to generate random numbers—HotBits. For if there's no way to know when a given Krypton-85 nucleus will decay then, given a collection of them, there's no way to know when the next one of them will shoot its electron bolt and settle down to a serene eternity as Rubidium. That's uncertainty, with its origins in the deepest and darkest corners of creation—precisely what we're looking for to make genuinely random numbers.

If we knew the precise half-life of the radioactive source driving our detector (and other details such as the solid angle to which our detector is sensitive, the energy range of decay products and the sensitivity of the detector to them, and so on), we could generate random bits by measuring whether the time between a pair of beta decays was more or less than the time expected based on the half-life. But that would require our knowing the average beta decay detection time, which depends on a large number of parameters which can only be determined experimentally. Instead, we can exploit the inherent uncertainty of decay time in a parameter-free fashion which requires less arm waving and fancy footwork.

The trick I use was dreamed up in a conversation in 1985 with John Nagle, who is doing some fascinating things these days with artificial animals. Since the time of any given decay is random, then the interval between two consecutive decays is also random. What we do, then, is measure a pair of these intervals, and emit a zero or one bit based on the relative length of the two intervals. If the two intervals turn out to be the same length, we discard the measurement and try again, to avoid the risk of inducing bias due to the resolution of our clock.

To create each random bit, we wait until the first count occurs, then measure the time, T1, until the next. We then wait for a second pair of pulses and measure the interval T2 between them, yielding a pair of durations. If they're the same, we throw away the measurement and try again. Otherwise if T1 is less than T2 we emit a zero bit; if T1 is greater than T2, a one bit. In practice, to avoid any residual bias resulting from non-random systematic errors in the apparatus or measuring process consistently favouring one state, the sense of the comparison between T1 and T2 is reversed for consecutive bits.
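
In code, the procedure just described amounts to only a few lines. The sketch below is not the actual HotBits driver: the next_decay_time() function is a hypothetical stand-in which simulates the detector with software-generated exponential gaps so the example is self-contained, letting the comparison-and-reversal logic be seen on its own.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static unsigned long ticks = 0;

    /* Hypothetical stand-in for the detector interface: returns the tick
       count at which the next decay is registered.  Here the gaps are
       simulated in software; the real generator timestamps genuine
       radioactive decay events. */
    static unsigned long next_decay_time(void)
    {
        double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* uniform in (0,1) */
        ticks += 1 + (unsigned long)(-1000.0 * log(u));        /* exponential gap */
        return ticks;
    }

    /* Produce one random bit, or -1 if the two intervals are equal,
       in which case the measurement is discarded and retried. */
    static int hotbits_bit(int *reverse)
    {
        unsigned long t1, t2, t3, t4, T1, T2;
        int bit;

        t1 = next_decay_time();
        t2 = next_decay_time();     /* first pair:  T1 = t2 - t1 */
        t3 = next_decay_time();
        t4 = next_decay_time();     /* second pair: T2 = t4 - t3 */
        T1 = t2 - t1;
        T2 = t4 - t3;

        if (T1 == T2)
            return -1;              /* equal intervals: discard and try again */

        bit = (T1 < T2) ? 0 : 1;
        if (*reverse)               /* reverse the sense of the comparison on  */
            bit ^= 1;               /* alternate bits to cancel any systematic */
        *reverse = !*reverse;       /* bias in the apparatus                   */
        return bit;
    }

    int main(void)
    {
        int reverse = 0, bit, n = 0;

        while (n < 64) {            /* emit 64 example bits */
            bit = hotbits_bit(&reverse);
            if (bit >= 0) {
                putchar('0' + bit);
                n++;
            }
        }
        putchar('\n');
        return 0;
    }

The real generator, of course, takes its timestamps from the detector hardware on the COM1 port rather than from a simulation, but the compare, discard-on-tie, and alternate-reversal steps are exactly those described above.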

For example, you might worry about the fact that the intensity of the radiation source is slowly decreasing over time. Krypton-85's 10.73-year half-life isn't all that long. One half-life in the future, we'll measure T1 and T2 intervals, on the average, twice as long as today. This means, then, that even on consecutive measurements there is a small bias in favour of T2 being longer than T1. How serious is this? Well, expressed in seconds, the half-life is about 3.4×10⁸ and we receive count pulses at a rate of 1000 per second or so. So the time needed to perform the measurements to produce one random bit is on the order of 10⁻¹¹ half-lives, and T2 will then tend to be longer by a factor of the same magnitude. Since the inter-count interval is around a millisecond, this means T2 will be, on average, 10⁻¹⁴ seconds longer than T1. This is comparable to the long-term accuracy of the best atomic time standards and is entirely negligible for our purposes. The crystal oscillator which provides the time base for the computer making the measurement is only accurate to 100 parts per million, or one part in ten thousand, and thus can induce errors ten million times as large as those due to the slow decay of the source. (This is, again, unlikely to be a real problem because most computer clocks, while prone to drifting as temperature and supply voltage vary, do not change significantly on the millisecond scale. Still, jitter due to where the clock generator happens to trigger on the oscillator waveform will still dwarf the effects of decay of the source during one measurement.)
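
For those who like to check the arithmetic, here is a short program which reproduces the two estimates above: the measurement time as a fraction of a half-life, and the expected excess of T2 over T1 due to the decay of the source. The figures are the approximate ones quoted in the paragraph above (1000 counts per second, four counts per bit), not measured values.

    /* Back-of-the-envelope check of the source-decay bias estimate. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double half_life = 10.73 * 365.25 * 86400.0;  /* ~3.4e8 seconds */
        double rate      = 1000.0;                    /* counts per second, roughly */
        double interval  = 1.0 / rate;                /* ~1 ms between counts */
        double bit_time  = 4.0 * interval;            /* four counts per bit */

        /* Fractional slowdown of the count rate during one bit measurement. */
        double drift = log(2.0) * bit_time / half_life;

        printf("Measurement time, in half-lives:    %.1e\n", bit_time / half_life);
        printf("Expected excess of T2 over T1 (s):  %.1e\n", drift * interval);
        return 0;
    }

Both numbers come out at the orders of magnitude quoted above: about 10⁻¹¹ half-lives per measurement, and roughly 10⁻¹⁴ seconds of expected excess in T2.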

The eminent physicist John Archibald Wheeler has speculated that, at the deepest level, the universe is made of information, and that all the complexity we see from the subatomic to the cosmic scale is an emergent property of this underlying simplicity, just as the simplest computer can, given enough time, faithfully simulate physical processes far more complicated than itself. Wheeler calls this “it from bit”—matter, energy, and the universe as a whole may be the consequences of the exchange and processing of information. This may or may not be true, but in any case HotBits brings the converse to your virtual desktop: information generated by a fundamental, inherently unpredictable, subatomic process delivered directly to you over the Web. Bit from it!

Serving 'em Hot and Fresh…

Finally, how does your request for HotBits get processed? You request HotBits by filling out and transmitting a request form, which is sent by your World-Wide Web browser in HTTP to our Web server, www.fourmilab.ch. Your request form is processed by a CGI program written in Perl which, after validating the request, forwards it in HTTP format to a dedicated HotBits server machine which is connected to the HotBits generation hardware via the COM1 port. This machine is an 8-year-old Compaq 386/20 (since upgraded to a 486) which, despite being utterly reliable and having been instrumental in numerous software development projects over its distinguished career, is too slow and has insufficient memory and disc space to accommodate the ludicrously over-complicated and morbidly memory-obese applications being belched into our industry these days from that fountainhead of incompatibility and change for the sake of competitive advantage in Redmond, Washington.

Why the indirection? Timing the intervals between decay events without the kind of special-purpose hardware I used in my original 1986 design requires locking out interrupts and dedicating the CPU to measuring the time between counts, since otherwise other processes which use the CPU, even those as innocent as a screen saver, could introduce nonrandom periodicities in the bitstream. Dedicating a PC (which would otherwise be unused) permits us to prevent interrupts and obtain maximum-resolution measurements of the inter-count delays without compromising response time for requests from the outside world.

To provide better response, the dedicated HotBits server machine maintains an inventory of two million (256 kilobytes) random bits, and services requests from this inventory whenever possible. The server rebuilds inventory in the background, between user requests for HotBits. A real-time generation facility, which builds HotBits “while you wait”, bypassing the inventory, is available for specialised experiments, but access to this facility is presently limited to users within the fourmilab.ch domain. Some interesting applications of this may find their way onto the Web over the next year….

Interposing the main www.fourmilab.ch server also makes it possible to maintain a separate inventory of HotBits on that machine, refreshed by periodically drawing down the inventory on the HotBits server when it is full. A separate inventory on the Web server permits faster response to requests since there is no need to contact the dedicated HotBits machine as long as the request can be filled from local inventory. Further, it allows uninterrupted service even when the primary HotBits server is down for maintenance. The main server inventory is maintained by a HotBits Proxy Server running on that machine which communicates in the same HTTP protocol as the dedicated HotBits machine. Source code for the HotBits Proxy Server is in the public domain and is available for downloading, but was written to run on Solaris 2.5 using POSIX threads, and may prove difficult to get working on other versions of Unix.

Source code for the low-level driver that interfaces to the HotBits hardware is in the public domain and available for downloading. Once I got the HotBits driver program checked out under MS-DOS, I needed to get it talking over the local network. The most expedient way turned out to be hacking an interface into a simple HTTP server which appeared in the February 1996 issue of Microsoft Systems Journal in an article by David Cook. I downloaded Mr. Cook's program from the MSJ source code archive and it built and worked the first time. It only took a few hours and the twenty or thirty reboots it takes to develop any Windows program to add code to service HotBits requests. One advantage of using HTTP is that you can test your server without writing a special-purpose client program—all you need is a Web browser!

HotBits Main Page

HotBits Hardware Description

HotBits Software Driver


by John Walker
May, 1996