## Cellular Automata Laboratory

Cellular automata were invented in the late 1940s by Stanislaw Ulam (1909–1984) and John von Neumann (1903–1957). One can say that the “cellular” part comes from Ulam, and the “automata” part from von Neumann.

Ulam was primarily a mathematician. He invented the Monte Carlo simulation technique and the highly infinite “measurable cardinals” of set theory, and he made contributions to analysis and number theory. With Edward Teller, Ulam was the coinventor of the hydrogen bomb. Von Neumann was a still more wide-ranging mathematician. He did work in set theory, in the foundations of quantum mechanics, in economics, and in game theory. With Herman Goldstine, von Neumann designed the logical architecture of the first electronic computers.

In 1948 von Neumann read a paper called “The General and Logical Theory of Automata” to a symposium in Pasadena, California, and in 1949 he delivered a related series of lectures called “Theory and Organization of Complicated Automata,” at the University of Illinois. One of the questions he was concerned with in these talks was whether or not it would ever be possible for a machine, or “automaton,” to reproduce itself.

Usually a machine makes something much simpler than itself—consider a huge milling machine turning out bolts. Could a machine possibly fabricate machines as complicated as itself? Or is there some extramechanical magic to self-reproduction? To simplify the problem, von Neumann suggested that we suppose that our robots or automata are made up of a small number of standardized parts:

I will introduce as elementary units neurons, a “muscle,” entities which make and cut fixed contacts, and entities which supply energy, all defined with about that degree of superficiality with which [the theory of neural networks] describes an actual neuron. If you describe muscles, connective tissues, “disconnecting tissues,” and means of providing metabolic energy…you probably wind up with something like 10 or 12 or 15 elementary parts.

Using the idea of machines made up of multiple copies of a small number of standardized elements, von Neumann posed his question about robot self-reproduction as follows.

Can one build an aggregate out of such elements in such a manner that if it is put into a reservoir, in which there float all these elements in large numbers, it will then begin to construct other aggregates, each of which will at the end turn out to be another automaton exactly like the original one?

Using techniques of mathematical logic, von Neumann was then able to deduce that such self-reproduction should in fact be possible. His proof hinged on the idea that an automaton could have a blueprint for building itself, and that in self-reproduction two steps would be necessary: 1) make an exact copy of the blueprint, and 2) use the blueprint as instructions for making a copy of the automaton. The role of the blueprint is entirely analogous to the role of DNA in biological self-reproduction, where the DNA is both copied and used as instructions for building new proteins.

The complexity of a reservoir full of floating machine parts hindered von Neumann from making his proof convincing. The next step came from Stanislaw Ulam, who was working with von Neumann at Los Alamos during those years. Ulam's suggestion was that instead of talking about machine parts in a reservoir, von Neumann should think in terms of an idealized space of cells that could hold finite state-numbers representing different sorts of parts.

Ulam's first published reference to this idea reads as follows:

An interesting field of application for models consisting of an infinite number of interacting elements may exist in the recent theories of automata. A general model, considered by von Neumann and the author, would be of the following sort:

Given is an infinite lattice or graph of points, each with a finite number of connections to certain of its “neighbors”. Each point is capable of a finite number of “states”. The states of the neighbors at time t_n induce, in a specified manner, the state of the point at time t_{n+1}. One aim of the theory is to establish the existence of subsystems which are able to multiply, i.e., create in time other systems identical (“congruent”) to themselves.
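In modern terms, what Ulam describes here is exactly a cellular automaton. A minimal sketch in JavaScript, assuming a one-dimensional ring of two-state cells and an arbitrary example rule (the particular rule is an illustration, not anything Ulam specified):

```javascript
// Ulam's lattice model in miniature: a ring of cells, each in one of
// k states. The states of a cell's neighbors at time t_n determine
// its state at time t_{n+1} via a rule function.
function step(cells, k, rule) {
    const n = cells.length;
    const next = new Array(n);
    for (let i = 0; i < n; i++) {
        const left = cells[(i - 1 + n) % n];   // wrap around the ring
        const right = cells[(i + 1) % n];
        next[i] = rule(left, cells[i], right) % k;
    }
    return next;
}

// Example rule: the sum of the three-cell neighborhood, modulo k.
const sumMod = (l, c, r) => l + c + r;

let cells = [0, 0, 1, 0, 0];
cells = step(cells, 2, sumMod);   // one time step: [0, 1, 1, 1, 0]
```

The one-cell seed spreads to its neighbors in a single step, which already hints at the “multiplying subsystems” Ulam mentions.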

By 1952, von Neumann had completed a description of such a self-reproducing “cellular automaton” which uses 29 states. CelLab includes a modern, improved version of this rule under the name Langton. Von Neumann's CA work was not published during his lifetime; it seems that once he saw the solution, he became distracted and moved on to other things. Ulam continued working on a number of simpler cellular automata, publishing several papers on them during the early 1960s.

The next big event in CA history occurred in 1970. In the October, 1970, issue of Scientific American, the popular mathematics writer Martin Gardner's “Mathematical Games” column bore the subtitle “The fantastic combinations of John Conway's new solitaire game ‘life’.” John Conway, a mathematician at the University of Cambridge, had sent Martin Gardner a new mathematical game—the game of Life. Conway had selected Life's rules by a process of trial and error; his vague initial goal was to find a cellular automaton rule in which simple patterns could grow to large size, but in which it was not clear whether any patterns could grow forever. Gardner reported that:

Conway conjectures that no pattern can grow without limit. Put another way, any configuration with a finite number of counters cannot grow beyond a finite upper limit to the number of counters on the field. This is probably the deepest and most difficult question posed by the game. Conway has offered a prize of $50 to the first person who can prove or disprove the conjecture before the end of the year. One way to disprove it would be to discover patterns that keep adding counters to the field: a “gun” (a configuration that repeatedly shoots out moving objects such as the “glider”), or a “puffer train” (a configuration that moves about, leaving behind a trail of “smoke”).

The prize was won a month later by William Gosper and five fellow hackers at MIT; they sent Martin Gardner a telegram with the coordinates of the cells to turn on to make a glider gun. You can see Gosper's glider gun by loading the Life rule and selecting the pattern GlidrGun.

One must remember that 1970 was still the Dark Ages of computing; Conway himself ran his Life simulations by marking the cells with checkers or flat Othello counters. For Gosper and his team to get Life to run on a monitor at all was a nontrivial feat of hacking—it was a new thing to do with a computer. After Gardner's second column on Life, the game became something of a mania among computer users. By 1974, an article about Life in Time ([Time74]) could complain that “millions of dollars in valuable computer time may have already been wasted by the game's growing horde of fanatics”.

More and more intricate Life patterns were found all through the 70s, and by 1980, Conway had enough Life machinery at hand to publish a detailed proof that Life can be used to simulate any digital computation whatsoever.

A number of people at MIT began studying cellular automata other than Life during the 1970s. Probably the most influential figure there was Edward Fredkin. He had the idea for the two-state “Parity” rule under which EveryCell counts up the number of its firing neighbors and goes to 1 if the number is odd and to 0 if the number is even. The dimensionality of the space and the shape of the neighborhood are not crucial—in almost every situation, the Parity rule will make copy after copy of whatever starting pattern you give it. The copies are grouped into fractalized clusters. Although he himself holds no higher degrees, Fredkin was made a professor associated with the MIT Laboratory for Computer Science, and he directed a number of dissertations on Cellular Automata.
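A minimal sketch of the Parity rule as just described, assuming a two-state grid with wraparound edges and the four orthogonal neighbors (as the text notes, the exact neighborhood shape is not crucial):

```javascript
// Fredkin's two-state Parity rule: each cell counts its firing
// neighbors and goes to 1 if the count is odd, 0 if even -- in other
// words, the XOR of its neighbors. Edges wrap around (a torus).
function parityStep(grid) {
    const h = grid.length, w = grid[0].length;
    return grid.map((row, y) =>
        row.map((_, x) => {
            const n = grid[(y - 1 + h) % h][x]     // north
                    + grid[(y + 1) % h][x]         // south
                    + grid[y][(x - 1 + w) % w]     // west
                    + grid[y][(x + 1) % w];        // east
            return n % 2;   // 1 if odd, 0 if even
        })
    );
}

let g = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]];
g = parityStep(g);   // the single cell is replaced by four copies
                     // at its neighbors: [[0,1,0],[1,0,1],[0,1,0]]
```

Iterating this replication is what produces the fractalized clusters of copies the text describes.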

Around 1980, Fredkin formed the Information Mechanics Group at MIT along with Tommaso Toffoli, Norman Margolus and Gérard Vichniac. Soon after that he left MIT to live on Mosquito Island, a Caribbean island he owns as a result of some highly remunerative work he did on information processing in the 1960s. Fredkin has not published anything since then, though I have seen a draft of some notes he wrote for a Digital Information Mechanics workshop he held at Mosquito Island in 1982.

Fredkin envisions a new science where we represent all physical
quantities as packets of information. The substrate on which
these packets move is to be a cellular automaton. Not to put too
fine a point on it, Fredkin argues that, at some deep level, the
world we live in is a huge cellular automaton. Although Conway
had already expressed opinions to the effect that in a
cosmically large Life simulation one might see the evolution of
persistent patterns which are as intelligent as us, Fredkin was
the first to suggest that the world we live in really
*is* a CA.

By 1984, Toffoli and Margolus had nearly perfected the CAM-6 cellular automaton machine and were generating some publicity about it. In addition, Stephen Wolfram was publishing numerous articles about CAs. I was first hooked on modern cellular automata by [Wolfram84]. In this article, Wolfram suggested that many physical processes that seem random are in fact the deterministic outcome of computations that are simply so convoluted that they cannot be compressed into shorter form and predicted in advance. He spoke of these computations as “incompressible,” and cited cellular automata as good examples. His article included some intriguing color photographs of one-dimensional CAs.

The magazine Science 85 had hired me to do a number of articles for them in the past, so in April of 1985, I arranged with them to travel to Princeton and Cambridge to interview the new cellular automatists. The next section is adapted from the article I wrote, which eventually appeared as [Rucker87a]. As it turned out, an editor at Science 85 found cellular automata too esoteric, so in the end I sold the article to…a science fiction magazine.

We've been talking all afternoon and Stephen Wolfram is tired. On the computer screen in front of us, patterns are forming. We are watching the time-evolutions of various one-dimensional cellular automata. Some of the patterns are predictable as wallpaper, some are confusingly random, but just now there is one that strikes a pleasing balance between order and chaos. It's shaped like a pyramid, with red and blue down the sides, and with a symmetrical yellow pattern in the middle—a pattern vaguely like an Indian goddess.

“What's the number on that one?” asks Wolfram.

“398312,” answers Norman Packard, Wolfram's associate at the Institute for Advanced Study in Princeton.

“This is the way to do scientific research,” I remark. “Sit and watch patterns, and write down the numbers of the ones you like”.

“Oh, this isn't for science,” says Wolfram. “This is for art. Usually I just talk to scientists and businessmen, and now I'm trying to meet some artists. Wouldn't that last one make a good poster?”

A few days later and I'm with Charles Bennett, an IBM information-theorist visiting Boston University. Bennett has a TV coupled to a computer and two naked boards full of circuits and chips. One of the boards has two tiny green lights sticking up like eyes. The board with the lights, explains Bennett, serves as a source of random zeroes and ones.

“Watch this,” says Bennett. “The Game of Life starting from a primordial soup of bits. It's a rudimentary model of evolution.”

He fiddles with his machine and the TV screen lights up with a color flea-circus: this is the “soup”. And then, as the game of Life's transformation rules take over, the dots begin racing around, clumping into things like worms. The worms crawl around the screen, colliding and eating each other, casting off bits of stable debris.

“That's a glider gun,” says Bennett, pointing to a twinkling little dot-creature. A steady stream of smaller dot-patterns is pulsing out from the glider gun. “We've got sixty-five thousand pixels on this screen with sixty updates per second.”

Bennett shows me another pattern, one that looks like boiling red cottage cheese, and then he takes me across the Charles River to the MIT Laboratory for Computer Science. In the basement is an exuberant French scientist named Gérard Vichniac. He and an associate are watching a small rectangular dot-pattern in the center of their terminal's screen. The pattern is mostly white, with a small amount of red in it. The edges keep folding in on each other as the pattern evolves according to some simple rule which Vichniac made up. He calls it an “Ising Model,” but it looks like an electronic kaleidoscope. “This pattern started as a red square,” Vichniac tells me. “The rule is reversible, so we know that eventually the red square must come back. We've been watching it for eighty thousand steps now”.

Upstairs from Vichniac are two equally cheerful cellular automata specialists, Norman Margolus and Tommaso Toffoli. There's another journalist there, a guy from Rolling Stone. Cellular automata are hot. I introduce myself and sit down to watch the demonstration. Right now there's a central cloud of dots, with square little patterns flying out of it. On the sides of each of the square patterns are individual pixels that pump up and down.

“Look at the critters, Tom,” says Margolus. “They look like rowers, don't they?”

“Show him the square rock,” says Toffoli.

Margolus clears the screen and keys a big red square into the center. The square expands out to the edges of the screen and bounces back. As the bouncing continues, the patterns interfere and form complex checkerboard patterns, enough patterns in thirty seconds to have made a respectable one-man Op-Art show in the 1960s.

Toffoli pries Margolus away from the controls and takes over. “Now we do the square rock in the toroidal pond again, but this time we add a heat-bath, a cloud of random gas in the background.”

The background fills with darting dots, and Toffoli keys another big red square into the center. This time the waves are smooth and roughly circular, much like real waves in a real pond. We try it with two squares and get interference patterns. Toffoli is pleased. He says that this shows how simple physics really is.

What is going on?

For the past fifty years, scientists have thought of computers in
terms of a *series* of computations, to be carried out
successively. The idealized model for such computers was the
Turing machine: a device which moves back and forth along a long
strip of paper making marks. Turing's model led John von Neumann
to the key insight that got the computer revolution off the
ground: *a computer program should contain computing
instructions as well as data*.

One of the main changes expected from coming generations of
computers is that computers are going to begin doing
computations in *parallel*. A few such parallel computers
exist, such as NASA's seven million dollar Massively Parallel
Processor at the Goddard Space Flight
Center.
But these computers have yet to realize their full potential.
The problem is that there is still no simple model of parallel
computation; and there is still no good theory of how to program
a parallel computer.

Cellular automata may provide the necessary new mind tool for thinking about parallel computation. A striking feature of CAs is that their eventual output is so hard to predict. In practice, the best way to predict what pattern a CA will show in, say, a hundred steps, is simply to run the CA rule itself for a hundred steps. This suggests that the best way to “program” a parallel computer may be empirical: try out several million randomly chosen cell-programs, and select the one that accomplishes your goal the best.
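A toy illustration of that empirical strategy, assuming the “cell-programs” are Wolfram-style elementary one-dimensional rules (numbered 0 to 255) and using an arbitrary stand-in for the scoring goal:

```javascript
// Hypothetical sketch of the empirical approach: draw random rule
// numbers for an elementary 1D CA, run each one from a single seed
// cell, and keep whichever scores best against some goal function.
function runElementary(ruleNumber, cells, steps) {
    for (let t = 0; t < steps; t++) {
        const n = cells.length;
        cells = cells.map((_, i) => {
            // Pack the 3-cell neighborhood into a 3-bit index.
            const idx = (cells[(i - 1 + n) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n];
            return (ruleNumber >> idx) & 1;   // look up the new state
        });
    }
    return cells;
}

function searchRules(trials, score) {
    let best = { rule: -1, value: -Infinity };
    for (let t = 0; t < trials; t++) {
        const rule = Math.floor(Math.random() * 256);
        const start = [0, 0, 0, 1, 0, 0, 0];   // single seed cell
        const value = score(runElementary(rule, start, 10));
        if (value > best.value) best = { rule, value };
    }
    return best;
}

// Stand-in goal: prefer rules whose final row is about half ones.
const halfOnes = cells =>
    -Math.abs(cells.reduce((a, b) => a + b, 0) - cells.length / 2);
```

This is exactly the “run it to find out” property at work: the only way `searchRules` can evaluate a rule is to execute it.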

Probably the best-known CA worker is Stephen Wolfram, aged twenty-four. Wolfram was born in Oxford, and is said to have left home at the age of twelve. As a teenager, he published a number of papers on particle physics. He obtained his Ph.D. in physics from Cal Tech in 1980, won the MacArthur prize in 1981, and joined the Institute for Advanced Study in Princeton in 1982. And then, in the process of trying to find a model for the way in which galaxies form out of the universe's initially chaotic state, Wolfram became interested in cellular automata.

Stocky, tousled, and seeming a bit older than his years, Wolfram speaks with the directness of a man who knows what he is doing. “Computer scientists had done some fairly worthless clean-up work on Ulam and von Neumann's work. There were maybe a hundred papers. What I found outrageous was that none of the papers had any pictures.”

Wolfram's papers all have pictures, lots of pictures, usually pictures of one-dimensional cellular automata evolving in time. He recalls his initial investigations into one-dimensional CAs as “botanical.” He watched thousands and thousands of them on his computer until he got a feeling for what kinds of possibilities there were. He now feels that a number of very simple CAs can serve as universal computers. In Wolfram's words, “It is possible to make things of great complexity out of things that are very simple. There is no conservation of simplicity.”

One application of CAs is to the little-understood phenomenon of turbulence. “If we had a better understanding of how complex systems work, we could use them in engineering applications,” remarks Wolfram, and goes on to tell a story about the design of the DC-10 airplane. “The wing of a DC-10 is held on by a single steel bar. Two or three steel bars would probably be better, but for more than one bar the mathematics becomes too complicated for a simulation to be carried out. The weakness of our mathematics forces us to adopt the simplest possible design.”

I ask him what engineers think of his method of modeling turbulence with CAs. “Some say it's wrong, and some say it's trivial. If you can get people to say both those things, you're in quite good shape.”

Up at Boston University, Charles Bennett and the Hungarian
computer scientist Peter Gacs are using two-dimensional cellular
automata to model biological notions. Unlike a solid-state
computer, a human brain is filled with random noise. How is it
that we manage to remember things, and to think logically, when
all of our mental patterns are constantly being bombarded by
extraneous stimuli? Bennett and Gacs tell me they have found a
CA model for the process, and they show me the screenful of
boiling red cottage cheese. Despite the boiling, the cheese
stays mostly red: *this* is the persistence of memory.
Gacs says something very interesting about the device that
produces the display.

“With the cellular automaton simulator, we can see many very alien scenes. We have a new world to look at, and it may tell us a lot about our world. It is like looking first into a microscope.”

Computer science is still so new that many of the people at the cutting edge have come from other fields. Though Toffoli holds degrees in physics and computer science, Bennett's Ph.D. is in physical chemistry. And twenty-nine-year-old Margolus is still a graduate student in physics, his dissertation delayed by the work of inventing, with Toffoli, the CAM-6 Cellular Automaton Machine.

After watching the CAM in operation at Margolus's office, I am sure the thing will be a hit. Just as the Moog synthesizer changed the sound of music, cellular automata will change the look of video.

I tell this to Toffoli and Margolus, and they look unconcerned. What they care most deeply about is science, about Edward Fredkin's vision of explaining the world in terms of cellular automata and information mechanics. Margolus talks about computer hackers, and how a successful program is called “a good hack.” As the unbelievably bizarre cellular automata images flash by on his screen, Margolus leans back in his chair and smiles slyly. And then he tells me his conception of the world we live in.

“The universe is a good hack.”

During the years 1982–1986 I was living as a freelance writer in Lynchburg, Virginia. Eventually living in Lynchburg became unfeasible. I was broke and getting deeper in debt, while our children were needing braces and college. Even if it was peaceful and cozy in Lynchburg, the bandwidth always seemed way too low—where the “bandwidth” of some information source means the number of bits per second that it delivers.

What was really chafing on me the most was my strong sense that I was missing out on a great intellectual revolution: the dawn of computer-aided experimental mathematics. Fractals, chaos, cellular automata—it was everywhere. I clicked over the final switchpoint when I did the interviews to write my article on cellular automata. Those guys were having so much fun, looking at such neat things, and making up such great theories about what they saw! I decided to become one of them.

If you're already a mathematician, becoming a computer scientist is not so much a matter of new knowledge as a matter of new attitude. Born again. Willing to commit to the machine. By way of preparation, I wrote Mind Tools, a book which surveys mathematics from the standpoint that everything is information. So when I got a chance to interview for a job in the Mathematics and Computer Science Department at San Jose State University, I had thought enough about computers to give a good talk on information theory. They called to offer me the job on March 22, 1986, my fortieth birthday.

In The Unbearable Lightness of Being, Milan Kundera talks about “the frenzy of a forty-year-old man starting a new life.” That's how it was to move from Virginia to California with my wife and three kids and to start teaching computer courses. The very first semester, they gave me a course in Assembly Language to teach. I didn't know anything at all about assembly language; if the truth be told, I didn't even know what DOS was.

Fortunately there was another mathematician-turned-computer-scientist at SJSU who was teaching Assembly Language the period before me. His name is William Giles and he's a great teacher. I went to his classes and wrote down everything he said, and then I would teach that to my class. During the second semester I began to understand something about what I was doing, and I wrote my very first cellular automaton program, an assembly language textmode graphics display of a one-dimensional CA.

Margolus and Toffoli's CAM-6 board was finally coming into production around then, and I got the Department to order one. The company making the boards was Systems Concepts of San Francisco; I think they cost $1500. We put our order in, and I started phoning Systems Concepts up and asking them when I was going to get my board. By then I'd gotten a copy of Margolus and Toffoli's book, Cellular Automata Machines, and I was itching to start playing with the board. And still it didn't come. Finally I told Systems Concepts that SJSU was going to have to cancel the purchase order. The next week they sent the board. By now it was August, 1987.

The packaging of the board was kind of incredible. It came naked, all by itself, in a plastic bag in a small box of styrofoam peanuts. No cables, no software, no documentation. Just a three inch by twelve inch rectangle of plastic—actually two rectangles one on top of the other—completely covered with computer chips. There were two sockets at one end. I called Systems Concepts again, and they sent me a few pages of documentation. You were supposed to put a cable running your graphics card's output into the CAM-6 board, and then plug your monitor cable into the CAM-6's other socket. No, Systems Concepts didn't have any cables, they were waiting for a special kind of cable from Asia. So Steve Ware, one of the SJSU Math&CS Department techs, made me a cable. All I needed then was the software to drive the board, and as soon as I phoned Toffoli he sent me a copy.

Starting to write programs for the CAM-6 took a little bit of time because the language it uses is Forth. This is an offbeat computer language that uses reverse Polish notation. Once you get used to it, Forth is very clean and nice, but it makes you worry about things you shouldn't really have to worry about. But, hey, if I needed to know Forth to see cellular automata, then by God I'd know Forth. I picked it up fast and spent the next four or five months hacking the CAM-6.

The big turning point came in October, when I was invited to Hackers 3.0, the 1987 edition of the great annual Hackers' conference held at a camp near Saratoga, CA. I got invited thanks to James Blinn, a graphics wizard who also happens to be a fan of my science fiction books. As a relative novice to computing, I felt a little diffident showing up at Hackers, but everyone there was really nice. It was like, “Come on in! The more the merrier! We're having fun, yeeeeee-haw!”

I brought my AT along with the CAM-6 in it, and did demos all night long. People were blown away by the images, though not too many of them sounded like they were ready to a) cough up $1500, b) beg Systems Concepts for delivery, and c) learn Forth in order to use a CAM-6 themselves. A bunch of the hackers made me take the board out of my computer and let them look at it. Not knowing too much about hardware, I'd imagined all along that the CAM-6 had some special processors on it. But the hackers informed me that all it really had was a few latches and a lot of fast RAM memory chips.

I met John Walker at Hackers 3.0. He told me a little about Autodesk and we talked in fairly general terms about my possibly doing some work with them. A month or two later, John showed up at my house with Eric Lyons, the head of the Autodesk Technology Division. They were toting a 386 and a five megabyte Mandelbrot set movie they'd made. I showed them all my new CA stuff, and they more or less offered me a fulltime job. It was so sudden I wasn't really ready to think about it.

Spring of 1988 I taught Assembly Language again, and this time
just about all we did was write CA programs. The big revelation
I had about getting the programs to run faster was to have no
rigid preconceptions about what I wanted the program to do.
Instead I began to listen to what the machine and the assembly
language were telling me about what *they* wanted to do.
And what they wanted to do was to run the ASCII rule. Given the
structure of the textmode memory, it's the simplest possible 2D
cellular automaton that an 80*x*86 can do.

That same spring, I was teaching a special course on Cellular
Automata, and a custom chip designer called John Wharton had
signed up for it. I'd met him at the Artificial Life Conference
at Los Alamos in September, 1987, and he'd been at Hackers 3.0
as well. Wharton showed me how to use a stored lookup table for
rapidly updating a one-dimensional cellular automaton four cells
at a time. (I use this trick in **SoundCA**.) Wharton and I
talked a lot about how to make an inexpensive version of the
CAM-6…whether by cloning the hardware or by reinventing
the whole thing in software. The other students in my Cellular
Automata class were a big help as well. Mary Ammirato and Brent
Stewart in particular were very good at finding and naming new
patterns in Vote and Brain.
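One plausible form of Wharton's lookup-table trick (the details here are my reconstruction, not his code): to update four adjacent cells of a two-state one-dimensional CA at once, you need those four cells plus one neighbor on each side, six bits in all. A 64-entry table, built once from the rule, maps each six-bit window straight to the four new cell values.

```javascript
// Build a 64-entry table for updating 4 cells of a two-state 1D CA
// per lookup. Window bits b5..b0 are six consecutive cells, leftmost
// first; the table entry holds the 4 new values of the middle cells.
function buildTable(rule) {            // rule: (l, c, r) => 0 or 1
    const table = new Uint8Array(64);
    for (let w = 0; w < 64; w++) {
        let out = 0;
        for (let j = 0; j < 4; j++) {  // update the 4 middle cells
            const l = (w >> (5 - j)) & 1;
            const c = (w >> (4 - j)) & 1;
            const r = (w >> (3 - j)) & 1;
            out = (out << 1) | rule(l, c, r);
        }
        table[w] = out;                // 4 result bits per entry
    }
    return table;
}
```

Stepping the CA then means sliding a six-bit window along the row in strides of four cells: one table lookup replaces four rule evaluations, which is the speedup on a machine where memory reads are cheap and per-cell logic is not.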

The semester ended, and the nice rental house my family and I had initially lucked into got sold out from under us for $450,000. Looking for a new place to house us on a professor's salary, I realized that here in Silicon Valley I was really poor. I consoled myself by writing a lot more cellular automaton programs, a whole disk's worth of them, including a lot of 2D CGA ones. In June I heard that Eric Lyons was giving a talk on cellular automata at Autodesk. I went up, and after the talk I showed Eric my programs and asked if he and John had really meant it about offering me a job.

In July, John Walker mailed me a copy of his first version of
the MS-DOS-based simulator. This was a two-bit CGA-only
version, but it ran Brian's Brain maybe 30% faster than my best
hack at the same thing. *And it was externally
programmable.*

I began pushing really hard for the job. We went back and forth for a few weeks, and August 15 I started a three-month contract as a consultant at Autodesk. The main thing I did was to test out Stephen Wolfram's new mathematics program Mathematica™ on a Mac II that Autodesk lent me. The idea was to figure out what a PC 386 interface for Mathematica ought to look like, and to use Mathematica to find some interesting new graphics algorithms. I found all kinds of things, but that's another story.

When my consulting contract ran out in November 1988, Autodesk still wasn't quite sure about whether to really hire me fulltime. That's when I firmed up the idea that John Walker and I should pool all our CA knowledge and create a unified CelLab. That would be a specific thing I could do during my first year at Autodesk, to put together the CelLab. The deal was okayed, and to make my joy complete, John magnanimously agreed to put my name on the package cover.

As I write this, it's April 10, 1989, and we're planning to ship the product next month. The code seems to be all done, and when I finish this section the manual will be done too, modulo one more frantic round of corrections.

So, OK, Rudy, finish it.

When I look at how completely cellular automata have transformed my life in the last four years I can hardly believe it. The most exciting thing for me to think about is how CelLab is going to transform the lives of some of you who use it; and how you in turn will change the lives of others around you.

A revolutionary new idea is like an infection that's actually good for the people who get it. I caught cellular automata in 1985, and I've put them on CelLab so you can catch them too.

What happens next? It's up to you.

The first edition of CelLab was developed by Rudy Rucker and John Walker in 1988 and 1989, when both were working in the Autodesk research lab. The package was to be the first title in the “Autodesk Science Series”, which would use computer simulation to explore aspects of science and mathematics. The product was first shipped in June of 1989 at a suggested retail price of US$59.95, under the name Rudy Rucker's Cellular Automata Laboratory. Rudy went on to complete the second title in the Science Series, James Gleick's CHAOS — The Software, which used programs developed by Rudy and another Autodesk programmer, Josh Gordon, to illustrate aspects of James Gleick's bestselling book. CHAOS — The Software shipped in November of 1989. Rudy was working on the third title in the series, Artificial Life Lab, and John was developing the fourth, Home Planet, when Autodesk's management decided to close the research lab and terminate development of the Science Series. Rudy finished Artificial Life Lab, which was published as a book plus disk by The Waite Group Press in 1993. John released Home Planet as a freeware program in the same year, and the current version can be downloaded from this site.

The demise of the Science Series orphaned Cellular
Automata Laboratory, which disappeared from the market in
1994. Rudy and John explored the idea of a new edition with
several publishers, but none seemed to be interested. With the
advent of the World-Wide Web, software can be distributed at a
minuscule fraction of the cost of packaged software in the
1980s, so this seemed a natural way to get Cellular
Automata Laboratory back into the hands of creative
people interested in exploring massively parallel computing.
Re-launching a program developed almost a decade ago required a
modicum of work: a new cellular automata simulator that ran
under Windows was developed, the User Guide (originally a 265
page book typeset using LaTeX) was transformed into an HTML
document for the Web, and Java was added to the languages one
can use to define cellular automata rules, being ever so much
more with-it than Pascal, BASIC, and C. The Web edition of
CelLab was released in 1995, and attracted an
audience of *dozens* of enthusiastic users.

Eventually, as is so often the fate of software developed for closed, proprietary platforms, the MS-DOS and 16-bit Windows programs developed for CelLab stopped working on “improved” versions of Microsoft operating systems—death by Microsoft. Updating them to work on current versions of Windows would amount to almost a complete re-write, and that would only be jumping back onto the treadmill of constantly revising them for each successive release from the masters of strategic incompatibility. Further, it would limit the audience for the programs to users of a legacy, technologically inferior platform which is continually losing market share to better alternatives. (If you'd like to visit CelLab Classic, go right ahead—we never delete anything—but you're unlikely to find a machine which will run it.)

In 2017, John converted a number of resources on his Fourmilab site to replace applets written in the Java language, which is rapidly being abandoned as a means of developing interactive Web pages, with the native facilities of HTML5 and JavaScript, standards supported by most modern Web browsers on both desktop and mobile platforms. Because such applications run entirely within the browser, there is no need for users to download or install any software on their own machines, nor any worry about that software becoming incompatible with future operating systems they might install. A Web-based application will run on any machine or operating system on which a standards-compliant browser is available, and that's just about all of them.

The question was, would JavaScript and HTML5 canvas graphics
be fast enough to support the very demanding computation and
display which CelLab required? When it
was originally written in 1988 and 1989, CelLab
required every assembly language trick and tweak in the twisted
minds of clever programmers to squeeze out around one generation
per second from an IBM PC/AT (a 6 MHz 80286 machine), but in the
intervening years, typical personal computers, including mobile
platforms, have become around *five hundred times*
faster. That can make up for a lot of inefficiency in
programming language implementations. A prototype was built,
and the results were remarkable: without resorting to any
off-the-deep-end tricks or techniques which might risk
incompatibility, JavaScript and HTML5 could deliver stunning
performance, up to 100 generations per second for most cellular
automata rules on a modest laptop computer with no special
graphics hardware.

Several months later, WebCA is the result. It runs all of the rules
which the original MS-DOS and Windows graphics mode cellular
automata simulators ran, and many new rules have been added.
It is compatible with compiled rule definitions, patterns, and
color palette files used by the earlier programs. (User evaluators
[“own code”], which were written in 80*x*86
assembly language or as Windows DLLs, are not compatible, but all
of the user evaluators included with the earlier programs have
been re-written in JavaScript and work compatibly with the
earlier implementations.) Besides running in the browser on a
multitude of platforms, the other major innovation in WebCA is
that rule definition programs and custom user evaluators
can be written in JavaScript, which runs directly inside the
browser, so users need not install a separate
development environment for a programming language such as
C, Pascal, BASIC, or Java. (The ability to define rules in
Java and compile them into rule files has been retained, but
there is no need to use it—it is no more efficient than
JavaScript.) Now, to explore cellular automata on your own,
you need nothing more than a rudimentary text editor (such as
the “notepad” programs included with all operating
systems) and your Web browser.
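To give a flavor of what writing a rule in JavaScript involves, here is an illustrative sketch of the classic Conway's Life rule in plain JavaScript. Note that this is a self-contained toy, not the WebCA rule-definition API; the function name `lifeStep` and the array-of-rows grid representation are invented for this example.

```javascript
// Illustrative sketch only: Conway's Life in plain JavaScript,
// not the WebCA rule API.  "grid" is an array of rows, each an
// array of 0s and 1s; the map wraps around at the edges.
function lifeStep(grid) {
    const h = grid.length, w = grid[0].length;
    const next = grid.map(row => row.slice());
    for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
            let n = 0;          // Count the eight neighbors,
            for (let dy = -1; dy <= 1; dy++) {      // wrapping
                for (let dx = -1; dx <= 1; dx++) {  // toroidally
                    if (dx !== 0 || dy !== 0) {
                        n += grid[(y + dy + h) % h][(x + dx + w) % w];
                    }
                }
            }
            // Life rule: a live cell survives with 2 or 3 live
            // neighbors; a dead cell is born with exactly 3.
            next[y][x] = (n === 3 || (n === 2 && grid[y][x] === 1)) ? 1 : 0;
        }
    }
    return next;
}
```

Feeding a horizontal “blinker” (three live cells in a row) through `lifeStep` twice returns it to its starting configuration, the simplest oscillator in Life.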

So now it's finished, again, or at least at large once again, or
something. Ideally, CelLab will never be
*done*, not as long as folks continue to use it to
explore the world of cellular automata and share their
discoveries with other pioneers on this frontier of computing.

Programmers interested in how WebCA works and its development history may consult the Development Log, where all changes to the software are documented.