Throughout the second generation period operating system technology progressed toward the goal of squeezing more and more performance from computers. Early developments included spooling (from Simultaneous Peripheral Operation On-Line), which allowed a computer to process other work at full speed while devoting a small portion of its attention to running slow devices such as card readers and printers. Since many programs did not use the full capacity of the computer, but rather spent much of their time reading and writing much slower peripheral devices such as tape drives and magnetic drum and disc memories, operating systems were eventually generalised to allow concurrent execution of multiple jobs, initially in the hope of maximising the usage of scarce CPU and memory resources, and later with subsidiary goals such as providing more responsive service for small tasks while larger jobs were underway.
If a computer's time could be sliced or shared among a small number of batch jobs, why couldn't it be chopped into much smaller slices and spread among a much larger community of interactive users? This observation, and a long-standing belief that the productivity of computer users (as opposed to the productivity of the computer itself) would be optimised by conversational interaction with the computer, led to the development of timesharing systems in the 1960s. Timesharing promised all things to all people. To the computer owner it promised efficient utilisation of computing power by making available a statistical universe of demands on the computing resource which would mop up every last CPU cycle and core-second. It promised batch users the same service they had before, plus the ability to interactively compose their jobs and monitor their progress on-line. And it offered interactive, conversational interaction with the computer to a new class of users.
Computer facilities were imagined, in the late 1960s, as agglomerating into regional or national ``computer utilities'', paralleling the electric power industry, which would sell computing capability to all who needed it, providing access wherever telephone lines reached, and offering all users a common database which could grow without bounds.
The archetypal device for computer interaction in the third generation was the Teletype model 33 KSR. It is hard to explain to people who didn't first enter computing in the batch era just how magical this humming, clunking, oil-fume-emitting, ten-character-per-second device with the weird keyboard seemed. You could type in ``PRINT 2+2'', and almost instantly the computer would print ``4''. And most of all, you could imagine that device in your own home, linked by telephone to the computer whenever you needed it (the price of the hardware and the cost of computing in that age kept this a dream for virtually everybody, but it was a dream whose power undoubtedly contributed to its fulfillment, albeit through very different means).
The interactive character device, whether a slow printing terminal such as a teletype, or an ASCII ``glass teletype'' running at speeds of up to 960 characters per second, led to the development of conversational computing. The user types a line of input to the computer, which immediately processes it and responds with a reply (perhaps as simple as a prompt indicating it's ready for the next line). Many different flavours of this form of interaction were explored and coexist today, including the BASIC environment originally developed by Kemeny and his group at Dartmouth, editors such as TECO and its many derivatives (VI among them), the Project MAC timesharing environment whose influence is everywhere in the industry (including in Autodesk's own text editor), and eventually TOPS-10 and Multics (with their many derivatives including Unix and MS-DOS).
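The read-a-line, reply-immediately cycle described above survives today as the ``read-eval-print loop''. A minimal sketch in Python may make the shape of the interaction concrete; the tiny arithmetic evaluator, prompt string, and error message here are illustrative assumptions, not a reconstruction of any historical system:

```python
# Sketch of the conversational (read-process-reply) loop: the user types
# one line, the system processes it at once and answers, then prompts again.
# The arithmetic evaluator is a stand-in for a real command interpreter.

import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Evaluate a small arithmetic subset of expressions (+ - * /)."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError("unsupported expression")

def respond(line):
    """Process one line of input and return the system's reply."""
    try:
        return str(evaluate(ast.parse(line, mode="eval")))
    except Exception:
        return "?SYNTAX ERROR"   # terse replies were the norm of the era

def repl():
    """One line in, one reply out, until end of input or BYE."""
    while True:
        try:
            line = input("> ")
        except EOFError:
            break
        if line.strip().upper() == "BYE":
            break
        print(respond(line))
```

Typing `2+2` at the prompt yields `4`; an unrecognised line draws the kind of curt rebuke discussed below. Everything of interest happens in `respond`; the loop itself is deliberately nothing more than read, process, reply.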
The conversational mode of interaction was the Turing test made real--the user ``conversed'' with the computer, just as he might with another human on a teletype-to-teletype connection (or CB on CompuServe today), and if the computer's responses tended toward comments such as ``WAIT'', ``FAC ERR 400000044000'', or ``segmentation violation--core dumped'' rather than observations about relevant passages in Wittgenstein, well, with enough funding for the AI Lab and enough lines of Lisp, who knew?
Today it is fashionable to hold conversational computing in disdain, yet it achieved most of the goals that realistic observers expected of it. That it disappointed those whose expectations weren't grounded in the reality of what it was shouldn't be held against it--visionaries are always disappointed by reality (and therefore often lead the way to the next generation). In the guise of GE Timesharing, conversational computing introduced hundreds of thousands of high school students to computing, and many of these people now fill the ranks of the computing industry. The conversational model is almost universally the approach of choice of software developers, to the extent that Apple's own Macintosh Programmer's Workshop implements that model on a computer that is identified with another model entirely. The dominance of MS-DOS in the personal computer market and Unix in the technical workstation world (as well as the BASIC environment on most home computers, such as the Commodore and Atari 8 bit families) is testimony to the effectiveness and staying power of this mode of interaction. (It should be noted, however, that Unix in particular has in its shell programming facilities co-opted much of the second generation's job control languages, and a significant fraction of the power of Unix comes from integrating that approach with conversational interaction.)
Editor: John Walker