Von Neumann, another star from Göttingen, took the next important step in computation after Turing (check out my previous post for Turing's first step), so much so that, to this day, you can't talk about computers without talking about Von Neumann.
Which came first: the hardware, or the software and languages? Much like the chicken-and-egg problem, it seems like a fair question. Last time we saw that, even before computers existed in people's imagination, Turing managed to decouple the two elegantly. So it was only a matter of time before they grew and became specialised. I'd argue today, as a fair answer, that Von Neumann made hardware come first, in a way that conditioned the second forever. Let's analyse this problem.
The Von Neumann Architecture.
Computers as we know them essentially run on the Von Neumann architecture (set aside the variants, Harvard machines, stack machines, register machines... they're all just slight variations for this matter): an abstract machine that models a physical implementation of what we know as a computer.
It came to be known through the famous paper First Draft of a Report on the EDVAC, a truly remarkable paper for its time, in which Von Neumann (together with the EDVAC team) describes, in 1945, a draft of an abstract machine: a stored-program, binary machine, successor to the ENIAC, funded by and for the U.S. Army's Ballistics Research Laboratory. 1
What is it?
In §2.0 we can find a fantastic overview of the paper, where he divides the system into six parts:
- “Since the device is primarily a computer, it will have to perform the elementary operations of arithmetic most frequently”: a central arithmetic unit (CA) will have to exist, responsible for these operations, which today has evolved into the ALU. [§2.2]
- A Central Control (CC) part must be provided to sequence the instructions, as a general controller that ensures they are carried out. [§2.3]
- “Any device which is to carry out long and complicated sequences of operations must have a considerable memory”, called M, to remember the instructions, the tables of values of special functions, initial conditions of a problem, intermediate computations, etc. [§2.4, §2.5]
- An outside recording medium R, capable of holding input and output, independent of M, therefore decoupling memory from storage for performance reasons, and which can be produced by any means (typing, magnetic tape, punched cards, etc.), therefore decoupling the interface of R from its implementation. [§2.5]
- An input organ I, to transfer information from R into CC and M.
- An output organ O, to transfer information from CC and M back into R.
This is, essentially, what the Von Neumann architecture is all about.
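Putting the six parts together, a minimal sketch of the scheme might look like this (the class and method names are my own illustration; only the R/M/I/O labels come from the paper):

```python
# A toy model of the First Draft's six-part division of labour.
# This is a sketch of the *roles*, not of any real instruction set.

class VonNeumannMachine:
    def __init__(self, program, data):
        self.R = program + data   # outside recording medium: orders and data together
        self.M = []               # memory: instructions AND data share it
        self.pc = 0               # Central Control's notion of "current word"

    def load(self):
        """The input organ I: transfer words from R into M."""
        self.M = list(self.R)

    def step(self):
        """CC fetches one word from M; CA would execute it if it's an operation."""
        word = self.M[self.pc]
        self.pc += 1
        return word

    def dump(self):
        """The output organ O: transfer the contents of M back towards R."""
        return list(self.M)
```

The striking part is how little structure this needs: one shared memory, one counter, and organs to move words in and out.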
The interoperation of the parts
It's quite remarkable, if you're at all familiar with modern hardware, to notice how much we owe to Von Neumann. Quite advanced for his time, he was aware of so many details we're still fixing today!
The Central Arithmetic Unit, or today, the Arithmetic Logic Unit
The first and most important piece is the ALU, and the operations it requires. This mechanism always operates on two input elements, i and j, held in what we'd now build as registers, and outputs the result to a third register o. Addition and multiplication are essential, and you can implement subtraction and division on top of those two [§7], or implement them in dedicated hardware for performance reasons [§8], along with perhaps a square root operator for function interpolation. Just as important are a comparison operator, greater than [§11.4], binary-to-decimal and decimal-to-binary conversions, and input and output operations for the registers. The CA might pipeline instructions to improve performance: Von Neumann notices that operations like multiplication take many clock cycles, so pipelining is an easy improvement.
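To make the "build subtraction and division out of addition and multiplication" idea concrete, here is one classic scheme (my choice for illustration, not necessarily the exact construction in §7): subtraction as addition of a negation, and division via a Newton-Raphson reciprocal that uses only multiplies and subtracts.

```python
def subtract(a, b):
    # Subtraction via addition of the negation (real hardware would use
    # complement arithmetic; here we just add -b).
    return a + (-b)

def reciprocal(b, iterations=20):
    # Newton-Raphson iteration for 1/b using only multiply and subtract:
    #   x_{n+1} = x_n * (2 - b * x_n)
    # Converges quadratically whenever b * x0 lies in (0, 2).
    x = 0.01  # crude initial guess, valid for b in (0, 200)
    for _ in range(iterations):
        x = x * (2 - b * x)
    return x

def divide(a, b):
    # Division reduced to one reciprocal plus one multiplication.
    return a * reciprocal(b)
```

This is exactly the kind of trade-off §8 weighs: the software construction is free in hardware but costs many cycles, while a dedicated divider costs vacuum tubes.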
A global synchronising clock operates over the entire system, making the elements work at the ticks of the clock: Von Neumann even goes to great lengths discussing the frequency this clock should have so as not to tick faster than the relays and vacuum tubes that make up the other pieces of the architecture.
Memory, the bottleneck
If you've heard of RAM, or Random Access Memory, well, Von Neumann thought of it first. He acknowledges very early that memory would be the main performance bottleneck [§12.4]: discussing the best size for M and how to organise its layout, he argues that it's important for the memory to be organised in a random-access fashion, where every cell has an address you can refer to and fetch at no extra cost.
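The difference is easy to state in code: with random access, fetching cell k costs one step whatever k is; on a Turing-style tape, the head must first walk those k cells. A toy comparison (a sketch of the cost model, not a benchmark):

```python
def ram_read(memory, k):
    # Random access: one step, regardless of the address.
    return memory[k]

def tape_read(tape, k):
    # Tape-style access: the head walks to cell k one move at a time,
    # so the cost grows linearly with the address.
    pos, moves = 0, 0
    while pos < k:
        pos += 1
        moves += 1
    return tape[pos], moves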
The Central Control Unit, CC
A fascinating element of the system is the part responsible for control. A Central Control unit, CC, will fetch words from M one at a time and distinguish whether each word is an operation or data for an operation, explicitly specifying that data and instructions live together in the same memory [§14.1]. A word should probably be 32 bits in size, of which one bit is reserved for this distinction. Data words carry a second key bit to distinguish positive from negative numbers, and the remaining 30 bits represent a binary number, which he argues gives enough precision for all purposes.
CC will fetch the next word in the sequential order in which words are laid out in memory; two exceptions are allowed that transfer the connection to a different place in M. These are: fetching distant data, in which case control is transferred back immediately once the data is fetched; or a strict jump, in which case CC continues taking instructions in sequential order from the place jumped to [§14.3]. Just like modern hardware does it: a CA and a CC are essentially all a modern CPU is.
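A toy fetch loop in the spirit of §14: each word carries a tag bit saying whether it is an order or a number, CC walks memory sequentially, and a jump order transfers control elsewhere in M. The word format and opcodes below are invented for illustration; they are not the draft's actual coding.

```python
TAG_BIT = 1 << 31          # top bit of a 32-bit word: 1 = order, 0 = number

def word(op, arg):
    """Pack a toy order: tag bit, 7-bit opcode, 24-bit address field."""
    return TAG_BIT | (op << 24) | arg

def is_instruction(w):
    return bool(w & TAG_BIT)

def run(M):
    acc, pc = 0, 0
    while pc < len(M):
        w = M[pc]
        pc += 1                       # default: next word in sequence
        if not is_instruction(w):
            continue                  # a number, not an order: skip it
        op, arg = (w >> 24) & 0x7F, w & 0xFFFFFF
        if op == 1:                   # LOAD: fetch distant data, control returns
            acc = M[arg] & ~TAG_BIT
        elif op == 2:                 # ADD: fetch and accumulate
            acc += M[arg] & ~TAG_BIT
        elif op == 3:                 # JUMP: continue sequentially from arg
            pc = arg
        elif op == 4:                 # HALT
            break
    return acc
```

For example, `[word(1, 4), word(2, 5), word(4, 0), 0, 10, 32]` loads 10 from cell 4, adds 32 from cell 5, and halts with 42 in the accumulator: orders and numbers sitting side by side in the same M.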
A Universal Turing Machine
This machine, as described, is indeed a UTM.
- It has a tape with random access to improve performance, but that’s equivalent to a tape where you go left or right arbitrarily far as instructed by the tape to the control unit.
- It has a head that reads and moves along the tape, that is, the Central Control.
- It has a deterministic transition function, specified by the instructions defined in the ALU.
- It is Universal, it simulates every other TM: the description of a TM is modelled by the set of instructions living in M, that is, the tape, to be fed to the CC. It can conditionally branch, and it can arbitrarily modify its memory.
- And a last touch, it is easily reprogrammable! Change the source R, load the new source into M, and you can very easily change the whole simulation to a new TM!
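The ingredients the list leans on, conditional branching and the ability to rewrite memory, fit in a few lines. Here is a toy two-instruction machine (my own invented encoding, not EDVAC's) where the program lives in the same memory it manipulates; swap in a different M and you've reprogrammed the machine without touching the hardware:

```python
def execute(M):
    # M holds instructions and data side by side, as in the draft.
    pc = 0
    while M[pc][0] != "HALT":
        op, a, b = M[pc]
        if op == "DEC_JNZ":        # decrement cell a; jump to b if nonzero
            M[a] -= 1
            pc = b if M[a] != 0 else pc + 1
        elif op == "STORE":        # write constant a into cell b: the
            M[b] = a               # program may overwrite its own words
            pc += 1
    return M

# One program: count cell 3 down from 3 to 0 with a conditional jump.
countdown = [("DEC_JNZ", 3, 0), ("HALT", 0, 0), 0, 3]
# Another program, same machine: rewrite cell 2 in place.
rewrite = [("STORE", 7, 2), ("HALT", 0, 0), 0]
```

Conditional branch plus writable shared memory is all the "universality" machinery the argument above needs.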
Down the rabbit hole
Von Neumann contents himself with giving performance numbers: the vacuum tubes of his time were already a thousand times faster than a human neural synapse [§4.3], and techniques to reduce the computation needed for certain operations were easy to find and elegant to implement [§5.4]. He even said once in an interview that he'd never expect more than half a dozen computers in a country: even a visionary couldn't imagine how far we would go with computers!
But Von Neumann is not seeing the problem, or at least he's relegating it to something unimportant. “CC will fetch the next word” [§14.1], “The device will in general produce more numerical material than the final result” [§1.3], “There must be orders available to instruct CC to transfer control to any other desired point in M” [§14.3], and so on and so forth. His programs name the intermediate steps, structure their flow of control with far jumps and reads, and fetch instructions and data through a single bottleneck. Something is going on there, perhaps.
Next, we'll see how this model conditioned hardware and programming for times to come. And the criticism will come, perhaps, from the man who inflicted the most damage on these wounds. Stay tuned!
- Remember as well Von Neumann's involvement in the Manhattan Project, which, by the way, dropped its infamous atomic bombs just two months after the publication of this paper.