The machine language > assembler > cpu/computer layer is fascinating to me.
From the book, "In fact, just as we say that the machine language is designed to exploit a given hardware platform, we can say that the hardware platform is designed to fetch, interpret, and execute instructions written in the given machine language."
I'm curious about how machine language, assembler, cpu/computer evolved - historically and in practice.
Re: machine language > assembler > cpu/computer layer
I bet the answer can be put in "just" a couple of volumes :)
First of all, it depends on how far back you want to go. The earliest programmable computer design I know of is Charles Babbage's Analytical Engine.
Then there are purely theoretical models like the Turing machine and Church's Lambda calculus.
Programming the earliest electronic computers required physically rewiring them. The first program-controlled computer seems to have been Konrad Zuse's Z3, followed by the much better known ENIAC. From then on I think it's an explosion of options and ideas, developed in parallel or as explorations of alternative approaches.
Generally speaking, the more complex the machine language, the bigger the CPU needs to be. This means that the early computers supported quite limited instruction sets. That changed radically with the invention of the integrated circuit and the first single-chip CPUs, although it took IBM and the other mainframe manufacturers a few years to realize these were not just toys.
Then there's the RISC architecture revolution of the 1980s, which focused on machine languages suitable for compilers rather than for human programmers.
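To make the CISC/RISC contrast concrete, here's a simplified sketch (not taken from any specific real ISA) of how the same addition might look in the two styles. A complex, CISC-style instruction operates directly on memory, which makes the CPU's decode and execution logic more elaborate; the RISC-style version breaks the work into simple register-only steps that are easy for a compiler to generate and schedule:

```
; CISC-style: one instruction loads both operands from
; memory, adds them, and stores the result back
ADD [result], [a], [b]

; RISC-style: explicit load/store, arithmetic only on registers
LOAD  r1, a        ; r1 <- memory[a]
LOAD  r2, b        ; r2 <- memory[b]
ADD   r3, r1, r2   ; r3 <- r1 + r2
STORE result, r3   ; memory[result] <- r3
```

The RISC sequence is longer, but each instruction is simple and uniform, which is exactly the trade-off that made RISC machine languages a better compiler target.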