This seems quite confusing; I can't understand how a computer can ever get running, considering these questions:
1. If the actual hardware is built as described in the course, without simulators, where do you write the assembly language?
2. Assemblers are written in high-level languages, but how was the first assembler written, when there was just a pile of hardware and no high-level languages at all?
3. We can imagine binary code being loaded into the CPU, but who did this "load" job at the very beginning? The CPU? Which comes first, the chicken or the egg?
4. There seem to be many more questions like these; in what books or courses can I find the answers?
Thanks a lot!
There are a few ways to accomplish it. Today, when a new hardware architecture comes out, you do essentially what you do in these projects -- you develop your development tools on another platform that already exists.
But what you are essentially asking is how did this happen initially when there were no other platforms.
That is a process that is generally referred to as bootstrapping. Initially you have a piece of hardware that has no program in it, and you need to get a program into it. The program consists of nothing but a sequence of binary patterns loaded into successive memory locations. You can design hardware that lets you manually set a series of switches and press a button that loads the pattern established by the switches into the current memory location. The address of that location can either be set by another set of switches or by the output of a counter that, after you push the button, advances to the next memory location.
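The front-panel idea can be sketched as a toy model -- a few lines of Python standing in for real hardware (the function names and memory size here are made up for illustration):

```python
# Toy model of a front-panel loader: a bank of switches sets one word,
# and pressing the "deposit" button stores it and advances the counter.
RAM = [0] * 256          # tiny memory
address_counter = 0      # advances after each deposit

def deposit(switch_bits):
    """Store the word currently set on the switches, then advance."""
    global address_counter
    RAM[address_counter] = int(switch_bits, 2)
    address_counter += 1

# The operator toggles in a program one word at a time:
deposit("0000000000000010")   # e.g. @2 in Hack machine code
deposit("1110110000010000")   # e.g. D=A in Hack machine code
print(RAM[:2])
```

This mirrors how early machines such as the Altair 8800 really were programmed from the front panel.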
So now you just need to come up with the sequence of binary patterns that you need for your program, so you need an assembler. At first, YOU are the assembler and you write your program out in assembly (because it's easier for humans to do that) and then you hand assemble it into the binary opcodes that are needed and program your chip.
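As a concrete taste, on the course's Hack platform an A-instruction like @21 is just a 0 bit followed by the constant in 15 bits, so "being the assembler" for it means nothing more than converting a number to binary on paper. That single rule, as a sketch:

```python
# Hand-assembling a Hack A-instruction: @value -> 0 followed by the
# value in 15 bits (the leading 0 is what marks it as an A-instruction).
def assemble_a_instruction(line):
    value = int(line.lstrip("@"))
    return format(value, "016b")

print(assemble_a_instruction("@21"))   # → 0000000000010101
```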
So what's the first program that you write this way? A very simple assembler!
You need this assembler to be able to access some kind of input device, assemble the code it gets from there, and store the result in some kind of output device. So it's not just an assembler, but also needs some semblance of an operating system. Your first program probably has all of this in one program and you make it as simple as you can (since you have to hand assemble the code). You probably make it so that it looks for the program it is supposed to assemble at a fixed location in storage memory and writes the output to some other fixed location in storage memory. When you run it the input you provide is the assembly code for a slightly better assembler. You keep doing this until you have an assembler that supports the entire instruction set.
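A first-stage assembler along these lines can be sketched as follows -- this uses a hypothetical two-instruction ISA with made-up encodings, just to show the shape of the table-driven translation:

```python
# Sketch of a first-stage assembler for an imaginary machine:
# 8-bit words, 2-bit opcode in the high bits, 6-bit operand below.
OPCODES = {"LOAD": 0b01, "ADD": 0b10}   # tiny, hand-chosen subset

def assemble(lines):
    """Translate 'MNEMONIC operand' lines into binary words."""
    words = []
    for line in lines:
        mnemonic, operand = line.split()
        words.append((OPCODES[mnemonic] << 6) | int(operand))
    return words

source = ["LOAD 3", "ADD 5"]            # read from a fixed input location
print([format(w, "08b") for w in assemble(source)])
# → ['01000011', '10000101']
```

Each bootstrapping round feeds this program the source of a slightly bigger version of itself, with a few more entries in the opcode table.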
As you are doing this you are also developing a monitor program (think of that as a really primitive operating system) that provides just the bare minimum services you need to use your tools at the next stage of development.
This is, in general, a pretty long and painful process. Even if the N2T project wanted to put you through that (and there would be quite a bit of useful educational value in doing so), it would blow the scope and time frame of the project out of the water. Even more to the point, it would require the platform to support some kind of nonvolatile storage and a means to communicate with it.
One way to do this, at least initially, is to design a piece of hardware that loads data from some nonvolatile memory device (paper tape, whatever) into RAM. Then you run the program (which now looks for the source code at a fixed RAM location and writes the result to another fixed RAM location), and then you use that hardware to transfer the contents of RAM back out to nonvolatile memory. This approach would actually be a pretty small step up in the design of the N2T hardware and so extending the project to enable bootstrapping the assembler probably wouldn't be that hard.
If you mean you want to learn more about the bootstrapping process, I doubt you'll find it taught in any courses. At this point it is largely of historical interest; the need to verify the correctness and integrity of programs from a security standpoint has made it a topic of some interest today, but that is a very different context from what you are talking about.
Search the web for bootstrapping compilers or bootstrapping assemblers or for the history of compilers or similar things.
There is actually quite a bit of current interest in bootstrapping again.
There are a lot of people interested in having a reproducible and verifiable chain of compilation -- compiling the whole software stack from source, with no pre-compiled binaries. The idea is that any binary is basically unreadable, and therefore not trustworthy. Most of these projects are working back towards the start, writing smaller and smaller compilers, with the aim of having a tiny, hand-written machine-code program that does just enough to compile a simple compiler, which can then compile a more complicated compiler, and so on until there is enough to run standard software.
A search for 'reproducible builds' should turn up a lot of interesting material. The Guix project is worth a look; it aims at building a whole OS from the smallest possible binary bootstrap seed.
It's a great book -- many courses that teach The Elements of Computing Systems use it as a companion book. Charles Petzold is a very good author who makes anything he writes about very readable, even if I do blame him for a cracked rib sustained while reading his book on Windows programming.