Problems understanding Hardware/Software interface
Posted by cdrh
URL: http://nand2tetris-questions-and-answers-forum.52.s1.nabble.com/Problems-understanding-Hardware-Software-interface-tp4032625.html
I am having a hard time understanding the software/hardware interface. What I don't understand is how everything communicates with everything else and how it all connects. It feels like I'm missing something. Please help me get this straight.
At 2:15 in unit 4.1, Noam says, "We put the program/software inside the hardware and then it can act in a way that is according to the software we put there." By this, does he mean that the "program/software" is basically the instructions?
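To check my own understanding, here is a minimal Python sketch of what I think "putting the software inside the hardware" means (the setup and names are mine, not from the course): the program is just a list of 16-bit words sitting in instruction memory.

    # My mental model (an assumption, not from the course): "software" is
    # nothing but 16-bit instruction words loaded into the ROM of an
    # otherwise fixed machine.
    program = [
        0b0000000000000010,  # @2  : an A-instruction, load the value 2 into A
        0b1110110000010000,  # D=A : a C-instruction, copy A into D
    ]

    rom = [0] * 32768            # the Hack ROM: 32K words of 16 bits
    for address, word in enumerate(program):
        rom[address] = word      # this, I think, is "putting the software in"

Is that all there is to it, or does "putting the software there" involve something more?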
In week 2, we discussed and built the ALU. Its control bits select entries of a truth table that is hardwired into the circuit, and that is what provides our arithmetic and logical functionality.

Now back to the instructions/machine-language part. At 6:35 in unit 4.1, he says, "in this course, we are actually dealing with the hardware that runs the machine and with the software that immediately, directly operates above it" (see also the quote in paragraph 2). I do not get how you can have these RAW hardware components and then suddenly "put" this "software" layer onto them. Here's a confusing point: in unit 4.2, at 2:54, he talks about "machine operations" that provide logical/arithmetic operations and flow control. It seems as if machine language is somehow using the functionality of the ALU and PC and making them communicate with one another. How, though? We built those in two separate units; each is its own entity. How does machine language bridge this gap? (My best guess is sketched right after this paragraph.)
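Here is my guess at how the gap might be bridged, as a rough Python simulation (simplified: no memory access, no jumps; the bit positions follow the Hack C-instruction format from the lectures, but the code itself is my own sketch, not the actual CPU). The idea I'm testing: there is no third "decoder entity"; the instruction's bits are wired straight into the ALU's control inputs, and the PC just counts.

    def alu(x, y, zx, nx, zy, ny, f, no):
        # The week-2 ALU: six control bits select the function, nothing more.
        if zx: x = 0
        if nx: x = ~x & 0xFFFF
        if zy: y = 0
        if ny: y = ~y & 0xFFFF
        out = (x + y) & 0xFFFF if f else (x & y)
        if no: out = ~out & 0xFFFF
        return out

    rom = [0b0000000000000010,   # @2   (same toy program as above)
           0b1110110000010000]   # D=A

    def step(rom, d, a, pc):
        # One fetch/execute cycle as I imagine it (a sketch, not the real CPU).
        ins = rom[pc]
        if ins & 0x8000 == 0:        # A-instruction: the word itself is data
            a = ins
        else:                        # C-instruction: its fields are just wires
            c = [(ins >> b) & 1 for b in (11, 10, 9, 8, 7, 6)]  # zx..no
            out = alu(d, a, *c)      # instruction bits drive the ALU directly
            if (ins >> 4) & 1:       # the d2 destination bit: store in D
                d = out
        return d, a, pc + 1          # the PC counting up IS the flow control

    d = a = pc = 0
    for _ in range(2):               # run the two-word program
        d, a, pc = step(rom, d, a, pc)
    print(d)                         # -> 2: D=A executed, no "manager" chip

Is the real answer roughly this, that "machine language" is not a component at all but just the agreed-upon wiring between instruction bits and the chips we already built?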
The best analogy I can think of to describe my problem is this: the ALU and PC (and memory) are engineers who can each do certain tasks. The machine language is like their project manager/boss: it tells them what to do and coordinates them so their work comes together. I want to know how that PM is implemented. From my point of view, the engineers are somehow doing the PM's job even though they don't know how, because so far we've only built two components: the ALU and memory. Now I'm told that this machine language comes from nowhere and bridges the gap? Where is its hardware component?
And that's not the only problem. Who is decoding these mnemonics? The ALU and PC obviously can't do that. How does the computer know (at 8:38 in unit 4.1) that a mnemonic translates into a certain task? All we have so far is an ALU and memory. What is handling these decoding operations? (Again, my best guess is sketched below.)
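My current guess, which I'd like confirmed or corrected: no hardware ever decodes mnemonics. A separate program, the assembler, translates the text into bit patterns before the machine runs, so the CPU only ever sees 0s and 1s. A toy Python sketch of that idea (the two-entry tables are invented for illustration, not the full Hack tables):

    # Guess: mnemonics never reach the hardware. An assembler -- itself just
    # software -- turns text into bits ahead of time. Tiny example tables:
    COMP = {"A": "0110000", "D+A": "0000010"}   # a-bit + c1..c6 per mnemonic
    DEST = {"": "000", "D": "010"}              # d1 d2 d3

    def assemble_c(line):
        # Translate e.g. "D=D+A" into a 16-bit Hack C-instruction string.
        dest, comp = line.split("=") if "=" in line else ("", line)
        return "111" + COMP[comp] + DEST[dest] + "000"  # jump bits: 000

    print(assemble_c("D=D+A"))   # -> 1110000010010000

If that's right, the "decoding" of mnemonics is done by software running earlier, not by any chip, which would explain why I can't find its hardware component. Is it?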
Another problem: how is the two's-complement system implemented? How does the 16-bit computer know that 0000 0000 0000 0001 is ONE? Who is telling it that? It seems like we just built hardware and said, "hey, here's this system called two's complement; it's a translation of our decimal system," and the computer somehow magically recognizes it. Same problem again: all we have is an ALU and memory. Who or what is doing this decoding?
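Here is the understanding I'm trying to confirm: nothing in the machine "knows" the system. A 16-bit adder simply adds modulo 2^16, and two's complement is our chosen reading of the bits, picked precisely because it makes that same adder handle negative numbers for free. A quick Python check of that claim:

    # Assumption I'm testing: the hardware never decodes two's complement.
    # The adder wraps at 16 bits; two's complement is OUR interpretation.
    def add16(a, b):
        return (a + b) & 0xFFFF          # plain binary addition, wraps around

    one       = 0b0000000000000001       # the pattern we CALL one
    minus_one = 0b1111111111111111       # the pattern we CALL minus one
    print(add16(one, minus_one))         # -> 0, so the reading is consistent

So is the answer that no decoding happens at all, and the "system" lives only in how we choose to label inputs and outputs?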
I cannot seem to understand anything that involves an encoding/decoding mechanism when all we have are two components with no decoding functionality.
I understand what these hardware components do conceptually; my issue is connecting all the dots. It feels as if we went from building hardware with very limited functionality to adding all these concepts and operations that seem impossible with THAT limited functionality. These are the serious issues I have in trying to understand the whole, big picture. There are more, but I think if I get these, the rest will make sense.