Problems understanding Hardware/Software interface


Problems understanding Hardware/Software interface

cdrh
I am having a hard time understanding the software/hardware interface. What I don’t understand is how everything communicates with one another and how it all connects. It feels like I’m missing something. Please help me get this straight.

@2:15 in unit 4.1, Noam says “We put the program/software inside the hardware and then it can act in a way that is according to the software we put there.” By this, do you mean that the “program/software” is basically the instructions?


So in week 2, we discussed and built the ALU. Those control bits, depending on how many we have, make up the truth table by hardwiring/circuits, which provides our arithmetic and logical functionality. Now moving back to the instructions/machine language part. @6:35 in unit 4.1, he says “in this course, we are actually dealing with the hardware that runs the machine and with the software that immediately operates above it.” Also back to the quote in paragraph 2. I do not get how you have these raw hardware components and then suddenly "put" this “software” layer onto them. Here’s a confusing point. In unit 4.2, @2:54, he talks about “machine operations” that provide logical/arithmetic operations and flow control. It seems as if machine language is somehow using these functionalities of the ALU and PC and making them communicate with one another. How, though? We built those in two separate units. Each is its own entity. How does machine language bridge this gap?

The best analogy I can think of to describe my problem is this: it seems like the ALU and PC (memory) are engineers, and they can do such and such tasks. The machine language is like their project manager/boss; it tells them what to do and helps them communicate to bring their work together. I want to know how that PM is implemented. The problem, from my POV, is that the engineers are somehow doing the PM's job even though they don't know how. Because so far, we've only built two components: the ALU and memory. Now I'm told that this machine language comes from nowhere and bridges this gap? Where is its hardware component?

That's not the only problem. Who is decoding these mnemonics? The ALU and PC obviously can’t do that. How does the computer know (@8:38 in unit 4.1) that a mnemonic translates into going and doing a certain task? All we have is an ALU and memory so far. Who is handling these decoding operations?

Another problem: how is the 2's complement system implemented? How does the 16-bit computer know 0000 0000 0000 0001 is ONE? Who's telling it that? It seems like we just built hardware and said "hey, here's this system called 2's complement; it's a translation of our decimal system", and the computer just somehow magically recognizes it. Again, same problem here. All we have is an ALU and memory. Who and what is doing this decoding?

I cannot seem to understand anything that has some encoding/decoding mechanism implemented when all we have are two components with no decoding functionality.

I understand what these hardware components are conceptually, but my issue is connecting all the dots. It feels as if we went from creating this hardware with limited functionality to adding all these concepts and operations that seem impossible with THAT limited functionality. These are serious issues I have while trying to understand the whole, big picture. There are more issues, but I think if I get these, the rest will make sense.





Re: Problems understanding Hardware/Software interface

ivant
First, some terminology:
ALU - Arithmetic and Logic Unit
PC - Program Counter
RAM - Random Access Memory
ROM - Read Only Memory
CPU - Central Processing Unit (or processor)
Register - a memory which can hold one machine word

The CPU contains one ALU, one PC, and two registers: A and D. It also has two kinds of connections between them: data connections and control connections.

The CPU "receives" an instruction on it's instruction input. It's just a 16-bit value. The CPU sets its internal control and data paths depending on the various bits in the instruction. For example, if the the leftmost bit (bit 15) is 0, this means that CPU should just treat the value as data and store it in register A. For this it has to set the load input of register A to 1, and its input to instruction bits. It also has to increment the value in the PC by 1. With this its work for this instruction is done.

For other instructions it may need to do more complex things, like using the ALU to add two numbers, or setting the PC to a specific value, etc. But it all depends on the current inputs and the values in its registers.

The CPU communicates with the outside world through its inputs and outputs. It receives data and instructions on its inputs, and it sends data and "commands" through its outputs. These commands are much simpler than the instructions. They just tell the RAM to load or store a value at a specific address, and they tell the ROM to read the data from a specific address (the PC).

The computer needs a few more parts besides the CPU: RAM for the data, ROM for the instructions, and a keyboard and display for input and output (I/O). We wire them so that the ROM receives its address from the pc output of the CPU and sends its data to the instruction input of the CPU.

We also connect the RAM, video and keyboard to the inM CPU input and the addressM, writeM and outM outputs.

Now let's say that the ROM contains some data, like

ROM[0] = 0000 0000 0000 0001
ROM[1] = 1110 1100 0001 0000

When we start the computer, the CPU's registers A, D and PC are all zeros. This means that the ROM will produce the data stored in ROM[0] and it will send it to the instruction input of the CPU.

The CPU will see this data and act accordingly: it will treat it as a value, store it in register A, and increment the PC. So:

A = 0000 0000 0000 0001
D = 0000 0000 0000 0000
PC = 0000 0000 0000 0001

Now this new PC value will be seen by the ROM and it will fetch the contents of ROM[1] and send them to the instruction input of the CPU.

The CPU will have to act according to this new instruction that just came in. It looks at its various bits and sees that:
1. This is a computation instruction (Instr[15] = 1)
2. We need to send the value of A to the ALU (Instr[12] = 0). The other ALU input is always the contents of D.
3. We send bits 6:11 to the ALU as its control bits:

ALU: x=0000 0000 0000 0001
ALU: y=0000 0000 0000 0000
ALU: control bits=110000

So the ALU outputs its x input: out = 0000 0000 0000 0001 (we ignore z and n for the moment).

4. The CPU stores the output of the ALU in register D (Instr[4] = 1)
5. It increments the PC because bits [0:2] are all zeros.

The effect of all this is that register D now contains the same value as A, and we are pointing to the next instruction:

A = 0000 0000 0000 0001
D = 0000 0000 0000 0001
PC = 0000 0000 0000 0010

These two instructions are an example of a small program (or software). They made the CPU store specific values in its internal registers. Further instructions may make it do something even more exciting, like adding some numbers, reading and writing values at specific addresses in RAM, or jumping to a different instruction instead of always going to the next one. By deciding what values we put in the ROM, we can make the computer do various things, for example draw an image or let you play Tetris. This is the essence of how software commands the hardware.
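To see the whole walkthrough in one place, here is a tiny Python model of it (again, only an illustration of the idea, not how the real CPU is built): the ROM holds two 16-bit words, and the "CPU" reacts to each one purely by looking at its bits.

ROM = [0b0000000000000001,    # ROM[0]: @1   (A-instruction)
       0b1110110000010000]    # ROM[1]: D=A  (C-instruction)

A = D = PC = 0
while PC < len(ROM):
    instr = ROM[PC]
    if (instr >> 15) & 1 == 0:     # bit 15 is 0: treat the whole word as a value
        A = instr
    else:                          # bit 15 is 1: a computation instruction
        # bit 12 = 0: the ALU operand comes from A (not M); control bits 11..6 = 110000
        # select the computation "A", so the ALU output equals the value in A
        alu_out = A
        if (instr >> 4) & 1:       # destination bit 4: store the ALU output in D
            D = alu_out
    PC += 1                        # jump bits 2..0 are zero, so just go to the next instruction

print(A, D, PC)                    # 1 1 2 -- the same register values as above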

Now, we people aren't very good with numbers like 1110 1100 0001 0000. We prefer to write something a little more understandable, so we give the instructions a "name", or mnemonic. For this instruction it is D=A. So we can write the same program like this:

@1
D=A

This is much clearer to us, and we can understand what's going on. But now the computer doesn't! It only understands binary sequences. One way to fix this is to manually translate the mnemonics to binary and store that in the ROM. But we are cleverer than that. We can write a small program which reads these mnemonics and translates them for us. We call this program an Assembler, and we call this language Assembly language. (Note that different CPUs have different assembly languages.)
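Just to show how mechanical that translation is, here is a toy "assembler" sketch in Python. It only knows the two lines used above; a real assembler handles the whole instruction set plus symbols:

C_CODES = {"D=A": "1110110000010000"}           # the only C-instruction we need here

def assemble(line):
    line = line.strip()
    if line.startswith("@"):                    # A-instruction: @value
        return format(int(line[1:]), "016b")    # the value itself, written as 16 bits
    return C_CODES[line]                        # C-instruction: a table lookup

for line in ["@1", "D=A"]:
    print(assemble(line))
# 0000000000000001
# 1110110000010000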

If this machine we're developing were the first computer in the world, we'd have to write this Assembler in assembly itself and manually convert it to binary. After that, we could use this program to do the conversion for us, and we'd never again have to manually convert assembly to binary.

Luckily we do have other computers, so we can use them to convert our assembly programs to binary. This is called cross-assembly, because the machine that does the job cannot execute the resulting binary. It's meant for the target machine.

Re: Problems understanding Hardware/Software interface

ivant
In reply to this post by cdrh
cdrh wrote
Another problem: how is the 2's complement system implemented? How does the 16-bit computer know 0000 0000 0000 0001 is ONE? Who's telling it that? It seems like we just built hardware and said "hey, here's this system called 2's complement; it's a translation of our decimal system", and the computer just somehow magically recognizes it. Again, same problem here. All we have is an ALU and memory. Who and what is doing this decoding?
You are closer to the truth than you realize! The computer doesn't know anything about two's complement, or about decimal numbers... or, for that matter, about binary numbers! It just "knows" about 0 (low voltage) and 1 (high voltage), and it has some gates which make it produce specific outputs. For example, we create a gate which has two inputs and two outputs, with the following truth table:

X Y | C S
0 0 | 0 0
0 1 | 0 1
1 0 | 0 1
1 1 | 1 0

We call it a half-adder, because 0 + 0 = 00, 0 + 1 = 01, 1 + 0 = 01 and 1 + 1 = 10. Does this mean that the computer now knows how to add? No. It is we who interpret this as addition. The computer wouldn't care if the last row produced 11 instead. It doesn't know how to add.

But we know how to combine gates like the half-adder and full-adder and AND, NOT, OR, MUX, etc. to make the machine act in ways we want. For example, we can create a circuit of 1 half-adder and 15 full-adders in such a way that if we give it two sequences of 16 binary digits, which we interpret as two 16-bit binary numbers, it will produce another sequence of 16 binary digits which, if again interpreted as a 16-bit binary number, is exactly the sum of the two inputs.
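Here is the same idea as a short Python sketch, just to show that "combining adders" is nothing more than chaining those truth tables bit by bit. (I use a full adder at every position; a full adder with carry-in 0 behaves exactly like the half-adder above.)

def full_adder(x, y, c):
    s = x ^ y ^ c                              # sum bit
    carry = (x & y) | (x & c) | (y & c)        # carry out
    return s, carry

def add16(a, b):
    out, carry = 0, 0
    for i in range(16):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        out |= s << i
    return out                                 # the final carry out is simply dropped

print(format(add16(3, 5), "016b"))             # 0000000000001000, i.e. 8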

So, great! We have a calculator which can add. Now we want to make it subtract as well. One way would be to create a separate circuit for subtraction. It would be more complex, because we'd need to handle borrows. Also, if the first number is smaller than the second, we'll have a negative result. How do we represent that?

(I'll switch to 4-bit numbers because 16 bits is too much writing.)
With 4 bits we can represent 2^4 = 16 different combinations. We can interpret these as numbers from 0 (0000) to 15 (1111). But we want to be able to represent negative numbers as well. We can use 2's complement representation, which goes like this:

1000  -8
1001  -7
1010  -6
1011  -5
1100  -4
1101  -3
1110  -2
1111  -1
0000   0
0001   1
0010   2
0011   3
0100   4
0101   5
0110   6
0111   7

So, numbers with their highest bit set to 1 are negative. To find a number's opposite, we compute its 2's complement: invert all the bits and add 1 to the result (ignoring the overflow if there is any). For example, the 2's complement of 1011 (-5) is 0100 + 1 = 0101 (which is 5). The two's complement of 0101 (5) is 1010 + 1 = 1011 (-5). What about 0? Its 2's complement is 1111 + 1 = 1|0000, or again 0, because we just ignore the overflow.

Why use this representation? Because it allows us to treat subtraction as addition. For example, instead of computing 5 - 3 we'll compute 5 + (-3).

0101 - 0011 =
0101 + 1101 =
1|0010  which is the representation of 2.

Another example: 3 - 6

0011 - 0110 =
0011 + 1010 =
1101 (-3)

So, if we choose to interpret the binary strings as numbers encoded in 2's complement, we can use the same adder to implement subtraction as well. Notice that we only had to add a small circuit to compute the 2's complement.
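Here is the same trick in a short Python sketch, on 4-bit values like the examples above:

MASK = 0b1111                                    # keep 4 bits, dropping any overflow

def negate(x):
    return (~x + 1) & MASK                       # invert all the bits, add 1, ignore overflow

def subtract(a, b):
    return (a + negate(b)) & MASK                # a - b computed with the same adder

print(format(subtract(0b0101, 0b0011), "04b"))   # 0010  (5 - 3 = 2)
print(format(subtract(0b0011, 0b0110), "04b"))   # 1101  (3 - 6 = -3 in 2's complement)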

Hopefully, I managed to clear things up for you. If you have more questions about that just ask and I'll try to answer.

Re: Problems understanding Hardware/Software interface

xedover
In reply to this post by cdrh
This is a very interesting question, and I doubt I could give a better answer than Ivant.

But I can recommend some other reading/videos. Charles Petzold's book "Code" is an excellent and easy-to-understand read on building a computer, starting from simple electromagnetic relays (like those used for telegraphs) and on (1) and off (0).

Then there's Ben Eater's YouTube series on building an 8-bit computer on breadboards. He has a great way of both explaining and demonstrating the complete process.

In short, the "software" are the one's and zero's that we input into the computer's memory, so that when the hardware (the memory, cpu, alu, etc.) fetches it, it then instructs the hardware to perform specific tasks. The definition of what those one's and zero's mean are somewhat arbitrary, in that its up to the designer of the system to decide what each combination means. Building the hardware is also somewhat arbitrary and is designed to perform specific task when given specific signals.

Take a look at an Altair 8800, for example. These old original computers had no screen monitors, no keyboards, no disk storage. The hardware is the guts of the machine (the chips, etc.). The software is what the user inputs into the memory via the front panel switches.

Re: Problems understanding Hardware/Software interface

cdrh
I've only briefly read through, but I just want to thank both of you fine gentlemen or women for the responses. I will get back to this soon once the holidays are over. Thank you again. I've been struggling with this for so long, and I hesitated to ask because I didn't know if my questions made any sense.

Re: Problems understanding Hardware/Software interface

MarinaP
In reply to this post by ivant
Ivant, thanks for this great reply.  

Here is another link talking about two's complement: http://mathforum.org/library/drmath/view/55728.html

Re: Problems understanding Hardware/Software interface

cdrh
In reply to this post by ivant
I’ve formatted this response by addressing your reply paragraph by paragraph. I want to apologize ahead of time for the number of questions that I have. I’m completely new to the whole computer science world.

Starting with paragraphs 1-2.
Putting it in a mathematical formula, you are saying the CPU = ALU + PC + (2) Registers. These (2) registers are connected via data connections and control connections? So essentially, the CPU is another chip on its own that has its own inputs and outputs? But it encapsulates the ALU, PC, and Memory (Registers) and their functionality?

I digress here. I know I haven’t finished the course in its entirety, and I don’t mean to criticize or come off in some kind of arrogant way (because god knows how people interpret tone through reading and processing text). But why haven’t we built the CPU yet? Is there a reason why you guys decided to talk about machine language first? Also, what about the control unit?

Paragraph 3-6
You said “The CPU "receives" an instruction on its instruction input”. Ok, now addressing the “receives an instruction” part: who/what is writing that instruction so the CPU can read it? Is it the ROM? If so, does that mean that when the ROM was created it was pre-loaded with the instructions? How does the CPU know what to do with these instructions (this question in particular probably has to do with the implementation of the CPU)? I’m guessing this might be where the control unit comes into play? I’ve read about the Fetch-Decode-Execute cycle. Is this what you are talking about?

Paragraph 7
I’m sorry, but I don’t know what you mean by “inM CPU input and addressM, writeM and outM outputs”. What does the ‘M’ stand for? Are you referring to the state of the selected register?

Paragraph 10-11
So here you say “This means that the ROM will PRODUCE the data stored in ROM[0] and it will send it to the instruction input of the CPU.” I don’t get what you mean by PRODUCE. This goes back to my question from above: “does that mean that when the ROM was created it was pre-loaded with the instructions?” If it is “produced”, how is it produced? Lastly, you say that it’s producing “DATA stored in ROM[0] and it will send to the INSTRUCTION INPUT of the CPU.” So is this DATA an instruction or a value? I’m confused because in Paragraph 11, you then say “it will treat it as value”. Is that because the leftmost bit is 0, as you stated in your example in Paragraph 3?

Paragraph 14
Also, in step 2, you said “The other ALU input is always the CONTESTS of D.” Did you mean “contents”?

Paragraph 20
Definitely a light bulb moment for me. I think I’m starting to get the picture. Thank you so much. You have no idea how confusing this was all to me.

Paragraph 23-25
Ok, this is where it gets confusing again for me. So at this point, we have the ROM, which is of some size ‘x’ and holds the instructions. We have a keyboard, mouse, and monitor connected. So are you saying we’ve fired up the computer and now we have to write the Assembler? You say, “we’ll have to MANUALLY convert it to binary”. How do we go about doing this “manual” process? Then you say we have other computers that do the translation for us, so is there another computer within our current computer that feeds the translated binary results to the “target machine”/our current computer?

Lastly, does machine language = instructions? Are they the same thing?

As for the 2's complement part, thank you so much. I finally have a better understanding of it now.

Thank you so much for this amazing response. I truly, truly appreciate it from the bottom of my heart.

Re: Problems understanding Hardware/Software interface

cdrh
In reply to this post by xedover
Thank you so much xedover. I am going to follow and build the computer from scratch using Ben Eater's videos. This is exactly what I was looking for.

Re: Problems understanding Hardware/Software interface

WBahn
Administrator
In reply to this post by cdrh
I think part of your problem is that you are trying to understand the entire forest when all you have seen so far is some of the trees. To use another analogy, it would be like taking a machine shop course in which you will build an entire engine from scratch and then, after the first couple of lessons where you build the pistons, connecting rods, and valves, getting ahead of yourself and being confused about how the valves know when to open. You haven't built enough of the machine yet to answer those questions. Continuing with that analogy, you are also essentially asking how the fuel needed to run an engine can possibly be delivered by a truck that has an engine in it when we haven't finished building an engine yet.

So perhaps this might help with some of the big picture difficulties you are having, at least long enough for you to get to the point where you have built enough of the machine and software to understand it far better.

Just like we can use a truck that has an engine to deliver the fuel and other things we need to make THIS engine, so too can we use other computers to write and compile the assembler that we will use to assemble programs for THIS computer. We don't have to do absolutely everything needed to make THIS computer using only what we have available with THIS computer at any given moment. That would be akin to walking into that machining class on day one and being told that before you can make a piston you have to make a lathe, and before you can do that you have to make a steel foundry, and before that you have to mine the ore, and before that you need to make some tools out of stones you find lying around, so let's get started.

Now, it is completely reasonable to ask how the first assembler was ever written if all assemblers today are written on computers using a high-level language and compiler and so forth. Fair question -- and one whose true answer is almost certainly very complicated and largely lost to history because, like so many things, computer technology has been a highly diversified, decentralized, and incremental evolutionary process. But here's what I hope you find to be at least a conceptually plausible route. At one point we had computers that were not stored-program computers. Instead, the "programs" on these computers were actually circuit boards that were hardwired to do a certain task, and we changed programs by physically swapping out boards. So when people started exploring stored-program computers, they had these other computers available that they could wire up a suitable board for and have it do some of the processing that the new computer wasn't ready for yet. Also, even if they didn't have that, writing the first version of an assembler directly in machine code is far from a herculean task, because the first one doesn't have to be a full-up assembler. It only has to be capable enough to be used to make a better second version of the assembler. It doesn't take long at all before you have all the software tools you need to write and assemble programs, using an evolutionary chain of software that never used anything but that hardware. Not the way you would like to do it, and not the way anyone ever had to do it, but it CAN be done.

It's sort of like the lathe in that machine shop. That lathe was built using a lathe, but if you walk the history back you see the common theme that a lathe with a given level of capability was first built using lathes of lesser capability and, at some point, was built with lathes that no one today would even recognize as such.

As for why they explore the machine language before building the CPU: well, they had to choose which one to present first, and that's what they chose. They could have chosen the other way and, if they had, people such as yourself would be asking similar questions going the other way -- how do we know we need to have this Mux at that point, and why do we route that signal there? The answer would be: because that's what's needed to carry out the instructions in the machine language that you haven't seen yet. In our case, the answer to many of the questions about why the machine language is the way it is is that this is what the hardware we haven't seen yet can do. In actuality, the machine language and the hardware design would have occurred at the same time in a very iterative process that eventually settled on what the authors chose. But presenting it that way, while interesting and instructive and suitable for courses focused more on the engineering process, would be a huge distraction given the educational goals that the authors are trying to achieve here, so they present them separately in such a way that how one influenced the other is largely ignored.



Re: Problems understanding Hardware/Software interface

ivant
In reply to this post by cdrh
cdrh wrote
Starting with paragraphs 1-2.
Putting it in a mathematical formula, you are saying the CPU = ALU + PC + (2) Registers. These (2) registers are connected via data connections and control connections? So essentially, the CPU is another chip on its own that has its own inputs and outputs? But it encapsulates the ALU, PC, and Memory (Registers) and their functionality?
Well, CPU = ALU + PC + (2) Registers is almost correct for this specific CPU. Real-world CPUs (especially modern ones) are much more complex. Also, there are other things in the CPU, like the control mechanisms and data buses.

If by "chip" you mean the things which we define in the HDL (like NOT, AND, Mux, etc.), then yes, you are correct. The CPU "encapsulates" the ALU, PC. in much the same way that the ALU "encapsulates" AND16, ADD16.

cdrh wrote
I digress here. I know I haven’t finished the course in its entirety, and I don’t mean to criticize or come off in some kind of arrogant way (because god knows how people interpret tone through reading and processing text). But why haven’t we built the CPU yet? Is there a reason why you guys decided to talk about machine language first? Also, what about the control unit?
This is a chicken-or-egg kind of problem. On the one hand, you are right: they could have continued building the CPU and the whole computer (which happens in the next chapter, BTW), and only then started writing programs in machine and assembly languages.

On the other hand, we have already built a lot of things and are about to put them together. There are many ways to do that: there are many processor architectures, which differ widely in things like instruction set, number and types of registers, addressing mechanisms, and so on. By showing you how programs for this computer would look and work, you're better able to see how to use these components to implement the whole thing.

In the real world this would be an iterative process. You'd start with some idea of how you want to be able to program your computer, then start implementing it, see some things you could do better or that are too hard to implement, and go back and change some things. And repeat.

cdrh wrote
Paragraph 3-6
You said “The CPU "receives" an instruction on its instruction input”. Ok, now addressing the “receives an instruction” part: who/what is writing that instruction so the CPU can read it? Is it the ROM? If so, does that mean that when the ROM was created it was pre-loaded with the instructions? How does the CPU know what to do with these instructions (this question in particular probably has to do with the implementation of the CPU)? I’m guessing this might be where the control unit comes into play? I’ve read about the Fetch-Decode-Execute cycle. Is this what you are talking about?
In the case of the HACK computer the instructions come from the ROM. But this is not really important to the CPU. What it needs to "know" is that there is an instruction on its inputs and how to execute it. How it gets there is for the computer designer to decide.

The execution of instructions is covered in the next chapter.

cdrh wrote
Paragraph 7
I’m sorry, but I don’t know what you mean by “inM CPU input and addressM, writeM and outM outputs”. What does the ‘M’ stand for? Are you referring to the state of the selected register?
These are the names of the inputs and outputs of the CPU chip. I thought you had already gone through that chapter when I wrote that explanation.

cdrh wrote
Paragraph 10-11
So here you say “This means that the ROM will PRODUCE the data stored in ROM[0] and it will send it to the instruction input of the CPU.” I don’t get what you mean by PRODUCE. This goes back to my question from above: “does that mean that when the ROM was created it was pre-loaded with the instructions?” If it is “produced”, how is it produced? Lastly, you say that it’s producing “DATA stored in ROM[0] and it will send to the INSTRUCTION INPUT of the CPU.” So is this DATA an instruction or a value? I’m confused because in Paragraph 11, you then say “it will treat it as value”. Is that because the leftmost bit is 0, as you stated in your example in Paragraph 3?
What is 1101001100101100? Is it an instruction or a value? It can be either, depending on the context in which it is used. If it comes in on the instruction input of the CPU, the CPU will assume it is an instruction and try to execute it. If it comes in on the data input, it will not try to execute it, even if it looks like a valid instruction.

You can think of the ROM chip as a memory which doesn't change (Read-Only Memory, ROM). When you ask what's at address 0, you'll get a specific 16-bit number. Another number (which could be equal or different to the first) is at address 1, and so on. And these numbers cannot be changed: you'll always get the same ones from the same ROM chip. But another ROM chip could contain different numbers.

In the case of the HACK computer, we put our programs in the ROM. We also put some data there (though it's in the form of instructions, like @256, which would load the number 256 in the A register).
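If it helps, you can picture the ROM as nothing more than a fixed lookup table: the address goes in, the pre-loaded word comes out, and nothing ever changes it. A Python sketch of the idea only:

ROM = (0b0000000000000001,    # ROM[0]: @1
       0b1110110000010000)    # ROM[1]: D=A

def rom_out(address):
    return ROM[address]       # same address in, same word out, every time

print(format(rom_out(0), "016b"))   # 0000000000000001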

cdrh wrote
Paragraph 14
Also, in step 2, you said “The other ALU input is always the CONTESTS of D.” Did you mean “contents”?
Yup.

cdrh wrote
Paragraph 23-25
Ok, this is where it gets confusing again for me. So at this point, we have the ROM, which is of some size ‘x’ and holds the instructions. We have a keyboard, mouse, and monitor connected. So are you saying we’ve fired up the computer and now we have to write the Assembler? You say, “we’ll have to MANUALLY convert it to binary”. How do we go about doing this “manual” process? Then you say we have other computers that do the translation for us, so is there another computer within our current computer that feeds the translated binary results to the “target machine”/our current computer?
I think WBahn's reply nicely explains this part. You can also read this thread for additional insights.

cdrh wrote
Lastly, does machine language = instructions? Are they the same thing?
More or less, yes. The machine language for a specific machine, or family of machines, is the set of instructions which this machine can execute, along with their semantics (description of what each instruction does).

Re: Problems understanding Hardware/Software interface

xedover
In reply to this post by cdrh
cdrh wrote
Thank you so much xedover. I am going to follow and build the computer from scratch using Ben Eater's videos. This is exactly what I was looking for.
I had so much fun building this using real hardware. Ben Eater explains how it all works so effortlessly and makes it all very easy to understand.

I haven't actually completed building mine yet, but I have watched all his videos, some of them many times over.

And I did that first, before discovering this nand2tetris course, which I think helped my understanding a lot. This course introduced a few new and different concepts from what I understood from Ben Eater's videos, but in the end, it's all complementary.

Re: Problems understanding Hardware/Software interface

cdrh
Hi, I'm back again. I have a different set of questions regarding this chapter, but I don't know if I should post in the same thread when the questions aren't directly related. Anyway, if you would like me to delete the reply and make a new thread, just let me know.

@10:15, unit 4.2, Noam talks about addressing modes. I understand the register part. I do not get direct and indirect addressing. He says “Sometimes we have direct access to memory”; what does that even mean? When would we not have direct access to memory? Moving forward, the picture shows R1 being added into M[200]. I’m assuming that it means the contents/data from R1 are being added to M[200]. Now he says “In which case we're telling the computer to directly address not just the register 1, but also a memory address that we just specified inside the command.” What does the term “directly address” mean? I’m sorry, but some of these terminologies are quite confusing to me because they are used synonymously and nebulously at times. And what are you accessing? The data of that cell or the address of that cell? From my understanding, "direct addressing" is when you take the data (not the address) from that RAM cell and write its value into the D-register, and "indirect addressing" is when you have to read the address of that RAM cell FIRST, and only then can you write its value into the D-register. Am I correct?

@7:01, unit 4.6, Noam is showing examples of typical operations. I’m confused by RAM[17] = 10. How does the machine acquire the constant 10? I thought doing @10 means A = RAM[10], and subsequently makes M = RAM[10], meaning that it’s just the address location. How is it pulling a raw value constant? I'm guessing it's already there from the instruction set of the HACK architecture.

Question about I/O devices. In the lecture, Noam talks about the screen memory map, and he says it's designated in the RAM. So when you go out and buy RAM, is there a certain section within the memory that is designated for the screen memory map? Or is there some kind of ROM inside the monitor that has the screen memory map, and then, once you plug it into the motherboard/GPU/computer, it writes its screen memory map to the RAM? Which is the correct understanding?

Off topic: in unit 4.5 @ 6:02, he talks about the screen memory map being 256 by 512, b/w, meaning there are 256*512 = 131,072 total pixels. I think there’s a mistake because he said it was 13,000. Just pointing that out if you guys want to fix it.

In unit 4.6 @ 9:20, Noam shows that the program is being stored/written into the instruction memory, the ROM. I’m confused here because I thought you can’t write to the ROM and can only read from it. So from my understanding, the ROM structure (architecture) is similar to RAM, with the difference being that RAM has cells that are empty and ROM has cells pre-loaded with the defined instruction set. My understanding is that the program Noam showed would be loaded into RAM, and the ROM is basically used as a way for the CPU to translate symbolic to binary. This is definitely a wrong understanding, so please correct me. Does the ROM also include the assembler? Agh… I’m so lost here.

Re: Problems understanding Hardware/Software interface

WBahn
Administrator
As a general rule, it's best to adhere to a "one topic per thread" rule, as the thread contents tend to become all jumbled as people respond to different parts, often without knowing that it contains multiple topics.

So why don't you copy this new stuff into a new thread and, after you've done that, I'll try to respond.

Oh, and I'm assuming that these 'units' you are referring to are video lectures? Where are they located? Coursera? Or someplace else? If possible, could you provide a link to them? Keep in mind that this Q&A forum is really a separate entity from the rest of the N2T universe (or perhaps it's more appropriate to say that it is off in its own neck of the galaxy).