Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
I'm going through the material on Machine Language a second time to acquire a better understanding of the material prior to getting my hands dirty on projects.

This morning, as I was getting ready to start studying the material, a question came to mind:

Why didn’t this chapter/week focus on machine language only; why introduce assembly now? Wouldn't there have been a benefit in thinking (and solving problems) in machine language only - prior to moving to assembly language?

I suspect it has something to do with the next chapter/week, "Computer Architecture". I also suspect that when I get through the entire course I will have a greater appreciation.

P.S. On "Module 4: Machine Language Roadmap" under "Key concepts":

op codes, mnemonics, binary machine language, symbolic machine language, assembly, low-level arithmetic, logical, addressing, branching, and I/O commands, CPU emulation, low-level programming.

I assume there was a typo and it should have been "logical addressing" and not "logical, addressing".

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ivant
Assembly is just a thin veneer on top of the machine language, so it's not worth spending a chapter on each. Also, it's hard for humans to write machine code directly, because every instruction is a big number. Having to write something like:
1111010010111110
1001001110010010
0010111010010000
...
for a whole chapter would be quite challenging but for the wrong reasons.
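
To see just how thin that veneer is, here is a toy sketch in Python -- a hypothetical fragment of an assembler, not the one you'll build later in the course, and the tables only cover the handful of mnemonics used in this thread -- that maps symbolic instructions to their 16-bit codes with nothing but lookups:

# comp field written with the a-bit folded in front of the six c-bits
COMP = {"0": "0101010", "1": "0111111", "D": "0001100", "M": "1110000", "D-1": "0001110"}
DEST = {"": "000", "M": "001", "D": "010"}
JUMP = {"": "000", "JNE": "101", "JMP": "111"}

def assemble(line):
    """Translate one Hack instruction into its 16-bit machine code."""
    if line.startswith("@"):                      # A-instruction: @value
        return format(int(line[1:]), "016b")      # the value as 16 binary digits
    dest, rest = line.split("=") if "=" in line else ("", line)
    comp, jump = rest.split(";") if ";" in rest else (rest, "")
    return "111" + COMP[comp] + DEST[dest] + JUMP[jump]   # C-instruction

for ins in ["@16384", "M=1", "D-1;JNE"]:
    print(f"{ins:10} -> {assemble(ins)}")

Each mnemonic is just a human-friendly name for a fixed bit pattern; the assembler does little more than table lookups and string pasting.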

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

WBahn
Administrator
In reply to this post by ouverson
There are a few different ways to organize the material. Yes, the present organization has you pull out of hardware-mode thinking into software-mode thinking for a chapter and then delve back into hardware-mode for Chapter 5. But I think this is outweighed by the benefit of seeing the objective before implementing it -- the work you do in Chapter 5 is to implement a CPU and computer architecture that realizes a specific instruction set architecture. You can't do that unless you know something about the set of instructions you are trying to implement.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
I think I'll hang fire on this one until I've finished my second lap with the material.

Maybe after I get through the material a second time I'll have a greater appreciation of why this jump was necessary.

Thank you both.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

WBahn
Administrator
By all means skip it if you want. The projects are set up so that, in theory, you can pick one at random and do it without having completed the prior projects -- at least in terms of not having to build on your completed work from prior projects. So if you think having the human-friendly assembly instructions at this point is a waste of time, then skip it and move on.

You may find that to be the case and, if not, you may realize (perhaps later) why having the human-friendly mapping of the 65 thousand plus machine code patterns to a very small handful of easily remembered assembly mnemonics is useful. But having the assembly certainly isn't required in order to move forward -- you can think in terms of bit fields and truth tables and get along just fine.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
No, I mean hang fire on the question, not the chapter/week!

I'm very happy with how the course is progressing.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
Quick question regarding the pop quiz on Branching:

My answer: NOT_EQUAL_TO_0
Correct answer: NOT_EQUAL_TO_1

https://www.screencast.com/t/79xdxyWDgGl

Here's the symbolic code with my comment:

  @R0
  D=M

  @WHAT_DOES_THIS_DO    
  D-1;JNE // if D-1 ≠ 0 goto (WHAT_DOES_THIS_DO) instruction

  @R1
  M=0
  @END
  0;JMP

(WHAT_DOES_THIS_DO)
  @R1
  M=1

(END)
  @END
  0;JMP

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ivant
Let's go instruction by instruction.

@R0 // A = 0
D=M  // D = mem[0]

@WHAT_DOES_THIS_DO // A = the address of the label
D-1;JNE // if D-1 != 0 Jump to the address contained in A.

NOT_EQUAL_TO_0 seems logical. But let's look at it a bit more:

D-1 != 0 is equivalent to D != 1. And D contains mem[0], so these instructions say: "if mem[0] is not equal to 1, jump to WHAT_DOES_THIS_DO".
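
If you want to double-check that equivalence mechanically, a two-line Python scratch test (not part of the course tools) does it:

for D in range(-3, 4):
    assert (D - 1 != 0) == (D != 1)   # the two conditions always agree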

So the official answer is indeed the correct one ;)

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

WBahn
Administrator
In reply to this post by ouverson
Part of the issue is establishing the right frame of reference. If we have a label like NOT_EQUAL_TO_0, the question that we have to answer first is WHAT is not equal to zero. We need a frame of reference.

Yes, the JNE jumps to a label if the result of the instruction is anything other than zero. But is that the most useful frame of reference? Probably not -- what we are probably concerned about is not the value of (D-1) but rather the value of D.

We take the jump if and only if D is not equal to 1.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
In reply to this post by ivant
I had a feeling the answer would be staring me in the face!

P.S. I don't know how I'd get through this course without the timely, conscientious, and expert help. Muchas gracias!

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
In reply to this post by WBahn
What does the syntax (0x4000) and (0x6000) mean when discussing RAM[16384] and RAM[24576]?

I know these are the predefined memory addresses of screen and keyboard, but I'm not understanding the "(0x4000)" and "(0x6000)" syntax.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

WBahn
Administrator
ouverson wrote
What does the syntax (0x4000) and (0x6000) mean when discussing RAM[16384] and RAM[24576]?

I know these are the predefined memory addresses of screen and keyboard, but I'm not understanding the "(0x4000)" and "(0x6000)" syntax.
The "0x" is a common prefix used by many programming languages to indicate that the number is in base 16 (hexadecimal) instead of base 10 (our normal decimal).

Just like the number 387 in decimal is equivalent to

387 = 300 + 80 + 7 = 3×10^2 + 8×10^1 + 7×10^0

A number in another number base just replaces the '10' with whatever the number base is ('16' in this case).

So

0x4000 = 4×16^3 + 0×16^2 + 0×16^1 + 0×16^0 = 16,384
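
If you have Python handy, you can verify this directly, since it uses the same 0x prefix:

assert 0x4000 == 4*16**3 + 0*16**2 + 0*16**1 + 0*16**0 == 16384
assert int("4000", 16) == 16384    # same digits, parsed as base 16
assert int("6000", 16) == 24576    # the keyboard address, RAM[24576]
print(hex(16384), hex(24576))      # -> 0x4000 0x6000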

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
I guess I should have googled "0x6000"; though the question of why "0x" would have remained. The history of this convention is interesting.

Question regarding M=1 and which bit gets blackened:

Symbolic commands:

@SCREEN
M=1

Equals:

Binary commands:

0100000000000000
1110111111001000

I assumed the compute function M=1 puts the following 16 bits 0000000000000001 in memory location RAM[16384].

I ran it in the CPU Emulator.

It looks as if my assumption about M=1 is incorrect, as the left-most pixel of the word is black and not the right-most.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

WBahn
Administrator
ouverson wrote
I guess I should have googled "0x6000"; though the question of why "0x" would have remained. The history of this convention is interesting.
I don't know the specific history, but the general rationale is pretty simple. You want the lexer (tokenizer) to recognize all of the various forms of numeric values immediately, so when it sees a new token that starts with a digit it knows that it cannot be an identifier (this is why identifiers in virtually all languages can't start with a digit) and (in most languages) has to be a number of some kind. The second character then determines which subcategory of numeric value it is (in the case of integer literals -- tokenizing floating point values is a bit more complicated). The 'x' is from hexadecimal. Languages that support binary often use a 'b' in the second location.

One thing that is annoying (in C and some of the languages derived from it) is that if an integer starts with a 0 and then has a digit in the second location, the value is interpreted as being octal (base-8). Most C programmers learn this the hard way when they make a table of values in their code, use leading zeros to make the table nice and pretty, and then all hell breaks loose. Several more modern languages have eliminated that octal notation for this reason -- personally I would have preferred that they simply require something like a 'c' flag as the second character and continue supporting the base, as it is sometimes quite useful.
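
Python makes a handy sandbox for playing with this, since it uses the same second-character scheme but turned the bare-leading-zero octal trap into a syntax error:

print(0x10)   # 16 -- 'x' selects hexadecimal
print(0b10)   # 2  -- 'b' selects binary
print(0o10)   # 8  -- octal requires an explicit 'o' in Python 3
# The C-style pitfall is simply rejected:
#     >>> 010
#     SyntaxError: leading zeros in decimal integer literals are not permitted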

Question regarding M=1 and which bit gets blackened:

Symbolic commands:

@SCREEN
M=1

Equals:

Binary commands:

0100000000000000
1110111111001000

I assumed the compute function M=1 puts the following 16 bits 0000000000000001 in memory location RAM[16384].

I ran it in the CPU Emulator.

It looks as if my assumption about M=1 is incorrect, as the left-most pixel of the word is black and not the right-most.
The screen mapping is done so that as you go from left to right across the screen (i.e., increasing column index) you move through the bits of the value stored in the corresponding memory location in order of increasing significance. So the lsb maps to the left-most of the sixteen pixels mapped to that address. Not only does this make for a consistent conceptual relationship, it also results in a very simple mathematical relationship between the row/column indices and the address of, and bit location within, the value in memory that stores the state of the pixel. This is described in Section 4.2.5 of the text.
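
In code form, here is a small Python sketch of that Section 4.2.5 arithmetic (the function name and the dict standing in for RAM are just for illustration):

SCREEN = 16384                         # 0x4000, base of the screen map

def pixel_word_and_bit(r, c):
    word = SCREEN + r * 32 + c // 16   # 32 sixteen-bit words per 512-pixel row
    bit = c % 16                       # lsb = left-most of the word's 16 pixels
    return word, bit

ram = {}                               # stand-in for RAM
word, bit = pixel_word_and_bit(0, 0)   # top-left pixel of the screen
ram[word] = ram.get(word, 0) | (1 << bit)   # blacken just that pixel
print(word, ram[word])                 # 16384 1 -- exactly @SCREEN followed by M=1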

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
I'm still not understanding the following:

1. "(counting from LSB to MSB)"
2. @SCREEN, M=1

Please allow me to walk through these subjects:

1. "(counting from LSB to MSB)"

Let's say I want to blacken the pixel at r=8 and c=508

The word to operate on: RAM[16384 + 8*32 + 508/16] = RAM[16384 + 256 + 31] = RAM[16671]

The pixel to operate on: 508%16 = 12

When I look at my grid (where I have blackened the pixel on row 8 and column 508) I can see that the 12th pixel is blackened - counting from the left starting at 0.

Therefore, RAM[16671] should be assigned the value 0000000000001000

This is not (as I understand it) "(counting from LSB to MSB)" but the inverse MSB to LSB.

2. @SCREEN, M=1

How does the value "1" blacken the left-most pixel and not the right-most pixel in RAM[16384]? 1 in binary is 0000000000000001, not 1000000000000000.

I appreciate your help.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

WBahn
Administrator
ouverson wrote
I'm still not understanding the following:

1. "(counting from LSB to MSB)"
2. @SCREEN, M=1

Please allow me to walk through these subjects:

1. "(counting from LSB to MSB)"

Let's say I want to blacken the pixel at r=8 and c=508

The word to operate on: RAM[16384 + 8*32 + 508/16] = RAM[16384 + 256 + 31] = RAM[16671]

The pixel to operate on: 508%16 = 12

When I look at my grid (where I have blackened the pixel on row 8 and column 508) I can see that the 12th pixel is blackened - counting from the left starting at 0.

Therefore, RAM[16671] should be assigned the value 0000000000001000
Have you actually tried this? Does that pixel actually correspond to that value written to that address?

This is not (as I understand it) "(counting from LSB to MSB)" but the inverse MSB to LSB.

2. @SCREEN, M=1

How does the value "1" blacken the left-most pixel and not the right-most pixel in RAM[16384]? 1 in binary is 0000000000000001, not 1000000000000000.

I appreciate your help.
The pixels are mapped from LSB to MSB. If we write the bits according to the left-to-right pixels they control, then the 5th pixel from the left edge of the screen would be

 0000100000000000

Since we are going from LSB to MSB, this is

LSB -> 0000100000000000 <- MSB

So the value that this represents is 16.
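
Plugging your earlier example (r=8, c=508) into that rule gives bit 12 of RAM[16671], i.e. the value 2^12 = 4096 rather than 8. A quick Python check (just scratch work):

SCREEN = 16384
r, c = 8, 508
word = SCREEN + r * 32 + c // 16            # 16384 + 256 + 31 = 16671
value = 1 << (c % 16)                       # 1 << 12 = 4096
print(word, value, format(value, "016b"))   # 16671 4096 0001000000000000

Written in normal value order (msb on the left), that is 0001000000000000, not 0000000000001000.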

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
Oh, the leftmost bit is the LSB. I guess I was being too literal about LSB and MSB.

When Shimon did the demo he was arbitrarily turning the bits on and off and I never connected the dots on which bit (leftmost or rightmost) was the LSB.

Yes, I did play with the CPU Emulator, and when I entered 0000000000000001 (what I thought represented 1) and the top-left-most pixel turned black, it threw me off.

Also, the book had M=1, which set the top-left-most pixel to black.

Now that I know that 1000000000000000 corresponds to 1 within the context of the grid layout, it makes sense.

Am I tracking now?

P.S. This reminds me of the words debit and credit in accounting: debit means "left side" and not "negative/subtract"; the fact that we think about debit and credit within the context of your check register throws us off.

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

WBahn
Administrator
Normally when you write a binary value, the lsb is on the far right because that is how we have developed the positional numbering system we use.

If we write the bits in the order that they map to the pixels, THEN the bit that maps to the lsb is the far-left bit, which means that we should clearly label the msb and/or lsb in the diagram to communicate this unusual way of writing the value.

The alternative would be to write the bits in the normal value order (lsb on the right) and clearly label which pixel they correspond to. This might look something like this:

right pixel => 0000000000010000 <= left pixel

We could also draw a representation of sixteen pixels in the physical ordering on the screen and below that draw the representation of the sixteen bits in normal value order and then draw arrows between corresponding bits and pixels.
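
For what it's worth, even a throwaway Python snippet can print the two orderings side by side (using the value 16 from above):

value = 16                                   # bit 4 set
screen_order = "".join(str((value >> i) & 1) for i in range(16))   # lsb first
value_order = format(value, "016b")                                # msb first
print("pixels, left to right:", screen_order)   # 0000100000000000
print("value, msb to lsb:", value_order)        # 0000000000010000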


Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

ouverson
Thank you for explaining. I learned something new - sweet!

I read an article, "Understanding the most and least significant bit" - https://bit-calculator.com/most-and-least-significant-bit

At the end of the article, the author states, "It is also worth to note, that the abbreviations MSB and LSB (in capitals) stand for Most Significant Byte, and Least Significant Byte, for the same reason why MB stands for megabyte and Mb stands for megabit."

What are your thoughts on this naming convention? Is it followed much? Only by the pedantic? :)

Re: Why didn’t this chapter/week focus on machine language only; why introduce assembly now?

WBahn
Administrator
ouverson wrote
Thank you for explaining. I learned something new - sweet!

I read an article, "Understanding the most and least significant bit" - https://bit-calculator.com/most-and-least-significant-bit

At the end of the article, the author states, "It is also worth to note, that the abbreviations MSB and LSB (in capitals) stand for Most Significant Byte, and Least Significant Byte, for the same reason why MB stands for megabyte and Mb stands for megabit."

What are your thoughts on this naming convention? Is it followed much? Only by the pedantic? :)
As a rule I follow that naming convention, but when all I am talking about is bits (no bytes anywhere around) I tend to unthinkingly fall into using a much broader convention in which abbreviations are generally written all upper-case.

In cases where the context makes it clear which is being discussed, it's largely a non-issue. But in instances where both bits and bytes are being discussed, it is very useful and perhaps extremely important to make the distinction by following the convention.