Why do real PCs use a 32 bit architecture?

4 messages

Wojciech Morawiec
While constructing the 16-bit equivalents of the basic logic gates in chapter 1, I wondered about the following:
If a Not16 is implemented simply by duplicating the known Not structure 16 times and then using it as a single Not gate with sub-addresses, what reason is there to combine 32 of them into one bus in real PCs? This choice seems rather arbitrary to me, since I could just as well leave the Not gates separate or merge them into some other bus width, e.g. 10 or 23 bits.

Cheers,
Wojciech
Re: Why do real PCs use a 32 bit architecture?

cadet1620
Administrator
Wojciech Morawiec wrote
While constructing the 16-bit equivalents of the basic logic gates in chapter 1, I wondered about the following:
If a Not16 is implemented simply by duplicating the known Not structure 16 times and then using it as a single Not gate with sub-addresses, what reason is there to combine 32 of them into one bus in real PCs? This choice seems rather arbitrary to me, since I could just as well leave the Not gates separate or merge them into some other bus width, e.g. 10 or 23 bits.

Cheers,
Wojciech
The architecture size of a computer generally refers to the data transfer width used throughout the computer. The wider the data buses, the faster the computer can move data around, and hence the faster programs run. Bus widths are usually a multiple of the basic character width. We've pretty much settled on 8 bits as the basic character size, so 8-, 16-, 32-, and 64-bit buses are what's in use. In the past, when 6-bit characters were in use, 18-, 36-, and 60-bit systems existed.

Since we are not building physical ICs in this course, it's really just a convenience to create chips like Not16 so that we don't need to type 16 Nots into HDL whenever we want to invert a bus.

--Mark
Re: Why do real PCs use a 32 bit architecture?

JustinM
Makes me wonder why 128-bit systems aren't around. I imagine the cost doesn't match the benefit.
Re: Why do real PCs use a 32 bit architecture?

hevisko
In reply to this post by Wojciech Morawiec
Wojciech Morawiec wrote
While constructing the 16-bit equivalents of the basic logic gates in chapter 1, I wondered about the following:
If a Not16 is implemented simply by duplicating the known Not structure 16 times and then using it as a single Not gate with sub-addresses, what reason is there to combine 32 of them into one bus in real PCs? This choice seems rather arbitrary to me, since I could just as well leave the Not gates separate or merge them into some other bus width, e.g. 10 or 23 bits.
Well, when I started using and playing on computers, "real PCs" were only 8-bit (Apple //e, Commodore 64, etc.), but nowadays the "real PCs" are 64-bit.

The original IBM PC (8088) was also "8-bit", but actually 16-bit in other respects: any single code or data block was limited to 64 KB, but by using paging/segments the machine could reach 1 MB of total address space. The 8086 had a 16-bit data bus, while the 8088 had an 8-bit data bus; they shared exactly the same registers, instructions, and address bus, the memory was just "cheaper" when accessed 8 bits at a time instead of 16.

The big "8/16/32/64/128-bit" "mentality" (if I may call it that) is based on register sizes and the typical address space that follows from them. Another definition uses the width of the multiplication/division operands and result: a "real" 32-bit CPU takes two 32-bit operands for multiplication and produces a 64-bit result.

However, to confuse things totally, the nicer RISC-type CPUs (which issue multiple instructions per clock cycle, like the SPARCs) already had at least a 128-bit data bus (that was the 32-bit SPARCv8, as found e.g. in the SPARCstation 20), and I might be mistaken, but I thought the UltraSPARCs (64-bit, v9) had 256-bit data buses, since they needed four instruction words per clock tick. When you talk about the big irons (like the old Sun SPARC E25K), they moved multiple kilobytes per clock tick between memory modules and CPU cache modules (though they take multiple ticks to set up the communication that fetches the right memory blocks). So even here the term "data bus" is getting very weak/vague as a differentiator for "bit size" :)

Oh, on the 128-bit CPU subject: I recall there was an Alpha CPU in the pipeline with 128-bit registers etc. Unfortunately the Alpha CPUs never took off, which was a pity, as they would have been nice high-end competition for Intel.
simple geek enjoying un*xen