I might be asking this question a bit too early given my knowledge (I just started this course), but I'd still like to understand how the actual HDL code is "run" in the hardware simulator. I understand that the order in which the chips are described in HDL doesn't matter (they're just descriptions of the individual chips, after all), but I would expect that at least the order in which the signals are "sent" can matter - so how is this done?
I would assume that if there are, say, 2 inputs, this could work in some sort of depth-first way: simply take the inputs a and b, find all the chips that are connected directly to a and b and have no other inputs, and calculate their values. Then repeat on their outputs, or something like that. But I'm not sure this is physically accurate. I haven't given this much thought; I just wrote the above as an example, to describe what I'm trying to understand.
Either way, I'd like to know how the signals are pushed through in the hardware simulator, and why it is done this way.
Administrator
There are a number of ways in which a simulation like this can be carried out, some simpler than others. I'm not sure of the exact approach used by this particular simulator, but it is definitely at the simpler (and hence more restrictive) end of the spectrum.
One common way to carry out the simulation of combinatorial circuits is to simply cycle through all of the signals and update them until nothing changes.
Each signal has a single driver (i.e., it is connected to exactly one output pin, or is an input to the circuit overall). The simulator has to decide what to do with signals that either have no driver or that have multiple drivers. The best approach is to throw an error. I believe this simulator throws an error for multiple drivers, but it assumes that undriven inputs are a logic LO. This latter behavior is dangerous, and I believe it causes a lot of confusion and frustration among students who accidentally leave a signal floating and then get strange results, but don't realize it's because of this unrealistic behavior that the simulator imposes on floating signals.
But let's assume that each signal has exactly one driver.
Now put the chips in whatever order you want (the order they appear in the parts list is as good as any) and walk through the list, updating the signals each chip drives based on the current values of its input signals. Set a flag if any of the driven signals change from their prior values. When you are done with a pass, if the 'changed' flag is set, clear it and repeat the process. You keep doing this until the changed flag is still clear at the end of a pass.
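Here's a minimal sketch of that loop in Python (my own illustration; the part representation and all the names are invented, not taken from the actual simulator):

```python
# A sketch of the "iterate until nothing changes" relaxation described above.
# Each part is a tuple: (gate function, input signal names, output signal name).

def nand(a, b):
    return 0 if (a and b) else 1

def simulate(parts, signals):
    """Re-evaluate every part, pass after pass, until no signal changes."""
    passes = 0
    changed = True
    while changed:
        changed = False
        for fn, inputs, output in parts:
            new_value = fn(*(signals[name] for name in inputs))
            if signals[output] != new_value:
                signals[output] = new_value
                changed = True
        passes += 1
    return passes

# Example: Xor(a, b) built from four Nand gates.
parts = [
    (nand, ("a", "b"), "n1"),
    (nand, ("a", "n1"), "n2"),
    (nand, ("b", "n1"), "n3"),
    (nand, ("n2", "n3"), "out"),
]
# Internal signals start at LO, mirroring the floating-signal behavior
# described above.
signals = {"a": 1, "b": 0, "n1": 0, "n2": 0, "n3": 0, "out": 0}
print(simulate(parts, signals), signals["out"])  # 2 passes, out == 1
```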
This simulator probably uses this approach, and it does so in a fashion that assumes the circuit is statically stable even when all of the gates have zero propagation delay through them. This works provided there is no feedback from an output back through the circuit in such a way that it could affect itself. This means that you can't simulate fundamental-mode circuits such as latches, flip-flops, or oscillators. That's why this simulator has to provide a DFF part, in addition to a NAND part, as a primitive. You can build a DFF from NAND gates, but you need a more sophisticated simulator to simulate it.
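To see why feedback breaks this scheme, consider (continuing the hypothetical sketch above) a NOT gate whose output is wired back to its own input. Every pass flips the signal, the changed flag is always set, and the loop never terminates:

```python
# Hypothetical feedback case: an inverter driving its own input.
def inv(a):
    return 0 if a else 1

oscillator = [(inv, ("x",), "x")]
# simulate(oscillator, {"x": 0})  # never stabilizes; the loop spins forever
```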
Thanks for the reply.
I see, that really is a very simple way to do it, and if there are no loops, it will eventually stop.
An induction-based proof should work: assuming I'm updating an output pin A of some logic gate and all the input pins already carry their "final" signals, an update will give us the final signal of A as well. So if we define some sort of "distance" function f that gives us a bound on how many pins/signals back I have to go until I reach the basic inputs of the chip, then we could show that if f(G) is finite for a gate G (which it must be if there is no loop passing through G), then f(G) is a bound on how many updates we have to do until the output of G settles to a value that won't change after any further updates (based on the induction assumption that for any gate H before G, f(H) must be smaller than f(G)). That's not a really formal proof, just a guess at why it should work.
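Trying to make that a bit more precise (my own sketch, not something from the course): define the depth of a signal by

```latex
f(s) =
\begin{cases}
  0, & \text{if } s \text{ is a primary input,} \\
  1 + \max_{t \,\in\, \mathrm{fanin}(s)} f(t), & \text{if } s \text{ is driven by a gate,}
\end{cases}
```

and then induct on the number of passes k: after pass k, every signal with f(s) <= k holds its final value (its entire fan-in was already final throughout that pass), so the loop needs at most max_s f(s) + 1 passes, the last one just confirming that nothing changed.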
Still, I'd like to understand this stuff more precisely:
1. How do more sophisticated hardware simulators do this?
2. What is the physically accurate description of how this happens?
3. Why is the way this is simulated in sophisticated hardware simulators a physically accurate simulation? E.g., what types of chips that are (or could be) useful might not be possible to simulate with this kind of graph-based simulation?
Of course I see that this falls under electrical engineering, but I'd appreciate a reference book that covers this in a bit more detail. I don't really know what this field/subject is called, and I'd rather avoid going through a physics textbook that covers electricity and then a general electrical engineering book that covers this topic. Is there maybe a book that describes these things specifically (perhaps skipping over other topics concerning electricity, etc.) and presumes no prior physics/electrical engineering knowledge? Or would you please suggest a good starting point for what I should learn to understand these things better?
Administrator
I think the area you are looking for is the field of modeling and simulation, which is more of a computer science field than a physics or electrical engineering one (although those certainly come into play in the development of suitable device models).
At the end of the day, a digital circuit is made up of analog components, such as transistors, and can be simulated using a simulator such as SPICE (or one of its variants). The problem is that digital circuits quickly grow to contain millions of transistors, and simulating them as analog circuits becomes prohibitively slow.
Engineering is about compromises, so we look for models and simulation techniques that meet our needs, and this usually means sacrificing some things in order to have others. In this case we sacrifice the fine details of accuracy in order to get simulation results in a usable time frame. So we simulate the building-block digital circuits as transistor circuits, using full-up analog models in whatever level of detail we need in order to verify that they will meet certain performance specifications. Then we use simpler, faster models that mathematically describe the behavior embodied by the specification (or by the detailed performance results) to simulate the interactions between these building blocks in larger circuits.
There are many modeling and simulation techniques out there that sit at different places on that compromise spectrum. If we add a propagation delay from each input to each output, that allows us to simulate feedback circuits, but we have to assume that things like setup and hold times are not an issue. If they are (or could be), then we need to build that into the models. There are various ways to do that. One is to add additional states to our digital simulation, so instead of just LO and HI we might also have X (unknown). Some simulators also have RX and FX states (rising and falling through unknown levels).
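As a toy illustration (my own, not taken from any particular simulator), a three-valued NAND has to respect the fact that a LO on either input determines the output even when the other input is unknown:

```python
# Hypothetical three-valued logic; X means "unknown".
LO, HI, X = 0, 1, "X"

def nand3(a, b):
    if a == LO or b == LO:
        return HI   # a controlling LO forces the output, unknown input or not
    if a == HI and b == HI:
        return LO
    return X        # at least one input is X and neither is LO

print(nand3(LO, X))  # HI: the LO input dominates the unknown one
print(nand3(HI, X))  # X:  the output genuinely depends on the unknown input
```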
But these still don't capture details that are important for some circuits. For instance, what if two outputs are driving the same node, or a node is not being driven by anything? While we might like to say that this should never happen in a properly designed circuit, these are actually very common situations; it's how many memory elements work: we overdrive one signal with another signal in order to change the stored value, or we disconnect a signal and rely on the parasitic capacitance of the wire to maintain the voltage on the signal for some period of time. So we incorporate the notions of drive strength and charge storage time into our digital models and adapt the simulators to use them.
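A sketch of how drive-strength resolution might look (the strength levels and names are illustrative, loosely inspired by Verilog's strong/weak/high-impedance values):

```python
# Hypothetical resolution of multiple drivers on one node.
STRONG, WEAK, HIZ = 2, 1, 0

def resolve(drivers):
    """Combine (strength, value) pairs driving a single node."""
    best = max((s for s, _ in drivers), default=HIZ)
    if best == HIZ:
        return "Z"  # floating node: charge storage/decay rules take over
    values = {v for s, v in drivers if s == best}
    return values.pop() if len(values) == 1 else "X"  # contention is unknown

print(resolve([(STRONG, 1), (WEAK, 0)]))    # 1: strong overdrives weak
print(resolve([(STRONG, 1), (STRONG, 0)]))  # X: two strong drivers fight
print(resolve([]))                          # Z: nothing drives the node
```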