Understanding DFF behaviour

Understanding DFF behaviour

jbloggz
Hi,

I have been trying to understand the DFF behaviour and exactly how the hardware simulator "implements" it. From playing with it I can see that whatever the "in" value is when the time "ticks" (eg 4 -> 4+, rising edge?), this value will appear at "out" when the time "tocks" (eg. 4+ -> 4, falling edge?). Even if I change the value of "in" after the rising edge, this won't affect the "out" value at the falling edge.

The problem I am having is that I cannot find a physical implementation that behaves like this. I am assuming that the DFF is supposed to be as described here. The problem is that it states on that page "the Q output always takes on the state of the D input at the moment of the clock edge". The "out" takes the value of "in" at exactly the falling edge, not what the value of "in" was at the previous rising edge.

However, I did find on another page (look at figure 5.3.7) the exact behaviour that I was looking for, where the "out" is set at the falling edge to the "in" at the previous rising edge. Is there something different about this one that allows this behaviour?

Re: Understanding DFF behaviour

jbloggz
OK, so I think I've found the answer to my question. After looking at the Logisim diagram here, I see that when the clock signal is put into the master flip-flop, it is split into 2 signals which both go into an AND gate; however, one of those signals is put through a YES gate and then a NOT gate before reaching the AND gate.

This means that the output of the AND gate is always 0 (because the two signals going into the AND gate will be opposite), except for a very small time after the rising edge of the clock. This is due to the high signal from the clock arriving at the AND gate slightly before the inverted signal, which must first travel through 2 other gates. So for this brief period, there are 2 high signals at the AND gate and it outputs HIGH. So the master flip-flop can only be changed during this short time period.
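That narrow pulse can be sketched in a few lines of Java. This is a toy discrete-time model I wrote to illustrate the idea, not the simulator's code; the assumption is that every gate has a propagation delay of exactly one time step.

```java
// Toy discrete-time model of the edge-detector pulse.
// Assumption: each gate delays its input by exactly one time step.
public class PulseDemo {
    // A gate's output at time t reflects its input at time t - 1.
    static boolean[] delay(boolean[] sig) {
        boolean[] out = new boolean[sig.length];
        for (int t = 0; t < sig.length; t++) out[t] = sig[Math.max(0, t - 1)];
        return out;
    }

    // NOT gate: invert, with the same one-step delay.
    static boolean[] not(boolean[] sig) {
        boolean[] out = delay(sig);
        for (int t = 0; t < out.length; t++) out[t] = !out[t];
        return out;
    }

    // clock AND (NOT (buffer(clock))): high only briefly after the rising edge,
    // while the high clock has arrived at the AND but the inverted copy has not.
    static boolean[] pulse(boolean[] clock) {
        boolean[] inverted = not(delay(clock)); // buffer ("YES" gate), then NOT
        boolean[] out = new boolean[clock.length];
        for (int t = 0; t < clock.length; t++) out[t] = clock[t] && inverted[t];
        return out;
    }

    public static void main(String[] args) {
        boolean[] clock = new boolean[10];
        for (int t = 4; t < 10; t++) clock[t] = true; // rising edge at t = 4
        boolean[] p = pulse(clock);
        for (int t = 0; t < 10; t++)
            System.out.printf("t=%d clock=%b pulse=%b%n", t, clock[t], p[t]);
    }
}
```

In this model the pulse stays high for exactly two time steps, the combined delay of the buffer-plus-NOT path, which is consistent with the guess that the extra buffer exists to widen the pulse.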

It doesn't seem like the YES gate is necessary to me, however perhaps it is needed to increase the propagation delay so that the AND gate will output high for a long enough period? Is there any other reason to include it?

I think this is a very elegant design and am quite impressed by it. I am also happy that I now understand exactly how the DFF in the hardware simulator can be built. (And I also now understand why it is not possible to create the DFF in the hardware simulator using HDL, since it doesn't deal with things like propagation delay.)

I'd appreciate it if anyone who knows more about this stuff than me can let me know if this all makes sense...

Re: Understanding DFF behaviour

cadet1620
Administrator
What you are calling a YES gate is actually called a "buffer".  In real world hardware there is a limit to the number of inputs that you can connect to any particular output.  This is called "fanout".  Buffers typically are designed to have greater fanout than normal gates so they can be used in these special cases.

In the real world, buffers are almost never used to make short pulses as shown in the circuits you found.  That type of circuit is very unreliable.  Those sorts of short pulses occur accidentally in logic and must be taken into consideration when designing.  They're called "hazards".  See
    http://www.marksmath.com/tecs/glitch

Check out
    http://play-hookey.com/digital/sequential/d_nand_flip-flop.html
for a safe master-slave DFF.

The n2t DFF doesn't actually simulate the behavior of hardware implementing a DFF.  It is exactly what you described in your first post — state latched on rising edge, copied to output on falling edge.

Here is the DFF.java source file:
/********************************************************************************
 * The contents of this file are subject to the GNU General Public License      *
 * (GPL) Version 2 or later (the "License"); you may not use this file except   *
 * in compliance with the License. You may obtain a copy of the License at      *
 * http://www.gnu.org/copyleft/gpl.html                                         *
 *                                                                              *
 * Software distributed under the License is distributed on an "AS IS" basis,   *
 * without warranty of any kind, either expressed or implied. See the License   *
 * for the specific language governing rights and limitations under the         *
 * License.                                                                     *
 *                                                                              *
 * This file was originally developed as part of the software suite that        *
 * supports the book "The Elements of Computing Systems" by Nisan and Schocken, *
 * MIT Press 2005. If you modify the contents of this file, please document and *
 * mark your changes clearly, for the benefit of others.                        *
 ********************************************************************************/

package builtInChips;

import Hack.Gates.*;

/**
 * The DFF chip.
 */
public class DFF extends BuiltInGate {

    // The state (0/1) of the DFF.
    private short state;

    protected void clockUp() {
        state = inputPins[0].get();
    }

    protected void clockDown() {
        outputPins[0].set(state);
    }
}

--Mark

Re: Understanding DFF behaviour

jbloggz
Thanks for the reply and the explanation about 'glitches'. The safe DFF you linked to behaves differently to the built-in DFF, so is there a safe hardware implementation that behaves like the built-in DFF? If not, why would they create a non-realistic DFF? It doesn't seem necessary. It would actually be simpler to implement the proper DFF.

Would the hack computer still work if the dff was changed so that the input is copied to the output at clockDown()?
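For concreteness, the difference being asked about can be modelled in a few lines of Java. ToyDFF below is a hypothetical stand-in, not the simulator's BuiltInGate class; it only contrasts the built-in behaviour (out gets the state latched at tick) with the proposed change (out samples the current input at tock).

```java
// Toy model contrasting the built-in DFF with the proposed change.
// Not the simulator's class; clockUp/clockDown mirror tick/tock.
class ToyDFF {
    private final boolean sampleAtTock; // true = proposed behaviour
    private boolean state, in, out;

    ToyDFF(boolean sampleAtTock) { this.sampleAtTock = sampleAtTock; }

    void setIn(boolean v) { in = v; }

    void clockUp()   { state = in; }                      // tick: rising edge
    void clockDown() { out = sampleAtTock ? in : state; } // tock: falling edge

    boolean out() { return out; }
}
```

The two variants only disagree when the input changes between the rising and the falling edge; if the input is held steady across the whole tick-tock cycle, they produce identical outputs.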

Re: Understanding DFF behaviour

cadet1620
Administrator
The correct operation of an edge triggered flip-flop is to ignore the inactive clock edge.
Requiring that the input data be valid at the inactive clock edge would halve the maximum speed of circuits using that DFF.

It would be possible to build a DFF that would mimic the simulator's behavior.  The simplest way would be to use two DFFs in series, one positive-edge triggered and the other negative-edge triggered.  Real-world DFFs are available in both flavors.
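That two-DFF arrangement can be sketched with an idealized Java model (EdgeDFF and SimulatorStyleDFF are hypothetical names, and gate delays are ignored):

```java
// Idealized edge-triggered DFF: latches d only on its triggering edge.
class EdgeDFF {
    private final boolean triggersOnRise;
    private boolean q;

    EdgeDFF(boolean triggersOnRise) { this.triggersOnRise = triggersOnRise; }

    void onEdge(boolean rising, boolean d) {
        if (rising == triggersOnRise) q = d;
    }

    boolean q() { return q; }
}

// Positive-edge DFF feeding a negative-edge DFF reproduces the
// simulator's "latch at tick, appear at out at tock" behaviour.
class SimulatorStyleDFF {
    private final EdgeDFF first  = new EdgeDFF(true);  // positive-edge triggered
    private final EdgeDFF second = new EdgeDFF(false); // negative-edge triggered

    void tick(boolean in) { first.onEdge(true, in); }     // rising edge
    void tock()           { second.onEdge(false, first.q()); } // falling edge

    boolean out() { return second.q(); }
}
```

With this model, the output after tock is whatever the input was at the preceding tick, no matter how the input wiggles in between.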

why would they create a non-realistic dff?
Short answer: they were not trying for realistic; the simulator works with idealized abstract logic.

-Mark


Re: Understanding DFF behaviour

jbloggz
Is the inactive edge the falling edge? So that means that for a proper edge triggered dff, the input is copied to the output at the rising edge?

I still don't understand why they didn't just make the dff behave realistically... There doesn't appear to be any benefit to the way it was implemented.

If I was to change the built-in implementation to work like a proper edge triggered dff, would that cause issues?

Re: Understanding DFF behaviour

cadet1620
Administrator
jbloggz wrote
Is the inactive edge the falling edge? So that means that for a proper edge triggered dff, the input is copied to the output at the rising edge?
The nand2tetris sequential logic is documented to be falling-edge triggered (fig 3.5).  Therefore the rising edge is the inactive edge, because nothing is supposed to happen when it occurs.
I still don't understand why they didn't just make the dff behave realistically... There doesn't appear to be any benefit to the way it was implemented.

If I was to change the built-in implementation to work like a proper edge triggered dff, would that cause issues?
As I said, it's a bug.  And it doesn't cause any problems with the normal usage of the simulator—running the test scripts and verifying chip outputs.

The potential problem with fixing this bug is that since it affects how DFFs work, all test scripts that test circuits that might use DFFs will need to be retested and may need updating.

--Mark

Re: Understanding DFF behaviour

jbloggz
Thank you! That's exactly the answer I was hoping for. So it's actually a bug in the built-in dff implementation? I can understand it would be annoying to fix, since all the test scripts rely on it. However it would be nice if this was mentioned somewhere (or did I miss it?).

I might fix it and see how this affects anything.

Is the reason that it's harmless because the input never changes between the rising and falling edges in the simulator?

It's an interesting bug, because it seems like it was done on purpose. I'd love to hear the reasons behind it if there were any.

Re: Understanding DFF behaviour

cadet1620
Administrator
[I'm just a volunteer here so all my posts are just my opinion...]

jbloggz wrote
Thank you! That's exactly the answer I was hoping for. So it's actually a bug in the built-in dff implementation? I can understand it would be annoying to fix, since all the test scripts rely on it. However it would be nice if this was mentioned somewhere (or did I miss it?).
To my knowledge, there is no bug list.
I might fix it and see how this affects anything.
That would be interesting to know.
Is the reason that it's harmless because the input never changes between the rising and falling edges in the simulator?
Correct.  In normal use the test scripts make input changes and then do "tick, tock;".  Any input changes caused by output changes in the parts in the HDL will (bugs notwithstanding 8^) only happen on "tock".
It's an interesting bug, because it seems like it was done on purpose. I'd love to here the reasons behind it if there were any.
Again, I'm guessing.  I suspect that a programmer who was not a hardware specialist was told to implement a master-slave DFF and that's how he thought they worked.

--Mark

Re: Understanding DFF behaviour

jbloggz
I just recompiled the DFF.class to behave properly and it works as expected in the simulator:
| time | in  | out |
| 0+   |  0  |  0  |
| 1    |  0  |  0  |
| 1+   |  1  |  0  |
| 2    |  1  |  1  |
| 2+   |  0  |  1  |
| 3    |  0  |  0  |
| 3+   |  1  |  0  |
| 4    |  1  |  1  |
| 4+   |  1  |  0  |
| 5    |  0  |  0  |
| 5+   |  0  |  0  |
| 6    |  1  |  1  |
So the output now gets copied from the input at exactly the falling edge, not what the input was at the previous rising edge. Interestingly, when I use this DFF in my implementation of the Bit, it behaves exactly as it did previously:
| time | in  |load | out |
| 0+   |  0  |  0  |  0  |
| 1    |  0  |  0  |  0  |
| 1+   |  1  |  1  |  0  |
| 2    |  1  |  1  |  1  |
| 2+   |  0  |  1  |  1  |
| 3    |  1  |  1  |  0  |
| 3+   |  1  |  1  |  0  |
| 4    |  0  |  1  |  1  |
So the output is still getting set at the falling edge to what the input was at the rising edge. It seems that the new DFF doesn't affect any of the higher-order chips that depend on it. My first guess (without diving into the code) would be that this has something to do with the order of calculations done by the simulator, i.e. the DFF calculation is done before the internal input pin from the mux is updated.

I know I've been a bit of a nit picker about all of this, but it's all due to my desire to deeply understand how all this stuff works. Eventually I would like to be able to build the HACK computer (or similar) in real hardware. For now, I'm happy to accept the current behaviour as a quirk of the simulator.

Thanks for all your help and insight!!


Re: Understanding DFF behaviour

cadet1620
Administrator
jbloggz wrote
...my desire [is] to deeply understand how all this stuff works. Eventually I would like to be able to build the HACK computer (or similar) in real hardware.
If you haven't already found it, I strongly suggest you check out Logisim; its simulation is much closer to correct.

From this post you can get a Hack computer I implemented in Logisim.
Interestingly, when I use this DFF in my implementation of the Bit, it behaves exactly as it did previously
Maybe the simulator's finding another DFF somewhere else.  As a quick debugging hack, you could rebuild DFF.class with 'out' permanently 0 and see if that hard-coded 0 shows up in your Bit test.

--Mark

Re: Understanding DFF behaviour

jbloggz
Can Logisim actually be used to build a working computer that can run software? At first glance I didn't realise how powerful it was...
you could rebuild DFF.class with 'out' permanently 0 and see if that hard-coded 0 shows up in your Bit test.
That's a good suggestion, I'll try this in the morning.

Re: Understanding DFF behaviour

cadet1620
Administrator
jbloggz wrote
Can Logisim actually be used to build a working computer that can run software? At first glance I didn't realise how powerful it was...
Yes, it can run simple programs.  You'll need to write a converter to translate .hack files into the format Logisim needs to load data into its ROMs.  I added a '-ol' option to my assembler to directly output Logisim format.
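A converter like that can be quite short. The sketch below is my own illustration, not Mark's tool; it assumes Logisim's plain memory-image format (a "v2.0 raw" header followed by hex words) and a .hack file containing one 16-bit binary word per line.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: convert a .hack file (one 16-bit binary word per line)
// into Logisim's memory-image format ("v2.0 raw" header, hex words).
public class HackToLogisim {
    static String convert(List<String> hackLines) {
        String words = hackLines.stream()
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .map(s -> Integer.toHexString(Integer.parseInt(s, 2)))
                .collect(Collectors.joining(" "));
        return "v2.0 raw\n" + words + "\n";
    }

    public static void main(String[] args) throws IOException {
        // Usage: java HackToLogisim program.hack program.rom
        Files.write(Paths.get(args[1]),
                convert(Files.readAllLines(Paths.get(args[0]))).getBytes());
    }
}
```

The resulting file can then be loaded into a Logisim ROM component via its "load image" action.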

My Hack computer runs about 500 instructions per second on my 6-year-old laptop, which would be too slow to draw text on the graphical screen, but works OK writing text to Logisim's built-in TTY component.

HackLogisim.zip  (TTY-based no spoilers version)

(Another version which implements the CPU, ALU, etc. directly using Logisim components, rather than my library of "no spoilers" parts, runs about twice as fast.)


--Mark