Confusion about ticks and tocks

Confusion about ticks and tocks

Greemngreek
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

xedover
If you've ever seen (and heard) an old pendulum clock: with each swing of the pendulum weight, it releases a latch that lets a gear turn by one tooth, giving an audible "tick-tock" as it swings back and forth.

This is the metaphor that is then used in electronic clock cycles.

The idea is that at the top and bottom of the square-wave cycle, we have switched between the logical voltage values of "on" and "off".

Re: Confusion about ticks and tocks

Greemngreek
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

xedover
No, that's not how I would describe it...

Look at the drawing at the bottom of page 2 (slide 4) of this document, titled "Clock":
http://www1.idc.ac.il/cs101/lectures/sequential%20logic.pdf
Up, down, on, off, tick, tock.

Or look at this Wikipedia image of a clock and its gears... when the pendulum swings in one direction, there's a tick; when it swings the other direction, there's a tock. Swinging back to the first position completes a cycle (tick-tock).

Each tick (or tock) is a half-cycle.

Re: Confusion about ticks and tocks

xedover
Forgot to add the image.

Re: Confusion about ticks and tocks

WBahn
Administrator
In reply to this post by Greemngreek
It really doesn't matter and is largely indiscernible.

The problem with simulating circuits whose outputs are fed back to the inputs is that they can oscillate. Imagine a NOT gate whose output is connected to the input. If the input is LO, then the output goes HI, taking the input HI, which takes the output LO, which takes the input LO, which then starts the cycle all over again. In fact, this is basically how they measure the intrinsic speed of a digital technology -- put a large, odd number of inverters configured in a ring and measure the frequency of oscillation. Not surprisingly, this is known as a ring oscillator.
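
To make that concrete, here's a rough Python sketch of a ring oscillator (the ring length and the 1 ns gate delay are made up for illustration; real values come from the technology being measured):

# Rough ring-oscillator model: an odd ring of inverters has no stable state,
# so a single transition chases itself around the ring forever.
def ring_oscillator(num_inverters=5, gate_delay_ns=1.0, steps=30):
    assert num_inverters % 2 == 1, "an even ring just latches up"
    # Start with exactly one inverter whose output disagrees with its input.
    nodes = [i % 2 for i in range(num_inverters)]
    t = 0.0
    node0_toggle_times = []
    for _ in range(steps):
        snapshot = nodes[:]
        t += gate_delay_ns
        for i in range(num_inverters):
            if snapshot[i] != 1 - snapshot[i - 1]:   # this inverter is "unstable"
                nodes[i] = 1 - snapshot[i - 1]
                if i == 0:
                    node0_toggle_times.append(t)
    return node0_toggle_times

print(ring_oscillator())   # node 0 toggles every 5 ns, i.e. a 10 ns period (2 * N * gate delay)

Since the period scales as 2 * N * (gate delay), measuring the ring's frequency tells you the intrinsic gate speed.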

But now imagine we take our NOT gate and connect the output to the input of a DFF and then the output of the DFF back to the input of the NOT gate. What we probably expect to have happen is that the output of the DFF will be a square wave at half the frequency of the clock signal. But, if the DFF is fast enough, the data from the input can make it to the output quickly enough that it makes it back through the NOT gate and back to the input of the DFF while it is still sensitive to the input level. This could continue for several trips around the loop until enough time has passed since the clock signal changed to make the DFF insensitive to the inputs.

I once killed a chip design for exactly this reason when we ported it to a faster technology on a real tight schedule and I didn't do the detailed simulations I should have to verify that this wasn't a problem.

The basic issue is one that the simple logic simulator used in Nand2Tetris can't deal with, because it would need timing information about all of the gates being simulated. But that's pure overkill. So instead, they break the clock cycle into two pieces. During one piece the input state of each DFF is captured and stored, but the output state is left alone. During the other piece the output state of each DFF is updated and changes to the combinational parts of the circuit are determined, but the inputs to the DFFs cannot affect their outputs.
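
In code form, that two-phase idea looks roughly like this (a sketch, not the actual simulator source; the class and method names are invented):

class DFF:
    # Two-phase model: the 'tick' half captures the input, the 'tock' half
    # updates the output. At no point can a new output race back around and
    # be re-captured in the same clock cycle.
    def __init__(self):
        self.captured = 0   # value latched from the input during the tick
        self.out = 0        # value the rest of the circuit sees

    def tick(self, d):
        # first half of the cycle: sample and store the input, output unchanged
        self.captured = d

    def tock(self):
        # second half: expose the stored value; the input is ignored here
        self.out = self.captured
        return self.out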

This same approach is often used in hardware and is called a master-slave configuration. The master is sensitive to changes on one polarity of the clock and the slave is sensitive to the other. The inputs drive the master, the master drives the slave, and the slave drives the outputs.

The simulator here could have just had 'ticks' in the control file, one per clock cycle, with the simulator automatically dealing with the issue. But they chose to have the control file provide both.

You can either think of the tick as the high side of the clock and the tock as the low side, or the tick as the rising edge and the tock as the falling edge.
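
Using the sketch above, the NOT-gate-around-a-DFF loop then behaves the way we expect, a divide-by-two square wave, no matter how "fast" the simulated DFF is, because the value fed back after the tock can't be captured until the next tick:

dff = DFF()
outputs = []
for cycle in range(8):
    d = 1 - dff.out            # the NOT gate driven by the DFF output
    dff.tick(d)                # capture; the output still shows the old value
    outputs.append(dff.tock()) # update; later input changes are ignored
print(outputs)                 # [1, 0, 1, 0, 1, 0, 1, 0] -- half the clock frequency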

Re: Confusion about ticks and tocks

Greemngreek
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

Greemngreek
In reply to this post by xedover
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

xedover
See this play-hookey.com page for a good introduction to sequential circuits.

Here, you'll be able to play with clocked flip-flop circuits and see what happens when you "tick" the clock on and "tock" the clock off.

Also, we often don't want anything more than a brief clock pulse when ticked on.
See Ben Eater's video about using a clock pulse for a D Flip-Flop

You may also want to download and play with Logisim (a logic simulator).

Re: Confusion about ticks and tocks

WBahn
Administrator
In reply to this post by Greemngreek
There are several different types of latches and flip flops, so no explanation can be too detailed unless it is restricted to a specific circuit.

Most flip-flops are "edge-triggered" (usually on the rising edge, so that's what we'll assume). While the clock is low, the input section of the circuit is sensitive to changes on the input(s). As the clock rises, this part of the flip-flop circuit locks in the value and quickly becomes insensitive to further changes on the inputs. In order for this to work correctly, the inputs must remain constant at valid logic levels from a short time before the clock starts rising until a short time after it stops rising. These are known as the "setup" and "hold" time requirements. For convenience and consistency, these times are usually referenced to the moment the clock signal has risen halfway from LO to HI.

At the same time, the changes are propagated to the output. The time it takes for this to happen, measured from when the clock changes, is known as the "propagation delay". As long as the propagation delay is longer than the hold time, the changed output can't make it back to the input in time to further affect the output on that clock cycle. All of this happens on the rising edge of the clock. On the falling edge the internal circuits essentially rearm themselves so as to be ready for the next rising edge.
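
To put rough numbers on that bookkeeping (these are made up; a real part's data sheet supplies them), the check looks something like this:

def meets_setup_and_hold(data_change_times_ns, clock_edge_ns,
                         t_setup_ns=2.0, t_hold_ns=0.5):
    # The data must not change inside the window from (edge - setup) to
    # (edge + hold), with the edge referenced to its 50% point.
    window_start = clock_edge_ns - t_setup_ns
    window_end = clock_edge_ns + t_hold_ns
    return all(not (window_start <= t <= window_end)
               for t in data_change_times_ns)

# A data edge 1 ns before a clock edge violates a 2 ns setup requirement;
# one 2.5 ns before it does not.
print(meets_setup_and_hold([9.0], clock_edge_ns=10.0))   # False
print(meets_setup_and_hold([7.5], clock_edge_ns=10.0))   # True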

And, yes, the clock doesn't transition instantly -- it takes time and this is known as the clock transition time. For most purposes, this doesn't matter to the circuit designer using the chip as long as they are careful to be sure that their clock signal meets the minimum and maximum transition times dictated by the chip's specifications.

Re: Confusion about ticks and tocks

Greemngreek
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

WBahn
Administrator
If I tell you that the propagation delay from clock to output is 30 ns, what does that mean? It's 30 ns from what event to what event? It's not good enough to say that it's from the clock rising edge to the change in the output, because both of those take a certain amount of time to happen, so we need to agree on when, exactly, to start measuring the time and when, exactly, to stop measuring it.

We can't use either the start or the end of the transition. Detecting the start time of a change would be hopelessly mired down in the noise that is always present, and detecting when the change stops is even worse because there is usually some kind of exponential asymptotic behavior as the signal settles into the noise at its final voltage. So, instead, most manufacturers specify that the propagation time is measured starting from when the clock signal is halfway from LO to HI and stopping when the changing output is halfway from its original level to its final level.

There's a definite arbitrariness to this choice -- there's nothing magical about the 50% level. It's just something that is very easy to specify and measure and, over the years, has become the de facto standard.
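
If you wanted to compute that 50%-to-50% measurement from sampled waveforms, it would look roughly like this (the voltage samples below are invented purely for illustration):

def crossing_time(times_ns, volts, v_threshold):
    # Linearly interpolate the first time the waveform crosses the threshold.
    for (t0, v0), (t1, v1) in zip(zip(times_ns, volts),
                                  zip(times_ns[1:], volts[1:])):
        if (v0 - v_threshold) * (v1 - v_threshold) <= 0 and v0 != v1:
            return t0 + (v_threshold - v0) * (t1 - t0) / (v1 - v0)
    return None

def propagation_delay_ns(clk_t, clk_v, out_t, out_v, vdd=5.0):
    # Both edges referenced to their 50% points, per the usual convention.
    return crossing_time(out_t, out_v, vdd / 2) - crossing_time(clk_t, clk_v, vdd / 2)

# Invented samples: the clock passes 50% at 10 ns, the output at 18 ns.
clk_t, clk_v = [9.0, 10.0, 11.0], [0.0, 2.5, 5.0]
out_t, out_v = [17.0, 18.0, 19.0], [0.0, 2.5, 5.0]
print(propagation_delay_ns(clk_t, clk_v, out_t, out_v))   # 8.0 ns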

One thing that you need to keep in mind is that these are not hard numbers, meaning that the manufacturer is not saying that it WILL take EXACTLY 30 ns from the time the clock reaches 50% for the output to reach 50%, they are only saying that it will take NO MORE than 30 ns for that to happen. So they just have to come up with a specification that they know their chips can meet and they can (and do) pad it by making sure that their chips almost always comfortably beat their own specs.

For parts that have a positive hold time requirement, they will also usually spec a minimum propagation delay precisely to prevent the oscillation I've been talking about. But there's a common trick that is played -- the inputs are intentionally delayed (resulting in a larger setup time) so that the delay is longer than the hold time. Since all of this is internal to the chip, the end result is actually a negative hold time making it so that the chip cannot oscillate even if the propagation delay through the chip from clock to output were zero. You can tell when this has been done because the specs will usually give a hold time of zero (even though strictly speaking it is a negative number, but that would just confuse people).
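
With made-up numbers, the arithmetic of that trick looks like this:

# Hypothetical internal numbers, purely for illustration.
internal_setup_ns = 1.0
internal_hold_ns = 1.0
added_input_delay_ns = 1.5    # delay deliberately inserted in front of the D input

# The inserted delay makes the externally visible setup time larger and the
# externally visible hold time negative:
external_setup_ns = internal_setup_ns + added_input_delay_ns   # 2.5 ns
external_hold_ns = internal_hold_ns - added_input_delay_ns     # -0.5 ns
print(external_setup_ns, external_hold_ns)

# A negative hold time means the part cannot oscillate even if the clock-to-output
# propagation delay of whatever drives it were zero; the data sheet will usually
# just round it up and publish 0 ns.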

Re: Confusion about ticks and tocks

Greemngreek
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

WBahn
Administrator
Your phrase, "rising edge to the third vertical line (c) of the clock" is terribly vague.

Rising edge of what? What moment is that "rising edge" measured from? The third vertical line doesn't seem to have anything to do with the clock.

Most manufacturers define the propagation delay as the time from when the rising edge of the clock is halfway between LO and HI to when a changing output is halfway from prior state to new state.

Go look at some data sheets.

For instance, check out Section 7 of the TI data sheet for the 74HC74 Dual D-type FF.

http://www.ti.com/lit/ds/symlink/sn74hc74.pdf

As for why the book says what it does, remember that there are many different ways to build a flip flop and no one description, including the one I've given, can possibly describe them all.

The authors don't have ANY actual flip flops. They wrote a very simple, brain-dead simulator to emulate the basic functionality of a flip flop and ALL of the details of how it works inside are hidden from you. Their description is based on THEIR mental model of how THEIR flip flop behaves.

Re: Confusion about ticks and tocks

Greemngreek
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

WBahn
Administrator
Yes, in most commercial FF designs, the new value is propagated to the output pretty much as quickly as possible. The minimum clock period generally ensures that the value is stable at the output before the falling edge of the clock, as this helps eliminate sources of errors within the circuit that can cause metastability, among other things.

As for out(t) = in(t-1), that depends on how you interpret it.

Expressions such as this should be understood to be discrete time, not continuous time (which is why 'n' instead of 't' is often used, but that's getting further into the mathematical weeds than the authors wanted to go).

For this text, the best way to understand this is that the value of the output of the FF just before the next clock rising edge is equal to the value of the input of the FF just before the prior clock rising edge.

It does NOT mean that the value of the output is equal to the value that the input had one clock period prior. That is a continuous-time statement, and it doesn't necessarily hold for the majority of the clock period.
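
In code, the discrete-time reading is just a one-sample delay (the sequences below are invented):

# Sample in(t) and out(t) once per cycle, just before each rising edge.
# The output sequence is the input sequence delayed by one sample, no matter
# what the input did in between those sampling instants.
in_samples = [1, 0, 0, 1, 1, 0]       # in(t), sampled just before each rising edge
out_samples = [0] + in_samples[:-1]   # out(t) = in(t-1), assuming the FF starts at 0
print(out_samples)                    # [0, 1, 0, 0, 1, 1]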

In most timing diagrams that are not focused on the fine timing details, but rather the bigger picture, changes are shown as perfectly vertical edges happening exactly at the clock edges. It is understood that the actual changes occur slightly after the clock edge (since they happen because of the clock edge).



Re: Confusion about ticks and tocks

Greemngreek
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

WBahn
Administrator
Greemngreek wrote
Here's my final understanding of the working of a DFF:
The input satisfies setup requirements (and hold requirements later), gets cemented during the rising edge of the clock, gets propagated to the output after a delay, the falling edge prepares the DFF for a new input,
Good thus far.

Greemngreek wrote
the output (in the new cycle) is not equal to the previous input (just before the rising edge of the previous cycle) till the rising edge of the clock cycle.

Hard to tell since I can't determine where you are defining the rising edge of a clock cycle to be. Is the rising edge at the beginning or the end of the clock cycle?

As I described earlier, for MOST DFF circuits (not all), the output shortly after the rising edge is equal to what the input was shortly before that same rising edge. Once that happens, the output will not change until just after the next rising edge of the clock; but the input can change all it wants to and is simply ignored. If for no other reason, that's why a claim that out(t) = in(t-1) can't always be true for a continuous variable t. We have to understand what information that equation is telling us. It is telling us that the output just after the rising clock edge (no later than the propagation delay from clock edge to output) is equal to the value that was at the input just before that same rising clock edge (during the setup and hold time for the input relative to the clock -- if the data was not stable and valid during that entire time period, all bets are off).

Here are my concluding doubts:

Is the output at the end of the propagation delay equal to the input before the rising edge?
Yes, provided the setup and hold time requirements were satisfied.

Why isn't the output from the end of the propagation delay to the rising edge of the next clock cycle consistently equal to the input before rising edge of the current cycle? Is the output inconsistent?
It's not the output that can change. It's the input. The input could have changed a hundred times during both the prior clock cycle and the new clock cycle, but the output only gets to change at most once per clock cycle. The DFF samples the input during that brief window in the vicinity of the rising clock edge, makes its decision about what the output will be for the entire next clock cycle (after the prop delay, of course), and ignores the input signal at all other times.

If the input is constant at a valid logic level from the beginning to the end of the clock cycle, will the output then follow out(t) = in(t-1)?
No, but in a well-designed system it will be close.

This equation, to be interpreted strictly, requires that the output waveform be an exact copy of the input waveform shifted by 1 time unit to the right (and we are assuming that the clock period is exactly one time unit, whatever that may be).

The only way that this equation will be true is if the input waveform changes occur exactly one clock cycle before the output changes occur. Since the output changes occur slightly after the rising clock edge, the input changes would have to occur there, as well (but one clock edge earlier). If the input for one DFF is coming from the output of another, then this will be a good approximation and we can live with it in most cases. But if the input is coming from something else, such as an external device, then who knows? For instance, a common technique to bring in external data is to sample it at the falling clock edge (usually twice, depending on the nature of the data signal).

Re: Confusion about ticks and tocks

Greemngreek
CONTENTS DELETED
The author has deleted this message.

Re: Confusion about ticks and tocks

WBahn
Administrator
Basically.

Most hardware engineers tend to naturally think in terms of extreme values so that we can guarantee performance.

So we often don't even look at the typical propagation delay or talk in terms of the actual propagation delay; instead we look at the maximum propagation delay and then talk in terms such as, "such and such will happen no later than the maximum propagation delay after the rising clock edge." In actuality, we are pretty much guaranteed that it will change sooner than that, but we know that, by that limiting time, it will have changed to its new state. So we design our systems so that they only rely on it having changed no later than that, and so that, if it changes sooner, nothing bad will happen. In other words, we are careful NOT to design a system that relies on the output not changing before the propagation delay expires, because we can't count on it waiting for any minimum amount of time (unless the manufacturer actually spec'ed a minimum propagation delay for the part, which sometimes they do).
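
A rough sketch of that worst-case bookkeeping, with invented numbers:

# We only rely on the *maximum* numbers when checking whether a path meets timing.
t_clk_to_q_max_ns = 30.0    # spec'd maximum clock-to-output propagation delay
t_logic_max_ns = 12.0       # worst-case delay through the combinational logic in between
t_setup_ns = 5.0            # setup requirement of the receiving flip-flop
clock_period_ns = 50.0

slack_ns = clock_period_ns - (t_clk_to_q_max_ns + t_logic_max_ns + t_setup_ns)
print(slack_ns)             # 3.0 ns of margin; the path meets timing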

If that sounds pretty lawyeristic, it is -- and for largely the same reason. Lawyers have to speak (or, more to the point, write) in terms that ideally can only be interpreted exactly one way. If two parties to a contract make different interpretations, then bad things happen. Engineers are in the same boat; we need to speak in terms that, ideally, can only be interpreted one way because the devil is in the subtle details and if the engineer and their customer interpret the same sentence differently, bad things happen.

So it can be hard for us to talk in soft generalities without putting in a bunch of caveats.

So, for now at least, let's agree that the term "propagation delay" means the actual amount of time it takes for the output to change after the rising clock edge. Then pretty much by definition the output changes at the end of the propagation delay because that's what this agreed-to use of that term means.

Hope that helps.