Computer Time
The gates we've implemented so far all execute instantaneously. We can perform Boolean and arithmetic operations, but the operations occur all at once. This is because our implementations have been pure applications of combinatorial logic — the logical analysis of fixed propositions, or propositions whose truth values are predetermined. Computers, however, operate on sequential logic — the logical analysis of variable propositions, or propositions whose truth values are not predetermined. The key difference between these two forms of logic: combinatorial logic does not account for time, or change, but sequential logic does. With sequential logic, we're concerned with what happened "previously". For computers, the ability to "remember" — memory — is everything. It's what gives the digital calculator an upper hand over an abacus.
Once we implement a way for our machine to recall, or remember, we suddenly have several features:

- Hardware components can be reused.
- State can be saved. This allows us to perform loops — repeating computations.
- Speed limits can be imposed.
The final feature may seem odd, but it's crucial that a computer does not perform computations faster than its hardware can reliably handle.
In the real world, time is a continuous scalar — a line with infinitely many points. Computers, however, are finite things, so they can only understand discrete values. Thus, we need a way to represent time for a computer. To do so, we divide time into discrete, equal-sized blocks.
In computer terms, the length of each block is called a cycle — the computer's unit of time. Question: How do we divide time with a computer? With a device called a clock, or more generally, an electronic oscillator. The clock is a circuit that takes direct current (DC) and converts it into alternating current (AC) — an alternation between 0 and 1. This results in square waves, and each period of the square wave — the time between going from 1 (called a tick) to 0 (called a tock) and back to 1 — is the length of a block, or a cycle.
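As a rough sketch, we can model such an idealized clock in Python. The two-unit period below is an arbitrary choice; a real clock is an analog circuit, not a function of integer time:

```python
def clock(t, period=2):
    """Idealized square-wave clock: outputs 1 (tick) for the first
    half of each period and 0 (tock) for the second half."""
    return 1 if (t % period) < period / 2 else 0

# One full period -- a tick followed by a tock -- is one cycle.
print([clock(t) for t in range(6)])  # [1, 0, 1, 0, 1, 0]
```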
With this clock, we can assign each block a unique integer. The first block is t=1, the second block is t=2, the third block is t=3, and so on. And within each of these blocks, we can execute some operation with the gates we implemented. For example, maybe at t=1 a logical AND is performed, then at t=2 a logical NOT is performed, then an ADD16, and so on.
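A toy Python simulation can make this concrete. The gate functions below are simplified stand-ins for the chips we built (a single-bit AND and NOT, and a 16-bit adder), and the inputs are arbitrary examples:

```python
def AND(a, b):
    return a & b

def NOT(a):
    return a ^ 1  # single-bit NOT

def ADD16(a, b):
    return (a + b) & 0xFFFF  # 16-bit addition, wrapping on overflow

# One operation per cycle: t=1 runs an AND, t=2 a NOT, t=3 an ADD16.
schedule = {1: lambda: AND(1, 1), 2: lambda: NOT(0), 3: lambda: ADD16(7, 9)}
for t in sorted(schedule):
    print(f"t={t}: {schedule[t]()}")
```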
Notice what we've accomplished — sequential operations: perform an AND(), then a NOT(), then an ADD16(). Let's iron out some of the details.
The first problem with our approach is that the clock takes time to go from 0 to 1 (called the rise time) and from 1 to 0 (called the fall time). These are delays. The fact is, signals never make instantaneous transitions from 0 to 1. This extends to the logic gates themselves — it takes time for charges to build up and dissipate. This delay, called propagation delay, slows down the speed at which bits travel from one gate to another.
Fortunately, there's a fix: just make the blocks shorter. In other words, instead of defining the cycle as strictly the time it takes to go from 0 to 1, we define the cycle as something shorter than that.
By defining the blocks in this way, we leave some time to account for delays; i.e., we allow the system to stabilize. In doing so, we do not have to concern ourselves with the complexities of delay handling. With this notion of time, sequential logic is now more apparent. To repeat, with combinatorial logic, we were effectively computing:

out(t) = f(in(t))

In other words, the output of some Boolean function at time t is the result of operating on an input fed to the function at that same time t. This is instantaneous.
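In code, a combinatorial gate is just a pure function of its current inputs. For example, in Python:

```python
def AND(a, b):
    """Combinatorial: the output depends only on the inputs supplied
    right now. There is no memory, and no notion of "before"."""
    return a & b

# The same inputs always yield the same output, regardless of cycle.
print(AND(1, 1))  # 1
print(AND(1, 0))  # 0
```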
With sequential logic, we can now perform the following:

out(t) = f(in(t-1))

I.e., the output of some Boolean function at time t is the result of operating on an input fed to the function at time t-1 — whatever input was fed in the previous cycle.
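This is precisely the behavior of a flip-flop, the basic memory element. Here is a minimal Python sketch of that behavior (the class name and interface are our own, not a standard API):

```python
class DFF:
    """Sketch of a D flip-flop: the output at cycle t is the input
    that was given at cycle t-1. The initial state is assumed 0."""
    def __init__(self):
        self.state = 0

    def tick(self, d):
        out = self.state  # emit what was stored in the previous cycle
        self.state = d    # store this cycle's input for the next cycle
        return out

dff = DFF()
print(dff.tick(1))  # 0 -- the assumed initial state
print(dff.tick(0))  # 1 -- the input from the previous cycle
```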
Because of this ability to use the output of the previous cycle, we do not necessarily have to output a different bit, or bitstream, at each cycle. Instead, we can pass a single bit or bitstream through all of the wires, modifying that bit or bitstream as it passes through the gates.
This illustrates a key point in computing: Sequential logic is what introduces the notion of bits, or more broadly, objects, having a state. And because states exist, we can perform state-dependent computations.
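For example, a binary counter is a state-dependent computation: what it outputs this cycle depends on what it stored last cycle. A minimal sketch in Python (the width and wrap-around behavior are illustrative choices):

```python
class Counter:
    """State-dependent computation: each cycle's output is derived
    from the state saved in the previous cycle."""
    def __init__(self, width=4):
        self.state = 0
        self.mask = (1 << width) - 1  # wrap around at 2**width

    def tick(self):
        self.state = (self.state + 1) & self.mask
        return self.state

counter = Counter()
print([counter.tick() for _ in range(5)])  # [1, 2, 3, 4, 5]
```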
Indeed, it is precisely because of sequential logic that we have a common dividing factor among programming languages — mutability (using a single object and changing its state) versus immutability (producing new objects and recalling them as needed).