Logical Equivalences
Like the algebraic identities, Boolean logic has laws such as
commutativity, associativity, and so on. These laws are called
logical equivalences because each law asserts that two expressions are
logically equivalent: we write a ≡ b if and only if a and b evaluate to
the same value for every possible assignment of their variables.
Identity Laws
The identity laws provide that:
x ∧ 1 ≡ x
x ∨ 0 ≡ x

In other words, the conjunction of some variable signal x and a signal
that's always 1 will always return the variable signal x.
Similarly, the disjunction of some variable signal x and a signal
that's always 0 will always return the variable signal x. The
proof:
| x | y | z | x∧y | x∨z |
|---|---|---|-----|-----|
| 1 | 1 | 0 | 1   | 1   |
| 0 | 1 | 0 | 0   | 0   |
Domination Laws

The domination laws state:

x ∨ 1 ≡ 1
x ∧ 0 ≡ 0

The domination laws provide that if we have a variable signal x and a
constant signal y, then if y = 1, the disjunction returns 1,
and if y = 0, the conjunction returns 0. The proof:
| x | y | x∧y | x∨y |
|---|---|-----|-----|
| 0 | 0 | 0   | 0   |
| 1 | 0 | 0   | 1   |
| 0 | 1 | 0   | 1   |
| 1 | 1 | 1   | 1   |
The word "indemponent" is a mathematical term: An operation is
indemponent if it can be applied multiple times without changing the
result beyond the initial application. For example, the constant function
f(x)=2 is indempotent. No matter how many times we apply it (i.e.,
passing different values of x), we will always get back 2.
Similarly, the absolute value operator is indempotent:
∣x∣=∣∣x∣∣=∣∣∣x∣∣∣.
Not matter how many times we apply it, we will always get back
∣x∣.
The same phenomenon exists in Boolean logic through the indempotent laws:
Idempotent Laws. Where x is logically equivalent to y:

x ∨ y ≡ x
x ∧ y ≡ y
The proof:
| x | y | x∧y | x∨y |
|---|---|-----|-----|
| 0 | 0 | 0   | 0   |
| 1 | 1 | 1   | 1   |

Note that this implies a corollary:
Corollary. Where x is logically equivalent to y: x∧y≡x∨y
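To see these laws hold by brute force, here's a minimal Python sketch (the helper name `equivalent` is my own, not from the text) that checks the identity, domination, and idempotent laws over every possible input, modeling signals as the integers 0 and 1:

```python
from itertools import product

def equivalent(f, g, arity):
    """Return True if f and g agree on every combination of 0/1 inputs."""
    return all(f(*args) == g(*args) for args in product([0, 1], repeat=arity))

# Identity laws: x AND 1 = x, x OR 0 = x
assert equivalent(lambda x: x & 1, lambda x: x, 1)
assert equivalent(lambda x: x | 0, lambda x: x, 1)

# Domination laws: x OR 1 = 1, x AND 0 = 0
assert equivalent(lambda x: x | 1, lambda x: 1, 1)
assert equivalent(lambda x: x & 0, lambda x: 0, 1)

# Idempotent laws: x OR x = x, x AND x = x
assert equivalent(lambda x: x | x, lambda x: x, 1)
assert equivalent(lambda x: x & x, lambda x: x, 1)
```

Because a Boolean function has finitely many inputs, exhaustive checking like this is a complete proof, not just a spot check.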
Double Negation Laws
The double negation law is similar to the odd-even sign rule in
mathematics. Recall that the odd-even sign rule provides that:
Odd-Even Sign Rule. Where n ∈ ℤ:

(−1)ⁿ = 1 if n is even
(−1)ⁿ = −1 if n is odd
In other words, −1 raised to an even power returns 1 (e.g.,
(−1)(−1) = 1), while −1 raised to an odd power returns
−1 (e.g., (−1)(−1)(−1) = −1). The same idea applies to the NOT
operator: ¬(¬x) ≡ x and ¬(¬(¬x)) ≡ ¬x.
Double Negation Law. Where x is some signal: ¬(¬x)≡x
The proof:
| x | ¬x | ¬(¬x) |
|---|----|-------|
| 0 | 1  | 0     |
| 1 | 0  | 1     |
Commutative Law

The Boolean commutative law provides that interchanging the order of
operands in a Boolean expression does not change its result:

x ∧ y ≡ y ∧ x
x ∨ y ≡ y ∨ x

For example, consider the following propositions:
1. n < m and m < w
2. m < w and n < m
3. a < b or a = b
4. a = b or a < b
The commutative law provides that the propositions (1) and (2) are the
same. In other words, it doesn't matter if we determine whether n is
less than m first, or if we determine m is less than w first.
This idea is encapsulated in the expression n<m<w. It also provides
that propositions (3) and (4) are the same. It doesn't matter whether we
determine that a is less than b first, or if we determine that
a=b first. Hence the encapsulating expression a≤b.
The Boolean commutative law is closely related to the commutative law of
set theory:
A ∪ B = B ∪ A
A ∩ B = B ∩ A

For example, suppose card(A) = 6 and card(B) = 5, with card(A ∩ B) = 2. To
determine the cardinality of A and B combined, it doesn't matter if
we count the number of elements in A first or the number of elements in
B first. We still get:

card(A ∪ B) = card(B ∪ A) = card(A) + card(B) − card(A ∩ B) = 6 + 5 − 2 = 9

The same goes for intersection:

card(A ∩ B) = card(A) + card(B) − card(A ∪ B) = 6 + 5 − 9 = 11 − 9 = 2

The commutative law's proof:

| x | y | x∧y | y∧x | x∨y | y∨x |
|---|---|-----|-----|-----|-----|
| 0 | 0 | 0   | 0   | 0   | 0   |
| 0 | 1 | 0   | 0   | 1   | 1   |
| 1 | 0 | 0   | 0   | 1   | 1   |
| 1 | 1 | 1   | 1   | 1   | 1   |
Associative Law
The Boolean associative law provides that:
x ∧ (y ∧ z) ≡ (x ∧ y) ∧ z
x ∨ (y ∨ z) ≡ (x ∨ y) ∨ z

For example, consider the following propositions:
- a + b = c, and
- p + q = r

If we conjoin these with a third proposition, the associative law provides
that the grouping of the conjunctions does not affect the result.
Distributive Law
The distributive law provides:

x ∧ (y ∨ z) ≡ (x ∧ y) ∨ (x ∧ z)
x ∨ (y ∧ z) ≡ (x ∨ y) ∧ (x ∨ z)
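As a quick cross-check of both the associative and distributive laws, here's a small Python sketch in the same brute-force style (again, `equivalent` is an assumed helper name, not from the text):

```python
from itertools import product

def equivalent(f, g, arity):
    """Return True if f and g agree on every combination of 0/1 inputs."""
    return all(f(*args) == g(*args) for args in product([0, 1], repeat=arity))

# Associative laws
assert equivalent(lambda x, y, z: x & (y & z), lambda x, y, z: (x & y) & z, 3)
assert equivalent(lambda x, y, z: x | (y | z), lambda x, y, z: (x | y) | z, 3)

# Distributive laws
assert equivalent(lambda x, y, z: x & (y | z), lambda x, y, z: (x & y) | (x & z), 3)
assert equivalent(lambda x, y, z: x | (y & z), lambda x, y, z: (x | y) & (x | z), 3)
```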
De Morgan's Laws

De Morgan's laws govern how the NOT operator works alongside the AND
and OR operators. The laws provide:

(1) ¬(x ∧ y) ≡ ¬x ∨ ¬y
(2) ¬(x ∨ y) ≡ ¬x ∧ ¬y

We can verify these laws via truth table. Verifying the first law:
| x | y | x∧y | ¬(x∧y) | ¬x | ¬y | ¬x ∨ ¬y |
|---|---|-----|--------|----|----|---------|
| 0 | 0 | 0   | 1      | 1  | 1  | 1       |
| 0 | 1 | 0   | 1      | 1  | 0  | 1       |
| 1 | 0 | 0   | 1      | 0  | 1  | 1       |
| 1 | 1 | 1   | 0      | 0  | 0  | 0       |

Verifying the second law:

| x | y | x∨y | ¬(x∨y) | ¬x | ¬y | ¬x ∧ ¬y |
|---|---|-----|--------|----|----|---------|
| 0 | 0 | 0   | 1      | 1  | 1  | 1       |
| 0 | 1 | 1   | 0      | 1  | 0  | 0       |
| 1 | 0 | 1   | 0      | 0  | 1  | 0       |
| 1 | 1 | 1   | 0      | 0  | 0  | 0       |
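The same verification can be scripted. Here's a short Python sketch that prints both De Morgan truth tables and asserts the equivalences, modeling NOT on a 0/1 value v as 1 − v:

```python
from itertools import product

print("x y | ¬(x∧y) ¬x∨¬y | ¬(x∨y) ¬x∧¬y")
for x, y in product([0, 1], repeat=2):
    lhs1, rhs1 = 1 - (x & y), (1 - x) | (1 - y)   # first law
    lhs2, rhs2 = 1 - (x | y), (1 - x) & (1 - y)   # second law
    print(x, y, "|", lhs1, rhs1, "|", lhs2, rhs2)
    assert lhs1 == rhs1 and lhs2 == rhs2
```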
Absorption Laws

The absorption laws provide a way of collapsing expressions in which a
variable is combined with both an OR and an AND operation:

Absorption Laws. Where x and y are variable signals:

x ∨ (x ∧ y) ≡ x
x ∧ (x ∨ y) ≡ x

The proof:
| x | y | x∧y | x∨y | x∨(x∧y) | x∧(x∨y) |
|---|---|-----|-----|---------|---------|
| 0 | 0 | 0   | 0   | 0       | 0       |
| 0 | 1 | 0   | 1   | 0       | 0       |
| 1 | 0 | 0   | 1   | 1       | 1       |
| 1 | 1 | 1   | 1   | 1       | 1       |
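A short Python sketch confirming the absorption laws over all four input pairs (using bitwise `&` and `|` on 0/1 values as stand-ins for ∧ and ∨):

```python
from itertools import product

for x, y in product([0, 1], repeat=2):
    assert (x | (x & y)) == x   # x ∨ (x ∧ y) ≡ x
    assert (x & (x | y)) == x   # x ∧ (x ∨ y) ≡ x
```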
Negation Laws

According to the negation laws, the statement
Mars is Earth's neighbor or Mars is not Earth's neighbor
is always true, and the statement
Mars is Earth's neighbor and Mars is not Earth's neighbor
is always false. Stated formally:
Negation Laws. Where x is a variable signal:
x ∨ (¬x) ≡ 1
x ∧ (¬x) ≡ 0
Boolean Expressions
Because AND, OR, and NOT are operations (much like how addition and
subtraction are operations), the logical operations can be strung together
to form Boolean expressions. For example, here's a simple Boolean
expression:
¬(0 ∨ (1 ∧ 1))

Evaluating this expression, we start first with the innermost parenthesized
expression, (1 ∧ 1). This evaluates to 1:

¬(0 ∨ (1 ∧ 1)) ≡ ¬(0 ∨ 1)

Then we evaluate the result, since that's the next parenthesized
expression:

¬(0 ∨ (1 ∧ 1)) ≡ ¬(0 ∨ 1) ≡ ¬1

Then we perform the final operation:

¬(0 ∨ (1 ∧ 1)) ≡ ¬(0 ∨ 1) ≡ ¬1 ≡ 0
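Mirroring this evaluation in Python (a sketch, with 1 − v standing in for ¬v on 0/1 values):

```python
step1 = 1 & 1        # innermost parentheses: (1 ∧ 1) evaluates to 1
step2 = 0 | step1    # next: (0 ∨ 1) evaluates to 1
result = 1 - step2   # finally: ¬1 evaluates to 0
print(result)        # prints 0
```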
Boolean Functions

Much like algebra, once we have Boolean expressions, we can begin creating
Boolean functions (generalized Boolean expressions). Boolean functions
are precisely how we create the additional variety of logical operations. For
example, here's a Boolean function:

f(x, y, z) = (x ∧ y) ∨ (¬x ∧ z)

This function takes three inputs: x, y, and z. Because each
input is either 1 or 0, there are 2³ = 8 possible
combinations:
| x | y | z |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 0 | 1 |
| 0 | 1 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
| 1 | 1 | 1 |

Laying out the possible values of f(x, y, z), we get:

| x | y | z | x∧y | ¬x | ¬x∧z | f(x,y,z) = (x∧y)∨(¬x∧z) |
|---|---|---|-----|----|------|-------------------------|
| 0 | 0 | 0 | 0   | 1  | 0    | 0                       |
| 0 | 0 | 1 | 0   | 1  | 1    | 1                       |
| 0 | 1 | 0 | 0   | 1  | 0    | 0                       |
| 0 | 1 | 1 | 0   | 1  | 1    | 1                       |
| 1 | 0 | 0 | 0   | 0  | 0    | 0                       |
| 1 | 0 | 1 | 0   | 0  | 0    | 0                       |
| 1 | 1 | 0 | 1   | 0  | 0    | 1                       |
| 1 | 1 | 1 | 1   | 0  | 0    | 1                       |

Unlike real functions, Boolean functions have a finite number of possible
outputs. This makes it easy (for a feasible number of variables n;
generally n < 4) to lay out all the possible outputs.
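Here's a small Python sketch (the function name `f` follows the text; the printing format is my own) that generates this truth table with itertools.product:

```python
from itertools import product

def f(x, y, z):
    # f(x, y, z) = (x ∧ y) ∨ (¬x ∧ z)
    return (x & y) | ((1 - x) & z)

print("x y z | f")
for x, y, z in product([0, 1], repeat=3):
    print(x, y, z, "|", f(x, y, z))
```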
Constructing Boolean Functions
Suppose we were given the following truth table:
| x | y | z | f |
|---|---|---|---|
| 0 | 0 | 0 | 1 |
| 0 | 0 | 1 | 0 |
| 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 0 |

We want to construct a Boolean function that produces this truth table. The
trick to doing so is to focus on one row with output 1 at a time: First,
write a function that outputs 1 on that particular row and 0 on every other
row. Second, focus on the next row with output 1, write a function for that
particular row, and so on.
For example, let's focus on the first row: x = 0, y = 0, z = 0, where
f = 1. A function that produces this row would be
(¬x) ∧ (¬y) ∧ (¬z):

| x | y | z | f | (¬x)∧(¬y)∧(¬z) |
|---|---|---|---|----------------|
| 0 | 0 | 0 | 1 | 1              |
| 0 | 0 | 1 | 0 | 0              |
| 0 | 1 | 0 | 1 | 0              |
| 0 | 1 | 1 | 0 | 0              |
| 1 | 0 | 0 | 1 | 0              |
| 1 | 0 | 1 | 0 | 0              |
| 1 | 1 | 0 | 0 | 0              |
| 1 | 1 | 1 | 0 | 0              |

We then consider the next 1 output, the row x = 0, y = 1, z = 0.
A possible function would be: (¬x) ∧ y ∧ (¬z). This
results in the truth table:

| x | y | z | f | (¬x)∧y∧(¬z) |
|---|---|---|---|-------------|
| 0 | 0 | 0 | 1 | 0           |
| 0 | 0 | 1 | 0 | 0           |
| 0 | 1 | 0 | 1 | 1           |
| 0 | 1 | 1 | 0 | 0           |
| 1 | 0 | 0 | 1 | 0           |
| 1 | 0 | 1 | 0 | 0           |
| 1 | 1 | 0 | 0 | 0           |
| 1 | 1 | 1 | 0 | 0           |

That takes care of the second 1 output. Now we write a function for the
third 1 output, the row x = 1, y = 0, z = 0.
Here a possible function is x ∧ (¬y) ∧ (¬z), yielding
the truth table:

| x | y | z | f | x∧(¬y)∧(¬z) |
|---|---|---|---|-------------|
| 0 | 0 | 0 | 1 | 0           |
| 0 | 0 | 1 | 0 | 0           |
| 0 | 1 | 0 | 1 | 0           |
| 0 | 1 | 1 | 0 | 0           |
| 1 | 0 | 0 | 1 | 1           |
| 1 | 0 | 1 | 0 | 0           |
| 1 | 1 | 0 | 0 | 0           |
| 1 | 1 | 1 | 0 | 0           |

Putting it all together, we get:
| x | y | z | f | (¬x)∧(¬y)∧(¬z) | (¬x)∧y∧(¬z) | x∧(¬y)∧(¬z) |
|---|---|---|---|----------------|-------------|-------------|
| 0 | 0 | 0 | 1 | 1              | 0           | 0           |
| 0 | 0 | 1 | 0 | 0              | 0           | 0           |
| 0 | 1 | 0 | 1 | 0              | 1           | 0           |
| 0 | 1 | 1 | 0 | 0              | 0           | 0           |
| 1 | 0 | 0 | 1 | 0              | 0           | 1           |
| 1 | 0 | 1 | 0 | 0              | 0           | 0           |
| 1 | 1 | 0 | 0 | 0              | 0           | 0           |
| 1 | 1 | 1 | 0 | 0              | 0           | 0           |

All that's left to do is string these three functions together with OR
operators:

f(x, y, z) = [(¬x) ∧ (¬y) ∧ (¬z)] ∨ [(¬x) ∧ y ∧ (¬z)] ∨ [x ∧ (¬y) ∧ (¬z)]

Although this function is correct, it's fairly complex. A better definition
would be to simplify the expression. First, examining the first two
functions:
(¬x) ∧ (¬y) ∧ (¬z)
(¬x) ∧ y ∧ (¬z)

we see that both of these expressions contain (¬x) and (¬z), while y
appears negated in one and unnegated in the other. Together, the two
expressions cover both possible values of y, so y's value doesn't matter.
By the distributive and negation laws:

[(¬x) ∧ (¬y) ∧ (¬z)] ∨ [(¬x) ∧ y ∧ (¬z)] ≡ (¬x) ∧ (¬z) ∧ [(¬y) ∨ y] ≡ (¬x) ∧ (¬z) ∧ 1 ≡ (¬x) ∧ (¬z)

This reduces our function definition to:
f(x, y, z) = [(¬x) ∧ (¬z)] ∨ [x ∧ (¬y) ∧ (¬z)]

We can then reduce the term:
[x∧(¬y)∧(¬z)]
to:
[(¬y)∧(¬z)]
Dropping the x is safe: the only inputs where (¬y) ∧ (¬z) evaluates to 1
but x ∧ (¬y) ∧ (¬z) evaluates to 0 are those with x = 0, and those inputs
are already covered by the first term, (¬x) ∧ (¬z). This results in the
definition:
f(x,y,z)=[(¬x)∧(¬z)]∨[(¬y)∧(¬z)]
From there, the distributive law lets us factor out (¬z) and reduce the
definition even further:
f(x,y,z)=(¬z)∧[(¬x)∨(¬y)]
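The row-by-row construction can be automated. Below is a Python sketch (names like `minterm` and `OUTPUTS` are my own, not from the text) that builds the sum-of-minterms function from the truth table above and checks that it agrees with the simplified form on every input:

```python
from itertools import product

# Desired outputs for rows (x, y, z) = 000, 001, 010, ..., 111:
OUTPUTS = [1, 0, 1, 0, 1, 0, 0, 0]

def minterm(row):
    """AND-clause of literals that is 1 on exactly one input row."""
    return lambda *args: int(all(a == bit for a, bit in zip(args, row)))

# OR together one minterm per row whose desired output is 1.
clauses = [minterm(row) for row, out in zip(product([0, 1], repeat=3), OUTPUTS) if out]
constructed = lambda *args: int(any(c(*args) for c in clauses))

# The hand-simplified form derived above: f = (¬z) ∧ (¬x ∨ ¬y).
simplified = lambda x, y, z: (1 - z) & ((1 - x) | (1 - y))

assert all(constructed(*a) == simplified(*a) for a in product([0, 1], repeat=3))
```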
As this example suggests, constructing Boolean functions from a given
truth table is a laborious task. Moreover, finding the shortest possible
equivalent expression for a given Boolean expression is computationally
intractable (the associated decision problem is NP-complete); there is no
known efficient algorithm for finding such an expression. This means
that, at the moment, a significant aspect of designing the logic circuits
behind processors depends on human ingenuity and labor.
What's more remarkable, however, is that we've seen a demonstration of the
following theorem:
Canonical Representation Theorem. Any Boolean function can be
represented using the operations of ∧ (AND), ∨ (OR), and ¬ (NOT).
This is a very special theorem. It is because of this theorem that
computers exist. It turns out, however, that we don't actually need the
OR operator. We can get away with just AND and NOT. This is because
the OR operator can be expressed with AND and NOT, following De
Morgan's laws:

x ∨ y ≡ ¬((¬x) ∧ (¬y))
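A quick Python sketch verifying that OR really can be rebuilt from AND and NOT this way (the function name is hypothetical):

```python
from itertools import product

# x ∨ y rewritten with only AND and NOT, per De Morgan: ¬((¬x) ∧ (¬y))
def or_from_and_not(x, y):
    return 1 - ((1 - x) & (1 - y))

for x, y in product([0, 1], repeat=2):
    assert or_from_and_not(x, y) == (x | y)
```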
Accordingly, we have the more generalized form of the theorem:

General Canonical Representation Theorem. Any Boolean function can be
represented using the operations of ∧ (AND) and ¬ (NOT).
But we can go even a step further:

NAND Representation Theorem. Any Boolean function can be represented
using the ↑ (NAND) operator.

The NAND (NOT-AND) operator is the result of the function:

f(x, y) = x ↑ y ≡ ¬(x ∧ y)

The truth table:
| x | y | x∧y | ¬(x∧y) ≡ x↑y |
|---|---|-----|--------------|
| 0 | 0 | 0   | 1            |
| 0 | 1 | 0   | 1            |
| 1 | 0 | 0   | 1            |
| 1 | 1 | 1   | 0            |

The proof of the NAND representation theorem stems from the fact that we
can represent AND and NOT with NAND. From the General Canonical
Representation Theorem, we know that every Boolean function can be
represented using AND and NOT, so if we can write these two operations
with NAND, the NAND representation theorem directly follows. In this
case, we can. The NOT operation is simply:

¬x ≡ ¬(x ∧ x) ≡ x ↑ x

and the AND operation is just:

x ∧ y ≡ ¬(¬(x ∧ y)) ≡ ¬(x ↑ y) ≡ (x ↑ y) ↑ (x ↑ y)
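Finally, here's a minimal Python sketch deriving NOT, AND, and OR from NAND alone, and checking each derivation over all inputs. The OR derivation isn't part of the text's proof, but follows from the earlier De Morgan identity:

```python
from itertools import product

def nand(x, y):
    return 1 - (x & y)

not_ = lambda x: nand(x, x)                       # ¬x ≡ x ↑ x
and_ = lambda x, y: nand(nand(x, y), nand(x, y))  # x ∧ y ≡ (x ↑ y) ↑ (x ↑ y)
or_  = lambda x, y: nand(nand(x, x), nand(y, y))  # x ∨ y ≡ (¬x) ↑ (¬y)

for x, y in product([0, 1], repeat=2):
    assert not_(x) == 1 - x
    assert and_(x, y) == (x & y)
    assert or_(x, y) == (x | y)
```

This is why NAND gates alone suffice to build the logic circuits inside a processor.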