Boolean Logic

In the world of computing, there are only two values: 0 and 1. With just 0 and 1, we can perform a wide variety of operations.

AND

The logical AND has the following truth table:

\begin{array}{c:c:c} x & y & x \land y \\ \hline 0 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}

Formally, the logical AND is represented with the symbol {\land} or, in the case of bitwise operations, the ampersand &. The operation {a \land b} returns true if, and only if, both {a} and {b} are true. Otherwise, it returns false.
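In most programming languages, both forms are available. A small Python sketch, illustrative only:

```python
# Logical AND operates on truth values; bitwise AND (&) operates on bits.
a, b = True, False
print(a and b)        # logical AND of two truth values -> False

# On the bits 0 and 1, & behaves exactly like the truth table above.
print(1 & 1)          # -> 1
print(1 & 0)          # -> 0
print(0 & 0)          # -> 0
```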

OR

The logical OR has the following truth table:

\begin{array}{c:c:c} x & y & x \lor y \\ \hline 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{array}

In formal logic, the logical OR is represented with the symbol {\lor} or, in bitwise notation, the pipe character |. Examining the truth table, we see that {a \lor b} returns false if and only if {a} and {b} are both false. Otherwise, it returns true.

NOT

The logical NOT is a unary operator. In formal logic, we represent the logical NOT with the symbol {\neg.} In programming languages, the exclamation mark ! typically denotes logical NOT, while the tilde ~ denotes bitwise NOT. The truth table:

\begin{array}{c:c} x & \neg x \\ \hline 0 & 1 \\ 1 & 0 \end{array}
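The logical/bitwise distinction matters most for NOT. A Python sketch (Python spells logical NOT as `not` and bitwise NOT as `~`):

```python
x = 1
print(not x)     # logical NOT treats x as a truth value -> False

# Bitwise NOT flips every bit of the integer; on two's-complement
# integers, ~x == -(x + 1), so ~1 is -2 rather than 0.
print(~x)        # -> -2

# Masking down to a single bit recovers the one-bit NOT from the table.
print(~x & 1)    # -> 0
print(~0 & 1)    # -> 1
```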

With just these three operations — AND, OR, and NOT — we can create much more elaborate and complex operations.

Logical Equivalences

Like the identities of ordinary algebra, Boolean logic has laws of commutativity, associativity, and so on. These laws are called logical equivalences: we write {a \equiv b} if and only if {a} is logically equivalent to {b,} that is, the two expressions produce the same output for every assignment of inputs.

Identity Laws

The identity laws provide that:

\begin{aligned} x \land 1 &\equiv x \\ x \lor 0 &\equiv x \end{aligned}

In other words, the conjunction of some variable signal x{x} and a signal that's always 1{1} will always return the variable signal x.{x.} Similarly, the disjunction of some variable signal x{x} and a signal that's always 0{0} will always return the variable signal x.{x.} The proof:

\begin{array}{c:c:c:c:c} x & y & z & x \land y & x \lor z \\ \hline 1 & 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 0 \end{array}
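Because each law quantifies over a single bit, it can be checked exhaustively. A small Python sketch:

```python
# Exhaustively verify the identity laws on the two possible values of x.
for x in (0, 1):
    assert (x & 1) == x   # x AND 1 is x
    assert (x | 0) == x   # x OR 0 is x
print("identity laws verified")
```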

Domination Laws

The domination laws state:

\begin{aligned} x \lor 1 &\equiv 1 \\ x \land 0 &\equiv 0 \end{aligned}

The domination laws provide that if we have a variable signal {x} and a constant signal {y,} then if {y = 1,} the disjunction {x \lor y} returns {1,} and if {y = 0,} the conjunction {x \land y} returns {0.} The proof:

\begin{array}{c:c:c:c} x & y & x \land y & x \lor y \\ \hline 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \end{array}

Idempotent Laws

The word "idempotent" is a mathematical term: An operation is idempotent if it can be applied multiple times without changing the result beyond the initial application. For example, the constant function {f(x) = 2} is idempotent. No matter how many times we compose it with itself (e.g., {f(f(x))}), we will always get back {2.} Similarly, the absolute value operator is idempotent: {\lvert x \rvert = \lvert \lvert x \rvert \rvert = \lvert \lvert \lvert x \rvert \rvert \rvert.} No matter how many times we apply it, we will always get back {\lvert x \rvert.}

The same phenomenon exists in Boolean logic through the idempotent laws:

Idempotent Laws. Where {x} is logically equivalent to {y:}

\begin{aligned} x \lor y &\equiv x \\ x \land y &\equiv y \end{aligned}

The proof:

\begin{array}{c:c:c:c} x & y & x \land y & x \lor y \\ \hline 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{array}

Note that this implies a corollary:

Corollary. Where {x} is logically equivalent to {y:} x \land y \equiv x \lor y
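In the standard statement of the laws, {y} is replaced by {x} itself, which makes the check especially simple. A Python sketch:

```python
# With y logically equivalent to x, the idempotent laws reduce to
# x OR x == x and x AND x == x, checked for both bit values.
for x in (0, 1):
    assert (x | x) == x
    assert (x & x) == x
print("idempotent laws verified")
```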

Double Negation Laws

The double negation law is similar to the odd-even sign rule in mathematics. Recall that the odd-even sign rule provides that:

Odd-Even Sign Rule. Where {n \in \mathbb{Z}:}

(-1)^n = \begin{cases} 1 & \text{if } n \text{ is even} \\ -1 & \text{if } n \text{ is odd} \end{cases}

In other words, 1{-1} raised to an even number power returns 1{1} (e.g., 11=1{-1 \cdot -1 = 1}) while 1{-1} raised to an odd number power returns 1{-1} (e.g., 111=1{-1 \cdot -1 \cdot -1 = -1}). The same idea applies to the NOT operator. ¬(¬x)x{\neg(\neg x) \equiv x} and ¬(¬(¬x))¬x.{\neg (\neg (\neg x)) \equiv \neg x.}

Double Negation Law. Where {x} is some signal: \neg (\neg x) \equiv x

The proof:

\begin{array}{c:c:c} x & \neg x & \neg(\neg x) \\ \hline 0 & 1 & 0 \\ 1 & 0 & 1 \end{array}
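The double (and triple) negation behavior can be confirmed directly. A Python sketch:

```python
# NOT applied twice returns the original value; applied three times,
# it is the same as applying it once.
for x in (True, False):
    assert (not (not x)) == x
    assert (not (not (not x))) == (not x)
print("double negation verified")
```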

Commutative Law

The Boolean commutative law provides that interchanging the order of operands in a Boolean equation does not change its result:

\begin{aligned} (x \land y) &\equiv (y \land x) \\ (x \lor y) &\equiv (y \lor x) \end{aligned}

For example, consider the following propositions:

  1. n<m{n < m} and m<w{m < w}
  2. m<w{m < w} and n<m{n < m}
  3. a<b{a < b} or a=b{a = b}
  4. a=b{a = b} or a<b{a < b}

The commutative law provides that the propositions (1) and (2) are the same. In other words, it doesn't matter if we determine whether n{n} is less than m{m} first, or if we determine m{m} is less than w{w} first. This idea is encapsulated in the expression n<m<w.{n < m < w.} It also provides that propositions (3) and (4) are the same. It doesn't matter whether we determine that a{a} is less than b{b} first, or if we determine that a=b{a = b} first. Hence the encapsulating expression ab.{a \leq b.}

The Boolean commutative law is closely related to the commutative law of set theory:

\begin{aligned} A \cup B &= B \cup A \\ A \cap B &= B \cap A \end{aligned}

Suppose, for example, that {\text{card}(A) = 6,} {\text{card}(B) = 5,} and the two sets share two elements, so {\text{card}(A \cap B) = 2.} To determine the cardinality of {A} and {B} combined, it doesn't matter whether we count the elements of {A} first or the elements of {B} first. We still get:

\begin{aligned} \text{card}(A \cup B) &= \text{card}(B \cup A) = \text{card}(A) + \text{card}(B) - \text{card}(A \cap B) \\ &= 6 + 5 - 2 \\ &= 9 \end{aligned}

The same goes for intersection:

\begin{aligned} \text{card}(A \cap B) &= \text{card}(A) + \text{card}(B) - \text{card}(A \cup B) \\ &= 6 + 5 - 9 \\ &= 11 - 9 \\ &= 2 \end{aligned}
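The same arithmetic can be checked with Python's set type; the concrete elements below are hypothetical, chosen only so that the cardinalities match the example:

```python
# Two hypothetical sets with card(A) = 6, card(B) = 5, and 2 shared elements.
A = {1, 2, 3, 4, 5, 6}
B = {5, 6, 7, 8, 9}

# Union is commutative, and inclusion-exclusion gives its cardinality.
assert (A | B) == (B | A)
assert len(A | B) == len(A) + len(B) - len(A & B)   # 6 + 5 - 2
print(len(A | B))   # -> 9
print(len(A & B))   # -> 2
```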

The commutative law's proof:

\begin{array}{c:c:c:c:c:c} x & y & x \land y & y \land x & x \lor y & y \lor x \\ \hline 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{array}

Associative Law

The Boolean associative law provides that:

\begin{aligned} (x \land (y \land z)) &\equiv ((x \land y) \land z) \\ (x \lor (y \lor z)) &\equiv ((x \lor y) \lor z) \end{aligned}

For example, let {P} denote the proposition {a + b = c,} let {Q} denote {p + q = r,} and let {R} denote {s + t = u.} The associative law provides that {(P \land (Q \land R))} and {((P \land Q) \land R)} are the same proposition: when we chain ANDs (or ORs), it does not matter which pair we evaluate first.

Distributive Law

The distributive law provides:

\begin{aligned} (x \land (y \lor z)) &\equiv (x \land y) \lor (x \land z) \\ (x \lor (y \land z)) &\equiv (x \lor y) \land (x \lor z) \end{aligned}

De Morgan's Laws

De Morgan's Laws govern how the NOT operator interacts with the AND and OR operators. The laws provide:

\begin{aligned} (1)~~\neg (x \land y) &\equiv \neg (x) \lor \neg (y) \\ (2)~~\neg (x \lor y) &\equiv \neg (x) \land \neg (y) \end{aligned}

We can verify these laws via truth table. Verifying the first law:

\begin{array}{c:c:c:c:c:c:c} x & y & x \land y & \neg (x \land y) & \neg x & \neg y & \neg(x) \lor \neg(y) \\ \hline 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 & 0 \end{array}

Verifying the second law:

\begin{array}{c:c:c:c:c:c:c} x & y & x \lor y & \neg (x \lor y) & \neg x & \neg y & \neg(x) \land \neg(y) \\ \hline 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 0 \end{array}
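Both truth tables can also be checked mechanically. A Python sketch, using 1 - x as single-bit NOT:

```python
from itertools import product

# Check both De Morgan laws over every combination of bit inputs.
for x, y in product((0, 1), repeat=2):
    assert (1 - (x & y)) == ((1 - x) | (1 - y))   # NOT(x AND y) == NOT x OR NOT y
    assert (1 - (x | y)) == ((1 - x) & (1 - y))   # NOT(x OR y)  == NOT x AND NOT y
print("De Morgan's laws verified")
```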

Absorption Laws

The absorption laws provide a way of collapsing expressions in which a variable appears both on its own and inside a subexpression:

Absorption Laws. Where {x} and {y} are variable signals:

\begin{aligned} x \lor (x \land y) &\equiv x \\ x \land (x \lor y) &\equiv x \end{aligned}

The proof:

\begin{array}{c:c:c:c:c:c} x & y & x \land y & x \lor y & x \lor (x \land y) & x \land (x \lor y) \\ \hline 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{array}

Negation Laws

According to the negation laws, the statement

Mars is Earth's neighbor or Mars is not Earth's neighbor

is always true, and the statement

Mars is Earth's neighbor and Mars is not Earth's neighbor

is always false. Stated formally:

Negation Laws. Where x{x} is a variable signal:

\begin{aligned} x \lor (\neg x) &\equiv 1 \\ x \land (\neg x) &\equiv 0 \end{aligned}

Boolean Expressions

Because AND, OR, and NOT are operations (much like how addition and subtraction are operations), the logical operations can be strung together to form Boolean expressions. For example, here's a simple Boolean expression:

\neg (0 \lor (1 \land 1))

Evaluating this expression, we start with the innermost parenthesized expression, {(1 \land 1).} This evaluates to {1:}

\begin{aligned} \neg (0 \lor (1 \land 1)) &\equiv \neg (0 \lor 1) \end{aligned}

Then we evaluate the result, since that's the next parenthesized expression:

\begin{aligned} \neg (0 \lor (1 \land 1)) &\equiv \neg (0 \lor 1) \\ &\equiv \neg 1 \end{aligned}

Then we perform the final operation:

\begin{aligned} \neg (0 \lor (1 \land 1)) &\equiv \neg (0 \lor 1) \\ &\equiv \neg 1 \\ &\equiv 0 \end{aligned}
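The same innermost-first evaluation can be mirrored in Python, again using 1 - x as single-bit NOT:

```python
# Evaluate NOT (0 OR (1 AND 1)) step by step, innermost expression first.
inner = 1 & 1        # (1 AND 1) -> 1
outer = 0 | inner    # (0 OR 1)  -> 1
result = 1 - outer   # NOT 1     -> 0
print(result)        # -> 0
```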

Boolean Functions

Much like algebra, once we have Boolean expressions, we can begin creating Boolean functions: generalized Boolean expressions. Boolean functions are precisely how we create the additional variety of logical operations. For example, here's a Boolean function:

f(x, y, z) = (x \land y) \lor (\neg (x) \land z)

This function takes three inputs, x,{x,} y,{y,} and z.{z.} Because each input is either 1{1} or 0,{0,} there are 23=8{2^3 = 8} possible combinations.[logic_note]

\begin{array}{c:c:c} x & y & z \\ \hline 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{array}

Laying out the possible values for {f(x, y, z),} we get:

\begin{array}{c:c:c:c:c:c:c} x & y & z & (x \land y) & \neg x & \neg(x) \land z & f(x, y, z) = (x \land y) \lor (\neg (x) \land z) \\ \hline 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 & 0 & 1 \end{array}
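The table can be generated rather than filled in by hand. A Python sketch of the function:

```python
from itertools import product

def f(x, y, z):
    # f(x, y, z) = (x AND y) OR ((NOT x) AND z), with 1 - x as single-bit NOT
    return (x & y) | ((1 - x) & z)

# Enumerate all 2^3 = 8 input combinations and print each row of the table.
for x, y, z in product((0, 1), repeat=3):
    print(x, y, z, f(x, y, z))
```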

Unlike real functions, Boolean functions have a finite number of possible inputs. This makes it easy to lay out all the possible outputs, at least for a small number of variables {n} (the table has {2^n} rows, so this is only practical when {n} is small).

Constructing Boolean Functions

Suppose we were given the following truth table:

\begin{array}{c:c:c:c} x & y & z & f \\ \hline 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \end{array}

We want to construct a Boolean function that produces this truth table. The trick to doing so is to focus on one row at a time: First, pick a row whose output is {1} and write a function that returns {1} for that particular row and {0} for all other rows. Then focus on the next row whose output is {1,} write a function for that row, and so on.

For example, let's focus on the first row:

\begin{array}{c:c:c:c} x & y & z & f \\ \hline \color{salmon} 0 & \color{salmon} 0 & \color{salmon} 0 & \color{salmon} 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \end{array}

A function that produces this row would be: (¬x)(¬y)(¬z):{(\neg x) \land (\neg y) \land (\neg z):}

\begin{array}{c:c:c:c:c} x & y & z & f & (\neg x) \land (\neg y) \land (\neg z) \\ \hline \color{salmon} 0 & \color{salmon} 0 & \color{salmon} 0 & \color{salmon} 1 & \color{salmon} 1 \\ 0 & 0 & 1 & 0 & \color{salmon} 0 \\ 0 & 1 & 0 & 1 & \color{salmon} 0 \\ 0 & 1 & 1 & 0 & \color{salmon} 0 \\ 1 & 0 & 0 & 1 & \color{salmon} 0 \\ 1 & 0 & 1 & 0 & \color{salmon} 0 \\ 1 & 1 & 0 & 0 & \color{salmon} 0 \\ 1 & 1 & 1 & 0 & \color{salmon} 0 \end{array}

We then consider the next 1{1} output:

\begin{array}{c:c:c:c} x & y & z & f \\ \hline 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \color{cornflowerblue} 0 & \color{cornflowerblue} 1 & \color{cornflowerblue} 0 & \color{cornflowerblue} 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \end{array}

A possible function would be: (¬x)y(¬z).{(\neg x) \land y \land (\neg z).} This results in the truth table:

\begin{array}{c:c:c:c:c} x & y & z & f & (\neg x) \land y \land (\neg z) \\ \hline 0 & 0 & 0 & 1 & \color{cornflowerblue} 0 \\ 0 & 0 & 1 & 0 & \color{cornflowerblue} 0 \\ \color{cornflowerblue} 0 & \color{cornflowerblue} 1 & \color{cornflowerblue} 0 & \color{cornflowerblue} 1 & \color{cornflowerblue} 1 \\ 0 & 1 & 1 & 0 & \color{cornflowerblue} 0 \\ 1 & 0 & 0 & 1 & \color{cornflowerblue} 0 \\ 1 & 0 & 1 & 0 & \color{cornflowerblue} 0 \\ 1 & 1 & 0 & 0 & \color{cornflowerblue} 0 \\ 1 & 1 & 1 & 0 & \color{cornflowerblue} 0 \end{array}

That takes care of the second 1{1} output. Now we write a function for the third 1{1} output:

\begin{array}{c:c:c:c} x & y & z & f \\ \hline 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ \color{green} 1 & \color{green} 0 & \color{green} 0 & \color{green} 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \end{array}

Here a possible function is x(¬y)(¬z),{x \land (\neg y) \land (\neg z),} yielding the truth table:

\begin{array}{c:c:c:c:c} x & y & z & f & x \land (\neg y) \land (\neg z) \\ \hline 0 & 0 & 0 & 1 & \color{green} 0 \\ 0 & 0 & 1 & 0 & \color{green} 0 \\ 0 & 1 & 0 & 1 & \color{green} 0 \\ 0 & 1 & 1 & 0 & \color{green} 0 \\ \color{green} 1 & \color{green} 0 & \color{green} 0 & \color{green} 1 & \color{green} 1 \\ 1 & 0 & 1 & 0 & \color{green} 0 \\ 1 & 1 & 0 & 0 & \color{green} 0 \\ 1 & 1 & 1 & 0 & \color{green} 0 \end{array}

Putting it all together, we get:

\begin{array}{c:c:c:c:c:c:c} x & y & z & f & \color{salmon} (\neg x) \land (\neg y) \land (\neg z) & \color{cornflowerblue} (\neg x) \land y \land (\neg z) & \color{green} x \land (\neg y) \land (\neg z) \\ \hline \color{salmon} 0 & \color{salmon} 0 & \color{salmon} 0 & \color{salmon} 1 & \color{salmon} 1 & \color{cornflowerblue} 0 & \color{green} 0 \\ 0 & 0 & 1 & 0 & \color{salmon} 0 & \color{cornflowerblue} 0 & \color{green} 0 \\ \color{cornflowerblue} 0 & \color{cornflowerblue} 1 & \color{cornflowerblue} 0 & \color{cornflowerblue} 1 & \color{salmon} 0 & \color{cornflowerblue} 1 & \color{green} 0 \\ 0 & 1 & 1 & 0 & \color{salmon} 0 & \color{cornflowerblue} 0 & \color{green} 0 \\ \color{green} 1 & \color{green} 0 & \color{green} 0 & \color{green} 1 & \color{salmon} 0 & \color{cornflowerblue} 0 & \color{green} 1 \\ 1 & 0 & 1 & 0 & \color{salmon} 0 & \color{cornflowerblue} 0 & \color{green} 0 \\ 1 & 1 & 0 & 0 & \color{salmon} 0 & \color{cornflowerblue} 0 & \color{green} 0 \\ 1 & 1 & 1 & 0 & \color{salmon} 0 & \color{cornflowerblue} 0 & \color{green} 0 \end{array}

All that's left is to string these three functions together with OR operators:

f(x, y, z) = [(\neg x) \land (\neg y) \land (\neg z)] \lor [(\neg x) \land y \land (\neg z)] \lor [x \land (\neg y) \land (\neg z)]
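The sum-of-products construction can be checked against the target truth table in code. A Python sketch (the dictionary below simply transcribes the table):

```python
from itertools import product

# Target outputs from the truth table, keyed by (x, y, z).
target = {(0, 0, 0): 1, (0, 0, 1): 0, (0, 1, 0): 1, (0, 1, 1): 0,
          (1, 0, 0): 1, (1, 0, 1): 0, (1, 1, 0): 0, (1, 1, 1): 0}

def g(x, y, z):
    # OR together one AND-term per row whose output is 1,
    # using 1 - x as single-bit NOT.
    return (((1 - x) & (1 - y) & (1 - z)) |
            ((1 - x) & y & (1 - z)) |
            (x & (1 - y) & (1 - z)))

for row, expected in target.items():
    assert g(*row) == expected
print("construction matches the truth table")
```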

Although this function is correct, it's fairly complex. A better definition would simplify the expression. First, examining the first two terms:

\begin{aligned} & [(\neg x) \land (\neg y) \land (\neg z)] \\ & [(\neg x) \land y \land (\neg z)] \end{aligned}

we see that both expressions contain {(\neg x)} and {(\neg z).} The {y} term appears once negated and once unnegated, so between the two expressions, the value of {y} cannot matter: {(\neg y) \lor y \equiv 1.} Thus, all we really need is {(\neg x)} and {(\neg z):}

(\neg x) \land (\neg z)

This reduces our function definition to:

f(x, y, z) = [(\neg x) \land (\neg z)] \lor [x \land (\neg y) \land (\neg z)]

We can then reduce the term

[x \land (\neg y) \land (\neg z)]

to:

[(\neg y) \land (\neg z)]

This step is valid because the term is joined by OR to {[(\neg x) \land (\neg z)]:} by the distributive law, {\neg x \lor (x \land w) \equiv (\neg x \lor x) \land (\neg x \lor w) \equiv \neg x \lor w,} so the {x} is redundant.

resulting in the definition:

f(x, y, z) = [(\neg x) \land (\neg z)] \lor [(\neg y) \land (\neg z)]

From there we can reduce the definition even further:

f(x, y, z) = (\neg z) \land [(\neg x) \lor (\neg y)]
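We can confirm that the simplification preserved the function by comparing both forms on all inputs. A Python sketch:

```python
from itertools import product

def sop(x, y, z):
    # Original sum-of-products form, using 1 - x as single-bit NOT.
    return (((1 - x) & (1 - y) & (1 - z)) |
            ((1 - x) & y & (1 - z)) |
            (x & (1 - y) & (1 - z)))

def simplified(x, y, z):
    # Simplified form: (NOT z) AND ((NOT x) OR (NOT y)).
    return (1 - z) & ((1 - x) | (1 - y))

# The two definitions agree on every input combination.
for p in product((0, 1), repeat=3):
    assert sop(*p) == simplified(*p)
print("simplified form is equivalent")
```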

As we might be able to tell, constructing Boolean functions from a given truth table is a laborious task. Moreover, finding the shortest possible equivalent expression for a given Boolean expression is computationally intractable (the associated minimization problems are NP-hard), and no efficient algorithm for it is known. This means that, at the moment, a significant aspect of designing the logic circuits behind processors depends on human ingenuity and labor.

What's more remarkable, however, is that we've seen a demonstration of the following theorem:

Canonical Representation Theorem. Any Boolean function can be represented using the operations {\land} (AND), {\lor} (OR), and {\neg} (NOT).

This is a very special theorem. It is because of this theorem that computers exist. It turns out, however, that we don't actually need the OR operator. We can get away with just AND and NOT. This is because the OR operator can be expressed with AND and NOT, following De Morgan's laws:

x \lor y \equiv \neg((\neg x) \land (\neg y))

Accordingly, we have the more generalized form of the theorem:

General Canonical Representation Theorem. Any Boolean function can be represented using the operations {\land} (AND) and {\neg} (NOT).

But we can go even a step further:

NAND Representation Theorem. Any Boolean function can be represented using the {\uparrow} (NAND) operator.

The NAND (NOT AND) operator is the result of the function:

f(x, y) = x \uparrow y \equiv \neg (x \land y)

The truth table:

\begin{array}{c:c:c:c} x & y & x \land y & \neg(x \land y) \equiv x \uparrow y \\ \hline 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{array}

The proof of the NAND representation theorem stems from the fact that we can represent AND and NOT with NAND. From the General Canonical Representation Theorem, we know that every Boolean function can be represented using AND and NOT, so if we can write these two operations with NAND, the NAND representation theorem directly follows. In this case, we can. The NOT operation is simply:

\neg x \equiv \neg (x \land x) \equiv x \uparrow x

and the AND operation is just:

\begin{aligned} x \land y &\equiv \neg (\neg(x \land y)) \\ &\equiv \neg (x \uparrow y) \\ &\equiv (x \uparrow y) \uparrow (x \uparrow y) \end{aligned}
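These constructions can be verified in code. A Python sketch building NOT, AND, and OR out of a single NAND primitive:

```python
def nand(x, y):
    # The primitive: NAND of two bits.
    return 1 - (x & y)

def not_(x):
    # NOT x == x NAND x
    return nand(x, x)

def and_(x, y):
    # x AND y == NOT (x NAND y) == (x NAND y) NAND (x NAND y)
    return nand(nand(x, y), nand(x, y))

def or_(x, y):
    # x OR y == (NOT x) NAND (NOT y), by De Morgan's laws
    return nand(nand(x, x), nand(y, y))

# Check the derived operators against Python's bitwise operators.
for x in (0, 1):
    assert not_(x) == 1 - x
    for y in (0, 1):
        assert and_(x, y) == (x & y)
        assert or_(x, y) == (x | y)
print("NAND is universal for NOT, AND, OR")
```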