Algebraic Structures

This chapter contains notes on algebraic structures.

  1. Operations
    1. Counting Binary Operations
  2. Algebraic Structure
  3. Axiomatic Properties
    1. Commutativity
    2. Associativity
    3. Identity
    4. Inverse
    5. Distributivity
  4. Axiomatic Algebras
    1. Semigroup
    2. Commutative Semigroup
    3. Monoid
    4. Commutative Monoids
    5. Groups
      1. Properties of Groups
    6. Abelian Group
  5. Rings
    1. Ordered Rings
  6. Fields
  7. Isomorphisms

Operations

We begin by defining an operation. As we saw in the last chapter, an operation is a kind of map. We now provide an explicit definition:

operation. Let A{A} be a set and n∈N.{n \in \nat.} Then the operation  ⋆ {\op} is a function that assigns to every tuple (a1,a2,…,an)∈An{(a_1, a_2, \ldots, a_n) \in A^n} exactly one element  ⋆ (a1,a2,…an)∈A,{\op(a_1,a_2, \ldots a_n) \in A,} called the result of  ⋆ {\op} on (a1,a2,…,an),{(a_1, a_2, \ldots, a_n),} which we may also denote as a1 ⋆ a2 ⋆ … ⋆ an.{a_1 \op a_2 \op \ldots \op a_n.} We call the symbol  ⋆ {\op} an operator, the function  ⋆ :Anβ†’A{\op: A^n \to A} the operation, A{A} the carrier, and n{n} the arity of  ⋆ .{\op.} If n=3,{n=3,} then  ⋆ :AΓ—AΓ—Aβ†’A,{\op: A \times A \times A \to A,} in which case we call  ⋆ {\op} a ternary operation. If n=2,{n = 2,} then  ⋆ :AΓ—Aβ†’A,{\op: A \times A \to A,} and  ⋆ {\op} is called a binary operation. If n=1,{n=1,} then  ⋆ :Aβ†’A,{\op: A \to A,} and  ⋆ {\op} is called a unary operation. If n=0,{n=0,} then  ⋆ {\op} is called a nullary operation, or constant.

In these materials, we will focus primarily on binary and unary operations. From the definition above, we can say that, given a,b∈A,{a,b \in A,} a unary operation may be written as f(a){f(a)} or f(b){f(b)} and a binary operation may be written as f(a,b).{f(a,b).} However, we don't usually think of operations as functions (though we will soon enough), and the way operations are written depends heavily on the problem at hand:

| Operation | Equivalent | Type | Example |
| --- | --- | --- | --- |
|  ⋆ aΒ b{\op a~b} | f(a,b){f(a,b)} | Binary Prefix | Γ—aΒ b{\times a~b} |
| a ⋆ b{a \op b} | f(a,b){f(a,b)} | Binary Infix | aΓ·b{a \div b} |
| aΒ b ⋆ {a~b \op} | f(a,b){f(a,b)} | Binary Postfix | aΒ bΒ +{a~b~+} |
|  ⋆ a{\op a} | f(a){f(a)} | Unary Prefix | βˆ’a{-a} |
| a ⋆ {a \op} | f(a){f(a)} | Unary Postfix | a!{a!} |
|  ⋆ a ⋆ {\op a \op} | f(a){f(a)} | Delimited | ∣a∣{\abs{a}} |

In general, we'll use the notation a  ⋆  b{a ~\op~ b} for binary operations, and the notation  ⋆ a{\op a} for unary operations. For each of the operations above, we call a{a} and b{b} the operands of  ⋆ .{\op.}

Something to observe right off the bat: Since  ⋆ {\op} is a function, it follows that, where a,b∈A,{a,b \in A,} and  ⋆ :A2β†’A,{\op:A^2 \to A,} we have a ⋆ b∈A.{a \op b \in A.} Furthermore, note that the operator  ⋆ {\op} must always return some element in A.{A.} It can't return an element not in the carrier. In other words, for the operation  ⋆ ,{\op,} the only things that exist in the universe are the things in its carrier A.{A.} For example, suppose:

B={ 0,1 } B = \set{0,1}

One operation we might define on this set is Γ—{\times} (multiplication). This is a valid binary operation, since the operator always returns an element of the set B.{B.}

Γ—(0,0)=0Γ—0=0Γ—(0,1)=0Γ—1=0Γ—(1,0)=1Γ—0=0Γ—(1,1)=1Γ—1=1 \eqs{ \times(0,0) &= 0 \times 0 &= 0 \\ \times(0,1) &= 0 \times 1 &= 0 \\ \times(1,0) &= 1 \times 0 &= 0 \\ \times(1,1) &= 1 \times 1 &= 1 }
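
Since B{B} is finite, we can verify closure by exhaustively checking every pair. Below is a minimal Python sketch; the helper name is_closed is ours, not a standard library function:

```python
from itertools import product

def is_closed(carrier, op):
    """Return True if op(a, b) lands in carrier for every pair (a, b)."""
    return all(op(a, b) in carrier for a, b in product(carrier, repeat=2))

B = {0, 1}

print(is_closed(B, lambda a, b: a * b))  # True:  multiplication stays in {0, 1}
print(is_closed(B, lambda a, b: a + b))  # False: 1 + 1 = 2 is not in {0, 1}
```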

Returning to the definition of an operation, there are two nuances we must always keep in mind. First, a ⋆ b{a \op b} must be uniquely defined. That is, it should be crystal clear what the element a ⋆ b{a \op b} is. For example, suppose we defined an operation β–‘{\square} on the integers as: aβ–‘b{a \square b} is an integer whose square is aΓ—b.{a \times b.} This would not qualify as an operation. Why? Because if a=2{a = 2} and b=8,{b = 8,} then ab=16,{ab = 16,} and aβ–‘b{a \square b} could be either 4{4} or βˆ’4:{-4:} the result is not unique.

Second, given a carrier set S{S} and a,b∈S,{a,b \in S,} a ⋆ b{a \op b} must be in S{S} for  ⋆ {\op} to be an operation. If a ⋆ bβˆ‰S{a \op b \notin S} for some a,b∈S,{a,b \in S,} then  ⋆ {\op} is what we call a partial operation. If a ⋆ b∈S{a \op b \in S} for all a,b∈S,{a,b \in S,} we say that S{S} is closed under  ⋆ .{\op.}

closed carrier. Let S{S} be a carrier set and  ⋆ {\op} be an operation on S.{S.} We say that S{S} is closed under  ⋆ {\op} if, and only if, for all a,b{a,b} in S,{S,} it is true that a ⋆ b∈S.{a \op b \in S.} Otherwise, we say that S{S} is not closed under  ⋆ .{\op.}

For example, subtraction is a partial operation on the set of natural numbers N.{\nat.} Why? Because subtraction does not always return an element of N,{\nat,} the carrier set: 0βˆ’1=βˆ’1,{0 - 1 = -1,} and βˆ’1βˆ‰N.{-1 \notin \nat.} Likewise, division is a partial operation on the set of integers, because a/b{a/b} does not always result in an integer: 3/4=0.75.{3/4 = 0.75.} Division is, however, an operation on the set of positive real numbers, because a/b{a/b} always returns a positive real number when a{a} and b{b} are positive reals. It is not an operation on the set of all real numbers, because a/0{a/0} does not return a real number.

Counting Binary Operations

Suppose we have a carrier set S={ a,b }.{S = \set{a,b}.} Then the Cartesian product of this set is:

SΓ—S={(a,a)Β Β (a,b)(b,a)Β Β (b,b)}. S \times S = \lset{ \eqs{ (a,a)~&~(a,b) \\ (b,a)~&~(b,b) } }.

Following the definition of an operation, an operation  ⋆ {\op} maps each of the pairs above to an element of S.{S.} For example, some mappings might be:

| (x,y){(x,y)} |  ⋆ 1{\op_1} |  ⋆ 2{\op_2} |  ⋆ 3{\op_3} |  ⋆ 4{\op_4} |
| --- | --- | --- | --- | --- |
| (a,a){(a,a)} | a{a} | b{b} | a{a} | b{b} |
| (a,b){(a,b)} | a{a} | b{b} | b{b} | a{a} |
| (b,a){(b,a)} | a{a} | b{b} | a{a} | b{b} |
| (b,b){(b,b)} | a{a} | a{a} | b{b} | a{a} |

Each column  ⋆ i{\op_i} above is one possible operation on S.{S.}

Thinking more carefully about this, we can draw a few inferences. First, an operation is a mapping from each element of the Cartesian product of the carrier set to an element of the carrier set. Thus, given a carrier set S{S} where the cardinality of S{S} is n,{n,} the cardinality of SΓ—S{S \times S} is nΓ—n=n2.{n \times n = n^2.} And since a binary operation assigns to each element of SΓ—S{S \times S} an element of S,{S,} there are nn2{n^{n^2}} possible operations. Thus, for our simple carrier set S={ a,b },{S = \set{a,b},} we get (2)22=24=16{(2)^{2^2} = 2^{4} = 16} possible operations.
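
We can confirm the count nn2{n^{n^2}} by brute force: enumerate every possible table of results. A minimal Python sketch (the helpers tables and as_op are ours):

```python
from itertools import product

S = ('a', 'b')
pairs = list(product(S, repeat=2))   # the 4 elements of S x S

# Each operation is one choice of result per pair: a tuple of 4 results.
tables = list(product(S, repeat=len(pairs)))
print(len(tables))                   # 16, matching n**(n**2) = 2**(2**2)

def as_op(table):
    """Turn a result table into a callable binary operation."""
    lookup = dict(zip(pairs, table))
    return lambda x, y: lookup[(x, y)]

# Of the 16, how many happen to be commutative?
commutative = sum(
    all(as_op(t)(x, y) == as_op(t)(y, x) for x, y in pairs)
    for t in tables
)
print(commutative)                   # 8: only the diagonal and one off-diagonal entry are free
```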

Notice what this means: Without further information, if  ⋆ {\op} is an operation, then a ⋆ b{a \op b} denotes some definite element, but we have no reason to believe it is the same element as b ⋆ a.{b \op a.} This means that we can never assert that a ⋆ b=b ⋆ a{a \op b = b \op a} unless we assume, better yet, prove, that  ⋆ {\op} is commutative (we'll study commutativity shortly). Likewise, given a c{c} in the carrier set S,{S,} without more, the element a ⋆ (b ⋆ c){a \op (b \op c)} may be distinct from (a ⋆ b) ⋆ c,{(a \op b) \op c,} which means that we cannot assert that a ⋆ (b ⋆ c)=(a ⋆ b) ⋆ c{a \op (b \op c) = (a \op b) \op c} without assuming or proving that  ⋆ {\op} is associative.

Algebraic Structure

An algebraic structure, despite its complicated-sounding name, has a surprisingly short definition:

algebraic structure. An algebraic structure, or algebra, is a tuple:

(S,Β {  ⋆ 0, ⋆ 1, ⋆ 2, …,  ⋆ nβˆ’1 }) (S,~\set{\op_0, \op_1, \op_2,~\ldots,~\op_{n-1}})

where S{S} is the carrier set, and  ⋆ i{\op_i} are operations on S.{S.}

From the definition above, an algebra or algebraic structure is a tuple with two objects: (1) a set, and (2) a set of operations β€” functions β€” that take some member(s) of the set as arguments, and return some member(s) of the set as their image. That set could be anything β€” a set of numbers, vectors, tuples, matrices, circuits, Boolean values, chess pieces, politicians, etc. Given that operations are functions, functions are relations, and relations are really just rules for how objects are paired, it follows that the moment we take a set and define rules for how things in that set can be combined, we've created an algebra.

Within reason, we may denote an algebra with the notational form (set,ops).{(\text{set},\text{ops}).} For example, the algebra (R,{ + }){(\reals, \set{+})} is an algebra of the reals and addition. Working with just a single operation, we can omit the braces and simply write (R,+).{(\reals, +).} If the algebra includes multiple operations, we adopt the convention of using braces. For example, (R,{ +,Γ—,βˆ’,/ }){(\reals,\set{+,\times,-,/})} is an algebra of the reals and the operations of addition, multiplication, subtraction, and division.

Axiomatic Properties

We begin by presenting axiomatic properties. These are properties of an algebra's carrier set or operations that appear so frequently in proofs that it would be inconvenient to state the same assumptions over and over again. Accordingly, we encapsulate the body of assumptions into a single package called an axiomatic property. This practice isn't unique to abstract algebra β€” we see it in all other areas of mathematics.

Commutativity

We say that an operation  ⋆ {\op} is commutative on a set A{A} if, and only if, for all a,b∈A,{a,b \in A,} it is true that a ⋆ b=b ⋆ a.{a \op b = b \op a.}

commutativity. Let S{S} be a carrier set and  ⋆ {\op} an operation on S.{S.} Then  ⋆ {\op} is commutative if, and only if, for all a,b∈S,{a,b \in S,} it is true that a ⋆ b=b ⋆ a.{a \op b = b \op a.} Otherwise, we say that the operation  ⋆ {\op} is noncommutative.

Some examples: The operation of +{+} on the integers is commutative: 1+2=2+1.{1+2=2+1.} Likewise, the operation of Γ—{\times} is commutative on the integers: 2Γ—3=3Γ—2.{2 \times 3 = 3 \times 2.} The operation of division on the reals, however, is not: 1/2β‰ 2/1.{1/2 \neq 2/1.} We say that addition is commutative on the reals, and division is noncommutative on the reals. Subtraction is not commutative on the integers either: aβˆ’bβ‰ bβˆ’a{a-b \neq b-a} except when a=b.{a=b.} This does not, however, motivate us to say "partially commutative," because an operation is commutative if, and only if, the equality holds for all members of the carrier. Even if there were such a motivation, it wouldn't be very useful. (We follow the ancient mantra of mathematics β€” if it does not need to be said/defined, do not say/define it.)

A few more bits of detail: First, since  ⋆ {\op} is a function, the commutative property essentially tells us that f(a,b)=f(b,a).{{f}(a,b) = {f}(b,a).} In the world of functions, we say that f{{f}} is a symmetric function. Given a ⋆ b{a \op b} where  ⋆ {\op} is commutative, we may use the cute phrase, "a{a} commutes with b,{b,} and b{b} commutes with a.{a.}" Thus, when an operation  ⋆ {\op} on a set S{S} is commutative, the elements that board the  ⋆ {\op} bus are free to sit wherever they'd like β€”  ⋆ {\op} will send them where they must regardless.

Associativity

Next, we say that an operation  ⋆ {\op} is associative if, and only if, for all a,b,c∈A,{a,b,c \in A,} we can rest assured that a ⋆ (b ⋆ c)=(a ⋆ b) ⋆ c.{a \op (b \op c) = (a \op b) \op c.} Associativity is a property that arises when we have more than two operands β€” with just one or two operands, associativity is irrelevant, because there is only one way to apply the operation. When we have more than two operands, however, we're confronted with a question that commutativity can't answer: Given a ⋆ b ⋆ c,{a \op b \op c,} can we evaluate a ⋆ b{a \op b} first? Put differently, associativity tells us whether the grouping of the operands is at our discretion. If an operation is nonassociative, then no β€” the grouping matters. If an operation is associative, then yes β€” any grouping yields the same result.

associativity. Let S{S} be a carrier set and  ⋆ {\op} an operation on S.{S.} Then  ⋆ {\op} is associative if, and only if, for all a,b,c∈S,{a,b,c \in S,} it is true that (a ⋆ b) ⋆ c=a ⋆ (b ⋆ c).{(a \op b) \op c = a \op (b \op c).} Otherwise, we say that the operation  ⋆ {\op} is nonassociative.

Once more, since  ⋆ {\op} is just a function, we can think of associativity in functional terms as f(f(x,y),z)=f(x,f(y,z)).{f(f(x,y),z) = f(x,f(y,z)).} Importantly, the fact that an operation is associative does not imply that it's commutative. For example, the concatenation operation β§Ί{\con} on strings is associative, but noncommutative. For those unfamiliar, β§Ί{\con} is an operation that takes two sequences of symbols, and concatenates, or joins them, into a single sequence of symbols. For example, β€œabβ€β§Ίβ€œcd”=β€œabcd”,{\string{ab} \con \string{cd} = \string{abcd},} or β€œHiβ€β§Ίβ€œΒ Β β€β§Ίβ€œthere.”=β€œHiΒ there.”{\string{Hi} \con \string{~~} \con \string{there.} = \string{Hi there.}} Thus, it follows that (aβ§Ίb)β§Ίc=aβ§Ί(bβ§Ίc)=abc,{(a \con b) \con c = a \con (b \con c) = abc,} but it's not the case that (aβ§Ίb)β§Ίc=(bβ§Ίa)β§Ίc.{(a \con b) \con c = (b \con a) \con c.}
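
Python's +{+} on strings is exactly the concatenation operation β§Ί,{\con,} so we can watch this asymmetry directly:

```python
a, b, c = "ab", "cd", "ef"

print((a + b) + c == a + (b + c))  # True:  concatenation is associative
print(a + b == b + a)              # False: "abcd" != "cdab", so it is noncommutative
```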

The same goes in the other direction: Commutativity does not imply associativity. Perhaps the most important example of this in real life is floating point addition (and by extension, multiplication). Setting aside special values like NaN, (a+βˆ’a)+b=b{(a + -a) + b = b} is always true, but a+(βˆ’a+b){a + (-a + b)} is not guaranteed to equal b{b} because of precision loss: when ∣a∣{\abs{a}} is large relative to ∣b∣,{\abs{b},} the inner sum βˆ’a+b{-a + b} may round in a way that partially or wholly absorbs b.{b.} Failing to consider this fact can kill; the Patriot Missile Failure of 1991 killed 28 U.S. soldiers and injured about 100 others because of accumulated floating point rounding error.
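
We can watch associativity fail with ordinary double-precision floats. A minimal sketch; the specific values are ours, chosen so that the inner sum absorbs b:{b:}

```python
a, b = 1e16, 1.0

print((a + -a) + b)    # 1.0, as expected
print(a + (-a + b))    # 0.0: -1e16 + 1.0 rounds back to -1e16, so b is absorbed

print(a + b == b + a)  # True: float addition is still commutative
```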

Identity

The next property we're interested in is whether the carrier set contains an identity.

identity element. Let S{S} be a carrier set and  ⋆ {\op} be an operation on S.{S.} If, and only if, there exists an element e∈S{e \in S} such that, for all m∈S,{m \in S,} e ⋆ m=m{e \op m = m} and m ⋆ e=m,{m \op e = m,} then we say that e{e} is an identity element of  ⋆ .{\op.}

For example, for the algebra (R,+),{(\reals, +),} the identity element is 0.{0.} For the algebra (R,Γ—),{(\reals, \times),} the identity element is 1.{1.} Not all algebras have identity elements. The algebra (Nβˆ–2N,+),{(\nodds, +),} where Nβˆ–2N{\nodds} is the set of all odd natural numbers, has no identity element β€” the only number e{e} satisfying e+m=m{e + m = m} is 0,{0,} and 0{0} is not odd. (In fact, +{+} is not even an operation on this carrier, since the sum of two odd naturals is even.)

We must be careful with the identity element's definition β€” it's deceptively simple. There are three pieces to the definition: (1) e ⋆ m=m,{e \op m = m,} (2) m ⋆ e=m,{m \op e = m,} and (3) condition 1 and condition 2 hold for all the elements of the carrier set. Thus, if we want to prove that some element e{e} is an identity element, we must prove all three of these pieces. As it turns out, we can satisfy condition 1 without satisfying condition 2, condition 2 without condition 1, or neither condition 1 nor condition 2. This leads to the notion of a one-sided identity.

one-sided identity. Let S{S} be a carrier set and  ⋆ {\op} an operation on S.{S.} Given an element e∈S,{e \in S,} we say that e{e} is a left-sided identity if, and only if, for all s∈S,{s \in S,} e ⋆ s=s.{e \op s = s.} We say that e{e} is a right-sided identity if, and only if, for all s∈S,{s \in S,} s ⋆ e=s.{s \op e = s.} If e ⋆ s=s{e \op s = s} for all s∈S{s \in S} but s ⋆ eβ‰ s{s \op e \neq s} for some s∈S,{s \in S,} then e{e} is a strictly left-sided identity, and if s ⋆ e=s{s \op e = s} for all s∈S{s \in S} but e ⋆ sβ‰ s{e \op s \neq s} for some s∈S,{s \in S,} then e{e} is a strictly right-sided identity. If e{e} is either strictly left-sided or strictly right-sided, we say that e{e} is a one-sided identity. If e{e} is both left-sided and right-sided, then e{e} is an identity element.
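
For a finite carrier, we can hunt for left- and right-sided identities by exhaustive search. A minimal sketch (the helper names are ours); the second example is a contrived operation with right identities but no left identity:

```python
def left_identities(carrier, op):
    return [e for e in carrier if all(op(e, s) == s for s in carrier)]

def right_identities(carrier, op):
    return [e for e in carrier if all(op(s, e) == s for s in carrier)]

# Addition mod 4 on {0, 1, 2, 3}: 0 is a two-sided identity.
Z4 = range(4)
add_mod4 = lambda a, b: (a + b) % 4
print(left_identities(Z4, add_mod4), right_identities(Z4, add_mod4))  # [0] [0]

# A contrived operation op(a, b) = a: every e satisfies op(s, e) = s,
# but no e satisfies op(e, s) = s for all s.
proj = lambda a, b: a
print(left_identities(Z4, proj), right_identities(Z4, proj))  # [] [0, 1, 2, 3]
```

Note that the second example has many right identities but no left identity, which is consistent with the lemma below: two-sided identities, when they exist, are unique.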

Suppose we're given an algebra (S,βˆ—).{(S,\ast).} We're told that βˆ—{\ast} has the left identity e1∈S{e_1 \in S} and the right identity e2∈S.{e_2 \in S.} From the definition above, we can infer that, given a∈S{a \in S} and b∈S,{b \in S,}

e1βˆ—a=abβˆ—e2=b. e_1 \ast a = a \\ b \ast e_2 = b.

What if a=e2{a = e_2} and b=e1?{b = e_1?} Well, a simple substitution shows:

e1βˆ—e2=e2e1βˆ—e2=e1. e_1 \ast e_2 = e_2 \\ e_1 \ast e_2 = e_1.

That is, e1=e2=e.{e_1 = e_2 = e.} Consequently, we have the following lemma.

lemma. Given a carrier set S{S} and a binary operation  ⋆ {\op} on S,{S,} there is at most one identity element with respect to  ⋆ .{\op.}

Inverse

If a carrier set has an identity element, we often want to ask whether every element of the carrier set has a left inverse, a right inverse, or both. Here is the definition of a left inverse:

left inverse. Let S{S} be a carrier set,  ⋆ {\op} an operation on S,{S,} and e{e} the identity element of S.{S.} An element a∈S{a \in S} has a left inverse aβˆ’1∈S{a^{-1} \in S} if, and only if,

aβˆ’1 ⋆ a=e. a^{-1} \op a = e.

And here is the definition of the right inverse:

right inverse. Let S{S} be a carrier set,  ⋆ {\op} an operation on S,{S,} and e{e} the identity element of S.{S.} An element a∈S{a \in S} has a right inverse aβˆ’1∈S{a^{-1} \in S} if, and only if,

a ⋆ aβˆ’1=e. a \op a^{-1} = e.

If every element of the carrier set has a single element serving as both its left and its right inverse, we say that every element of the carrier set has an inverse.

inverse. Let S{S} be a carrier set,  ⋆ {\op} an operation on S,{S,} and e{e} the identity element of S.{S.} An element a∈S{a \in S} has an inverse aβˆ’1∈S{a^{-1} \in S} if, and only if,

a ⋆ aβˆ’1=aβˆ’1 ⋆ a=e. a \op a^{-1} = a^{-1} \op a = e.
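
As a concrete check, consider the carrier { 0,1,…,5 }{\set{0,1,\ldots,5}} under addition modulo 6 (an example of ours), where the identity is 0.{0.} A minimal sketch; the helper inverses is ours:

```python
def inverses(carrier, op, e):
    """Map each element to its two-sided inverses under op, given identity e."""
    return {a: [b for b in carrier if op(a, b) == e and op(b, a) == e]
            for a in carrier}

Z6 = range(6)
add_mod6 = lambda a, b: (a + b) % 6
print(inverses(Z6, add_mod6, 0))
# {0: [0], 1: [5], 2: [4], 3: [3], 4: [2], 5: [1]} -- every element is invertible
```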

Distributivity

The third property we'll consider requires us to introduce a second binary operator  ⍝ .{\lamp.} Thus, we now have an algebra (S,{  ⋆ , ⍝  }),{(S, \set{\op, \lamp}),} where S{S} is a carrier set.1 With two operations, a question we're interested in is whether the operations are left distributive, right distributive, or both left and right distributive.

Left-distributivity is defined as follows:

left-distributivity. Let S{S} be a carrier, and  ⋆ {\op} and  ⍝ {\lamp} be operations on S.{S.} We say that  ⋆ {\op} is left-distributive over  ⍝ {\lamp} if, and only if, for all a,b,c∈S,{a,b,c \in S,}

a ⋆ (b ⍝ c)=(a ⋆ b) ⍝ (a ⋆ c). a \op (b \lamp c) = (a \op b) \lamp (a \op c).

Right-distributivity is defined as follows:

right-distributivity. Let S{S} be a carrier, and  ⋆ {\op} and  ⍝ {\lamp} be operations on S.{S.} We say that  ⋆ {\op} is right-distributive over  ⍝ {\lamp} if, and only if, for all a,b,c∈S,{a,b,c \in S,}

(b ⍝ c) ⋆ a=(b ⋆ a) ⍝ (c ⋆ a). (b \lamp c) \op a = (b \op a) \lamp (c \op a).

If an operation  ⋆ {\op} is both left- and right-distributive over an operation  ⍝ ,{\lamp,} we say that  ⋆ {\op} distributes over  ⍝ ,{\lamp,} or  ⋆ {\op} is distributive over  ⍝ .{\lamp.}

distributivity. Let S{S} be a carrier, and  ⋆ {\op} and  ⍝ {\lamp} be operations on S.{S.} We say that  ⋆ {\op} is distributive over  ⍝ {\lamp} if, and only if, for all a,b,c∈S,{a,b,c \in S,}

a ⋆ (b ⍝ c)=(a ⋆ b) ⍝ (a ⋆ c)(b ⍝ c) ⋆ a=(b ⋆ a) ⍝ (c ⋆ a). \eqs{ a \op (b \lamp c) &= (a \op b) \lamp (a \op c) \\ (b \lamp c) \op a &= (b \op a) \lamp (c \op a). }

A few things to observe from the definition of distributivity: First, it's a property that always involves two distinct operations. Accordingly, the statement "multiplication is distributive" is akin to an incomplete sentence in natural language; it's ambiguous. Why? Because the adjective "distributive" entails that the distributive "thing" distributes. And if something distributes, there's always a distributor (the thing that distributes) and a distributee (the thing that receives the distribution). Thus, when we state that "x{x} is distributive" without more, the audience is told the distributor, but not the distributee, which leaves them with no way of verifying whether the statement is true or false β€” it's ambiguous. And in the world of mathematics, ambiguous statements are useless. Accordingly, to state that something is distributive, the statement must always take one of these forms:

(operation1){({operation}_1)} is distributive over (operation2).{({operation}_2).}

(operation1){({operation}_1)} distributes over (operation2).{({operation}_2).}

Second, distributivity is our first encounter with a property that relates one operation to another. We'll see that there are many other such properties, but distributivity is by far the most common. Casting distributivity in functional terms, suppose  ⋆ =f{\op = f} and  ⍝ =g.{\lamp = g.} If f{f} is left-distributive over g,{g,} we get:

a ⋆ (b ⍝ c)=(a ⋆ b) ⍝ (a ⋆ c)aΒ fΒ (bΒ gΒ c)=(aΒ fΒ b)Β gΒ (aΒ fΒ c)aΒ fΒ (g(b,c))=(f(a,b))Β gΒ (f(a,c))f(a,g(b,c))=g(f(a,b),f(a,c))\eqs{ a \op (b \lamp c) &= (a \op b) \lamp (a \op c) \\ a ~f~ (b ~g~ c) &= (a ~f~ b) ~g~ (a ~f~ c) \\ a ~f~ (g(b,c)) &= (f(a,b)) ~g~ (f(a,c)) \\ f(a,g(b,c)) &= g(f(a,b),f(a,c)) \\ }

If f{f} is right-distributive over g,{g,} we get:

(b ⍝ c) ⋆ a=(b ⋆ a) ⍝ (c ⋆ a)(bΒ gΒ c)Β fΒ a=(bΒ fΒ a)Β gΒ (cΒ fΒ a)(g(b,c))Β fΒ a=(f(b,a))Β gΒ (f(c,a))f(g(b,c),a)=g(f(b,a),f(c,a))\eqs{ (b \lamp c) \op a &= (b \op a) \lamp (c \op a) \\ (b ~g~ c) ~f~ a &= (b ~f~ a) ~g~ (c ~f~ a) \\ (g(b,c)) ~f~ a &= (f(b,a)) ~g~ (f(c,a)) \\ f(g(b,c),a) &= g(f(b,a),f(c,a)) }

And if f{f} is distributive over g,{g,} we have:

a ⋆ (b ⍝ c)=(a ⋆ b) ⍝ (a ⋆ c)(b ⍝ c) ⋆ a=(b ⋆ a) ⍝ (c ⋆ a)f(a,g(b,c))=g(f(a,b),f(a,c))f(g(b,c),a)=g(f(b,a),f(c,a)) \eqs{ a \op (b \lamp c) &= (a \op b) \lamp (a \op c) \\ (b \lamp c) \op a &= (b \op a) \lamp (c \op a) \\ f(a,g(b,c)) &= g(f(a,b),f(a,c)) \\ f(g(b,c),a) &= g(f(b,a),f(c,a)) }
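
Because  ⋆ {\op} and  ⍝ {\lamp} are just functions, these identities can be tested mechanically. A sketch with f{f} as multiplication and g{g} as addition, over a small sample of integers (the sample range is ours):

```python
from itertools import product
from operator import add, mul

f, g = mul, add  # candidate: f distributes over g

print(all(f(a, g(b, c)) == g(f(a, b), f(a, c)) and
          f(g(b, c), a) == g(f(b, a), f(c, a))
          for a, b, c in product(range(-3, 4), repeat=3)))  # True on this sample

# The converse fails: + does not distribute over *.
print(all(add(a, mul(b, c)) == mul(add(a, b), add(a, c))
          for a, b, c in product(range(-3, 4), repeat=3)))  # False
```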

Axiomatic Algebras

Having defined the axiomatic properties, we turn to axiomatic algebras. Like the axiomatic properties, an axiomatic algebra is an algebra that mathematicians have given a special name, because (1) the algebra has some interesting combination of the axiomatic properties and (2) the algebra appears over and over again in many proofs.

Before we look at the algebras themselves, let's establish some conventions associated with algebras. As we saw, an algebra is a tuple (S,O),{(S,\Oo),} where S{S} is some set of elements and O{\Oo} is a set of operations on those elements. Now that we're discussing algebras, we'll want to refer to them without having to write (S,O){(S,\Oo)} over and over again. This is particularly the case once we start talking about larger algebras. The convention in abstract algebra is to use fraktur letters for algebraic structures. For example, an algebra (G,+){(G,+)} may be denoted G.{\mathfrak{G}.} This notation has the benefits of (1) conciseness and (2) the ability to distinguish the carrier G{G} and the algebra G.{\mathfrak{G}.}

Semigroup

The two most basic algebras are the associative semigroup and the commutative semigroup. As we'll see later, recognizing these algebras can allow us to deduce some interesting propositions. First, let's consider the semigroup, a very simple algebra:

semigroup. Let S{S} be a carrier set and  ⋆ {\op} an operation on S.{S.} Then the algebra (S, ⋆ ){(S, \op)} is an associative semigroup if, and only if,  ⋆ {\op} is associative on S.{S.}

For example, the algebras (N,+),{(\nat, +),} (Z,+),{(\uint, +),} (N,Γ—),{(\nat, \times),} and (Z,Γ—){(\uint, \times)} are all semigroups. The algebra (Z,βˆ’){(\uint, -)} is not a semigroup, because βˆ’{-} is not associative on the integers. The algebra (N,βˆ’){(\nat,-)} is not a semigroup, because βˆ’{-} is not even an operation on N.{\nat.} Of note, we often refer to algebras with the following syntactic form:

<property>{\ltn \text{property} \gtn} <algebra>{\ltn \text{algebra} \gtn} of the <carrier>{\ltn \text{carrier} \gtn}

For example, the semigroup (Z,+){(\uint,+)} is called the additive semigroup of the integers, and the semigroup (N,Γ—){(\nat,\times)} is called the multiplicative semigroup of the naturals.

Commutative Semigroup

The next simplest algebra is the commutative semigroup:

commutative semigroup. Let S{S} be a carrier set and  ⋆ {\op} an operation on S.{S.} Then the algebra (S, ⋆ ){(S, \op)} is a commutative semigroup if, and only if,  ⋆ {\op} is commutative on S.{S.}

For example, the algebras (N,+),{(\nat, +),} (Z,+),{(\uint, +),} (N,Γ—){(\nat, \times)} and (Z,Γ—){(\uint, \times)} are all commutative semigroups: the operations of addition and multiplication are commutative on N{\nat} and Z.{\uint.}

Monoid

The third simplest algebra is the monoid.

monoid. Let S{S} be a carrier set and  ⋆ {\op} an operation on S.{S.} Then the algebra (S, ⋆ ){(S, \op)} is a monoid if, and only if, the following propositions are true: (1) (S, ⋆ ){(S, \op)} is an associative semigroup, and (2) there exists an identity element e∈S.{e \in S.}

For example, the algebras (N,+){(\nat,+)} and (Z,+){(\uint,+)} are monoids, because their carrier sets contain the additive identity 0.{0.} Likewise, the algebras (N,Γ—){(\nat, \times)} and (Z,Γ—){(\uint,\times)} are monoids, because their carrier sets contain the multiplicative identity 1.{1.} In contrast, the algebra (2Z,Γ—){(2 \uint, \times)} is a semigroup but not a monoid, because the multiplicative identity 1{1} is not a member of the set of even integers 2Z.{2 \uint.}
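
To see why (2Z,Γ—){(2\uint,\times)} has no identity, note that any identity e{e} would have to satisfy eΓ—2=2,{e \times 2 = 2,} forcing e=1,{e = 1,} which is odd. A small sketch scanning a finite slice of 2Z{2\uint} (the slice is ours; it illustrates the reasoning, it is not a proof):

```python
evens = range(-10, 11, 2)   # a finite slice of the even integers
times = lambda a, b: a * b

# Scan the slice for a multiplicative identity.
candidates = [e for e in evens if all(times(e, s) == s for s in evens)]
print(candidates)  # []: the only number fixing everything is 1, and 1 is odd
```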

Commutative Monoids

If S{S} contains an identity element and  ⋆ {\op} is associative but not necessarily commutative, we say that (S, ⋆ ){(S, \op)} is an associative monoid. If S{S} contains an identity element and  ⋆ {\op} is commutative but not necessarily associative, we say that (S, ⋆ ){(S, \op)} is a commutative monoid.

Groups

The fourth simplest algebra is the group. By far, groups are the most common of the simple algebras. In fact, they're so common that there's an entire subfield of abstract algebra dedicated to them, called group theory.

group. Let S{S} be a carrier set and  ⋆ {\op} be an operation on S.{S.} Then the algebra (S, ⋆ ){(S, \op)} is a group if, and only if, all of the following propositions are true: (1)  ⋆ {\op} is associative. (2) There exists an identity element e∈S.{e \in S.} And (3) for all a∈S,{a \in S,} there exists an inverse aβˆ’1∈S.{a^{-1} \in S.} If the carrier S{S} is a finite set, then we say that the group is a finite group.

Notice that from the definition above, the group is really just an associative monoid with an addon: Every element a∈S{a \in S} has an inverse aβˆ’1∈S{a^{-1} \in S} such that a ⋆ aβˆ’1=aβˆ’1 ⋆ a=e.{a \op a^{-1} = a^{-1} \op a = e.}

Let's consider some examples. The algebra (Z,+){(\uint, +)} is a group, called the additive group of integers. Likewise, the algebra (Q,+){(\rat, +)} is a group, called the additive group of rationals. The algebra (N,+),{(\nat, +),} however, is not a group: while 0{0} serves as the identity, no positive natural number has an additive inverse in N.{\nat.} (And if we defined the natural numbers as excluding zero, which we do not in these materials, there would not even be an identity.)

Importantly, the definition of a group does not require commutativity. Thus, a given group G=(S, ⋆ ),{G = (S, \op),} without further information, does not tell us whether  ⋆ {\op} is commutative on S.{S.} If it is commutative, then we say that G{G} is an Abelian group.
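
A standard example of a non-Abelian group is the set of all permutations of three symbols under composition. A minimal sketch, representing a permutation as a tuple p with p[i] the image of i (the encoding is ours):

```python
from itertools import permutations

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))  # all 6 permutations of {0, 1, 2}

# Composition is associative, the identity (0, 1, 2) is in S3, and every
# permutation has an inverse, so this is a group -- but not a commutative one:
p, q = (1, 0, 2), (0, 2, 1)  # swap first two symbols; swap last two symbols
print(compose(p, q))  # (1, 2, 0)
print(compose(q, p))  # (2, 0, 1): composing in the other order differs
```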

Properties of Groups

Recall that in our discussion of the identity element, we showed that an algebra can have at most one identity element. Since a group, by definition, must have an identity element, we have the following property:

group identity. Given a group (G, ⋆ ){(G,\op)} there exists exactly one element e∈G,{e \in G,} called the group identity, such that, for all a∈G,{a \in G,} it follows that a ⋆ e=e ⋆ a=a.{a \op e = e \op a = a.}

Now consider this question: Can an element a∈G{a \in G} have two inverses? Well, let's suppose a{a} has the inverses aβ€²{a'} and aβ€²β€².{a''.} Then we have:

a′ ⋆ (a ⋆ aβ€²β€²)=a′ ⋆ e=aβ€²(a′ ⋆ a) ⋆ aβ€²β€²=e ⋆ aβ€²β€²=aβ€²β€². \eqs{ a' \op (a \op a'') &= a' \op e = a' \\ (a' \op a) \op a'' &= e \op a'' = a''. }

But since  ⋆ {\op} is associative by definition, it follows that:

a′ ⋆ (a ⋆ a′′)=(a′ ⋆ a) ⋆ a′′a′=a′′. \eqs{ a' \op (a \op a'') &= (a' \op a) \op a'' \\ a' &= a''. }

Hence, we have the following property:

group inverse. Given a group (G, ⋆ ){(G,\op)} and an element a∈G,{a \in G,} there exists exactly one element aβˆ’1∈G,{a^{-1} \in G,} called the inverse of a,{a,} such that a ⋆ aβˆ’1{a \op a^{-1}} is the identity element e∈G.{e \in G.}

Abelian Group

The term "group" without more, communicates an assocative monoid with the additional property of every element having an inverse. For commutative monoids, we have the Abelian group (named after the mathematician Niels Abel, a forefather of group theory).

abelian group. Let S{S} be a carrier set and  ⋆ {\op} be an operation on S.{S.} Then the algebra (S, ⋆ ){(S, \op)} is an Abelian group if, and only if, both of the following propositions are true: (1) (S, ⋆ ){(S, \op)} is a group, and (2)  ⋆ {\op} is commutative on S.{S.}

The algebras (Z,+){(\uint,+)} and (Q,+){(\rat,+)} are Abelian groups. In contrast, the algebra (Ξ£,β§Ί),{(\Sigma, \con),} where Ξ£{\Sigma} is a set of strings over an alphabet and β§Ί{\con} is concatenation, is not an Abelian group; indeed, it is not a group at all: no nonempty string has an inverse, and concatenation is noncommutative (recall the earlier demonstration that string concatenation is associative but noncommutative).

Rings

We now turn to our first "big" structure β€” the ring. We say "big" because we're now looking at a structure with two operations.

ring. Let R{\mathfrak{R}} be the algebra (R,{ +,βˆ—β€‰}),{(\R, \set{+,*}),} where R{\R} is a set, and +{+} and βˆ—{*} are operations on R.{\R.} Then R{\mathfrak{R}} is a ring if, and only if, the following propositions are true:

  1. (R,+){(\R,+)} is an Abelian group.
  2. For all a,b,c∈R,{a,b,c \in \R,} it follows that (aβˆ—b)βˆ—c=aβˆ—(bβˆ—c).{(a*b)*c = a*(b*c).} That is, βˆ—{*} is associative.
  3. For all a,b,c∈R:{a,b,c \in \R:} aβˆ—(b+c)=(aβˆ—b)+(aβˆ—c),{a*(b+c)=(a*b)+(a*c),} and (b+c)βˆ—a=(bβˆ—a)+(cβˆ—a).{(b+c)*a=(b*a)+(c*a).} That is, βˆ—{*} is distributive over +.{+.}

commutative ring. A ring R=(R,{ +,βˆ—β€‰}){\mathfrak{R}=(R,\set{+,*})} is a commutative ring if, and only if, the multiplication βˆ—{*} is commutative on R.{R.}

The structure (Z,{ +,βˆ—β€‰}){(\uint,\set{+,*})} is a commutative ring. It has an additive identity, 0, and a multiplicative identity, 1. It also has additive inverses: for every n∈Z,{n \in \uint,} there exists βˆ’n∈Z{-n \in \uint} with n+(βˆ’n)=0.{n + (-n) = 0.}

Ordered Rings

The structure (Z,{ +,βˆ—β€‰}){(\uint,\set{+,*})} is an example of an ordered ring.

ordered ring. A ring (S,{ +,βˆ—β€‰}){(S,\set{+,*})} is ordered if, and only if, there exists a nonempty set PβŠ‚S,{P \subset S,} called the positive elements of S,{S,} with the following properties:

  1. If a,b∈P,{a,b \in P,} then a+b∈P.{a+b \in P.}
  2. If a,b∈P,{a,b \in P,} then aβˆ—b∈P.{a*b \in P.}
  3. If a∈S,{a \in S,} then exactly one of the following is true: a∈P,{a \in P,} a=0,{a = 0,} or βˆ’a∈P.{-a \in P.}

Fields

field. A field is a structure F=(F,{ +,βˆ—β€‰}),{\mathfrak{F}=(F,\set{+,*}),} where F{F} is a set and { +,βˆ—β€‰}{\set{+,*}} are binary operations on F,{F,} satisfying the following properties, called the field axioms:

  1. (F,+){(F,+)} is an Abelian group.
  2. (Fβˆ–{ 0 },{β€‰βˆ—β€‰}){(F \smallsetminus \set{0}, \set{*})} is an Abelian group.
  3. βˆ—{*} is distributive over +{+} in F.{F.}
  4. 0β‰ 1.{0 \neq 1.}
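
The integers modulo a prime satisfy these axioms; here is a brute-force check of the interesting ones for p=5{p = 5} (the helper names are ours):

```python
from itertools import product

p = 5
F = range(p)
add = lambda a, b: (a + b) % p
mul = lambda a, b: (a * b) % p

# Additive inverses exist for all of F; multiplicative inverses for F \ {0}.
print(all(any(add(a, b) == 0 for b in F) for a in F))            # True
print(all(any(mul(a, b) == 1 for b in F) for a in F if a != 0))  # True

# Distributivity of * over +:
print(all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
          for a, b, c in product(F, repeat=3)))                  # True
```

If p{p} were composite, the second check would fail: for example, mod 6, the element 2 has no multiplicative inverse.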

Isomorphisms

Consider the following pairs of strings:

(β€œmadam”,Β Β β€œrotor”)(β€œpaper”,Β Β β€œtitle”)(β€œegg”,Β Β β€œadd”) \eqs{ &\ar{\string{madam},~~\string{rotor}} \\[1em] &\ar{\string{paper},~~\string{title}} \\[1em] &\ar{\string{egg},~~\string{add}} }

Do we see a pattern to these strings? Let's create a grid for each pair above. We'll have one row per string, one cell per character, and the following coloring rule: if a character occurs at more than one position in a given string S,{S,} then every occurrence of that character is assigned the color powder blue. Otherwise, no color is assigned. Putting it all together, we have:

m a d a m
r o t o r
p a p e r
t i t l e
e g g
a d d

Notice that "madam" and "rotor" have some relationship, "paper" and "title" have another relationship, and "egg" and "add" have another relationship. All of these relationships are examples of isomorphisms β€” the relationship of structural similarity between objects. The string "rotor" is an isomorph of "madam" and the string "madam" is an isomorph of "rotor." Likewise, "paper" is an isomorph of "title" and vice versa, and "egg" is an isomorph of "add" and vice versa.

Footnotes

  1. The symbol  ⍝ {\lamp} is called a lamp, and is used to denote a comment in the APL programming language. We use it here as a way to distinguish operations.