Graph Theory

The following materials are notes on graph theory.

  1. Terminology
    1. Simple Graphs
    2. Subgraphs
    3. Walks
    4. Paths
      1. Hamiltonian Paths
    5. Trails
    6. Circuits
    7. Cycles
    8. Connectedness
    9. Trees
    10. Graph Operations
    11. Digraphs
  2. Adjacency Matrices

Graph theory is a relatively young field with a great deal of terminology. We lay it all out now for expediency.

Terminology

Visually, graphs are circles connected with lines:

Each circle is called a vertex (or node), and each line is called an edge (or link). Below is a general definition of a graph. The numbers are simply objects associated with each node (what we can think of as labels or data). The objects chosen are a matter of application. They might be computers, people, chemical compounds; it's entirely up to us. Graph theory, in general, isn't particularly interested in what the vertices or edges represent. Instead, we're more interested in the general properties of graphs as objects in themselves.

graph. A graph G{G} is an ordered pair of sets (V,E).{(V, E).} The set V{V} is a set of objects called vertices. The set EV×V{E \subseteq V \times V} is a set of relations called edges. For all edges (a,b)E,{(a,b) \in E,} we may denote an edge with a variable e=(a,b).{e = (a,b).} Given a graph G{G} and absent ambiguity, we denote the set of all vertices of G{G} with the notation v(G),{\v(G),} and the set of all edges of G{G} with the notation e(G).{\e(G).} Given the edge e=(a,b){e = (a,b)} we say that a{a} and b{b} are adjacent to one another, and that a{a} and b{b} are incident to e.{e.}

order and size. Given a graph G,{G,} the cardinality of v(G){\v(G)} is called the order of G,{G,} denoted ord(G),{\tx{ord}(G),} and the cardinality of e(G){\e(G)} is called the size of G,{G,} denoted size(G).{\tx{size}(G).}

neighborhood. The neighborhood of a vertex v,{v,} denoted N(v),{\Nn(v),} is the set of all vertices in G{G} adjacent to v.{v.}

degree. Given a vertex v,{v,} the number of edges adjacent to v{v} is called the degree of v,{v,} denoted deg(v).{\tx{deg}(v).}

density. Given a graph G=(V,E),{G=(V,E),} the density of G,{G,} denoted den(G),{\tx{den}(G),} is defined as

den(G)=e(G)v(G), \tx{den}(G) = \frac{\abs{\e(G)}}{\abs{\v(G)}},

where v(G){{\abs{\v(G)}}} is the number of vertices and e(G){{\abs{\e(G)}}} is the number of edges. If e(G){{\abs{\e(G)}}} is proportional to v(G)2,{{\abs{\v(G)}}^2,} then we say that G{G} is dense. Otherwise, we say that G{G} is sparse.
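The definitions above translate directly to code. Below is a minimal sketch in Python, assuming a graph is stored as a vertex set and an edge list (all names here are illustrative, not part of any library):

```python
# A small graph as a vertex set and an edge list (illustrative example).
V = {"a", "b", "c", "d"}
E = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]

def neighborhood(v, edges):
    """N(v): the set of all vertices adjacent to v."""
    return ({b for (a, b) in edges if a == v}
            | {a for (a, b) in edges if b == v})

def degree(v, edges):
    """deg(v): the number of edges incident to v."""
    return sum(1 for (a, b) in edges if v in (a, b))

def density(vertices, edges):
    """den(G) = |E| / |V|."""
    return len(edges) / len(vertices)

print(neighborhood("c", E))  # {'a', 'b', 'd'}
print(degree("c", E))        # 3
print(density(V, E))         # 1.0
```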

The definitions above are broad. In particular, they don't preclude a graph from having vertices that connect to themselves, or "duplicate edges."

For example, the edge (e,e){(e,e)} is a loop, and the distinct edges (b,c)1{(b,c)_1} and (b,c)2{(b,c)_2} are parallel edges.

loop. Given a graph G=(V,E){G = (V,E)} and an edge (a,b)E,{(a,b) \in E,} the edge (a,b){(a,b)} is a loop if, and only if, a=b.{a = b.}

parallel edges. Given a graph G=(V,E),{G = (V,E),} two distinct edges e1{e_1} and e2{e_2} are parallel edges if, and only if, they have the same endpoints.

Simple Graphs

A graph with no loops or parallel edges is called a simple graph.

simple graph. A graph G=(V,E){G = (V,E)} is simple if, and only if, G{G} consists of only V{V} and E,{E,} and E{E} contains no loops and no parallel edges. That is, for all {a,b}E,{\set{a,b} \in E,} {a,b}={b,a}{\set{a,b} = \set{b,a}} and {a,a}E.{\set{a,a} \notin E.}
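The definition reduces to a mechanical check: an edge list describes a simple graph exactly when it contains no loops and no two edges with the same unordered endpoints. A sketch, with illustrative names:

```python
def is_simple(edges):
    """True iff the edge list has no loops and no parallel edges."""
    seen = set()
    for (a, b) in edges:
        if a == b:               # loop: an edge from a vertex to itself
            return False
        key = frozenset((a, b))  # unordered pair of endpoints
        if key in seen:          # parallel edge: same endpoints seen before
            return False
        seen.add(key)
    return True

print(is_simple([("a", "b"), ("b", "c")]))   # True
print(is_simple([("e", "e")]))               # False: loop
print(is_simple([("b", "c"), ("c", "b")]))   # False: parallel edges
```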

In the next section we discuss digraphs, where we introduce the notion of directed edges. Per the definition above, simple graphs have no direction. We can also see that given a simple graph Gs,{G_s,} there are, at most, (#V(#V1))/2{(\card{V}(\card{V}-1))/2} edges. This is because #V2,{\card{V}^2,} the cardinality of the Cartesian product V×V,{V \times V,} counts the #V{\card{V}} possible self-loops and counts each edge between distinct vertices twice. Thus, we subtract #V{\card{V}} and divide by 2.

theorem. Given a simple graph G=(V,E){G = (V,E)} with nN{n \in \nat} vertices, there are, at most, n(n1)2{\dfrac{n(n - 1)}{2}} edges.

This bound is partly why simple graphs are the most widely-used type of graph. The bound no longer exists once we allow parallel edges.

density of a simple graph. Given a simple graph G,{G,} the density of G{G} is defined as density(G)=(2#E)/(#V),{\df{density}(G)=(2 \by \card{E})/(\card{V}),} where #E{\card{E}} is the number of edges, and #V{\card{V}} is the number of vertices.
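We can check both facts numerically: a complete simple graph on n vertices has exactly n(n - 1)/2 edges, and by the density formula above its density is n - 1. A quick sketch in Python:

```python
from itertools import combinations

def complete_edges(n):
    """All unordered pairs of n labeled vertices: the edge set of a complete graph."""
    return list(combinations(range(n), 2))

n = 6
E = complete_edges(n)
print(len(E))          # 15, which equals 6 * 5 / 2
print(2 * len(E) / n)  # 5.0, which equals n - 1
```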

Subgraphs

In some cases, we're only interested in just a few parts of a graph G,{G,} rather than the entirety of G.{G.} Those parts are called subgraphs.

subgraph. Given a graph G=(V,E),{G=(V,E),} a subgraph of G{G} is a graph G=(V,E),{G' = (V',E'),} such that VV{V' \subseteq V} and EE,{E'\subseteq E,} where every edge in E{E'} joins vertices in V.{V'.}

The definition above applies whether or not G{G} is simple. As before, given an edge e=(a,b){e = (a,b)} we say that a{a} and b{b} are neighbors or adjacent, and we say that e{e} is incident to a{a} and b.{b.}

Walks

A sequence of vertices and edges is called a walk.

walk. Let G=(VG,EG){G=(V_G, E_G)} be a graph of nN{n \in \nat} vertices. A walk is a sequence W=(v1,e1,v2,e2,v3,e3,,vn1,en1,vn){W = (v_1, e_1, v_2, e_2, v_3, e_3, \ldots, v_{n-1}, e_{n-1}, v_n)} where viVG{v_i \in V_G} for all 1in,{1 \le i \le n,} eiEG{e_i \in E_G} for all 1in1,{1 \le i \le n - 1,} and each ei{e_i} joins vi{v_i} and vi+1.{v_{i+1}.} We define the number of edges in W{W} as the length of the walk.
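The alternating-sequence condition is easy to verify mechanically: every edge in the sequence must join the vertices on either side of it. A sketch, treating edges as unordered pairs (names illustrative):

```python
def is_walk(seq):
    """seq alternates v1, e1, v2, e2, ..., vn; each ei must join vi and v(i+1)."""
    vertices = seq[0::2]   # the even positions hold vertices
    edges = seq[1::2]      # the odd positions hold edges
    return all(
        set(e) == {vertices[i], vertices[i + 1]}
        for i, e in enumerate(edges)
    )

W = ["a", ("a", "b"), "b", ("b", "c"), "c"]
print(is_walk(W))                       # True
print(is_walk(["a", ("b", "c"), "c"]))  # False: edge does not join a and c
```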

Paths

A sequence of vertices and edges with no repeated vertices is called a path.

path. Let G=(VG,EG){G=(V_G, E_G)} be a graph of nN{n \in \nat} vertices. G{G} contains a path if, and only if, there exists a walk P=(v1,e1,v2,e2,,vn1,en1,vn){P=(v_1, e_1, v_2, e_2, \ldots, v_{n-1}, e_{n-1}, v_n)} where each viP{v_i \in P} is distinct, for all 1in.{1 \le i \le n.} We denote the set of all vertices comprising P{P} with v(P),{\v(P),} and the set of all edges as e(P).{\e(P).}

vertex disjoint paths. Two paths P{P} and Q{Q} are vertex disjoint if and only if v(P)v(Q)=.{\v(P) \cap \v(Q) = \nil.}

edge disjoint paths. Two paths P{P} and Q{Q} are edge disjoint if and only if e(P)e(Q)=.{\e(P) \cap \e(Q) = \nil.}

disjoint paths. Two paths P{P} and Q{Q} are disjoint if, and only if, they are both vertex disjoint and edge disjoint.

A Hamiltonian path on a graph G{G} is a path on G{G} containing every vertex of G.{G.}

hamiltonian path. A path P{P} on a graph G=(V,E){G=(V,E)} is a Hamiltonian path if, and only if, v(P)=v(G).{\v(P) = \v(G).}
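Deciding whether a graph has a Hamiltonian path is NP-complete, so no efficient general algorithm is known, but for tiny graphs we can simply try every ordering of the vertices. A brute-force sketch (names illustrative):

```python
from itertools import permutations

def has_hamiltonian_path(vertices, edges):
    """Try every vertex ordering; accept one whose consecutive vertices are adjacent."""
    adjacent = {frozenset(e) for e in edges}
    return any(
        all(frozenset((p[i], p[i + 1])) in adjacent for i in range(len(p) - 1))
        for p in permutations(vertices)
    )

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d")]      # a path graph: a-b-c-d
print(has_hamiltonian_path(V, E))             # True
print(has_hamiltonian_path(V, [("a", "b")]))  # False: c and d are unreachable
```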

Trails

A walk with no repeated edges is called a trail. If the first and the last vertices are equal, we call the walk a closed trail.

trail. Let G=(VG,EG){G=(V_G, E_G)} be a graph of nN{n \in \nat} vertices. G{G} contains a trail if, and only if, there exists a walk T=(v1,e1,v2,e2,v3,e3,,vn1,en1,vn){T=(v_1, e_1, v_2, e_2, v_3, e_3, \ldots, v_{n-1}, e_{n-1}, v_n)} where each eiT{e_i \in T} is distinct, for all 1i<n.{1 \le i \lt n.} If vn=v1{v_n = v_1} (that is, the walk returns to its starting vertex) we say that T{T} is a closed trail.

Circuits

A circuit is a trail that starts and finishes at the same node.

circuit. Given a graph G=(VG,EG){G=(V_G, E_G)} with nN{n \in \nat} vertices, we say that G{G} contains a circuit if, and only if, there exists a trail T=(v1,e1,v2,e2,v3,e3,,vn1,en1,vn){T=(v_1, e_1, v_2, e_2, v_3, e_3, \ldots, v_{n-1}, e_{n-1}, v_n)} such that vn=v1.{v_n = v_1.}
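Both definitions reduce to simple checks on a walk's edge sequence: a trail repeats no edge, and a circuit is a trail whose first and last vertices coincide. A sketch, assuming the walk is given as alternating vertices and edges:

```python
def is_trail(seq):
    """No edge (as an unordered pair) appears twice in the walk."""
    edges = [frozenset(e) for e in seq[1::2]]
    return len(edges) == len(set(edges))

def is_circuit(seq):
    """A trail that starts and ends at the same vertex."""
    return is_trail(seq) and seq[0] == seq[-1]

T = ["a", ("a", "b"), "b", ("b", "c"), "c", ("c", "a"), "a"]
print(is_trail(T))    # True: all three edges are distinct
print(is_circuit(T))  # True: the trail starts and ends at a
```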

Cycles

A cycle is a circuit where the only repeated vertices are the first and the last.

cycle. Let G=(VG,EG){G=(V_G, E_G)} be a graph of nN{n \in \nat} vertices. G{G} contains a cycle if, and only if, there exists a circuit T=(v1,e1,v2,e2,v3,e3,,vn1,en1,vn),{T=(v_1, e_1, v_2, e_2, v_3, e_3, \ldots, v_{n-1}, e_{n-1}, v_n),} where v1=vn{v_1 = v_n} and v1{v_1} is the only repeated vertex. If G{G} contains a cycle, then we say that G{G} is cyclic. Otherwise, we say that G{G} is acyclic.
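Whether an undirected graph is cyclic can be decided without enumerating cycles. One standard approach (a union-find sketch, not the only method): process the edges one at a time, merging components; a cycle exists exactly when some edge joins two vertices already in the same component.

```python
def is_cyclic(vertices, edges):
    """Union-find: an edge inside an existing component closes a cycle."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for (a, b) in edges:
        ra, rb = find(a), find(b)
        if ra == rb:        # a and b already connected: this edge makes a cycle
            return True
        parent[ra] = rb     # merge the two components
    return False

print(is_cyclic("abc", [("a", "b"), ("b", "c"), ("c", "a")]))  # True
print(is_cyclic("abc", [("a", "b"), ("b", "c")]))              # False
```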

Some cycles have a special property of containing every vertex of G.{G.} We call these cycles Hamiltonian cycles.

hamiltonian cycle. A cycle C{C} on a graph G{G} is a Hamiltonian cycle if, and only if, v(C)=v(G).{\v(C)=\v(G).}

Connectedness

A graph is called connected if it contains a path from every vertex to every other vertex. That is, no matter which two vertices we pick, there is a path between them.

connectedness. A graph G=(VG,EG){G=(V_G,E_G)} is connected if, and only if, for any two vertices x,yVG,{x,y \in V_G,} there exists a path whose endpoints are x{x} and y.{y.} Otherwise, we say that G{G} is disconnected.
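Connectedness can be checked with a single traversal: start a breadth-first search at any vertex and see whether it reaches every other vertex. A sketch (names illustrative):

```python
from collections import deque

def is_connected(vertices, edges):
    """BFS from an arbitrary start; connected iff every vertex is reached."""
    vertices = list(vertices)
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for (a, b) in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen = {vertices[0]}
    queue = deque(seen)
    while queue:
        v = queue.popleft()
        for w in adj[v] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == len(vertices)

print(is_connected("abcd", [("a", "b"), ("b", "c"), ("c", "d")]))  # True
print(is_connected("abcd", [("a", "b"), ("c", "d")]))              # False
```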

For a digraph D,{D,} if we can reach every vertex from every other vertex by following the edge directions, we call D{D} a strongly-connected graph.

strong-connectedness. A digraph D=(VD,ED){D=(V_D,E_D)} is strongly-connected if, and only if, for any two vertices x,yVD,{x,y \in V_D,} there exists a directed path from x{x} to y.{y.} If D{D} is not strongly-connected, but the undirected graph obtained by ignoring edge directions is connected, we say that D{D} is weakly-connected.

Trees

tree. A graph T{T} is called a tree if, and only if, T{T} is acyclic and T{T} is connected.

A spanning tree of a connected graph G{G} is a tree that contains all of G{G}'s vertices, but not necessarily all of its edges.

spanning tree. Given a connected graph G,{G,} a spanning tree of G,{G,} denoted t[G],{t[G],} is a tree (Vt,Et),{(V_t, E_t),} where Vt=v(G){V_t = \v(G)} and Ete(G).{E_t \subseteq \e(G).}

forest. A forest F{F} is a set {T1,T2,,Tn},{\set{T_1, T_2, \ldots, T_n},} where each Ti{T_i} is a tree, for all 1in{1 \le i \le n} with nN.{n \in \nat.}

spanning forest. Given a graph G,{G,} a spanning forest of G,{G,} denoted sf[G],{\df{sf}[G],} is a set {t1,t2,,tn},{\set{t_1, t_2, \ldots, t_n},} where each ti{t_i} is a spanning tree of one of the n{n} connected components of G,{G,} with nN.{n \in \nat.}
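A spanning forest can be extracted by the same breadth-first traversal used to test connectedness: the edges along which the search first discovers each vertex form a spanning tree of that component, and restarting the search in every unvisited component yields a spanning forest. A sketch (names illustrative):

```python
from collections import deque

def spanning_forest(vertices, edges):
    """Return the discovery edges of a BFS restarted in every component."""
    adj = {v: set() for v in vertices}
    for (a, b) in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, forest = set(), []
    for root in vertices:
        if root in seen:
            continue              # root's component already covered
        seen.add(root)
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for w in adj[v] - seen:
                seen.add(w)
                forest.append((v, w))  # tree edge: first discovery of w
                queue.append(w)
    return forest

F = spanning_forest("abcd", [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
print(len(F))  # 3, i.e. |V| minus the number of components (4 - 1)
```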

Graph Operations

complete graph. A graph G=(V,E){G = (V,E)} is a complete graph if, and only if, for any distinct aV{a \in V} and bV,{b \in V,} there exists an edge (a,b)E.{(a,b) \in E.} We denote a complete graph with the notation G[V]2.{G[V]^2.}

graph complement. Given a simple graph G=(V,E),{G = (V,E),} the complement of G,{G,} denoted Gc,{\nix{G},} is the graph on the same vertex set V{V} whose edges are exactly those edges of the complete graph G[V]2{G[V]^2} that are not in E.{E.}

graph union. Given the graphs G1=(V1,E1){G_1 = (V_1,E_1)} and G2=(V2,E2),{G_2 = (V_2,E_2),} the union of G1{G_1} and G2{G_2} is defined as G1G2=(V1V2,E1E2).{G_1 \cup G_2 = (V_1 \cup V_2, E_1 \cup E_2).}

Digraphs

Graphs can be directed or undirected. Directed graphs are also called digraphs. For example:

Here, the edges are directed a particular way. In an undirected graph, an edge (e,g){(e,g)} exists if, and only if, an edge (g,e){(g,e)} exists, because the pair is unordered. In a digraph, the "arrowhead" is determined by a function called the arrow function. This function can tell us that the directed edge (e,g){(e,g)} exists, but not (g,e).{(g,e).}

digraph. A digraph D{D} is an ordered triple (VD,ED,AD),{(V_D, E_D, A_D),} where VD{V_D} is a set of vertices and ED{E_D} is a set of directed edges. The function AD{A_D} is a mapping called the arrow function of D,{D,} such that AD:EDVD×VD{A_D : E_D \to V_D \times V_D} associates each directed edge with an ordered pair of vertices. Given a directed edge (u,v),{(u,v),} where u{u} and v{v} are vertices of VD,{V_D,} we call u{u} the edge's source, and v{v} the edge's destination. Given a vertex vVD,{v \in V_D,} the number of edges such that v{v} is a source is called the outdegree of v,{v,} denoted odeg(v).{\df{odeg}(v).} The number of edges such that v{v} is a destination is called the indegree of v,{v,} denoted indeg(v).{\df{indeg}(v).}
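Indegree and outdegree fall straight out of a directed edge list: count how often a vertex appears as a source or as a destination. A sketch (names illustrative):

```python
from collections import Counter

def degrees(edges):
    """Return (outdegree, indegree) counters for a directed edge list."""
    out_deg = Counter(src for (src, dst) in edges)
    in_deg = Counter(dst for (src, dst) in edges)
    return out_deg, in_deg

E = [("a", "b"), ("a", "c"), ("c", "b")]
out_deg, in_deg = degrees(E)
print(out_deg["a"])  # 2: a is the source of two edges
print(in_deg["b"])   # 2: b is the destination of two edges
```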

Adjacency Matrices

While graph diagrams look pretty, they're not the easiest to draw nor are they the best vehicles for reasoning. Another way to model graphs is with an adjacency table or adjacency matrix. For example, below is a graph diagram and its corresponding adjacency matrix.

     a{a}  b{b}  c{c}  d{d}  e{e}
a{a}  0    1    0    1    1
b{b}  0    0    1    0    0
c{c}  0    0    0    1    0
d{d}  0    0    0    0    1
e{e}  0    0    0    0    0

The left-most column indicates the head (the starting vertex), and the header row indicates the tail (the ending vertex). A more concise form of the adjacency table is an adjacency matrix:

[0101100100000100000100000]. \mx{ 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ }.

This should ring some bells: There is, in fact, a connection between graph theory and linear algebra. We will be seeing some linear algebra applications in these materials.
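The connection is concrete: if A{A} is the adjacency matrix of a graph, then entry (i,j){(i,j)} of Ak{A^k} counts the walks of length k{k} from vertex i{i} to vertex j.{j.} A sketch in plain Python, using the matrix above (rows and columns in the order a, b, c, d, e):

```python
# Adjacency matrix of the digraph above.
A = [
    [0, 1, 0, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
]

def matmul(X, Y):
    """Plain square-matrix multiplication."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# (A^2)[i][j] counts directed walks of length 2 from vertex i to vertex j.
A2 = matmul(A, A)
print(A2[0])  # [0, 0, 1, 0, 1]: the walks a -> b -> c and a -> d -> e
```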