Analysis of the Metric Characteristics of Graphs

Mathematical model. Metric characteristics of an undirected graph

Statement. If there is a route connecting two vertices, then there is a minimal route connecting these vertices. We denote the length of this route by d(v, w).

Definition. The value d(v, w) (finite or infinite) is called the distance between the vertices v and w. This distance satisfies the axioms of a metric:

1) d(v, w) ≥ 0, and d(v, w) = 0 if and only if v = w;

2) d(v, w) = d(w, v);

3) d(v, w) ≤ d(v, u) + d(u, w).

Definition. The diameter of a connected graph is the maximum distance between two of its vertices.

Definition. A center of a graph is a vertex for which the maximum distance to any other vertex is the smallest possible; this distance is called the radius of the graph.

Example 82.

For the graph G shown in fig. 3.16, find the radius, diameter and centers.

Fig. 3.16. The graph for Example 82

Solution.

To determine the centers, radius and diameter of the graph G, we find the matrix D(G) of distances between the vertices of the graph; its elements dij are the distances between the vertices vi and vj. For this we use the graphical representation of the graph. Note that the matrix D(G) is symmetric about the main diagonal.

Using the resulting matrix, for each vertex of the graph G we determine its largest distance to the other vertices: r(vi) = max dij over j, for i, j = 1, 2, …, 5. As a result we get: r(v1) = 3, r(v2) = 2, r(v3) = 2, r(v4) = 2, r(v5) = 3. The minimum of these numbers is the radius of the graph G, the maximum is its diameter. Hence R(G) = 2 and D(G) = 3, and the centers are the vertices v2, v3, v4.
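The computation in this example can be sketched in code. This is a minimal illustration, assuming an edge set (1–2, 2–3, 2–4, 3–4, 4–5) consistent with the distances used in the solution; eccentricities are found by breadth-first search.

```python
from collections import deque

def bfs_distances(adj, start):
    """Distances (in edges) from start to every vertex, via breadth-first search."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def metric_characteristics(adj):
    """Eccentricities, radius, diameter and centers of a connected graph."""
    ecc = {v: max(bfs_distances(adj, v).values()) for v in adj}
    radius = min(ecc.values())
    diameter = max(ecc.values())
    centers = [v for v, e in ecc.items() if e == radius]
    return ecc, radius, diameter, centers

# Assumed 5-vertex graph consistent with the distances of Example 82.
adj = {1: [2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3, 5], 5: [4]}
ecc, r, d, centers = metric_characteristics(adj)
print(ecc)            # {1: 3, 2: 2, 3: 2, 4: 2, 5: 3}
print(r, d, centers)  # 2 3 [2, 3, 4]
```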

Computing distances and determining paths in a graph is one of the most obvious and practical problems that arise in graph theory. Let us introduce some necessary definitions.

The eccentricity of a vertex of a graph is its distance to the vertex most distant from it. For a graph whose edges have no defined weights, the distance is measured as the number of edges.

The radius of a graph is the minimum of the vertex eccentricities, and the diameter is the maximum of the vertex eccentricities.

The center of a graph is formed by the vertices whose eccentricity equals the radius. The center may consist of one, several, or all of the graph's vertices.

Peripheral vertices are those whose eccentricity equals the diameter.

A simple chain whose length equals the diameter of the graph is called diametrical.

Theorem 12.1. In a connected graph, the diameter is at most the rank of its adjacency matrix.

Theorem 12.2 (Jordan). Every tree has a center consisting of one vertex or two adjacent vertices.

Theorem 12.3. If the diameter of a tree is even, then the tree has a single center and all diametrical chains pass through it; if the diameter is odd, then there are two centers and all diametrical chains contain the edge connecting them.

The practical value of the graph center is obvious. If, for example, we are dealing with a road graph whose vertices are cities, then the administrative center, warehouses, etc. are best placed at the mathematical center. The same approach applies to a weighted graph, in which the distances are the weights of the edges; the weight may be the Euclidean distance, the travel time, or the cost of moving between points.

Example 12.5. Find the radius, diameter and center of the graph shown in fig. 12.1.

Solution. In this problem it is convenient to use the distance matrix S. The element of this square symmetric matrix equals the distance between vertex i and vertex j. For the graph shown in fig. 12.1, the distance matrix has the following form:

. (12.2)

Let us calculate the eccentricity of each vertex. This value can be found as the maximum element of the corresponding column of the distance matrix (or row, since the matrix S is symmetric). We get:

The radius r of the graph is the minimum eccentricity of the vertices. In this case r = 2. Vertices 2, 4 and 5 have this eccentricity; these vertices form the center of the graph. The diameter d of the graph is the maximum eccentricity of the vertices. In this case d = 3; vertices 1 and 3 have this eccentricity, and they form the periphery of the graph. In the graph studied, every vertex turned out to be either central or peripheral; graphs of higher order contain other vertices as well.

The eccentricities of the vertices of a small graph are easily found by direct inspection of the drawing. However, a graph is not always given by a drawing, and it may also be large. Therefore another way of solving the previous problem is needed. The following theorem is known.

Theorem 12.4. Let A be the adjacency matrix of a graph G without loops, and let A^k = (a_ij^(k)), where k = 1, 2, … . Then a_ij^(k) equals the number of routes of length k from vertex v_i to vertex v_j.

Solving graph-theoretic problems by means of various transformations of the adjacency matrix is called the algebraic method.
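Theorem 12.4 is easy to check numerically. The sketch below (an assumed example, not a graph from the text) computes a power of the adjacency matrix of a triangle and reads off route counts; note that the diagonal of the square equals the vertex degrees.

```python
def mat_mult(X, Y):
    """Product of two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def walk_counts(A, k):
    """k-th power of the adjacency matrix: entry (i, j) counts routes of length k."""
    P = A
    for _ in range(k - 1):
        P = mat_mult(P, A)
    return P

# Triangle graph on vertices 0, 1, 2 (every pair adjacent).
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
A2 = walk_counts(A, 2)
print(A2)  # [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
```

Between two distinct vertices of the triangle there is exactly one route of length 2 (through the third vertex), and each diagonal entry 2 counts the two closed routes along the two incident edges.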

Example 12.6. Find the distance matrix of the graph shown in fig. 12.1, by the algebraic method.

Solution. The adjacency matrix of this graph is:

.

We fill in the distance matrix by examining the powers of the adjacency matrix. The ones of the adjacency matrix mark the pairs of vertices at distance one from each other (that is, connected by a single edge).

.

The diagonal elements of the distance matrix are zeros. Multiply the adjacency matrix by itself:

.

By the theorem, there are routes of length 2 between vertices 2 and 3, 1 and 4, etc. (since the power of the matrix is two). The number of routes is not used here; what matters is the existence of a route and its length, indicated by a nonzero element of the corresponding power of the matrix that was not already accounted for by a shorter route. We put 2 in the still-empty elements of the distance matrix and obtain the next approximation:

.

The distance between vertices 1 and 3 remains unknown. We continue multiplying the adjacency matrix by itself until a nonzero element appears in the corresponding position; the corresponding element of the distance matrix then equals that power of the adjacency matrix. At the next step we obtain the complete distance matrix.
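The procedure of this example can be sketched as follows. The graph here is an assumed 5-vertex example (the graph of fig. 12.1 is not reproduced in the text); the method is the same: scan successive powers of the adjacency matrix and record the first power at which each pair acquires a nonzero entry.

```python
def mat_mult(X, Y):
    """Product of two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def distance_matrix(A):
    """Distances from successive powers of the adjacency matrix A:
    a pair (i, j) first getting a nonzero entry in A^k lies at distance k."""
    n = len(A)
    D = [[0 if i == j else (1 if A[i][j] else None) for j in range(n)]
         for i in range(n)]
    P = [row[:] for row in A]
    for k in range(2, n):
        P = mat_mult(P, A)
        for i in range(n):
            for j in range(n):
                if D[i][j] is None and P[i][j]:
                    D[i][j] = k
    return D

# Adjacency matrix of an assumed 5-vertex graph (edges 1-2, 2-3, 2-4, 3-4, 4-5).
A = [[0, 1, 0, 0, 0],
     [1, 0, 1, 1, 0],
     [0, 1, 0, 1, 0],
     [0, 1, 1, 0, 1],
     [0, 0, 0, 1, 0]]
print(distance_matrix(A))
```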

In the last section we emphasized that the adjacency matrix $A$ introduced there, or more precisely the vertex adjacency matrix of a graph, plays a very essential role in graph theory. We noted as advantages of this matrix, first, that it is square, of order equal to the number of rows of the incidence matrix $B$, and therefore, as a rule, contains fewer elements. Secondly, this matrix stores all the information about the edges of the graph and, for a given numbering of the vertices, describes the graph uniquely. The adjacency matrix, like the incidence matrix of a graph, is a (0,1)-matrix, i.e. its elements can be considered as elements of other algebraic structures, and not only as elements of the set of integers. In particular, we noted that the elements of the adjacency matrix can be considered as elements of a Boolean algebra, subject to the laws of Boolean arithmetic, but did not explain this properly. Before filling this gap, we emphasize the advantages of the adjacency matrix that follow from its being square.

To do this, recall the rules of matrix multiplication. Let two matrices with numerical elements be given: a matrix $A$ of dimension $n \times m$ with elements $a_{ik}$ and a matrix $B$ of dimension $m \times q$ with elements $b_{kj}$. A matrix $C$ of dimension $n \times q$ is called the product of the matrices $A$ and $B$ (the order matters) if its elements $c_{ij}$ are defined by $c_{ij} = \sum\limits_{k=1}^m a_{ik} b_{kj}$. The product of matrices is written in the usual way, $AB = C$. As one can see, the product of matrices requires consistency of the sizes of the first and second factors (the number of columns of the first factor equals the number of rows of the second). This requirement disappears if we consider square matrices of the same order, and we can therefore consider arbitrary powers of a square matrix. This is one advantage of square matrices over rectangular ones. Another advantage is that the elements of the powers of the adjacency matrix admit a graph interpretation.

Let the adjacency matrix $A$ have the form $A = \left( \begin{array}{cccc} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{array} \right)$, and its $k$-th power $A^k = \left( \begin{array}{cccc} a_{11}^{(k)} & a_{12}^{(k)} & \dots & a_{1n}^{(k)} \\ a_{21}^{(k)} & a_{22}^{(k)} & \dots & a_{2n}^{(k)} \\ \dots & \dots & \dots & \dots \\ a_{n1}^{(k)} & a_{n2}^{(k)} & \dots & a_{nn}^{(k)} \end{array} \right)$, where $k = 2, 3, \dots$. Obviously $A^k$, like the matrix $A$, is a symmetric matrix.

Let $k = 2$. Then $a_{ij}^{(2)} = \sum\limits_{l=1}^n a_{il} a_{lj}$ ($i, j = 1, 2, \dots, n$), and each term $a_{il} a_{lj}$ equals either $0$ or $1$. The case $a_{il} a_{lj} = 1$ means that the graph contains two edges: the edge $\{i, l\}$ (since $a_{il} = 1$) and the edge $\{l, j\}$ (since $a_{lj} = 1$), and hence the path $\{\{i, l\}, \{l, j\}\}$ of length two (a path of two edges) from the $i$-th vertex to the $j$-th. We speak here of a path rather than a chain, since a direction is indicated: from the $i$-th vertex to the $j$-th. Thus $a_{ij}^{(2)}$ gives the number of all paths (in the geometric interpretation of the graph) of length 2 leading from the $i$-th vertex to the $j$-th.

If $k = 3$, then $A^3 = A^2 A = A A^2 = AAA$ and $a_{ij}^{(3)} = \sum\limits_{l_1=1}^n a_{i l_1} a_{l_1 j}^{(2)} = \sum\limits_{l_1=1}^n a_{i l_1} \left( \sum\limits_{l_2=1}^n a_{l_1 l_2} a_{l_2 j} \right) = \sum\limits_{l_1, l_2 = 1}^n a_{i l_1} a_{l_1 l_2} a_{l_2 j}$.

The term $a_{i l_1} a_{l_1 l_2} a_{l_2 j}$, if it equals 1, defines a path of length 3 going from the $i$-th vertex to the $j$-th and passing through the vertices $l_1$ and $l_2$. Hence $a_{ij}^{(3)}$ gives the number of paths of length 3 connecting the $i$-th and $j$-th vertices. In the general case, $a_{ij}^{(k)}$ gives the number of paths of length $k$ connecting the $i$-th and $j$-th vertices. Moreover, $a_{ij}^{(k)} = \sum\limits_{l_1, l_2, \dots, l_{k-1} = 1}^n a_{i l_1} a_{l_1 l_2} \dots a_{l_{k-2} l_{k-1}} a_{l_{k-1} j}$.

Clearly, the quantity $a_{ii}^{(k)}$ gives the number of closed paths of length $k$ starting and ending at the vertex $i$. Thus a closed path of length 2, $a_{il} a_{li}$, is a path passing along the edge $\{i, l\}$ from the vertex $i$ to the vertex $l$ and back. Therefore $a_{ii}^{(2)} = s_i$, i.e. the diagonal elements of the matrix $A^2$ equal the degrees of the corresponding vertices.

Consider now, along with the matrix $A$, the matrix $\dot (A)$, which differs from the matrix $A$ only in that its elements (numbers 0 or 1) are considered as elements of the Boolean algebra. Therefore, actions with such matrices will be carried out according to the rules of Boolean algebra. Since the operations of addition and multiplication of matrices with Boolean elements are reduced to the operations of addition and multiplication of elements of these matrices according to the rules of Boolean arithmetic, we hope that this will not lead to difficulties. A matrix with Boolean elements will be called a Boolean matrix. Obviously, the operations of addition and multiplication of Boolean matrices are closed on the set of Boolean matrices, i.e. the result of these operations will again be a boolean matrix.

Obviously, for a given numbering of vertices, there is a one-to-one correspondence between Boolean adjacency matrices and graphs. Therefore, of interest is the graph interpretation of the actions of addition and exponentiation of Boolean adjacency matrices (in the general case, the product of two symmetric matrices of the same order is not necessarily a symmetric matrix).

The result of adding two Boolean symmetric matrices of the same order is a Boolean symmetric matrix of the same order with zeros in those places where both summands have zeros, and ones in those places where at least one summand has a one. In the graph interpretation this operation is called graph addition. The sum of two graphs defined on the same set of vertices with the same numbering is the graph in which the vertices i and j are non-adjacent if they are non-adjacent in both summand graphs, and adjacent if they are adjacent in at least one summand graph.

Let us now interpret the second power of the Boolean adjacency matrix, $\dot{A}^2$, with entries $\dot{a}_{ij}^{(2)} = \sum\limits_{l=1}^n \dot{a}_{il} \dot{a}_{lj}$. Clearly $\dot{a}_{ij}^{(2)} = 1$ if at least one term $\dot{a}_{il} \dot{a}_{lj}$ equals 1, and $\dot{a}_{ij}^{(2)} = 0$ if all terms equal 0. If the matrix $\dot{A}$ is the adjacency matrix of some graph, i.e. a symmetric (0,1)-matrix with zero main diagonal, then the matrix $\dot{A}^2$, generally speaking, is not an adjacency matrix of a graph in the sense we have adopted, since all of its diagonal elements equal 1 (provided the graph has no isolated vertices). To regard such matrices as adjacency matrices, we must, when considering the connections between the vertices of a connected system that define the system as a graph, admit connections of some vertices with themselves. An "edge" that joins a vertex to itself is called a loop. As before, by the word graph we shall understand a graph without loops; when speaking of a graph with loops, if this is not clear from the context, we shall say so explicitly: a graph with loops.

Consider the sum $\dot{A}^{[2]} = \dot{A} + \dot{A}^2$. The matrix $\dot{A}^{[2]}$ gives us the graph obtained from the original one by "saturating" it with additional connections corresponding to paths of length 2: the vertices $i$ and $j$ are adjacent in the new graph if they are adjacent in the original graph or are connected by some path of length 2, and non-adjacent if they are non-adjacent in the original graph and no path of length 2 connects them.

$\dot{A}^{[3]} = \dot{A} + \dot{A}^2 + \dot{A}^3$ is defined similarly: in the graph given by the matrix $\dot{A}^{[3]}$, the vertices $i$ and $j$ are adjacent if they are adjacent in the graph $\dot{A}^{[2]}$ or are connected by some path of length 3 in the original graph, and non-adjacent if they are non-adjacent in the graph $\dot{A}^{[2]}$ and no path of length 3 in the original graph connects them. And so on.

In general, $\dot{A}^{[k]} = \sum\limits_{i=1}^k \dot{A}^i$. It is easy to see that all $\dot{A}^{[k]}$ with $k \ge n - 1$, where $n$ is the order of the matrix $\dot{A}$, are equal. Indeed, if vertices $i$ and $j$ are connected, then there is a path (chain) connecting these vertices and, consequently, a simple path (simple chain) connecting them. The longest possible simple path in an $n$-vertex graph has length $n - 1$ (a simple path through all the distinct vertices of the graph). Therefore, if the matrix $\dot{A}^{[n-1]}$ has 1 in place $(i, j)$, then the matrix $\dot{A}^{[k]}$ with $k \ge n - 1$ also has 1 in that place, since $\dot{A}^{[n-1]}$ enters the definition of $\dot{A}^{[k]}$ as a Boolean summand. If the matrix $\dot{A}^{[n-1]}$ has 0 in place $(i, j)$, then there is no simple chain in the graph connecting the $i$-th and $j$-th vertices, and hence no chain at all connecting these vertices; consequently, the matrix $\dot{A}^{[k]}$ with $k \ge n - 1$ also has 0 in place $(i, j)$. This proves our assertion that all the matrices $\dot{A}^{[k]}$ with $k \ge n - 1$ are equal to the matrix $\dot{A}^{[n-1]}$ and hence to each other.

The matrix $\dot{A}^{[n-1]}$ is called the transitive closure of the matrix $\dot{A}$, and also the adjacency matrix of the transitive closure of the graph given by the matrix $\dot{A}$. It is quite obvious that the transitive closure matrix of a connected graph is the adjacency matrix of the complete graph (with loops at every vertex), i.e. a square matrix consisting of only ones. This observation also gives us a method for determining the connectivity of a graph: a graph is connected if and only if the transitive closure of its adjacency matrix consists of only ones.

The transitive closure matrix also makes it possible to solve the problem of splitting a graph into connected components.
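This connectivity test can be sketched directly. The code below computes the Boolean closure A^[n-1] as described above and checks whether it consists of only ones; the two small graphs at the end are assumed examples.

```python
def bool_mult(X, Y):
    """Boolean matrix product: OR of pairwise ANDs."""
    n = len(X)
    return [[int(any(X[i][l] and Y[l][j] for l in range(n))) for j in range(n)]
            for i in range(n)]

def transitive_closure(A):
    """A^[n-1] = A + A^2 + ... + A^(n-1), computed in Boolean arithmetic."""
    n = len(A)
    closure = [row[:] for row in A]
    power = [row[:] for row in A]
    for _ in range(n - 2):
        power = bool_mult(power, A)
        closure = [[int(closure[i][j] or power[i][j]) for j in range(n)]
                   for i in range(n)]
    return closure

def is_connected(A):
    """A graph is connected iff its transitive closure consists of only ones."""
    return all(all(x == 1 for x in row) for row in transitive_closure(A))

# A path on 4 vertices is connected; two disjoint edges are not.
path4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
two_edges = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
print(is_connected(path4), is_connected(two_edges))  # True False
```

The rows of the closure also solve the decomposition problem mentioned above: equal rows mark vertices lying in the same connected component.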

Let us now show how the transitive-closure procedure allows us to construct the so-called distance matrix. To do this, we define the distance between the vertices $i$ and $j$. If the vertices $i$ and $j$ are connected, then the distance between them is the length of a minimal (in the number of traversed edges) simple path connecting them; if the vertices $i$ and $j$ are disconnected, we set the distance equal to zero (zero as the negation of any path connecting these vertices). With this definition, the distance from a vertex to itself equals 2 (the length of the path along an edge and back); if there is a loop at the vertex, the distance from the vertex to itself equals 1.

To construct, for an $n$-vertex graph with adjacency matrix $A$, a distance matrix that indicates the distance between any two vertices, we introduce the matrices $A^{\{k\}} = A^{[k]} - A^{[k-1]}$, where $k = 2, 3, \dots, n - 1$, and $A^{\{1\}} = A^{[1]} = A$. The absence of dots above the matrix notation indicates that we treat the matrices $A^{[k]}$ ($k = 1, 2, \dots, n - 1$) as numerical (0,1)-matrices, naturally obtained from the matrices $\dot{A}^{[k]}$ (the Boolean elements 0 and 1 are now regarded as the numbers 0 and 1). It follows from the construction of the matrices $A^{[k]}$ that $A^{[k]} \ge A^{[k-1]}$ ($k = 2, 3, \dots, n - 1$), and hence the $A^{\{k\}}$ ($k = 1, 2, \dots, n - 1$) are (0,1)-matrices. Moreover, the matrix $A^{\{2\}}$ contains 1 only in those places whose vertices (row number and column number) are connected by some path of length two and by no shorter path. Similarly, $A^{\{3\}}$ contains 1 only in those places whose vertices are connected by a path of length three and by no shorter path, and so on. Thus the matrix $D = \sum\limits_{k=1}^{n-1} k \cdot A^{\{k\}}$ is the desired distance matrix: its element $d_{ij}$ equals the distance between the vertices $i$ and $j$. The distance between the vertices $u$ and $v$ will also be denoted $d(u, v)$.
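The construction of D can be sketched as follows; this is a minimal illustration on an assumed 3-vertex chain, and it reproduces the text's convention that the distance from a vertex to itself is 2.

```python
def bool_mult(X, Y):
    """Boolean matrix product: OR of pairwise ANDs."""
    n = len(X)
    return [[int(any(X[i][l] and Y[l][j] for l in range(n))) for j in range(n)]
            for i in range(n)]

def distance_matrix_from_closures(A):
    """D = sum over k of k * A^{k}, where A^{k} = A^[k] - A^[k-1] marks the
    pairs first connected at length k. Disconnected pairs keep distance 0."""
    n = len(A)
    closure = [row[:] for row in A]                      # A^[1]
    power = [row[:] for row in A]
    D = [[A[i][j] for j in range(n)] for i in range(n)]  # the k = 1 layer
    for k in range(2, n):
        power = bool_mult(power, A)                      # Boolean k-th power
        for i in range(n):
            for j in range(n):
                if power[i][j] and not closure[i][j]:
                    closure[i][j] = 1                    # first appears in A^[k]
                    D[i][j] = k
    return D

# Chain graph 1-2-3: the diagonal of D comes out as 2, as in D_1 and D_2 below.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
print(distance_matrix_from_closures(A))
```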

Comment. A particular summand $a_{i l_1} a_{l_1 l_2} \dots a_{l_{k-2} l_{k-1}} a_{l_{k-1} j} = 1$ of the element $a_{ij}^{(k)}$ of the $k$-th power $A^k$ of the adjacency matrix specifies a particular $(i, j)$-path $i \{i, l_1\} l_1 \{l_1, l_2\} l_2 \dots l_{k-2} \{l_{k-2}, l_{k-1}\} l_{k-1} \{l_{k-1}, j\} j$ from the $i$-th vertex to the $j$-th. Such a sequence of adjacent vertices and the edges connecting them is also called an $(i, j)$-route. A route differs from a chain, which consists only of distinct adjacent edges, in that repeated edges are allowed in a route. A simple route consists of distinct adjacent vertices and edges, i.e. it is essentially the same as a simple chain.

It is quite obvious that the element $d_{ij}$ of the distance matrix determines the length of a minimal chain connecting the $i$-th vertex with the $j$-th.

Consider examples of graphs given in Figures 1 and 2, their adjacency matrices and their distance matrices.

Fig.1 (Graph $\Gamma _1$, adjacency matrix $A_1$, distance matrix $D_1$).
$A_1 = \left( \begin{array}{*{13}c}
0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
\end{array} \right),$
$D_1 = \left( \begin{array}{*{13}c}
2 & 1 & 1 & 1 & 2 & 3 & 3 & 3 & 3 & 2 & 3 & 3 & 2 \\
1 & 2 & 1 & 1 & 2 & 3 & 3 & 3 & 3 & 2 & 3 & 3 & 2 \\
1 & 1 & 2 & 1 & 1 & 2 & 2 & 2 & 2 & 1 & 2 & 2 & 1 \\
1 & 1 & 1 & 2 & 2 & 3 & 3 & 3 & 3 & 2 & 3 & 3 & 2 \\
2 & 2 & 1 & 2 & 2 & 1 & 1 & 1 & 2 & 1 & 2 & 2 & 1 \\
3 & 3 & 2 & 3 & 1 & 2 & 1 & 1 & 3 & 2 & 3 & 3 & 2 \\
3 & 3 & 2 & 3 & 1 & 1 & 2 & 1 & 3 & 2 & 3 & 3 & 2 \\
3 & 3 & 2 & 3 & 1 & 1 & 1 & 2 & 3 & 2 & 3 & 3 & 2 \\
3 & 3 & 2 & 3 & 2 & 3 & 3 & 3 & 2 & 1 & 1 & 1 & 2 \\
2 & 2 & 1 & 2 & 1 & 2 & 2 & 2 & 1 & 2 & 1 & 1 & 1 \\
3 & 3 & 2 & 3 & 2 & 3 & 3 & 3 & 1 & 1 & 2 & 1 & 2 \\
3 & 3 & 2 & 3 & 2 & 3 & 3 & 3 & 1 & 1 & 1 & 2 & 2 \\
2 & 2 & 1 & 2 & 1 & 2 & 2 & 2 & 2 & 1 & 2 & 2 & 2 \\
\end{array} \right)$


Fig. 2 (Graph $\Gamma _2$, adjacency matrix $A_2$, distance matrix $D_2$).
$A_2 = \left( \begin{array}{*{10}c}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
\end{array} \right),$
$D_2 = \left( \begin{array}{*{10}c}
2 & 1 & 2 & 3 & 4 & 5 & 6 & 4 & 4 & 5 \\
1 & 2 & 1 & 2 & 3 & 4 & 5 & 3 & 3 & 4 \\
2 & 1 & 2 & 1 & 2 & 3 & 4 & 2 & 2 & 3 \\
3 & 2 & 1 & 2 & 1 & 2 & 3 & 1 & 1 & 2 \\
4 & 3 & 2 & 1 & 2 & 1 & 2 & 2 & 2 & 3 \\
5 & 4 & 3 & 2 & 1 & 2 & 1 & 3 & 3 & 4 \\
6 & 5 & 4 & 3 & 2 & 1 & 2 & 4 & 4 & 5 \\
4 & 3 & 2 & 1 & 2 & 3 & 4 & 2 & 2 & 3 \\
4 & 3 & 2 & 1 & 2 & 3 & 4 & 2 & 2 & 1 \\
5 & 4 & 3 & 2 & 3 & 4 & 5 & 3 & 1 & 2 \\
\end{array} \right).$

From the matrices $D_1$ and $D_2$ it is easy to determine the diameters $d_1$ of the graph $\Gamma _1$ and $d_2$ of the graph $\Gamma _2$ as the maximum values of the elements of these matrices: $d_1 = 3$ and $d_2 = 6$.

In addition to the distance matrix, graph theory also considers other matrices whose elements are defined in terms of path lengths. Such, for example, is the traversal matrix. In the traversal matrix, the $(i,j)$-th element equals the length of the longest path (the longest chain) from the $i$-th vertex to the $j$-th; if there are no such paths at all, then, in accordance with the definition of distance, the $(i,j)$-th element of the traversal matrix is set to zero.

At the end of the section we shall remark on methods for determining the minimal and maximal chains connecting the $i$-th and $j$-th vertices using the distance matrix.

We now give some more definitions of graph theory related to distances between vertices, all easily determined from the distance matrix.

The eccentricity $e(v)$ of a vertex $v$ in a connected graph $\Gamma$ is defined as $\max d(u,v)$ over all vertices $u$ of $\Gamma$. The radius $r(\Gamma)$ is the smallest of the vertex eccentricities; note that the largest of the eccentricities equals the diameter of the graph. A vertex $v$ is called a central vertex of the graph $\Gamma$ if $e(v) = r(\Gamma)$; the center of the graph $\Gamma$ is the set of all its central vertices.

So for the graph $\Gamma _1$ of Fig. 1, the eccentricity of vertex 13 equals 2 ($e(13) = 2$). Vertices 3, 5 and 10 have the same eccentricity ($e(3) = e(5) = e(10) = 2$). The eccentricity 2 is the smallest for the graph $\Gamma _1$, i.e. $r(\Gamma _1) = 2$, and the center of $\Gamma _1$ consists of the vertices 3, 5, 10 and 13. The largest eccentricity equals 3 and, as noted above, equals the diameter of $\Gamma _1$.

For the graph $\Gamma _2$ from Fig. 2, the only vertex 4 will have the smallest eccentricity ($e(4) = r(\Gamma _2) = 3$). Therefore, the center of the graph $\Gamma _2$ consists of one vertex 4. The diameter of the graph $\Gamma _2$, as noted above, is 6.

The graph $\Gamma _2$ is a tree, and the structure of the center of any tree is described by the following theorem.

The Jordan-Sylvester theorem. Each tree has a center consisting of either one vertex or two adjacent vertices.

Proof. Denote by $K_1$ the graph consisting of one isolated vertex and by $K_2$ the graph of two vertices connected by an edge. By definition we set $e(K_1) = r(K_1) = 0$. Then the assertion of the theorem holds for $K_1$ and $K_2$. Let us show that any tree $T$ has the same central vertices as the tree $T'$ obtained from $T$ by removing all its hanging vertices. It is clear that the distance from a given vertex $u$ to any other vertex $v$ can attain its greatest value only when $v$ is a hanging vertex.

Thus the eccentricity of each vertex of the tree $T'$ is exactly one less than the eccentricity of the same vertex in $T$; hence a vertex has minimal eccentricity in $T$ if and only if it has minimal eccentricity in $T'$, i.e. the centers of the trees $T$ and $T'$ coincide. Continuing the process of removing hanging vertices, we obtain a sequence of trees with the same center as $T$. Since $T$ is finite, we necessarily arrive either at $K_1$ or at $K_2$. In either case, all the vertices of the tree obtained in this way form the center of $T$, which therefore consists either of a single vertex or of two adjacent vertices.
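The leaf-removal argument of the proof is itself an algorithm for finding the center of a tree. The sketch below runs it on the tree $\Gamma _2$ of Fig. 2, whose edges are read off the matrix $A_2$ above; per the example, the center should be vertex 4.

```python
def tree_center(adj):
    """Center of a tree, found by repeatedly removing all hanging (degree-1)
    vertices until at most two vertices remain. adj: dict vertex -> neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    while len(adj) > 2:
        leaves = [v for v, ns in adj.items() if len(ns) <= 1]
        for leaf in leaves:
            for u in adj[leaf]:
                adj[u].discard(leaf)
            del adj[leaf]
    return sorted(adj)

# The tree Gamma_2 from Fig. 2: a chain 1-2-3-4-5-6-7 with vertices 8 and 9
# attached to vertex 4, and vertex 10 attached to vertex 9 (edges from A_2).
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (4, 8), (4, 9), (9, 10)]
adj = {v: set() for v in range(1, 11)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(tree_center(adj))  # [4]
```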

Let us now show how, using the distance matrix, one can determine, for example, a minimal path connecting vertex 4 with vertex 8 in the graph $\Gamma _1$. In the matrix $D_1$ the element $d_{48} = 3$. Take the 8th column of $D_1$ and find all the elements of this column equal to 1; at least one such element exists because the graph is connected. In fact there are three ones in the 8th column, located in rows 5, 6 and 7. Now take the 4th row and consider its elements in columns 5, 6 and 7: they are 2, 3 and 3, respectively. Only the element in column 5 equals 2 and, together with the 1 at place (5,8), gives the sum 3. Hence vertex 5 enters the chain $\{\{4, ?\}, \{?, 5\}, \{5, 8\}\}$. Now take the 5th column of the matrix and consider its ones: they are located in rows 3, 6, 7, 8, 10 and 13. Returning to the 4th row, we see that only at the intersection of the 3rd column and the 4th row is there a 1, which combined with the 1 at place (3,5) gives the sum 2. Therefore the desired chain is $\{\{4, 3\}, \{3, 5\}, \{5, 8\}\}$. Looking now at Fig. 1, we verify the solution found.
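The chain-restoration procedure just described can be sketched in code. For brevity the sketch walks forward from the start vertex rather than scanning columns, and uses an assumed 5-vertex graph (0-indexed) with the zero-diagonal distance convention of the earlier examples.

```python
def restore_min_chain(A, D, i, j):
    """Recover one minimal chain from vertex i to vertex j, given the adjacency
    matrix A and a distance matrix D with zero diagonal: at each step move to a
    neighbour that is one unit closer to j."""
    path = [i]
    v = i
    while v != j:
        v = next(u for u in range(len(A))
                 if A[v][u] == 1 and D[u][j] == D[v][j] - 1)
        path.append(v)
    return path

# Assumed 5-vertex graph (edges 1-2, 2-3, 2-4, 3-4, 4-5, written 0-indexed)
# together with its distance matrix.
A = [[0, 1, 0, 0, 0],
     [1, 0, 1, 1, 0],
     [0, 1, 0, 1, 0],
     [0, 1, 1, 0, 1],
     [0, 0, 0, 1, 0]]
D = [[0, 1, 2, 2, 3],
     [1, 0, 1, 1, 2],
     [2, 1, 0, 1, 2],
     [2, 1, 1, 0, 1],
     [3, 2, 2, 1, 0]]
print(restore_min_chain(A, D, 0, 4))  # [0, 1, 3, 4]
```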

Although modern textbooks say of the traversal matrix that "there are no effective methods for finding its elements", recall that using the incidence matrix one can find all the chains connecting a pair of vertices in a connected graph, and hence also the chains of maximum length.

Let G(V, X) be a pseudograph and let vertices v and w (v ≠ w) of this graph be connected by a route. Then there necessarily exists a minimal route connecting these vertices; denote its length d(v, w). We also set d(v, v) = 0 for any vertex v ∈ V, and d(v, w) = ∞ if there is no route between v and w.

The value d(v,w) defined in this way for any vertices v and w of the graph G(V, X) is called the distance between v and w.

The number of distances in a graph with n vertices equals the number of combinations C(n, 2).

Let the graph G(V,X) be connected. Let us define the following concepts for it:

Graph diameter: d(G) = max d(v, w), the maximum over all pairs of vertices;

Eccentricity (maximum displacement) of a vertex: r(v) = max d(v, w), the maximum over all vertices w;

Graph radius: r(G) = min r(v);

Graph center: any vertex v ∈ V such that r(v) = r(G).

The diameter of the graph, the eccentricities of the vertices, the radius of the graph, and the centers of the graph are called the metric characteristics of the graph.

Example. Find the metric characteristics of the graph given by the diagram:

Let us find all distances, taking into account that d(v, w) = d(w, v).

The number of distances in the given graph is C(5, 2) = 5!/(3!·2!) = 10: d(v1, v2) = 1, d(v1, v3) = 2, d(v1, v4) = 2, d(v1, v5) = 3, d(v2, v3) = 1, d(v2, v4) = 1, d(v2, v5) = 2, d(v3, v4) = 1, d(v3, v5) = 2, d(v4, v5) = 1.

Graph diameter: d(G) = 3.

Vertex eccentricities: r(v1) = 3, r(v2) = 2, r(v3) = 2, r(v4) = 2, r(v5) = 3.

Graph radius: r(G) = 2.

Graph centers: v2, v3, v4.

3. Minimum routes in loaded graphs

A graph G(V, X) is called loaded if on the set of edges of the graph a function, called a weight function, is defined that associates with each edge x ∈ X some number l(x). The value l(x) is called the length of the edge x.

The quantity l(x) can be given various meanings: transportation costs, travel time, distance between points, gasoline consumption, etc.

The sum of the lengths of the edges included in the route is called the length of the route.

Note that if l(x) = 1 for all x ∈ X, then the graph can be considered unloaded.

A route in the graph G(V, X) from vertex v to vertex w (v ≠ w) is called minimal if it has the minimum length among all routes of the graph G(V, X) from vertex v to vertex w.

We restrict ourselves to graphs for which l(x)>0.

When searching for a minimal route in a loaded graph with l(x) > 0, we use the same statement as for an unloaded graph, namely: any minimal route is a simple path.

Consider now the problem of finding the minimum route in a loaded graph.

Let the graph G(V, X) be loaded, with the number of vertices n ≥ 2; it is necessary to construct a minimal route from v1 to vn.


Let's give an algorithm.

Step 1. Assign an index a(vi) to each vertex: a(v1) = 0, a(vi) = ∞ for i ≠ 1. Color the vertex v1 and set v = v1.

Step 2. For each uncolored vertex vj, change its index according to the rule:

a(vj) = min(a(vj), a(v) + l(v, vj)).

Color the vertex vj for which a(vj) is smallest, and also color the edge leading to this vertex (the edge that produced the minimum in the rule above). Set v = vj.

Step 3. If v = vn, finish the procedure: the shortest route from v1 to vn has been found, and its length equals a(vn). If v ≠ vn, go to step 2.

Comment. Step 2 is impossible if all a(vj) = ∞. In this case the vertex vn is unreachable.

Let us apply the above algorithm to the graph given by the diagram and find in it the shortest route from v1 to v6.

Step 1. Color the vertex v1. Assign indices to the vertices: a(v1) = 0, a(v2) = a(v3) = … = a(v6) = ∞. Set v = v1.

a(v2) = min(∞, 0 + 4) = 4,

a(v3) = min(∞, 0 + 7) = 7,

a(v4) = min(∞, 0 + 3) = 3,

a(v5) = min(∞, 0 + ∞) = ∞,

a(v6) = min(∞, 0 + ∞) = ∞.

We color the vertex v4 and the edge (v1, v4).

Step 3. Since the vertex v6 is not colored, we perform step 2, setting v = v4.

a(v2) = min(4, 3 + ∞) = 4,

a(v3) = min(7, 3 + ∞) = 7,

a(v5) = min(∞, 3 + 3) = 6,

a(v6) = min(∞, 3 + ∞) = ∞.

We color the vertex v2 and the edge (v1, v2).

Step 3. Since the vertex v6 is not colored, we perform step 2, setting v = v2.

a(v3) = min(7, 4 + 3) = 7,

a(v5) = min(6, 4 + 2) = 6,

a(v6) = min(∞, 4 + ∞) = ∞.

We color the vertex v5 and the edge (v4, v5).

Step 3. Since the vertex v6 is not colored, we perform step 2, setting v = v5.

a(v3) = min(7, 6 + ∞) = 7,

a(v6) = min(∞, 6 + 2) = 8.

We color the vertex v3 and the edge (v1, v3).

Step 3. Since the vertex v6 is not colored, we perform step 2, setting v = v3.

a(v6) = min(8, 7 + 2) = 8.

We color the vertex v6 and the edge (v5, v6).

Since the vertex v6 is colored, we stop: the minimal route is v1 v4 v5 v6, and its length equals 8.

Note that in this case it is not the only minimal route between v1 and v6: the algorithm could have colored the edge (v2, v5) instead of (v4, v5), which would give another route of the same length.
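Both routes can be checked directly; the edge lengths below are reconstructed from the index computations in the example.

```python
# Edge lengths reconstructed from the index updates in the worked example
l = {('v1', 'v2'): 4, ('v1', 'v3'): 7, ('v1', 'v4'): 3,
     ('v2', 'v3'): 3, ('v2', 'v5'): 2, ('v4', 'v5'): 3,
     ('v3', 'v6'): 2, ('v5', 'v6'): 2}

def route_length(route):
    """Sum the lengths of consecutive edges along a route."""
    return sum(l.get((u, v), l.get((v, u))) for u, v in zip(route, route[1:]))

print(route_length(['v1', 'v4', 'v5', 'v6']))  # 3 + 3 + 2 = 8
print(route_length(['v1', 'v2', 'v5', 'v6']))  # 4 + 2 + 2 = 8
```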

4. Tasks on trees

A graph is called acyclic if it contains no cycles.

A graph without cycles is called a forest.

A tree is a connected acyclic graph.
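These definitions lend themselves to a simple computational check: a connected graph on n vertices is a tree exactly when it has n − 1 edges, so a tree test reduces to an edge count plus a connectivity check. A minimal sketch, under the assumption that the vertices are labeled 0 … n − 1:

```python
def is_tree(n, edges):
    """A connected graph on n vertices is acyclic iff it has exactly
    n - 1 edges, so check the edge count and then connectivity."""
    if len(edges) != n - 1:
        return False
    # check connectivity with a depth-first search from vertex 0
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n
```

For example, a graph with the right number of edges but a cycle is necessarily disconnected, and the connectivity check rejects it.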

Let G be a connected undirected graph. Since any two vertices v and w of the graph are connected, there exist simple chains with ends v and w. There may be several such chains; their lengths are non-negative integers, so among them there must be simple chains of the shortest length. The length of the shortest chain connecting the vertices v and w is denoted by d(v, w) and is called the distance between the vertices v and w. By definition, d(v, v) = 0.

It is easy to verify that the concept of distance introduced in this way satisfies the axioms of the metric:

1. d(v, w) ≥ 0;

2. d(v, w) = 0 if and only if v = w;

3. d(v, w) = d(w, v);

4. the triangle inequality holds: d(v, w) ≤ d(v, u) + d(u, w).

For a fixed vertex v of the graph, the distance to the vertex farthest from it, e(v) = max{d(v, w) : w ∈ V}, is called the eccentricity (maximum removal) of the vertex v.

The diameter of the graph is the number d(G) equal to the distance between the vertices of the graph most distant from each other:

d(G) = max{d(v, w) : v, w ∈ V}.

A simple chain whose length equals d(G) is called a diametral chain. It is obvious that the diameter of the graph equals the largest among all the eccentricities of its vertices. A vertex v is called peripheral if e(v) = d(G).

The smallest of the eccentricities of the vertices of a connected graph is called its radius and is denoted r(G):

r(G) = min{e(v) : v ∈ V}.

Since the diameter of the graph equals the largest of the eccentricities of the vertices, and the radius the smallest, the radius of a graph cannot exceed its diameter. A vertex v is called central if e(v) = r(G). The set of all central vertices of a graph is called its center. The center may consist of one vertex or of several vertices; there are graphs whose center coincides with the set of all their vertices. For example, the center of a simple chain consists of two vertices when the number of its vertices is even and of one vertex when it is odd, while in any cycle all vertices are central.

To illustrate, consider the graph in Fig. 4.29. Here vertex 2 is the center of the graph, so r(G) = 1, and all its other vertices are peripheral. The chain 1, 2, 3 is one of the diametral chains, so d(G) = 2.
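These characteristics can be computed by breadth-first search, since in an unweighted graph the distance is the number of edges. The sketch below assumes a connected graph given as adjacency lists; the small star graph in the check mimics the situation described for Fig. 4.29 (vertex 2 central, all other vertices peripheral), since the figure itself is not reproduced here.

```python
from collections import deque

def bfs_distances(adj, source):
    """Distances (in edges) from source to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def metrics(adj):
    """Radius, diameter, and center of a connected unweighted graph."""
    ecc = {v: max(bfs_distances(adj, v).values()) for v in adj}
    radius = min(ecc.values())
    diameter = max(ecc.values())
    center = [v for v in adj if ecc[v] == radius]
    return radius, diameter, center
```

On the star graph with hub 2 and leaves 1, 3, 4 this yields radius 1, diameter 2, and center [2].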

For a connected digraph, the distance between two vertices is defined as the distance between these vertices in the corresponding undirected graph.

The problem of finding the central vertices of a graph constantly arises in practice. Let, for example, the vertices of a graph correspond to small villages and its edges to the roads between them, and suppose we must place shops optimally in these settlements. In such situations the optimality criterion usually consists in optimizing the "worst" case, that is, minimizing the distance from a store to the most remote village. This approach leads to placing the stores in the settlements that are the central vertices of the graph.

Graph traversals

It has already been noted that the beginnings of graph theory are associated with the problem of the Königsberg bridges. This problem, famous in its time, is the following. Seven bridges of the city of Königsberg (now Kaliningrad) were located on the Pregel River as shown in Fig. 4.30. The task is to leave the house, cross each bridge exactly once, and return home.

Since only the crossings over bridges are essential in the problem, the city plan can be reduced to a graph (more precisely, a multigraph) in which the edges correspond to bridges and the vertices to the various separated parts of the city, which are indicated by letters (Fig. 4.30, on the right). Euler showed that it is impossible to cross all the Königsberg bridges once and return. In his work, published in 1736, he formulated and solved the following general problem of graph theory: under what conditions does a connected graph contain a cycle passing through each of its edges?

A cycle in a graph is called an Euler cycle if it contains all the edges of the graph. A connected graph that has an Euler cycle is called an Euler graph. Such a graph can be drawn without lifting the pencil from the paper and without repeating lines.

For example, the graph shown in Fig. 4.31 is an Euler graph because it contains the Euler cycle 1, 2, 3, 4, 5, 6, 4, 2, 6, 1. This graph contains other Euler cycles as well; clearly, any two of them differ from each other only in the order in which the edges are traversed.
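The claim can be checked mechanically: a closed walk is an Euler cycle exactly when it traverses every edge of the graph once. A minimal sketch; the edge list used in the check is reconstructed from the cycle itself, since Fig. 4.31 is not reproduced here.

```python
from collections import Counter

def is_euler_cycle(cycle, edges):
    """A closed walk is an Euler cycle iff it is closed and traverses
    every edge of the graph exactly once."""
    if cycle[0] != cycle[-1]:
        return False
    used = Counter(frozenset(e) for e in zip(cycle, cycle[1:]))
    wanted = Counter(frozenset(e) for e in edges)
    return used == wanted
```

Applied to the cycle 1, 2, 3, 4, 5, 6, 4, 2, 6, 1 and the nine edges it lists, the check succeeds.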

Theorem 4.7 (L. Euler, 1736). A connected graph is an Euler graph if and only if the degrees of all its vertices are even.

A chain is called an Euler chain if it contains all the edges of the graph.

Theorem 4.8 (L. Euler, 1736). A multigraph has an Euler chain if and only if it is connected and the number of its vertices of odd degree is 0 or 2.
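Theorems 4.7 and 4.8 reduce to a parity count over the vertex degrees. The sketch below assumes connectivity has already been verified and represents a multigraph by adjacency lists with repeated entries for parallel edges; the Königsberg data in the check encodes the classical configuration (two bridges A–B, two A–C, and one each of A–D, B–D, C–D).

```python
def euler_status(adj):
    """Classify a connected multigraph by the parity of its degrees:
    0 odd-degree vertices -> Euler cycle, 2 -> Euler chain, else neither."""
    odd = sum(1 for v in adj if len(adj[v]) % 2 == 1)
    if odd == 0:
        return "Euler cycle"
    if odd == 2:
        return "Euler chain"
    return "neither"
```

For the Königsberg multigraph all four vertices have odd degree (5, 3, 3, 3), so it admits neither an Euler cycle nor an Euler chain, exactly as Euler showed.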



Despite the "similarity" of the definitions of Euler and Hamiltonian cycles, the corresponding theories, which establish existence criteria and algorithms for finding such cycles, have little in common. Euler's theorem (Theorem 4.7) makes it easy to determine whether a graph is an Euler graph, and algorithms have been developed that find the Euler cycles of an Euler graph quite easily. With Hamiltonian graphs the situation is essentially different: as a rule, it is very difficult to decide whether a given graph is Hamiltonian, and no general criterion similar to Euler's exists. Yet, as it turned out, among the set of all graphs there are negligibly few Euler graphs, while Hamiltonian graphs are quite numerous.