\( \newcommand\al{\alpha} \newcommand\be{\beta} \newcommand\de{\delta} \newcommand\De{\Delta} \newcommand\eps{\epsilon} \newcommand\ga{\gamma} \newcommand\Ga{\Gamma} \newcommand\ka{\kappa} \newcommand\la{\lambda} \newcommand\La{\Lambda} \newcommand\om{\omega} \newcommand\Om{\Omega} \newcommand\sg{\sigma} \newcommand\Sg{\Sigma} \renewcommand\th{\theta} %--- Latex uses \th for a Norse character \newcommand\Th{\Theta} \newcommand\vphi{\varphi} % % some calligraphy % \newcommand\cA{{\mathcal A}} \newcommand\cB{{\mathcal B}} \newcommand\cC{{\mathcal C}} \newcommand\cD{{\mathcal D}} \newcommand\cE{{\mathcal E}} \newcommand\cF{{\mathcal F}} \newcommand\cG{{\mathcal G}} \newcommand\cH{{\mathcal H}} \newcommand\cI{{\mathcal I}} \newcommand\cJ{{\mathcal J}} \newcommand\cK{{\mathcal K}} \newcommand\cL{{\mathcal L}} \newcommand\cM{{\mathcal M}} \newcommand\cN{{\mathcal N}} \newcommand\cO{{\mathcal O}} \newcommand\cP{{\mathcal P}} \newcommand\cQ{{\mathcal Q}} \newcommand\cR{{\mathcal R}} \newcommand\cS{{\mathcal S}} \newcommand\cT{{\mathcal T}} \newcommand\cU{{\mathcal U}} \newcommand\cV{{\mathcal V}} % % fields and rings (and a semigroup) % \newcommand\cx{{\mathbb C}}% complexes \newcommand\fld{{\mathbb F}} \newcommand\flde{{\mathbb E}} \newcommand\ints{{\mathbb Z}} \newcommand\nn{{\mathbb N}}%non-negative integers \newcommand\re{{\mathbb R}}%reals \newcommand\rats{{\mathbb Q}} % % the really useful stuff % \newcommand\comp[1]{{\mkern2mu\overline{\mkern-2mu#1}}} \newcommand\diff{\mathbin{\mkern-1.5mu\setminus\mkern-1.5mu}}% for \setminus \newcommand\res{\mathbin{\mkern-2.0mu\restriction\mkern-2.0mu}} \newcommand\sbs{\subseteq} \newcommand\sps{\supseteq} \newcommand\seq[3]{#1_{#2},\ldots,#1_{#3}} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\im}{im} \DeclareMathOperator{\row}{row} \newcommand\pmat[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand\cprod{\mathbin{\square}} \newcommand\gbin[2]{\genfrac{[}{]}{0pt}{}{#1}{#2}} % % matrix theory % 
\newcommand\ip[2]{\langle#1,#2\rangle} \newcommand\one{{\bf1}} \DeclareMathOperator{\rk}{rk} \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\col}{col} \newcommand\mat[3]{\mathrm{Mat}_{#1\times #2}(#3)} \newcommand\sm[3]{\sum_{#1=#2}^{#3}} % % some group theory % \newcommand\aut[1]{{\rm Aut}(#1)} \newcommand\fx[1]{{\rm fix}(#1)}% ch2 \newcommand\grp[1]{\langle #1\rangle} \newcommand\nrml{\vartriangleleft} \newcommand\nrmleq{\trianglelefteq} \DeclareMathOperator{\Sym}{Sym} \newcommand\sym[1]{\Sym(#1)} \DeclareMathOperator{\Alt}{Alt} \newcommand\alt[1]{\Alt(#1)} \)

Section 2.3 Line Graphs and Covers

Let \(D\) be the incidence matrix of an orientation of the graph \(X\). Then

\[DD^T = \De -A,\quad D^TD = 2I +S\]
where \(\De\) is the diagonal matrix of valencies of \(X\) and \(S\) is a symmetric matrix with zero diagonal and off-diagonal entries from \(\{0,1,-1\}\). In other words, \(S\) is a signed adjacency matrix. The rows and columns of \(S\) are indexed by \(E(X)\), and the \(ab\)-entry is non-zero if and only if the edges \(a\) and \(b\) are adjacent in the line graph \(L(X)\) of \(X\), so \(S\) is a signed adjacency matrix for \(L(X)\).
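These identities are easy to check numerically. Below is a minimal sketch in Python with NumPy; the 5-cycle and the chosen orientation are illustrative assumptions, not part of the text.

```python
import numpy as np

# A small test graph: the 5-cycle, each edge oriented from i to i+1 (mod n).
n = 5
arcs = [(i, (i + 1) % n) for i in range(n)]   # (tail, head) for each edge
m = len(arcs)

# Incidence matrix D of the orientation: -1 at the tail, +1 at the head.
D = np.zeros((n, m), dtype=int)
for e, (u, v) in enumerate(arcs):
    D[u, e] = -1
    D[v, e] = 1

# Ordinary adjacency matrix A and diagonal valency matrix Delta.
A = np.zeros((n, n), dtype=int)
for u, v in arcs:
    A[u, v] = A[v, u] = 1
Delta = np.diag(A.sum(axis=1))

# DD^T = Delta - A.
assert np.array_equal(D @ D.T, Delta - A)

# S = D^T D - 2I: symmetric, zero diagonal, entries in {0, 1, -1}.
S = D.T @ D - 2 * np.eye(m, dtype=int)
assert np.all(np.diag(S) == 0)
assert set(np.unique(S)) <= {-1, 0, 1}
```

The off-diagonal \(ef\)-entry of \(D^TD\) is the product of the two incidence entries at the shared vertex, which is non-zero exactly when the edges \(e\) and \(f\) overlap.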

A signed adjacency matrix \(T\) of a graph \(Y\) determines a two-fold cover \(Z\) of \(Y\), as follows. The vertex set of the cover is

\[V(Y) \times \{0,1\}\]
and \((u,i)\) and \((v,j)\) are adjacent if and only if \(u\sim v\) and either
\[i=j,\quad T_{u,v}=1\]
or
\[i\ne j,\quad T_{u,v}=-1.\]
It is easy to check that the map that sends \((u,i)\) to \((u,1-i)\) is an automorphism of \(Z\). In matrix terms, we get the adjacency matrix of \(Z\) by applying the following substitutions to the entries of \(A(Y)\):
\[0\to\pmat{0&0\\0&0},\qquad 1\to\pmat{1&0\\0&1}, \qquad -1\to\pmat{0&1\\1&0}.\]
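In code, this substitution amounts to splitting the signed matrix into its positive and negative parts. A sketch in Python with NumPy, using an arbitrary illustrative signing of the 5-cycle:

```python
import numpy as np

# A signed adjacency matrix T of the 5-cycle: every edge signed +1
# except the edge {4, 0}, signed -1 (an arbitrary illustrative choice).
n = 5
T = np.zeros((n, n), dtype=int)
for i in range(n):
    j = (i + 1) % n
    T[i, j] = T[j, i] = -1 if i == n - 1 else 1

Pos = (T == 1).astype(int)    # entries where T = +1  -> identity block
Neg = (T == -1).astype(int)   # entries where T = -1  -> swap block

# Adjacency matrix of the cover Z, vertices ordered as all (u,0)
# followed by all (u,1).
AZ = np.block([[Pos, Neg], [Neg, Pos]])

# The map (u,i) -> (u,1-i) is an automorphism: conjugating by the
# corresponding permutation matrix fixes AZ.
Z0, I = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
swap = np.block([[Z0, I], [I, Z0]])
assert np.array_equal(swap @ AZ @ swap, AZ)
```

With the vertices ordered as all \((u,0)\) followed by all \((u,1)\), the positive-part blocks land on the diagonal and the negative-part blocks off it, exactly as in the substitution above.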

The pairs

\[\{(u,0),\,(u,1)\}\qquad (u\in V(Y))\]
are called the fibres of the cover. You might verify that the fibres form an equitable partition of the cover, and that the quotient over the fibres is \(Y\). Thus the characteristic polynomial of the cover is divisible by the characteristic polynomial of \(Y\).
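The equitable-partition claim can be checked directly: every vertex of a fibre must have the same number of neighbours in any given fibre, and that common number must be the corresponding entry of \(A(Y)\). A sketch in Python with NumPy, again using an arbitrary illustrative signing of the 5-cycle:

```python
import numpy as np

# An illustrative signed 5-cycle (all edges +1 except {4,0}) and its cover.
n = 5
T = np.zeros((n, n), dtype=int)
for i in range(n):
    j = (i + 1) % n
    T[i, j] = T[j, i] = -1 if i == n - 1 else 1
Pos, Neg = (T == 1).astype(int), (T == -1).astype(int)
AZ = np.block([[Pos, Neg], [Neg, Pos]])

AY = np.abs(T)   # adjacency matrix of the base graph Y
for u in range(n):
    for v in range(n):
        # Fibre F_u occupies rows/columns u and u + n of AZ.
        block = AZ[np.ix_([u, u + n], [v, v + n])]
        rowsums = block.sum(axis=1)
        # Both vertices of F_u see the same number of neighbours in F_v,
        # and that number is the uv-entry of A(Y): the partition is
        # equitable with quotient matrix A(Y).
        assert rowsums[0] == rowsums[1] == AY[u, v]
```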

Note that \(|T|=A(Y)\), where \(|T|\) denotes the entrywise absolute value of \(T\). We offer another view of this result. If \(A\) and \(B\) are symmetric \(01\)-matrices such that \(A\circ B=0\) (where \(\circ\) denotes the entrywise product), then \(A-B\) is a signed adjacency matrix of the graph with adjacency matrix \(A+B\), and

\[ \pmat{A&B\\B&A}\]
is the adjacency matrix of the corresponding cover. We can now use the following:
\[\frac12\pmat{I&I\\I&-I}\pmat{A&B\\B&A}\pmat{I&I\\I&-I} =\pmat{A+B&0\\0&A-B}.\]
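Both the block diagonalization and the resulting factorization of the characteristic polynomial can be verified numerically. A sketch in Python with NumPy, with an illustrative signed 5-cycle:

```python
import numpy as np

# Illustrative signed 5-cycle: all edges +1 except {4,0}, signed -1.
n = 5
T = np.zeros((n, n), dtype=int)
for i in range(n):
    j = (i + 1) % n
    T[i, j] = T[j, i] = -1 if i == n - 1 else 1
A, B = (T == 1).astype(int), (T == -1).astype(int)
M = np.block([[A, B], [B, A]])   # adjacency matrix of the cover

# (1/2) P M P = diag(A+B, A-B) with P = [[I, I], [I, -I]].
I, Z = np.eye(n), np.zeros((n, n))
P = np.block([[I, I], [I, -I]])
conj = P @ M @ P / 2
expected = np.block([[A + B, Z], [Z, A - B]])
assert np.allclose(conj, expected)

# Hence spec(cover) is the union of spec(A+B) and spec(A-B), so the
# characteristic polynomial of the cover factors accordingly.
cover_eigs = np.sort(np.linalg.eigvalsh(M))
split_eigs = np.sort(np.concatenate([np.linalg.eigvalsh(A + B),
                                     np.linalg.eigvalsh(A - B)]))
assert np.allclose(cover_eigs, split_eigs)
```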

Since similar matrices have the same characteristic polynomial, it follows that the characteristic polynomial of the cover is the product of the characteristic polynomials of \(A+B=A(Y)\) and \(A-B\); this gives another proof that the characteristic polynomial of \(Y\) divides that of its cover.

Now we turn to our line graphs. The cover can be constructed as follows. Choose one arc \((u,v)\) for each edge \(\{u,v\}\) of \(X\). (This defines an orientation of the graph.) Our matrix \(S\) is a signing of \(A(L(X))\): its rows and columns are indexed by our chosen arcs, and the entry corresponding to a pair of distinct overlapping arcs is \(1\) if they have the same head or the same tail, and \(-1\) otherwise. Rather than represent the vertices of the cover by pairs \(((u,v),i)\), we proceed thus: if \((u,v)\) is one of our chosen arcs, then we write \((u,v)\) for \(((u,v),0)\) and \((v,u)\) for \(((u,v),1)\). The following procedure implements this.
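A sketch of such a procedure in Python with NumPy (the 5-cycle and its orientation are illustrative choices). A short case check of the two signed rules shows that, with this labelling, two arcs on distinct edges are adjacent in the cover precisely when they have the same head or the same tail:

```python
import numpy as np

def cover_of_line_graph(edges):
    """Two-fold cover of L(X) built from the arcs of X.

    `edges` is a list of pairs (u, v); the chosen arc for each edge is
    (u, v) as given, which fixes an orientation.  The cover has one
    vertex per arc, the chosen arcs listed first, then their reversals.
    """
    m = len(edges)
    arcs = [tuple(e) for e in edges] + [(v, u) for u, v in edges]
    AZ = np.zeros((2 * m, 2 * m), dtype=int)
    for a, (t1, h1) in enumerate(arcs):
        for b, (t2, h2) in enumerate(arcs):
            if a % m == b % m:
                continue   # same underlying edge: never adjacent
            # Same head or same tail <=> adjacent in the cover.
            if h1 == h2 or t1 == t2:
                AZ[a, b] = 1
    return arcs, AZ

# Example: the 5-cycle, oriented i -> i+1 (mod 5).
edges = [(i, (i + 1) % 5) for i in range(5)]
arcs, AZ = cover_of_line_graph(edges)
assert np.array_equal(AZ, AZ.T)
```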

We test that our two constructions agree on the Petersen graph, and see that the cover is isomorphic to the line graph of the direct product of the Petersen graph with \(K_2\). (This product is itself a cover of the Petersen graph.)
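Such a test might look as follows in Python with NumPy. Rather than testing isomorphism, this sketch checks that the two constructions produce identical matrices under the arc labelling described above, and that the cover is cospectral with the line graph of the direct product:

```python
import numpy as np
from itertools import combinations

# Petersen graph: vertices are 2-subsets of {0,...,4}, adjacent iff disjoint.
V = list(combinations(range(5), 2))
idx = {v: i for i, v in enumerate(V)}
edges = [(idx[a], idx[b]) for a, b in combinations(V, 2) if not set(a) & set(b)]
m = len(edges)   # 15 edges; the pairs as listed serve as the chosen arcs

# Construction 1: signed matrix S on the chosen arcs, then block substitution.
S = np.zeros((m, m), dtype=int)
for e, (u, v) in enumerate(edges):
    for f, (x, y) in enumerate(edges):
        if e == f:
            continue
        if u == x or v == y:       # same tail or same head
            S[e, f] = 1
        elif v == x or u == y:     # head-to-tail overlap
            S[e, f] = -1
Pos, Neg = (S == 1).astype(int), (S == -1).astype(int)
AZ1 = np.block([[Pos, Neg], [Neg, Pos]])

# Construction 2: one vertex per arc (chosen arcs first, then reversals);
# arcs on distinct edges are adjacent iff same head or same tail.
arcs = edges + [(v, u) for u, v in edges]
AZ2 = np.zeros((2 * m, 2 * m), dtype=int)
for a, (t1, h1) in enumerate(arcs):
    for b, (t2, h2) in enumerate(arcs):
        if a % m != b % m and (h1 == h2 or t1 == t2):
            AZ2[a, b] = 1

assert np.array_equal(AZ1, AZ2)   # the two constructions agree

# Direct product of the Petersen graph with K2: edges {(u,0),(v,1)}
# for each edge {u,v}.  Its line graph via the unsigned incidence matrix:
# A(L) = N^T N - 2I.
prod_edges = ([((u, 0), (v, 1)) for u, v in edges]
              + [((v, 0), (u, 1)) for u, v in edges])
pverts = sorted({p for e in prod_edges for p in e})
pidx = {p: i for i, p in enumerate(pverts)}
N = np.zeros((len(pverts), len(prod_edges)), dtype=int)
for e, (p, q) in enumerate(prod_edges):
    N[pidx[p], e] = N[pidx[q], e] = 1
AL = N.T @ N - 2 * np.eye(len(prod_edges), dtype=int)

# The cover and the line graph of the product are cospectral.
assert np.allclose(np.sort(np.linalg.eigvalsh(AZ1)),
                   np.sort(np.linalg.eigvalsh(AL)))
```

Cospectrality is of course weaker than isomorphism, but it is a cheap sanity check; here the arc labelling in fact matches the edges of the product graph vertex for vertex.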