The p-group fixed point theorem

The goal of this post is to collect a list of applications of the following theorem, which is perhaps the simplest example of a fixed point theorem.

Theorem: Let $G$ be a finite $p$-group acting on a finite set $X$. Let $X^G$ denote the subset of $X$ consisting of those elements fixed by $G$. Then $|X^G| \equiv |X| \bmod p$; in particular, if $p \nmid |X|$ then $G$ has a fixed point.

Although this theorem is an elementary exercise, it has a surprising number of fundamental corollaries.
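One standard corollary is Fermat's little theorem: take $G = \mathbb{Z}/p$ acting by cyclic rotation on the set $X$ of strings of length $p$ over an alphabet of size $a$, so that $|X| = a^p$ and $X^G$ consists of the $a$ constant strings. A brute-force check of this (my code, not from the post):

```python
from itertools import product

def rotation_fixed_points(p, a):
    # X = all strings of length p over an alphabet of size a, so |X| = a^p;
    # G = Z/p acts by cyclic rotation, and a string is fixed by G iff it is
    # fixed by the generator (rotation by one step), i.e. iff it is constant.
    X = product(range(a), repeat=p)
    return sum(1 for s in X if s[1:] + s[:1] == s)

# For prime p the theorem gives a^p ≡ |X^G| = a (mod p): Fermat's little theorem.
for p in (2, 3, 5):
    for a in (2, 3, 4):
        assert rotation_fixed_points(p, a) == a
        assert (a ** p - a) % p == 0
```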

Moments, Hankel determinants, orthogonal polynomials, Motzkin paths, and continued fractions

Previously we described all finite-dimensional random algebras with faithful states. In this post we will describe states on the infinite-dimensional $^{\dagger}$-algebra $\mathbb{C}[x]$. Along the way we will run into and connect some beautiful and classical mathematical objects.

A special case of part of the following discussion can be found in an old post on the Catalan numbers.
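As a teaser for the connection between moments and Hankel determinants (my example, not from the post): the Catalan numbers arise as a moment sequence, and all of their Hankel determinants equal $1$. A quick exact check:

```python
from fractions import Fraction
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def hankel_det(seq, n):
    # exact determinant of the n x n Hankel matrix (seq[i + j]) by
    # Gaussian elimination over the rationals
    M = [[Fraction(seq[i + j]) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for k in range(n):
        pivot = next((r for r in range(k, n) if M[r][k] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            det = -det
        det *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= f * M[k][c]
    return det

cats = [catalan(n) for n in range(12)]
for n in range(1, 6):
    assert hankel_det(cats, n) == 1        # det (C_{i+j}) = 1
    assert hankel_det(cats[1:], n) == 1    # det (C_{i+j+1}) = 1
```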

More about the Schrödinger equation on a finite graph

It looks like the finite graph model is not just a toy model! It’s called a continuous-time quantum random walk and is used in quantum computing in a way similar to how random walks on graphs are used in classical computing. The fact that quantum random walks mix faster than classical random walks relates to the fact that certain quantum algorithms are faster than their classical counterparts.

I learned this from a paper by Lin, Lippner, and Yau, Quantum tunneling on graphs, that was just posted on the arXiv; apparently the idea goes back to a 1998 paper. I have an idea about another sense in which the finite graph model is not just a toy model, but I have not yet had time to work out the details.

Something very unfortunate has happened: several things I have recently written that could have been blog entries are instead answers on math.SE! In the interest of exposition beyond the Q&A format I am going to “rescue” one of these answers. It is an answer to the following question, which I would like you to test your intuition about:

Flip $150$ coins. What is the probability that, at some point, you flipped at least $7$ consecutive tails?

Jot down a quick estimate; see if you can get within a factor of $2$ or so of the actual answer, which is below the fold.
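If you’d like to check your estimate afterwards, here is a small dynamic program (my sketch, not from the post) that computes the exact probability by tracking the length of the current trailing run of tails; the number itself is left for the code to reveal.

```python
from fractions import Fraction

def prob_run(n_flips=150, run=7):
    # states[k] = probability that the current trailing run of tails has
    # length k (k < run) and no run of `run` tails has appeared yet;
    # `absorbed` accumulates the probability that such a run has occurred.
    half = Fraction(1, 2)
    states = [Fraction(0)] * run
    states[0] = Fraction(1)
    absorbed = Fraction(0)
    for _ in range(n_flips):
        new = [Fraction(0)] * run
        for k, p in enumerate(states):
            if p:
                new[0] += p * half              # heads resets the run
                if k + 1 == run:
                    absorbed += p * half        # tails completes the run
                else:
                    new[k + 1] += p * half      # tails extends the run
        states = new
    return absorbed

print(float(prob_run()))  # exact DP answer, printed as a float
```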

Walks on graphs and statistical mechanics

I finally learned the solution to a little puzzle that’s been bothering me for a while.

The setup of the puzzle is as follows. Let $G$ be a weighted undirected graph, i.e. to each edge $i \leftrightarrow j$ is associated a non-negative real number $a_{ij}$, and let $A$ be the corresponding weighted adjacency matrix. If $A$ is stochastic, one can interpret the weights $a_{ij}$ as transition probabilities between the vertices, which describe a Markov chain. (The undirected condition then means that the transition probability between two states doesn’t depend on the direction in which the transition occurs.) So one can talk about random walks on such a graph, and between any two vertices the most likely walk is the one which maximizes the product of the weights of the corresponding edges.

Suppose you don’t want to maximize a product associated to the edges, but a sum. For example, if the vertices of $G$ are locations to which you want to travel, then maybe you want the most likely random walk to also be the shortest one. If $E_{ij}$ is the distance between vertex $i$ and vertex $j$, then a natural way to do this is to set

$a_{ij} = e^{- \beta E_{ij}}$

where $\beta$ is some positive constant. Then the weight of a path is a monotonically decreasing function of its total length, and (fudging the stochastic constraint a bit) the most likely path between two vertices, at least if $\beta$ is sufficiently large, is going to be the shortest one. In fact, the larger $\beta$ is, the more likely you are to always be on the shortest path, since the contribution from any longer paths becomes vanishingly small. As $\beta \to \infty$, the ring in which the entries of the adjacency matrix live stops being $\mathbb{R}$ and becomes (a version of) the tropical semiring.

That’s pretty cool, but it’s not what’s been puzzling me. What’s been puzzling me is that matrix entries in powers of $A$ look an awful lot like partition functions in statistical mechanics, with $\beta$ playing the role of the inverse temperature and $E_{ij}$ playing the role of energies. So, for a while now, I’ve been wondering whether they actually are partition functions of systems I can construct starting from the matrix $A$. It turns out that the answer is yes: the corresponding systems are called one-dimensional vertex models, and in the literature the connection to matrix entries is called the transfer matrix method. I learned this from an expository article by Vaughan Jones, “In and around the origin of quantum groups,” and today I’d like to briefly explain how it works.
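Here is the smallest instance of the identification (my sketch, with made-up energies): with $a_{ij} = e^{-\beta E_{ij}}$, the entry $(A^n)_{ij}$ is literally a sum over all $n$-step paths from $i$ to $j$ of $e^{-\beta \cdot (\text{total energy})}$, i.e. a partition function with fixed boundary.

```python
import math

# Hypothetical energies E_ij on a 3-vertex graph (symmetric = undirected).
E = [[0.0, 1.0, 2.0],
     [1.0, 0.0, 1.5],
     [2.0, 1.5, 0.0]]
beta = 1.3
A = [[math.exp(-beta * E[i][j]) for j in range(3)] for i in range(3)]

def matpow_entry(A, n, i, j):
    # (A^n)_{ij} by repeated matrix multiplication
    M = [row[:] for row in A]
    for _ in range(n - 1):
        M = [[sum(M[a][b] * A[b][c] for b in range(3)) for c in range(3)]
             for a in range(3)]
    return M[i][j]

def partition_function(n, i, j):
    # brute-force sum of e^{-beta * total energy} over all n-step
    # paths i -> v_1 -> ... -> v_{n-1} -> j
    total = 0.0
    def walk(v, steps, energy):
        nonlocal total
        if steps == n:
            if v == j:
                total += math.exp(-beta * energy)
            return
        for w in range(3):
            walk(w, steps + 1, energy + E[v][w])
    walk(i, 0, 0.0)
    return total

assert abs(matpow_entry(A, 4, 0, 2) - partition_function(4, 0, 2)) < 1e-12
```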

The McKay correspondence I

Today we’re going to relate the representation graphs introduced in this blog post to something I blogged about in the very first and second posts in this blog! The result will be a beautiful connection between the finite subgroups of $\text{SU}(2)$, the Platonic solids, and the ADE Dynkin diagrams. This connection has been written about in several other places on the internet, for example here, but I don’t know that any of those places have actually gone through the proof of the big theorem below, which I’d like to do (as much for myself as for anyone else who is reading this).

Let $G$ be a finite subgroup of $\text{SL}_2(\mathbb{C})$. Since any inner product on $\mathbb{C}^2$ can be averaged to a $G$-invariant inner product, every finite subgroup of $\text{SL}_2(\mathbb{C})$ is conjugate to a finite subgroup of $\text{SU}(2)$, so we’ll suppose this without loss of generality. The two-dimensional representation $V$ of $G$ coming from this description is therefore faithful and self-dual. Consider the representation graph $\Gamma(V)$, whose vertices are the irreducible representations of $G$ and where the number of edges between $V_i$ and $V_j$ is the multiplicity of $V_j$ in $V_i \otimes V$. We will see that $\Gamma(V)$ is a connected undirected loopless graph whose spectral radius $\lambda(\Gamma(V))$ is $2$. Today our goal is to prove the following.

Theorem: The connected undirected loopless graphs of spectral radius $2$ are precisely the affine Dynkin diagrams $\tilde{A}_n, \tilde{D}_n, \tilde{E}_6, \tilde{E}_7, \tilde{E}_8$.

We’ll describe these graphs later; for now, just keep in mind that they are graphs with a number of vertices which is one greater than their subscript. In a later post we’ll see how these give us a classification of the Platonic solids, and we’ll also discuss other connections.
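The theorem can be spot-checked numerically (my sketch, using only the standard library): $\tilde{A}_n$ is an $(n+1)$-cycle and $\tilde{D}_4$ is the star with four edges, and both have spectral radius exactly $2$.

```python
def spectral_radius(adj, iters=500):
    # power iteration on A + 2I: the shift keeps the Perron eigenvalue
    # strictly dominant even on bipartite graphs, where A itself has
    # eigenvalue pairs ±λ and plain power iteration can oscillate
    n = len(adj)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [2.0 * v[i] + sum(adj[i][j] * v[j] for j in range(n))
             for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam - 2.0

def cycle(m):
    # the affine Dynkin diagram A~_{m-1} is the m-cycle
    adj = [[0] * m for _ in range(m)]
    for i in range(m):
        adj[i][(i + 1) % m] = adj[(i + 1) % m][i] = 1
    return adj

def star(k):
    # k = 4 gives the affine Dynkin diagram D~_4: a center joined to 4 leaves
    adj = [[0] * (k + 1) for _ in range(k + 1)]
    for leaf in range(1, k + 1):
        adj[0][leaf] = adj[leaf][0] = 1
    return adj

assert abs(spectral_radius(cycle(6)) - 2.0) < 1e-9   # A~_5
assert abs(spectral_radius(star(4)) - 2.0) < 1e-9    # D~_4
```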

Walks on graphs and tensor products

Recently I asked a question on MO about some computations I’d done with Catalan numbers a while ago on this blog, and Scott Morrison gave a beautiful answer explaining them in terms of quantum groups. Now, I don’t exactly know how quantum groups work, but along the way he described a useful connection between walks on graphs and tensor products of representations which at least partially explains one of the results I’d been wondering about and also unites several other related computations that have been on my mind recently.

Let $G$ be a compact group and let $\text{Rep}(G)$ denote the category of finite-dimensional unitary representations of $G$. As in the finite case, due to the existence of Haar measure, $\text{Rep}(G)$ is semisimple (i.e. every unitary representation decomposes uniquely into a sum of irreducible representations), and via the diagonal action it comes equipped with a tensor product with the property that the character of the tensor product is the product of the characters of the factors.

Question: Fix a representation $V \in \text{Rep}(G)$. What is the multiplicity of the trivial representation in $V^{\otimes n}$?
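The most classical instance of the walk/tensor-product dictionary can be checked directly (my sketch, not reproducing the MO answer): for $G = \text{SU}(2)$ and $V$ its defining two-dimensional representation, the multiplicity of the trivial representation in $V^{\otimes n}$ equals the number of closed walks of length $n$ based at the endpoint of a path graph, which is $0$ for odd $n$ and the Catalan number $C_{n/2}$ for even $n$.

```python
from math import comb

def closed_walks_at_end(n):
    # number of length-n walks on the path graph 0 - 1 - 2 - ... that start
    # and end at vertex 0; a walk of length n never passes vertex n, so
    # truncating to n + 1 vertices is exact
    size = n + 1
    adj = [[1 if abs(i - j) == 1 else 0 for j in range(size)]
           for i in range(size)]
    M = [[1 if i == j else 0 for j in range(size)] for i in range(size)]
    for _ in range(n):
        M = [[sum(M[a][b] * adj[b][c] for b in range(size))
              for c in range(size)] for a in range(size)]
    return M[0][0]

def catalan(m):
    return comb(2 * m, m) // (m + 1)

for m in range(1, 6):
    assert closed_walks_at_end(2 * m) == catalan(m)   # even length: C_m
    assert closed_walks_at_end(2 * m - 1) == 0        # odd length: 0
```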

The Lindström-Gessel-Viennot lemma

One of my favorite results in algebraic combinatorics is a surprisingly useful lemma which allows a combinatorial interpretation of the determinant of certain integer matrices. One of its more popular uses is to prove an equivalence between three other definitions of the Schur functions (none of which I have given yet), but I find its other applications equally endearing.

Let $G$ be a locally finite directed acyclic graph, i.e. a directed graph with a not necessarily finite vertex set $V$, finitely many edges between each pair of vertices, and no directed cycles. For example, $G$ could be $\mathbb{Z}^2$ with edges $(x, y) \to (x, y + 1)$ and $(x, y) \to (x + 1, y)$, which we’ll call the acyclic plane. Assign a weight $w(e)$ to each edge, and assign to a path the product of the weights of its edges. Given two vertices $a, b$, let $e(a, b)$ denote the sum of the weights of the paths from $a$ to $b$. Even if there are infinitely many such paths, this sum is well-defined as a formal expression; and if there are only finitely many paths between the two vertices, then setting each weight to $1$ makes $e(a, b)$ the (non-negative integer) number of paths from $a$ to $b$.

Let $a_1, ... a_n$ be a collection of vertices called sources and $b_1, ... b_n$ a collection of vertices called sinks. We are interested in $n$-tuples of paths, hereafter referred to as $n$-paths, sending each source to a distinct sink. Let $\mathbf{M}$ be the $n \times n$ matrix with entries $\mathbf{M}_{ij} = e(a_i, b_j)$. Then the permanent of $\mathbf{M}$ is the weighted count of such $n$-paths, but this is less useful than one might hope, since permanents are hard to compute.

An $n$-path is called non-intersecting if no two of the paths that make it up share a vertex; in particular, each $a_i$ is sent to a distinct sink $b_{\pi(i)}$, so a non-intersecting $n$-path determines a permutation $\pi$ of the indices. Let the sign of a non-intersecting $n$-path be the sign of this permutation.

Lemma (Lindström, Gessel-Viennot): $\det \mathbf{M}$ is the signed sum of the weights of all non-intersecting $n$-paths.

Corollary: If the only possible permutation is $\pi = 1$ (i.e. $G$ is non-permutable), then $\det \mathbf{M}$ is the sum of the weights of all non-intersecting $n$-paths.
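To see the corollary in action (my example): take the acyclic plane with unit weights, sources $a_1 = (0,1), a_2 = (1,0)$ and sinks $b_1 = (3,4), b_2 = (4,3)$. Here $e(a, b)$ is a binomial coefficient, and only $\pi = 1$ is possible since a path $a_1 \to b_2$ must share a vertex with a path $a_2 \to b_1$, so the determinant of binomial coefficients counts the non-intersecting $2$-paths; brute force agrees.

```python
from math import comb

def paths(a, b):
    # all lattice paths from a to b using steps (1,0) and (0,1),
    # returned as tuples of the vertices they visit
    (x0, y0), (x1, y1) = a, b
    if a == b:
        return [(a,)]
    result = []
    if x0 < x1:
        result += [(a,) + p for p in paths((x0 + 1, y0), b)]
    if y0 < y1:
        result += [(a,) + p for p in paths((x0, y0 + 1), b)]
    return result

def e(a, b):
    # with unit weights, e(a, b) is a binomial coefficient
    dx, dy = b[0] - a[0], b[1] - a[1]
    return comb(dx + dy, dx)

a1, a2 = (0, 1), (1, 0)   # sources
b1, b2 = (3, 4), (4, 3)   # sinks

# 2 x 2 LGV determinant vs. brute-force count of vertex-disjoint path pairs
det = e(a1, b1) * e(a2, b2) - e(a1, b2) * e(a2, b1)
count = sum(1 for p1 in paths(a1, b1) for p2 in paths(a2, b2)
            if not set(p1) & set(p2))
assert det == count == 175
```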

Newton’s sums, necklace congruences, and zeta functions II

In a previous post I gave essentially the following definition: given a discrete dynamical system, i.e. a space $X$ and a function $f : X \to X$, and under the assumption that $f^n$ has a finite number of fixed points for all $n$, we define the dynamical zeta function to be the formal power series

$\displaystyle \zeta_X(t) = \exp \left( \sum_{n \ge 1} |\text{Fix } f^n| \frac{t^n}{n} \right)$.

What I didn’t do was motivate this definition, mostly because I hadn’t really worked out the motivation myself. Now that we have an important special case worked out, we can discuss the general case, which will give a purely combinatorial proof of the second half of the Newton-Girard identities.
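Before the general case, a sanity check on the definition (my example): if $f$ is a permutation of a finite set with cycle lengths $d_1, ... d_k$, then the number of fixed points of $f^n$ is $\sum_{d_i \mid n} d_i$, and the zeta function should come out to $\prod_i (1 - t^{d_i})^{-1}$.

```python
from fractions import Fraction

N = 12  # truncation order

def series_exp(a, N):
    # exp of a power series a with a[0] = 0, from b' = a' b, i.e.
    # n b_n = sum_{k=1}^{n} k a_k b_{n-k}
    b = [Fraction(0)] * (N + 1)
    b[0] = Fraction(1)
    for n in range(1, N + 1):
        b[n] = sum(k * a[k] * b[n - k] for k in range(1, n + 1)) / Fraction(n)
    return b

cycles = [1, 2, 3]  # a permutation of 6 points with cycle lengths 1, 2, 3

# |Fix f^n| = sum of the cycle lengths d dividing n
fix = [Fraction(0)] + [Fraction(sum(d for d in cycles if n % d == 0))
                       for n in range(1, N + 1)]
zeta = series_exp([fix[n] / n if n else Fraction(0) for n in range(N + 1)], N)

# expected: product over cycle lengths d of 1/(1 - t^d)
expected = [Fraction(1)] + [Fraction(0)] * N
for d in cycles:
    for n in range(d, N + 1):       # multiply in place by 1/(1 - t^d)
        expected[n] += expected[n - d]

assert zeta == expected
```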

Newton’s sums, necklace congruences, and zeta functions

The goal of this post is to give a purely combinatorial proof of Newton’s sums which would have interrupted the flow of the previous post. Recall that, in the notation of the previous post, Newton’s sums (also known as the first Newton-Girard identity) state that

$\displaystyle p_k - e_1 p_{k-1} + e_2 p_{k-2} - \cdots + (-1)^{k-1} e_{k-1} p_1 = (-1)^{k+1} k e_k$.

One way to motivate a combinatorial proof is to recast the generating function interpretation appropriately. Given a polynomial $C(t)$ with non-negative integer coefficients and $C(0) = 0$, let $r_1, ... r_n$ be the reciprocals of the roots of $C(t) - 1 = 0$. Then

$\displaystyle \frac{t C'(t)}{1 - C(t)} = \sum_{k \ge 1} p_k(r_1, ... r_n) t^k$.

The left hand side of this identity suggests a particular interpretation in terms of the combinatorial species described by $C(t)$. Today we’ll describe this species when $C(t)$ is a polynomial with non-negative integer coefficients and then describe it in the general case, which will handle the extension to the symmetric function case as well.
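A concrete instance of the identity above (my example): take $C(t) = t + t^2$. The reciprocals of the roots of $1 - t - t^2$ are the golden ratio and its conjugate, so the power sums $p_k$ are the Lucas numbers, and expanding the left hand side as a series confirms this.

```python
N = 10

# expand t C'(t) / (1 - C(t)) = (t + 2t^2) / (1 - t - t^2) as a power series
num = [0, 1, 2] + [0] * (N - 2)    # t C'(t)
den = [1, -1, -1] + [0] * (N - 2)  # 1 - C(t)
series = [0] * (N + 1)
for n in range(N + 1):
    # standard power series division: den[0] = 1, so
    # series[n] = num[n] - sum_{k >= 1} den[k] * series[n - k]
    series[n] = num[n] - sum(den[k] * series[n - k] for k in range(1, n + 1))

# Lucas numbers L_k = phi^k + psi^k: L_0 = 2, L_1 = 1, L_{k+1} = L_k + L_{k-1}
lucas = [2, 1]
while len(lucas) <= N:
    lucas.append(lucas[-1] + lucas[-2])

assert series[0] == 0
assert series[1:] == lucas[1:N + 1]   # 1, 3, 4, 7, 11, 18, ...
```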

The method of proof used here is closely related to a post I made previously about the properties of the Frobenius map, and at the end of the post I’ll try to discuss what I think might be going on.