## Test your intuition: consecutive tails

Something very unfortunate has happened: several things I have recently written that could have been blog entries are instead answers on math.SE! In the interest of exposition beyond the Q&A format I am going to “rescue” one of these answers. It is an answer to the following question, which I would like you to test your intuition about:

Flip $150$ coins. What is the probability that, at some point, you flipped at least $7$ consecutive tails?

Jot down a quick estimate; see if you can get within a factor of $2$ or so of the actual answer, which is below the fold.
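Once you have committed to a guess, you can check it exactly: the probability can be computed by a short dynamic program that tracks the length of the current streak of tails. (The code below is my own sketch, not from the original answer; it prints the exact answer, so run it only after guessing.)

```python
from fractions import Fraction

def prob_run_of_tails(flips, run):
    """Probability of seeing at least `run` consecutive tails somewhere
    in `flips` fair coin flips, by dynamic programming on streak length."""
    # state[k] = probability that the current streak is exactly k tails,
    # given that a streak of length `run` has not yet occurred.
    state = [Fraction(0)] * run
    state[0] = Fraction(1)
    hit = Fraction(0)  # probability of having already seen `run` tails in a row
    for _ in range(flips):
        new = [Fraction(0)] * run
        for k, p in enumerate(state):
            new[0] += p / 2            # heads resets the streak
            if k + 1 == run:
                hit += p / 2           # this tails completes the run
            else:
                new[k + 1] += p / 2    # this tails extends the streak
        state = new
    return hit

p = prob_run_of_tails(150, 7)
print(float(p))
```

Using `Fraction` keeps the computation exact; casting to `float` at the end gives the decimal estimate to compare against your intuition.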

## Walks on graphs and statistical mechanics

I finally learned the solution to a little puzzle that’s been bothering me for a while.

The setup of the puzzle is as follows. Let $G$ be a weighted undirected graph, i.e. to each edge $i \leftrightarrow j$ is associated a non-negative real number $a_{ij}$, and let $A$ be the corresponding weighted adjacency matrix. If $A$ is stochastic, one can interpret the weights $a_{ij}$ as transition probabilities between the vertices which describe a Markov chain. (The undirected condition then means that the transition probability between two states doesn’t depend on the order in which the transition occurs.) So one can talk about random walks on such a graph, and between any two vertices the most likely walk is the one which maximizes the product of the weights of the corresponding edges.
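As a concrete illustration (the three-vertex weights below are my own made-up example), one can enumerate all walks of a fixed length between two vertices and pick out the most likely one by maximizing the product of edge weights:

```python
from itertools import product
import numpy as np

# A toy symmetric stochastic matrix (made-up numbers): a_ij is the
# probability of stepping from vertex i to vertex j.
A = np.array([[0.0,  0.5,  0.5],
              [0.5,  0.25, 0.25],
              [0.5,  0.25, 0.25]])
assert np.allclose(A, A.T)            # undirected: weights are symmetric
assert np.allclose(A.sum(axis=1), 1)  # stochastic: rows sum to 1

def walk_prob(walk):
    # The probability of a walk is the product of its edge weights.
    return np.prod([A[i, j] for i, j in zip(walk, walk[1:])])

# Most likely walk of length 3 from vertex 0 to vertex 2:
walks = [(0, *mid, 2) for mid in product(range(3), repeat=2)]
best = max(walks, key=walk_prob)
print(best, walk_prob(best))
```

Note that with these weights there is a tie for the most likely walk; `max` simply returns the first maximizer it encounters.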

Suppose you don’t want to maximize a product associated to the edges, but a sum. For example, if the vertices of $G$ are locations to which you want to travel, then maybe you want the most likely random walk to also be the shortest one. If $E_{ij}$ is the distance between vertex $i$ and vertex $j$, then a natural way to do this is to set

$a_{ij} = e^{- \beta E_{ij}}$

where $\beta$ is some positive constant. Then the weight of a path is a monotonically decreasing function of its total length, and (fudging the stochastic constraint a bit) the most likely path between two vertices, at least if $\beta$ is sufficiently large, is going to be the shortest one. In fact, the larger $\beta$ is, the more likely you are to always be on the shortest path, since the contribution from any longer paths becomes vanishingly small. As $\beta \to \infty$, the ring in which the entries of the adjacency matrix live stops being $\mathbb{R}$ and becomes (a version of) the tropical semiring.
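The tropical limit can be seen numerically (the distance matrix below is a made-up example): the quantity $-\frac{1}{\beta} \log (A^n)_{ij}$ is a "soft minimum" over the total lengths of all length-$n$ walks from $i$ to $j$, and as $\beta \to \infty$ it converges to the honest shortest total length.

```python
import numpy as np

# Toy symmetric distances E_ij between three locations (made-up numbers).
E = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0],
              [3.0, 1.0, 0.0]])

def free_length(beta, n, i, j):
    """-(1/beta) * log((A^n)_ij) with a_kl = exp(-beta * E_kl):
    a soft minimum over total lengths of length-n walks from i to j."""
    A = np.exp(-beta * E)
    return -np.log(np.linalg.matrix_power(A, n)[i, j]) / beta

# The shortest length-2 walk from 0 to 2 detours through vertex 1
# (total length 2, versus 3 for the direct edge used twice via 0 or 2).
for beta in (1.0, 5.0, 50.0):
    print(beta, free_length(beta, 2, 0, 2))
```

As `beta` grows, the printed values decrease toward $2$, the shortest total length; this is exactly the statement that ordinary matrix multiplication degenerates to min-plus (tropical) matrix multiplication in the limit.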

That’s pretty cool, but it’s not what’s been puzzling me. What’s been puzzling me is that matrix entries in powers of $A$ look an awful lot like partition functions in statistical mechanics, with $\beta$ playing the role of the inverse temperature and $E_{ij}$ playing the role of energies. So, for a while now, I’ve been wondering whether they actually are partition functions of systems I can construct starting from the matrix $A$. It turns out that the answer is yes: the corresponding systems are called one-dimensional vertex models, and in the literature the connection to matrix entries is called the transfer matrix method. I learned this from an expository article by Vaughan Jones, “In and around the origin of quantum groups,” and today I’d like to briefly explain how it works.
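Before getting into the details, here is a minimal numerical sanity check of the identity in question (with made-up bond energies): summing Boltzmann weights $e^{-\beta E(\text{path})}$ over all configurations of a one-dimensional chain with fixed endpoints reproduces the corresponding entry of $A^N$, where $A$ is the transfer matrix with entries $e^{-\beta E_{ij}}$.

```python
from itertools import product
import numpy as np

# Random symmetric bond energies E_ij for a chain with 3 states per site.
rng = np.random.default_rng(0)
E = rng.uniform(0, 2, size=(3, 3))
E = (E + E.T) / 2          # symmetrize, matching the undirected setup
beta = 1.3
A = np.exp(-beta * E)      # the transfer matrix

# Partition function over chains (s_0, s_1, ..., s_N) with s_0 = 0, s_N = 2:
# Z = sum over interior states of exp(-beta * sum of bond energies).
N = 4
Z_brute = sum(
    np.exp(-beta * sum(E[a, b] for a, b in zip((0, *s, 2), (*s, 2))))
    for s in product(range(3), repeat=N - 1)
)
Z_transfer = np.linalg.matrix_power(A, N)[0, 2]
assert np.isclose(Z_brute, Z_transfer)
print(Z_transfer)
```

The brute-force sum visits $3^{N-1}$ configurations, while the transfer matrix computes the same number with a handful of $3 \times 3$ matrix multiplications; that collapse of an exponential sum into matrix powers is the whole trick.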