Something very unfortunate has happened: several things I have recently written that could have been blog entries are instead answers on math.SE! In the interest of exposition beyond the Q&A format I am going to “rescue” one of these answers. It is an answer to the following question, which I would like you to test your intuition about:
Flip $n$ coins. What is the probability that, at some point, you flipped at least $k$ consecutive tails?
Jot down a quick estimate; see if you can get within a small constant factor of the actual answer, which is below the fold.
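If you'd rather check your estimate numerically than analytically, here is a minimal Monte Carlo sketch (my own illustration, not from the original question; the specific values of $n$ and $k$ below are placeholders, since the puzzle's actual numbers were rendered as images and lost):

```python
import random

def prob_run_of_tails(n, k, trials=100_000):
    """Estimate the probability that n fair coin flips contain
    a run of at least k consecutive tails, by simulation."""
    hits = 0
    for _ in range(trials):
        run = 0
        for _ in range(n):
            if random.random() < 0.5:  # tails
                run += 1
                if run >= k:
                    hits += 1
                    break
            else:
                run = 0  # heads resets the run
    return hits / trials

# Placeholder values; substitute the puzzle's actual n and k.
print(prob_run_of_tails(100, 5))
```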
I finally learned the solution to a little puzzle that’s been bothering me for a while.
The setup of the puzzle is as follows. Let $G$ be a weighted undirected graph, i.e. to each edge $e$ is associated a non-negative real number $w(e)$, and let $A$ be the corresponding weighted adjacency matrix. If $A$ is stochastic, one can interpret the weights as transition probabilities between the vertices which describe a Markov chain. (The undirected condition then means that the transition probability between two states doesn’t depend on the order in which the transition occurs.) So one can talk about random walks on such a graph, and between any two vertices the most likely walk is the one which maximizes the product of the weights of the corresponding edges.
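As a concrete aside (my own illustration, not from the post): since maximizing a product of positive weights is the same as minimizing the sum of their negative logarithms, the most likely walk can be found with an ordinary shortest-path algorithm. A minimal sketch using Dijkstra, on a made-up stochastic matrix:

```python
import heapq
import math

def most_likely_walk(A, start, end):
    """Find the walk maximizing the product of edge weights by running
    Dijkstra on costs -log(w): max prod w  <=>  min sum -log(w).
    Costs are non-negative because stochastic weights satisfy w <= 1."""
    n = len(A)
    dist = [math.inf] * n
    prev = [None] * n
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in range(n):
            if A[u][v] > 0:
                nd = d - math.log(A[u][v])
                if nd < dist[v]:
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
    path = []
    v = end
    while v is not None:
        path.append(v)
        v = prev[v]
    return path[::-1], math.exp(-dist[end])

# A hypothetical symmetric stochastic matrix (undirected, rows sum to 1).
A = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
print(most_likely_walk(A, 0, 2))  # -> ([0, 2], 0.5)
```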
Suppose you don’t want to maximize a product associated to the edges, but to minimize a sum. For example, if the vertices of $G$ are locations to which you want to travel, then maybe you want the most likely random walk to also be the shortest one. If $d(i, j)$ is the distance between vertex $i$ and vertex $j$, then a natural way to do this is to set

$\displaystyle A_{ij} = e^{-\beta d(i, j)}$

where $\beta$ is some positive constant. Then the weight of a path is a monotonically decreasing function of its total length, and (fudging the stochastic constraint a bit) the most likely path between two vertices, at least if $\beta$ is sufficiently large, is going to be the shortest one. In fact, the larger $\beta$ is, the more likely you are to always be on the shortest path, since the contribution from any longer paths becomes vanishingly small. As $\beta \to \infty$, the ring in which the entries of the adjacency matrix live stops being $\mathbb{R}$ and becomes (a version of) the tropical semiring.
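To see the tropical limit numerically, here is a small sketch (my own illustration, with a made-up distance matrix): with $A_{ij} = e^{-\beta d(i, j)}$, the quantity $-\frac{1}{\beta} \log (A^n)_{ij}$ approaches the length of the shortest walk from $i$ to $j$ in at most $n$ steps as $\beta$ grows, which is exactly the $n$-th min-plus (tropical) matrix power:

```python
import numpy as np

def tropical_power(D, n):
    """n-th min-plus matrix power: entry (i, j) is the length of the
    shortest walk from i to j in at most n edges (the zero diagonal
    acts as a stay-put step)."""
    R = D
    for _ in range(n - 1):
        R = np.min(R[:, :, None] + D[None, :, :], axis=1)
    return R

# Hypothetical symmetric distance matrix for a 3-vertex graph.
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 2.0],
              [4.0, 2.0, 0.0]])
n = 2
for beta in [1.0, 10.0, 100.0]:
    A = np.exp(-beta * D)
    approx = -np.log(np.linalg.matrix_power(A, n)) / beta
    print(beta, np.max(np.abs(approx - tropical_power(D, n))))
# The discrepancy shrinks as beta grows: ordinary matrix powers
# degenerate to tropical (min-plus) powers in the beta -> infinity limit.
```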
That’s pretty cool, but it’s not what’s been puzzling me. What’s been puzzling me is that matrix entries in powers of $A$ look an awful lot like partition functions in statistical mechanics, with $\beta$ playing the role of the inverse temperature and the distances $d(i, j)$ playing the role of energies. So, for a while now, I’ve been wondering whether they actually are partition functions of systems I can construct starting from the matrix $A$. It turns out that the answer is yes: the corresponding systems are called one-dimensional vertex models, and in the literature the connection to matrix entries is called the transfer matrix method. I learned this from an expository article by Vaughan Jones, “In and around the origin of quantum groups,” and today I’d like to briefly explain how it works.
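As a concrete taste of the transfer matrix method (a sketch of the standard technique, not code from the post): for a periodic chain of $n$ sites whose adjacent-site interactions have energies $E(i, j)$, the partition function is $Z = \operatorname{tr}(T^n)$, where $T_{ij} = e^{-\beta E(i, j)}$ is the transfer matrix. The sketch below checks this against a brute-force sum over configurations for a hypothetical two-state (Ising-like) chain:

```python
import itertools
import numpy as np

def partition_function_brute(E, n, beta):
    """Sum exp(-beta * total energy) over all state configurations of a
    periodic chain of n sites, where E[i][j] is the bond energy."""
    k = len(E)
    Z = 0.0
    for config in itertools.product(range(k), repeat=n):
        energy = sum(E[config[s]][config[(s + 1) % n]] for s in range(n))
        Z += np.exp(-beta * energy)
    return Z

def partition_function_transfer(E, n, beta):
    """The same partition function via the transfer matrix: Z = tr(T^n)."""
    T = np.exp(-beta * np.asarray(E))
    return np.trace(np.linalg.matrix_power(T, n))

# Hypothetical two-state bond energies (aligned states are favored).
E = [[-1.0, 1.0],
     [1.0, -1.0]]
print(partition_function_brute(E, 8, beta=0.5))
print(partition_function_transfer(E, 8, beta=0.5))  # agrees
```

Each closed walk of length $n$ through the matrix contributes the Boltzmann weight of one configuration, which is why the trace of $T^n$ reproduces the brute-force sum.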