Archive for June, 2009

When someone linked me to Ravi Vakil’s advice for potential graduate students, I was struck by the following passage:

…[M]athematics is so rich and infinite that it is impossible to learn it systematically, and if you wait to master one topic before moving on to the next, you’ll never get anywhere. Instead, you’ll have tendrils of knowledge extending far from your comfort zone [emphasis mine]. Then you can later backfill from these tendrils, and extend your comfort zone; this is much easier to do than learning “forwards”. (Caution: this backfilling is necessary. There can be a temptation to learn lots of fancy words and to use them in fancy sentences without being able to say precisely what you mean. You should feel free to do that, but you should always feel a pang of guilt when you do.)

It’s great to hear this coming from an expert because this is exactly what I’ve been doing for the past year without realizing it. Without formally learning anything, I’ve begun extending tendrils into algebraic topology, category theory, and all sorts of subjects about which I still can’t say anything particularly intelligent. However, from my experience so far I have a tentative list of the benefits of this strategy:

  1. It becomes easier to recognize related concepts or constructions across different subjects, hence to tie them together.
  2. If a concept you don’t fully understand is sitting in the back of your head, then once you learn the tools necessary to understand it you may be able to re-derive it partially from memory. As Richard Feynman said, “What I cannot create, I do not understand.”
  3. Certain things become better motivated if you can say to yourself something like, “oh, I know why we’re learning about Theorem X; it’s an instance of Phenomenon Y which has lots of other nontrivial instances.” Here I’ll give an example: Pontryagin duality.
  4. You are naturally led to ask lots of questions, and questions are great. “This looks a lot like Theory Z,” you might say to your professor. “What’s the connection?”

The idea that constantly working outside your comfort zone is key to progress appears to me to be a general phenomenon; in two-player games and sports, for example, playing opponents who are better than you is a great way to improve.

What I’m curious about, though, is whether the undergraduate math curriculum explicitly encourages “tendril” behavior. Perhaps it’s just something every math major should be motivated to do independently, but I can’t help but think that Ravi’s advice, which I’ve never seen written down anywhere else, should be more widely acknowledged.

Read Full Post »

(A more appropriate title for this post would probably be “I hate Bourbaki,” but I like it as is.)

I spend a lot of my free time reading research papers, usually in combinatorics; those tend to require the least background. Today I decided to read everything I could find written by one of the great champions of combinatorics, Gian-Carlo Rota, and in his philosophical writings I found the explicit declaration of an opinion I’ve held for some time now.

Consider the following passage from Syntax, Semantics, and the Problem of the Identity of Mathematical Objects:

The real line has been axiomatized in at least six different ways. Mathematicians are still looking for further axiomatizations of the real line, too many to support the justification of axiomatization by the claim that we axiomatize only in order to secure the validity of the theory.

Whatever the reasons, the variety of axiomatizations confirms beyond a doubt that the mathematician thinks of one real line, that is, the identity of the object is presupposed and in fact undoubted.

The mathematician’s search for further axiomatizations presupposes the certainty of the identity of the object, but recognizes that the properties of the object can never be completely revealed. The mathematician wants to find out what else the real line can be. He wants ever more perspectives on one and the same object, and the perspectives of mathematics are precisely the various axiomatizations, which lead to a variety of syntactic systems always interpreted as presenting the same object, that is, as having the same models.

Or the following passage, from Combinatorics, representation theory, and invariant theory: The story of a ménage à trois:


Read Full Post »

In the previous post we used the Polya enumeration theorem to give a sneaky, underhanded proof that

\displaystyle \sum_{m \ge 0} Z(S_m) t^m = \exp \left( z_1 t + \frac{z_2 t^2}{2} + \frac{z_3 t^3}{3} + ... \right).

If you’ve never seen the exponential function used like this, you might be wondering how it can be “explained.”

To explore this question, I’d like to give three other proofs of this result, the last of which will be “direct.” Along the way I’ll be attempting to describe some basic principles of species theory in an informal way. I’ll also give some applications, including to a Putnam problem.
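As a sanity check before the other proofs, the identity can be verified numerically for small m (this is my own sketch, not one of the proofs to come): compute Z(S_m) by brute force over permutations, and compare with the coefficients of the truncated exponential of z_1 t + z_2 t^2/2 + .... The helper `series_exp` and the particular test values for the z_k are arbitrary choices.

```python
import itertools
import math

def cycle_monomial(perm, z):
    """Product of z[c] over the cycle lengths c of perm."""
    n = len(perm)
    seen = [False] * n
    prod = 1.0
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            prod *= z[length]
    return prod

def Z(m, z):
    """Cycle index of S_m evaluated at z[1..m], by brute force over all m! permutations."""
    if m == 0:
        return 1.0
    perms = itertools.permutations(range(m))
    return sum(cycle_monomial(p, z) for p in perms) / math.factorial(m)

def series_exp(a, N):
    """Coefficients of exp(a(t)) mod t^N, given a[0] = 0, via the ODE e' = a'e."""
    e = [0.0] * N
    e[0] = 1.0
    for n in range(1, N):
        e[n] = sum(k * a[k] * e[n - k] for k in range(1, n + 1)) / n
    return e

N = 7
z = [0.0] + [0.5 + 0.1 * k for k in range(1, N)]  # arbitrary test values for z_1, ..., z_6
a = [0.0] + [z[k] / k for k in range(1, N)]       # the exponent has a_k = z_k / k
rhs = series_exp(a, N)
for m in range(N):
    assert abs(Z(m, z) - rhs[m]) < 1e-9  # coefficient of t^m agrees with Z(S_m)
```

Since the z_k were chosen generically, agreement of the evaluations is strong evidence that the polynomial identity holds coefficient by coefficient.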


Read Full Post »

I ended the last post by asking whether the proof of baby Polya extends to the multi-parameter setting where we want to keep track of how many of each color we use. In fact, it does. First, we should specify what exactly we’re trying to compute. Recall the setup: we have colors 1, 2, ... n (represented by variables r_1, r_2, ... r_n), and we have a set of slots S with |S| = m acted on by a group G where each slot will be assigned a color. Define f_G(t_1, ... t_n) to be the number of orbits of functions S \to \{ 1, 2, ... n \} under the action of G where color i is used t_i times. (Since the action of G only permutes slots, it doesn’t change the multiset of colors used.) What we want to compute is the generating function

\displaystyle F_G(r_1, r_2, ... r_n) = \sum_{t_1 + ... + t_n = m} f_G(t_1, t_2, ... t_n) r_1^{t_1} r_2^{t_2} ... r_n^{t_n}.

Note that setting r_1 = r_2 = ... = r_n = 1 we recover F_G(n), which doesn’t contain information about particular color combinations. By the orbit-counting lemma, this is equivalent to computing

\displaystyle \frac{1}{|G|} \sum_{g \in G} | \text{Fix}(g) |(r_1, r_2, ... r_n)

where we must now count fixed points in a weighted manner, recording the multiset of colors in each fixed point. How do we go about doing this?
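For a concrete check of the weighted count, here is a small brute-force computation (my own example, not from the post): a necklace of m = 4 beads in n = 2 colors under the cyclic group C_4. We compute the orbit counts per color multiset twice, once by direct orbit enumeration and once by averaging weighted fixed points.

```python
from itertools import product
from collections import Counter
from fractions import Fraction

def color_counts(c, n):
    """The tuple (t_1, ..., t_n) recording how often each color appears in coloring c."""
    return tuple(c.count(i) for i in range(n))

def direct_orbit_counts(m, n, group):
    """Orbits of colorings, keyed by color counts, by brute-force orbit enumeration."""
    seen, counts = set(), Counter()
    for c in product(range(n), repeat=m):
        if c in seen:
            continue
        orbit = {tuple(c[g[i]] for i in range(m)) for g in group}
        seen |= orbit
        counts[color_counts(c, n)] += 1
    return counts

def burnside_orbit_counts(m, n, group):
    """Same counts via weighted fixed points: average, over g, of the fixed colorings of g."""
    counts = Counter()
    for g in group:
        for c in product(range(n), repeat=m):
            if all(c[g[i]] == c[i] for i in range(m)):
                counts[color_counts(c, n)] += 1
    return {k: Fraction(v, len(group)) for k, v in counts.items()}

# the cyclic group C_4 rotating a 4-bead necklace
C4 = [tuple((i + r) % 4 for i in range(4)) for r in range(4)]
direct = direct_orbit_counts(4, 2, C4)
weighted = burnside_orbit_counts(4, 2, C4)
assert all(weighted[k] == direct[k] for k in direct)
assert direct[(2, 2)] == 2  # two necklaces use each color twice: 0011 and 0101
```

The agreement illustrates the point in the post: since G only permutes slots, each orbit has a well-defined color multiset, so the weighted average of fixed points splits cleanly into one orbit count per monomial r_1^{t_1} ... r_n^{t_n}.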


Read Full Post »

We’ve got everything we need to prove the Polya enumeration theorem. To state the theorem, however, requires the language of generating functions, so I thought I’d take the time to establish some of the important ideas. It isn’t possible to do justice to the subject in one post, so I’ll start with some references.

Many people recommend Wilf’s generatingfunctionology, but the terminology is non-standard and somewhat problematic. Nevertheless, it contains valuable insights and examples.

I cannot recommend Flajolet and Sedgewick’s Analytic Combinatorics highly enough. It is readable, includes a wide variety of examples as well as very general techniques, and places a great deal of emphasis on asymptotics, computation, and practical applications.

If you can do the usual computations but want to learn some theory, Bergeron, Labelle, and Laroux’s Combinatorial Species and Tree-like Structures is a fascinating introduction to the theory of species that requires fairly little background, although a fair amount of patience. It also contains my favorite proof of Cayley’s formula.

Doubilet, Rota, and Stanley’s On the idea of generating function is part of a fascinating program for understanding generating functions with posets as the fundamental concept. I may have more to say about this perspective once I learn more about it.

While it is by no means comprehensive, this post over at Topological Musings is a good introduction to the basic ideas of species theory.

And a shameless plug: the article I wrote for the Worldwide Online Olympiad Training program about generating functions is available here. I tried to include a wide variety of examples and exercises taken from the AMC exams while focusing on techniques appropriate for high-school problem solvers. There are at least a few minor errors, for which I apologize. You might also be interested in this previous post of mine.

In any case, this post will attempt to be a relatively self-contained discussion of the concepts necessary for understanding the PET.


Read Full Post »

I can’t resist mentioning a joke I heard from an episode of American Dad. Stan Smith has this to say about his training as a negotiator:

Hey, you’ve got one of the CIA’s top negotiators on your side. Y’know, I negotiated my way through negotiator training. I should’ve failed the hell out of that class. That’s how good I am.

It reminds me of some of the issues that cropped up in Scott Aaronson’s discussion of side-splitting proofs, especially Theorem 5. I can’t help but chuckle at the fact that Stan’s line gives both a lower and an upper bound on the quality of his negotiation skills!

Anyone know of other jokes like this one?

Read Full Post »

The orbit-stabilizer theorem implies, very immediately, one of the most important counting results in group theory. The proof is easy enough to give in a paragraph now that we’ve set up the requisite machinery. Remember that we counted fixed points by looking at the size of the stabilizer subgroup. Let’s count them another way. Since a fixed point is really a pair (g, x) such that gx = x, and we’ve been counting them indexed by x, let’s count them indexed by g. We use \text{Fix}(g) to denote the set of fixed points of g. (Note that this is a function of the group action, not the group, but again we’re abusing notation.) Counting the total number of fixed points “vertically,” then “horizontally,” gives the following.

Proposition: \displaystyle \sum_{x \in S} |\text{Stab}(x)| = \sum_{g \in G} | \text{Fix}(g) |.

On the other hand, by the orbit-stabilizer theorem, it’s true for any orbit O that \displaystyle \sum_{x \in O} | \text{Stab}(x) | = |G|, since the cosets of any stabilizer subgroup partition G. This immediately gives us the lemma formerly known as Burnside’s, or the Cauchy-Frobenius lemma, which we’ll give a neutral name.

Orbit-counting lemma: The number of orbits in a group action is given by \displaystyle \frac{1}{|G|} \sum_{g \in G} | \text{Fix}(g) |, i.e. the average number of fixed points.

In this post we’ll investigate some consequences of this result.
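To make the double count concrete, here is a small machine check (my own example, not from the post): the dihedral group of the square acting on its four vertices, verifying both the Proposition and the orbit-counting lemma.

```python
# the dihedral group of the square as permutations of the vertices 0, 1, 2, 3
rotations = [tuple((i + r) % 4 for i in range(4)) for r in range(4)]
reflections = [tuple((r - i) % 4 for i in range(4)) for r in range(4)]
G = rotations + reflections
S = range(4)

# the Proposition: count fixed-point pairs (g, x) "vertically" and "horizontally"
stab_sum = sum(sum(1 for g in G if g[x] == x) for x in S)
fix_sum = sum(sum(1 for x in S if g[x] == x) for g in G)
assert stab_sum == fix_sum == 8

# the orbit-counting lemma: number of orbits = average number of fixed points
orbits = {frozenset(g[x] for g in G) for x in S}
assert len(orbits) == fix_sum // len(G) == 1
```

Note how the count of 8 splits differently on each side: vertically, each of the 4 vertices has a stabilizer of order 2 (the identity and one diagonal reflection); horizontally, the identity fixes 4 vertices and the two diagonal reflections fix 2 each.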


Read Full Post »

Now that we’ve discussed group actions a bit, it’s time to characterize them. In this post I’d like to take a leaf from Tim Gowers’ book and try to make each step taken in the post “obvious.” While the content of the proofs is not too difficult, their motivation is rarely discussed.

First, it’s important to note that there is a way to take direct sums or disjoint unions (category theorists would say coproducts) of group actions: given a group G acting on two sets S, T, one defines an action on their disjoint union S \sqcup T in the obvious way: pick one action or the other. (Disjoint unions differ from unions in the usual sense because we relabel the elements of the sets so that they cannot intersect.) There’s a great reason to do this, and cycle decomposition showed us a special case: every group action is the direct sum or disjoint union of its actions on its orbits.

This is the first step toward a structure theorem. Since the group action cannot “mix” between two orbits, it acts “independently” on orbits, and any question we might want to ask about the group action can be answered by looking at each orbit separately. A group action with a single orbit is called transitive, which means that for every x, y in the underlying set there exists g \in G such that gx = y. So to classify group actions it suffices to classify transitive group actions.
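A toy illustration of the decomposition (my own example): compute the orbits of a small action, then check that the restriction to each orbit is transitive.

```python
def orbits(G, S):
    """Partition S into orbits under the permutations in G."""
    remaining, parts = set(S), []
    while remaining:
        x = next(iter(remaining))
        orbit = {g[x] for g in G}
        parts.append(orbit)
        remaining -= orbit
    return parts

# a small action: the group generated by the transposition (0 1),
# acting on {0, 1, 2, 3} with 2 and 3 as fixed points
G = [(0, 1, 2, 3), (1, 0, 2, 3)]
parts = orbits(G, range(4))
assert sorted(sorted(p) for p in parts) == [[0, 1], [2], [3]]

# transitivity on each orbit: every element is g·x for some g, whatever x we start from
for part in parts:
    for x in part:
        assert {g[x] for g in G} == part
```

The action really is the disjoint union of its three orbit actions: one copy of the regular action of Z/2 and two trivial actions on a point.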


Read Full Post »

Sometimes I worry that I should be more consistent or more lenient about the background I expect of my readers. (Readers, I have to admit that I still don’t really know who you are!) Considering how important I think it is that mathematicians value communicating their ideas to non-specialists (what John Armstrong calls the Generally Interested Lay Audience (GILA)), I should probably be putting my money where my mouth is.

So, inspired by the Unapologetic Mathematician, I have decided on a little project: to build up to a discussion of the Polya enumeration theorem without assuming any prerequisites other than a passing familiarity with group theory. Posts in this project, or any subsequent similar projects, will be labeled “GILA.” The general plan is to talk about group actions in general, the orbit-stabilizer theorem, the lemma formerly known as Burnside’s, and generating functions on the way. Much of this discussion can be found in Section 7 of Stanley’s lectures on algebraic combinatorics.

The PET is a very general way to think about questions of the following nature:

  1. How many ways are there to paint the faces of a cube if we consider two colorings related by some rotation to be the same?
  2. How many ways are there to paint the beads on a necklace if we consider two colorings related by a rotation of the necklace to be the same?
  3. How many ways are there to place some balls into some urns if we consider two placements related by a relabeling of the balls to be the same? (In other words, we want the balls to be “indistinguishable.”)
  4. How many graphs with a fixed number of vertices and edges are there if we consider two graphs related by a relabeling of the vertices to be the same? (In other words, we want the graphs to be “non-isomorphic.”)

The general situation is that we have a (finite) set S of slots and another (for now, finite) set C of objects we want to put into those slots, which it is useful to think of as a set of colors that we can “paint” the slots. We also have a (finite) group G of symmetries that controls when we consider two colorings to be “the same.” To understand this situation, it is first necessary to understand something about group actions.
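As a preview of the kind of answer this machinery produces, here is a brute-force treatment of question 1, anticipating the lemma formerly known as Burnside’s (the face labeling and the two generating rotations below are my own choices): build the rotation group of the cube as permutations of its six faces, then count colorings by averaging fixed points, using the fact that a rotation fixes exactly n^(number of cycles) colorings.

```python
from fractions import Fraction

def compose(a, b):
    """The permutation 'apply b, then a'."""
    return tuple(a[b[i]] for i in range(len(b)))

def generate(gens):
    """Close a set of permutations under composition."""
    group = {tuple(range(len(gens[0])))}
    frontier = set(gens)
    while frontier:
        group |= frontier
        frontier = {compose(g, h) for g in group for h in group} - group
    return group

def num_cycles(p):
    seen, count = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            count += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return count

# faces: 0=up, 1=down, 2=front, 3=back, 4=left, 5=right (the labeling is arbitrary)
vertical = (0, 1, 5, 4, 2, 3)    # 90° turn about the up-down axis: front→right→back→left
front_back = (5, 4, 2, 3, 0, 1)  # 90° turn about the front-back axis: up→right→down→left
G = generate([vertical, front_back])
assert len(G) == 24  # the rotation group of the cube

def face_colorings(n):
    # a rotation with k cycles on faces fixes n^k colorings; average over the group
    return sum(Fraction(n ** num_cycles(g)) for g in G) / len(G)

assert face_colorings(2) == 10
assert face_colorings(3) == 57
```

The two 90° rotations about perpendicular face axes generate all 24 rotations, and the averaging step is exactly the orbit-counting result the series will build up to.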


Read Full Post »

I’ve decided to start blogging a little more about the algebraic combinatorics I’ve learned over the past year. In particular, I’d like to present one of my favorite proofs from Stanley’s Enumerative Combinatorics I.

The theory of Young tableaux is a great example of the richness of modern mathematics: although they can be defined in an elementary combinatorial fashion, interest in the theory is primarily driven (as I understand it) by their applications to representation theory and other fields of mathematics. The standard application is in describing the irreducible representations of the symmetric group in characteristic zero. I’ll be describing a more elementary aspect of the theory: the relationship between Young diagrams and q-binomial coefficients.

Young diagrams can be defined as visual representations of partitions. A partition of k is a weakly decreasing sequence \lambda_1 \ge \lambda_2 \ge ... \ge \lambda_i such that \lambda_1 + ... + \lambda_i = k. Partitions uniquely describe cycle decompositions, hence conjugacy classes, of the symmetric group, which is a hint at the connection to representation theory. A Young diagram of the partition \lambda consists of i rows of boxes where the j^{th} row has \lambda_j boxes. Let L(m, n) denote the poset of Young diagrams that fit into an m \times n box, ordered by inclusion: equivalently, one can define the full Young lattice and define L(m, n) to be the elements less than or equal to the m \times n partition. (One can then define a standard Young tableau to be a chain in the Young lattice, but we will not need this notion.) One can easily check that the following is true.

Proposition: |L(m, n)| = {m+n \choose m}.

The rest of this post explains the q-analogue of this result.
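The Proposition is easy to check by machine. Here is a short recursive count (my own sketch): a diagram in the m \times n box is a weakly decreasing sequence of m row lengths, each between 0 and n, so we recurse on the rows, passing the current row length down as an upper bound for the next.

```python
from math import comb  # binomial coefficients, Python 3.8+

def diagrams_in_box(m, n):
    """Count Young diagrams fitting in an m-by-n box: weakly decreasing
    sequences lambda_1 >= ... >= lambda_m with 0 <= lambda_i <= n."""
    def count(rows_left, max_part):
        if rows_left == 0:
            return 1
        return sum(count(rows_left - 1, part) for part in range(max_part + 1))
    return count(m, n)

for m in range(1, 6):
    for n in range(1, 6):
        assert diagrams_in_box(m, n) == comb(m + n, m)
```

The bijective reason, which the recursion mirrors, is that the boundary path of the diagram is a lattice path with m down-steps and n left-steps, and there are {m+n \choose m} such paths.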


Read Full Post »

Older Posts »