Posts Tagged ‘symmetric functions’

If V is a finite-dimensional complex vector space, then the symmetric group S_n naturally acts on the tensor power V^{\otimes n} by permuting the factors. This action of S_n commutes with the action of \text{GL}(V), so all permutations \sigma : V^{\otimes n} \to V^{\otimes n} are morphisms of \text{GL}(V)-representations. This defines a morphism \mathbb{C}[S_n] \to \text{End}_{\text{GL}(V)}(V^{\otimes n}), and a natural question to ask is whether this map is surjective.

Part of Schur-Weyl duality asserts that the answer is yes. The double commutant theorem plays an important role in the proof and also yields a notable corollary, namely that V^{\otimes n} admits a canonical decomposition

\displaystyle V^{\otimes n} = \bigoplus_{\lambda} V_{\lambda} \otimes S_{\lambda}

where \lambda runs over partitions of n with at most \dim V parts, V_{\lambda} are certain irreducible representations of \text{GL}(V), and S_{\lambda} are the Specht modules, which describe all irreducible representations of S_n. This gives a fundamental relationship between the representation theories of the general linear and symmetric groups; in particular, the assignment V \mapsto V_{\lambda} can be upgraded to a functor called a Schur functor, generalizing the construction of the exterior and symmetric products.
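For example, when n = 2 this is the familiar decomposition of the tensor square into its symmetric and antisymmetric parts:

\displaystyle V^{\otimes 2} = \text{Sym}^2(V) \otimes S_{(2)} \oplus \Lambda^2(V) \otimes S_{(1,1)}

where S_{(2)} is the trivial representation and S_{(1,1)} is the sign representation of S_2.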

The proof below is more or less from Etingof’s notes on representation theory (Section 4.18). We will prove four versions of Schur-Weyl duality involving \mathfrak{gl}(V), \text{GL}(V), and (in the special case that V is a complex inner product space) \mathfrak{u}(V), \text{U}(V).


Let G be a group and let

\displaystyle V = \bigoplus_{n \ge 0} V_n

be a graded representation of G, i.e. a functor from G to the category of graded vector spaces with each piece finite-dimensional. Thus G acts on each graded piece V_n individually, each of which is an ordinary finite-dimensional representation. We want to define a character associated to a graded representation, but if a character is to have any hope of uniquely describing a representation it must contain information about the character on every finite-dimensional piece simultaneously. The natural definition here is the graded trace

\displaystyle \chi_V(g) = \sum_{n \ge 0} \chi_{V_n}(g) t^n.

In particular, the graded trace of the identity is the graded dimension or Hilbert series of V.
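For example, the Hilbert series of the symmetric algebra of a d-dimensional vector space placed in degree 1 (with, say, trivial group action) is

\displaystyle \sum_{n \ge 0} {n + d - 1 \choose n} t^n = \frac{1}{(1 - t)^d}.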

Classically a case of particular interest is when V_n = \text{Sym}^n(W^{*}) for some fixed representation W, since V = \text{Sym}(W^{*}) is the symmetric algebra (in particular, commutative ring) of polynomial functions on W, and its subalgebra of invariants V^G consists of the polynomial functions on W invariant under G. In the nicest cases (for example when G is finite), V^G is finitely generated, hence Noetherian, and \text{Spec } V^G is a variety which describes the quotient W/G.

In a previous post we discussed instead the case where V_n = (W^{*})^{\otimes n} for some fixed representation W, hence V is the tensor algebra of functions on W. I thought it might be interesting to discuss some generalities about these graded representations, so that’s what we’ll be doing today.
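For the tensor algebra the graded trace can at least be written down in closed form: since the character of a tensor product of representations is the product of their characters,

\displaystyle \chi_V(g) = \sum_{n \ge 0} \chi_{W^{*}}(g)^n t^n = \frac{1}{1 - t \, \chi_{W^{*}}(g)},

which at g = e specializes to the Hilbert series \frac{1}{1 - t \dim W}.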


Today I’d like to introduce a totally explicit combinatorial definition of the Schur functions. Let \lambda \vdash n be a partition. A semistandard Young tableau T of shape \lambda is a filling of the Young diagram of \lambda with positive integers that are weakly increasing along rows and strictly increasing along columns. The weight of a tableau T is defined as \mathbf{x}^T = x_1^{T_1} x_2^{T_2} ... where T_i is the total number of times i appears in the tableau.

Definition 4: \displaystyle s_{\lambda}(x_1, x_2, ...) = \sum_T \mathbf{x}^T

where the sum is taken over all semistandard Young tableaux of shape \lambda.
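Since this definition is completely explicit, it is easy to check on small examples by brute force. Here is a minimal Python sketch (the function names are just for illustration) that enumerates the semistandard Young tableaux of a given shape with bounded entries and collects the weights \mathbf{x}^T; for shape (2,1) in three variables it recovers s_{(2,1)}(x_1, x_2, x_3) = x_1^2 x_2 + x_1^2 x_3 + x_1 x_2^2 + x_2^2 x_3 + x_1 x_3^2 + x_2 x_3^2 + 2 x_1 x_2 x_3.

from itertools import product
from collections import Counter

def ssyt(shape, max_entry):
    # Brute force: try every filling of the Young diagram with entries in
    # {1, ..., max_entry}, keeping those whose rows weakly increase and
    # whose columns strictly increase.
    cells = [(r, c) for r, row_length in enumerate(shape) for c in range(row_length)]
    for values in product(range(1, max_entry + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        if all((c == 0 or T[(r, c - 1)] <= T[(r, c)]) and
               (r == 0 or T[(r - 1, c)] < T[(r, c)])
               for (r, c) in cells):
            yield T

def schur_coefficients(shape, num_vars):
    # Sum the weights x^T over all SSYT of the given shape with entries at
    # most num_vars, recorded as {exponent vector: coefficient}.
    coefficients = Counter()
    for T in ssyt(shape, num_vars):
        content = Counter(T.values())
        coefficients[tuple(content[i] for i in range(1, num_vars + 1))] += 1
    return dict(coefficients)

# s_{(2,1)}(x_1, x_2, x_3): eight tableaux, seven monomials, with x_1 x_2 x_3
# appearing with coefficient 2.
print(schur_coefficients((2, 1), 3))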

As before we can readily verify that s_{(k)} = h_k, s_{(1^k)} = e_k. This definition will allow us to deduce the Jacobi-Trudi identities for the Schur functions, which describe among other things the action of the fundamental involution \omega. Since I’m trying to emphasize how many different ways there are to define the Schur functions, I’ll call these definitions instead of propositions.

Definition 5: \displaystyle s_{\lambda}= \det(h_{\lambda_i+j-i})_{1 \le i, j \le n}.

Definition 6: \displaystyle s_{\lambda'} = \det(e_{\lambda_i+j-i})_{1 \le i, j \le n}.
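For example, take \lambda = (2,1), which is its own conjugate. With the conventions h_0 = 1 and h_m = 0 for m < 0, the rows and columns of the matrix in Definition 5 indexed by parts equal to zero form a unitriangular block and do not affect the determinant, so

\displaystyle s_{(2,1)} = \det \begin{pmatrix} h_2 & h_3 \\ h_0 & h_1 \end{pmatrix} = h_1 h_2 - h_3,

and Definition 6 gives the same Schur function as e_1 e_2 - e_3.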


The last time we talked about symmetric functions, I asked whether the vector space \mathcal{R} could be turned into an algebra, i.e. equipped with a nice product. It turns out that the induced representation allows us to construct such a product as follows:

Given representations \rho_1, \rho_2 of S_n, S_m, their tensor product \rho_1 \otimes \rho_2 is a representation of the direct product S_n \times S_m in a natural way. Now, S_n \times S_m injects naturally into S_{n+m}, which gives a new representation

\rho = \text{Ind}_{S_n \times S_m}^{S_{n+m}} \rho_1 \otimes \rho_2.

The character of this representation is called the induction product \rho_1 * \rho_2 of the characters of \rho_1, \rho_2, and with this product \mathcal{R} becomes a graded commutative algebra. (Commutativity and associativity are fairly straightforward to verify.) It now remains to answer the first question: does there exist an algebra homomorphism \phi : \Lambda \to \mathcal{R}? And can we describe the inner product on \Lambda coming from the inner product on \mathcal{R}?
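(As a small example of the induction product, take n = m = 1 and let \rho_1, \rho_2 both be the trivial representation of S_1. The induced representation is then the regular representation of S_2, which decomposes as the trivial representation plus the sign representation, so the square of the unique degree-one class in \mathcal{R} is the sum of the two irreducible classes in degree two.)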

To answer this question we’ll introduce perhaps the most important class of symmetric functions, the Schur functions s_{\lambda}.

N.B. I’ll be skipping even more proofs than usual today, partly because they require the development of a lot of machinery I haven’t described and partly because I don’t understand them all that well. Again, good references are Sagan or EC2.


Recall that the elementary symmetric functions e_{\lambda} form a basis of the ring of symmetric functions \Lambda as a module over any commutative ring R. A corollary of this result, although I didn’t state it explicitly, is that the elementary symmetric functions e_1, e_2, ... are algebraically independent, hence any ring homomorphism out of the symmetric functions is determined freely by the images of e_1, e_2, ... .

A particularly interesting choice of endomorphism \omega : \Lambda \to \Lambda occurs when we set \omega(e_n) = h_n. This endomorphism sends the generating function E(t) for the elementary symmetric polynomials to the generating function H(t) = \frac{1}{E(-t)} for the complete homogeneous symmetric polynomials; since \omega is a ring homomorphism, it must then send H(t) to \frac{1}{H(-t)} = E(t), so \omega(h_n) = e_n and \omega is an involution on the ring of symmetric functions. Thus the elementary symmetric functions and complete homogeneous symmetric functions are dual in a very strong sense. This is closely related to the identity

\displaystyle \left( {n \choose d} \right) = (-1)^d {-n \choose d}

and today I’d like to try to explain one interpretation of what’s going on that I learned from Todd Trimble, which is that “the exterior algebra is the symmetric algebra of a purely odd supervector space.”
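Concretely, the two generating functions mentioned above are

\displaystyle E(t) = \sum_{n \ge 0} e_n t^n = \prod_i (1 + x_i t), \qquad H(t) = \sum_{n \ge 0} h_n t^n = \prod_i \frac{1}{1 - x_i t},

so E(-t) H(t) = 1; comparing coefficients of t^n gives the relations \sum_{k=0}^{n} (-1)^k e_k h_{n-k} = 0 for n \ge 1, which is what lets one compute \omega explicitly.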


In a previous post I gave essentially the following definition: given a discrete dynamical system, i.e. a space X and a function f : X \to X, and under the assumption that f^n has a finite number of fixed points for all n, we define the dynamical zeta function to be the formal power series

\displaystyle \zeta_X(t) = \exp \sum_{n \ge 1} |\text{Fix } f^n| \frac{t^n}{n}.
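For example, if X is a single cycle of length k, so that f cyclically permutes k points, then f^n has k fixed points when k divides n and none otherwise, and

\displaystyle \zeta_X(t) = \exp \sum_{m \ge 1} \frac{t^{km}}{m} = \frac{1}{1 - t^k}.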

What I didn’t do was motivate this definition, mostly because I hadn’t really worked out the motivation myself. Now that we have an important special case worked out, we can discuss the general case, which will give a purely combinatorial proof of the second half of the Newton-Girard identities.


The goal of this post is to give a purely combinatorial proof of Newton’s sums which would have interrupted the flow of the previous post. Recall that, in the notation of the previous post, Newton’s sums (also known as the first Newton-Girard identity) state that

\displaystyle p_k - e_1 p_{k-1} + e_2 p_{k-2} - \cdots + (-1)^{k-1} e_{k-1} p_1 = (-1)^{k+1} ke_k.
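Before giving the combinatorial argument, here is a quick numerical sanity check of the identity, a minimal Python sketch in which the sample values and helper names are arbitrary:

from itertools import combinations
from math import prod

xs = [2, 3, 5, 7]   # arbitrary sample values for the variables
n = len(xs)

def e(k):
    # elementary symmetric polynomial e_k evaluated at xs (e_0 = 1)
    return sum(prod(c) for c in combinations(xs, k))

def p(k):
    # power sum p_k evaluated at xs
    return sum(x ** k for x in xs)

for k in range(1, n + 1):
    # p_k - e_1 p_{k-1} + e_2 p_{k-2} - ... + (-1)^{k-1} e_{k-1} p_1
    lhs = sum((-1) ** j * e(j) * p(k - j) for j in range(k))
    assert lhs == (-1) ** (k + 1) * k * e(k)
print("Newton's identities check out for k = 1, ...,", n)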

One way to motivate a combinatorial proof is to recast the generating function interpretation appropriately. Given a polynomial C(t) with non-negative integer coefficients and C(0) = 0, let r_1, ... r_n be the reciprocals of the roots of C(t) - 1 = 0. Then

\displaystyle \frac{t C'(t)}{1 - C(t)} = \sum_{k \ge 1} p_k(r_1, ... r_n) t^k.

The left hand side of this identity suggests a particular interpretation in terms of the combinatorial species described by C(t). Today we’ll describe this species when C(t) is a polynomial with non-negative integer coefficients and then describe it in the general case, which will handle the extension to the symmetric function case as well.
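For example, taking C(t) = t + t^2, the reciprocals of the roots of C(t) - 1 = 0 are the golden ratio \varphi and its conjugate 1 - \varphi, and the identity reads

\displaystyle \frac{t + 2t^2}{1 - t - t^2} = \sum_{k \ge 1} \left( \varphi^k + (1 - \varphi)^k \right) t^k = t + 3t^2 + 4t^3 + 7t^4 + ...,

the generating function of the Lucas numbers.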

The method of proof used here is closely related to a post I made previously about the properties of the Frobenius map, and at the end of the post I’ll try to discuss what I think might be going on.


The theory of symmetric functions, which generalizes some ideas that came up in the previous discussion of Polya theory, can be motivated by thinking about polynomial functions of the roots of a monic polynomial P(x) \in \mathbb{Z}[x]. Problems on high school competitions often ask for the sum of the squares or the cubes of the roots of a given polynomial, and those not familiar with symmetric functions will often be surprised that these quantities end up being integers even though the roots themselves aren’t integers. The key is that the sums being asked for are always symmetric in the roots, i.e. invariant under an arbitrary permutation. The coefficients of a monic polynomial are, up to sign, the elementary symmetric polynomials of its roots, which we are given are integers. It follows that any symmetric polynomial in the roots that can be written with integer coefficients in terms of the elementary symmetric polynomials must in fact take an integer value, and as we will see, every symmetric polynomial with integer coefficients has this property.
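For example, if x_1, x_2 are the roots of x^2 - 5x + 3, then x_1 + x_2 = 5 and x_1 x_2 = 3, so x_1^2 + x_2^2 = (x_1 + x_2)^2 - 2 x_1 x_2 = 25 - 6 = 19, an integer even though the roots themselves are irrational.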

These ideas lead naturally to the use of symmetric polynomials in Galois theory, but combinatorialists are interested in symmetric functions for a very different reason involving the representation theory of S_n. This post is a brief overview of the basic results of symmetric function theory relevant to combinatorialists. I’m mostly following Sagan, who recommends Macdonald as a more comprehensive reference.


I ended the last post by asking whether the proof of baby Polya extends to the multi-parameter setting where we want to keep track of how many of each color we use. In fact, it does. First, we should specify what exactly we’re trying to compute. Recall the setup: we have colors 1, 2, ... n (represented by variables r_1, r_2, ... r_n), and we have a set of slots S with |S| = m acted on by a group G where each slot will be assigned a color. Define f_G(t_1, ... t_n) to be the number of orbits of functions S \to \{ 1, 2, ... n \} under the action of G where color i is used t_i times. (Since the action of G only permutes slots, it doesn’t change the multiset of colors used.) What we want to compute is the generating function

\displaystyle F_G(r_1, r_2, ... r_n) = \sum_{t_1 + ... + t_n = m} f_G(t_1, t_2, ... t_n) r_1^{t_1} r_2^{t_2} ... r_n^{t_n}.

Note that setting r_1 = r_2 = ... = r_n = 1 we recover F_G(n), which doesn’t contain information about particular color combinations. By the orbit-counting lemma, this is equivalent to computing

\displaystyle \frac{1}{|G|} \sum_{g \in G} | \text{Fix}(g) |(r_1, r_2, ... r_n)

where we must now count fixed points in a weighted manner, recording the multiset of colors in each fixed point. How do we go about doing this?
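As a preview, consider coloring the four vertices of a square with two colors r_1, r_2, with G the group of rotations. The identity fixes all colorings, contributing (r_1 + r_2)^4; each quarter-turn fixes only the constant colorings, contributing r_1^4 + r_2^4; and the half-turn fixes the colorings constant on opposite pairs of vertices, contributing (r_1^2 + r_2^2)^2. Averaging gives

\displaystyle F_G(r_1, r_2) = \frac{1}{4} \left( (r_1 + r_2)^4 + 2(r_1^4 + r_2^4) + (r_1^2 + r_2^2)^2 \right) = r_1^4 + r_1^3 r_2 + 2 r_1^2 r_2^2 + r_1 r_2^3 + r_2^4,

so, for example, there are exactly two orbits using each color twice.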
