Recall that the elementary symmetric functions $e_1, e_2, e_3, \dots$ generate the ring of symmetric functions as a polynomial algebra over any commutative ring. A corollary of this result, although I didn’t state it explicitly, is that the elementary symmetric functions are algebraically independent, hence any ring homomorphism from the symmetric functions is determined freely by the images of $e_1, e_2, e_3, \dots$.
A particularly interesting choice of endomorphism $\omega$ occurs when we set $\omega(e_k) = h_k$. This endomorphism sends the generating function $E(t) = \sum_{k \ge 0} e_k t^k = \prod_i (1 + x_i t)$ for the elementary symmetric functions to the generating function $H(t) = \sum_{k \ge 0} h_k t^k = \prod_i \frac{1}{1 - x_i t}$ for the complete homogeneous symmetric functions. Since $E(t) H(-t) = 1$, the substitution $F(t) \mapsto \frac{1}{F(-t)}$ taking $E$ to $H$ is an involution; in other words, it follows that $\omega(h_k) = e_k$ and that $\omega$ itself must also be an involution on the ring of symmetric functions. Thus the elementary symmetric functions and complete homogeneous symmetric functions are dual in a very strong sense. This is closely related to the identity $\binom{n+k-1}{k} = (-1)^k \binom{-n}{k}$, and today I’d like to try to explain one interpretation of what’s going on that I learned from Todd Trimble, which is that “the exterior algebra is the symmetric algebra of a purely odd supervector space.”
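The relation $E(t) H(-t) = 1$ is equivalent to the convolution identity $\sum_{i=0}^{k} (-1)^i e_i h_{k-i} = 0$ for all $k \ge 1$, which determines the $h_k$ from the $e_k$ and vice versa. Here is a minimal sanity check in Python (the sample values are arbitrary choices of mine), evaluating both families of symmetric polynomials directly from their definitions:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k: sum over k-element subsets."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k: sum over k-element multisets."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

xs = [2, 3, 5, 7]  # arbitrary sample values for x_1, ..., x_4
for k in range(1, 8):
    # coefficient of t^k in E(-t) H(t); should vanish for k >= 1
    assert sum((-1)**i * e(i, xs) * h(k - i, xs) for i in range(k + 1)) == 0
```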
Naive definitions
The multisets of a set with $n$ elements form a monoid under multiset union (adding multiplicities), so we can take their monoid algebra, which is a generalization of the notion of group algebra. The homogeneous elements of degree $d$ are given by the set of linear combinations of multisets of size $d$, and this gives the familiar construction of the symmetric algebra $S(V)$, where $V$ is the free vector space on a set with $n$ elements (and $S(V)$ is the free commutative algebra on $V$). The Hilbert series of this algebra is $\frac{1}{(1-t)^n}$; in particular, the dimension in degree $d$ is $\binom{n+d-1}{d}$. So the symmetric algebra is a good linearization of the notion of multiset. One can think of it as the ring of polynomials $k[x_1, \dots, x_n]$, which is a familiar object.
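To see the Hilbert series claim concretely: the coefficient of $t^d$ in $\frac{1}{(1-t)^n}$ is $\binom{n+d-1}{d}$, which should match a brute-force count of multisets (equivalently, of monomials of degree $d$ in $n$ variables). A quick Python check, with $n = 4$ as an arbitrary choice:

```python
from itertools import combinations_with_replacement
from math import comb

n = 4  # size of the underlying set; arbitrary choice for the check
for d in range(8):
    # monomials of degree d in n variables = multisets of size d
    count = sum(1 for _ in combinations_with_replacement(range(n), d))
    assert count == comb(n + d - 1, d)  # = [t^d] 1/(1-t)^n
```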
On the other hand, if we try to linearize subsets in the same way we run into the problem of how to describe the product structure. The solution is to take a leaf out of the book of physics, namely the Pauli exclusion principle, to obtain a physically significant possibility: the product of two subsets with an element in common should be zero. In other words, the product should satisfy $v \wedge v = 0$ for all $v \in V$. But it’s not hard to see (expand $(v + w) \wedge (v + w) = 0$) that this implies that $v \wedge w = -w \wedge v$. In other words, the multiplication must be antisymmetric. This multiplication defines the exterior algebra $\Lambda(V)$ on $V$.
The Hilbert series of this algebra is $(1 + t)^n$; in particular, the dimension in degree $d$ is $\binom{n}{d}$. One way, therefore, to interpret the identity $\binom{n+d-1}{d} = (-1)^d \binom{-n}{d}$ is that the symmetric algebra on a vector space of dimension $n$ should behave something like the exterior algebra on a vector space of dimension $-n$, whatever that means. So what could that possibly mean?
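The identity extends the binomial coefficient to negative upper arguments via the falling-factorial formula $\binom{a}{d} = \frac{a(a-1)\cdots(a-d+1)}{d!}$. A small Python check of $\binom{n+d-1}{d} = (-1)^d \binom{-n}{d}$ over a range of small values:

```python
from math import comb, factorial, prod

def binom(a, d):
    """Generalized binomial coefficient; the top argument may be negative."""
    return prod(a - i for i in range(d)) // factorial(d)

for n in range(1, 7):
    for d in range(7):
        assert comb(n + d - 1, d) == (-1)**d * binom(-n, d)
```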
Negative dimension
What we’re going to do is just start with a normal vector space $V_+$ which is “positive” and throw in a second vector space $V_-$ which is “negative.” This might be a sensible thing to do if, for example, you were interested in talking about a collection of positively and negatively charged particles. The supervector space we are looking at then is just the direct sum $V = V_+ \oplus V_-$ regarded as a $\mathbb{Z}/2\mathbb{Z}$-graded vector space; we’ll refer to the first component interchangeably as the even or positive part and the second component as the odd or negative part, and we’ll say it has dimension $m|n$ if $\dim V_+ = m$ and $\dim V_- = n$, although what we’re really interested in is $m - n$, which we’ll call the Euler characteristic and denote by $\chi(V)$.
Example. Let $V_0 \to V_1 \to V_2 \to \cdots \to V_k$ be a sequence of vector spaces and linear maps, and define the supervector space whose even part is the direct sum of the even-numbered spaces and whose odd part is the direct sum of the odd-numbered spaces. If the sequence is exact (with the zero vector spaces at the beginning and the end) then the Euler characteristic $\sum_i (-1)^i \dim V_i$ vanishes. This is the simplest manifestation of a deep relationship with homology theory; see the enlightening answers to another one of my MathOverflow questions here.
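For a concrete instance of the example above (with hypothetical ranks of my choosing): exactness at $V_i$ forces $\dim V_i = \operatorname{rank}(f_{i-1}) + \operatorname{rank}(f_i)$, and the alternating sum of dimensions then telescopes to zero. In Python:

```python
# ranks[i] = rank of the map f_i : V_i -> V_{i+1}; hypothetical values,
# with rank 0 into and out of the zero spaces at the ends
ranks = [0, 1, 3, 2, 0]
# exactness at V_i: dim V_i = rank(f_{i-1}) + rank(f_i)
dims = [ranks[i] + ranks[i + 1] for i in range(len(ranks) - 1)]
# the Euler characteristic of the associated supervector space vanishes
assert sum((-1)**i * d for i, d in enumerate(dims)) == 0
```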
One way to think of how the category of supervector spaces should behave is to start with the category $\text{Rep}(\mathbb{Z}/2\mathbb{Z})$, since every representation decomposes into copies of the trivial representation (the even part) and the sign representation (the odd part). The intertwining operators here are the grade-preserving ones, i.e. the ones that send positive elements to positive elements and negative elements to negative elements; these are the morphisms in $\text{Rep}(\mathbb{Z}/2\mathbb{Z})$. (The Euler characteristic is then the trace of the non-identity element of $\mathbb{Z}/2\mathbb{Z}$.)
Next we should introduce a tensor product structure. Going off of how we want the $\mathbb{Z}/2\mathbb{Z}$-action to behave on each component of the tensor product, the even part of the tensor product $V \otimes W$ should be $(V_+ \otimes W_+) \oplus (V_- \otimes W_-)$ and the odd part should be $(V_+ \otimes W_-) \oplus (V_- \otimes W_+)$. With this definition, Euler characteristics are multiplicative, so we have categorified the identity $(a - b)(c - d) = (ac + bd) - (ad + bc)$. Note that this is the same tensor product identity as the one we used to discuss the orthogonality relations. The unit of this tensor product is the base field $k$ regarded as a purely even space, so that $\chi(k) = 1$, and one can check that all the obvious laws hold.
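A supervector space is determined up to isomorphism by the pair $(\dim V_+, \dim V_-)$, so the tensor product rule and the multiplicativity of $\chi$ can be checked on dimension pairs alone. A tiny Python sketch:

```python
def tensor(V, W):
    """(even, odd) dimensions of the tensor product of two supervector spaces."""
    (a, b), (c, d) = V, W
    return (a * c + b * d, a * d + b * c)

def chi(V):
    """Euler characteristic: even dimension minus odd dimension."""
    return V[0] - V[1]

# arbitrary dimension pairs for the check
for V in [(3, 1), (0, 2), (4, 4)]:
    for W in [(2, 5), (1, 0)]:
        assert chi(tensor(V, W)) == chi(V) * chi(W)
```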
Supercommutativity
The next structure we’d like is a way to identify the tensor products $V \otimes W$ and $W \otimes V$, i.e. a braiding. Unlike in the case of $\text{Vect}$, where the only sensible identification sends $v \otimes w$ to $w \otimes v$ on pure tensors, there are two braidings. Let’s suppose for now that any sensible braiding will still have the form $v \otimes w \mapsto c \, (w \otimes v)$ for some scalar $c$ on pure homogeneous tensors. By bilinearity it really suffices to define how to switch two even elements, an even and an odd element, and two odd elements, since nice braidings satisfy $B_{W,V} \circ B_{V,W} = \mathrm{id}_{V \otimes W}$. Now, compatibility with units (which are even) implies that no sign can appear when one of the two elements is even, so it follows that $c = 1$ in those cases; in other words, even elements don’t introduce any sign changes. By the above considerations it follows that $c^2 = 1$, hence for odd elements we have $c = \pm 1$. The only way to avoid the boring braiding is to require that $c = -1$ for odd elements, which is often written $v \otimes w \mapsto (-1)^{|v||w|} \, w \otimes v$, where $|v|, |w|$ are the parities of $v$ and $w$ (as elements of $\mathbb{Z}/2\mathbb{Z}$). One can check that all the associativity axioms hold, so this braiding turns $\text{Rep}(\mathbb{Z}/2\mathbb{Z})$ into a symmetric monoidal category, and now we can define the supersymmetric powers $S^d(V)$ of a supervector space $V$ as the quotient of the tensor power $V^{\otimes d}$ by the above relation. Even better, we can now define the supercommutative algebra $S(V) = \bigoplus_{d \ge 0} S^d(V)$.
Supercommutativity appears naturally in several settings: in the exterior algebra (which we already knew about), in the cup product in a cohomology ring, and in Clifford algebras, so there is plenty of evidence that this isn’t an ad hoc definition; see also the great answers when I asked why this definition was meaningful at MathOverflow.
Edit, 11/8/09: For a more concise and general way to describe $S(V)$, see also the answers to the MathOverflow question here.
And now something remarkable happens.
Proposition: $\chi(S^d(V)) = \binom{\chi(V) + d - 1}{d}$.
If $V$ is purely even of dimension $n$, then supercommutativity is commutativity and $S^d(V)$ is a purely even copy of the usual symmetric power, of dimension $\binom{n+d-1}{d}$. If $V$ is purely odd of dimension $n$, then supercommutativity is anticommutativity and $S^d(V)$ is a copy of the usual exterior power $\Lambda^d$, of dimension $\binom{n}{d}$, sitting in parity $d \bmod 2$; its Euler characteristic is therefore $(-1)^d \binom{n}{d} = \binom{-n+d-1}{d}$. This is the categorification of the set-multiset identity I wrote earlier, which we now know should really be written $\binom{n+d-1}{d} = (-1)^d \binom{-n}{d}$. So not only do we know how to interpret this result, but it has a great generalization, which we just have to prove.
Proof. In the general case where $V = V_+ \oplus V_-$ with $\dim V_+ = m$ and $\dim V_- = n$, any product of pure tensors can be rewritten, using supercommutativity, as the product of elements of $V_+$ followed by elements of $V_-$, which gives a direct sum decomposition $S^d(V) \cong \bigoplus_{a+b=d} S^a(V_+) \otimes \Lambda^b(V_-)$, where $S^a(V_+) \otimes \Lambda^b(V_-)$ has Euler characteristic $(-1)^b \binom{m+a-1}{a} \binom{n}{b}$ since the odd parts anticommute and the even parts commute. It remains to show that $\sum_{a+b=d} (-1)^b \binom{m+a-1}{a} \binom{n}{b} = \binom{m-n+d-1}{d}$.
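The remaining identity, including the cases where $\chi(V) = m - n$ is negative, can be checked numerically; the generalized binomial coefficient below is defined by the usual falling-factorial formula. A Python sketch over small values:

```python
from math import comb, factorial, prod

def binom(a, d):
    """Generalized binomial coefficient; the top argument may be negative."""
    return prod(a - i for i in range(d)) // factorial(d)

for m in range(1, 6):        # dimension of the even part
    for n in range(6):       # dimension of the odd part
        for d in range(6):
            lhs = sum((-1)**b * comb(m + a - 1, a) * comb(n, b)
                      for b in range(d + 1) for a in [d - b])
            assert lhs == binom(m - n + d - 1, d)
```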
The combinatorial proof for $m \ge n$ is as follows: we have a set of $m$ elements, of which $n$ are “bad” and the others are “good.” We want to compute the number of multisets with $d$ elements, all of which are good, and we proceed by inclusion-exclusion. We start with the $\binom{m+d-1}{d}$ multisets of $d$ arbitrary elements, then remove the multisets with at least one bad element, then add back the multisets with at least two distinct bad elements, and so forth; forcing $b$ distinct bad elements to appear contributes $(-1)^b \binom{n}{b} \binom{m+(d-b)-1}{d-b}$. The proof for $m < n$ is identical except that we count subsets.
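The inclusion-exclusion argument can be verified by brute force for small hypothetical parameters: count the all-good multisets explicitly and compare with the alternating sum. In Python:

```python
from itertools import combinations_with_replacement
from math import comb

m, n, d = 6, 2, 4  # m elements, the first n of them "bad"; arbitrary small values
good_only = sum(1 for ms in combinations_with_replacement(range(m), d)
                if all(x >= n for x in ms))
# b distinct bad elements forced to appear, then d - b arbitrary elements
incl_excl = sum((-1)**b * comb(n, b) * comb(m + (d - b) - 1, d - b)
                for b in range(d + 1))
assert good_only == incl_excl == comb((m - n) + d - 1, d)
```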
This proof should have a linearization in which we define morphisms making $S^d(V_+) \otimes \Lambda^0(V_-) \to S^{d-1}(V_+) \otimes \Lambda^1(V_-) \to \cdots \to S^0(V_+) \otimes \Lambda^d(V_-)$ an exact sequence (except at the ends), but I can’t figure out what the morphisms should be.
Characters
The upshot of all of this is that we can think of the symmetric and exterior algebras on an equal footing. There is a natural way to switch between the two, since the functor $V \mapsto V \otimes k^{0|1}$, where $k^{0|1}$ is the base field regarded as a purely odd vector space, interchanges the even and odd parts. In physics this is the relationship between the Fock spaces of fermions and bosons. As usual John Baez has lots to say on this subject, except that I can’t find the TWF I’m thinking of.
This is the same relationship as the one between the elementary symmetric functions and complete homogeneous symmetric functions. To see this, note that the symmetric and exterior algebra constructions are functorial, so they send automorphisms $A : V \to V$ to automorphisms $S^d(A) : S^d(V) \to S^d(V)$ and $\Lambda^d(A) : \Lambda^d(V) \to \Lambda^d(V)$. It turns out that these are actually irreducible representations of $GL(V)$, and the traces of an element $A$ on these representations are the complete homogeneous symmetric functions and the elementary symmetric functions of the eigenvalues of $A$, respectively.
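One way to check the trace claim without building the induced matrices explicitly is via Newton’s identities, which compute $\operatorname{tr} S^d(A) = h_d$ and $\operatorname{tr} \Lambda^d(A) = e_d$ recursively from the power traces $p_k = \operatorname{tr}(A^k)$: $h_d = \frac{1}{d}\sum_{k=1}^d p_k h_{d-k}$ and $e_d = \frac{1}{d}\sum_{k=1}^d (-1)^{k-1} p_k e_{d-k}$. A Python sketch using a hypothetical triangular matrix, whose eigenvalues can be read off the diagonal:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [0, 3, 4], [0, 0, 5]]  # triangular, so eigenvalues are 2, 3, 5
eig = [2, 3, 5]

# power traces p_k = tr(A^k)
p, P = {}, A
for k in range(1, 6):
    p[k] = sum(P[i][i] for i in range(3))
    P = mat_mul(P, A)

# Newton's recurrences: h_d = tr S^d(A), e_d = tr Lambda^d(A)
h, e = {0: 1}, {0: 1}
for d in range(1, 6):
    h[d] = sum(p[k] * h[d - k] for k in range(1, d + 1)) // d
    e[d] = sum((-1)**(k - 1) * p[k] * e[d - k] for k in range(1, d + 1)) // d

# compare against h_d and e_d evaluated directly at the eigenvalues
for d in range(1, 6):
    assert h[d] == sum(prod(c) for c in combinations_with_replacement(eig, d))
    assert e[d] == sum(prod(c) for c in combinations(eig, d))
```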
Here’s a riddle for next time: the symmetric group $S_d$ acts on $V^{\otimes d}$ by swapping factors, and on the images of $S^d(V)$ and $\Lambda^d(V)$ it acts, respectively, by the trivial representation and the sign representation. These representations have characters the trivial and sign characters, and we showed way before that $h_d = \sum_{\lambda \vdash d} \frac{p_\lambda}{z_\lambda}$ and $e_d = \sum_{\lambda \vdash d} (-1)^{d - \ell(\lambda)} \frac{p_\lambda}{z_\lambda}$ (where $z_\lambda$ is the order of the centralizer of a permutation of cycle type $\lambda$).
In other words, the trace of the symmetric algebra has something to do with the trivial representation, and the trace of the exterior algebra has something to do with the sign representation. What does this mean?
Great series of posts! I’m still absorbing most of it. I do have a question. You say
“if we try to linearize subsets in the same way we run into the problem of how to describe the product”
Why is that? Subsets also form a monoid under union, so we could also take the monoid ring. Or am I missing something?
Sure, but the problem with this product structure is that the corresponding monoid ring isn’t graded by the number of elements in the subset, since subsets can have nontrivial intersection. We want the gradation so that the Hilbert series is what it “should be.” (The underlying idea, which I didn’t state explicitly, is that certain sufficiently nice generating functions should be Hilbert series of various algebras.)
“This proof should have a linearization in which we define morphisms making $S^d(V_+) \otimes \Lambda^0(V_-) \to \cdots \to S^0(V_+) \otimes \Lambda^d(V_-)$ an exact sequence (except at the ends), but I can’t figure out what the morphisms should be.”
This is a special case of what’s called a “Schur complex.” There’s one for every partition. The data needed to create a Schur complex is a matrix, i.e., a map between free modules. If the matrix is sufficiently generic and the partition is rectangular, then the complex will be exact. In this case, this is the Schur complex associated to a horizontal strip (or vertical strip, depending on your conventions). I learned about these in Jerzy Weyman’s book Cohomology of Vector Bundles and Syzygies (Section 2.4). That section of the book only requires knowledge of super linear algebra. If you have trouble finding the book, let me know. Otherwise, another reference is the paper Schur functors and Schur complexes by Akin, Buchsbaum, and Weyman. I’ve never looked at this paper though.
By the way, I was having fun with multisets recently. There is another obvious way they form a rig. I wrote some notes on the nLab:
http://nlab.mathforge.org/nlab/show/multiset
An old fashioned set $X$ in the free rig of multisets is idempotent.