
## GILA VI: The cycle index polynomials of the symmetric groups

In the previous post we used the Polya enumeration theorem to give a sneaky, underhanded proof that

$\displaystyle \sum_{m \ge 0} Z(S_m) t^m = \exp \left( z_1 t + \frac{z_2 t^2}{2} + \frac{z_3 t^3}{3} + ... \right)$.

If you’ve never seen the exponential function used like this, you might be wondering how it can be “explained.”

To explore this question, I’d like to give three other proofs of this result, the last of which will be “direct.” Along the way I’ll be attempting to describe some basic principles of species theory in an informal way. I’ll also give some applications, including to a Putnam problem.

### Proof 1

Possibly the least satisfying proof is computation: write the RHS as

$\displaystyle \exp \left( z_1 t \right) \exp \left( \frac{z_2 t^2}{2} \right) \exp \left( \frac{z_3 t^3}{3} \right) ...$

and expand by power series, giving

$\displaystyle \left( \sum_{n \ge 0} \frac{z_1^n t^n}{n!} \right) \left( \sum_{n \ge 0} \frac{z_2^n t^{2n} }{2^n n!} \right) \left( \sum_{n \ge 0} \frac{z_3^n t^{3n} }{3^n n!} \right) ...$

This implies that the number of permutations of $m = \sum_{i=1}^{k} ic_i$ elements with cycle type $c_1, c_2, ... c_k$ (and remember, $c_i$ is the number of cycles of size $i$, not the size of a cycle) should be equal to

$\displaystyle \frac{m!}{1^{c_1} c_1! \, 2^{c_2} c_2! \, 3^{c_3} c_3! ... }$.

If we can prove this directly, we’ll have proven the desired identity. The proof is actually not so bad: just write down $\{ 1, 2, ... m \}$ in some order to start with. There are $m!$ ways to do this. Now we declare this to be a permutation of the desired cycle type by saying that the first $c_1$ numbers are the fixed points, the next $2c_2$ entries are the cycles of length $2$ (in adjacent pairs), the next $3c_3$ entries are the cycles of length $3$ (in adjacent triplets), and so forth. This description is not unique due to the following two symmetries.

1. The $c_i$ cycles of a given length could have been written down in another order, i.e. $(12)(34)$ is the same as $(34)(12)$. This means that for each $i$ we have overcounted by a factor of $c_i!$.
2. The entries in a given cycle could have been written down in another cyclic order, i.e. $(123)$ is the same as $(231)$. This means that for each $i$ we have overcounted by a factor of $i^{c_i}$.

And we have proven the desired result. If you’d like, try to rephrase this proof in terms of group actions.
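Since this is exactly the kind of claim a computer can check, here is a quick brute-force verification of the counting formula for $S_5$ (a sketch in Python; the helper names are my own, not standard):

```python
from itertools import permutations
from math import factorial

def cycle_type(perm):
    """Return (c_1, ..., c_m), where c_i is the number of cycles of length i."""
    m = len(perm)
    counts = [0] * (m + 1)
    seen = [False] * m
    for start in range(m):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] += 1
    return tuple(counts[1:])

def formula(cs):
    """m! / (1^{c_1} c_1! 2^{c_2} c_2! ...) for a cycle type cs."""
    m = sum(i * c for i, c in enumerate(cs, start=1))
    denom = 1
    for i, c in enumerate(cs, start=1):
        denom *= i ** c * factorial(c)
    return factorial(m) // denom

# Tally all of S_5 by cycle type and compare against the formula.
m = 5
tally = {}
for p in permutations(range(m)):
    t = cycle_type(p)
    tally[t] = tally.get(t, 0) + 1
assert all(tally[t] == formula(t) for t in tally)
assert sum(tally.values()) == factorial(m)
```

For instance, the formula gives $15$ permutations in $S_5$ with one fixed point and two transpositions, which matches the count by hand.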

This proof is highly unsatisfying. Among other things, it casts a lot of suspicion on the exponential function: all those factorials must be there for a combinatorial reason related to permutations. And since the denominators $1, 2, 3, ...$ in the argument of the exponential originally came from the power series of the logarithm, those must be there for a combinatorial reason as well, related to cycles. A starting point as to the exact connection is the following: setting $z_1 = z_2 = ... = 1$ gives

$\displaystyle \sum_{n \ge 0} \frac{n!}{n!} t^n = \exp \left( t + \frac{t^2}{2} + \frac{t^3}{3} + ... \right)$

which is the “obvious” statement that $\frac{1}{1 - t} = \exp \ln \left( \frac{1}{1 - t} \right)$. But what the combinatorics is telling us is that this analytic identity is equivalent to the existence of cycle decompositions.

### Proof 2

A general technique for dealing with a difficult sequence defined combinatorially is to find a recursion for it. The cycle index polynomials for the symmetric groups can be computed recursively as follows: in a permutation in $S_{m+1}$, the number $m+1$ belongs to a cycle of some length $k+1, 0 \le k \le m$. The rest of the cycles can be identified with a permutation in $S_{m-k}$ once we choose which $k$ elements of $\{ 1, 2, ... m \}$ share a cycle with $m+1$, and in what order they appear in that cycle. After juggling the factorials, this argument is equivalent to the weighted identity

$\displaystyle (m+1)! Z(S_{m+1}) = \sum_{k=0}^{m} z_{k+1} \frac{m!}{(m-k)!} (m-k)! Z(S_{m-k})$

since what we have shown is that each monomial on the LHS comes from a unique term on the RHS. Dividing out, this is equivalent to

$\displaystyle (m+1) Z(S_{m+1}) = \sum_{k=0}^{m} z_{k+1} Z(S_{m-k})$

but this is equivalent to

$\displaystyle \frac{d}{dt} \left( \sum_{m \ge 0} Z(S_m) t^m \right) = \left( \sum_{m \ge 0} Z(S_m) t^m \right) \left( \sum_{k \ge 0} z_{k+1} t^k \right)$

and solving this differential equation with the appropriate initial conditions gives us the same identity as before. Recognizing multiplication by the index $m+1$ as differentiation and sums of the form on the RHS as multiplication of series is a common trick, and it’s good to get used to it.
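The recursion is also easy to implement. Here is a sketch (the names are mine) that builds $Z(S_m)$ as a dictionary from partitions of $m$, recorded as sorted tuples of cycle lengths, to coefficients, and checks the result against the known cycle types of $S_4$:

```python
from fractions import Fraction

def cycle_index(m):
    """Z(S_m) via (n+1) Z(S_{n+1}) = sum_k z_{k+1} Z(S_{n-k}),
    stored as {sorted tuple of cycle lengths: coefficient}."""
    Z = [{(): Fraction(1)}]          # Z(S_0) = 1
    for n in range(m):               # build Z(S_{n+1}) from Z(S_0), ..., Z(S_n)
        new = {}
        for k in range(n + 1):       # n+1 lies in a cycle of length k+1
            for part, coeff in Z[n - k].items():
                key = tuple(sorted(part + (k + 1,)))
                new[key] = new.get(key, Fraction(0)) + coeff
        Z.append({p: c / (n + 1) for p, c in new.items()})
    return Z[m]

Z4 = cycle_index(4)
# Coefficient of a partition = (# permutations of that cycle type) / 4!.
assert Z4[(1, 1, 1, 1)] == Fraction(1, 24)   # the identity
assert Z4[(1, 1, 2)] == Fraction(6, 24)      # six transpositions
assert Z4[(2, 2)] == Fraction(3, 24)
assert Z4[(1, 3)] == Fraction(8, 24)
assert Z4[(4,)] == Fraction(6, 24)
assert sum(Z4.values()) == 1                 # Z(S_4) evaluated at z_i = 1
```

The final assertion is the $z_1 = z_2 = ... = 1$ specialization discussed above.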

But the manner of this proof suggests a more general argument: it’s true of any power series of the form $g(x) = e^{f(x)}$ that

$\displaystyle g'(x) = g(x) f'(x)$

and if we understood the combinatorial meaning of differentiation this would give a recursive description of the combinatorial meaning of $g(x)$.

### Proof 3

The proofs so far all lack real explanatory power. To understand this result from a purely combinatorial point of view, it is necessary to appeal to the theory of exponential, rather than ordinary, generating functions. Exponential generating functions take the form

$\displaystyle \sum_{n \ge 0} a_n \frac{x^n}{n!}$

for some combinatorial sequence $a_n$, and they appear when counting so-called “labeled” objects. The precise definition of “labeled” is surprisingly difficult to pin down; for one thing, one should be very careful not to think of a labeling of a set as defining an ordering, although most of the symbols we use to label things come with an implicit order.

One way to understand what “labeled” objects are is in terms of what multiplication of egfs does. Multiplication of ogfs corresponds to the convolution or Cauchy product

$\displaystyle c_n = \sum_{k=0}^{n} a_k b_{n-k} = \sum_{k+l=n} a_k b_l$.

As we’ve discussed, Cauchy products appear commonly in combinatorics when $a_n, b_n$ describe the number of ways to perform some construction on a set of size $n$; then $c_n$ describes the number of ways to perform the first construction on the “left half” of that set and the second construction on the “right half” of that set, for some division of left and right.

Multiplication of egf’s appears when, instead of dividing left and right (which is typical if you’re constructing totally ordered objects like tilings), you want to perform the first construction on an arbitrary subset and the second construction on its complement (which is typical if you’re constructing unordered objects like labeled graphs); that is, if

$\displaystyle A(x) = \sum_{n \ge 0} \frac{a_n}{n!} x^n, B(x) = \sum_{n \ge 0} \frac{b_n}{n!} x^n$

then their product $C(x) = \sum_{n \ge 0} \frac{c_n}{n!} x^n$ satisfies

$\displaystyle c_n = \sum_{k=0}^{n} {n \choose k} a_k b_{n-k}$.

For example, the sequence with constant value $1$ has the ogf $\frac{1}{1 - x}$. One usually interprets this as the “sequence” construction: there is exactly one board with $n$ slots for each $n$. Squaring this generating function gives the number of ways to place two boards next to each other, counted by the total number of slots; in other words, it’s the number of ways to divide a board into a left side and a right side.

The sequence with constant value $1$ has the egf $e^x$. One usually interprets this as the “set” construction – this is very important for understanding the cycle index result – but this is vague; a concrete perspective is that there is exactly one empty graph with $n$ vertices for each $n$. (This perspective is useful because many labeled constructions are best thought of as constructing various types of labeled graphs.) Squaring this generating function gives the number of ways to put two empty graphs together, counted by the total number of vertices; in other words, it’s the number of ways to divide an empty graph into red vertices and blue vertices.
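As a sanity check, with $a_n = b_n = 1$ the binomial convolution should produce $c_n = 2^n$, the number of ways to $2$-color $n$ labeled vertices; a few lines of Python confirm this (a sketch, nothing here is canonical):

```python
from math import comb

# a_n = b_n = 1 are the egf coefficients of e^x (empty labeled graphs);
# their binomial convolution should count 2-colorings of n vertices.
a = b = [1] * 10
c = [sum(comb(n, k) * a[k] * b[n - k] for k in range(n + 1)) for n in range(10)]
assert c == [2 ** n for n in range(10)]
```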

I hope that example gave you some insight into what I mean by working with “labeled” objects; when we put objects together, we have to relabel them appropriately for everything to make sense. Even when working with ogfs we already do this when defining the disjoint union.

Given the above definition of the exponential, why is it true that

$\displaystyle \frac{d}{dx} e^x = e^x$?

This may seem silly, but bear with me. On egfs, differentiation takes a sequence $a_n$ to $a_{n+1}$ and deletes the first term. Combinatorially, it can be thought of as follows: if $a_n$ describes the number of ways to perform a certain construction on the empty graph on $n$ vertices, then differentiation corresponds to adding a new vertex, then performing the construction. So the reason the exponential function is its own derivative is that adding a new vertex and performing the empty graph construction is equivalent to the empty graph construction. Now something less trivial: why is it true that

$\displaystyle \frac{d}{dx} e^{nx} = n e^{nx}, n \in \mathbb{N}$?

Well, $e^{nx} = (e^x)^n$ describes the number of ways to color an empty graph with $n$ colors; adding a new vertex and then performing this construction is the same thing as performing this construction and then adding a new vertex in one of the $n$ colors.

More generally, let $g(x) = e^{f(x)}$; why is it true that

$\displaystyle g'(x) = g(x) f'(x)$?

In order for this result to make sense “formally” we require that $f(0) = 0$; all this means is that the corresponding construction can’t do anything without any vertices. Combinatorially, what the above equation says is that adding a new vertex and performing the $g$-construction is the same thing as performing the $g$-construction on some subset together with adding a new vertex and performing the $f$-construction on what’s left.

This is a recursive definition. One way to think about it is that differentiation “points” to a vertex (although the pointing operation is usually given by $x \frac{d}{dx}$), and “pointing” to a $g$-construction points out a subset equivalent to an $f$-construction. In other words, since $e^x$ describes empty graphs on $n$ vertices, $e^{f(x)}$ describes the total number of ways to replace each vertex with an $f$-construction (which again one can think of as another type of graph, and then we count according to the total number of vertices). A great intuition for thinking about this is to look at it the other way around: if $g$ denotes a construction such that $\ln g$ has a “nice” generating function, then $\ln g$ denotes the connected components of $g$-constructions. (In my previous post on this subject I explained how this leads to the generating function for connected labeled graphs.)

It’s also possible to interpret the series

$\displaystyle e^{f(x)} = \sum_{n \ge 0} \frac{f(x)^n}{n!}$

in this manner, since $f^n$ denotes the number of ways to split up a set into $n$ subsets and perform the $f$-construction on each one. The reason we divide by $n!$ is related to relabeling: if we think of each $f$-construction as having a different color to distinguish them, then the division by $n!$ corresponds to ignoring the different ways to label each color. One can think of this as a certain action of the symmetric group, but with the property that the labelings of the vertices prevent any constructions from having a nontrivial stabilizer.

Finally, here’s a combinatorial proof that $e^{f+g} = e^f e^g$. Since $f + g$ denotes “do an $f$-construction or a $g$-construction,” $e^{f+g}$ denotes the number of collections of either $f$ or $g$-constructions, which can be uniquely decomposed into the subset consisting of $f$-constructions and the subset consisting of $g$-constructions.

So what we have now are various purely combinatorial explanations of the properties of the exponential function. Are they enough to explain the cycle index result directly?

Yes! The key is to identify a permutation with its functional graph; see the Topological Musings post I linked to for another discussion of this technique. This is a graph whose vertex set is the set we’re permuting together with directed arrows from $x$ to $y$ whenever $\pi(x) = y$.

The defining property of a functional graph is that every vertex has out-degree $1$; since permutations are invertible, every vertex also has in-degree $1$. It's not hard to see from here that the connected components of the functional graph of a permutation are directed cycles, which we already knew, but not in this language.

Here’s how to construct the generating function for directed cycles: start with $n$ vertices. List the vertices in some order and then connect them in that order, then connect the last vertex to the first vertex. There are $n!$ ways to do this. Then divide by $n$ because we pretended that there was a start vertex when there wasn’t. So the generating function for directed cycles is

$\displaystyle \sum_{n \ge 1} \frac{(n-1)!}{n!} x^n = \ln \frac{1}{1 - x}$

just like we already knew from calculus. (Again, this proof is purely combinatorial.) But we know more: since the coefficient of $x^k$ means exactly “the cycles of length $k$,” we can add a parameter for every coefficient and construct

$\displaystyle \sum_{n \ge 1} \frac{z_n}{n} x^n$

where $z_n$ is the parameter controlling the cycles of length $n$, and taking the exponential of this generating function gives us precisely (the functional graphs of) permutations, with parameters that record how many cycles of each length they have – precisely the cycle index polynomials of the symmetric groups.

What may not be obvious at this moment is just how much control having all those parameters around entails.

### Application 1

Suppose we only want to count permutations indexed by how many cycles of length $1$ they have, i.e. fixed points. So we ignore every parameter except the first one, giving

$\displaystyle yx + \sum_{n \ge 2} \frac{x^n}{n} = \ln \frac{1}{1 - x} + (y - 1) x$

which has the exponential

$\displaystyle \sum_{n \ge 0} \frac{d_n(y)}{n!} x^n = \frac{e^{(y-1)x}}{1 - x}$.

By construction, $d_n(y) = \sum_{k=0}^{n} d_{n, k} y^k$ is a polynomial such that the coefficient of $y^k$ records the number of permutations with $k$ fixed points; these numbers are known as the rencontres numbers, and this generating function tells you everything you could want to know about them. Setting $y = 0$ tells us that derangements have generating function

$\displaystyle \sum_{n \ge 0} \frac{d_{n,0}}{n!} x^n = \frac{e^{-x}}{1 - x}$

which, after expansion, implies the well-known identity

$\displaystyle d_{n,0} = n! \left( \sum_{i=0}^{n} \frac{(-1)^i}{i!} \right)$.

The standard proof of this identity is by inclusion-exclusion. Now, counting the number of permutations with $k$ fixed points is trivial once you know how to count derangements, and the generating function reflects this: taking the $k^{th}$ derivative with respect to $y$, dividing by $k!$, and setting $y = 0$ gives

$\displaystyle \sum_{n \ge 0} \frac{d_{n,k}}{n!} x^n = \frac{x^k e^{-x}}{k! (1 - x)}$

This identity is equivalent to the following: a permutation of $n$ elements with $k$ fixed points is the same thing as a choice of the $k$ fixed points together with a derangement of the remaining $n-k$ elements. Both the generating function identity and the combinatorial proof imply

$\displaystyle d_{n,k} = {n \choose k} d_{n-k,0}$.

Writing $\frac{x^k}{1 - x} = x^k + x^{k+1} + ...$ implies the identity

$d_{n,k} = \displaystyle \frac{n!}{k!} \left( \sum_{i=0}^{n-k} \frac{(-1)^i}{i!} \right)$

which one can also deduce from the above. This identity trivializes a problem from Canada 1982.

Note that all of these identities are easily provable without using the language of egfs; the value of egfs is that they make proving these identities automatic and generalizable.
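For the skeptical, here is a brute-force check (the helper names are mine) of the two identities $d_{n,k} = \binom{n}{k} d_{n-k,0}$ and $d_{n,k} = \frac{n!}{k!} \sum_{i=0}^{n-k} \frac{(-1)^i}{i!}$ for small $n$:

```python
from itertools import permutations
from math import comb, factorial

def d(n, k):
    """Number of permutations of n elements with exactly k fixed points."""
    return sum(1 for p in permutations(range(n))
               if sum(p[i] == i for i in range(n)) == k)

for n in range(7):
    for k in range(n + 1):
        # choose the k fixed points, derange the rest
        assert d(n, k) == comb(n, k) * d(n - k, 0)
        # the inclusion-exclusion form (exact integer arithmetic)
        total = sum((-1) ** i * factorial(n) // (factorial(k) * factorial(i))
                    for i in range(n - k + 1))
        assert d(n, k) == total
```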

One can also see directly from the generating function that $d_{n,0}$ is asymptotically $\frac{n!}{e}$; this is due to the general principle that a meromorphic function $f(z)$ with positive coefficients and dominant singularity a simple pole at $z = 1$ has coefficients asymptotically equal to $\lim_{z \to 1} (1 - z) f(z)$. Flajolet and Sedgewick exploit this simple principle to the hilt in analyzing generating functions arising from regular languages.

What’s the average number of fixed points of a permutation? By the orbit-counting lemma, the answer is the number of orbits of the action of $S_n$ on $n$ elements, which is just $1$. One can also see this by differentiating the above generating function with respect to $y$ and letting $y = 1$, which gives

$\displaystyle \frac{xe^{(1-1)x}}{1 - x} = \frac{1}{1 - x} - 1$.

What’s the average value of the square of the number of fixed points? This is less obvious. There is a nontrivial combinatorial argument that generalizes to $k^{th}$ powers, but again with egfs it’s automatic: take the second derivative, which gives the average value of $\text{Fix}(\pi) \left( \text{Fix}(\pi) - 1 \right)$, then add the previous generating function. This generating function is

$\displaystyle \frac{x^2 + x}{1 - x} = \frac{2}{1 - x} - 2 - x$.

Some perspectives on the generalization are available here. In any case, generating functions in general make it quite easy to compute means and variances of various combinatorial parameters of interest. For example, the above computation implies that the variance of the number of fixed points is $1$.
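Both moments are easy to confirm by enumeration for small $n$ (a sketch, using exact arithmetic via `Fraction`):

```python
from fractions import Fraction
from itertools import permutations

# For n >= 2 the average of Fix should be 1 and of Fix^2 should be 2,
# so the variance of the number of fixed points is 2 - 1^2 = 1.
for n in range(2, 7):
    fixes = [sum(p[i] == i for i in range(n)) for p in permutations(range(n))]
    mean = Fraction(sum(fixes), len(fixes))
    second = Fraction(sum(f * f for f in fixes), len(fixes))
    assert mean == 1
    assert second == 2
```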

### Application 2

Now an application to a nontrivial Putnam problem.

Putnam 2005 B6: Let $\sigma(\pi)$ denote the signature of the permutation $\pi$. Show that

$\displaystyle \sum_{\pi \in S_n} \frac{\sigma(\pi)}{\text{Fix}(\pi) + 1} = (-1)^{n+1} \frac{n}{n+1}$.

Solution. Fans of generating functions are fond of exploiting the identity

$\displaystyle \frac{1}{n+1} = \int_{0}^{1} x^n \, dx$.

In keeping with the combinatorial theme of this post, this identity has the following probabilistic interpretation: pick $x \in [0, 1]$ uniformly and pick $n$ other points. The RHS is the probability that the $n$ points are all to the left of $x$. The LHS is the probability that the first point chosen, $x$, is to the right of all the other points, which is just the reciprocal of the total number of points. Anyway, exploiting this identity we find that what we really want to compute is

$\displaystyle \int_{0}^{1} \left( \sum_{\pi \in S_n} \sigma(\pi) y^{\text{Fix}(\pi)} \right) \, dy$

and this is much more manageable: the polynomial we're integrating happens to be easily computable from the cycle index polynomial $Z_{S_n}$. Since the signature is multiplicative, it is determined by the cycle decomposition of a permutation. Moreover, a cycle of length $k$ has signature $(-1)^{k-1}$, so cycles of even length are odd and cycles of odd length are even; it then follows that

$\displaystyle \sum_{n \ge 0} \left( \sum_{\pi \in S_n} \sigma(\pi) y^{\text{Fix}(\pi)} \right) \frac{x^n}{n!} = \exp \left( yx - \frac{x^2}{2} + \frac{x^3}{3} \mp ... \right)$.

This is equal to $e^{(y-1)x + \ln (1 + x)} = (1 + x) e^{(y-1)x}$. Integrating with respect to $y$, we obtain

$\displaystyle (1 + x) \int_{0}^{1} e^{(y-1)x} \, dy = (1 + x) \left( \frac{1 - e^{-x}}{x} \right)$.

From here it’s a straightforward computation; the coefficient of $x^n$ is indeed $(-1)^{n+1} \frac{n}{n+1}$ as desired.

What is somewhat mysterious is that this problem has the following beautiful alternate solution:

$\displaystyle \sum_{\pi \in S_n} \sigma(\pi) x^{\text{Fix}(\pi)} = \left| \begin{array}{ccccc} x & 1 & 1 & \cdots & 1 \\ 1 & x & 1 & \cdots & 1 \\ 1 & 1 & x & \cdots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \cdots & x \end{array} \right|$.

One can then induct using row reduction. The appearance of determinants in combinatorics is fascinating and not, it seems to me, fully understood.

### Application 3

The following folklore problem is intimately related to cycle decomposition. There are $100$ prisoners in a jail. One day the warden decides to play the following game: he places the names of the $100$ prisoners into $100$ boxes in a row in a room and instructs each prisoner to enter the room. Each prisoner is allowed to check up to $50$ of the boxes. If each prisoner finds his name in this manner, the prisoners are allowed free. Assuming a random distribution of names, what is the optimal strategy and what is the corresponding probability that it succeeds?

Naively, one would expect that each prisoner has a $50\%$ chance of finding his name, which gives a dismal success probability of $\frac{1}{2^{100}}$. The actual probability of success for the optimal strategy is around $31\%$. Why?

The key is that one can impose a total order on the set of boxes and on the set of names – for example, by calling them both $1, 2, ... 100$ – and then what the warden hands us is a random permutation, and permutations have certain structural properties. Although the proof of the optimality of the following strategy is difficult, its description is simple:

1. Prisoner $i$ begins at box $i$. If box $i$ contains name $j$, move to box $j$.
2. Each prisoner continues in this manner until either $50$ boxes have been checked or the prisoner finds his name.

In other words, the prisoner checks the first $50$ entries of whichever cycle he’s in. From here, the probability of success is simply the probability that each cycle has length at most $50$, which is much better than $\frac{1}{2^{100}}$. This probability is precisely the coefficient of $x^{100}$ in

$\displaystyle \exp \left( \sum_{k=1}^{50} \frac{x^k}{k} \right)$

and computer algebra systems are very good at this sort of computation. (Note that differentiation, again, gives a recurrence.) Asymptotic techniques are available, but I don’t think the easy asymptotics kick in at $n = 100$ and I’m not familiar with the hard asymptotics.
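Here is a sketch of that computation in exact arithmetic, using the recurrence that differentiation provides, $n g_n = g_{n-1} + g_{n-2} + \dots + g_{n-50}$. Since at most one cycle can be longer than $50$, and the expected number of cycles of length exactly $k$ is $\frac{1}{k}$, the answer should also equal $1 - \sum_{k=51}^{100} \frac{1}{k}$, and the two computations agree:

```python
from fractions import Fraction

# [x^100] exp(sum_{k=1}^{50} x^k / k) via n g_n = sum_{k=1}^{min(n,50)} g_{n-k}.
N, L = 100, 50
g = [Fraction(0)] * (N + 1)
g[0] = Fraction(1)
for n in range(1, N + 1):
    g[n] = sum(g[n - k] for k in range(1, min(n, L) + 1)) / n

# Cross-check against 1 - sum_{k=51}^{100} 1/k, and the ~31% figure.
assert g[N] == 1 - sum(Fraction(1, k) for k in range(L + 1, N + 1))
assert abs(float(g[N]) - 0.3118) < 0.001
```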

In the other direction, one can compute the probability that a given permutation contains no cycles of length less than or equal to $r$; this is given by the generating function

$\displaystyle \exp \left( \sum_{k \ge r+1} \frac{x^k}{k} \right) = \frac{1}{1 - x} \exp \left( - \sum_{k=1}^{r} \frac{x^k}{k} \right)$.

(Expanding this out gives a rather complicated identity which can presumably be proven by inclusion-exclusion.) The probability we want is, in the regime where $n$ is large compared to $r$, approximately $\lim_{x \to 1} (1 - x) f(x)$ where $f$ is the generating function above (by the principle quoted earlier), which works out to $\exp \left( -H_r \right)$ where $H_r$ is a harmonic number. If $r$ grows sufficiently slowly as $n$ gets large, $H_r \sim \ln r + \gamma$ where $\gamma$ is the Euler-Mascheroni constant, which gives the asymptotic probability $\displaystyle \frac{1}{e^{\gamma} r}$.

### Remarks

There are (at least) two “deep” explanations for the factorials in exponential generating functions, neither of which I understand particularly well. One proceeds via Rota’s idea that many combinatorial arguments naturally generalize to posets. Ordinary generating functions correspond to the poset $\mathbb{N}$ under the usual order, exponential generating functions correspond to the poset of finite sets under inclusion, and, for example, Dirichlet series correspond to the poset $\mathbb{N}$ under division. Rota’s Finite Operator Calculus is the only reference I know on this subject.

One advantage of this perspective is that it unifies various notions of Mobius inversion. Mobius inversion on $\mathbb{N}$ under the usual order corresponds to the inversion pair

$\displaystyle a_n = \sum_{k=0}^{n} b_k \Leftrightarrow b_n = a_n - a_{n-1}$

which in turn corresponds to the generating function identity

$\displaystyle A(x) = \frac{B(x)}{1 - x} \Leftrightarrow B(x) = (1 - x) A(x)$.

Mobius inversion on finite sets corresponds to the inversion pair

$\displaystyle a_n = \sum_{k=0}^{n} {n \choose k} b_k \Leftrightarrow b_n = \sum_{k=0}^{n} (-1)^{n-k} {n \choose k} a_k$

which in turn corresponds to the generating function identity

$\displaystyle A(x) = e^x B(x) \Leftrightarrow B(x) = e^{-x} A(x)$.

This is a disguised form of inclusion-exclusion! Unfortunately, I don’t have my copy of Enumerative Combinatorics handy to make sure I state the exact connection correctly.
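The finite-set inversion pair is easy to test on a sample sequence (a sketch; the choice $b_n = n^2$ is arbitrary):

```python
from math import comb

# Binomial transform and its inverse: a_n = sum C(n,k) b_k should be
# undone by b_n = sum (-1)^(n-k) C(n,k) a_k.
b = [n ** 2 for n in range(10)]
a = [sum(comb(n, k) * b[k] for k in range(n + 1)) for n in range(10)]
recovered = [sum((-1) ** (n - k) * comb(n, k) * a[k] for k in range(n + 1))
             for n in range(10)]
assert recovered == b
```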

Mobius inversion on $\mathbb{N}$ under division is the usual number-theoretic notion of Mobius inversion; it corresponds to the inversion pair

$\displaystyle a_n = \sum_{d | n} b_d \Leftrightarrow b_n = \sum_{d | n} a_d \, \mu \left( \frac{n}{d} \right)$

where $\mu(d)$ is the number-theoretic Mobius function, which in turn corresponds to the generating function identity

$\displaystyle A(s) = \zeta(s) B(s) \Leftrightarrow B(s) = \frac{1}{\zeta(s)} A(s)$

where $\zeta(s)$ is the Riemann zeta function. Motivated by the number-theoretic case, the generalization of the functions $\frac{1}{1 - x}$ and $e^x$ is called the zeta function of the poset in question, and the generalization of their inverses is called the Mobius function of the poset. I haven’t, however, read through Rota’s text very thoroughly, so I can’t discuss this fascinating idea in detail.

The other way to make sense of the factorials in egfs, which I haven’t seen fully fleshed out anywhere, treats division by $n!$ as what’s called a weak quotient by the symmetric group. The idea is to define a notion of quotient of a set $S$ by a group $G$ acting on it such that elements of the set with nontrivial stabilizer are “folded in half” (or $n$ times) to count for less. I wish I could find more resources about this sort of stuff.

### Exercises

Show, for fixed $k$, that the probability that a permutation of $n$ elements consists of only cycles of length at most $k$ vanishes as $n$ becomes large.

Compute the generating function of the complete Bell polynomials.

Given a graph $G$, let $\mathbf{A}$ denote its adjacency matrix and let $N_k = \text{tr } \mathbf{A}^k$ denote the number of closed walks of length $k$. Show that

$\displaystyle \frac{1}{\det (\mathbf{I} - t \mathbf{A})} = \exp \left( N_1 t + \frac{N_2 t^2}{2} + \frac{N_3 t^3}{3} + ... \right)$.

This function is (a variant of) the Ihara zeta function of $G$. (The purely manipulative proof of this identity is a consequence of a result in the last post; what would be great is a combinatorial proof.)

Let $A(x)$ be an ordinary, not exponential, generating function describing some collection of objects. Show that the generating function for unordered pairs of distinct objects from this collection is

$\displaystyle \frac{A(x)^2 - A(x^2)}{2}$.

(Note what happens when $A(1)$ is finite.) Hence solve the following problem.

Putnam 2003 A6: for a set $S$ of non-negative integers let $r_S(n)$ denote the number of ordered pairs $(s_1, s_2) \in S^2, s_1 \neq s_2$ such that $s_1 + s_2 = n$. Is it possible to partition the non-negative integers into disjoint sets $A$ and $B$ such that $r_A(n) = r_B(n)$ for all $n$?

Similarly, show that the generating function for unordered triplets of distinct objects from this collection is

$\displaystyle \frac{A(x)^3 - 3A(x^2) A(x) + 2A(x^3)}{6}$.

Generalize. (The easiest way to do this, as usual, is to pack everything into a bigger generating function. If you get stuck, compute the first few cycle index polynomials of the symmetric groups.)
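For the doubtful, here is a sketch checking both the pair and triplet formulas on a small concrete collection (the sizes chosen are arbitrary, so $A(x) = 2x + x^2 + 3x^3$):

```python
from itertools import combinations

sizes = [1, 1, 2, 3, 3, 3]   # six distinct objects, counted by size
D = 10                        # track coefficients up to degree D

def poly_from_sizes(step):
    """Coefficient list of A(x^step)."""
    a = [0] * (D + 1)
    for s in sizes:
        if s * step <= D:
            a[s * step] += 1
    return a

def mul(p, q):
    """Truncated polynomial product."""
    r = [0] * (D + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if pi and qj and i + j <= D:
                r[i + j] += pi * qj
    return r

A1, A2, A3 = poly_from_sizes(1), poly_from_sizes(2), poly_from_sizes(3)
pairs_gf = [(x - y) // 2 for x, y in zip(mul(A1, A1), A2)]
triples_gf = [(x - 3 * y + 2 * z) // 6
              for x, y, z in zip(mul(mul(A1, A1), A1), mul(A2, A1), A3)]

# Brute-force counts of unordered pairs/triplets of distinct objects by size.
pairs = [0] * (D + 1)
for i, j in combinations(range(len(sizes)), 2):
    pairs[sizes[i] + sizes[j]] += 1
triples = [0] * (D + 1)
for i, j, k in combinations(range(len(sizes)), 3):
    triples[sizes[i] + sizes[j] + sizes[k]] += 1

assert pairs == pairs_gf
assert triples == triples_gf
```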