Archive for the ‘math’ Category

In the previous post we described a fairly straightforward argument, using generating functions and the saddle-point bound, for giving an upper bound

\displaystyle p(n) \le \exp \left( \pi \sqrt{ \frac{2n}{3} } \right)

on the partition function p(n). In this post I’d like to record an elementary argument, making no use of generating functions, giving a lower bound of the form \exp C \sqrt{n} for some C > 0, which might help explain intuitively why this exponential-of-a-square-root rate of growth makes sense.

The starting point is to think of a partition of n as a Young diagram of size n, or equivalently (in French coordinates) as a lattice path from somewhere on the y-axis to somewhere on the x-axis, which only steps down or to the right, such that the area under the path is n. Heuristically, if the path takes a total of L steps then there are about 2^L such paths, and if the area under the path is n then the length of the path should be about O(\sqrt{n}), so this already goes a long way towards explaining the exponential-of-a-square-root behavior.
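To see this exponential-of-a-square-root growth numerically (a sketch of my own, not from the post): compute p(n) by the standard dynamic-programming recurrence over parts and watch \log p(n) / \sqrt{n} drift toward \pi \sqrt{2/3} \approx 2.565. Convergence is slow, because the full asymptotic has a polynomial correction in front of the exponential.

```python
import math

def partition_counts(n_max):
    # p[k] = number of partitions of k; standard "coin counting" DP
    # that introduces parts 1, 2, ..., n_max one at a time.
    p = [1] + [0] * n_max
    for part in range(1, n_max + 1):
        for total in range(part, n_max + 1):
            p[total] += p[total - part]
    return p

p = partition_counts(2000)
for n in (100, 500, 2000):
    # ratio slowly increases toward pi * sqrt(2/3) = 2.565...
    print(n, round(math.log(p[n]) / math.sqrt(n), 3))
```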


Read Full Post »

(Part I of this post is here)

Let p(n) denote the partition function, which describes the number of ways to write n as a sum of positive integers, ignoring order. In 1918 Hardy and Ramanujan proved that p(n) is given asymptotically by

\displaystyle p(n) \approx \frac{1}{4n \sqrt{3}} \exp \left( \pi \sqrt{ \frac{2n}{3} } \right).

This is a major plot point in the new Ramanujan movie, where Ramanujan conjectures this result and MacMahon challenges him by agreeing to compute p(200) and comparing it to what this approximation gives. In this post I’d like to describe how one might go about conjecturing this result up to a multiplicative constant; proving it is much harder.
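One can replay that plot point numerically (my own sketch, not from the post): compute p(200) exactly by dynamic programming and compare it to the right-hand side above. The approximation overshoots by only a few percent already at n = 200.

```python
import math

def p(n):
    # exact partition count via the standard dynamic program over parts
    counts = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            counts[total] += counts[total - part]
    return counts[n]

exact = p(200)   # MacMahon's hand computation: 3,972,999,029,388
approx = math.exp(math.pi * math.sqrt(2 * 200 / 3)) / (4 * 200 * math.sqrt(3))
print(exact)
print(round(approx / exact, 3))   # roughly 1.03: only a few percent off
```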


Read Full Post »

I went to see The Man Who Knew Infinity yesterday. I have nothing much to say about the movie as a movie that wasn’t already said in Scott Aaronson’s review, except that I learned a few fun facts during the Q&A session with writer/director Matthew Brown afterwards. Namely, it’s a little surprising the movie was able to get high-profile stars like Dev Patel and Jeremy Irons on board given that it was made on a relatively low budget. Apparently, Dev Patel signed on because he really wanted to popularize the story of Ramanujan, and Jeremy Irons signed on because he was hooked after being given a copy of Hardy’s A Mathematician’s Apology.

(Disclaimer: this blog does not endorse any of the opinions Hardy expresses in the Apology, e.g. the one about mathematics being a young man’s game, the one about pure math being better than applied math, or the one about exposition being an unfit activity for a real mathematician. The opinion of this blog is that the Apology should be read mostly for insight into Hardy’s psychology rather than for guidance about how to do mathematics.)

Anyway, since this is a movie about Ramanujan, let’s talk about some of the math that appears in the movie. It’s what he would have wanted, probably.


Read Full Post »

Separable algebras

Let k be a commutative ring and let A be a k-algebra. In this post we’ll investigate a condition on A which generalizes the condition that A is a finite separable field extension (in the case that k is a field). It can be stated in many equivalent ways, as follows. Below, “bimodule” always means “bimodule over k.”

Definition-Theorem: The following conditions on A are all equivalent, and all define what it means for A to be a separable k-algebra:

  1. A is projective as an (A, A)-bimodule (equivalently, as a left A \otimes_k A^{op}-module).
  2. The multiplication map A \otimes_k A^{op} \ni a \otimes b \xrightarrow{m} ab \in A has a section as an (A, A)-bimodule map.
  3. A admits a separability idempotent: an element p \in A \otimes_k A^{op} such that m(p) = 1 and ap = pa for all a \in A (which implies that p^2 = p).
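
For a concrete instance of condition 3 (an illustrative sketch of mine, not from the post): for the matrix algebra A = M_n(k), the element p = \sum_i e_{i1} \otimes e_{1i} is a separability idempotent. Below is an exact check for n = 2, representing a simple tensor x \otimes y in A \otimes_k A^{op} via the Kronecker product, which is a faithful linear embedding of the underlying vector space.

```python
def matmul(x, y):
    # product of two square matrices given as lists of rows
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(x, y):
    # Kronecker product: embeds the simple tensor x (x) y faithfully
    n = len(x)
    return [[x[i // n][j // n] * y[i % n][j % n] for j in range(n * n)]
            for i in range(n * n)]

def msum(mats):
    rows, cols = len(mats[0]), len(mats[0][0])
    return [[sum(m[i][j] for m in mats) for j in range(cols)]
            for i in range(rows)]

n = 2
def e(i, j):
    # matrix unit e_{ij} in M_n(k), 0-indexed
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

# candidate separability idempotent p = sum_i e_{i0} (x) e_{0i}
p = [(e(i, 0), e(0, i)) for i in range(n)]

# condition m(p) = 1: sum_i e_{i0} e_{0i} is the identity matrix
assert msum([matmul(x, y) for x, y in p]) == [[1, 0], [0, 1]]

# condition a p = p a, checked on the spanning set of matrix units
for k in range(n):
    for l in range(n):
        a = e(k, l)
        left = msum([kron(matmul(a, x), y) for x, y in p])
        right = msum([kron(x, matmul(y, a)) for x, y in p])
        assert left == right
print("separability idempotent verified for M_2")
```

Since ap = pa holds for the matrix units, it holds for all a by linearity; the check is exact because everything is integer arithmetic.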

(Edit, 3/27/16: Previously this definition included a condition involving Hochschild cohomology, but it’s debatable whether what I had in mind is the correct definition of Hochschild cohomology unless k is a field or A is projective over k. It’s been removed since it plays no role in the post anyway.)

When k is a field, this condition turns out to be a natural strengthening of the condition that A is semisimple. In general, loosely speaking, a separable k-algebra is like a “bundle of semisimple algebras” over \text{Spec } k.


Read Full Post »

Coalgebraic geometry

Previously we suggested that if we think of commutative algebras as secretly being functions on some sort of spaces, we should correspondingly think of cocommutative coalgebras as secretly being distributions on some sort of spaces. In this post we’ll describe what these spaces are in the language of algebraic geometry.

Let D be a cocommutative coalgebra over a commutative ring k. If we want to make sense of D as defining an algebro-geometric object, it needs to have a functor of points on commutative k-algebras. Here it is:

\displaystyle D(-) : \text{CAlg}(k) \ni R \mapsto |D \otimes_k R| \in \text{Set}.

In words, the functor of points of a cocommutative coalgebra D sends a commutative k-algebra R to the set |D \otimes_k R| of setlike elements of D \otimes_k R. In the rest of this post we’ll work through some examples.
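To make the functor of points concrete (an illustrative sketch of mine, not from the post): take D = k[\mathbb{Z}/2] with basis e, g, both setlike. An element a e + b g of D \otimes_k R is setlike iff a^2 = a, b^2 = b, ab = 0, and a + b = 1 in R. Enumerating solutions for R = \mathbb{Z}/7 versus R = \mathbb{Z}/6 shows the functor detecting that \text{Spec}(\mathbb{Z}/6) has two connected components:

```python
def setlike_points(modulus):
    # Setlike elements of k[Z/2] (x) R for R = Z/modulus: pairs (a, b) with
    # Delta(a e + b g) = (a e + b g) (x) (a e + b g) and counit 1, i.e.
    # a^2 = a, b^2 = b, a b = 0, a + b = 1 (all mod modulus).
    return [(a, b) for a in range(modulus) for b in range(modulus)
            if (a * a - a) % modulus == 0
            and (b * b - b) % modulus == 0
            and (a * b) % modulus == 0
            and (a + b) % modulus == 1]

print(setlike_points(7))  # a field: only the two group elements e and g
print(setlike_points(6))  # Z/6 = Z/2 x Z/3: extra "mixed" points appear
```

Over the field \mathbb{Z}/7 the only setlike elements are the group elements themselves, while over \mathbb{Z}/6 the nontrivial idempotents 3 and 4 contribute two more points, one for each factor of the ring.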


Read Full Post »

Coalgebras of distributions

Mathematicians are very fond of thinking about algebras. In particular, it’s common to think of commutative algebras as consisting of functions of some sort on spaces of some sort.

Less commonly, mathematicians sometimes think about coalgebras. In general it seems that mathematicians find these harder to think about, although it’s sometimes unavoidable, e.g. when discussing Hopf algebras. The goal of this post is to describe how to begin thinking about cocommutative coalgebras as consisting of distributions of some sort on spaces of some sort.


Read Full Post »

The principle of maximum entropy asserts that when trying to determine an unknown probability distribution (for example, the distribution of possible results that occur when you toss a possibly unfair die), you should pick the distribution with maximum entropy consistent with your knowledge.

The goal of this post is to derive the principle of maximum entropy, in the special case of probability distributions over finite sets, from Bayes’ theorem.

We’ll do this by deriving an arguably more fundamental principle of maximum relative entropy using only Bayes’ theorem.
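As an executable illustration (my own sketch; the die example is from the paragraph above, the numerics are not): with no constraint beyond normalization, the maximum-entropy distribution on \{1, \dots, 6\} is uniform, and with a mean constraint such as E[X] = 4.5 the maximizer has the exponential-family form p_i \propto e^{\lambda i}, where \lambda can be found by bisection on the mean.

```python
import math

def maxent_die(target_mean, faces=6, tol=1e-12):
    # Maximum-entropy distribution on {1, ..., faces} with a fixed mean:
    # p_i proportional to exp(lam * i), with lam solved by bisection
    # (the mean is strictly increasing in lam).
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z
    lo, hi = -20.0, 20.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

print([round(x, 4) for x in maxent_die(4.5)])  # weights tilted toward high faces
print([round(x, 4) for x in maxent_die(3.5)])  # recovers the uniform distribution
```

For target mean 3.5 (the fair-die mean) the solver returns \lambda = 0, i.e. the uniform distribution, recovering the unconstrained case.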


Read Full Post »


