
## Singular value decomposition

As a warm-up to the subject of this blog post, consider the problem of how to classify $n \times m$ matrices $M \in \mathbb{R}^{n \times m}$ up to change of basis in both the source ($\mathbb{R}^m$) and the target ($\mathbb{R}^n$). In other words, the problem is to describe the equivalence classes of the equivalence relation on $n \times m$ matrices given by

$\displaystyle M \sim N \Leftrightarrow M = PNQ^{-1}, P \in GL_n(\mathbb{R}), Q \in GL_m(\mathbb{R})$.

It turns out that the equivalence class of $M$ is completely determined by its rank $r = \text{rank}(M)$. To prove this we construct some bases by induction. For starters, let $x_1 \in \mathbb{R}^m$ be a vector such that $y_1 = M x_1 \neq 0$; this is always possible unless $M = 0$. Next, let $x_2 \in \mathbb{R}^m$ be a vector such that $y_2 = M x_2$ is linearly independent of $y_1$; this is always possible unless $\text{rank}(M) = 1$.

Continuing in this way, we construct vectors $x_1, \dots, x_r \in \mathbb{R}^m$ such that the vectors $y_1 = M x_1, \dots, y_r = M x_r \in \mathbb{R}^n$ are linearly independent, hence a basis of the column space of $M$. Next, we complete $x_1, \dots, x_r$ to a basis of $\mathbb{R}^m$ by adjoining a basis of $\ker(M)$ (this works by rank-nullity, since the span of the $x_i$ meets $\ker(M)$ trivially), and complete $y_1, \dots, y_r$ to a basis of $\mathbb{R}^n$ in whatever manner we like. With respect to these bases, $M$ takes a very simple form: we have $M x_i = y_i$ if $1 \le i \le r$ and otherwise $M x_i = 0$. Hence, in these bases, $M$ is a block matrix where the top left block is an $r \times r$ identity matrix and the other blocks are zero.

Explicitly, this means we can write $M$ as a product

$\displaystyle M = PDQ^{-1}, P \in GL_n(\mathbb{R}), Q \in GL_m(\mathbb{R})$

where $D$ has the block form above, the columns of $P$ are the basis of $\mathbb{R}^n$ we found by completing $M x_1, \dots, M x_r$, and the columns of $Q$ are the basis of $\mathbb{R}^m$ we found by completing $x_1, \dots, x_r$. This decomposition can be computed by row and column reduction on $M$, where the row operations we perform give $P$ and the column operations we perform give $Q$.
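As a quick numerical sanity check, here is a short numpy sketch (the names, dimensions, and tolerance are my own choices, not from the post) that recovers a decomposition $M = PDQ^{-1}$ for a random low-rank matrix. It uses an SVD to produce the invertible factors, which is overkill for this coarse classification but convenient:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 4, 6, 2
# a random n x m matrix of rank r
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

U, s, Vt = np.linalg.svd(M)     # M = U @ S @ Vt with S the n x m diagonal of s
rank = int(np.sum(s > 1e-10))

# D: identity block in the top left, zeros elsewhere
D = np.zeros((n, m))
D[:rank, :rank] = np.eye(rank)

# absorb the nonzero singular values into P to turn S into D
d = np.ones(n)
d[:rank] = s[:rank]
P = U @ np.diag(d)              # invertible, since all entries of d are nonzero
Q = Vt.T                        # orthogonal, hence invertible

assert np.allclose(M, P @ D @ np.linalg.inv(Q))
```

Any other construction of the bases (e.g. by row and column reduction) would give different $P$ and $Q$ but the same $D$, which is the point: $D$ depends only on the rank.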

Conceptually, the question we’ve asked is: what does a linear transformation $T : X \to Y$ between vector spaces “look like,” when we don’t restrict ourselves to picking a particular basis of $X$ or $Y$? The answer, stated in a basis-independent form, is the following. First, we can factor $T$ as a composite

$\displaystyle X \xrightarrow{p} \text{im}(T) \xrightarrow{i} Y$

where $\text{im}(T)$ is the image of $T$. Next, we can find direct sum decompositions $X \cong \text{im}(T) \oplus X'$ and $Y \cong \text{im}(T) \oplus Y'$ such that $p$ is the projection of $X$ onto its first factor and $i$ is the inclusion of the first factor into $Y$. Hence every linear transformation “looks like” a composite

$\displaystyle \text{im}(T) \oplus X' \xrightarrow{p_{\text{im}(T)}} \text{im}(T) \xrightarrow{i_{\text{im}(T)}} \text{im}(T) \oplus Y'$

of a projection onto a direct summand and an inclusion of a direct summand. So the only basis-independent information contained in $T$ is the dimension of the image $\text{im}(T)$, or equivalently the rank of $T$. (It’s worth considering the analogous question for functions between sets, whose answer is a bit more complicated.)

The actual problem this blog post is about is more interesting: it is to classify $n \times m$ matrices $M \in \mathbb{R}^{n \times m}$ up to orthogonal change of basis in both the source and the target. In other words, we now want to understand the equivalence classes of the equivalence relation given by

$\displaystyle M \sim N \Leftrightarrow M = UNV^{-1}, U \in O(n), V \in O(m)$.

Conceptually, we’re now asking: what does a linear transformation $T : X \to Y$ between finite-dimensional Hilbert spaces “look like”?
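As a hedged numerical preview of the answer (seed and dimensions are my own choices): the singular values of $M$ are unchanged under $M \mapsto U M V^{-1}$ with $U, V$ orthogonal, so they are invariants of this finer equivalence relation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5
M = rng.standard_normal((n, m))

# random orthogonal matrices via QR factorization
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # U in O(n)
V, _ = np.linalg.qr(rng.standard_normal((m, m)))   # V in O(m)

N = U @ M @ V.T                                    # V.T = V^{-1} since V is orthogonal

s_M = np.linalg.svd(M, compute_uv=False)
s_N = np.linalg.svd(N, compute_uv=False)
assert np.allclose(s_M, s_N)                       # same singular values
```

The singular value decomposition says these invariants are complete: two matrices are orthogonally equivalent iff they have the same singular values.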

## Higher linear algebra

Let $k$ be a commutative ring. A popular thing to do on this blog is to think about the Morita 2-category $\text{Mor}(k)$ of algebras, bimodules, and bimodule homomorphisms over $k$, but it might be unclear exactly what we’re doing when we do this. What are we studying when we study the Morita 2-category?

The answer is that we can think of the Morita 2-category as a 2-category of module categories over the symmetric monoidal category $\text{Mod}(k)$ of $k$-modules, equipped with the usual tensor product $\otimes_k$ over $k$. By the Eilenberg-Watts theorem, the Morita 2-category is equivalently the 2-category whose

• objects are the categories $\text{Mod}(A)$, where $A$ is a $k$-algebra,
• morphisms are cocontinuous $k$-linear functors $\text{Mod}(A) \to \text{Mod}(B)$, and
• 2-morphisms are natural transformations.

An equivalent way to describe the morphisms is that they are “$\text{Mod}(k)$-linear” in that they respect the natural action of $\text{Mod}(k)$ on $\text{Mod}(A)$ given by

$\displaystyle \text{Mod}(k) \times \text{Mod}(A) \ni (V, M) \mapsto V \otimes_k M \in \text{Mod}(A)$.

This action comes from taking the adjoint of the enrichment of $\text{Mod}(A)$ over $\text{Mod}(k)$, which gives a tensoring of $\text{Mod}(A)$ over $\text{Mod}(k)$. Since the two are related by an adjunction in this way, a functor respects one iff it respects the other.

So Morita theory can be thought of as a categorified version of module theory, where we study modules over $\text{Mod}(k)$ instead of over $k$. In the simplest cases, we can think of Morita theory as a categorified version of linear algebra, and in this post we’ll flesh out this analogy further.

## What’s a fire, and why does it – what’s the word – burn?

I was staring at a bonfire on a beach the other day and realized that I didn’t understand anything about fire and how it works. (For example: what determines its color?) So I looked up some stuff, and here’s what I learned.

## More on partition asymptotics

In the previous post we described a fairly straightforward argument, using generating functions and the saddle-point bound, for giving an upper bound

$\displaystyle p(n) \le \exp \left( \pi \sqrt{ \frac{2n}{3} } \right)$

on the partition function $p(n)$. In this post I’d like to record an elementary argument, making no use of generating functions, giving a lower bound of the form $\exp(C \sqrt{n})$ for some $C > 0$, which might help explain intuitively why this exponential-of-a-square-root rate of growth makes sense.

The starting point is to think of a partition of $n$ as a Young diagram of size $n$, or equivalently (in French coordinates) as a lattice path from somewhere on the y-axis to somewhere on the x-axis, which only steps down or to the right, such that the area under the path is $n$. Heuristically, if the path takes a total of $L$ steps then there are about $2^L$ such paths, and if the area under the path is $n$ then the length of the path should be about $O(\sqrt{n})$, so this already goes a long way towards explaining the exponential-of-a-square-root behavior.
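This growth rate is easy to see numerically. Here is a short Python sketch (the dynamic program and the sample points are my own choices) computing $p(n)$ exactly and watching $\log p(n) / \sqrt{n}$ creep toward the constant $\pi \sqrt{2/3} \approx 2.565$ from the upper bound above:

```python
import math

def partition_counts(N):
    # p[n] = number of partitions of n, via the standard "coin change"
    # dynamic program: allow parts of size k, one value of k at a time
    p = [0] * (N + 1)
    p[0] = 1
    for k in range(1, N + 1):
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p

p = partition_counts(1000)
C = math.pi * math.sqrt(2 / 3)                     # ~ 2.565
for n in (100, 400, 1000):
    # the ratio increases slowly toward C (the polynomial prefactor
    # in the true asymptotic drags it down at finite n)
    print(n, math.log(p[n]) / math.sqrt(n))
```

The convergence is slow precisely because of the $\frac{1}{4n\sqrt{3}}$-type prefactor in the full asymptotic, whose logarithm is only $O(\log n)$ but not negligible at these sizes.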

## The man who knew partition asymptotics

(Part I of this post is here)

Let $p(n)$ denote the partition function, which describes the number of ways to write $n$ as a sum of positive integers, ignoring order. In 1918 Hardy and Ramanujan proved that $p(n)$ is given asymptotically by

$\displaystyle p(n) \approx \frac{1}{4n \sqrt{3}} \exp \left( \pi \sqrt{ \frac{2n}{3} } \right)$.

This is a major plot point in the new Ramanujan movie, where Ramanujan conjectures this result and MacMahon challenges him by agreeing to compute $p(200)$ and comparing it to what this approximation gives. In this post I’d like to describe how one might go about conjecturing this result up to a multiplicative constant; proving it is much harder.
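We can replay MacMahon's check in a few lines of Python (a sketch; the exact-count dynamic program is my own choice of method, not MacMahon's, who computed by hand using Euler's pentagonal number recurrence):

```python
import math

def p_exact(N):
    # exact partition counts by dynamic programming over allowed part sizes
    p = [0] * (N + 1)
    p[0] = 1
    for k in range(1, N + 1):
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p[N]

n = 200
exact = p_exact(n)                 # the number MacMahon computed: 3972999029388
approx = math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

print(exact)
print(approx / exact)              # already accurate to a few percent at n = 200
```

Even at $n = 200$ the leading term is off by only a few percent, which is what made the comparison so striking.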

## The man who knew elliptic integrals, prime number theorems, and black holes

I went to see The Man Who Knew Infinity yesterday. I have nothing much to say about the movie as a movie that wasn’t already said in Scott Aaronson’s review, except that I learned a few fun facts during the Q&A session with writer/director Matthew Brown afterwards. Namely, it’s a little surprising the movie was able to get high-profile stars like Dev Patel and Jeremy Irons on board given that it was made on a relatively low budget. Apparently, Dev Patel signed on because he really wanted to popularize the story of Ramanujan, and Jeremy Irons signed on because he was hooked after being given a copy of Hardy’s A Mathematician’s Apology.

(Disclaimer: this blog does not endorse any of the opinions Hardy expresses in the Apology, e.g. the one about mathematics being a young man’s game, the one about pure math being better than applied math, or the one about exposition being an unfit activity for a real mathematician. The opinion of this blog is that the Apology should be read mostly for insight into Hardy’s psychology rather than for guidance about how to do mathematics.)

Anyway, since this is a movie about Ramanujan, let’s talk about some of the math that appears in the movie. It’s what he would have wanted, probably.

## Separable algebras

Let $k$ be a commutative ring and let $A$ be a $k$-algebra. In this post we’ll investigate a condition on $A$ which generalizes the condition that $A$ is a finite separable field extension (in the case that $k$ is a field). It can be stated in many equivalent ways, as follows. Below, “bimodule” always means “bimodule over $k$.”

Definition-Theorem: The following conditions on $A$ are all equivalent, and all define what it means for $A$ to be a separable $k$-algebra:

1. $A$ is projective as an $(A, A)$-bimodule (equivalently, as a left $A \otimes_k A^{op}$-module).
2. The multiplication map $m : A \otimes_k A^{op} \to A$ determined by $a \otimes b \mapsto ab$ has a section as an $(A, A)$-bimodule map.
3. $A$ admits a separability idempotent: an element $p \in A \otimes_k A^{op}$ such that $m(p) = 1$ and $ap = pa$ for all $a \in A$ (which implies that $p^2 = p$).
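For a concrete example, the matrix algebra $A = M_n(k)$ is separable, with separability idempotent $p = \sum_i e_{i1} \otimes e_{1i}$ (where $e_{ij}$ are the matrix units). Here is a small numpy sketch checking conditions 2 and 3 for $n = 3$; the encoding of simple tensors as lists of matrix pairs, flattened via Kronecker products for comparison, is my own bookkeeping:

```python
import numpy as np

n = 3

def E(i, j):
    # matrix unit e_{ij}
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# candidate separability idempotent p = sum_i e_{i1} (x) e_{1i},
# stored as a list of simple tensors (x, y)
p = [(E(i, 0), E(0, i)) for i in range(n)]

def as_tensor(terms):
    # flatten a sum of simple tensors to a single matrix for comparison
    return sum(np.kron(x, y) for x, y in terms)

def m(terms):
    # the multiplication map A (x) A^op -> A, x (x) y -> xy
    return sum(x @ y for x, y in terms)

# condition: m(p) = 1
assert np.allclose(m(p), np.eye(n))

# condition: ap = pa for all a (checked on a random a)
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))
ap = as_tensor([(a @ x, y) for x, y in p])
pa = as_tensor([(x, y @ a) for x, y in p])
assert np.allclose(ap, pa)

# consequence: p^2 = p, using (x (x) y)(x' (x) y') = xx' (x) y'y in A (x) A^op
p2 = [(x @ x2, y2 @ y) for x, y in p for x2, y2 in p]
assert np.allclose(as_tensor(p2), as_tensor(p))
```

Note the reversed multiplication $y' y$ in the second tensor factor, which is exactly the $A^{op}$ structure.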

(Edit, 3/27/16: Previously this definition included a condition involving Hochschild cohomology, but it’s debatable whether what I had in mind is the correct definition of Hochschild cohomology unless $k$ is a field or $A$ is projective over $k$. It’s been removed since it plays no role in the post anyway.)

When $k$ is a field, this condition turns out to be a natural strengthening of the condition that $A$ is semisimple. In general, loosely speaking, a separable $k$-algebra is like a “bundle of semisimple algebras” over $\text{Spec } k$.