
## Cartesian closed categories and the Lawvere fixed point theorem

Previously we saw that Cantor’s theorem, the halting problem, and Russell’s paradox all employ the same diagonalization argument, which takes the following form. Let $X$ be a set and let

$\displaystyle f : X \times X \to 2$

be a function. Then we can write down a function $g : X \to 2$ such that $g(x) \neq f(x, x)$ for all $x \in X$ (for example, $g(x) = 1 - f(x, x)$). If we curry $f$ to obtain a function

$\displaystyle \text{curry}(f) : X \to 2^X$

it now follows that there cannot exist $x \in X$ such that $\text{curry}(f)(x) = g$, since $\text{curry}(f)(x)(x) = f(x, x) \neq g(x)$.
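The argument is concrete enough to run. Here is a minimal finite sketch in Python (the choice of $X$ and $f$ is mine, purely for illustration): no matter which $f$ we start with, the diagonal function $g$ disagrees with every $\text{curry}(f)(x)$ at the point $x$ itself.

```python
# A finite sketch of the diagonal argument; X and f are arbitrary illustrative choices.
X = [0, 1, 2]

def f(x, y):
    # any fixed function X x X -> 2 works; this one is arbitrary
    return (x * y) % 2

def g(x):
    # the diagonal function: g(x) != f(x, x) by construction
    return 1 - f(x, x)

def curry_f(x):
    # curry(f)(x) is the function y |-> f(x, y)
    return lambda y: f(x, y)

# No x satisfies curry(f)(x) = g: the two functions disagree at x itself.
for x in X:
    assert curry_f(x)(x) != g(x)
```

So $g$ is not in the image of $\text{curry}(f)$, which is exactly the statement that no function $X \to 2^X$ is surjective.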

Currying is a fundamental notion. In mathematics, it is constantly implicitly used to talk about function spaces. In computer science, it is how some programming languages like Haskell describe functions which take multiple arguments: such a function is modeled as taking one argument and returning a function which takes further arguments. In type theory, it reproduces function types. In logic, it reproduces material implication.
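In code, currying and uncurrying are a pair of mutually inverse transformations. A quick sketch (the helper names `curry` and `uncurry` are mine, not a standard library API):

```python
def curry(f):
    """Turn f : X x Y -> Z into a function X -> (Y -> Z)."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """The inverse direction: turn g : X -> (Y -> Z) back into X x Y -> Z."""
    return lambda x, y: g(x)(y)

add = lambda x, y: x + y
add3 = curry(add)(3)  # partially applied: a function which adds 3
```

In Haskell this bijection between `(X, Y) -> Z` and `X -> Y -> Z` is built into the language; here it has to be written out by hand.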

Today we will discuss the appropriate categorical setting for understanding currying, namely that of cartesian closed categories. As an application of the formalism, we will prove the Lawvere fixed point theorem, which generalizes the argument behind Cantor’s theorem to cartesian closed categories.

## Operations, pro-objects, and Grothendieck’s Galois theory

Previously we looked at several examples of $n$-ary operations on concrete categories $(C, U)$. In every example except two, $U$ was a representable functor and $C$ had finite coproducts, which made determining the $n$-ary operations straightforward using the Yoneda lemma. The two examples where $U$ was not representable were commutative Banach algebras and commutative C*-algebras, and it is possible to construct many others. Without representability we can’t apply the Yoneda lemma, so it’s unclear how to determine the operations in these cases.

However, for both commutative Banach algebras and commutative C*-algebras, and in many other cases, there is a sense in which a sequence of objects approximates what the representing object of $U$ “ought” to be, except that it does not quite exist in the category $C$ itself. These objects will turn out to define a pro-object in $C$, and when $U$ is pro-representable in the sense that it’s described by a pro-object, we’ll attempt to describe $n$-ary operations $U^n \to U$ in terms of the pro-representing object.

The machinery developed here is relevant to understanding Grothendieck’s version of Galois theory, which among other things leads to the notion of étale fundamental group; we will briefly discuss this.

## Operations and Lawvere theories

Groups are in particular sets equipped with two operations: a binary operation (the group operation) $(x_1, x_2) \mapsto x_1 x_2$ and a unary operation (inverse) $x_1 \mapsto x_1^{-1}$. Using these two operations, we can build up many other operations, such as the ternary operation $(x_1, x_2, x_3) \mapsto x_1^2 x_2^{-1} x_3 x_1$, and the axioms governing groups become rules for deciding when two expressions describe the same operation (see, for example, this previous post).
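To see the composite operation concretely, here is a sketch in a specific group, $(\mathbb{Z}/7\mathbb{Z})^\times$ under multiplication (an illustrative choice of mine): the ternary operation above is built using nothing but the binary operation and inversion.

```python
# The ternary operation (x1, x2, x3) |-> x1^2 x2^{-1} x3 x1, built from the two
# basic group operations, in the concrete group (Z/7Z)^* (illustrative choice).
P = 7

def mul(a, b):
    return (a * b) % P

def inv(a):
    return pow(a, -1, P)  # modular inverse (Python 3.8+)

def ternary(x1, x2, x3):
    # composed entirely from mul and inv
    return mul(mul(mul(mul(x1, x1), inv(x2)), x3), x1)
```

Any identity among such composites that holds in every group (not just this one) is a consequence of the group axioms; that is the sense in which the axioms are rules for deciding when two expressions describe the same operation.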

When we think of groups as objects of the category $\text{Grp}$, where do these operations go? They’re certainly not morphisms in the corresponding categories: instead, the morphisms are supposed to preserve these operations. But can we recover the operations themselves?

It turns out that the answer is yes. The rest of this post will describe a general categorical definition of $n$-ary operation and meander through some interesting examples. After discussing the general notion of a Lawvere theory, we will prove a reconstruction theorem and make a few additional comments.

## The double commutant theorem

Let $A$ be an abelian group and $T = \{ T_i : A \to A \}$ be a collection of endomorphisms of $A$. The commutant $T'$ of $T$ is the set of all endomorphisms of $A$ commuting with every element of $T$; symbolically,

$\displaystyle T' = \{ S \in \text{End}(A) : S T_i = T_i S \text{ for all } i \}$.

The commutant of $T$ is equal to the commutant of the subring of $\text{End}(A)$ generated by the $T_i$, so we may assume without loss of generality that $T$ is already such a subring. In that case, $T'$ is just the ring of endomorphisms of $A$ as a left $T$-module. Using the term commutant, rather than endomorphism ring, can be thought of as emphasizing the role of $A$ and de-emphasizing the role of $T$.

The assignment $T \mapsto T'$ is a contravariant Galois connection on the lattice of subsets of $\text{End}(A)$, so the double commutant $T \mapsto T''$ may be thought of as a closure operator. Today we will prove a basic but important theorem about this operator.
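In a small enough setting the commutant and double commutant can be computed by brute force. A sketch, for $A = (\mathbb{Z}/2)^2$, so that $\text{End}(A)$ is the ring of $2 \times 2$ matrices over $\mathbb{F}_2$ with only $16$ elements (the choice of $A$ and of the matrix $t$ below is mine, for illustration):

```python
from itertools import product

# Brute-force commutants in End(A) for A = (Z/2)^2, i.e. in the ring of
# 2x2 matrices over F_2 (16 elements).

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

END = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]

def commutant(T):
    return [s for s in END if all(matmul(s, t) == matmul(t, s) for t in T)]

# The commutant of a single nilpotent matrix t = [[0,1],[0,0]]:
t = ((0, 1), (0, 0))
Tp = commutant([t])    # the 4 matrices of the form [[a,b],[0,a]]
Tpp = commutant(Tp)    # the double commutant, a closure of {t}
```

Here the double commutant $T''$ contains $t$ together with the identity and their sums, illustrating the closure-operator behavior: $T \subseteq T''$ and $(T'')'' = T''$.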

## Non-unital rings

(This post was originally intended to go up immediately after the sequence on Gelfand duality.)

A rng (“ring without the i”) or non-unital ring is a semigroup object in $\text{Ab}$. Equivalently, it is an abelian group $A$ together with an associative bilinear map $m : A \otimes A \to A$ (which is not required to have an identity). This is what some authors mean when they say “ring,” but this does not appear to be standard. A morphism between rngs is an abelian group homomorphism which preserves multiplication (and need not preserve a multiplicative identity even if it exists); this defines the category $\text{Rng}$ of rngs (to be distinguished from the category $\text{Ring}$ of rings).

Until recently, I was not comfortable with non-unital rings. If we think of rings either algebraically as endomorphisms of abelian groups or geometrically as rings of functions on spaces, then there does not seem to be any reason to exclude the identity endomorphism, resp. the constant function $1$, on a space. As for morphisms which don’t preserve identities: if $X \to Y$ is any map between spaces of some kind, then the constant function $1 : Y \to F$ ($F$ is, say, a field) pulls back to the constant function $1 : X \to F$, so not preserving identities when they exist seems unnatural.

However, not requiring or preserving identities turns out to be natural in the theory of C*-algebras; in the commutative case, it corresponds roughly to thinking about locally compact Hausdorff spaces rather than just compact Hausdorff spaces. In this post we will discuss rngs generally, including a discussion of the geometric picture of commutative rngs, to get more comfortable with them. It turns out that we can study rngs by formally adjoining multiplicative identities to them. This is an algebraic version of taking the one-point compactification, and it allows us to extend Gelfand duality, in a suitable sense, to locally compact Hausdorff spaces (see this math.SE question for the precise statement, which we will not discuss here).
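The unitalization construction is simple enough to sketch directly: the underlying abelian group is $\mathbb{Z} \oplus R$, with multiplication $(n, a)(m, b) = (nm, nb + ma + ab)$ and new identity $(1, 0)$. A minimal illustration for the rng $R = 2\mathbb{Z}$, which has no identity of its own (the encoding as integer pairs is my choice):

```python
# Formally adjoining a unit to a rng R: elements are pairs (n, a) in Z x R,
# with (n, a)(m, b) = (nm, nb + ma + ab) and identity (1, 0).
# Here R = 2Z, a rng without identity.

def add(x, y):
    (n, a), (m, b) = x, y
    return (n + m, a + b)

def mul(x, y):
    (n, a), (m, b) = x, y
    return (n * m, n * b + m * a + a * b)

one = (1, 0)  # the adjoined multiplicative identity
```

The copy of $R$ sits inside as the pairs $(0, a)$, and multiplying two such pairs reproduces the original multiplication of $R$.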

## Hilbert spaces (and dagger categories)

Hilbert spaces are a particularly nice class of Banach spaces. They axiomatize ideas from Euclidean geometry such as orthogonality, projection, and the Pythagorean theorem, but the ideas apply to many infinite-dimensional spaces of functions of interest to various branches of mathematics. Hilbert spaces are also fundamental to quantum mechanics, as vectors in Hilbert spaces (up to phase) describe (pure) states of quantum systems.
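The Euclidean ideas mentioned above can be seen already in $\mathbb{R}^3$, the simplest Hilbert space beyond the plane. A quick numerical sketch (vectors chosen arbitrarily for illustration) of orthogonal projection and the Pythagorean theorem:

```python
# Orthogonal projection and the Pythagorean theorem in R^3.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(v, u):
    # orthogonal projection of v onto the line spanned by u
    c = dot(v, u) / dot(u, u)
    return tuple(c * a for a in u)

v = (3.0, 4.0, 0.0)
u = (1.0, 0.0, 0.0)
p = proj(v, u)                          # the component of v along u
r = tuple(a - b for a, b in zip(v, p))  # the orthogonal remainder
# Pythagorean theorem: ||v||^2 = ||p||^2 + ||r||^2, since <p, r> = 0
```

In infinite dimensions the same formulas work for projection onto any closed subspace; making sense of "closed" is where the completeness axiom of Hilbert spaces earns its keep.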

Today we’ll develop and discuss some of the basic theory of Hilbert spaces. As with the theory of Banach spaces, there are (at least) two types of morphisms we might want to talk about (unitary operators and bounded operators), and we will discuss an elegant formalism that allows us to talk about both. Things written by John Baez will be cited excessively.

## The Jacobson radical

The Artin-Wedderburn theorem shows that the definition of a semisimple ring is enormously restrictive. Even $\mathbb{Z}$ fails to be semisimple! A less restrictive notion, but one that still captures the idea of a ring which can be understood by how it acts on simple (left) modules, is that of a semiprimitive or Jacobson semisimple ring, one with the property that every nonzero element $r \in R$ acts nontrivially in some simple (left) module $M$.

Said another way, let the Jacobson radical $J(R)$ of a ring $R$ consist of all elements $r \in R$ which act trivially on every simple module. By definition, this is an intersection of kernels of ring homomorphisms, hence a two-sided ideal. A ring $R$ is then semiprimitive if it has trivial Jacobson radical.

The goal of this post will be to discuss some basic properties of the Jacobson radical. I am again working mostly from Lam’s *A First Course in Noncommutative Rings*.
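For a finite ring the radical can be computed by brute force, using the standard characterization that $r \in J(R)$ if and only if $1 - xr$ is left invertible for every $x \in R$ (in a finite ring, left invertible implies invertible). A sketch for the ring of upper triangular $2 \times 2$ matrices over $\mathbb{F}_2$, an illustrative example of my choosing:

```python
from itertools import product

# Brute-force J(R) for R = upper triangular 2x2 matrices over F_2 (8 elements),
# via: r is in J(R) iff 1 - x*r is invertible for all x in R.

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def minus(a, b):  # subtraction entrywise mod 2
    return tuple(tuple((a[i][j] - b[i][j]) % 2 for j in range(2)) for i in range(2))

R = [((a, b), (0, d)) for a, b, d in product(range(2), repeat=3)]
ONE = ((1, 0), (0, 1))

def invertible(u):
    return any(matmul(u, v) == ONE and matmul(v, u) == ONE for v in R)

J = [r for r in R if all(invertible(minus(ONE, matmul(x, r))) for x in R)]
# J turns out to be the strictly upper triangular matrices: 0 and e_12
```

The answer, the strictly upper triangular matrices, matches the general fact that for triangular matrix rings over a field the radical is the "strictly triangular" part.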

## Structures on hom-sets

Suppose I hand you a commutative ring $R$. I stipulate that you are only allowed to work in the language of the category of commutative rings; you can only refer to objects and morphisms. (That means you can’t refer directly to elements of $R$, and you also can’t refer directly to the multiplication or addition maps $R \times R \to R$, since these aren’t morphisms.) Geometrically, I might equivalently say that you are only allowed to work in the language of the category of affine schemes, since the two are dual. Can you recover $R$ as a set, and can you recover the ring operations on $R$?

The answer turns out to be yes. Today we’ll discuss how this works, and along the way we’ll run into some interesting ideas.
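As a hint of where one might start (a standard observation; whether the post proceeds exactly this way is not something I'll spoil here): the forgetful functor from commutative rings to sets is representable by the polynomial ring $\mathbb{Z}[x]$, since a homomorphism out of $\mathbb{Z}[x]$ is freely determined by where it sends $x$:

$\displaystyle \text{Hom}_{\text{CRing}}(\mathbb{Z}[x], R) \cong R.$

This already recovers $R$ as a set using only objects and morphisms; recovering the addition and multiplication maps takes more work.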

## Boolean rings, ultrafilters, and Stone’s representation theorem

Recently, I have begun to appreciate the use of ultrafilters to clean up proofs in certain areas of mathematics. I’d like to talk a little about how this works, but first I’d like to give a hefty amount of motivation for the definition of an ultrafilter.

Terence Tao has already written a great introduction to ultrafilters with an eye towards nonstandard analysis. I’d like to introduce them from a different perspective. Some of the topics below are also covered in these posts by Todd Trimble.

## The adjoint functor theorem for posets

Recently in Measure Theory we needed the following lemma.

Lemma: Let $g : \mathbb{R} \to \mathbb{R}$ be non-constant, right-continuous and non-decreasing, and let $I = (g(-\infty), g(\infty))$. Define $f : I \to \mathbb{R}$ by $f(x) = \inf \{ y \in \mathbb{R} : x \le g(y) \}$. Then $f$ is left-continuous and non-decreasing. Moreover, for $x \in I$ and $y \in \mathbb{R}$,

$\displaystyle f(x) \le y \Leftrightarrow x \le g(y)$.

If you’re categorically minded, this last condition should remind you of the definition of a pair of adjoint functors. In fact it is possible to interpret the above lemma this way; it is a special case of the adjoint functor theorem for posets. Today I’d like to briefly explain this. (And who said category theory isn’t useful in analysis?)
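The adjunction is easy to check numerically for a concrete choice of $g$ (my choice, not one from the lemma's original context): take $g = \lfloor \cdot \rfloor$, which is right-continuous and non-decreasing. The formula $f(x) = \inf \{ y : x \le g(y) \}$ then gives $f = \lceil \cdot \rceil$, and the adjunction becomes the familiar fact $\lceil x \rceil \le y \Leftrightarrow x \le \lfloor y \rfloor$.

```python
import math
import itertools

# Checking the Galois-connection condition f(x) <= y  iff  x <= g(y)
# for g = floor, where f = inf { y : x <= g(y) } works out to ceil.

def f(x):
    return math.ceil(x)

def g(y):
    return math.floor(y)

grid = [k / 4 for k in range(-12, 13)]  # sample points, including non-integers
for x, y in itertools.product(grid, grid):
    assert (f(x) <= y) == (x <= g(y))
```

This floor/ceiling pair is itself a textbook example of a pair of adjoint functors between posets, which is exactly the pattern the lemma generalizes.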

The usual caveats regarding someone who’s never studied category theory talking about it apply. I welcome any corrections.