
## Archive for the ‘category theory’ Category

Previously we described $n$-ary operations on (the underlying sets of the objects of) a concrete category $(C, U)$, which we defined as the natural transformations $U^n \to U$.

Puzzle: What are the $n$-ary operations on finite groups?

Note that $U$ is not representable here. The next post will answer this question, but for those who don’t already know the answer it should make a nice puzzle.

## Operations and Lawvere theories

Groups are in particular sets equipped with two operations: a binary operation (the group operation) $(x_1, x_2) \mapsto x_1 x_2$ and a unary operation (inverse) $x_1 \mapsto x_1^{-1}$. Using these two operations, we can build up many other operations, such as the ternary operation $(x_1, x_2, x_3) \mapsto x_1^2 x_2^{-1} x_3 x_1$, and the axioms governing groups become rules for deciding when two expressions describe the same operation (see, for example, this previous post).
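The word operations above can be computed concretely. Here is a minimal sketch (my own illustration, not from the post) evaluating the ternary operation $(x_1, x_2, x_3) \mapsto x_1^2 x_2^{-1} x_3 x_1$ in the symmetric group $S_3$, with elements represented as permutation tuples; the operation is built entirely out of the binary group operation and the unary inverse, as described above.

```python
# Evaluating the word operation (x1, x2, x3) -> x1^2 x2^{-1} x3 x1
# in a concrete group: S_3, with elements as permutation tuples.

def compose(p, q):
    """Product of permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    """Inverse permutation."""
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def ternary_op(x1, x2, x3):
    """The ternary operation x1^2 x2^{-1} x3 x1, built only from
    the binary operation and the unary inverse."""
    return compose(compose(compose(compose(x1, x1), inverse(x2)), x3), x1)

# Example: with c the 3-cycle 0 -> 1 -> 2 -> 0,
# ternary_op(c, c, c) = c^2 c^{-1} c c = c^3 = identity.
e = (0, 1, 2)
c = (1, 2, 0)
```

Note that $(x_1, x_2, x_2) \mapsto x_1^2 x_2^{-1} x_2 x_1 = x_1^3$ regardless of $x_2$: the group axioms let us simplify expressions, which is exactly the sense in which they are rules for deciding when two expressions describe the same operation.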

When we think of groups as objects of the category $\text{Grp}$, where do these operations go? They are certainly not morphisms in $\text{Grp}$: instead, the morphisms are supposed to preserve these operations. But can we recover the operations themselves?

It turns out that the answer is yes. The rest of this post will describe a general categorical definition of $n$-ary operation and meander through some interesting examples. After discussing the general notion of a Lawvere theory, we will prove a reconstruction theorem and make a few additional comments.

## The type system of mathematics

Occasionally I see mathematical questions that seem “grammatically incorrect” in some sense.

Example. “Is $[0, 1]$ open or closed?”
Example. “Is $\{ 1, 2, 3 \}$ a group?”
Example. “What’s the Fourier series of $\sin x + \sin \pi x$?”

Here are some sillier examples.

Example. “Is a rectangle prime?”
Example. “Is $17 \in 3$?”
Example. “What’s the Fourier series of the empty set?”

What all of these examples have in common is that they are type errors: they are attempts to apply some mathematical process to a kind of mathematical object it was never intended to take as input. If you tried to write a program in some highly mathematical programming language to answer these questions, it (hopefully!) wouldn’t compile.

Mathematical objects are usually not explicitly thought of as having types in the same way that objects in a programming language with a type system have types. Ordinary mathematics is supposed to be formalizable within Zermelo-Fraenkel (ZF) set theory, possibly with the axiom of choice, and in ZF every mathematical object is constructed as a set. In that sense they all have the same type. (In particular, the question “is $17 \in 3$?” is perfectly meaningful in ZF! This is one reason not to like ZF as a foundation of mathematics.) However, I think that in practice mathematical objects are implicitly thought of as having types, and that this is a mental habit mathematicians pick up but don’t often talk about.
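As a toy illustration (my own, assuming the standard von Neumann encoding $0 = \varnothing$, $n + 1 = n \cup \{n\}$), the question “is $17 \in 3$?” really can be computed once everything is a set: the answer is simply no, since $3 = \{0, 1, 2\}$.

```python
# Von Neumann encoding of natural numbers as nested frozensets:
# 0 = {} and n + 1 = n ∪ {n}, so n = {0, 1, ..., n-1}.

def ordinal(n):
    """The von Neumann ordinal n as a frozenset."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}  # n + 1 = n ∪ {n}
    return s

# "Is 17 ∈ 3?" is meaningful in ZF, and false:
# ordinal(3) = {0, 1, 2} does not contain ordinal(17).
```

The point is not that this computation is useful, but that in ZF it is well-posed at all, whereas a type system would reject the question before evaluating it.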

Instead of thinking in terms of set theory, thinking of mathematical objects as having types allows us to import various useful concepts into mathematics, such as the notions of type safety, typecasting, subtyping, and overloading, that help us make more precise what we mean by a mathematical sentence being “grammatically incorrect.” The rest of this post will be a leisurely discussion of these and other type-based concepts as applied to mathematics in general. There are various categorical ideas here, but for the sake of accessibility we will restrict them to parenthetical remarks.

## Connected objects and a reconstruction theorem

A common theme in mathematics is to replace the study of an object with the study of some category that can be built from that object. For example, we can

• replace the study of a group $G$ with the study of its category $G\text{-Rep}$ of linear representations,
• replace the study of a ring $R$ with the study of its category $R\text{-Mod}$ of $R$-modules,
• replace the study of a topological space $X$ with the study of its category $\text{Sh}(X)$ of sheaves,

and so forth. A general question to ask about this setup is whether or to what extent we can recover the original object from the category. For example, if $G$ is a finite group, then as a category, the only data that can be recovered from $G\text{-Rep}$ is the number of conjugacy classes of $G$, which is not much information about $G$. We get considerably more data if we also have the monoidal structure on $G\text{-Rep}$, which gives us the character table of $G$ (but contains a little more data than that, e.g. in the associators), but this is still not a complete invariant of $G$. It turns out that to recover $G$ we need the symmetric monoidal structure on $G\text{-Rep}$; this is a simple form of Tannaka reconstruction.

Today we will prove an even simpler reconstruction theorem.

Theorem: A group $G$ can be recovered from its category $G\text{-Set}$ of $G$-sets.

## Epi-mono factorizations

In many familiar categories, a morphism $f : a \to b$ admits a canonical factorization, which we will write

$a \xrightarrow{e} c \xrightarrow{m} b$,

as the composite of some kind of epimorphism $e$ and some kind of monomorphism $m$. Here we should think of $c$ as something like the image of $f$. This is most familiar, for example, in the case of $\text{Set}, \text{Grp}, \text{Ring}$, and other algebraic categories, where $c$ is the set-theoretic image of $f$ in the usual sense.
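In $\text{Set}$ the factorization can be written down directly. The following sketch (my own illustration; the names `epi_mono_factor`, `e`, `m` are mine) factors a function through its set-theoretic image as a surjection followed by an inclusion.

```python
# Epi-mono factorization in Set: f = (e then m), where e is the
# corestriction of f onto its image (surjective) and m is the
# inclusion of the image into the codomain (injective).

def epi_mono_factor(f, domain):
    """Return (e, image, m) with m(e(a)) == f(a) for all a in domain."""
    image = {f(a) for a in domain}
    e = lambda a: f(a)  # epi: f with codomain shrunk to its image
    m = lambda c: c     # mono: inclusion of the image
    return e, image, m

# Example: squaring mod 5 on {0, ..., 9} has image {0, 1, 4}.
f = lambda x: x * x % 5
e, image, m = epi_mono_factor(f, range(10))
```

Here the object $c$ in the factorization $a \xrightarrow{e} c \xrightarrow{m} b$ is the computed `image`, matching the description above.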

Today we will discuss some general properties of factorizations of a morphism into an epimorphism followed by a monomorphism, or epi-mono factorizations. The failure of such factorizations to be unique turns out to be closely related to the failure of epimorphisms or monomorphisms to be regular.

## Groupoid cardinality

Suitably nice groupoids have a numerical invariant attached to them called groupoid cardinality. Groupoid cardinality is closely related to Euler characteristic and can be thought of as providing a notion of integration on groupoids.

There are various situations in mathematics where computing the size of a set is difficult but where that set has a natural groupoid structure and computing its groupoid cardinality turns out to be easier and give a nicer answer. In such situations the groupoid cardinality is also known as “mass,” e.g. in the Smith-Minkowski-Siegel mass formula for lattices. There are related situations in mathematics where one needs to describe a reasonable probability distribution on some class of objects and groupoid cardinality turns out to give the correct such distribution, e.g. the Cohen-Lenstra heuristics for class groups. We will not discuss these situations, but they should be strong evidence that groupoid cardinality is a natural invariant to consider.
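For a finite (or suitably tame) groupoid, the cardinality is the sum of $\frac{1}{|\text{Aut}(x)|}$ over isomorphism classes $x$. As a quick numerical check (my own illustration; the standard example that the groupoid of finite sets has cardinality $e$, since the automorphism group of an $n$-element set is $S_n$ with $n!$ elements):

```python
import math

def groupoid_cardinality(aut_orders):
    """Sum of 1/|Aut(x)| over isomorphism classes x."""
    return sum(1.0 / a for a in aut_orders)

# Truncated groupoid of finite sets of size at most N: one isomorphism
# class per size n, with automorphism group S_n of order n!.
N = 20
card = groupoid_cardinality(math.factorial(n) for n in range(N + 1))
# card = sum_{n=0}^{N} 1/n!, which converges to e as N grows.
```

The fact that a naive count (“how many finite sets are there?”) diverges while the groupoid count converges to $e$ is a small example of the phenomenon described above: the groupoid cardinality is often the better-behaved quantity.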

## String diagrams, duality, and trace

Previously we introduced string diagrams and saw that they were a convenient way to talk about tensor products, partial compositions of multilinear maps, and symmetries. But string diagrams really prove their use when augmented to talk about duality, which will be described topologically by bending input and output wires. In particular, we will be able to see topologically the sense in which the following four pieces of information are equivalent:

• A linear map $U \to V$,
• A linear map $U \otimes V^{\ast} \to 1$,
• A linear map $V^{\ast} \to U^{\ast}$,
• A linear map $1 \to U^{\ast} \otimes V$.

Using string diagrams we will also give a diagrammatic definition of the trace $\text{tr}(f)$ of an endomorphism $f : V \to V$ of a finite-dimensional vector space, as well as a diagrammatic proof of some of its basic properties.

Below all vector spaces are finite-dimensional and the composition convention from the previous post is still in effect.
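One of the basic properties with a diagrammatic proof is that the trace is cyclic: $\text{tr}(f \circ g) = \text{tr}(g \circ f)$ (topologically, sliding one box around the closed loop). Here is a quick numerical check (my own illustration, with matrices standing in for linear maps on a finite-dimensional space):

```python
# Checking the cyclic property of the trace, tr(fg) = tr(gf),
# with matrices as lists of rows.

def mat_mul(A, B):
    """Matrix product, representing composition of linear maps."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    """Sum of the diagonal entries."""
    return sum(A[i][i] for i in range(len(A)))

f = [[1, 2], [3, 4]]
g = [[0, 1], [1, 0]]
```

Of course, the point of the diagrammatic proof is that it requires no computation at all: the equality is visible in the topology of the diagram.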

## Introduction to string diagrams

Today I would like to introduce a diagrammatic notation for dealing with tensor products and multilinear maps. The basic idea for this notation appears to be due to Penrose. It has the advantage of being both widely applicable and easier and more intuitive to work with; roughly speaking, computations are performed by topological manipulations on diagrams, revealing that the natural notation to use here is 2-dimensional (living in a plane) rather than 1-dimensional (living on a line).

For the sake of accessibility we will restrict our attention to vector spaces. There are category-theoretic things happening in this post but we will not point them out explicitly. We assume familiarity with the notion of tensor product of vector spaces but not much else.

Below the composition of a map $f : a \to b$ with a map $g : b \to c$ will be denoted $f \circ g : a \to c$ (rather than the more typical $g \circ f$). This will make it easier to translate between diagrams and non-diagrams. All diagrams were drawn in Paper.

## Non-unital rings

(This post was originally intended to go up immediately after the sequence on Gelfand duality.)

A rng (“ring without the i”) or non-unital ring is a semigroup object in $\text{Ab}$. Equivalently, it is an abelian group $A$ together with an associative bilinear map $m : A \otimes A \to A$ (which is not required to have an identity). This is what some authors mean when they say “ring,” but this does not appear to be standard. A morphism between rngs is an abelian group homomorphism which preserves multiplication (and need not preserve a multiplicative identity even if it exists); this defines the category $\text{Rng}$ of rngs (to be distinguished from the category $\text{Ring}$ of rings).

Until recently, I was not comfortable with non-unital rings. If we think of rings either algebraically as endomorphisms of abelian groups or geometrically as rings of functions on spaces, then there does not seem to be any reason to exclude the identity endomorphism resp. the identity function on a space. As for morphisms which don’t preserve identities: if $X \to Y$ is any map between spaces of some kind, then the constant function $1$ on $Y$ (the multiplicative identity of the ring of functions $Y \to F$, where $F$ is, say, a field) pulls back to the constant function $1$ on $X$, so not preserving identities when they exist seems unnatural.

However, not requiring or preserving identities turns out to be natural in the theory of C*-algebras; in the commutative case, it corresponds roughly to thinking about locally compact Hausdorff spaces rather than just compact Hausdorff spaces. In this post we will discuss rngs generally, including a discussion of the geometric picture of commutative rngs, to get more comfortable with them. It turns out that we can study rngs by formally adjoining multiplicative identities to them. This is an algebraic version of taking the one-point compactification, and it allows us to extend Gelfand duality, in a suitable sense, to locally compact Hausdorff spaces (see this math.SE question for the precise statement, which we will not discuss here).
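The unitalization described above can be written down explicitly: given a rng $A$, form $A^{+} = A \oplus \mathbb{Z}$ with multiplication $(a, m)(b, n) = (ab + mb + na, mn)$ and identity $(0, 1)$. Here is a minimal sketch (my own; the class name and the choice of $A = 2\mathbb{Z}$, the even integers, which form a rng with no multiplicative identity, are illustrative assumptions):

```python
# Unitalization A+ = A ⊕ Z of a rng A, here with A = 2Z (even integers).

class Unitalization:
    """Element (a, m) of A+ = A ⊕ Z, with a an even integer."""
    def __init__(self, a, m):
        self.a, self.m = a, m

    def __mul__(self, other):
        # (a, m)(b, n) = (ab + mb + na, mn); the identity is (0, 1).
        return Unitalization(
            self.a * other.a + self.m * other.a + other.m * self.a,
            self.m * other.m,
        )

    def __eq__(self, other):
        return (self.a, self.m) == (other.a, other.m)

one = Unitalization(0, 1)  # the formally adjoined identity
x = Unitalization(2, 0)    # the element 2 of 2Z, embedded in A+
```

The embedded copy of $A$ sits inside $A^{+}$ as the elements $(a, 0)$, much as a locally compact Hausdorff space sits inside its one-point compactification.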