
## The connected components functor

I skimmed through books 1, 4, and 5 of my new batch and am currently skimming through 3; it seems I don’t have the mathematical prerequisites to get much out of 2. It will take me a long time to digest all of the interesting things I’ve learned, but I thought I’d discuss an interesting idea coming from Lawvere and Schanuel.

An important idea in mathematics is to reduce an object into its “connected components.” This has various meanings depending on context; it is perhaps clearest in the categories $\text{Top}$ and $\text{Graph}$, and also has a sensible meaning in, for example, $G\text{-Set}$ for a group $G$. Lawvere and Schanuel suggest the following way to understand several of the examples that occur in practice:

Let $C$ be a concrete category with a forgetful functor $F : C \to \text{Set}$. If it exists, let $T : \text{Set} \to C$ be the left adjoint to $F$. Then $T$ describes the “discrete” (i.e. “totally disconnected”) objects of $C$, and, if it exists, the left adjoint to $T$ is a functor $\pi_0 : C \to \text{Set}$ describing the “connected components” of an object in $C$.

I think this is a nice illustration of a construction that is illuminated by the abstract approach, so I’ll briefly describe how this works for a few of my favorite categories.
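
For example, in $\text{Graph}$ the forgetful functor sends a graph to its vertex set, its left adjoint $T$ sends a set to the discrete graph on those vertices (no edges), and the left adjoint to $T$ sends a graph to its set of connected components. A minimal Python sketch of this $\pi_0$ (the function names are my own) using union-find:

```python
def pi0(vertices, edges):
    """Compute pi_0 of a finite graph: the quotient of the vertex set
    by the equivalence relation generated by the edges, via union-find."""
    parent = {v: v for v in vertices}

    def find(v):
        # Walk to the root representative, compressing the path as we go.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, w in edges:
        parent[find(u)] = find(w)  # an edge merges two components

    # Group vertices by their root representative.
    components = {}
    for v in vertices:
        components.setdefault(find(v), set()).add(v)
    return list(components.values())
```

So `pi0({1, 2, 3, 4}, [(1, 2)])` yields the three components $\{1, 2\}$, $\{3\}$, $\{4\}$: adding edges can only merge components, which is the sense in which the discrete graphs are the "totally disconnected" objects.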

Apropos of nothing, I now have a new favorite mathematical term:

“Fake baby monster Lie algebra.”

And I thought “complex simple Lie algebra” was funny.

## Localization and the strong Nullstellensatz

A basic idea in topology and analysis is to study a space by restricting attention to arbitrarily small neighborhoods of a point. It is desirable, therefore, to have a notion of looking at small neighborhoods of a point which can be stated in entirely ring-theoretic terms. More generally, we’d like to have a way to ignore some points and only think about others. The tool that allows us to do this is called localization. Among other things, it offers a conceptual proof of the strong Nullstellensatz from the weak Nullstellensatz; the strong Nullstellensatz, as you’ll recall, is the tool that allows us to describe the category of affine varieties as the opposite of a category of algebras.

## MaxSpec is not a functor

For commutative unital C*-algebras and for finitely-generated reduced integral $\mathbb{C}$-algebras, we have seen that $\text{MaxSpec}$ is a functor which sends homomorphisms to continuous functions. However, this is not true for general commutative rings. What we want is for a ring homomorphism $\phi : R \to S$ to be sent to a continuous function

$M(\phi) : \text{MaxSpec } S \to \text{MaxSpec } R$

via contraction. Unfortunately, the contraction of a maximal ideal is not always a maximal ideal. The issue here is that a maximal ideal of $S$ is just a surjective homomorphism $S \to F$ where $F$ is some field, and the contracted ideal is just the kernel of the homomorphism $R \xrightarrow{\phi} S \to F$. However, this homomorphism need no longer be surjective, so it may land in a subring of $F$ which may not be a field. For a specific example, consider the inclusion $\mathbb{Z} \to \mathbb{Q}$. The ideal $(0)$ is maximal in $\mathbb{Q}$, but its contraction is the ideal $(0)$ in $\mathbb{Z}$, which is prime but not maximal.

In other words, if we want to think of ring homomorphisms as continuous functions on spectra, then we cannot work with maximal ideals alone. Prime ideals are more promising: a prime ideal is just a surjective homomorphism $S \to D$ where $D$ is some integral domain, and the contracted ideal of a prime ideal is always prime because a subring of an integral domain is still an integral domain. Now, therefore, is an appropriate time to replace $\text{MaxSpec}$ with $\text{Spec}$, the space of all prime ideals equipped with the Zariski topology, and this time $\text{Spec}$ is a legitimate contravariant functor $\text{CommRing} \to \text{Top}$.
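
As a toy sanity check (the encoding of ideals of $\mathbb{Z}$ and $\mathbb{Z}/n$ by non-negative generators, and all function names, are my own), here is a Python sketch of contraction along the quotient map $\mathbb{Z} \to \mathbb{Z}/n$, together with the primality and maximality tests that exhibit $(0) \subset \mathbb{Z}$ as prime but not maximal:

```python
from math import gcd

def is_prime_in_Z(n):
    """(n) is a prime ideal of Z iff n = 0 or |n| is a prime number,
    since Z/(n) is an integral domain exactly in those cases."""
    n = abs(n)
    return n == 0 or (n > 1 and all(n % k for k in range(2, int(n**0.5) + 1)))

def is_maximal_in_Z(n):
    """(n) is maximal iff Z/(n) is a field, i.e. |n| is prime.
    Note Z/(0) = Z is a domain but not a field: (0) is prime, not maximal."""
    return abs(n) > 1 and is_prime_in_Z(n)

def contract_along_quotient(a, n):
    """Contract the ideal (a) of Z/n along the quotient map Z -> Z/n.
    The preimage of the ideal generated by a mod n is generated by gcd(a, n)."""
    return gcd(a, n)
```

Here the contraction of the maximal ideal $(3)$ of $\mathbb{Z}/6$ is $(3)$, still maximal in $\mathbb{Z}$; by contrast, along the inclusion $\mathbb{Z} \to \mathbb{Q}$ the maximal ideal $(0)$ of $\mathbb{Q}$ contracts to $(0)$, which `is_prime_in_Z` accepts but `is_maximal_in_Z` rejects.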

In this post we’ll discuss this choice. I should mention that the Secret Blogging Seminar has discussed this point very thoroughly already, but from a much more high-brow perspective.

## Affine varieties and regular maps

I have to admit I’ve been using somewhat unconventional definitions. The usual definition of an affine variety is as an irreducible Zariski-closed subset of $\text{MaxSpec } k[x_1, \dots, x_n] \simeq \mathbb{A}^n(k)$, affine $n$-space over an algebraically closed field $k$. A generic Zariski-closed subset is usually referred to instead as an algebraic set (although some authors also call these varieties), and the terminology does not apply to non-algebraically closed fields. The additional difficulty that arises in the non-algebraically-closed case is that it’s harder to think about points. For example, $\text{MaxSpec } \mathbb{R}[x]$ has two types of points corresponding to the two types of irreducible polynomials: the usual points $(x - a), a \in \mathbb{R}$, on the real line, and additional points $(x^2 - 2ax + (a^2 + b^2)), a, b \in \mathbb{R}, b \neq 0$, coming from the irreducible quadratics. These points can be thought of as orbits of the action of $\text{Gal}(\mathbb{C}/\mathbb{R})$ on $\mathbb{C}$, hence $\text{MaxSpec } \mathbb{R}[x]$ can be thought of as the quotient of $\text{MaxSpec } \mathbb{C}[x]$ by this group action. This picture generalizes.
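
To see the Galois-orbit picture concretely, here is a small Python sketch (the helper names are my own) that recovers the real quadratic $x^2 - 2ax + (a^2 + b^2)$ attached to the orbit $\{a + bi, a - bi\}$ and checks that both members of the orbit are roots:

```python
def real_quadratic_from_orbit(z):
    """Given a non-real complex number z = a + bi, return the coefficients
    (1, -2a, a^2 + b^2) of the monic real quadratic whose roots are the
    Gal(C/R)-orbit {z, conj(z)} -- a single point of MaxSpec R[x]."""
    a, b = z.real, z.imag
    assert b != 0, "a real root gives a linear factor (x - a) instead"
    return (1.0, -2 * a, a * a + b * b)

def evaluate(coeffs, z):
    """Evaluate a polynomial (coefficients listed from highest degree down)
    at a complex number, by Horner's rule."""
    result = 0
    for c in coeffs:
        result = result * z + c
    return result
```

For instance, the orbit of $2 + 3i$ gives $x^2 - 4x + 13$, one maximal ideal of $\mathbb{R}[x]$ sitting under the two maximal ideals $(x - 2 - 3i)$ and $(x - 2 + 3i)$ of $\mathbb{C}[x]$.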

Anyway, for convenience let’s stick to $k = \mathbb{C}$. In this case, and more generally in the algebraically closed case, there is a reasonably simple description of what the category of affine varieties looks like, but first we have to describe what the morphisms look like and then we have to take the strong Nullstellensatz on faith, since we haven’t proven it yet.

## Functoriality

I wanted to talk about the geometric interpretation of localization, but before I do so I should talk more generally about the relationship between ring homomorphisms on the one hand and continuous functions between spectra on the other. This relationship is of utmost importance, for example if we want to have any notion of when two varieties are isomorphic, and so it’s worth describing carefully.

The geometric picture is perhaps clearest in the case where $X$ is a compact Hausdorff space and $C(X) = \text{Hom}_{\text{Top}}(X, \mathbb{R})$ is its ring of functions. From this definition it follows that $C$ is a contravariant functor from the category $\text{CHaus}$ of compact Hausdorff spaces to the category $\mathbb{R}\text{-Alg}$ of $\mathbb{R}$-algebras (which we are assuming have identities). Explicitly, a continuous function

$f : X \to Y$

between compact Hausdorff spaces is sent to an $\mathbb{R}$-algebra homomorphism

$C(f) : C(Y) \to C(X)$

in the obvious way: a continuous function $Y \to \mathbb{R}$ is sent to a continuous function $X \xrightarrow{f} Y \to \mathbb{R}$. The contravariance may look weird if you’re not used to it, but it’s perfectly natural in the case that $f$ is an embedding because then one may identify $C(X)$ with the restriction of $C(Y)$ to the image of $f$. This restriction takes the form of a homomorphism $C(Y) \to C(X)$ whose kernel is the set of functions which are zero on $f(X)$, so it exhibits $C(X)$ as a quotient of $C(Y)$.
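
Encoding finite discrete spaces (which are in particular compact Hausdorff) as Python dicts — a toy model, with names of my own choosing — precomposition and the contravariant composition law $C(g \circ f) = C(f) \circ C(g)$ look like this:

```python
def compose(h, f):
    """Composite of maps of finite sets encoded as dicts: (h . f)(x) = h(f(x))."""
    return {x: h[y] for x, y in f.items()}

def C_of(f):
    """On morphisms, C sends f : X -> Y to precomposition C(f) : C(Y) -> C(X):
    a function g on Y (a dict with real values) goes to g . f, a function on X."""
    def precompose(g):
        return {x: g[y] for x, y in f.items()}
    return precompose
```

Note that `C_of(f)` consumes functions on the *codomain* of `f` and produces functions on its *domain*; that reversal is exactly the contravariance.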

Question: Does every $\mathbb{R}$-algebra homomorphism $C(Y) \to C(X)$ come from a continuous function $X \to Y$?

## Textbooks

I recently added two new pages to the blog: a bibliography for listing references I cite on multiple occasions, and a suggestions and requests page. The bibliography is likely to soon contain citations for at least some of the following books which have recently come into my possession:

1. Introduction to the Theory of Computation, Sipser
2. Lectures on Quantum Mechanics, Faddeev, Yakubovskii
3. Representation Theory: a First Course, Fulton, Harris
4. Conceptual Mathematics, Lawvere, Schanuel
5. Concrete Mathematics: a Foundation for Computer Science, Graham, Knuth, Patashnik

I haven’t looked at 2 or 4 very closely yet, but so far I find 1, 3, and 5 to be among the best-written textbooks I have ever read. Sipser’s book, in particular, strikes me as having found a perfect balance between brevity and clarity. His tone is conversational but finely polished, and I rather like his habit of summarizing the basic strategy of a proof before actually writing it down. Generally I am finding the book an absolute pleasure to read, which I can’t say for most of the math textbooks I’ve seen. You will likely see me blogging a little about languages and automata once I finish up my current series (right now I’m stuck on what should be a trivial proof).

Why don’t more mathematicians write like Sipser?