## Topological Diophantine equations

The problem of finding solutions to Diophantine equations can be recast in the following abstract form. Let $R$ be a commutative ring, which in the most classical case might be a number field like $\mathbb{Q}$ or the ring of integers in a number field like $\mathbb{Z}$. Suppose we want to find solutions, over $R$, of a system of polynomial equations

$\displaystyle f_1 = \dots = f_m = 0, \quad f_i \in R[x_1, \dots, x_n]$.

Then it’s not hard to see that this problem is equivalent to the problem of finding $R$-algebra homomorphisms from $S = R[x_1, \dots, x_n]/(f_1, \dots, f_m)$ to $R$. This is equivalent to the problem of finding left inverses to the morphism

$\displaystyle R \to S$

of commutative rings making $S$ an $R$-algebra, or more geometrically equivalent to the problem of finding right inverses, or sections, of the corresponding map

$\displaystyle \text{Spec } S \to \text{Spec } R$

of affine schemes. Allowing $\text{Spec } S$ to be a more general scheme over $\text{Spec } R$ can also capture more general Diophantine problems.
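To make this dictionary concrete, here is a minimal sketch in Python (the helper name `is_rational_point` is ours, purely for illustration): for the circle $x^2 + y^2 = 1$ over $\mathbb{Q}$, a $\mathbb{Q}$-algebra homomorphism $S \to \mathbb{Q}$ is exactly a choice of rational values for $x$ and $y$ killing the defining polynomial.

```python
from fractions import Fraction

# The circle over Q: S = Q[x, y] / (x^2 + y^2 - 1).
# A Q-algebra homomorphism S -> Q is determined by where it sends
# x and y, and it is well defined iff those images kill the generator
# of the ideal -- i.e. iff (a, b) is a rational point on the circle.
def is_rational_point(a, b):
    return a * a + b * b - 1 == 0

# (3/5, 4/5) determines a genuine homomorphism S -> Q ...
assert is_rational_point(Fraction(3, 5), Fraction(4, 5))
# ... while (1/2, 1/2) does not, since it fails to kill x^2 + y^2 - 1.
assert not is_rational_point(Fraction(1, 2), Fraction(1, 2))
```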

The problem of finding sections of a morphism – call it the section problem – is a problem that can be stated in any category, and the goal of this post is to say some things about the corresponding problem for spaces. That is, rather than try to find sections of a map between affine schemes, we’ll try to find sections of a map $f : E \to B$ between spaces; this amounts, very roughly speaking, to solving a “topological Diophantine equation.” The notation here is meant to evoke a particularly interesting special case, namely that of fiber bundles.

We’ll try to justify the section problem for spaces both as an interesting problem in and of itself, capable of encoding many other nontrivial problems in topology, and as a possible source of intuition about Diophantine equations. In particular we’ll discuss what might qualify as topological analogues of the Hasse principle and the Brauer-Manin obstruction.

## The Picard groups

Let $R$ be a commutative ring. From $R$ we can construct the category $R\text{-Mod}$ of $R$-modules, which becomes a symmetric monoidal category when equipped with the tensor product of $R$-modules. Now, whenever we have a monoidal operation (for example, the multiplication on a ring), it’s interesting to look at the invertible things with respect to that operation (for example, the group of units of a ring). This suggests the following definition.

Definition: The Picard group $\text{Pic}(R)$ of $R$ is the group of isomorphism classes of $R$-modules which are invertible with respect to the tensor product.

By invertible we mean the following: for $L \in \text{Pic}(R)$ there exists some $L^{-1}$ such that the tensor product $L \otimes_R L^{-1}$ is isomorphic to the identity for the tensor product, namely $R$.

In this post we’ll meander through some facts about this Picard group as well as several variants, all of which capture various notions of line bundle on various kinds of spaces (where the above definition captures the notion of a line bundle on the affine scheme $\text{Spec } R$).
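As a first taste, here is a standard example, stated as a sketch: over a Dedekind domain the Picard group recovers the classical ideal class group.

```latex
% For a PID such as \mathbb{Z}, every invertible module is free of
% rank one, so the Picard group is trivial:
\[
\text{Pic}(\mathbb{Z}) \cong 0.
\]
% For the Dedekind domain R = \mathbb{Z}[\sqrt{-5}], the ideal
% L = (2, 1 + \sqrt{-5}) is invertible but not principal, and in fact
\[
\text{Pic}(\mathbb{Z}[\sqrt{-5}]) \cong \mathbb{Z}/2,
\]
% generated by the class of L, since L^2 = (2) \cong R as R-modules.
```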

## Five proofs that the Euler characteristic of a closed orientable surface is even

Let $\Sigma_g$ be a closed orientable surface of genus $g$. (Below we will occasionally write $\Sigma$, omitting the genus.) Then its Euler characteristic $\chi(\Sigma_g) = 2 - 2g$ is even. In this post we will give five proofs of this parity claim, roughly in increasing order of sophistication, none of which relies on directly computing the Euler characteristic to be $2 - 2g$. Along the way we’ll end up encountering or proving more general results that have other interesting applications.
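For reference, here is the direct computation the five proofs will avoid, sketched in Python from the standard CW structure on $\Sigma_g$: the $4g$-gon with its edges identified in pairs has one $0$-cell, $2g$ $1$-cells, and one $2$-cell.

```python
# Euler characteristic of the closed orientable genus-g surface,
# computed from its standard CW structure: the 4g-gon with edges
# identified in pairs has 1 vertex, 2g edges, and 1 face.
def euler_characteristic(g):
    vertices, edges, faces = 1, 2 * g, 1
    return vertices - edges + faces  # = 2 - 2g, which is even

assert [euler_characteristic(g) for g in range(4)] == [2, 0, -2, -4]
assert all(euler_characteristic(g) % 2 == 0 for g in range(10))
```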

## Hypersurfaces, 4-manifolds, and characteristic classes

In this post we’ll compute the (topological) cohomology of smooth projective (complex) hypersurfaces in $\mathbb{CP}^n$. When $n = 3$ the resulting complex surfaces give nice examples of 4-manifolds, and we’ll make use of various facts about 4-manifold topology to try to say more in this case; in particular we’ll be able to compute, in a fairly indirect way, the ring structure on cohomology. This answers a question raised by Akhil Mathew in this blog post.

Our route towards this result will turn out to pass through all of the most common types of characteristic classes: we’ll invoke, in order, Euler classes, Chern classes, Pontryagin classes, Wu classes, and Stiefel-Whitney classes.

## The free cocompletion I

Let $C$ be a (locally small) category. Recall that any such category naturally admits a Yoneda embedding

$\displaystyle Y : C \ni c \mapsto \text{Hom}(-, c) \in \widehat{C}$

into its presheaf category $\widehat{C} = [C^{op}, \text{Set}]$ (where we use $[C, D]$ to denote the category of functors $C \to D$). The Yoneda lemma asserts in particular that $Y$ is full and faithful, which justifies calling it an embedding.

When $C$ is in addition assumed to be small, the Yoneda embedding has the following elegant universal property.

Theorem: The Yoneda embedding $Y : C \to \widehat{C}$ exhibits $\widehat{C}$ as the free cocompletion of $C$ in the sense that for any cocomplete category $D$, the restriction functor

$\displaystyle Y^{\ast} : [\widehat{C}, D]_{\text{cocont}} \to [C, D]$

from the category of cocontinuous functors $\widehat{C} \to D$ to the category of functors $C \to D$ is an equivalence. In particular, any functor $C \to D$ extends (uniquely, up to natural isomorphism) to a cocontinuous functor $\widehat{C} \to D$, and all cocontinuous functors $\widehat{C} \to D$ arise this way (up to natural isomorphism).

Colimits should be thought of as a general notion of gluing, so the above should be understood as the claim that $\widehat{C}$ is the category obtained by “freely gluing together” the objects of $C$ in a way dictated by the morphisms. This intuition is important when trying to understand the definition of, among other things, a simplicial set. A simplicial set is by definition a presheaf on a certain category, the simplex category, and the universal property above says that this means simplicial sets are obtained by “freely gluing together” simplices.
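The “freely gluing” intuition can be made precise by a standard lemma (sometimes called the co-Yoneda or density lemma), stated here as a sketch: every presheaf is canonically a colimit of representables, which forces the formula for the cocontinuous extension.

```latex
% Density (co-Yoneda) lemma: for any presheaf F \in \widehat{C},
\[
F \cong \underset{(c, x) \in \int F}{\text{colim}} \; Y(c),
\]
% where \int F denotes the category of elements of F, whose objects
% are pairs (c \in C, x \in F(c)). Any cocontinuous extension
% \widetilde{G} of a functor G : C \to D must therefore satisfy
\[
\widetilde{G}(F) \cong \underset{(c, x) \in \int F}{\text{colim}} \; G(c),
\]
% which is the pointwise formula for the left Kan extension of G along Y.
```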

In this post we’ll content ourselves with meandering towards a proof of the above result. In a subsequent post we’ll give a sampling of applications.

## A transcript of my qualifying exam

I passed my qualifying exam last Friday. Here is a copy of the syllabus and a transcript.

Although I’m sure there are more, I’m only aware of two other students at Berkeley who’ve posted transcripts of their quals, namely Christopher Wong and Eric Peterson. It would be nice if more people did this.

## How to invent intuitionistic logic

(This is an old post I never got around to finishing. It was originally going to have a second half about pointless topology; the interested reader can consult Vickers’ Topology via Logic on this theme.)

Standard presentations of propositional logic treat the Boolean operators “and,” “or,” and “not” as fundamental (e.g. these are the operators axiomatized by Boolean algebras). But from the point of view of category theory, arguably the most fundamental Boolean operator is “implies,” because it gives a collection of propositions the structure of a category, or more precisely a poset: we regard the set of propositions as a category with a morphism $p \to q$ whenever $p \Rightarrow q$, and no morphisms otherwise. Then the identity morphisms $\text{id}_p : p \to p$ simply reflect the fact that a proposition always implies itself, while composition of morphisms

$\displaystyle (p \Rightarrow q) \wedge (q \Rightarrow r) \to (p \Rightarrow r)$

is a familiar inference rule (hypothetical syllogism). Since it is possible to define “and,” “or,” and “not” in terms of “implies” in the Boolean setting, we might want to see what happens when we start from the perspective that propositional logic ought to be about certain posets and figure out how to recover the familiar operations from propositional logic by thinking about what their universal properties should be.
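This poset structure is easy to see in a toy model (our own illustration, not from the post): interpret propositions as subsets of a finite set of “worlds,” with $p \Rightarrow q$ meaning that every world satisfying $p$ also satisfies $q$.

```python
from itertools import chain, combinations

# Propositions as subsets of a finite set of "worlds"; p implies q
# exactly when p is contained in q. This ordering makes the set of
# propositions a poset, hence a category.
worlds = frozenset(range(3))
props = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(worlds), r) for r in range(len(worlds) + 1))]

def implies(p, q):
    return p <= q

# Identity morphisms: every proposition implies itself.
assert all(implies(p, p) for p in props)
# Composition of morphisms: hypothetical syllogism.
assert all(implies(p, r)
           for p in props for q in props for r in props
           if implies(p, q) and implies(q, r))
```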

It turns out that when we do this, we don’t get ordinary propositional logic back in the sense that the posets we end up identifying are not just the Boolean algebras: instead we’ll get Heyting algebras, and the corresponding notion of logic we’ll get is intuitionistic logic.
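A small example of the difference, sketched in code (the three-point space and helper names are our own): the open sets of a topological space form a Heyting algebra, with implication $p \to q$ given by the largest open contained in $p^c \cup q$, and there the law of excluded middle can fail.

```python
# Opens of a three-point space: a finite Heyting algebra in which
# excluded middle fails. Meet is intersection, join is union, and
# Heyting implication p -> q is the largest open inside (X - p) | q.
X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]

def heyting_implies(p, q):
    inside = [u for u in opens if u <= (X - p) | q]
    return max(inside, key=len)  # the union of all such opens

def heyting_not(p):
    return heyting_implies(p, frozenset())

p = frozenset({0})
assert heyting_not(p) == frozenset()   # the only open avoiding p is empty
assert heyting_not(heyting_not(p)) == X  # double negation strictly grows p
assert p | heyting_not(p) != X         # "p or not p" is not the top element
```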