The Artin-Wedderburn theorem shows that the definition of a semisimple ring is enormously restrictive. Even $\mathbb{Z}$ fails to be semisimple! A less restrictive notion, but one that still captures the idea of a ring which can be understood by how it acts on simple (left) modules, is that of a semiprimitive or Jacobson semisimple ring, one with the property that every nonzero element $r \in R$ acts nontrivially on some simple (left) module $M$.

Said another way, let the Jacobson radical $J(R)$ of a ring consist of all elements $r \in R$ which act trivially on every simple module. By definition, this is an intersection of kernels of ring homomorphisms, hence a two-sided ideal. A ring $R$ is then semiprimitive if it has trivial Jacobson radical.
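As a brute-force sketch (my own toy computation, not from the post), one can compute $J(\mathbb{Z}/n)$ two ways and see that they agree: via the standard characterization that $r \in J(R)$ iff $1 - sr$ is a unit for every $s$, and via the definition, using the fact that the simple $\mathbb{Z}/n$-modules are the $\mathbb{Z}/p$ for primes $p$ dividing $n$.

```python
from math import gcd

def jacobson_radical_zn(n):
    """J(Z/n) by brute force, via the characterization that r lies in
    J(R) iff 1 - s*r is a unit for every s."""
    def is_unit(a):
        return gcd(a % n, n) == 1
    return {r for r in range(n)
            if all(is_unit(1 - s * r) for s in range(n))}

def radical_via_simple_modules(n):
    """The simple Z/n-modules are Z/p for primes p dividing n, and r acts
    trivially on all of them iff every such p divides r."""
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))]
    return {r for r in range(n) if all(r % p == 0 for p in primes)}

# J(Z/12) = (6)/(12): the multiples of 6
assert jacobson_radical_zn(12) == radical_via_simple_modules(12) == {0, 6}
```

Note that $J(\mathbb{Z}/12)$ is nontrivial even though $J(\mathbb{Z}) = 0$, which is one reason the radical is worth tracking.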

The goal of this post will be to discuss some basic properties of the Jacobson radical. I am again working mostly from Lam's *A First Course in Noncommutative Rings*.

## Structures on hom-sets

Suppose I hand you a commutative ring $R$. I stipulate that you are only allowed to work in the language of the category of commutative rings; you can only refer to objects and morphisms. (That means you can’t refer directly to elements of $R$, and you also can’t refer directly to the multiplication or addition maps $R \times R \to R$, since these aren’t morphisms.) Geometrically, I might equivalently say that you are only allowed to work in the language of the category of affine schemes, since the two are dual. Can you recover $R$ as a set, and can you recover the ring operations on $R$?

The answer turns out to be yes. Today we’ll discuss how this works, and along the way we’ll run into some interesting ideas.
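A toy illustration of the first step (my own sketch, using only the fact that $\mathbb{Z}[x]$ is the free commutative ring on one generator): the underlying set of $R$ is recovered as $\text{Hom}(\mathbb{Z}[x], R)$, since a morphism out of $\mathbb{Z}[x]$ is evaluation at the image of $x$. Here $R = \mathbb{Z}/12$:

```python
# A morphism Z[x] -> Z/n is determined by the image a of x; applying it
# to a polynomial is evaluation at a.
n = 12

def evaluate(coeffs, a):
    """Apply the hom Z[x] -> Z/n sending x to a, where a polynomial is
    given by its integer coefficients, constant term first."""
    return sum(c * a**i for i, c in enumerate(coeffs)) % n

def poly_mul(p, q):
    """Multiply two polynomials over Z (coefficient lists)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            out[i + j] += ci * cj
    return out

# exactly one morphism per element of Z/n: x |-> a picks out a
assert sorted(evaluate([0, 1], a) for a in range(n)) == list(range(n))

# evaluation is multiplicative, as a ring homomorphism must be
p, q = [1, 2, 3], [4, 5]
for a in range(n):
    assert evaluate(poly_mul(p, q), a) == (evaluate(p, a) * evaluate(q, a)) % n
```

Recovering the ring operations themselves takes more work (one precomposes with morphisms $\mathbb{Z}[x] \to \mathbb{Z}[x, y]$), which is what the post goes on to discuss.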

## Boolean rings, ultrafilters, and Stone’s representation theorem

Recently, I have begun to appreciate the use of ultrafilters to clean up proofs in certain areas of mathematics. I’d like to talk a little about how this works, but first I’d like to give a hefty amount of motivation for the definition of an ultrafilter.

Terence Tao has already written a great introduction to ultrafilters with an eye towards nonstandard analysis. I’d like to introduce them from a different perspective. Some of the topics below are also covered in these posts by Todd Trimble.

## The adjoint functor theorem for posets

Recently in Measure Theory we needed the following lemma.

Lemma: Let $g : \mathbb{R} \to \mathbb{R}$ be non-constant, right-continuous, and non-decreasing, and let $I = (g(-\infty), g(\infty))$. Define $f : I \to \mathbb{R}$ by $f(x) = \inf \{ y \in \mathbb{R} : x \le g(y) \}$. Then $f$ is left-continuous and non-decreasing. Moreover, for $x \in I$ and $y \in \mathbb{R}$,

$f(x) \le y \Leftrightarrow x \le g(y)$.

If you’re categorically minded, this last condition should remind you of the definition of a pair of adjoint functors. In fact it is possible to interpret the above lemma this way; it is a special case of the adjoint functor theorem for posets. Today I’d like to briefly explain this. (And who said category theory isn’t useful in analysis?)
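The adjunction $f(x) \le y \Leftrightarrow x \le g(y)$ can be spot-checked numerically. In this sketch (my own, with the illustrative choice $g(y) = y^3$, so $f$ should come out as the cube root) the infimum is approximated by bisection:

```python
def generalized_inverse(g, x, lo=-100.0, hi=100.0, iters=60):
    """f(x) = inf { y : x <= g(y) } for non-decreasing g, approximated
    by bisection on [lo, hi]; hi always satisfies x <= g(hi), so it
    converges down to the infimum."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if x <= g(mid):
            hi = mid
        else:
            lo = mid
    return hi

g = lambda y: y**3
f = lambda x: generalized_inverse(g, x)

# the adjunction f(x) <= y  <=>  x <= g(y), checked on a small grid
# (a tiny tolerance absorbs bisection round-off at the boundary case)
for x in [0.5, 2.0, 8.0]:
    for y in [0.5, 1.0, 2.0, 3.0]:
        assert (f(x) <= y + 1e-9) == (x <= g(y))
```

The same bisection works unchanged for a $g$ with jumps or flat stretches, which is where the infimum formula earns its keep.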

The usual caveats regarding someone who's never formally studied category theory talking about it apply. I welcome any corrections.

## The connected components functor

I skimmed through books 1, 4, and 5 of my new batch and am currently skimming through 3; it seems I don’t have the mathematical prerequisites to get much out of 2. It will take me a long time to digest all of the interesting things I’ve learned, but I thought I’d discuss an interesting idea coming from Lawvere and Schanuel.

An important idea in mathematics is to reduce an object into its “connected components.” This has various meanings depending on context; it is perhaps clearest in the categories $\text{Top}$ and $\text{Graph}$, and also has a sensible meaning in, for example, $G\text{-Set}$ for a group $G$. Lawvere and Schanuel suggest the following way to understand several of the examples that occur in practice:

Let $C$ be a concrete category with a forgetful functor $F : C \to \text{Set}$. If it exists, let $T : \text{Set} \to C$ be the left adjoint to $F$. Then $T$ describes the “discrete” (i.e. “totally disconnected”) objects of $C$, and, if it exists, the left adjoint to $T$ is a functor $\pi_0 : C \to \text{Set}$ describing the “connected components” of an object in $C$.

I think this is a nice illustration of a construction that is illuminated by the abstract approach, so I’ll briefly describe how this works for a few of my favorite categories.
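In $\text{Graph}$, for example, the adjunction says that a graph morphism into a discrete graph on a set $S$ is the same as a function $\pi_0(G) \to S$, i.e. a function on vertices that is constant on connected components. The components themselves are computable by union-find; here is a minimal sketch (my own example graph):

```python
def connected_components(vertices, edges):
    """pi_0 for graphs via union-find: the quotient of the vertex set
    by the equivalence relation generated by the edges."""
    parent = {v: v for v in vertices}

    def find(v):
        # follow parent pointers to the root, with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)  # merge the two components

    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (3, 4)]
assert sorted(map(sorted, connected_components(V, E))) == [[0, 1, 2], [3, 4]]
```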

## The ideal-variety correspondence

I guess I didn’t plan this very well! Instead of completing one series I ended one and am right in the middle of another. Well, I’d really like to continue this series, but seeing as how finals are coming up I probably won’t be able to maintain the one-a-day pace. So I’ll just stop tagging MaBloWriMo.

Let’s summarize the story so far. $R$ is a commutative ring, and $X = \text{MaxSpec } R$ is the set of maximal ideals of $R$ endowed with the Zariski topology, where the sets $V(f) = \{ x \in X | f \in m_x \}$ are a basis for the closed sets. Sometimes we will refer to the closed sets as varieties, although this is mildly misleading. Here $x$ denotes an element of $X$, while $m_x$ denotes the corresponding ideal as a subset of $R$; the difference is more obvious when we’re working with polynomial rings, but it’s good to observe it in general.

We think of elements of $R$ as functions on $X$ as follows: the “value” of $f$ at $x$ is just the image of $f$ in the residue field $R/m_x$, and we say that $f$ vanishes at $x$ if this image is zero, i.e. if $f \in m_x$. (As we have seen, in nice cases the residue fields are all the same.)

For any subset $J \subseteq R$ the set $V(J) = \{ m | J \subseteq m \}$ is an intersection of closed sets and is therefore itself closed, and it is called the variety defined by $J$ (although note that we can suppose WLOG that $J$ is an ideal). In the other direction, for any subset $V \subseteq X$ the set $I(V) = \{ f | \forall x \in V, f \in m_x \}$ is the ideal of “functions vanishing on $V$” (again, note that we can suppose WLOG that $V$ is closed).

A natural question presents itself.

Question: What is $I(V(-))$? What is $V(I(-))$?

In other words, how close are $I, V$ to being inverses?
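A toy model makes the answer easy to guess. Taking $R = \mathbb{Z}$ (my own example, not from the post), the maximal ideals are $(p)$ for primes $p$, a "variety" is a set of primes, and one can compute both composites directly: $V(I(-))$ fixes varieties, while $I(V(n))$ is generated by the product of the distinct primes dividing $n$, i.e. it is the radical of $(n)$.

```python
def prime_factors(n):
    """The distinct primes dividing n, by trial division."""
    n = abs(n)
    ps, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            ps.add(p)
            while n % p == 0:
                n //= p
        else:
            p += 1
    if n > 1:
        ps.add(n)
    return ps

def V(n):
    """The maximal ideals (p) containing (n): the primes dividing n."""
    return prime_factors(n)

def I(S):
    """A generator of the ideal of 'functions vanishing' on a finite
    set S of primes: the product of the primes in S."""
    g = 1
    for p in S:
        g *= p
    return g

# I(V(-)) is the radical: 360 = 2^3 * 3^2 * 5, and I(V(360)) = 30
assert I(V(360)) == 30
# V(I(-)) is the identity on (finite) varieties
assert V(I({2, 7})) == {2, 7}
```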

## The induced representation

Charles Siegel over at Rigorous Trivialities suggested a NaNoWriMo for math bloggers: instead of writing a 50,000-word novel, just write a blog post every day. I have to admit I rather like the idea, so we’ll see if I can keep it up.

Continuing the previous post, what we want to do now is to think of restriction $\text{Res}_H^G : \text{Rep}(G) \to \text{Rep}(H)$ as a forgetful functor, since restricting a representation just corresponds to forgetting some of the data that defines it. Its left adjoint, if it exists, should be a construction of the “free $G$-representation” associated to an $H$-representation. Given a representation $\rho : H \to \text{Aut}(V)$, we therefore want to find a representation $\rho' : G \to \text{Aut}(V')$ with the following universal property: any $H$-intertwining operator $\phi : V \to W$, for $\tau$ a $G$-representation on $W$, naturally determines a unique $G$-intertwining operator $\phi' : V' \to W$. In other words, we want to construct a functor $\text{Ind}_H^G : \text{Rep}(H) \to \text{Rep}(G)$ such that

$\text{Hom}_{\text{Rep}(G)}(\text{Ind}_H^G \rho, \tau) \simeq \text{Hom}_{\text{Rep}(H)}(\rho, \text{Res}_H^G \tau)$.

One way to define a subgroup $H$ of a group $G$ is as the image of a homomorphism into $G$. Given the inclusion map $H \to G$, the contravariant functor $\text{Hom}(-, \text{Aut}(V))$ on the category of groups gives a map $\text{Res}_H^G : \text{Rep}(G) \to \text{Rep}(H)$ called restriction. More concretely, the restriction $\rho|_H$ of a representation $\rho$ is defined simply by $\rho|_H(h) = \rho(h)$ for $h \in H$. Hence there is a functorial way to pass from a representation of a group $G$ to one of a subgroup $H$.

It is not obvious, however, whether there is a functorial way to pass from a representation of $H$ back to one of $G$. There is such a construction, which goes by the name of induction, and we will need it later. Today we'll discuss the general category-theoretic context in which induction is understood, where it arises as an adjoint functor. For more about adjoints, see (in no particular order) posts at Concrete Nonsense, the Unapologetic Mathematician, and Topological Musings.
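At the level of characters, this adjunction can be spot-checked by hand. The sketch below (my own toy computation, using the standard formula $(\text{Ind}\, \chi)(g) = \frac{1}{|H|} \sum_{x \in G,\, x^{-1}gx \in H} \chi(x^{-1}gx)$) induces the trivial character from $A_3$ up to $S_3$ and confirms it equals the trivial-plus-sign character, as Frobenius reciprocity predicts:

```python
from itertools import permutations
from fractions import Fraction

def compose(p, q):
    # (p . q)(i) = p[q[i]], permutations as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def sign(p):
    n = len(p)
    inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
    return -1 if inversions % 2 else 1

G = list(permutations(range(3)))       # S_3
H = [g for g in G if sign(g) == 1]     # A_3

def induce(chi):
    """Character of the induced representation: (Ind chi)(g) is
    (1/|H|) * sum of chi(x^-1 g x) over x in G with x^-1 g x in H."""
    def ind(g):
        total = Fraction(0)
        for x in G:
            c = compose(inverse(x), compose(g, x))
            if c in chi:
                total += chi[c]
        return total / len(H)
    return {g: ind(g) for g in G}

trivial_H = {h: Fraction(1) for h in H}
ind_char = induce(trivial_H)

# Ind_{A_3}^{S_3}(trivial) = trivial + sign, checked pointwise
assert all(ind_char[g] == 1 + sign(g) for g in G)
```

This is the character shadow of the module-level construction $\text{Ind}_H^G V = \mathbb{C}[G] \otimes_{\mathbb{C}[H]} V$ that the adjunction above characterizes.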