(Warning: I’m trying to talk about things I don’t really understand in this post, so feel free to correct me if you see a statement that’s obviously wrong.)

Why are continuous functions the “correct” notion of homomorphism between topological spaces?

The “obvious” way to define homomorphisms for a large class of objects involves thinking of them as “sets with extra structure,” and then homomorphisms are functions that preserve that extra structure. In category theory this is formalized by the notion of a concrete category, i.e. a category with a good notion of forgetful functor. For topological spaces this is the functor which gives the set of points.

However, a naive interpretation of “preserving additional structure” suggests that homomorphisms between topological spaces should be open maps, and this isn’t the case. So what gives?

I’m not sure. But rather than take the concrete perspective I’d like to talk about another general principle for defining a good notion of homomorphism.

**Little insight:** If you can realize a structure as a small category, homomorphisms should be functors.

**About functors**

A functor from a category to another category sends objects and morphisms in the former to objects and morphisms in the latter in a way that preserves identity morphisms and composition. To a first approximation, a functorial construction is one that is “canonical.” Examples in large categories include

- The construction of the dual space $V^*$ of a vector space $V$, which is contravariant,
- The construction of the free group generated by a set of symbols,
- The construction of the fundamental group of a pointed space,
- Hom functors, subsuming the first example, since the dual space construction is the functor $\text{Hom}(-, k)$ where $k$ is the base field; Harrison gives another example: the set of all colorings of a graph $G$ with $n$ colors is $\text{Hom}(G, K_n)$,
- Any combinatorial species; for example, the construction of the set of all total orders on a set.
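The last example is easy to play with concretely. Here is a quick Python sketch (the encoding and helper names are my own) of the species of linear orders: on objects it sends a finite set to its set of total orders, on a bijection it relabels orders, and the functor laws can be checked by brute force.

```python
from itertools import permutations

# The species of linear orders (hypothetical names L_obj, L_mor):
# a finite set A goes to the set of all total orders on A, and a
# bijection f : A -> B goes to "relabel each order along f".
def L_obj(A):
    return set(permutations(A))

def L_mor(f):  # f is a dict encoding a bijection
    return lambda order: tuple(f[x] for x in order)

A = {1, 2, 3}
f = {1: 'a', 2: 'b', 3: 'c'}          # a bijection A -> B
g = {'a': 'c', 'b': 'a', 'c': 'b'}    # a bijection B -> B

# Functor laws: identities go to identities, composites to composites.
ident = {x: x for x in A}
assert all(L_mor(ident)(o) == o for o in L_obj(A))
gf = {x: g[f[x]] for x in A}
assert all(L_mor(gf)(o) == L_mor(g)(L_mor(f)(o)) for o in L_obj(A))
```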

In all of these examples we already had a good notion of homomorphism between the objects we cared about, and we used that as the morphisms of our category. But one of the great things about category theory is how the same concepts reappear at different “levels.”

**Objects as small categories**

It's somewhat unusual to define categories before defining groups (although the Unapologetic Mathematician did it), but if you can be convinced that categories are useful without seeing any abstract algebra first, then you might like this definition.

**Definition:** A group is a small category with one object and all morphisms invertible.

One automatically obtains the definition of a group homomorphism from the definition of a functor, since the identity and composition of morphisms are preserved. The really nice thing about this point of view, as John Armstrong makes clear, is that all sorts of other ideas in group theory can be discussed very succinctly using categorical language. A group action of a group $G$ is just a functor from $G$ to the category of sets and functions; a representation of $G$ is a functor from $G$ to the category of vector spaces and linear transformations.
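Here is a toy illustration of the slogan "a group action is a functor," in Python (my own encoding; the particular action is made up for the example): $\mathbb{Z}/3$ is a one-object category whose morphisms are $0, 1, 2$ composed by addition mod 3, and an action on a six-element set assigns to each morphism a function so that identities and composites are preserved.

```python
# A minimal sketch: the cyclic group Z/3 as a one-object category whose
# morphisms are 0, 1, 2 with composition = addition mod 3. An action on
# X = {0, ..., 5} is a functor to Set: each morphism g goes to the
# function F(g): X -> X below (chosen arbitrarily for the example).
X = list(range(6))

def F(g):
    # the morphism g acts by rotating by 2g positions mod 6
    return lambda x: (x + 2 * g) % 6

# Functor laws: the identity morphism acts as the identity function...
assert all(F(0)(x) == x for x in X)
# ...and the composite morphism acts as the composite function.
for g in range(3):
    for h in range(3):
        comp = (g + h) % 3  # composition in the group
        assert all(F(comp)(x) == F(g)(F(h)(x)) for x in X)
```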

It is possible, but less trivial, to define rings and algebras this way as well; again one gets the definition of homomorphism for free. The definition also easily extends to a homomorphism of group actions, since any group action of a group $G$ on a set $X$ can be viewed as a category with object set $X$ and a morphism from $x$ to $gx$ for each pair $(g, x)$. Group actions define equivalence relations on $X$, and in fact equivalence relations are also categories.

**Definition:** An equivalence relation on a set $X$ is a small category with object set $X$ and at most one morphism between any pair of objects such that every morphism is invertible.

What is nice about this perspective is that all of the equivalence relation axioms fall very naturally out of the category axioms. Composition is equivalent to transitivity, the existence of identity morphisms is equivalent to reflexivity, and symmetry is equivalent to invertibility. Functors between categories of the above form are morphisms of equivalence relations (although people don’t seem to talk about those much), which are precisely the functions that preserve equivalence. A common generalization of the above definitions is the following.
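For a finite example, one can encode the morphism set of such a category as a set of ordered pairs and watch the three axioms fall out, as in this Python sketch (encoding mine):

```python
# An equivalence relation on X encoded as a category whose morphism set
# is a set of pairs (x, y), here "congruent mod 3" on {0, ..., 8}.
X = range(9)
hom = {(x, y) for x in X for y in X if x % 3 == y % 3}

assert all((x, x) in hom for x in X)            # identities = reflexivity
assert all((y, x) in hom for (x, y) in hom)     # inverses = symmetry
assert all((x, z) in hom                        # composition = transitivity
           for (x, y) in hom for (y2, z) in hom if y == y2)
```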

**Definition:** A groupoid is a small category with all morphisms invertible.

Thus a group is a one-object groupoid. Groupoids are a natural way to study objects whose “symmetries” only compose under certain conditions; for example, the set of permissible moves in the 15 puzzle forms a groupoid. Weinstein has a nice introduction to groupoids here.

Groupoids are very cool; they have an intriguing notion of cardinality based on symmetry considerations that extends to some categories. There’s a lot of mysterious stuff going on here; see, for example, John Baez’s summary.

Anyway, back to stuff I sort of understand. Equivalence relations are special cases of relations; let’s talk about another important type of relation.

**Definition:** A partially ordered set is a small category with at most one morphism between any pair of objects such that no morphisms besides the identities are invertible.

Again, the usual axioms for a poset fall out of the category axioms: composition is equivalent to transitivity, the existence of identity morphisms is equivalent to reflexivity, and non-invertibility is equivalent to antisymmetry. Again, functors between such categories define homomorphisms of posets, which are precisely the order-preserving (monotone) functions.
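A small sanity check in Python (the example is mine): take the divisibility poset on $\{1, \dots, 30\}$ and the divisor-counting function into $(\mathbb{N}, \le)$. Functoriality is exactly the statement that every morphism $a \mid b$ is sent to a morphism $d(a) \le d(b)$.

```python
# A poset as a category: hom(a, b) is nonempty iff a <= b. A functor to
# another poset is exactly a monotone map. Here: the divisibility poset
# on 1..30, and f(n) = number of divisors of n, monotone into (N, <=)
# because every divisor of a is a divisor of b whenever a divides b.
divides = lambda a, b: b % a == 0
f = lambda n: sum(1 for d in range(1, n + 1) if n % d == 0)

# f preserves every morphism of the divisibility poset:
assert all(f(a) <= f(b)
           for a in range(1, 31) for b in range(1, 31) if divides(a, b))
```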

Many of the important concepts of poset theory correspond to ideas first understood (at least if I’ve got my history right) for large categories; for example, infima and suprema are none other than products and coproducts, and initial and terminal objects are none other than smallest and greatest elements. A lattice is a poset with all finite products and coproducts (although in lattice theory they’re called meets and joins, generalizing intersections and unions in Boolean lattices).

Topological spaces are usually defined in terms of their lattice of open sets, which is characterized as a sublattice of the Boolean lattice $2^X$ of some set $X$ that is closed under finite meets and arbitrary joins and has a smallest element (the empty set) and a greatest element (the whole set). This perspective suggests the direction of pointless topology, which has interesting things to say about why the definition of a continuous function seems “backwards.” Here’s what the Wikipedia article seems to be saying.

**Definition:** A frame is a lattice such that finite meets distribute over arbitrary joins (i.e. finite products distribute over arbitrary coproducts).

Hence any lattice of open sets is a frame, although the converse isn’t true. Since coproducts and products can be defined by means of diagrams, I think there is some general abstract nonsense that justifies looking at functors (morphisms) between frames that preserve finite meets and arbitrary joins, although I don’t have the category theory background to know if this is really true or not. Anyway, this defines a category of frames with a “natural” notion of homomorphism.
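For a finite topology the frame law can be verified exhaustively; here is a sketch in Python (the particular topology is chosen arbitrarily for the example):

```python
from functools import reduce
from itertools import combinations

# The open sets of a small hand-picked topology on {0, 1, 2}.
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})]

def join(vs):  # joins of open sets are unions
    return reduce(frozenset.union, vs, frozenset())

# The frame law: finite meets (here, binary intersection) distribute
# over arbitrary joins.
for u in opens:
    for r in range(len(opens) + 1):
        for vs in combinations(opens, r):
            assert u & join(vs) == join([u & v for v in vs])
```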

In the category of topological spaces and continuous functions, there is a contravariant functor $\mathcal{O}$ taking a space $X$ to its lattice of open sets $\mathcal{O}(X)$ and taking a continuous function $f : X \to Y$ to the map $\mathcal{O}(Y) \to \mathcal{O}(X)$ defined by taking preimages. The contravariance of this functor is the essence of the “backwardness” of the definition of continuous function.

So where do contravariant functors come from? A natural place to look for them is in Hom functors of the form $\text{Hom}(-, c)$ for some fixed object $c$. A moment’s thought suggests setting $c = S = \{ 0, 1 \}$ with the topology generated by the open set $\{ 1 \}$ (the Sierpinski space). Continuous functions $X \to S$ are in one-to-one correspondence with open sets of $X$; just take the preimage of $\{ 1 \}$. It follows that $\mathcal{O}$ is essentially the contravariant functor $\text{Hom}(-, S)$. (There’s an enrichment here, but I’m not sure what the details are.)
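This correspondence is easy to check by brute force for a small finite space. The Python sketch below (encoding mine) enumerates all maps from a three-point space to the two-point space with $\{1\}$ open, and confirms that the continuous ones are exactly classified by the open preimage of $\{1\}$.

```python
from itertools import product

# The Sierpinski space S = {0, 1} with {1} open, and a small space X.
S_opens = [frozenset(), frozenset({1}), frozenset({0, 1})]
X = [0, 1, 2]
X_opens = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})]

def continuous(f):  # f : X -> S as a dict; preimages of opens must be open
    return all(frozenset(x for x in X if f[x] in V) in X_opens
               for V in S_opens)

cts = [f for f in (dict(zip(X, vals)) for vals in product([0, 1], repeat=3))
       if continuous(f)]
# Each continuous map is determined by the (open) preimage of {1},
# and every open set of X arises this way.
preimages = {frozenset(x for x in X if f[x] == 1) for f in cts}
assert preimages == set(X_opens) and len(cts) == len(X_opens)
```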

So this seems to be more or less what’s going on:

- Given a set $X$ in the category $\mathbf{Set}$ of sets and functions, $\text{Hom}(X, B)$ with $B = \{ 0, 1 \}$ gives a functor from $\mathbf{Set}$ to the power set of $X$, i.e. the Boolean lattice $2^X$. To recover the order structure one should move to the category of posets and require that $0 \le 1$ in $B$; this induces the usual order on $2^X$. (Again, there's an enrichment here, but I’m not sure what the details are.)
- If you want to pick out certain kinds of subsets over others, you need to control the preimage of $1$ in the above construction.
- If you want your Boolean lattices to have extra properties (finite meets, arbitrary joins) and morphisms preserving those properties, this restricts the type of functions that can be lifted by the functor above in a way naturally involving preimages.

Or something like that. If someone could let me know if I’m on the right track and / or correct something obviously wrong that I’ve said, that would be great. Alternately, propose an entirely different definition of a topological space as a small category; I think an approach is possible through the Kuratowski closure axioms. I’d also love it if someone could tell me what the Yoneda lemma has to say about all this.

on August 8, 2009 at 9:32 pm | Theo: Well, I won’t be able to answer all your questions, but I have a few comments.

(1) My favorite definition of “topological space” is a set with a good notion of which nets converge. A net is a function from a “directed set”, that is, a poset (or rather a preordered set) in which every pair of elements has an upper bound. It is a theorem that a function between topological spaces is continuous in the traditional sense if and only if it takes convergent nets to convergent nets and respects their limits. (In a first-countable space, it’s enough to replace the word “net” with the word “sequence”.) I believe that one can axiomatize in a pretty reasonable way the word “topological space” solely in terms of conditions on the structure of convergent nets; then a homomorphism is a function of sets that preserves this structure.

(2) You correctly recognize that “set with equivalence relation” is the same as “groupoid with at most one morphism between any two objects”. A better word for the latter is “0-groupoid”. A 0-category is a category enriched over the two-element category {F,T}, which in addition to the identity maps has the unique arrow F\to T (and the monoidal structure given by AND, which is the categorical product). Well, that’s not the usual definition of 0-category; the usual definition is of a category enriched over the two-element category with no non-identity arrows. But anyway, in either definition a 0-groupoid is a 0-category with all arrows invertible. You should read Mike Shulman’s appendices to Lectures on n-Categories and Cohomology for more details in this direction.

Anyhoo, from the categorical point of view, the notion of “set” is not a natural one. Rather, “0-groupoid” is. Moreover, “poset” should refer to what’s normally called “preordered set”, and the easiest definition of this is “0-category”; in low-brow language I mean a set with a transitive reflexive relation, but without the condition that if $a\leq b$ and $b\leq a$ then $a = b$. At issue is that a category equivalent to a (usual) poset is not necessarily a (usual) poset, just as a 0-groupoid equivalent to a (usual) set is not necessarily a (usual) set. So then the words “usual set” and “usual poset” should be defined as “skeletal 0-groupoid” and “skeletal 0-category” respectively.

An aside: in the definition of “net” above I used the word “directed set”, which I defined. In light of the previous paragraph, a better definition starts with saying that a “directed set” is a 0-category. Then I want the condition that any two elements have an upper bound. A _least upper bound_ is precisely a categorical coproduct, but I don’t know how to make the definition of “directed set” natural in category language. Actually, a quick look on wikipedia generalizes “directed set” to filtered category, a word I probably should have known.

(3) I’ve never been totally happy with the notion of a “frame”, and the other notions from pointless topology. This is probably because I am an algebraist, and so I always want to define “space” as (dual to) “commutative algebra”. For example, a “locally compact Hausdorff topological space” is a “commutative C^* algebra”, and an “affine scheme” is a “finitely generated algebra”. These are both (usually) over the field of complex numbers, but one can always try to port notions to, say, the field F_1 with one element, or to the ring {F,T}, which you called B. So I’ve often wanted to define “topological space” as (dual to) “ring over {F,T}”. Now, if {F,T} is given the topology in which {F} is the only nontrivial closed set, then the continuous functions from a topological space X to {F,T} determine X up to equivalence (in that if two points in a topological space are in all the same open sets, they should be identified). And the set of such continuous functions is an algebra over {F,T} — just like the set of continuous functions to R is. Indeed, the preimage of T is an open set, and open sets are characterized by being the preimages of T, and the ring operations in the algebra of open sets are INTERSECTION and UNION. Anyhoo, so a topological space is determined up to equivalence by its “ring of functions”, but it’s not true that every ring over {F,T} is the ring of functions of a topological space.

All of this is a long tangent to say that whatever theory you end up with to explain the contravariance, it probably has a natural analogue over any field. Because, as you say, contravariance usually comes from thinking about the functions out of a space. Yoneda says that if you understand the functor that assigns to any other space Y the set of functions from X to Y, then you understand X. But most of these categories of spaces are such that to understand X you need only understand the set of functions to some particular space.

As a side remark, this all explains your question about the enrichment. If B is an algebraic object, then the set of functions to B inherits all sorts of structure from B. In particular, if B is a ring, then Hom(X,B) is an algebra over B. (I’ve enriched a bit far: algebras over {F,T} are more than just posets, but rather boolean algebras without complements; boolean algebras are algebras over F_2 with trivial Frobenius map.)

on August 8, 2009 at 10:00 pm | Qiaochu Yuan: Thanks for the comments! (I apologize for the small width of this column, but it seems I can’t change it without purchasing something.)

1) This is an interesting idea, but the naive way to do this (at least to me) would be to take the objects to be the points of the space, and then I can’t imagine what the morphisms would be. Maybe someone else knows what to do with this idea, though. It seems morally related to the notion of metric spaces as enriched categories, since it’s the metric that provides the notion of which nets converge.

2) This is very interesting and that “skeletal” remark just cleared up a major point for me. Before I read this I was very uncomfortable with preorders but now I see that a preorder is just what happens when we allow some objects to be isomorphic but not equal and now it’s the most natural idea in the world.

3) I had this in mind, but I didn’t see any easy way to extend these ideas from LCH spaces. I have to admit that I was biased in favor of frames from reading a lot of Gian-Carlo Rota recently; he has some strong opinions about the value of lattice theory and it just seemed natural to me as I was writing this, especially because I don’t have much experience with the C*-algebra/NCG perspective.

on August 9, 2009 at 7:22 pm | Omar: But continuous maps *are* the functions preserving the additional structure of topological spaces, for an appropriate notion of additional structure! (I tried explaining this in the comments to some other blog and somehow managed to mess it up; I’m trying again here.)

The important additional structure of a topological space is the vague notion of points being in some sense close together. Of course, there is no useful way to say that two points are close together, so closeness is not a relation on points.

One way to specify closeness is, as Theo pointed out in the first comment, to say when a net converges to a point. Perhaps a more basic way is to say when a point is close to a set of points. Let’s say a point x of a space X is close to a subset A iff x is in the closure of A (which is the only reasonable choice, I guess).

So, each topological space X has a relation, called closeness, between X and subsets of X. This relation actually specifies the topology on X completely, since it obviously specifies the closure operator of X, and there is a characterization of the topology in terms of said closure operator.

This means we can regard topological spaces as sets with the extra structure of a relation between points and subsets, which has to satisfy certain axioms we get from Kuratowski’s axioms (along with the evident requirement that every point of A is close to A):

(1) No point is close to the empty set

(2) If a point is close to the set of points close to A, then it is also close to A

(3) If a point is close to the union of two subsets, then it must be close to at least one of them.

It is an easy exercise to show that continuous maps between two spaces are just functions that preserve the closeness relation.
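For finite spaces this exercise can even be machine-checked. A Python sketch (the encoding and the particular space are mine): compute closures from the closed sets, take a map that is continuous for the chosen topology, and verify that closeness is preserved.

```python
from itertools import combinations

# A small space X with its open sets, hence its closed sets and closures.
X = {0, 1, 2}
X_opens = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset(X)]
closeds = [frozenset(X) - U for U in X_opens]

def closure(A):
    return frozenset.intersection(*[C for C in closeds if A <= C])

# "x is close to A" means x in closure(A). The self-map f below is
# continuous for this topology (preimages of opens are open)...
f = {0: 0, 1: 0, 2: 2}
assert all(frozenset(x for x in X if f[x] in U) in X_opens for U in X_opens)

# ...and it preserves closeness: x close to A implies f(x) close to f(A).
subsets = [frozenset(s) for r in range(4) for s in combinations(sorted(X), r)]
for A in subsets:
    fA = frozenset(f[a] for a in A)
    assert all(f[x] in closure(fA) for x in closure(A))
```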

on August 9, 2009 at 10:11 pm | Qiaochu Yuan: Very nice. This definition seems to me to be by far the most intuitive definition of a topological space I’ve ever seen. This is exactly the problem I have with a book like Rudin, which never motivates its definitions.

on August 10, 2009 at 8:04 am | Harrison: That really is a lovely characterization.

Strictly speaking, I’d want a better justification for why closeness isn’t a meaningful relation on points, other than “well, it isn’t.” A first approximation to that justification might be that closeness is obviously not transitive; if it were, then for sufficiently well-behaved topological spaces *every* pair of points would be close together, which is clearly undesirable. Still, there are plenty of useful and interesting relations which aren’t transitive, so this isn’t all that convincing.

on August 10, 2009 at 8:56 am | Omar: Harrison, I don’t think transitivity is the only thing that goes wrong when trying to define closeness as a relation between points. In some sense two points really are just never close to each other in familiar spaces!

Think of the real numbers R. Here we have a clear idea of what we want the topology to be and what we want continuous functions from R to R to be. But there is just no relation between points such that continuous functions are exactly the functions preserving this relation.

Certainly, we want all functions of the form ax+b to be continuous. But these functions can map any pair of distinct points to any other pair of distinct points, so if there were some pair of points in R close to each other, we would have to have all pairs close to each other!

(Intuitively this makes sense: are 0 and 1 close? Well, probably not. Are 0 and 1/1000000 close? If so, multiplication by 1000000 isn’t continuous.)

You can try, instead of just saying some points are close, to say how close they are. This works, of course, and produces some kinds of spaces similar to topological spaces. Metric spaces are perhaps the most obvious version, but uniform spaces can be regarded as a version of this idea too.

Another approach: instead of trying to define “x and y are epsilon-close”, as metric and uniform spaces do, you can go the route of saying “well, x is not really close to y_1, or to y_2, etc., individually, but it is closer and closer to the y_i”. This way leads to convergence spaces, the axioms for nets to converge, and my closeness relation between points and subsets.

The definitions you get aren’t usually equivalent to topological spaces but I would say my intuition does not really distinguish them (which means, for example, my intuition without special training is completely unreliable when attempting to guess whether something is true only for metric spaces or also for uniform spaces or generally for topological spaces). All of these things seem to me as reasonable formalizations of the idea of a “space where you can say when points are close to each other”. I only prefer topological spaces because they seem more useful and they probably only seem more useful to me because (1) I studied them for much longer in college!, (2) they are more general, so appear more often, and (3) actually more is known about them since other people prefer them too!

on August 10, 2009 at 10:00 am | Akhil Mathew: You could define “$x$ is close to $y$ if $x \in \overline{\{y\}}$,” but that’s not really useful in Hausdorff spaces (and it’s not symmetric), though it is useful for schemes. It should probably read “$x$ is a specialization of $y$.”

on August 10, 2009 at 2:09 pm | Harrison: Omar: Sure, okay, that’s reasonable. It still doesn’t have the advantage of being intuitive from first principles, but it makes sense. (Qiaochu: Motivating your definitions is fine, but having definitions that are immediately obviously correct is even better!)

Akhil: Yeah, I thought of that, too, and also realized that it was useless for Hausdorff spaces. Didn’t know it was useful in geometry though!

on August 10, 2009 at 1:25 pm | Phi. Isett: I think that one reason you wouldn’t require the maps between topological spaces to be open is that there is something less natural (in the context of topological spaces) about the induced covariant map on power sets ($V \mapsto f(V)$, the “image” map) as compared to the contravariant one ($V \mapsto f^{-1}(V)$).

The operation of inverse image preserves all the algebraic structure of the power set (arbitrary intersections and unions, the Boolean ring operations, and the unit/0 element). Since the basic axioms of topology (as in measure theory) are formulated with regard to these operations, which are not preserved by the “image” map, I would believe that (just on principle alone) the inverse image is the right thing to consider.
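This contrast is easy to demonstrate concretely; a small Python sketch (example mine) with a non-injective map:

```python
# A non-injective map and its image / preimage operations on subsets.
f = {0: 'a', 1: 'a', 2: 'b'}

def pre(V):  # inverse image
    return {x for x in f if f[x] in V}

def img(U):  # forward image
    return {f[x] for x in U}

# Inverse image preserves intersections (likewise unions and complements):
assert pre({'a'}) & pre({'b'}) == pre({'a'} & {'b'})
# ...but the forward image need not even preserve intersections:
A, B = {0}, {1}
assert img(A & B) != img(A) & img(B)
```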

on August 10, 2009 at 2:00 pm | Qiaochu Yuan: I think this is a case where intuition from $\mathbf{Set}$ goes awry, since in $\mathbf{Set}$ we can define a subset of a set as the image of a function *into* the set (covariant), whereas the “correct” definition is as some preimage of a function *out of* the set (contravariant). A hint at the right generalization occurs in $\mathbf{Grp}$, where the former defines subgroups but the latter defines *normal* subgroups.

on August 12, 2009 at 6:47 am | Akhil Mathew: “since in $\mathbf{Set}$ we can define a subset of a set as the image of a function into the set (covariant), whereas the “correct” definition is as some preimage of a function out of the set (contravariant).”

Interesting. I was aware of only the first definition, since that is a subobject in $\mathbf{Set}$. When do you need the second? (In $\mathrm{Grp}$ you could just define normal subgroups as kernels.)

on August 12, 2009 at 10:58 am | Qiaochu Yuan: I guess what I’m saying is that open sets aren’t subobjects, so the subobject definition is “wrong” for this particular purpose.

on August 14, 2009 at 8:36 am | Akhil Mathew: “open sets aren’t subobjects”

I don’t know if I missed something here, but my understanding of a subobject was a map $f$ such that if $g, h$ were maps with $fg = fh$, then $g = h$.

This is satisfied for open subsets.

on August 14, 2009 at 10:33 am | Qiaochu Yuan: Yes, but the set of all subobjects of an object in $\mathbf{Top}$ isn’t the set of open subsets. I guess I’ve been wording things poorly; I should’ve said “open subset is not synonymous with subobject.”

on August 14, 2009 at 11:30 am | Akhil Mathew: Sure, thanks for the clarification.