What does it mean to define a mathematical object?
Roughly speaking, there are two strategies you could use. One is to construct that object from other objects. For example, $\mathbb{R}$ can be constructed from $\mathbb{Q}$ using Cauchy sequences or Dedekind cuts.
Another is to give a list of properties that uniquely specify the object. For example, $\mathbb{R}$ can be defined as the unique Dedekind-complete totally ordered field.
Let’s call these strategies construction and specification. Construction has the advantage of concreteness, and it also proves that the object exists. However, it has the disadvantage that constructions are not unique in general and you might not be working with the most useful construction at a given time; to prove things about an interesting mathematical object you might have to switch between various constructions (while also proving that they all construct the same object).
Relying excessively on construction also has a curious disadvantage when talking to students about the object in question: namely, they might ask you how to show that object $X$ satisfies property $P$, and you might respond "well, it's obvious if you use construction $A$," and they might respond "but I was taught that $X$ is [construction $B$]!" The possibility of multiple different constructions of the same mathematical object makes construction somewhat unsatisfying in terms of describing what you even mean when you say "object $X$."
This is the appeal of specification. Although specification is more abstract and harder to grasp at first, it has the advantage that you don't have to use any particular construction of an object to prove things about it: instead, you should in principle be able to prove everything from the properties alone, since those uniquely specify the object. This often leads to cleaner proofs. Specifications also usually give a better motivation for looking at an object: it's easier to make sense of the claim that we want a mathematical object with various properties than to make sense of the claim that we want a mathematical object which is constructed in some arbitrary-looking (to the uninitiated) fashion.
Below we’ll go through a few examples of mathematical objects and some constructions and specifications of them.
The natural numbers
How would you define the natural numbers $\mathbb{N}$? Here are a few options:
- The finite von Neumann ordinals in ZF.
- The unique model of second-order Peano arithmetic.
- The isomorphism classes of finite sets.
- The initial algebra of the endofunctor $X \mapsto 1 + X$ on $\text{Set}$ (where $1$ denotes a one-element set and $+$ denotes the disjoint union).
Of these options, I would describe Options 1 and 3 as constructions and Options 2 and 4 as specifications.
I am not a fan of Option 1 as a definition. Of the four choices above, it does the worst job of explaining why we care about the natural numbers in the first place.
Option 2 has the virtue of making the conceptual point that we care about the natural numbers as a way of formalizing our experiences with numbers, which we codify into the axioms of Peano arithmetic. It’s also interesting in that no first-order version of Peano arithmetic suffices (e.g. by the Löwenheim-Skolem theorem); there will always exist nonstandard models.
Option 3 is a nice example of (de)categorification (thinking of $\text{FinSet}$ as a categorification of the natural numbers). One reason to like Option 3 is that it imbues the standard operations on natural numbers with categorical meaning. Addition of natural numbers is a decategorification of taking coproducts, multiplication is a decategorification of taking products, and exponentiation is a decategorification of taking exponential objects. Hence together these three structures reflect the fact that $\text{FinSet}$ is a bicartesian closed category.
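To make the decategorification concrete, here is a small Haskell sketch (my own illustration, not part of the original argument): representing finite sets as lists of their elements, the cardinality of a coproduct, product, or exponential is the sum, product, or power of the cardinalities.

```haskell
import Data.List (genericLength)

-- Finite sets represented as lists of their elements.
card :: [a] -> Integer
card = genericLength

-- Coproduct (disjoint union): |A + B| = |A| + |B|.
coproduct :: [a] -> [b] -> [Either a b]
coproduct as bs = map Left as ++ map Right bs

-- Product: |A x B| = |A| * |B|.
pairs :: [a] -> [b] -> [(a, b)]
pairs as bs = [(x, y) | x <- as, y <- bs]

-- Exponential (functions A -> B, tabulated as lists of outputs): |B^A| = |B|^|A|.
functions :: [a] -> [b] -> [[b]]
functions as bs = mapM (const bs) as

main :: IO ()
main = do
  let a = [1, 2, 3 :: Int]
      b = "xy"
  print (card (coproduct a b)) -- 5 = 3 + 2
  print (card (pairs a b))     -- 6 = 3 * 2
  print (card (functions a b)) -- 8 = 2 ^ 3
```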
Option 4 may be unfamiliar, but it is currently my favorite, and unlike Option 3 it has a strong relationship to induction. In more detail: if $F$ is an endofunctor of a category $C$, then an algebra of $F$, or $F$-algebra, is an object $X \in C$ together with a morphism $F(X) \to X$, and the initial algebra of $F$ is the initial object in the category of $F$-algebras. Initial algebras offer an elegant formalism for recursive definitions.
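As a rough illustration of how initial algebras package recursion, here is a minimal Haskell sketch (the names `Algebra`, `Fix`, and `cata` are standard in the functional programming literature, but the code is mine, not the post's): an $F$-algebra is a structure map `f a -> a`, the initial algebra is the least fixed point of `f`, and initiality is witnessed by the fold `cata`.

```haskell
-- An F-algebra: a carrier `a` together with a structure map `f a -> a`.
type Algebra f a = f a -> a

-- The initial algebra of f, as the least fixed point of f.
newtype Fix f = Fix (f (Fix f))

-- Initiality: for every algebra there is a (unique) map out of Fix f, the fold.
cata :: Functor f => Algebra f a -> Fix f -> a
cata alg (Fix t) = alg (fmap (cata alg) t)

-- The endofunctor X |-> 1 + X from Option 4 ...
data NatF x = ZeroF | SuccF x

instance Functor NatF where
  fmap _ ZeroF     = ZeroF
  fmap g (SuccF x) = SuccF (g x)

-- ... whose initial algebra is the natural numbers.
type Nat = Fix NatF

-- Folding into the algebra (0, +1) on Integer recovers the usual numerals.
toInt :: Nat -> Integer
toInt = cata alg
  where
    alg ZeroF     = 0
    alg (SuccF n) = n + 1
```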
Example. If $F$ is the functor $X \mapsto 1 + A \times X$, where $A$ is a fixed set, then the initial $F$-algebra is the set $A^{\ast}$ of (finite) lists of elements of $A$.
Example. If $F$ is the functor $X \mapsto 1 + X \times X$, then the initial $F$-algebra is the set of full binary trees.
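In Haskell-style notation (again my own sketch, with names of my choosing), these two examples are the familiar recursive datatypes below, and their fold functions witness initiality.

```haskell
-- Initial algebra of X |-> 1 + A x X: finite lists over A.
data List a = Nil | Cons a (List a)

foldList :: b -> (a -> b -> b) -> List a -> b
foldList nil _    Nil         = nil
foldList nil cons (Cons x xs) = cons x (foldList nil cons xs)

-- Initial algebra of X |-> 1 + X x X: full binary trees.
data Tree = Leaf | Node Tree Tree

foldTree :: b -> (b -> b -> b) -> Tree -> b
foldTree leaf _    Leaf       = leaf
foldTree leaf node (Node l r) = node (foldTree leaf node l) (foldTree leaf node r)

-- Example: count the leaves of a tree using only its fold.
leaves :: Tree -> Integer
leaves = foldTree 1 (+)
```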
Initial algebras are specified via a particularly nice kind of property, namely a universal property. Objects satisfying universal properties are always guaranteed to be unique up to unique isomorphism, so half of the problem of using a specification doesn’t occur and we only need to show existence.
Option 4 can be made more explicit as follows. An algebra over the functor $X \mapsto 1 + X$ is a set $X$ together with a map $1 + X \to X$. This is precisely the data of an endomorphism $f : X \to X$ together with a point $x \in X$. Hence the category of algebras over this functor is the category of pointed dynamical systems, and Option 4 says that $\mathbb{N}$, together with the point $0$ and the endomorphism $n \mapsto n + 1$, is the initial pointed dynamical system. Said another way, $\mathbb{N}$ is the free dynamical system on one element. More explicitly, if $(X, f)$ is a dynamical system and $x \in X$ is a point, then there is a unique map $\phi : \mathbb{N} \to X$ such that $\phi(0) = x$ and $\phi(n+1) = f(\phi(n))$ for all $n \in \mathbb{N}$.
But this is a form of the principle of induction! In fact, it constitutes a definition of the notation $f^n(x)$ describing the iterated application of a function to an element, since we of course have $\phi(n) = f^n(x)$.
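A minimal sketch of this universal property in Haskell (illustrative, with names of my choosing): the unique map $\phi$ is the fold of the naturals, and computing it just iterates $f$ starting from $x$, so that $\phi(n) = f^n(x)$.

```haskell
data Nat = Zero | Succ Nat

-- The unique map phi out of the initial pointed dynamical system:
-- phi x f 0 = x and phi x f (n+1) = f (phi x f n).
phi :: a -> (a -> a) -> Nat -> a
phi x _ Zero     = x
phi x f (Succ n) = f (phi x f n)

-- phi x f n is exactly the n-fold iterate f^n applied to x.
three :: Nat
three = Succ (Succ (Succ Zero))

example :: Integer
example = phi 1 (* 2) three  -- doubling three times: 1 -> 2 -> 4 -> 8
```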
The lesson here is that $\mathbb{N}$ is the thing that indexes iterations. When you talk about doing something, then doing it twice, then doing it a third time, and so forth, you're talking about the universal property of $\mathbb{N}$!
The universal property gives an alternate method of constructing the standard operations on natural numbers which is in better agreement with how they are usually constructed. First, apply the universal property to $\mathbb{N}$ itself: for any natural number $m$, there is a unique map $\phi_m : \mathbb{N} \to \mathbb{N}$ such that $\phi_m(0) = m$ and $\phi_m(n+1) = \phi_m(n) + 1$, hence $\phi_m(n) = m + n$.
This gives us addition ("addition is repeated successor"). Now the map $n \mapsto m + n$ above is itself an endomorphism of $\mathbb{N}$, so we can apply the universal property again: for any natural number $m$ there is a unique map $\mu_m : \mathbb{N} \to \mathbb{N}$ such that $\mu_m(0) = 0$ and $\mu_m(n+1) = \mu_m(n) + m$, hence $\mu_m(n) = mn$.
This gives us multiplication ("multiplication is repeated addition"). The map $n \mapsto mn$ is itself an endomorphism of $\mathbb{N}$, so we can apply the universal property once again to recover exponentiation ("exponentiation is repeated multiplication"). Repeating this process gives tetration and so on, which isn't suggested by Option 3 (tetration doesn't seem to decategorify an interesting categorical operation on $\text{FinSet}$). It's unclear what to make of this.
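Here is a sketch of this tower in Haskell (my own illustration; `phi` is the fold supplied by the universal property): each operation is obtained by feeding the previous one back into `phi`.

```haskell
data Nat = Zero | Succ Nat

-- The universal property of the naturals as a fold: phi x f is the unique map
-- sending 0 to x and n+1 to f applied to the image of n.
phi :: a -> (a -> a) -> Nat -> a
phi x _ Zero     = x
phi x f (Succ n) = f (phi x f n)

one :: Nat
one = Succ Zero

-- Addition is repeated successor, multiplication is repeated addition,
-- exponentiation is repeated multiplication, tetration is repeated exponentiation.
add, mul, pow, tetr :: Nat -> Nat -> Nat
add  m = phi m    Succ
mul  m = phi Zero (add m)
pow  m = phi one  (mul m)
tetr m = phi one  (pow m)
```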
The exponential function
How would you define the exponential function $e^x$? Here are a few options:
- The limit $\lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n$.
- The series $\sum_{n \ge 0} \frac{x^n}{n!}$.
- The unique solution to the differential equation $f' = f$ with initial condition $f(0) = 1$.
Of these options, I would describe Options 1 and 2 as constructions and Option 3 as a specification.
Here I think the situation is more clear-cut than with the natural numbers: as a definition, Option 3 is the clear winner. It describes concisely why we care about the exponential function and why it's important: it's an eigenfunction of the differentiation operator with eigenvalue $1$. This fundamental fact is ultimately responsible for the important role that the exponential function plays in the theory of differential equations and hence all of mathematics.
Option 3 also explains why we care about Option 1 and Option 2. Why do we care about that particular limit? Because it's the limit you get when you apply Euler's method with $n$ steps and step size $\frac{x}{n}$ to the initial value problem $f' = f$, $f(0) = 1$, and let $n \to \infty$. Why do we care about that particular series? Because it's the unique power series that satisfies $f' = f$, $f(0) = 1$.
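A quick numeric sanity check of this claim, as a Haskell sketch with sample values of my own choosing: $n$ steps of Euler's method for $f' = f$, $f(0) = 1$ with step size $\frac{x}{n}$ produce exactly $\left(1 + \frac{x}{n}\right)^n$, and both this and the truncated series approach $e^x$ as $n$ grows.

```haskell
-- n steps of Euler's method for f' = f, f(0) = 1, with step size x/n.
euler :: Int -> Double -> Double
euler n x = iterate step 1 !! n
  where
    h = x / fromIntegral n
    step f = f + h * f  -- f_{k+1} = f_k + h * f_k, i.e. multiply by (1 + x/n)

-- The same quantity written as the limit expression (1 + x/n)^n.
limitExpr :: Int -> Double -> Double
limitExpr n x = (1 + x / fromIntegral n) ^ n

-- Partial sum of the power series sum_k x^k / k!.
seriesExpr :: Int -> Double -> Double
seriesExpr n x = sum (take (n + 1) (scanl (\t k -> t * x / k) 1 [1 ..]))

main :: IO ()
main = do
  let x = 1.0 :: Double
  print (euler 1000 x, limitExpr 1000 x)  -- both approximately 2.7169
  print (seriesExpr 20 x, exp x)          -- both approximately 2.71828
```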
One of the most important properties of the exponential function is that $e^{x+y} = e^x e^y$. What does a proof of this property look like starting from each of the three options?
Option 1: We claim that

$\displaystyle \lim_{n \to \infty} \left( \frac{\left(1 + \frac{x}{n}\right)\left(1 + \frac{y}{n}\right)}{1 + \frac{x+y}{n}} \right)^n = 1.$

The desired conclusion follows from this claim and the observation that

$\displaystyle \left(1 + \frac{x}{n}\right)^n \left(1 + \frac{y}{n}\right)^n = \left(1 + \frac{x+y}{n}\right)^n \left( \frac{\left(1 + \frac{x}{n}\right)\left(1 + \frac{y}{n}\right)}{1 + \frac{x+y}{n}} \right)^n.$

To see the claim, note that $\left(1 + \frac{x}{n}\right)\left(1 + \frac{y}{n}\right) = 1 + \frac{x+y}{n} + \frac{xy}{n^2}$, so it suffices to observe that the expression in question is, for sufficiently large $n$, bounded between $\left(1 - \frac{c}{n^2}\right)^n$ and $\left(1 + \frac{c}{n^2}\right)^n$ for any $c > |xy|$, and that both of these bounds tend to $1$.
Option 2: Formally, we have

$\displaystyle \left( \sum_{n \ge 0} \frac{x^n}{n!} \right) \left( \sum_{m \ge 0} \frac{y^m}{m!} \right) = \sum_{n \ge 0} \sum_{k=0}^{n} \frac{x^k y^{n-k}}{k!(n-k)!}$

which, by the binomial theorem, we can rewrite as

$\displaystyle \sum_{n \ge 0} \frac{(x+y)^n}{n!},$

but various bounds need to be checked to ensure that these formal manipulations are valid.
Option 3: Regarded as functions of $y$, both $e^{x+y}$ and $e^x e^y$ are solutions to the differential equation $f' = f$ with initial value $f(0) = e^x$, and this solution is unique.
I find the proof from Option 3 the least mysterious and the most satisfying. The proof from Option 2 can be given a combinatorial interpretation using exponential generating functions, but since there is a combinatorial interpretation of differentiation, Option 3 can also be given a combinatorial interpretation.
Free groups
How would you define the free groups $F_n$? Here are a few options:
- The group of words on the letters $g_1, g_1^{-1}, \dots, g_n, g_n^{-1}$, where no $g_i$ appears adjacent to a $g_i^{-1}$, and where multiplication is given by concatenation together with the removal of any adjacent pairs $g_i g_i^{-1}$ or $g_i^{-1} g_i$ (see the code sketch after this list).
- The unique group $F_n$ such that there is a natural isomorphism $\text{Hom}(F_n, G) \cong G^n$ of functors of $G$.
- The fundamental group of a wedge of $n$ circles.
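Here is a small Haskell sketch of the Option 1 construction (an illustration with hypothetical names, not a standard library): a letter is a generator or its inverse, words are lists of letters, and multiplication is concatenation followed by cancellation of adjacent inverse pairs.

```haskell
-- A letter is a generator index together with a sign (+1, or -1 for its inverse).
type Letter = (Int, Int)
type Word'  = [Letter]

inverseOf :: Letter -> Letter
inverseOf (g, s) = (g, negate s)

-- Reduce a word by repeatedly cancelling adjacent pairs g g^{-1} and g^{-1} g.
reduceWord :: Word' -> Word'
reduceWord = foldr push []
  where
    push l (l' : rest) | l' == inverseOf l = rest
    push l rest                            = l : rest

-- Multiplication in the free group: concatenate, then reduce.
mulWord :: Word' -> Word' -> Word'
mulWord u v = reduceWord (u ++ v)

-- Example: (g1 g2) * (g2^{-1} g1) reduces to g1 g1.
example :: Word'
example = mulWord [(1, 1), (2, 1)] [(2, -1), (1, 1)]
```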
Of these options, I would describe Options 1 and 3 as constructions and Option 2 as a specification. Option 2 is, of course, the universal property of the free groups. Option 3 is a standard application of the Seifert-van Kampen theorem (which reveals a close relationship between Option 2 and Option 3, so perhaps Option 3 is in between a construction and a specification).
Many simple properties of the free groups are cleaner to prove from Option 2 than from Option 1. For example, a standard exercise is to show that the free groups $F_n$ and $F_m$ are not isomorphic if $n \neq m$. The shortest proof is to let $G = \mathbb{Z}/2\mathbb{Z}$ in the universal property; then $F_n \cong F_m$ implies $2^n = 2^m$, hence $n = m$. It's less clear where to start from the construction, unless from the construction we prove the universal property.
The standard application of Option 3 is to proving that subgroups of free groups are free via covering space theory (although the use of algebraic topology can be avoided if we instead develop covering space theory for groupoids, as in Brown). This is a statement which does not seem easy to prove from either Option 1 or Option 2 and is a good example of the value of having multiple perspectives on a mathematical object.
Option 2 reveals that statements about the free groups have a strong effect on the rest of group theory. For example, a group $G$ is said to be residually finite if for every non-identity $g \in G$ there is a finite quotient $Q$ of $G$ in which the image of $g$ is not the identity. The free groups $F_n$ are residually finite. This shouldn't be surprising. If the free groups were not residually finite, there would exist an element $w$ of some free group $F_n$ whose image under any homomorphism $F_n \to H$, for $H$ finite, is the identity. But this would be a strange state of affairs: it would mean that $w$ constitutes an identity which holds in every finite group, but not in some infinite group!
Interesting post! I was starting to miss them. I'm sure you've thought of that, but the first example I was expecting when reading the title is the various constructions of ordinary (co)homology theories (say, for spaces with the homotopy type of a CW complex), and the Eilenberg-Steenrod axioms.
I think by “$x\in X$” in the paragraph beginning “Option 4 may be unfamiliar” you mean “$X \in C$”. As an aside, what’s an example of an endofunctor on $\mathrm{Set}$ that doesn’t have an initial algebra? For all reasonable endofunctors that I can think of, it looks to me like the category of algebras is in fact presentable.
On an unrelated note, I find the residual finiteness of free groups utterly mysterious. The eigenvalues of any element of any finite group acting on any vector space are roots of unity, and therefore algebraic integers with unit norm (at all places). I could certainly imagine a situation in which some word held for algebraic integers of unit norm, but not for more general numbers.
Take the double powerset functor $X \mapsto 2^{2^X}$. This doesn't have an initial algebra by Lambek's theorem and Cantor's theorem: Lambek's theorem would make the structure map $2^{2^X} \to X$ of an initial algebra an isomorphism, which Cantor's theorem rules out.
Hmm. Maybe.
Well, I'm certainly happy to believe that the category of algebras for $x \mapsto 2^{2^x}$ has no initial object. (My math doesn't seem to process in comments; what am I doing wrong?) For functors built out of products and coproducts, I think I see how to prove that the category of algebras is presentable, but for these "exponential" functors I will make no such claim.
On WordPress you need to add “latex” after the first dollar sign (apparently other blogs need to write ordinary dollar signs sometimes).
I agree with Theo that the residual finiteness of free groups is mysterious. The only proof that I can think of at the moment is to use the ping-pong lemma to show that $SL_2(\mathbb{Z})$ contains a free subgroup, and then show that $SL_2(\mathbb{Z})$ itself is residually finite, since any non-trivial element remains non-trivial modulo an appropriate prime $p$. Is there a simpler way to do it?
Moreover, the state of affairs with discrete groups often is strange: for example, in finite groups, the equation $x y^2 x^{-1} = y^3$ implies that $x y x^{-1}$ commutes with $y$, but this implication fails in some infinite groups! This is a classical result of Baumslag and Solitar; see this shameless self-promotion (p.6, proof of Thm 1(a)) for a proof along the lines of Theo's comment.
Interesting example! My intuitions about this might not have been as well-calibrated as I thought.
Groupprops has a cute and reasonably short proof: the idea is just to show that one can cook up a suitable homomorphism into a symmetric group. Geometrically one is cooking up a suitable covering space of a wedge of $n$ circles.