Occasionally I see mathematical questions that seem “grammatically incorrect” in some sense.

*Example.* “Is $[0, 1]$ open or closed?”

*Example.* “Is $\{1, 2, 3\}$ a group?”

*Example.* “What’s the Fourier series of $\sin x + \sin \pi x$?”

Here are some sillier examples.

*Example.* “Is a rectangle prime?”

*Example.* “Is $17 \in 3$?”

*Example.* “What’s the Fourier series of the empty set?”

What all of these examples have in common is that they are **type errors**: they are attempts to apply some mathematical process to a kind of mathematical object it was never intended to take as input. If you tried to write a program in some highly mathematical programming language to answer these questions, it (hopefully!) wouldn’t compile.

Mathematical objects are usually not explicitly thought of as having types in the same way that objects in a programming language with a **type system** have types. Ordinary mathematics is supposed to be formalizable within Zermelo-Fraenkel (ZF) set theory, possibly with the axiom of choice, and in ZF every mathematical object is constructed as a set. In that sense they all have the same type. (In particular, the question “is $17 \in 3$?” is perfectly meaningful in ZF! This is one reason not to like ZF as a foundation of mathematics.) However, I think that in practice mathematical objects are implicitly thought of as having types, and that this is a mental habit mathematicians pick up but don’t often talk about.

Instead of thinking in terms of set theory, thinking of mathematical objects as having types allows us to import various useful concepts into mathematics, such as the notions of type safety, typecasting, subtyping, and overloading, that help us make more precise what we mean by a mathematical sentence being “grammatically incorrect.” The rest of this post will be a leisurely discussion of these and other type-based concepts as applied to mathematics in general. There are various categorical ideas here, but for the sake of accessibility we will restrict them to parenthetical remarks.

**An informal description of mathematical types**

Informally, the **type** of a mathematical object describes what kind of object it is.

*Example.* The object $3$ has type $\mathbb{N}$ (natural number).

*Example.* The object $-3$ has type $\mathbb{Z}$ (integer).

*Example.* The object $\frac{2}{3}$ has type $\mathbb{Q}$ (rational number).

*Example.* The object $(1, 2)$ has type $\mathbb{N} \times \mathbb{N}$ (pair of natural numbers).

*Example.* The object $90^{\circ}$ has type $\text{Deg}$ (degree).

*Example.* The object $n \mapsto n^2$ has type $\mathbb{N} \to \mathbb{N}$ (function from natural numbers to natural numbers).

*Example.* The object $\text{True}$ has type $\text{Bool}$ (Boolean).

Some less simple examples:

*Example.* The object $\mathbb{Z}$ has type $\text{Grp}$ (group).

*Example.* The object $\mathbb{R}$ has type $\text{Top}$ (topological space).

*Example.* The object $\pi_1$ has type $\text{Top} \to \text{Grp}$ (function from topological spaces to groups).

Types help you figure out what you can do with a collection of mathematical objects.

*Example.* You can take two objects of type $\mathbb{N}$ and add or multiply them. In other words, there are functions $+, \times : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$.

*Example.* You can take two objects of type $\mathbb{Z}$ and, in addition to being able to add or multiply them, you can also subtract them. In other words, there are functions $+, -, \times : \mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$.

*Example.* You can take an object of type $\mathbb{R}$ and apply a trigonometric function to it. In other words, there are functions $\sin, \cos : \mathbb{R} \to \mathbb{R}$.

*Example.* If $A, B$ are types, you can take an object of type $A$ and another object of type $A \to B$ and apply the latter to the former. In other words, there is a function $A \times (A \to B) \to B$.

The type $\text{Bool}$ occupies a special place in this theory. There are only two objects of this type, namely $\text{True}$ and $\text{False}$, and functions which output values of type $\text{Bool}$ can be used to check whether a property holds or doesn’t hold.

*Example.* You can take a natural number and ask whether it is prime. In other words, there is a function $\text{isPrime} : \mathbb{N} \to \text{Bool}$.

*Example.* If $A$ is a type, you can take two objects of type $A$ and ask whether they are equal. In other words, there is a function

$\text{eq}_A : A \times A \to \text{Bool}$,

which we might also write using the equals sign $=$.

*Example.* You can take two integers and ask whether the first is greater than or equal to the second. In other words, there is a function

$\geq \, : \mathbb{Z} \times \mathbb{Z} \to \text{Bool}$.

As the above examples suggest, there are various ways of combining types to form new types; these are called **type constructors**.

*Example.* An object of the **product type** $A \times B$ is a pair of objects of type $A$ and $B$ respectively.

*Example.* An object of the **sum type** $A + B$ is either an object of type $A$ or an object of type $B$.

*Example.* An object of the **function type** $A \to B$, also sometimes notated $B^A$, is a function which takes as input objects of type $A$ and returns as output objects of type $B$.

(These constructions should look familiar to fans of category theory: they are just the categorical product, categorical coproduct, and categorical exponential in a category of types. In other words, categories of types are bicartesian closed categories.)
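A minimal sketch of the three constructors in Python’s type-hint vocabulary (the example types and names here are hypothetical, chosen only for illustration):

```python
from typing import Callable, Union

# Product type  A × B  ~  tuple[A, B]
# Sum type      A + B  ~  Union[A, B]  (written A | B in Python 3.10+)
# Function type A → B  ~  Callable[[A], B]

Pair = tuple[int, str]       # an object of ℕ × String is a pair
IntOrStr = Union[int, str]   # an object of ℕ + String is one or the other
Succ = Callable[[int], int]  # an object of ℕ → ℕ is a function

p: Pair = (3, "three")
s: IntOrStr = "three"        # also legal: s = 3
f: Succ = lambda n: n + 1

print(f(p[0]))  # applying an ℕ → ℕ object to an ℕ object yields an ℕ object
```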

Using the basic type constructors above we can build more complicated type constructors. For example, given a type $A$ we can form the **list type** $[A]$, which is the type of (finite) lists of elements of type $A$, as follows:

$[A] = 1 + A + A \times A + A \times A \times A + \dots$

(where $1$ is a special type called the **unit type**; its defining feature is that there is exactly one object of unit type, which we will call a **point**). Heuristically the list type can be written as the “geometric series” $[A] = \frac{1}{1 - A}$.

**Notation**

Above we used the notation $+ : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ to denote that $+$ was an object of type $\mathbb{N} \times \mathbb{N} \to \mathbb{N}$. It makes sense to generalize this notation to arbitrary types, so that $a : A$ denotes that $a$ is an object of type $A$. This is a **type declaration**.

**Type safety and type errors**

Besides helping you figure out what you can do with a collection of mathematical objects, types also help you figure out what you **can’t** do. Let’s go through the examples from the beginning of this post with this idea in mind.

*Example.* “Is $[0, 1]$ open or closed?” The functions $\text{isOpen}$ and $\text{isClosed}$ don’t take as input a set or even a topological space; they take as input pairs $(X, Y)$, where $X$ is a topological space and $Y$ is a subspace of $X$ (and output whether or not $Y$ is an open or closed subset of $X$, respectively). In other words, they don’t have type $\text{Top} \to \text{Bool}$ but type $P \to \text{Bool}$, where $P$ denotes the type of such pairs. The question of whether $[0, 1]$ is open or closed depends on whether it’s being viewed as a subset of itself or, say, of $\mathbb{R}$.

*Example.* “Is $\{1, 2, 3\}$ a group?” The function $\text{isGroup}$ doesn’t take as input a set; it takes as input pairs $(M, m)$, where $M$ is a set and $m : M \times M \to M$ is a binary operation on $M$ (a magma). In other words, it doesn’t have type $\text{Set} \to \text{Bool}$ but type $\text{Mag} \to \text{Bool}$, where $\text{Mag}$ denotes the type of magmas.

*Example.* “What’s the Fourier series of $\sin x + \sin \pi x$?” The function $\text{FourierSeries}$ doesn’t take as input an arbitrary function on $\mathbb{R}$; it takes as input a periodic function on $\mathbb{R}$, e.g. with period $2 \pi$, or equivalently a function on the circle $S^1$, and $\sin x + \sin \pi x$ is not periodic with any period (the periods $2 \pi$ and $2$ of its two terms have irrational ratio). In other words, $\text{FourierSeries}$ doesn’t accept inputs of type $\mathbb{R} \to \mathbb{R}$ but inputs of type $S^1 \to \mathbb{R}$.

The sillier examples can be analyzed similarly.

Looking for type errors, or **type checking**, can help you debug a piece of mathematics in the same way that a compiler looks for type errors to debug a piece of code. For example, whenever you see an expression of the form $a = b$, you can check to see if it even makes sense before checking whether or not it’s true by checking whether $a$ and $b$ are of the same type. This is a very general version of dimensional analysis.
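Static type checkers perform exactly this kind of debugging mechanically. A small Python sketch (the function name is hypothetical): annotate a primality test as $\mathbb{N} \to \text{Bool}$, and a checker such as mypy will reject an ill-typed application before the question of truth even arises.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test: isPrime : ℕ → Bool."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(17))    # True: a well-typed application
# is_prime({1, 2, 3})  # "is {1, 2, 3} prime?" -- a checker rejects this,
                       # since set[int] is not int; the question never
                       # reaches the stage of being true or false
```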

Type checking is also one way to check your understanding of a mathematical subject new to you. If you can’t say correctly typed sentences about the subject yet (as determined by other people who know the subject), then you haven’t yet understood the types of the main objects or functions in the subject. For example, if you say “fundamental group of a…” you’d better finish that sentence with either a pointed topological space or a path-connected topological space; otherwise, you haven’t understood an important aspect of the definition of the fundamental group, namely the role of basepoints.

**Typecasting, subtyping, and overloading**

One complication that makes type checking nontrivial is that most mathematical objects are naturally regarded as having more than one type. For example, we said above that $3$ has type $\mathbb{N}$, but it clearly also has type $\mathbb{Z}$, and even $\mathbb{Q}$. One way to describe this situation is that there are **typecasting** functions

$\mathbb{N} \to \mathbb{Z} \to \mathbb{Q} \to \mathbb{R} \to \mathbb{C}$

that convert objects between types. If there is a typecasting operator $A \to B$, then we can use objects of type $A$ as inputs into functions that expect objects of type $B$ provided that we typecast first.

Another way to describe this situation is that some types are **subtypes** of other types; in other words, instead of being a collection of typecasting functions, the above describes a chain of subtype inclusions $\mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C}$. Subtyping allows objects to have more than one type simultaneously, rather than to be of a particular type at a particular point in some computation.
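Python’s numeric tower is a concrete instance of both pictures; this sketch contrasts subtyping (one object inhabiting several types at once) with an explicit typecast:

```python
import numbers
from fractions import Fraction

# Subtyping: the abstract classes Integral ⊂ Rational ⊂ Real ⊂ Complex
# mirror the chain ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ, and the single object 3 inhabits
# every level simultaneously.
n = 3
print(isinstance(n, numbers.Integral))  # True
print(isinstance(n, numbers.Rational))  # True
print(isinstance(n, numbers.Complex))   # True

# Typecasting: an explicit conversion ℤ → ℚ; the integer 3 becomes the
# rational 3/1, whose denominator is 1.
q = Fraction(n)
print(q, q.denominator)
```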

Subtypes have an interesting relationship to function types. If $A'$ is a subtype of $A$, then the function type $B \to A'$ is a subtype of $B \to A$ (a function which outputs objects of type $A'$ also outputs objects of type $A$), but the function type $A' \to B$ is not a subtype but a supertype of $A \to B$ (a function which accepts inputs of type $A$ also accepts inputs of type $A'$). In other words, the construction of function types is covariant in output types but contravariant in input types with respect to subtyping.

(Again, this should be familiar to fans of category theory: this phenomenon reflects the fact that the construction of exponential objects is covariantly functorial in the target object but contravariantly functorial in the source object.)

A third way to describe the situation is that functions in mathematics are **overloaded**, and this is the description that is perhaps closest to mathematical practice. Overloading refers to the practice of defining functions which have the same name but which take different types of inputs. For example, the addition symbol $+$ refers not to one function but to many functions, all of which happen to have the same name, e.g.

$+ : \mathbb{N} \times \mathbb{N} \to \mathbb{N}, \quad + : \mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}, \quad + : \mathbb{Q} \times \mathbb{Q} \to \mathbb{Q},$

and so forth. More generally we might use $+$ for the addition operation in any ring, and even more generally for the group operation in any abelian group. We even used it above to denote sum types!
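In Python the overloading of $+$ is visible directly, and a class can extend the overload further by defining `__add__`; the `Z5` class below is a hypothetical minimal example (integers mod 5 as an abelian group):

```python
from fractions import Fraction

# One symbol, many functions: each call dispatches on the type of its inputs.
print(2 + 3)                            # + : ℕ × ℕ → ℕ
print(Fraction(1, 2) + Fraction(1, 3))  # + : ℚ × ℚ → ℚ
print((1 + 2j) + (3 - 1j))              # + : ℂ × ℂ → ℂ

# Extending the overload to another abelian group: integers mod 5.
class Z5:
    def __init__(self, n):
        self.n = n % 5
    def __add__(self, other):
        return Z5(self.n + other.n)
    def __repr__(self):
        return f"{self.n} (mod 5)"

print(Z5(3) + Z5(4))  # 2 (mod 5)
```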

Exponential notation $a^b$ is perhaps even worse. It might refer to functions of any of the following types:

$\mathbb{N} \times \mathbb{N} \to \mathbb{N}, \quad \mathbb{Q} \times \mathbb{Z} \to \mathbb{Q}, \quad \mathbb{R}_{>0} \times \mathbb{R} \to \mathbb{R}_{>0},$

or even more generally, $a$ might be an element of a group and $b$ might be an integer, or $a$ might be $e$ and $b$ might be an element of a topological ring, and we even used exponential notation above to denote function types! See also this math.SE question.

Overloading seems to confuse students, and I think part of the reason is that mathematicians rarely say explicitly that they are doing it. Overloading isn’t so bad when all the different instances of it are at least meaningfully related to each other, but sometimes concepts in mathematics are overloaded for no reason other than historical accident, e.g. words like “normal” and “regular.” This is even more confusing to students: sometimes the use of the same word in two different contexts is because of some meaningful relationship, e.g. normal extensions and normal subgroups, and sometimes it just isn’t. See also this MO question.

**Recursive types**

In type theory it is possible to define some types **recursively**, in terms of themselves, rather than directly in terms of other types. A surprising number of important kinds of mathematical objects can be defined this way.

*Example.* The type $\mathbb{N}$ of natural numbers can be defined recursively as

$\mathbb{N} = 1 + \mathbb{N};$

that is, a natural number is either a point (namely $0$) or is another natural number (namely its predecessor).
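The definition transcribes directly into code (compare Haskell’s `data Nat = Zero | Succ Nat`); here is a Python sketch in which the representation is hypothetical: the point is `None`, playing the role of $0$, and the successor wraps a predecessor.

```python
# ℕ = 1 + ℕ: a natural number is either the point (here None, i.e. zero)
# or another natural number, its predecessor.
Zero = None

def succ(n):
    """Wrap a natural number, producing its successor."""
    return ("S", n)

def to_int(n):
    """Interpret the recursive representation as an ordinary integer."""
    return 0 if n is Zero else 1 + to_int(n[1])

three = succ(succ(succ(Zero)))
print(to_int(three))  # 3
```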

*Example.* More generally, the list type $[A]$ can be defined recursively as

$[A] = 1 + A \times [A];$

that is, a list of elements of type $A$ is either a point or it is an element of type $A$ (namely the beginning of the list) together with a list of elements of type $A$ (namely the rest of the list). In particular, $\mathbb{N} = [1]$ is just the type of lists of points. Note that this is exactly the relationship one would expect to be satisfied by the “geometric series” $\frac{1}{1 - A} = 1 + A \times \frac{1}{1 - A}$.

*Example.* The type $T$ of (rooted, full binary) trees can be recursively defined as

$T = 1 + T \times T;$

that is, a tree is either a point or it is a pair of trees (namely the pair of trees obtained by removing its root).

*Example.* A version of the type $\text{Set}$ of sets can be recursively defined as

$\text{Set} = (\text{Set} \to \text{Bool});$

that is, a set is a function on sets which returns either true (for the elements it contains) or false (for the elements it does not contain).

*Example.* The type $\text{Game}$ of (not necessarily finite) **combinatorial games** can be recursively defined as

$\text{Game} = (\text{Game} \to \text{Bool}) \times (\text{Game} \to \text{Bool});$

that is, a game $G$ is a pair of functions on games, which we might call $G_L$ and $G_R$. Strictly speaking, a combinatorial game should really be called a game position, since it describes some particular moment in a game (e.g. the beginning of a game of chess) rather than what is colloquially referred to as a game (e.g. chess). We think of games as being played between two players, Left and Right, and at any game position Left has some possible **options** of game positions he can move the game to, namely the positions $H$ such that $G_L(H)$ returns true, and Right has some possible options of game positions she can move to, namely the positions $H$ such that $G_R(H)$ returns true.

Conway noticed that combinatorial games resemble a generalization of both ordinals (which can be described in terms of a set of ordinals) and Dedekind cuts (which are described by a pair of sets of rationals) and used this relationship to define a large class of numbers using games; see *On Numbers and Games* for details.

The type we defined above, $\text{Set} = (\text{Set} \to \text{Bool})$, can also be thought of as the type of **impartial games**, which are games where the available moves don’t depend on which player is playing.

(For fans of category theory, recursive types are initial algebras relative to a suitable endofunctor of a category of types.)

Functions which take as input a recursive type can also be defined recursively. Examples involving the natural numbers should already be familiar; here are some other examples.

*Example.* The **length** of a list can be defined recursively as follows: the length of a point is $0$. If the list is not a point, it consists of an element and a sublist, and then the length of the list is one greater than the length of the sublist. (I would write this down in pseudocode but I don’t know a clean way to display pseudocode on a WordPress site.)
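The recursion can be displayed concretely after all; here is a Python sketch, with a list over $A$ represented according to $[A] = 1 + A \times [A]$ as either `None` (the point) or a `(head, tail)` pair:

```python
def length(lst):
    """Length of a recursively-represented list: None | (head, tail)."""
    if lst is None:        # the unit case: the empty list has length 0
        return 0
    head, tail = lst       # the A × [A] case
    return 1 + length(tail)

abc = ("a", ("b", ("c", None)))
print(length(abc))  # 3
```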

*Example.* More generally, for any two types $A, B$ there is a function

$\text{map} : (A \to B) \to ([A] \to [B])$

called **map** which takes as input a function $f : A \to B$ and returns as output a function $[f] : [A] \to [B]$ defined recursively as follows: if the input list is empty, so is the output list. Otherwise, the input list consists of an object of type $A$ together with a sublist, and $[f]$ applies $f$ to the object and then $[f]$ to the sublist, and these together constitute the output list. The length of a list is the function $[A] \to [1] = \mathbb{N}$ obtained by applying $\text{map}$ to the unique function $A \to 1$.
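The same recursive list representation gives a direct transcription of map (the helper name `rec_map` is hypothetical):

```python
def rec_map(f):
    """map : (A → B) → ([A] → [B]), lists represented as None | (head, tail)."""
    def mapped(lst):
        if lst is None:                 # empty input list -> empty output list
            return None
        head, tail = lst
        return (f(head), mapped(tail))  # apply f to the head, recurse on the tail
    return mapped

doubled = rec_map(lambda n: 2 * n)((1, (2, (3, None))))
print(doubled)  # (2, (4, (6, None)))
```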

(Fans of category theory will recognize this as essentially the statement that the list constructor is a functor. There are even programming languages like Haskell which explicitly recognize this fact and put it to good use.)

*Example.* The **height** of a tree can be defined recursively as follows: the height of a point is $0$. If the tree is not a point, it has two subtrees, and then the height of the tree is one greater than the maximum of the heights of its subtrees.
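The height recursion, with a tree represented according to $T = 1 + T \times T$ as either `None` (a point) or a pair of subtrees:

```python
def height(tree):
    """Height of a recursively-represented binary tree: None | (left, right)."""
    if tree is None:       # a point has height 0
        return 0
    left, right = tree
    return 1 + max(height(left), height(right))

t = ((None, None), None)   # a root whose left subtree goes one level deeper
print(height(t))  # 2
```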

*Example.* Games which are guaranteed to terminate can be organized into four disjoint classes: a game is

- **positive** if Left wins no matter who goes first,
- **negative** if Right wins no matter who goes first,
- **zero** if the second player to move wins,
- **fuzzy** if the first player to move wins.

Here “wins” means “wins according to the normal play condition,” where the first player who cannot make a move loses. The above classes correspond to four functions

$\text{isPositive}, \text{isNegative}, \text{isZero}, \text{isFuzzy} : \text{Game} \to \text{Bool}$

which can be recursively defined in terms of each other as follows:

- A game is positive iff at least one of Left’s options is positive or zero and all of Right’s options are positive or fuzzy.
- A game is negative iff all of Left’s options are negative or fuzzy and at least one of Right’s options is negative or zero.
- A game is zero iff none of Left’s options are positive or zero and none of Right’s options are negative or zero.
- A game is fuzzy iff at least one of Left’s options is positive or zero and at least one of Right’s options is negative or zero.
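The four mutually recursive definitions transcribe directly into code. A sketch for finite games, each represented (hypothetically) as a pair `(left_options, right_options)` of lists of games:

```python
def is_positive(g):
    L, R = g
    return (any(is_positive(gl) or is_zero(gl) for gl in L)
            and all(is_positive(gr) or is_fuzzy(gr) for gr in R))

def is_negative(g):
    L, R = g
    return (all(is_negative(gl) or is_fuzzy(gl) for gl in L)
            and any(is_negative(gr) or is_zero(gr) for gr in R))

def is_zero(g):
    L, R = g
    return (not any(is_positive(gl) or is_zero(gl) for gl in L)
            and not any(is_negative(gr) or is_zero(gr) for gr in R))

def is_fuzzy(g):
    L, R = g
    return (any(is_positive(gl) or is_zero(gl) for gl in L)
            and any(is_negative(gr) or is_zero(gr) for gr in R))

zero = ([], [])          # no moves for anyone: the second player wins
one = ([zero], [])       # Left has a move, Right has none: Left wins
star = ([zero], [zero])  # whoever moves first moves to zero and wins

print(is_zero(zero), is_positive(one), is_fuzzy(star))  # True True True
```

The recursion terminates because each call is on a strictly simpler position, which is exactly the "guaranteed to terminate" hypothesis above.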

(For fans of category theory, recursively defined functions are applications of the universal property of initial algebras.)

Recursive types are another reason to think of mathematics as having a type system: they allow for a much richer notion of recursion than is usually explicitly mentioned in mathematics, where we usually make do with induction (recursion on $\mathbb{N}$) or sometimes transfinite induction (recursion on the ordinals, a modified version of $\mathbb{N}$). Recursive types, by contrast, allow for a general notion of structural induction. Often we can reduce the cases of structural induction we need to ordinary induction (e.g. if we want to perform structural induction over trees we can perform ordinary induction on their heights), but structural induction is conceptually cleaner (e.g. we can perform structural induction over trees without singling out a particular function called height).

**Conclusion**

Although mathematics is not often explicitly described as having a type system in theory, it seems both useful and accurate to describe mathematics as having a type system in practice. This gives us a language for understanding certain kinds of mathematical errors (type errors) as well as understanding certain implicit mathematical practices (overloading). Types also have an interesting mathematical structure and provide a richer language for defining and working with recursively defined mathematical objects than is commonly taught.


on March 13, 2016 at 3:37 pm | sma045: My hobby programming language aims to do types the way mathematicians informally do. It makes some useful distinctions. Subtyping is declared using isA, so that if Int isA Rational then Int must automatically have every property of Rational (so you can ask for the denominator of an Int and hopefully get 1). Conversions are declared with convertsTo, and they allow normal terseness, so that an Assignable(X) convertsTo X means that you can provide a mutable Int where an Int is required, but there is no suggestion that a mutable Int is the same thing as an Int [it is very different, being a box that you can put an Int into and later take out]. Finally, overloading is used to allow the same name to be used in different contexts (though this could be done with convertsTo also). The effect of this is that values belong to multiple types, so that 3:Int, but also 3:Rational [I think this is how mathematicians think informally, or do you disagree?].

on February 21, 2016 at 2:52 pm | James Waddington: Hate to break it to you, but there are a lot of mathematicians who view a natural number as the set of smaller natural numbers. So $17 \in 3$ makes sense and evaluates to false.

on September 5, 2015 at 12:02 am | David: You first say that $\mathbb{Z}$ is a group, but later say that $\{1,2,3\}$ is not a group. Which is it? In the former you probably want the operations to be attached to the group, just like methods of a class in an object-oriented programming language, but if so then you cannot use $\mathbb{Z}$ as a set. So to be consistent with normal usage you should say $(\mathbb{Z},+)$ is a group.

And about overloading, type checking is nowhere near enough. One frequent example is: What is $(-1)^{1/3}$? You can’t tell, right? Similarly in logic, where almost all textbooks carelessly use “$\neg φ$” to mean the formula obtained by adding the logical negation symbol in front of the formula $φ$, resulting in ambiguity all over. For example, what is $\{ φ : \neg φ \in S \}$?

I would say that we cannot complain about people making type errors if we do not ourselves stick strictly to an unambiguous type system. =)

on September 5, 2015 at 8:58 am | Qiaochu Yuan: Yes, I should say that $(\mathbb{Z}, +)$ is a group. But I object strongly to the claim that I can’t complain about type errors if I’ve ever made a type error myself. I’m not describing a social rule and condemning people for not following it: I’m pointing out what I think is a real feature of mathematics that I think should be pointed at more explicitly. Types are a tool to think with, not a tool to criticize with.

on September 18, 2015 at 6:04 am | David: Ah okay. I thought you were advocating enforcing type checking, which I would agree with, since most students I’ve taught make not only logical errors but errors that do not even make type sense. For example, often students write things like “If n is true, …” when ‘using’ induction. So for me personally I go all the way and remove ambiguity completely in my written proofs. What I say or draw in diagrams can be informal and sloppy on the other hand, because they are just there to guide the intuition, and they complement the strict formal proof.

on July 7, 2014 at 12:13 pm | isomorphismes: I think you’re totally right in what you say about type errors. Although at the same time don’t you think “weird questions” (which come not from a lack of understanding, but which are asked after grokking the concepts) can actually be quite interesting ones? For example 3 ∈ 17 clearly works combinatorially as well as in Finsler’s “generalised numbers”. It’s amusing (at least for me) to try to think of a way to do something “backwards” — perhaps in the sense that one looks for a field with one element.

The notion of a prime rectangle could be related to some ambient context perhaps, or some relationship between side-lengths and areas.

Anyway I do agree with you; type errors are a good tool for teachers to identify when a student totally missed the boat.

on July 3, 2013 at 3:05 am | Christoph: I’m not sure I understand what you wrote about recursive definitions of types.

The list type definition [A] = 1 + A x [A] is fine. I see how that defines 1 to be a list, and any pair of an object of type A together with a list of type A as lists. So for a, b, c of type A I can conclude that 1, (a, 1), (b, (a, 1)), (c, (b, (a, 1))) etc. are of type [A].

However, I don’t see what the recursive definition of the natural numbers Nat = 1 + Nat tells me, except that 1 is of type Nat. Since for any type A, if 1 is of type A it holds that A = 1 + A, since any object of type A is either 1 or any other object of type A. I don’t see how this is supposed to define the type of natural numbers, when the notion of taking successors is not even mentioned. In Haskell, we can define natural numbers as “data Nat = Zero | Succ Nat” where Succ is the type constructor modelling “taking the successor”, but in your description this doesn’t appear.

What am I missing here?

on July 3, 2013 at 3:47 am | Qiaochu Yuan: You’re not missing anything. I just omitted the analogue of “Succ” here. Note that if you’re comfortable with lists, then Nat is precisely the list type [1]. Would Nat = 1 + 1 x Nat have made you happier? (The point of the definition is that a natural number is specified either by specifying 1 or by specifying another natural number, namely its predecessor. The notion of predecessor is built into this definition so I don’t have to give it a name.)

on July 3, 2013 at 5:33 am | Christoph: Yes, Nat = 1 + 1 × Nat would have made me happier indeed. This generates all the elements 1, (1,1), (1,(1,1)) etc., which make a perfect model of natural numbers. Thanks for the clarification.

on June 2, 2013 at 7:37 pm | Scott Morrison: It’s nice to think about “dependent types”, as well, in which the type refers to some specific object.

As an example, you might consider Subspace(X) to denote all subsets of a fixed topological space X. Then the isClosed function which you describe above could be thought of as having the type indicated by the following “method signature”

isClosed(X: Top)(Y: Subspace(X)): Bool

In order to use this function with two topological spaces X and Y, one first has to establish that Y really has type Subspace(X).

I’m not sure how people usually denote types like this. As a little advertisement, the programming language Scala allows expressing this sort of thing. (And I’ve used it quite a bit, and really enjoyed the powerful automatic type-checking I get as a result.)

e.g. you might see something like the following:

trait Top {

trait Subspace extends Top

def verifySubspace(y: Top): this.Subspace = { … }

}

def isClosed(x: Top)(y: x.Subspace): Boolean = { … }

on June 2, 2013 at 11:38 pm | Qiaochu Yuan: Yes, I’m vaguely planning on writing about dependent types once I understand them better.

on June 26, 2013 at 1:11 am | Daniel: You might want to check out Homotopy Type Theory: http://homotopytypetheory.org/2013/03/08/homotopy-theory-in-homotopy-type-theory-introduction/

on June 26, 2013 at 11:10 am | Qiaochu Yuan: I am!


on May 31, 2013 at 9:49 am | Rod Carvalho: You can use simple HTML. For example, here’s a snippet of Haskell code:

<pre style="padding-left: 30px; background: #EEEEEE;">

f :: Integer -> Integer

f x = x^2

</pre>

which will look like this on WordPress:

on May 31, 2013 at 12:00 pm | Qiaochu Yuan: Thanks! Somehow I had forgotten you could just use HTML in WordPress posts.

on May 29, 2013 at 7:26 am | Miss Fibration: You might enjoy reading B. Jacobs’s “Categorical Logic and Type Theory”. The general theme of this book is “A logic is always a logic over a type theory”.

on May 28, 2013 at 9:39 pm | John Baez: You’re advertising the virtues of type theory, while barely hinting that this is also category theory: types are objects, typecasting comes from morphisms, etc. Is your plan to sell people on type theory and then reveal to them that this is also category theory?

on May 28, 2013 at 10:02 pm | Qiaochu Yuan: Maaaybe. But I learned a long time ago not to promise future posts, so if I follow this post up then I’ll make that clear, and if not it can still stand alone even for people who aren’t comfortable with category theory.

on May 28, 2013 at 10:06 pm | Qiaochu Yuan: (Part of the reason I wrote this post was to have something to reference when I complain on math.SE about someone’s question being a type error, and that reason stands independent of any promotion of category theory I might also want to do.)

on May 28, 2013 at 5:16 pm | Keshav Kini (@ffee_machine): I was a little surprised at where you went with your third example. I had expected something to the effect that $\sin x + \sin \pi x$ isn’t a function, and that we should instead say something like $x \mapsto \sin x + \sin \pi x$ or $\lambda x . \sin x + \sin \pi x$.

on May 28, 2013 at 5:28 pm | Qiaochu Yuan: That’s an abuse of notation I’m perfectly willing to tolerate. It’s not the most important problem with the question.

on May 28, 2013 at 1:24 pm | Joshua Zelinsky: I’m not sure I like the $[0,1]$ example. In that context, it seems perfectly ok to talk about the implied topology.

on May 28, 2013 at 1:31 pm | Qiaochu Yuan: The problem isn’t the implied topology, it’s the lack of an explicit ambient space in which $[0,1]$ sits. A student who asks that question hasn’t understood something important about the words “open” and “closed,” namely that they depend on an ambient space.

on May 28, 2013 at 12:50 pm | Zhen Lin: Note that the initial “algebra” for the powerset endofunctor is the universe of well-founded sets! Of course, there are various set-theoretic issues, most obvious of which is a technicality about size. If I understand correctly, this observation underlies the so-called “algebraic set theory”.

on May 28, 2013 at 1:06 pm | Qiaochu Yuan: Yes, I am cheerfully ignoring all set-theoretic issues. I think it ought to be true that there is a topos in which the powerset functor has an initial algebra (right?), and that’s good enough for me.

on May 28, 2013 at 1:45 pm | Zhen Lin: Sure, the degenerate topos. (This is related to a [previous question](http://mathoverflow.net/questions/128439/does-this-kind-of-endofunctor-ever-have-an-initial-algebra/128443#128443) of yours.)

The way to get around the problem is to restrict the size of the “subsets”; for instance, if there is an inaccessible cardinal κ then we can look at the endofunctor that sends every set to its set of subsets of cardinality < κ, and then the initial algebra for this endofunctor will be the Grothendieck universe V_κ.

on May 28, 2013 at 1:56 pm | Qiaochu Yuan: Oh, right. I had misremembered what you said in that answer.