
## The quaternions and Lie algebras I

Someone who has just read the previous post on how exponentiating quaternions gives a nice parameterization of $\text{SO}(3)$ might object as follows: “that’s nice and all, but there has to be a general version of this construction for more general Lie groups, right? You can’t always depend on the nice properties of division algebras.” And that someone would be right. Today we’ll begin to describe the appropriate generalization, the exponential map from a Lie algebra to its Lie group. To simplify the exposition, we’ll restrict to the case of matrix groups; that is, nice subgroups of $\text{GL}_n(\mathbb{F})$ for $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$, which will allow us to mostly avoid differential geometry.

The theory of Lie groups and Lie algebras is regarded as one of the most beautiful in mathematics, and it is also fundamental to many areas, so today’s post is an extended discussion motivating the definition of a Lie algebra. In the next post we will actually do something with them.

For studying the hydrogen atom, our interest in Lie algebras comes from the following. If a Lie group $G$ acts smoothly on a smooth manifold $M$, its Lie algebra acts by differential operators on the space $C^{\infty}(M)$ of smooth functions, and these differential operators are the “infinitesimal generators” which give us conserved quantities for the evolution of a quantum system on $M$ (in the case that $G$ consists of symmetries of the Hamiltonian). Despite the fact that Lie algebras are commonly sold as a tool for understanding Lie groups, arguably in quantum mechanics the Lie algebra of symmetries of a Hamiltonian is more fundamental. This matters in situations where a Lie algebra exists without an associated Lie group.

### Symmetries

To describe what intuitive notion a Lie algebra is supposed to capture, let’s return to the intuitive notion a group is supposed to capture. There are several ways to convince yourself that the group axioms perfectly capture what is intuitively meant by (global) symmetry. First, Cayley’s theorem guarantees that abstract groups (sets with a binary operation satisfying certain axioms) are the same thing as concrete groups (permutations of some set, generally intended to preserve some structure). Second, the group axioms correspond perfectly to the axioms an equivalence relation satisfies: given any group $G$ acting on a set $X$, define the relation $x \sim y \Leftrightarrow \exists g : x = gy$ on $X$. Then

1. The fact that $G$ is closed under multiplication is equivalent to the fact that $\sim$ is transitive.
2. The fact that $G$ has an identity is equivalent to the fact that $\sim$ is reflexive.
3. The fact that $G$ has inverses is equivalent to the fact that $\sim$ is symmetric.

And, of course, function composition is also associative. In fact, groups, group actions, and equivalence relations all have a common generalization in groupoids, which are in turn special categories. So there is an overwhelming amount of abstract evidence that the group axioms are “right.”

When studying abstract groups, a general method is to study them via generators and relations. If the group is finitely generated or, better yet, finitely presented, this gives a completely finitary description of the group. However, the groups we are currently interested in, Lie groups, are (a priori) infinitary objects, and there does not seem to be any hope of describing such a group in terms of generators and relations.

However, this attitude only makes sense if we assume that our groups do not have any extra structure; in particular, we are assuming that they are discrete. The Lie groups we care about are far from discrete, so there are topological constraints on morphisms between Lie groups. These constraints are strongest if we assume that our Lie groups are connected (since we are ignoring the discreteness that comes from the group of connected components). In fact, the following holds.

Theorem: Any connected topological group $G$ is generated by a neighborhood $U$ of the identity. Hence any continuous homomorphism out of $G$ is determined by its restriction to any such neighborhood $U$.

Proof. Let $U$ be a neighborhood of the identity, which we may take to be open, and let $H$ be the subgroup generated by $U$. Since $H$ is a union of translates of $U$, it is open. If $x \in G$ has the property that every neighborhood of $x$ intersects $H$, then in particular $x U$ intersects $H$, hence $x \in H$ (if $xu \in H$ with $u \in U$, then $x = (xu) u^{-1} \in H$); so $H$ is closed. Since $H$ is open, closed, and non-empty and $G$ is connected, $H$ must be all of $G$.

So morphisms out of connected topological groups are determined by what’s happening in arbitrarily small neighborhoods of the identity: informally speaking, they are determined by “infinitesimal” elements of $G$. The goal of the definition of a Lie algebra is to make this notion of infinitesimal symmetry precise.
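As a concrete illustration of the theorem (the helper `rot` below is my own, not from the post), here is a small numpy sketch showing that every element of the connected group $\text{SO}(2)$ is a product of elements from an arbitrarily small neighborhood of the identity:

```python
import numpy as np

def rot(theta):
    """The rotation by angle theta, an element of SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Split a large rotation into n factors, each as close to the
# identity as we like: this is the theorem in action for SO(2).
theta, n = 2.0, 1000
small = rot(theta / n)                      # lies in a tiny neighborhood of I
product = np.linalg.matrix_power(small, n)  # product of n small rotations
assert np.allclose(product, rot(theta))
```

Of course $\text{SO}(2)$ is abelian; the point of the theorem is connectedness, not commutativity.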

### Derivations

What is an infinitesimal symmetry? As a first approximation, we want to consider a continuous analogue of homomorphisms $f : \mathbb{Z} \to G$, which correspond to applying the element $f(1) \in G$ over and over again. The appropriate continuous analogue is a one-parameter subgroup $f : \mathbb{R} \to G$ (where for Lie groups we will assume the morphism is smooth, although for continuous homomorphisms this turns out to be automatic). Just as $f : \mathbb{Z} \to G$ is determined by $f(1)$, by the above Theorem $f : \mathbb{R} \to G$ is determined by $f([0, \epsilon))$ for arbitrary $\epsilon > 0$. Our first approximation of an infinitesimal symmetry is the thing which generates a one-parameter subgroup.

Here is a definition which does not require any analysis to state, but which requires a little analysis to motivate. Let $A$ be an algebra, not necessarily commutative (say over $\mathbb{R}$), and let $\phi : \mathbb{R} \to \text{Aut}(A)$ be a one-parameter group of automorphisms of $A$. (For example, we can choose $A = C^{\infty}(M)$ for $M$ a smooth manifold and $\phi$ the map induced by a one-parameter group of automorphisms of $M$.) Let us suppose that the derivative

$\displaystyle d \phi_p = \lim_{h \to 0} \frac{\phi_{p+h} - \phi_p}{h}$

exists in some suitable sense (in the space of $\mathbb{R}$-linear endomorphisms of $A$). Since $\phi_{p+h} = \phi_p \phi_h$, this is just $d \phi_p = \phi_p d \phi_0$. The map $d \phi_0$ will be our model for the “infinitesimal generator” of $\phi_p$. Since $\phi_p$ is linear, so is $d \phi_0$. It’s also not hard to see that $d \phi_0$ annihilates constants, since each $\phi_p$ fixes them. Finally, since $\phi_p(fg) = \phi_p(f) \phi_p(g)$, we see that

$\displaystyle d\phi_0(fg) = d \phi_0(f) g + f d \phi_0(g)$

(the product rule). These properties define a(n $\mathbb{R}$-) derivation of $A$, which is a map $D : A \to A$ satisfying the following axioms:

1. $D(f + g) = D(f) + D(g)$,
2. $c \in \mathbb{R} \Rightarrow D(c) = 0$,
3. $D(fg) = D(f) g + f D(g)$.

Note that the second and third axioms together imply that $D$ is $\mathbb{R}$-linear: $D(cf) = D(c) f + c D(f) = c D(f)$.

Example. Let $A = C^{\infty}(\mathbb{R})$ and let $\phi_p : A \to A$ be the translation map $\phi_p(f(t)) = f(t + p)$. Then $d \phi_0 : A \to A$ is the ordinary derivative $d \phi_0(f(t)) = f'(t)$. We might write $d \phi_0 = \frac{\partial}{\partial t}$.

This example also works if we replace $A$ with the algebra of polynomials $\mathbb{R}[t]$.
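The first example can be checked mechanically with sympy (a sketch using a sample polynomial of my choosing): differentiating the translated function in $p$ at $p = 0$ recovers the ordinary derivative.

```python
import sympy as sp

t, p = sp.symbols('t p')
f = t**3 + 2*t                          # a sample element of R[t]

phi_p = f.subs(t, t + p)                # phi_p(f)(t) = f(t + p)
d_phi_0 = sp.diff(phi_p, p).subs(p, 0)  # differentiate in p at p = 0
assert sp.simplify(d_phi_0 - sp.diff(f, t)) == 0   # equals f'(t)
```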

Example. Let $A = C^{\infty}(\mathbb{R}^n)$ and let $\phi_p : A \to A$ be the translation map $\phi_p(f(v)) = f(v + pw)$ where $w \in \mathbb{R}^n$ is fixed. Then $d \phi_0 : A \to A$ is the directional derivative $d \phi_0(f(v)) = \nabla_w f(v)$ along $w$. If $\mathbb{R}^n$ is given coordinates $x_1, ... x_n$ and $w$ is the unit vector in the direction $x_i$, we might write $d \phi_0 = \frac{\partial}{\partial x_i}$.

This example also works if we replace $A$ with the algebra of polynomials $\mathbb{R}[x_1, ... x_n]$.

Example. Let $A = C^{\infty}(S^1)$ and let $\phi_p : A \to A$ be the rotation map

$\displaystyle \phi_p(f(x, y)) = f(x \cos p - y \sin p, x \sin p + y \cos p)$.

Then $d \phi_0 = x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x}$. Once more this example works if we replace $A$ with the algebra of polynomials $\mathbb{R}[x, y]/(x^2 + y^2 - 1)$, but we can also replace $A$ with the algebra of holomorphic functions on an annulus about $S^1 = \{ z \in \mathbb{C} : |z| = 1 \}$ restricted to $S^1$, whereupon $\phi_p(f(z)) = f(e^{ip} z)$ and $d \phi_0 = iz \frac{\partial}{\partial z}$ (and the same is true for the corresponding algebra of polynomials $\mathbb{C}[z, z^{-1}]$).
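The rotation example can be verified the same way (again a sympy sketch with a sample polynomial): the derivative at $p = 0$ of the rotated function equals $x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x}$ applied to it.

```python
import sympy as sp

x, y, p = sp.symbols('x y p')
f = x**2 * y                            # a sample element of R[x, y]

# phi_p rotates the arguments of f by the angle p.
phi_p = f.subs({x: x*sp.cos(p) - y*sp.sin(p),
                y: x*sp.sin(p) + y*sp.cos(p)}, simultaneous=True)
d_phi_0 = sp.diff(phi_p, p).subs(p, 0)
generator = x*sp.diff(f, y) - y*sp.diff(f, x)   # (x d/dy - y d/dx) f
assert sp.simplify(d_phi_0 - generator) == 0
```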

Example. If $A$ is any algebra such that the exponential $e^H, H \in A$ makes sense (for example a Banach algebra), then let $\phi_p(f) = e^{pH} f e^{-pH}$. (We think of $A$ as the algebra of observables of some system and $\phi_p$ as changing coordinates via the symmetry generated by $H$; this is relevant to the Heisenberg picture of quantum mechanics). Then

$d \phi_0(f) = Hf - fH = [H, f]$.

This example is extremely important to keep in mind.
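Here is a minimal numerical sketch of this last example. To avoid needing a general matrix exponential routine, I take $H$ nilpotent with $H^2 = 0$, so that $e^{pH} = I + pH$ exactly; a finite difference of $\phi_p$ at $p = 0$ then recovers the commutator $[H, f]$.

```python
import numpy as np

H = np.array([[0., 1.], [0., 0.]])    # nilpotent: H @ H = 0, so e^{pH} = I + pH
f = np.array([[1., 2.], [3., 4.]])    # an arbitrary "observable"
I = np.eye(2)

def phi(p):
    """phi_p(f) = e^{pH} f e^{-pH}, computed exactly since H^2 = 0."""
    return (I + p*H) @ f @ (I - p*H)

h = 1e-6
finite_diff = (phi(h) - phi(0.0)) / h
commutator = H @ f - f @ H            # [H, f]
assert np.allclose(finite_diff, commutator, atol=1e-4)
```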

Generally speaking, if $A = C^{\infty}(M)$ and $\phi_p$ comes from a one-parameter group of automorphisms $g_p : M \to M$ (via the relation $\phi_p(f(t)) = f(g_p(t))$), then $d \phi_0$ is nothing more than the derivative along the vector field defined by $g_p$, which corresponds to its flow. It turns out that every derivation of $C^{\infty}(M)$ comes from a vector field in this way, so we can think of derivations in general as an algebraic generalization of vector fields; that is, we should think of a derivation $A \to A$ as a “vector field on $\text{Spec } A$.”

We are now ready to give a more-or-less precise definition of an infinitesimal symmetry. For an algebra $A$ over a field $k$, a (first-order) infinitesimal symmetry of $A$ is a homomorphism $A \to A[\epsilon]/\epsilon^2$ of $k$-algebras such that the composition of the above homomorphism with the natural quotient $A[\epsilon]/\epsilon^2 \to A$ is the identity. (If $A$ is noncommutative, we require that $\epsilon$ is central.) In other words, an infinitesimal symmetry is a first-order deformation of the identity symmetry.

Explicitly, an infinitesimal symmetry is given by a homomorphism $\phi(a) = a + \epsilon D(a)$ such that $\phi(af + bg) = a \phi(f) + b \phi(g)$, hence $D$ is $k$-linear, and such that $\phi(fg) = \phi(f) \phi(g)$, hence

$fg + \epsilon D(fg) = (f + \epsilon D(f))(g + \epsilon D(g)) = fg + \epsilon (D(f) g + f D(g))$.

Hence $D$ is precisely a $k$-derivation.
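One can compute directly in $A[\epsilon]/\epsilon^2$; the following minimal Python sketch (the class name `Dual` is mine) implements dual-number arithmetic and checks that the map $f \mapsto f(x_0) + \epsilon f'(x_0)$ is multiplicative precisely because of the product rule. This is exactly the mechanism behind forward-mode automatic differentiation.

```python
class Dual:
    """An element a + b*eps of k[eps]/eps^2, with eps^2 = 0."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __mul__(self, other):
        # The eps^2 cross term is killed, leaving exactly the
        # product rule in the eps coefficient.
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

x0 = 2.0
f = Dual(x0**2, 2*x0)        # (f(x0), f'(x0)) for f(x) = x^2
g = Dual(x0**3, 3*x0**2)     # (g(x0), g'(x0)) for g(x) = x^3
fg = f * g
assert fg.a == x0**5         # (fg)(x0)
assert fg.b == 5*x0**4       # (fg)'(x0), forced by the product rule
```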

If $A$ is commutative and $e_p : A \to k$ is (the evaluation map at) a $k$-point $p$ of $\text{Spec } A$, then composing an infinitesimal symmetry $\phi : A \to A[\epsilon]/\epsilon^2$ with $e_p$ gives a morphism $e_p \circ \phi : A \to k[\epsilon]/\epsilon^2$. We call this a derivation at $p$, and it is the appropriate algebraic generalization of a tangent vector at $p$; this is the algebraic sense in which a derivation gives rise to a vector field.

Indeed, this is one way to define the Zariski tangent space at a point of a variety. In this case we have $A = k[x_1, ... x_n]/(f_1, ... f_m)$, so a derivation at $p = (p_1, ... p_n)$ is nothing more than a choice $e_p \circ \phi(x_i) = p_i + \epsilon v_i$ such that $f_j(p + \epsilon v) = 0$ for all $j$, or

$\displaystyle f_j(p + \epsilon v) = f_j(p) + \epsilon \sum_{i=1}^n \frac{\partial f_j}{\partial x_i}(p) v_i = 0$

hence $(v_1, ... v_n)$ is a vector orthogonal to the gradient of every $f_j$ at $p$. More abstractly, an infinitesimal symmetry of a commutative $k$-algebra can be written as a morphism $A \to A \otimes_k k[\epsilon]/\epsilon^2$, or in the other direction as a morphism

$\displaystyle \text{Spec } A \times_k \text{Spec } k[\epsilon]/\epsilon^2 \to \text{Spec } A$

where $\times_k$ is the fiber product over $k$. (The tensor product over $k$ is the coproduct in the category of commutative $k$-algebras. It dualizes to give the fiber product, which is therefore the product in the category of schemes over $k$.) Here just as $\text{Spec } k$ is the universal $k$-point, $\text{Spec } k[\epsilon]/\epsilon^2$ is the universal $k$-point with a tangent vector, so an infinitesimal symmetry is just a tangent direction in the automorphism group of $\text{Spec } A$. Compare to the algebraic definition of a one-parameter subgroup, which is a morphism

$\text{Spec } A \times_k \text{Spec } k[t] \to \text{Spec } A$

or, equivalently, a morphism $A \to A[t]$ (with the same restriction as before on the quotient $A[t] \to A$). Given such a morphism we get an infinitesimal symmetry by composing with the quotient $A[t] \to A[t]/t^2$. So we see that a tiny bit of algebraic geometry provides an elegant algebraic language for going from one-parameter subgroups to infinitesimal symmetries.

### Local nilpotence

Given a derivation $D : A \to A$ we would like to promote it to a one-parameter subgroup $\phi_p : A \to A$ of automorphisms such that $d \phi_0 = D$, hence such that $\frac{d}{dp} \phi_p = D \phi_p$. This differential equation at least formally admits the solution

$\displaystyle \phi_p = e^{Dp} = \sum_{n=0}^{\infty} \frac{D^n p^n}{n!}$

which is defined at various levels of rigor depending on how much structure $A$ has and how well-behaved $D$ is on it. (It is, at the very least, a morphism $A \to A[p]/p^n$ for every $n$ (a higher-order infinitesimal symmetry) if $A$ has the appropriate characteristic, and by taking the categorical limit it is a morphism $A \to A[[p]]$ if $A$ has characteristic zero.) For example, if $A = C^{\infty}(M)$ and $D$ comes from a vector field on $M$ which is Lipschitz, then the Picard-Lindelöf theorem guarantees the existence and uniqueness of $\phi_p$ in a neighborhood of zero, which defines it uniquely everywhere.

For now I would like to make the point that for some $D$ it is possible to make sense of the exponential with no assumption of additional structure on $A$. Namely, suppose that $D$ is locally nilpotent on $A$: that is, that for every $a \in A$, the vector space $\text{span} \{a, Da, D^2 a, ... \}$ is finite-dimensional, and $D$ acts nilpotently on it. Then $e^{Dp} a$ is a sum of finitely many terms for any particular $a \in A$, so it is perfectly well-defined without the need to take any limits, and moreover it is not hard to show that $e^{Dp}$ is actually a family of automorphisms (each of which has inverse $e^{-Dp}$).

This applies in particular to the case that $D = \frac{\partial}{\partial x_k}$ acting on $\mathbb{R}[x_1, ... x_n]$, hence it is possible to give very concrete meaning to the intuitive idea that $D$ is the infinitesimal generator of translation in the $x_k$ direction in the sense that

$\displaystyle e^{Dp} f(x_1, ... x_n) = f(x_1, ... x_k + p, ... x_n)$

for any $p$. (This is just an abstract form of the Taylor expansion formula.) Since these derivations all commute, the exponential map gives a smooth homomorphism from the vector space of derivations $\sum p_k \frac{\partial}{\partial x_k}$ to the Lie group $\mathbb{R}^n$ of translations acting on $\mathbb{R}[x_1, ... x_n]$. Of course, a statement of this form can’t be true in general for non-abelian groups of automorphisms, but it is still nice to see how the picture works out in a nice abelian case.
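Since $D = \partial/\partial x$ is locally nilpotent on polynomials, the exponential series truncates and can be summed exactly; a quick sympy sketch, with a sample polynomial of my choosing:

```python
import sympy as sp
from math import factorial

x, p = sp.symbols('x p')
f = x**4 - 3*x + 1                    # D = d/dx kills this after 5 applications

# e^{pD} f is a finite sum: no limits or convergence questions arise.
exp_pD_f = sum(sp.diff(f, x, n) * p**n / factorial(n) for n in range(6))
assert sp.simplify(exp_pD_f - f.subs(x, x + p)) == 0   # translation by p
```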

### Two formulas

I can’t resist pointing out two cute applications of this idea that $e^{pD}$ is the translation by $p$ operator when $D = \frac{\partial}{\partial t}$. Since in particular $e^D(f(t)) = f(t+1)$, it follows that the backward difference operator $\nabla(f(t)) = f(t) - f(t-1)$ can be written $\nabla = 1 - e^{-D}$. If we want to find sums of the function $f(t)$, we want in some sense to be inverting $\nabla$, so we want to investigate

$\displaystyle \frac{1}{\nabla} = \frac{1}{1 - e^{-D}}$.

The RHS is essentially the generating function for the Bernoulli numbers, which gives

$\displaystyle \frac{1}{\nabla} = \frac{1}{D} \cdot \frac{D}{1 - e^{-D}} = \frac{1}{D} + \frac{1}{2} + \sum_{m=2}^{\infty} \frac{B_m}{m!} D^{m-1}$.

And subtracting two copies of this gives a rough heuristic derivation of the Euler-Maclaurin formula (since $\frac{1}{D}$ is indefinite integration)! By local nilpotence, the above is almost completely rigorous when applied to polynomials, which gives Faulhaber’s formula.
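By local nilpotence, the heuristic becomes an honest finite computation on polynomials. The sympy sketch below applies the operator $\frac{1}{D} + \frac{1}{2} + \sum_{m \ge 2} \frac{B_m}{m!} D^{m-1}$ to $f(t) = t^2$ and recovers Faulhaber's formula for $\sum_{k=1}^t k^2$:

```python
import sympy as sp
from math import factorial

t = sp.symbols('t')
f = t**2

# 1/nabla = 1/D + 1/2 + sum_{m>=2} B_m/m! D^{m-1}; the sum is finite
# on polynomials since D = d/dt is locally nilpotent.
S = sp.integrate(f, t) + f/2          # the 1/D and 1/2 terms
for m in range(2, 4):                 # D^{m-1} kills f for m > 3
    S += sp.bernoulli(m) / factorial(m) * sp.diff(f, t, m - 1)

# nabla S = S(t) - S(t-1) should give back f ...
assert sp.simplify(S - S.subs(t, t - 1) - f) == 0
# ... so S is Faulhaber's formula for 1^2 + 2^2 + ... + t^2.
assert sp.simplify(S - t*(t + 1)*(2*t + 1)/6) == 0
```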

### The Lie bracket

Just as groups are an abstraction of the concrete phenomenon of symmetries (permutations of sets), Lie algebras can be thought of as an abstraction of the concrete phenomenon of infinitesimal symmetries (first-order deformations of the identity automorphism of an algebra). So what axioms should a Lie algebra satisfy?

First of all, if $D$ is a $k$-derivation, then so is $t D$ for every scalar $t \in k$. When $k = \mathbb{R}$ this corresponds to performing the same infinitesimal symmetry, but $t$ times faster.

Second of all, an infinitesimal symmetry $A \to A[\epsilon]/\epsilon^2$ can be extended to a homomorphism $A[\epsilon]/\epsilon^2 \to A[\epsilon]/\epsilon^2$ by making it $\epsilon$-linear, and then we can compose a pair of infinitesimal symmetries $I + \epsilon D_1$ and $I + \epsilon D_2$ (hence $D_1, D_2$ are derivations) to get a new infinitesimal symmetry $I + \epsilon (D_1 + D_2)$. Together with the above observation, we can now confidently state that whatever a Lie algebra over $k$ is, it is at the very least a $k$-vector space.

As we saw above, in some basic abelian cases this $k$-vector space structure is already enough to recover the structure of the corresponding group of symmetries. In the non-abelian case, however, this will fail. The problem is that we are only looking at first-order information, and infinitesimal symmetries commute to first order, but actual symmetries don’t. In order to fix this issue we need to look for second-order information.

Hence for $k$ a field of characteristic not equal to $2$, we make the following definition: a second-order infinitesimal symmetry of a $k$-algebra $A$ is a homomorphism $A \to A[\epsilon]/\epsilon^3$ such that composition with the quotient $A[\epsilon]/\epsilon^3 \to A$ gives the identity. Explicitly, this is a map

$\displaystyle \phi(a) = a + \epsilon D(a) + \frac{\epsilon^2}{2} D'(a)$

where $D(a), D'(a)$ are $k$-linear and $\phi(fg) = \phi(f) \phi(g)$. Expanding out this condition shows that it is equivalent to the condition that $D$ is a $k$-derivation and that $D'$ satisfies

$\displaystyle D'(fg) = D'(f) g + f D'(g) + 2 D(f) D(g)$.

Since we know that $e^{\epsilon D} = I + \epsilon D + \frac{\epsilon^2}{2} D^2$ is always a second-order infinitesimal symmetry, we see that there is always a distinguished choice $D' = D^2$. Indeed, $D^2(fg) = D(D(f) g + f D(g)) = D^2(f) g + 2 D(f) D(g) + f D^2(g)$. It therefore follows that

$\displaystyle (D' - D^2)(fg) = (D' - D^2)(f) g + f (D' - D^2)(g)$.

In other words, $\phi$ can be specified by specifying two derivations: the first derivation $D$ gives a first-order deformation of the identity, and the second derivation $D' - D^2$ gives a second-order deformation of the flow $e^{\epsilon D}$.

Given two derivations $D_1, D_2$, we can take their second-order flows and get second-order symmetries $e^{\epsilon D_i}$, and then we can compose those flows to get

$\displaystyle e^{\epsilon D_1} e^{\epsilon D_2} = I + \epsilon (D_1 + D_2) + \frac{\epsilon^2}{2} (D_1^2 + 2 D_1 D_2 + D_2^2)$.

This second-order symmetry equals the flow $e^{\epsilon (D_1 + D_2)}$ if $D_1, D_2$ commute, but otherwise the second-order discrepancy is

$\displaystyle e^{\epsilon D_1} e^{\epsilon D_2} - e^{\epsilon (D_1 + D_2)} = \frac{\epsilon^2}{2} (D_1 D_2 - D_2 D_1) = \frac{\epsilon^2}{2} [D_1, D_2]$.

We know from the above that second-order deformations are derivations, so it follows that $[D_1, D_2]$ is a derivation, the Lie bracket of $D_1$ and $D_2$. Thinking of $D_1, D_2$ as vector fields, the Lie bracket, which is a special case of the Lie derivative, measures how $D_2$ changes along the flow induced by $D_1$.

This can be made precise as follows. If $\phi_p : A \to A$ is a one-parameter group of automorphisms with $d \phi_0 = D_1$, then $\phi_p$ acts on derivations via conjugation $D \mapsto \phi_p D \phi_{-p}$. Differentiating this action gives the bracket $D \mapsto [D_1, D]$. This is precisely analogous to how every element $h \in G$ of a group defines an inner automorphism $g \mapsto hgh^{-1}$ of $G$.
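The second-order computation above can be checked exactly, for instance with matrices standing in for derivations: working in $M_3(\mathbb{R})[\epsilon]/\epsilon^3$ via coefficient triples (the helper names below are mine), composing the second-order flows of two random matrices produces exactly half the commutator at order $\epsilon^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
D1, D2 = rng.standard_normal((2, 3, 3))   # two random 3x3 matrices
I = np.eye(3)

def mul(F, G):
    """Multiply elements of M_3(R)[eps]/eps^3, stored as coefficient triples."""
    return (F[0] @ G[0],
            F[0] @ G[1] + F[1] @ G[0],
            F[0] @ G[2] + F[1] @ G[1] + F[2] @ G[0])

def E(D):
    """The second-order flow e^{eps D} = I + eps D + (eps^2/2) D^2."""
    return (I, D, D @ D / 2)

lhs = mul(E(D1), E(D2))
rhs = E(D1 + D2)
assert np.allclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1])
# The only discrepancy is at order eps^2, and it is half the commutator:
assert np.allclose(lhs[2] - rhs[2], (D1 @ D2 - D2 @ D1) / 2)
```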

The Lie bracket is obviously bilinear. If $\phi_p$ commutes with $D$, then $[D_1, D] = 0$; in particular, $[D_1, D_1] = 0$. Hence the Lie bracket is alternating. In characteristic $\neq 2$, this is equivalent to the condition that $[D_1, D_2] = - [D_2, D_1]$, which is intuitive since the change in $D_1$ along $D_2$ ought to be the opposite of the change in $D_2$ along $D_1$. Finally, since the Lie bracket is bilinear and $[D, -]$ is obtained by differentiating a group symmetry, it follows that $[D, -]$ ought to satisfy the product rule; that is,

$\displaystyle [D_1, [D_2, D_3]] = [[D_1, D_2], D_3] + [D_2, [D_1, D_3]]$

which is equivalent, after a little rearrangement and an application of antisymmetry, to the Jacobi identity

$\displaystyle [D_1, [D_2, D_3]] + [D_3, [D_1, D_2]] + [D_2, [D_3, D_1]] = 0$.

In other words, the Lie bracket defines an action of a derivation on the space of derivations by derivations! This is less confusing than it sounds at first. After all, both left multiplication and conjugation define an action of a permutation (that is, an element of a group $G$) on a set of permutations (that is, on $G$) by permutation. The Lie bracket is more closely analogous to conjugation, but it is also instructive to think of it as analogous to left multiplication, since the condition that left multiplication defines a group action on a group $G$ is precisely the associativity axiom. So the Jacobi identity can roughly be thought of as “associativity of the Lie bracket.”

It turns out that second-order information about a Lie group is enough to completely reconstruct its multiplication in a neighborhood of the identity; we do not need to go further to find third-order analogues of the Lie bracket. So we have enough structure to feel comfortable making the following definition.

Definition: A Lie algebra over a field $k$ is a $k$-vector space $\mathfrak{g}$ equipped with a bilinear map $[ \cdot, \cdot ] : \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ which is alternating and satisfies the Jacobi identity.

The vector space $\text{Der}_k(A)$ of $k$-derivations of a $k$-algebra $A$ forms a Lie algebra under the commutator bracket, as does any subspace closed under the bracket (a Lie subalgebra), so we already have a large and rich class of examples; in particular, the vector fields on a smooth manifold $M$ form a Lie algebra.
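As a quick sanity check on the definition, matrices under the commutator bracket satisfy the axioms; here is a numerical sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.standard_normal((3, 4, 4))  # three random 4x4 matrices

def bracket(X, Y):
    """The commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

assert np.allclose(bracket(A, A), 0)      # alternating
jacobi = (bracket(A, bracket(B, C))
          + bracket(C, bracket(A, B))
          + bracket(B, bracket(C, A)))
assert np.allclose(jacobi, 0)             # Jacobi identity
```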

In the next post we will describe how to associate a Lie algebra to a Lie group using as little differential geometry as possible. Lie algebras form a category $\text{LieAlg}_k$ whose morphisms are $k$-linear maps preserving the bracket, and what we will be describing is a functor $\text{LieGrp} \to \text{LieAlg}_{\mathbb{R}}$ which is surprisingly close to faithful.

### 15 Responses

1. [...] We now know what a Lie algebra is and we know they are abstractions of infinitesimal symmetries, which are given by derivations. Today we will see what we can say about associating infinitesimal symmetries to continuous symmetries: that is, given a matrix Lie group, we will describe its associated Lie algebra of infinitesimal elements and the exponential map which promotes infinitesimal symmetries to real ones. [...]

2. Hello, thanks for this post, it is great! Could you suggest any references for motivations behind studying “abstract Lie Algebras”, or anything related to what was on this post?
I am trying to understand better why people would study Lie algebras “in their own right”. I am slowly starting to appreciate it better — but in the past, I’ve thought of Lie algebras existing as a field of study only to serve Lie group theory ( Lie algebras encode information about the Lie group ). To me, perhaps motivating Lie algebras via derivations seems to be a step towards motivating Lie algebras without necessarily Lie groups.

Thank you

• One motivation is that there are interesting infinite-dimensional Lie algebras, such as the Kac-Moody algebras, and not all of them come from corresponding infinite-dimensional Lie groups. These have various applications, e.g. affine Lie algebras are relevant to mathematical physics. In physics, Lie algebras are actually more fundamental than Lie groups, since they directly give rise to observables.

• Thanks, I will take a look. Do these Kac-Moody algebras fit into the “Lie algebras are first-order deformations of the identity automorphism of an algebra” idea? I like the description you gave with taking the differential at 0 of a varying 1-parameter family of automorphisms of an algebra. Is this general enough? I am not yet at the level of understanding completely, but Aut(A) does not always have to form a Lie group, is this correct? For example, Diff(M) = Aut(C^∞(M)) of a manifold does not have to form a Lie group, when Diff(M) is not a differentiable manifold ( in the standard locally Euclidean sense )?

• I don’t think so. I’m not really sure where they come from. Everything is more or less fine if you take one-parameter families in the formal sense (power series) but in general there won’t be any “actual” one-parameter families running around unless A is a finite-dimensional real algebra or a Banach algebra or similar.

3. I see. We can always think of an abstract Lie Algebra in the derivation / infinitesimal one-parameter family of automorphisms sense at least formally. You can define formally the exp map as a map from the lie algebra into a formal power series algebra. Then the set of all exp( g ) can be seen to be a formal group, once you prove the BCH formula. Then I suppose we can think of the resulting formal group as the automorphism group we were looking for

• Yes, and in fact in characteristic zero this defines an equivalence of categories between Lie algebras and formal groups.

• Thanks, I will try to read more about this and understand it. However, I think I kind of messed it up above. It seems like you can only define the group exp( g ) for g in the Lie Algebra when the Lie Algebra is nilpotent ( so that the BCH formula will actually have finitely many terms ). Otherwise, the elements exp( g ) are not closed under multiplication, unless you extend the domain from just the lie algebra, to formal “lie power series ( power series in the bracket )”.

• Sorry, and just one more question.. I think this seems like it only works between finite-dimensional Lie algebras and power series of finitely many variables. What happens in the infinite-dimensional Lie algebra case? Can we still think of them in terms of derivatives of formal power series groups?

thank you

• Yes, one has to use formal power series. I don’t know if the correspondence holds in infinite dimensions.

• Hmm.. too bad.. I was hoping to find a “unified” way of thinking about abstract Lie algebras, at least over characteristic 0

thank you!

4. Excellent post, Qiaochu! I ran into this while trying to recall a few basic things about Lie groups/Lie algebras. This is a good reference for the basics.

• Thanks!