**PUBLIC SERVICE ANNOUNCEMENT:** if any of you saw me wearing black corduroy pants and a purple T-shirt emblazoned with a picture of my friend Mike wearing a squid on his head, yes, it was laundry day. Rest assured, the reality disruption was only temporary, and normal service should be resumed shortly.

Now, to the business of the day. Earlier, we took a look at rotations and found a way to summarize their behavior using commutator relations. Recall that the commutator of *A* and *B* is defined to be

[tex]\{A,B\} = AB - BA.[/tex]

For real or complex numbers, the commutator vanishes, but as we saw, the commutators of *matrices* can be non-zero and quite interesting. We recognized that this would have to be the case, since we used matrices to describe rotations in three-dimensional space, and rotations about different axes in 3D do not commute. Looking at very small rotations, we also found that the commutators of *rotation generators* were tied up together in a way which involved cyclic permutations. Today, we’ll express this discovery more neatly, using the *Einstein summation convention* and a mathematical object called the *Levi-Civita tensor.*

First, let’s ground ourselves by noting a few basic properties of commutators. We observe that the commutator of anything with itself vanishes:

[tex]\{A,A\} = 0,[/tex]

and that swapping the order of the bracketed symbols introduces a minus sign:

[tex]\{A,B\} = -\{B,A\}.[/tex]

Last time, we looked at *infinitesimal* rotations, where we twisted through a very small angle ε, and we found that the matrix for rotating around axis *j* took the following form:

[tex]R_j(\epsilon) = \mathbb{I} + \epsilon I_j.[/tex]

The matrices *I*_{j} were made up of zeroes, ones and minus ones, and they satisfied the following commutator relations:

[tex]\{I_x,I_y\} = I_z,\ \{I_z,I_x\} = I_y,\ \{I_y,I_z\} = I_x.[/tex]

Alternatively, as we saw, these relations could be written this way:

[tex]\{I_1,I_2\} = I_3,\ \{I_3,I_1\} = I_2,\ \{I_2,I_3\} = I_1.[/tex]
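For the numerically inclined, here's a quick numpy sketch that verifies all three relations. The explicit matrices below are the standard 3×3 antisymmetric generators, which I'm assuming match the ones we wrote down last time (if your sign conventions differ, adjust accordingly):

```python
import numpy as np

# The three rotation generators: real antisymmetric 3x3 matrices
# whose entries are 0 and ±1.
I1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
I2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
I3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])

def comm(A, B):
    """The commutator {A,B} = AB - BA."""
    return A @ B - B @ A

# Check the cyclic relations {I1,I2} = I3, {I3,I1} = I2, {I2,I3} = I1.
assert np.array_equal(comm(I1, I2), I3)
assert np.array_equal(comm(I3, I1), I2)
assert np.array_equal(comm(I2, I3), I1)
print("all three commutator relations check out")
```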

Now, we’d like to combine these three equations into one “package.”

Let’s think for a moment about *vectors,* which we used before as a way of combining several numbers into a “marriage” (with possibly many more than two spouses). We said that a vector has both a magnitude (say, 10 kilometers per hour) and a direction (*e.g.,* 35 degrees west of due north). We can write a vector as a list of numbers, organized if you’d like into a one-column or one-row matrix; the first number might give an easterly speed in kph, the second a northerly speed and the third a vertical speed, for example. Going between the magnitude-direction description and the list of “components” just requires a little trigonometry, which we explored earlier for the 2D case. (Well, you *could* go off and invent yourself a non-orthonormal basis in some abstract vector space, but patience! We’re not quite there yet.)

A common operation on vectors is the *dot product,* which means taking two vectors, multiplying their components one by one, and adding up the results. The dot product of two velocity vectors in the 2D plane would be, for example, the product of vector 1’s easterly component and vector 2’s easterly component, *plus* vector 1’s northerly component times vector 2’s northerly component. That’s enough of a mouthful to warrant expressing in symbols instead:

[tex]\vec{v} \cdot \vec{u} = v_1 u_1 + v_2 u_2 + \cdots + v_N u_N.[/tex]

Here, we recognize that a vector could in principle have an arbitrarily large number of components. We can squeeze still harder by introducing a *summation* symbol, a big version of the Greek letter Σ:

[tex]\vec{v} \cdot \vec{u} = \sum_{i=1}^N v_i u_i.[/tex]

This says exactly what we said before, but more stylishly. If we trust ourselves to remember where the “index” *i* starts and stops, we might omit the 1 and *N* for brevity:

[tex]\vec{v} \cdot \vec{u} = \sum_i v_i u_i.[/tex]

The final step in streamlining the notation is due to Einstein, who realized that most of the time when you see indices repeated across variables — like the *i* we have on two variables here — you’re going to be summing over them. So, why keep the summation sign around at all? Einstein found that he didn’t have to, and in his honor, we call the idea that *summation is implied whenever you have repeated indices* the “Einstein summation convention.” In our example, this means we can write the dot product like this:

[tex]\boxed{\vec{v} \cdot \vec{u} = v_i u_i.}[/tex]

At last, our equations are starting to look like *physics!*
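As an aside for the computationally inclined: numpy's `einsum` function speaks almost exactly this language, with repeated indices summed over automatically. A small sketch, with example vectors of my own choosing:

```python
import numpy as np

v = np.array([3.0, -1.0, 2.0])
u = np.array([1.0, 4.0, 0.5])

# Writing out the sum over the repeated index i, as in sum_i v_i u_i:
explicit = sum(v[i] * u[i] for i in range(3))

# Einstein convention: the repeated index 'i' is summed implicitly.
einstein = np.einsum('i,i->', v, u)

assert np.isclose(explicit, einstein)
assert np.isclose(einstein, v @ u)  # agrees with the built-in dot product
```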

OK, it’s time to stare down the commutators. Here is, again, what we’re trying to study:

[tex]\{I_1,I_2\} = I_3,\ \{I_3,I_1\} = I_2,\ \{I_2,I_3\} = I_1.[/tex]

Notice that each of these three equations has the form of a commutator between two generators yielding the third generator. We might try to summarize them by saying that in general, the commutator of generator *i* with generator *j* is the third generator, *k.* If we’re more careful, we’ll note that if *i* equals *j,* for example, the result is *zero.* It’s also possible to get *minus* a generator on the right-hand side (how?), so we’re really talking about getting *something times* that third generator:

[tex]\{I_i,I_j\} = (\hbox{??}) I_k.[/tex]

Let’s give that question mark a name. Whatever it is, its value will depend upon *i*, *j* and *k.* By convention, this object is identified using the Greek letter ε — it’s not an infinitesimal angle; in fact, it’s not an infinitesimal anything, but that’s the letter everybody uses. (The Greek alphabet is only so big.) To make sure we don’t forget which epsilon we’re talking about, and to indicate what it depends upon, we attach *i*, *j* and *k* as subscripts:

[tex]\{I_i,I_j\} = \epsilon_{ijk}I_k.[/tex]

Aha! We’ve caught ourselves repeating indices. Since each of *i*, *j* and *k* can be 1, 2 or 3, this equation really means

[tex]\{I_i,I_j\} = \epsilon_{ij1}I_1 + \epsilon_{ij2}I_2 + \epsilon_{ij3}I_3.[/tex]

Hmmm. Have we made life any easier? That is, we’ve managed to squeeze three commutator equations into one, but we have to find out what this “epsilon thing” — or *Levi-Civita tensor* — is all about before we can claim success. To begin with, let’s look at the commutator of *I*_{1} with *I*_{2}. By our formula,

[tex]\{I_1,I_2\} = \epsilon_{12k}I_k,[/tex]

which by the Einstein summation convention expands out to

[tex]\{I_1,I_2\} = \epsilon_{121}I_1 + \epsilon_{122}I_2 + \epsilon_{123}I_3,[/tex]

but *since we already know* that *I*_{1} with *I*_{2} gives *I*_{3}, we can say that

[tex]\epsilon_{123} = 1,[/tex]

and furthermore that

[tex]\epsilon_{121} = \epsilon_{122} = 0.[/tex]

Following the same procedure with the other two commutators tells us that

[tex]\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1.[/tex]

Here we see the cyclic nature of the commutators making itself manifest: if the subscripts are a *cyclic permutation* of the sequence 123, the value of the Levi-Civita symbol is 1.

Next, we note what happens when we *swap* the variables *i* and *j.* We know that in general, swapping the things inside the commutator brackets gives a minus sign:

[tex]\{I_j,I_i\} = -\epsilon_{ijk}I_k,[/tex]

but looking at the formula where we introduced the Levi-Civita tensor, we see that

[tex]\{I_j,I_i\} = \epsilon_{jik}I_k.[/tex]

Comparing these two, we observe a neat property:

[tex]\epsilon_{ijk} = -\epsilon_{jik}.[/tex]

The Levi-Civita symbol is what we call *antisymmetric.* A “symmetry” means that we can transform an object and have it look the same as it did before; rotating a symmetrical vase around its axis, for example, yields a configuration which looks the same as the vase before the transformation. Here, we perform an operation — swapping the indices — and we find that the result is almost, but not quite the same as the initial quantity: it’s got an extra minus sign.

As we mentioned earlier, for any value of *i,*

[tex]\{I_i,I_i\} = 0,[/tex]

which implies that

[tex]\epsilon_{iik} = 0.[/tex]

Note that this is consistent with the antisymmetry: if we *swap* the two instances of the index *i,* we get that

[tex]\epsilon_{iik} = -\epsilon_{iik},[/tex]

and the only way for a thing to equal *minus itself* is for that thing to be *zero.* Furthermore, looking at the original three commutators, we see that when a particular index is on the left-hand side, it *can’t be on the right.* If *I*_{1} is used on the left, *I*_{1} won’t be on the other side. This lets us write that

[tex]\epsilon_{iji} = \epsilon_{jii} = 0,[/tex]

which is just the cyclic permutation of the statement we had two equations ago.

The fact that swapping indices introduces a minus sign means that, for example,

[tex]\epsilon_{213} = -\epsilon_{123} = -1.[/tex]

We can equally well say that

[tex]\epsilon_{132} = -\epsilon_{312} = -1,[/tex]

or that

[tex]\epsilon_{321} = -\epsilon_{231} = -1.[/tex]

Summarizing what we’ve found,

[tex]\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1,[/tex]

[tex]\epsilon_{132} = \epsilon_{213} = \epsilon_{321} = -1.[/tex]

This exhausts the six possible ways of writing the numbers 1, 2 and 3 in a chosen order without repeating a digit. We observe that the cyclic permutation property holds whether the result is +1 (when the subscripts are “the right way around”) or the result is -1 (when two indices have been swapped). Also, if any two indices have the same value, the result is always 0.
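If you want all 27 entries at once, there's a handy closed form: for 1-based indices, the product (i − j)(j − k)(k − i)/2 reproduces the whole table. Here's a numpy sketch (the closed form is a standard identity, not something we derived here):

```python
import numpy as np

# Levi-Civita tensor in 3D: eps[i,j,k] with zero-based array indices.
# For 1-based indices, (i-j)*(j-k)*(k-i)/2 gives +1 on cyclic
# permutations of 123, -1 on the other orderings, and 0 on any repeat.
eps = np.zeros((3, 3, 3))
for i in range(1, 4):
    for j in range(1, 4):
        for k in range(1, 4):
            eps[i-1, j-1, k-1] = (i - j) * (j - k) * (k - i) / 2

assert eps[0, 1, 2] == eps[1, 2, 0] == eps[2, 0, 1] == 1   # cyclic: +1
assert eps[0, 2, 1] == eps[1, 0, 2] == eps[2, 1, 0] == -1  # swapped: -1
assert np.all(eps[0, 0, :] == 0)                           # repeated index: 0
assert np.count_nonzero(eps) == 6                          # only six nonzero entries
```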

Our new toy, the Levi-Civita tensor (also known as the Levi-Civita symbol and the permutation tensor), is a *completely antisymmetric* object. We could define the analogous contraption for any number of dimensions, but we’ll work with three most of the time. One finds this object in calculations of matrix determinants, vector cross products and other places, and while all of that will probably come up sooner or later, the use to which we’ll put it now is in commutator relations:

[tex]\{I_i,I_j\} = \epsilon_{ijk} I_k.[/tex]
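We can check the whole package numerically: for every pair (i, j), the commutator of generators i and j should equal ε_{ijk} I_k, summed over the repeated index k. A sketch, where the explicit generator matrices and the closed-form ε construction are my own assumptions about the representation:

```python
import numpy as np

# Standard 3x3 rotation generators (entries 0 and ±1), stacked.
I = np.array([
    [[0, 0, 0], [0, 0, -1], [0, 1, 0]],   # I_1
    [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],   # I_2
    [[0, -1, 0], [1, 0, 0], [0, 0, 0]],   # I_3
], dtype=float)

# Levi-Civita tensor via the closed form (i-j)(j-k)(k-i)/2, 1-based.
idx = np.arange(1, 4)
i, j, k = np.meshgrid(idx, idx, idx, indexing='ij')
eps = (i - j) * (j - k) * (k - i) / 2

# Right-hand side: eps_ijk I_k, summed over the repeated index k.
rhs = np.einsum('ijk,kab->ijab', eps, I)

# Left-hand side: the commutator {I_i, I_j} for every pair (i, j).
lhs = np.einsum('iab,jbc->ijac', I, I) - np.einsum('jab,ibc->ijac', I, I)

assert np.allclose(lhs, rhs)
print("one equation, all nine commutators")
```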

Incidentally, physicists typically define the rotation generators with a factor of *i,* the square root of -1, worked in:

[tex]R_j(\theta) = \mathbb{I} - i\theta J_j\ (\hbox{for $\theta$ small}).[/tex]

This brings a factor of *i* into the commutator relation:

[tex]\boxed{\{J_i,J_j\} = i\epsilon_{ijk} J_k.}[/tex]

Somehow, people live with themselves even after they use the letter *i* to represent two different things in the same equation.
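Comparing the two forms of the small-rotation matrix tells us that J_j = i I_j, and the new relation can be checked the same way. A sketch (the substitution J = iI is my gloss on the conversion; a side benefit is that the J matrices come out Hermitian):

```python
import numpy as np

# Real generators (entries 0 and ±1), then the physicists' J_j = i * I_j.
I1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=complex)
I2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex)
I3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
J1, J2, J3 = 1j * I1, 1j * I2, 1j * I3

def comm(A, B):
    """The commutator {A,B} = AB - BA."""
    return A @ B - B @ A

# {J_1,J_2} = i * eps_123 * J_3 = i J_3, and cyclically for the rest.
assert np.allclose(comm(J1, J2), 1j * J3)
assert np.allclose(comm(J3, J1), 1j * J2)
assert np.allclose(comm(J2, J3), 1j * J1)
# Bonus: each J_j equals its own conjugate transpose (Hermitian).
assert np.allclose(J1, J1.conj().T)
```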

The last boxed equation is all the information about 3D spatial rotations we’ll need, distilled to its operatorial essence. The next step will be to apply this knowledge to quantum-mechanical systems and discover what *having rotational symmetry* means for a quantum system’s behavior.

This is the warmest, fuzziest, least intimidating basic manipulation of levi-civita I’ve read yet. Please write enough to replace Arfken, and maybe some stuff to get through Jackson too, because I already spend enough nights with dull, dry men (and the textbooks they write)…

Thanks. :-) I’ll try.