Today we will advance our coverage toward quantum mechanics by looking at an unusual feature of daily life. We’ll be looking at an aspect of the world which doesn’t quite behave as expected; though it won’t be as counterintuitive as, say, the Heisenberg uncertainty relations, it does tend to make people blink a few times and say, “That’s not — well, I guess it is right.” Furthermore, poking into this area will motivate the development of some mathematical tools which will remarkably simplify our study of symmetry in quantum physics.

Fortunately, I found an assistant to help me with the demonstrations. Please welcome my fellow physics enthusiast, here on an academic scholarship after a rough-and-tumble life in Bear City:

Those of us who grew up, as I did, with the aftereffects of the “New Math” are familiar with the “commutative law.” At some tender age, we learned that “addition is commutative,” which we were taught meant that the order in which we do additions doesn’t matter. 22 + 17 is the same as 17 + 22, and in fact for any numbers real or complex,

$$a + b = b + a.$$

When we’re talking about numbers which we use for counting things, this seems like the most unremarkable property in the world. How could it be false? My assistant will illustrate the process at work:

Multiplication, which we define in terms of repeated additions, inherits the commutative property of addition, and as we build more types of numbers — irrational, real, complex — this nice and unsurprising character trait continues to hold. It seems so unremarkable that we’d be foolish not to wonder why it deserves a fancy name at all! Why make such a fuss about it and turn it into a grand thing… unless there were a place where it wasn’t true?

Subtraction, we note, is not commutative. Six minus three gives one result, but three minus six gives another — a result which, as it happens, we can’t even use the “natural” or “counting” numbers to specify. But wait, isn’t subtraction just a special kind of addition — the addition of negatives? Aha: something must be going on with that “flip” which turns a number into its opposite, bearded twin.

Thinking in terms of a number line, the expression a + b just means counting b units from some starting point a. Flipping the sign to get subtraction, ab, means counting b units in the opposite direction from the same starting point. Addition of any number is a translation along the number line, and negation is a flip to the other side of zero, the origin.

A translation followed by a flip does not give the same result as a flip followed by a translation. Starting from a number a, the former sequence of operations gives -(a + b), while the latter gives

$$(-a) + b = b - a.$$
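The asymmetry is easy to check numerically. Here is a minimal sketch in Python; the function names `translate` and `flip` are just our labels for the two operations:

```python
# Two operations on the number line: translation (add b) and flip (negate).
# The names "translate" and "flip" are labels invented for this sketch.

def translate(x, b):
    return x + b

def flip(x):
    return -x

a, b = 5, 3
print(flip(translate(a, b)))   # translate, then flip: -(a + b) = -8
print(translate(flip(a), b))   # flip, then translate: (-a) + b = -2
```

The two orders of operations disagree whenever b is nonzero, which is exactly the non-commutativity we noticed.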

Now, if addition is a shift to the side, then multiplication is a scaling. On the number line, twice a is the segment from 0 to a stretched out so that it extends from 0 to 2a. Repeated multiplications are successive scalings; b² is scaling by the amount b twice in succession, and the square root of b is that scaling which one must perform twice in order to scale by b.
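As a quick numerical illustration (the value 9 is an arbitrary choice), the square root really is the “half-scaling”:

```python
import math

b = 9.0
half = math.sqrt(b)      # the scaling which, performed twice, scales by b
print(half * half)       # 9.0: two successive scalings by sqrt(b) give b
```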

This viewpoint is useful for the insight it gives into the next question: what is the operation which, when performed twice, gives a flip?

In more “numerical” terms, we’re asking if there exists a scaling operation — a multiplier — which, when we multiply it by itself, has the same effect as flipping, or negation. This is the geometric interpretation of asking what is the square root of negative one!

Thinking geometrically, it is not so difficult to find the answer. If we imagine the number a represented by a line segment from the origin 0, sticking out in the right-hand direction, we can pivot the number a one quarter-turn clockwise or counterclockwise (your choice), to give a line segment pointing up (or down). By repeating the same operation, we’ll get a line segment of length a pointing to the left — which is just the number (-a). The square root of -1 is a rotation by a quarter-turn!
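In Python, where the imaginary unit is written `1j`, we can watch the two quarter-turns happen (starting from a = 5, an arbitrary choice):

```python
a = 5 + 0j        # a number pointing to the right along the real axis
up = a * 1j       # one quarter-turn counterclockwise: now it points up
left = up * 1j    # a second quarter-turn: it points left, i.e. it is -a
print(up, left)   # 5j (-5+0j)
```

Two multiplications by the quarter-turn have the same effect as one multiplication by -1, which is the geometric content of i² = -1.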

Well, we’ve worked ourselves right into the complex numbers. Instead of a number line, we’ve got a number plane, each number in which can be represented by a scaling (a shrinking or an expansion) and a turning. (Incidentally, we’re in a very good position now to understand why -i is just as good a square root of -1 as is i.) Starting with the number 1, we can rotate by a gradually increasing angle to trace out a full circle, on which the vertical coordinate is sin θ and the horizontal coordinate is cos θ. Rotation is just multiplication by a complex number, and the complex number which rotates by θ without scaling has a name; it’s called e^{iθ}. This is the geometric interpretation of Euler’s formula

$$e^{i\theta} = \cos\theta + i\sin\theta$$

which we used a little while ago to prove some trigonometric identities.
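Euler’s formula is easy to verify numerically; here is a sketch using Python’s cmath module, with an arbitrary angle:

```python
import cmath
import math

theta = 0.7  # an arbitrary angle in radians
lhs = cmath.exp(1j * theta)                       # e^{i theta}
rhs = complex(math.cos(theta), math.sin(theta))   # cos theta + i sin theta
print(abs(lhs - rhs))  # ~0.0: the two sides agree to rounding error
```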

Whew! We’ve covered a fair bit of territory just thinking about translations, rotations and scalings. One important thing to notice is that successive rotations in the 2D plane commute. Intuitively, we feel pretty confident that twisting by 30 degrees, taking a breather and then twisting by 60 degrees will have the same result as turning by 60 degrees, pausing and turning by another 30.
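Since a 2D rotation is just multiplication by a complex number of unit length, and complex multiplication commutes, the 30-then-60 experiment can be checked directly:

```python
import cmath

rot30 = cmath.exp(1j * cmath.pi / 6)   # rotate by 30 degrees
rot60 = cmath.exp(1j * cmath.pi / 3)   # rotate by 60 degrees
z = 1 + 2j                             # an arbitrary starting point
print(abs((z * rot30) * rot60 - (z * rot60) * rot30))  # ~0.0: same result
```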

What can we say about geometric operations in three dimensions? Naturally we can translate shapes in three different directions, but what does the extra “room” mean for rotations? Instead of having one axis about which we can turn, we’ve got three independent ways we can spin and twist. In aerospace lingo, in order to represent an arbitrary rotation in 3D we have to give pitch, roll and yaw. (There are many different ways to represent the same rotation, but they all end up giving the same amount of information.) To specify a scale factor in addition, we need one additional number, for a total of four. Therefore, whatever mathematical objects we employ to do for 3D space what the complex numbers do for 2D space, they must have four components.

We can deduce another fact about the 3D analogues of complex numbers by looking at how successive rotations in 3D behave. Let’s pick a “zero point” somewhere in space to be our origin, and choose three perpendicular axes, which can be left-right, up-down and forward-back. We can rotate an object around any of these axes, by any amount we wish. To begin, we’ll consider rotations around the vertical and around the horizontal left-right axes, and we’ll rotate by one quarter-turn (90 degrees or π/2 radians) each time. My assistant will demonstrate a rotation around the vertical, followed by one around the horizontal:

Now, surely, performing the same operations in the opposite order will have the same outcome, yes? It worked in two dimensions, didn’t it?

Surprise! Rotations about different axes in three dimensions do not commute!

This means that if we represent a rotation around the vertical by some “hypercomplex” number v, and one about the left-right axis by h, then whatever the specific form of v and h, we must have

$$vh \neq hv.$$
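We can see the same failure with ordinary 3×3 rotation matrices. This is a sketch: the axis labels and the little matmul helper are our own conventions, and the matrix entries are exact for 90-degree turns.

```python
# Quarter-turn rotation matrices about the vertical (z) and the
# left-right (x) axes, acting on column vectors (x, y, z).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Rz = [[0, -1, 0],   # quarter-turn about the vertical axis
      [1,  0, 0],
      [0,  0, 1]]
Rx = [[1, 0,  0],   # quarter-turn about the left-right axis
      [0, 0, -1],
      [0, 1,  0]]

print(matmul(Rz, Rx) == matmul(Rx, Rz))  # False: the order matters
```

Composing in one order leaves the object in a different orientation than composing in the other, just as the demonstration showed.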

Historically, this problem was approached in two different ways. One group of people followed the path to quaternions, while another took the other fork and developed vectors and matrices. Because we’re aiming for quantum mechanics, we’ll be taking the latter approach, though quaternions have interesting properties and practical applications too. (Mark Chu-Carroll and Tim Lambert wrote about them a few months ago.)
