What? Oh, no, that's not the problem, I see how the two-path thing is equivalent to a loop. I'm wrapping my brain around the parallel velocities thing... I think I'm getting it.
Oh! Yeah, that's kind of weird. I mean, in flat space you can always just say "go in that direction" and it'll be fine, because parallel transport is trivial in flat space. If you want 2-dimensional versions of curved Minkowski space to try to visualize, consider a one-sheeted hyperboloid, for instance x^{2} + y^{2} = z^{2} + 1. For the "negative curvature" (anti-de Sitter) variant, time-like means less than 45 degrees from the x-y plane; for the "positive curvature" (de Sitter) variant, time-like means more than 45 degrees from the x-y plane. Geodesics are obtained as the intersections of the hyperboloid with planes through the origin at the appropriate slopes. You can sort of see how parallel transport works in these situations, similar to how it worked in the arrow-on-an-ant case.
Oh hey cool a math thread, seems like the place for me. Hello, I'm a number theorist. Uh I guess I don't have much to say about the current topic since I'm not much of a differential geometer and my brain is a bit fried from all of the stuff I have going on right now. But math-wise I'm working on hammering out all of the details for a paper (which has been languishing too long and I'm really trying to get put together by the end of the summer), and also reading / writing up some notes on abelian categories / category theory in general for myself (since I like learning all of the background / foundational stuff and writing it down myself).
I try to keep the main posts not too high-level, but am always willing to go into more technical stuff for people like TheSeer. But yeah, if you have some number theory that you could share with the class, that would be great. I personally would love to learn more number theory because it's one of the branches of math that I can't really think about well; I can only do things over C or occasionally R. Z, Q, and F_{q} are mostly just vague mysteries to me.
Yeah, I could try to make some posts about that (though when I'm a bit less busy probably) - there's plenty of elementary stuff you can talk about with number theory (and it probably wouldn't be a bad idea for me to go back over some of those things I haven't seen for a while and write up how they work). Anything in particular that might be a good idea? Or anything higher-level about working over Z or Q or F_q or whatnot? Anyway for now, I'll wind down my evening with some category theory. Lots of diagrams to chase (abstractly, without pushing around elements) :P
Oh man, diagram chasing. Have fun. I had to grade a bunch of first-year grad students' proofs of the snake lemma some time last year and boy was that a trip. Anyway, whatever you think can be explained without resorting to more than say, high-school level mathematics is about where I aim here. Sometimes that means only talking about certain topics, sometimes that means shoving things under the rug. I don't know how well I do. But yeah, something fun and not too technical would be good for this thread. All the more technical stuff I put on my personal blog, which has the benefit of MathJax.
Apparently I never commented here? I have been following this for a while, I swear! Hey, just wanted to say I'm loving these! (Physics student, undergrad, 3rd year in college, my dream is to one day come up with a formulation for "why stuff works" that can surpass even Feynman's paths in terms of simplicity, but oh well.)
Quaternions and Octonions

So most of us are probably familiar with the real numbers, the usual numbers we think of when we think of numbers: 1, 2, 17, 3/4, π, e, etc. We can also think about the complex numbers: i, 2+3i, e+πi, etc. We can form the complex numbers by postulating a quantity i such that i^{2} = -1, and then saying that a complex number is a number of the form a + bi where a and b are real numbers. We can add, subtract, and multiply complex numbers, and divide by complex numbers that aren't 0 + 0i. We define (a + bi)* = a - bi, called the complex conjugate. Since it takes 2 real numbers to define a complex number, we say that the complex numbers are 2-dimensional.

The real numbers have a notion of absolute value, |a|, that describes how "far away" a is from 0. So |1| = 1 and |-1| = 1, since they're both one unit away from 0, in different directions. We also have that |ab| = |a| |b|, as we expect, but |a + b| is only at most |a| + |b|, not necessarily equal to it: if a is positive and b is negative, then |a + b| is smaller than |a| + |b|. For real numbers, we'll write N(a) = |a|^{2} = a^{2}, for reasons that will become clear later. We still have N(ab) = N(a)N(b). Also note that 1/a = a/N(a). We call N the norm on the real numbers.

The complex numbers have a similar norm. We say that N(a + bi) = a^{2} + b^{2} = (a + bi)(a + bi)*. This matches the Euclidean norm for 2-dimensional vectors. Note that a + 0i is just a real number, and that the real-number norm matches the complex-number norm restricted to numbers of the form a + 0i. Notably, we can write the complex number a + bi as (a, b), with multiplication given by (a, b)(c, d) = (ac - db, da + bc). The norm N((a, b)) is just N(a) + N(b).

For a long time after the acceptance of complex numbers, mathematicians wondered if there could be a system of numbers that is 3-dimensional, i.e.
quantities that require three real numbers to specify, that could be added, subtracted, multiplied and divided (by nonzero quantities), and with a norm of some sort. But it turns out that this is impossible, for a number of rather subtle reasons. Instead, Sir William Rowan Hamilton came up with a different system, called the quaternions (because the term "hamiltonian" means something else), that is 4-dimensional, involving i, j and k, so that quantities are of the form a + bi + cj + dk. Here we have i^{2} = -1, as we're familiar with, but we also have j^{2} = k^{2} = -1, and then the funny part, which is that ij = k = -ji, ki = j = -ik, and jk = i = -kj. So now it matters whether a quantity appears on the right or the left when we multiply. The quaternions are therefore noncommutative.

We can write a quaternion as a pair of complex numbers (a, b), to distinguish it from a pair of real numbers, and then our multiplication is given by (a, b)(c, d) = (ac - d*b, da + bc*). Note that if a, b, c and d are all real numbers then this is exactly the complex multiplication from before, since for real numbers, conjugation doesn't do anything. Now we define our conjugation by (a, b)* = (a*, -b) and our norm by N((a, b)) = (a, b)(a, b)*. If you write out the quaternion as a vector of real numbers coming from the real coefficients of our pair of complex numbers, then N is just the square of the magnitude of the vector.

Quaternions, originally conceived to perform rotations in 3-dimensional space, were eventually superseded by vectors and matrices, but later made a comeback: mathematically, for algebraists and geometers, since they have nice properties that vectors and matrices do not (in particular, multiplicative inverses), and also, for example, in computer graphics, where they carry less extraneous data than rotation matrices and are more robust against numerical errors.
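If you want to poke at this yourself, here's a quick sketch in Python of quaternions as pairs of complex numbers under the pair-multiplication rule above (the helper names qmul and qnorm are just made up for this):

```python
# Quaternions as pairs of complex numbers, multiplied by the rule
# (a, b)(c, d) = (ac - d*b, da + bc*).  qmul/qnorm are invented names.

def qmul(x, y):
    a, b = x
    c, d = y
    return (a * c - d.conjugate() * b, d * a + b * c.conjugate())

def qnorm(x):
    a, b = x
    return abs(a) ** 2 + abs(b) ** 2

i, j, k = (1j, 0j), (0j, 1 + 0j), (0j, 1j)

assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1 + 0j, 0j)
assert qmul(i, j) == k               # ij = k
assert qmul(j, i) == (0j, -1j)       # ji = -k: noncommutative!

# The norm is multiplicative: N(xy) = N(x)N(y)
x, y = (1 + 2j, 3 - 1j), (0.5 - 1j, 2 + 2j)
assert abs(qnorm(qmul(x, y)) - qnorm(x) * qnorm(y)) < 1e-9
```

The exact-equality checks on the basis elements work because all the coefficients involved are small integers, which floats represent exactly.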
After the discovery of the quaternions, people looked at the pattern (1-dimensional, 2-dimensional, 4-dimensional) and quickly realized that the next obvious number of dimensions was 8. Indeed, two mathematicians, Arthur Cayley and John T. Graves, both discovered the octonions, working independently. We can write an octonion as a pair of quaternions (a, b), to distinguish it from a pair of real or complex numbers, and then our multiplication is given by (a, b)(c, d) = (ac - d*b, da + bc*). Note that the order the pieces are written in is important, since quaternions don't commute. We define our conjugation again as (a, b)* = (a*, -b), and our norm as N((a, b)) = (a, b)(a, b)*. We can add, subtract, and multiply, and we still have multiplicative inverses for nonzero octonions. If you write out the octonion as a vector of real numbers coming from the real coefficients of our pair of quaternions, then N is just the square of the magnitude of the vector.

We lost the ability to say that ab = ba when we went from the complex numbers to the quaternions. When we get to the octonions, we further lose the ability to say a(bc) = (ab)c, so now not only does the order of the factors matter, it also matters which order we do the multiplications in. So we say that the octonions are nonassociative.

What we have are normed division algebras over the real numbers: normed, because we have our norm N; division algebras, because we can add, subtract, multiply, and form multiplicative inverses; and over the real numbers, because each of them contains a copy of the real numbers. So far we've seen normed division algebras over the real numbers in dimensions 1, 2, 4 and 8. We have a general idea of how to construct more using what is called the Cayley-Dickson construction, where the 2^{n}-dimensional version takes pairs of elements of the 2^{n-1}-dimensional version, with the multiplication (a, b)(c, d) = (ac - d*b, da + bc*).
We have the conjugation (a, b)* = (a*, -b) and norm N((a, b)) = (a, b)(a, b)*, and if you write out the 2^{n}-nion as a vector of real numbers coming from the real coefficients of our pair of 2^{n-1}-nions, then N is just the square of the magnitude of the vector.

Except that general idea fails when we try to apply it again! The next step in the chain, called the sedenions, would take a pair of octonions (a, b) to be a sedenion, with multiplication given by (a, b)(c, d) = (ac - d*b, da + bc*), but now division breaks! The octonions had just enough associativity, not full associativity but a weak form, to define multiplicative inverses as a^{-1} = a*/N(a); the sedenions don't even have that much, and in fact there are pairs of nonzero sedenions whose product is 0. The same goes if we take pairs of sedenions, and so on up the chain. So no more division!

Indeed, there's a theorem, called Hurwitz's theorem, that says that we only get normed division algebras over the real numbers in dimensions 1, 2, 4 and 8, and that we've met all of them. Everything else is missing a norm, or multiplicative inverses, or isn't an algebra, or isn't over the real numbers.

There are a whole bunch of weird consequences of this fact. For instance, hypersphere packing (i.e. how densely you can arrange identical hyperspheres) is easy in dimensions 1, 2, 4 and 8 (and 24, for complicated reasons). Also, parts of fermionic string theory only work in 3-, 4-, 6- and 10-dimensional spacetime, i.e. 2 + 1, 2 + 2, 2 + 4, and 2 + 8; the 2 comes from the surface traced out by a string in spacetime, and then it turns out that the other dimensions have to form a normed division algebra over the real numbers. Bosonic string theory worked in 26-dimensional spacetime, i.e. 2 + 24, and that 24 is the same 24 from the sphere packing.
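If you'd like to climb the Cayley-Dickson ladder by computer, here's a sketch in Python that stores a 2^{n}-nion as a flat list of 2^{n} real coefficients and multiplies recursively by the rule above (all the function names here are invented; the particular sedenion zero-divisor pair is a standard example):

```python
# A 2^n-nion as a flat list of 2^n real coefficients; multiplication splits
# the list in half and recurses via (a, b)(c, d) = (ac - d*b, da + bc*).

def conj(x):
    # Conjugation negates every coefficient except the real part.
    return [x[0]] + [-t for t in x[1:]]

def cd_mul(x, y):
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    def add(u, v): return [s + t for s, t in zip(u, v)]
    def sub(u, v): return [s - t for s, t in zip(u, v)]
    return (sub(cd_mul(a, c), cd_mul(conj(d), b))
            + add(cd_mul(d, a), cd_mul(b, conj(c))))

def norm(x):
    return sum(t * t for t in x)

def basis(dim, i):
    e = [0.0] * dim
    e[i] = 1.0
    return e

# Octonions (dim 8): the norm is still multiplicative...
z = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
assert abs(norm(cd_mul(z, w)) - norm(z) * norm(w)) < 1e-6

# ...but multiplication is nonassociative: e1(e2 e4) = -e7 != e7 = (e1 e2)e4
e1, e2, e4 = basis(8, 1), basis(8, 2), basis(8, 4)
assert cd_mul(e1, cd_mul(e2, e4)) != cd_mul(cd_mul(e1, e2), e4)

# Sedenions (dim 16): nonzero zero divisors, e.g. (e3 + e10)(e6 - e15) = 0,
# so the norm stops being multiplicative and division breaks down.
u = [0.0] * 16; u[3] = 1.0; u[10] = 1.0
v = [0.0] * 16; v[6] = 1.0; v[15] = -1.0
assert norm(u) != 0 and norm(v) != 0
assert all(t == 0 for t in cd_mul(u, v))
```

(The specific zero-divisor pair depends on the sign convention in the multiplication rule; this one works for the (ac - d*b, da + bc*) convention used above.)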
Also, for those who want a more concrete understanding of the quaternions: you can model the quaternions as 2x2 matrices with complex entries; specifically, (a, b) can be written as

| a, b*|
|-b, a*|

Or you can write it out as 4x4 matrices with real entries: if a = (p, q) and b = (r, s) then we can write ((p, q), (r, s)) as

| p,  q,  r,  s|
|-q,  p, -s,  r|
|-r,  s,  p, -q|
|-s, -r,  q,  p|

with maybe some sign changes because I'm too lazy to check them. Anyway, the point is that you can get your hands on quaternions without too much work if you are more comfortable with matrices. You can't do that with octonions, because octonions are not associative, but matrix multiplication is! Which is another aspect of how weird the octonions are.
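For what it's worth, one sign convention that does check out is sending the pair (a, b) to the matrix with rows (a, b) and (-b*, a*). A quick sanity check in Python, with 2x2 complex matrices as plain nested lists:

```python
# Map the quaternion (a, b), with a and b complex, to [[a, b], [-b*, a*]],
# then check that the images of i, j, k multiply like quaternions.

def mat(a, b):
    return [[a, b], [-b.conjugate(), a.conjugate()]]

def mmul(x, y):  # 2x2 matrix product
    return [[sum(x[r][t] * y[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

i, j, k = mat(1j, 0j), mat(0j, 1 + 0j), mat(0j, 1j)
minus_one = mat(-1 + 0j, 0j)

assert mmul(i, i) == mmul(j, j) == mmul(k, k) == minus_one
assert mmul(i, j) == k                 # ij = k
assert mmul(j, i) == mat(0j, -1j)      # ji = -k
```

This convention even matches the pair formula (a, b)(c, d) = (ac - d*b, da + bc*) entry by entry, since complex numbers commute.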
One question that might be a bit naive: The complex numbers originally came up as a way to solve certain polynomials, right? I've always seen them as a way to "complete" the reals. Are the quaternions "completing" anything the complex numbers can't do?
The foremost purpose of the complex numbers is to let you solve more polynomials than you could over the real numbers, yes. But they're actually so good at doing that that they let you solve every polynomial, even ones with complex coefficients. So I don't know of any good way to think of the quaternions as a sort of "completion" of the reals or complexes that arises in the same way. The motivation for them seems a lot more based on geometry (rotations in 3 dimensions, as Exohedron mentioned) and on being interesting in some more advanced contexts in algebra. Anyway Exohedron, that's a nice writeup! And convenient for me perhaps, since I was thinking of writing something about sums of squares for a basic number theory post, and quaternions are somewhat relevant for that. :P
As @Emu stated, the quaternions were thought up for geometric reasons. They aren't a completion or closure of the complex numbers in any meaningful way, because as noted, the complex numbers aren't missing anything from an algebraic standpoint. There's really only one algebraic-ish thing that you can add to the complex numbers, and that's a point at infinity, but even that's pushing the notion of "algebraic"; really more algebro-geometric. However, since we're speaking of algebraic closure, the quaternions are almost algebraically closed! There's an issue with polynomial evaluation being somewhat unhappy in the noncommutative case. In particular, something like P(x) = ix + xi + j has no solutions. Say that a term like x^{3}axbx^{2} has degree 6. Then if we only look at monic polynomials, i.e. polynomials of the form x^{n} + (terms of degree less than n), with coefficients in the quaternions, then we do always have a solution, for the same reasons that there are solutions in the complex numbers: namely, look at quaternions of a given norm; these form a 3-sphere, and then topological arguments say that as you fiddle with the norm, the image of the 3-sphere under the map x -> P(x) has to pass through 0 for some norm. Similarly, the octonions are also almost algebraically closed. Here we have to be super careful because x^{3}axbx^{2} doesn't even mean anything unless we put in parentheses, but once we do, we can again show that any monic polynomial with coefficients in the octonions has a solution, again by playing with spheres.
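To make the ix + xi + j example concrete: writing x = a + bi + cj + dk, the cross terms cancel and ix + xi = -2b + 2ai, so the j-coefficient of P(x) is always 1 and P can never hit 0. A quick random spot check in Python (quaternions as pairs of complex numbers; the helper names are made up):

```python
import random

# Quaternion x = a + bi + cj + dk stored as the pair (a + bi, c + di),
# multiplied by (a, b)(c, d) = (ac - d*b, da + bc*).
def qmul(x, y):
    a, b = x
    c, d = y
    return (a * c - d.conjugate() * b, d * a + b * c.conjugate())

def qadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

i, j = (1j, 0j), (0j, 1 + 0j)

random.seed(1)
for _ in range(1000):
    x = (complex(random.uniform(-9, 9), random.uniform(-9, 9)),
         complex(random.uniform(-9, 9), random.uniform(-9, 9)))
    p = qadd(qadd(qmul(i, x), qmul(x, i)), j)   # P(x) = ix + xi + j
    assert p[1] == 1                            # the j-part is always exactly 1
```

(The check is exact, not approximate: the c and d parts of ix and xi are floating-point negatives of each other, so they cancel without roundoff.)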
Nonassociativity always looked really weird to me, even though it's actually used fairly often in physics (cross products are definitely nonassociative).
The thing about nonassociativity in physics is that it's very tightly controlled via Lie algebras. A Lie bracket, of which the cross product is the most well-known nontrivial variant, always obeys the Jacobi identity, [[x, y], z] = [x, [y, z]] + [[x, z], y], so while it's not associative on the nose, we can move brackets around and predict which extra terms are going to show up. Also, as I mentioned in one of the posts about things that look like derivatives, the bracket has derivative-product-rule-y behavior with respect to itself, so the nonassociativity is actually a feature rather than a bug.
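For the record, that bracketed form of the Jacobi identity is easy to check for cross products numerically; a tiny Python sketch on a few arbitrary vectors:

```python
# Check (x × y) × z = x × (y × z) + (x × z) × y, the cross-product form of
# the Jacobi identity [[x, y], z] = [x, [y, z]] + [[x, z], y].

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

x, y, z = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), (0.0, 1.0, -1.5)
lhs = cross(cross(x, y), z)
rhs = tuple(p + q for p, q in zip(cross(x, cross(y, z)),
                                  cross(cross(x, z), y)))
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```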
Oh, right, I completely forgot about Lie brackets! I knew about them because of Poisson brackets and commutators, and it literally never occurred to me to associate them (heheh) with cross products.
Is this related to all the screaming about Lie group E8 that quantum gravity people were doing a while back? I didn't and don't have enough math or physics background to follow the ins and outs of that.
Yes! Lie groups (and therefore Lie brackets) show up a lot in modern physics because symmetry has become a big topic in physics, thanks to Emmy Noether (who totally doesn't get enough credit). For every observable quantity that is conserved, things like electric charge or quark color, etc., there is a corresponding symmetry group, and these are usually Lie groups. The hope is that we can bundle all of these individual symmetry groups into one big group that describes all of the conserved quantities, and the E8 setup was an attempt at that. It got popular for a while because it looked promising, and also since E8 is one of those objects that is the biggest and most complicated object in its class, to the point of barely existing*. So of course Mother Nature would utilize it, right?

But alas, some counting arguments showed that E8 couldn't be the promised big group, at least not in the way that that particular attempt outlined it; it predicted too many particles of the wrong types, particles that we ought to have observed by now if they existed. E8 will probably show up in some viable Theory of Everything, but it isn't the big symmetry group that quantum theorists are looking for.

Mathematicians of my ilk, i.e. Lie theorists, have a love/hate relationship with E8; we love it because it's so bizarre and yet it shows up in so many places, and we hate it because it is an awful, awful thing to try to work with. Too weird to make analogies for, too big to do explicit computations on. But we can't escape it. For instance, the hypersphere packing solution in 8 dimensions is called the E8 lattice, because it's intimately connected with E8 the Lie group.

*Basically, there are infinitely many (simple) Lie groups based on linear algebra with real, complex or quaternionic** entries.
If we try to use octonionic entries, almost everything breaks almost immediately due to the nonassociativity***; because of those "almost"s, we get some Lie groups based on the octonions, but we only get five of them, and E8 is the largest of those.

**It's all connected! IT'S ALL CONNECTED!

***ILLUMINATI CONFIRMED!
Goodstein Sequences

For any natural number k, we can write other natural numbers as sums of powers of k; for example, with k = 2, we can write 37 as 2^{5} + 2^{2} + 1. We get hereditary base 2 by also rewriting that exponent 5 as 2^{2} + 1, so that 37 in hereditary base 2 is 2^{2^{2} + 1} + 2^{2} + 1. In general, a number in hereditary base k is written with just exponentiation and addition, and every number appearing in the expression must be either k or 1. So 37 in hereditary base 3 would be 3^{3} + 3^{1 + 1} + 1.

For a natural number n, we define the Goodstein sequence G(n) as follows:
Write n in hereditary base 2. Call this G(n)(1).
Change all of the 2s in the expression to 3s to get a new number, subtract 1, and write the result in hereditary base 3. Call this G(n)(2).
Change all of the 3s in the expression to 4s to get a new number, subtract 1, and write the result in hereditary base 4. Call this G(n)(3).
Etc. The sequence continues like this until you reach 0.

Try it out for n = 2 and n = 3. Do not try it for n = 4. Or at least, try it a little bit for n = 4, but don't expect the end to come any time soon.

Goodstein's theorem: every Goodstein sequence eventually reaches 0 after a finite number of steps.

Sketch of a proof of Goodstein's theorem: define f of a hereditary base-k number as what you get when you replace all the ks in the expression by ω, the ordinal sitting just above all of the natural numbers. So f(2^{2} + 1) = ω^{ω} + 1. Now define P(n)(k) = f(G(n)(k)). Note that the value of f doesn't change during the base-changing part of a step, e.g. f(2^{2} + 1) = f(3^{3} + 1); the only thing that affects it is the subtracting-1 part, and subtracting 1 strictly lowers it. So P(n)(k) > P(n)(k+1) as long as G(n)(k) isn't 0. Now we're dealing with ordinals, which are well-ordered, so the sequence P(n) can't descend forever; therefore the sequence G(n) must terminate at 0 after finitely many steps.
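If you'd rather let a computer run the sequences than do it by hand, here's a short Python sketch (function names invented) of the bump-the-base step and the sequence itself:

```python
# One Goodstein step: write n in hereditary base b, replace every b by b+1,
# evaluate, then subtract 1.

def bump(n, b):
    """Rewrite n in hereditary base b, change all the b's to b+1, evaluate."""
    if n < b:
        return n
    result, power = 0, 0
    while n:
        n, digit = divmod(n, b)
        # exponents are themselves in hereditary base b, so bump them too
        result += digit * (b + 1) ** bump(power, b)
        power += 1
    return result

def goodstein(n, steps):
    """First `steps` terms of the Goodstein sequence starting at n."""
    seq, base = [n], 2
    while len(seq) < steps and seq[-1] > 0:
        seq.append(bump(seq[-1], base) - 1)
        base += 1
    return seq

print(goodstein(3, 10))   # → [3, 3, 3, 2, 1, 0]
print(goodstein(4, 6))    # → [4, 26, 41, 60, 83, 109]
```

You can see G(4) already pulling away; it does reach 0 eventually, but not for a very, very long time.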
Now for the interesting bit of this: Goodstein's theorem is not provable from first-order Peano Arithmetic. What does that mean?

If we go back to the Peano axioms, we have a bunch of nice statements, and then we have that pesky induction statement: for all propositions P with one variable, if P(0) is true and P(k) implies P(S(k)), then P(n) is true for all natural numbers n. The "for all propositions" bit is tricky for logicians to deal with, in that it's too powerful for a lot of techniques to show to be consistent. Or at least, it demands a lot for the language to be able to say "for all propositions", since propositions are in a sense infinitely more complicated than individual numbers. Logicians are much happier to do things in "first-order" logic, where you're allowed to say "for all natural numbers" and "there exists a natural number such that" but aren't allowed to say things like "for all sets of natural numbers" or "for all propositions with one variable" (since a proposition can be identified with the set of all natural numbers for which the proposition holds).

So there is a "first-order" reformulation of the Peano axioms, where instead of saying "for all propositions", we just have a separate induction axiom for each proposition. Now the "for all"-ing is being done outside of the system itself, which makes it easier to prove things about the Peano axioms. Only now we have the issue of having a lot of systems that obey all of the Peano axioms, and not all of them look like the natural numbers we're used to. We call these alternate systems "nonstandard natural numbers". For instance, some of them have numbers that are bigger than every standard natural number. Some of them aren't even countable! But they all have the notions of 0 and successor and they all obey the induction axioms.

For some of these systems, Goodstein's theorem is actually false! There are Goodstein sequences that don't terminate! But we have a proof! Or so I've claimed.
Alas, the proof I sketched out requires more power than the Peano axioms actually provide. In particular, the statement "it can't descend forever" actually requires more induction strength than the Peano axioms give, in any formulation. It requires what is called "transfinite induction", i.e. induction past ω, up to an ordinal called ε_{0}, and for some systems that obey the Peano axioms, you can't induct that far.
I wish the parts of mathematics that I really like didn't have so many technical prerequisites. Instead I find myself talking a lot about logic for reasons that I don't think I understand.
I wish I'd seen this video when it came out instead of 2 weeks later, but just. Hamilton. (Since you were talking about quaternions I thought it'd be relevant)