Math(s)! Currently: Quantum Groups

Discussion in 'General Chatter' started by Exohedron, Mar 14, 2015.

  1. Exohedron

    Exohedron Doesn't like words


    I've probably said it before but this guy makes the coolest stuff.
     
    • Winner x 1
  2. Exohedron

    Exohedron Doesn't like words

    When you're trying to prove that a thing can't happen, and you come up with all sorts of restrictions and constraints and end up with one tiny, specific case that you can't manage to eliminate, so on a whim you see if you can actually construct that special case, and then there it is, mocking you and all of that effort.
     
    • Witnessed x 1
  3. evilas

    evilas Sure, I'll put a custom title here

    I mean, "Can't happen except in this one very specific case" is still a useful thing to prove for many things. Maybe you can figure out a way to construct all the special cases or smth.
     
  4. Exohedron

    Exohedron Doesn't like words

    I'm pretty sure I've got a complete classification + construction method for the counterexample cases. I just want them to not exist at all.
     
  5. Exohedron

    Exohedron Doesn't like words

    Supersymmetry as negative dimensions

    My favorite fact from when I was doing Lie theory via Penrose diagram stuff is that symplectic forms are like inner products on negative-dimensional spaces.

    Pretend for a moment that that's the end of this post.


    Consider an n-dimensional vector space V with a bilinear inner product g(-,-). The group O(V,g) preserves this inner product, and is isomorphic to the group O(n).
    If we take the trace of the identity on this space, we get n.
    If we take a tensor product of this space with itself some number of times, we can ask about the O(V,g)-invariant subspaces. Each of these subspaces W can be pulled out by a projection operator PW such that for any two subspaces W and X, PWPX = PXPW, and PWPW = PW because it's a projection. By the usual notion of projection, the trace of PW is the dimension of W, and it ends up being a polynomial in n.
    For instance, consider the tensor square of V, which I'll write as V2 since I'm too lazy to locate a tensor product symbol.
    V2 is an n2-dimensional space. We can break it into two spaces that I'll call Sym2(V) and Asym2(V) and these are also O(V,g)-invariant. If we take an element (v1,v2) in V2, we get that the symmetrizing projector PS2 sends it to
    (v1,v2) + (v2,v1)
    times some normalization factor and, similarly, the antisymmetrizing projector PA2 sends it to
    (v1,v2) - (v2,v1)
    times some normalization factor.
    The trace of PA2 is n(n-1)/2. For PS2 the trace is n(n+1)/2, and these are in fact the dimensions that you'd get for those spaces.
    But Sym2(V) contains a smaller O(V,g)-invariant space: we make a projection operator Pg that sends (v1,v2) to
    g(v1,v2) ∑i (ei,fi)
    times some normalization factor, where the ei are a basis of V and fi are the dual basis of V with respect to g:
    g(ei, fj) = 1 if i = j, 0 otherwise.
    Pg yields a subspace of dimension 1, while its complement in Sym2(V), the projection
    PS2(1 - Pg),
    yields a subspace of dimension n(n+1)/2 - 1.
    So those are our polynomials:
    pA2(V) = n(n-1)/2
    pg(V) = 1
    pS2(V) = n(n+1)/2 - 1
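    If you want to watch those traces come out of actual matrices, here's a quick numpy sketch; the choice n = 5 and the swap-operator construction are just my own illustration, nothing canonical:

        import numpy as np

        n = 5
        I = np.eye(n * n)

        # Swap operator T(v1,v2) = (v2,v1) on V2, as an n^2-by-n^2 matrix.
        T = np.einsum("ik,jl->ijlk", np.eye(n), np.eye(n)).reshape(n * n, n * n)

        PS2 = (I + T) / 2   # symmetrizing projector
        PA2 = (I - T) / 2   # antisymmetrizing projector

        # Pg for g the standard inner product (so fi = ei): projects onto the
        # span of ∑i (ei,ei), normalized so that Pg is actually a projection.
        g = np.eye(n).reshape(n * n) / np.sqrt(n)
        Pg = np.outer(g, g)

        print(np.trace(PA2))             # n(n-1)/2 = 10
        print(np.trace(Pg))              # 1
        print(np.trace(PS2 @ (I - Pg)))  # n(n+1)/2 - 1 = 14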

    Now, suppose that n is even and that we had instead an n-dimensional vector space U equipped with a symplectic form w, which is like an inner product but instead of having g(a,b) = g(b,a), we have w(a,b) = -w(b,a).
    Again, taking the trace of the identity yields n.
    We can again build tensor products on our space U and look at projections onto Sp(U,w)-invariant subspaces and take the traces of those projection operators.
    So we again start with PA2 and PS2. The trace of PS2 is n(n+1)/2.
    If we look at Asym2(U), we get that we can use w to get a new projection operator Pw that sends (u1,u2) to
    w(u1,u2) ∑i (ei,fi)
    times some normalization factor, where the ei are a basis of U and fi are the dual basis of U with respect to w:
    w(ei, fj) = 1 if i = j, 0 otherwise.
    And we get that the space that Pw projects to has dimension 1, while its complement in Asym2(U), the projection
    PA2(1 - Pw),
    yields a subspace of dimension n(n-1)/2 - 1.
    So those are our polynomials:
    pS2(U) = n(n+1)/2
    pw(U) = 1
    pA2(U) = n(n-1)/2 - 1

    But note that n(n+1)/2 = (-n)(-n-1)/2, so we get that pS2(U) gives the same thing as pA2(V) if we swap n with -n. Also, n(n-1)/2 - 1 = (-n)(-n+1)/2 - 1, so we get that pA2(U) gives the same thing as pS2(V) if we swap n with -n. And neither pg(V) nor pw(U) contains any n at all, and they are identical.

    So we get that the polynomials that we get for the symmetric stuff for V correspond to the polynomials we get for the antisymmetric stuff for U, and vice-versa, if we also swap n with -n (at least up to some overall signs).
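    If you don't trust the algebra, this is the kind of thing sympy will happily check for you; the variable names here are just mine:

        import sympy as sp

        n = sp.symbols("n")
        pA2_V = n * (n - 1) / 2        # antisymmetric side of (V, g)
        pS2_V = n * (n + 1) / 2 - 1    # symmetric complement in (V, g)
        pS2_U = n * (n + 1) / 2        # symmetric side of (U, w)
        pA2_U = n * (n - 1) / 2 - 1    # antisymmetric complement in (U, w)

        assert sp.expand(pS2_U.subs(n, -n)) == sp.expand(pA2_V)
        assert sp.expand(pA2_U.subs(n, -n)) == sp.expand(pS2_V)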

    The more general pattern is that all the O(V,g)-invariant subspaces of tensor powers of V can be created using symmetrizers, antisymmetrizers, and Pg, and similarly for the Sp(U,w)-invariant subspaces of tensor powers of U, and they can be matched up by swapping symmetrizers with antisymmetrizers (and vice-versa) and swapping Pg with Pw. The dimensions of the spaces, as functions of n, are polynomials, and corresponding spaces have matching polynomials only with n replaced by -n.
    So we can view U with symplectic form w as a negative-dimensional version of V with inner product g.

    But that's not all!
    In supersymmetry, there is a notion of a "supervector space", by which we mean a vector space which is the direct sum of two components, which I'll call U and V, not necessarily of the same dimension. The superdimension of this supervector space is the dimension of V minus the dimension of U, and a superinner product on this supervector space acts like an inner product on V, like a symplectic form on U, and vanishes if you try it on something from V paired with something from U.
    So, according to supervector-space theory, the dimension of the symplectic part of the supervector space is also negative.

    And finally, the really physics bit is that we have fermions and bosons: if you have a wavefunction describing a pair of identical bosons and you swap the two particles, the wavefunction remains the same, while if you have a wavefunction describing a pair of identical fermions and you swap them, the wavefunction picks up a minus sign.
    Usually we describe this using what are called spin-statistics, saying that bosons have integer spin and fermions have spin that is 1/2 plus an integer.
    According to supersymmetry, every particle has a corresponding superpartner, and the superpartners of fermions are all bosons, while the superpartners of bosons are fermions. And you think "okay, so the superpartner of a spin-0 boson has spin 1/2, while the superpartner of a spin-1/2 fermion has spin...?" and you have to ask why the spins change in a particular way, and if the superpartner of a superpartner has the same spin as the original particle, and it becomes a little messy.
    But instead of that, you could say "superpartners live in negative-dimensional space" and that's why the symmetric stuff on our end, the bosons, have antisymmetric superpartners, and vice-versa.

    Of course, if you dig into the details this falls apart a bit, because the symplectic analogues of the spinor representations are all infinite-dimensional. Oops.
     
    Last edited: Dec 11, 2019
  6. Exohedron

    Exohedron Doesn't like words

    I think the most important thing I learned at the JMM this year was that Cherry Arbor Designs exists. It's a little pricey, but I now have a very nice set of Penrose Ver III tiles, jigsawed so that they have to obey the edge-matching rules that force an aperiodic tiling.
     
  7. Exohedron

    Exohedron Doesn't like words

    I was talking to a colleague today about Hermitian matrices. I usually don't really care about Hermitian matrices because, being a Lie Theorist, I care more about anti-Hermitian matrices. But if you're doing quantum mechanics then you often care about Hermitian matrices, because physics likes putting factors of i in unnecessary places. Also observables.

    For those of us who have forgotten what a Hermitian matrix is and why we care about it, consider a complex, finite-dimensional vector space V = Cn. We can consider linear transformations from V to itself, and we can write them as n-by-n matrices with complex entries. Given such a matrix, we can consider its transpose, i.e. flip it along the upper-left-to-lower-right diagonal, and we can consider its complex conjugate, i.e. take each entry and replace it with the complex conjugate of that entry, and we can consider its conjugate transpose, i.e. we do both operations.
    Given a matrix A, we often write its conjugate transpose as A†, pronounced "A dagger" like you're the villain in a Shakespeare play.
    Given two matrices A and B, (A+B)† = A† + B†, while (AB)† = B†A†; note the order switching.

    A Hermitian matrix is a matrix A such that A = A†. An anti-Hermitian matrix is a matrix A such that A = -A†. Note that if A is Hermitian, then iA is anti-Hermitian, and vice-versa.

    Since we're looking at vectors in V, we can define an inner product on V as
    <v,w> = ∑i vi wi*
    Then we get that
    <v, Aw> = <A†v, w>
    and
    <Av, w> = <v, A†w>
    We get that <v, Av> is real for all v in V if and only if A is Hermitian. This is why quantum physicists like Hermitian matrices: to compute the expected value of an observable h in a state φ, they compute <φ, Hφ> for H the linear operator corresponding to the observable h, and since they want real numbers they only consider Hermitian operators H.
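    Here's a quick numpy sanity check of that reality claim, with made-up random data (note that numpy's vdot conjugates its first argument, which matches the inner product above up to which slot gets the conjugate):

        import numpy as np

        rng = np.random.default_rng(0)
        v = rng.normal(size=3) + 1j * rng.normal(size=3)
        M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
        H = (M + M.conj().T) / 2     # the Hermitian part of M

        print(np.vdot(v, M @ v))     # generically has a nonzero imaginary part
        print(np.vdot(v, H @ v))     # imaginary part is zero up to rounding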

    Okay, technically when I say matrix I probably mean "operator" and when I say "Hermitian" I mean self-adjoint, because there's a chance that n is infinity.

    One nice fact about Hermitian matrices is that a Hermitian matrix can be fully diagonalized, and all of its eigenvalues are real; similarly, an anti-Hermitian matrix is fully diagonalizable and all of its eigenvalues are imaginary.
    In fact, we can sort-of analogize Hermitian and anti-Hermitian matrices to real and imaginary numbers, in the following sense:
    Given a matrix A, A + A† is Hermitian, as is AA†. This is akin to the fact that given a complex number z, z + z* is real, as is zz*, where z* indicates the complex conjugate of z. Moreover, AA† is "positive", in the sense that all of its eigenvalues are nonnegative real numbers, in the same way that zz* is always a nonnegative real number.
    So we define the "absolute value" of a matrix A as
    |A| = √(AA†)
    akin to how the magnitude of a complex number z is
    |z| = √(zz*)

    Okay, but what is √ of a matrix? Well, that's a little tricky in general, but fortunately we're dealing with the nice case of taking the square root of a positive matrix. First, we diagonalize the matrix AA†, getting a matrix whose only nonzero entries are along the diagonal. These diagonal entries are the eigenvalues of AA†, which are all nonnegative real numbers, so we can take square roots and get another diagonal matrix whose entries are all nonnegative real numbers. Then we change the basis back to whatever it was before we diagonalized.
    And that gets us |A|, whose eigenvalues are the singular values of A; when A is normal (in particular, Hermitian or anti-Hermitian), these are the magnitudes of the eigenvalues of A.
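    In numpy terms, the recipe looks something like this (the matrix A is arbitrary made-up data; eigh is numpy's eigensolver for Hermitian matrices, which AA† is):

        import numpy as np

        A = np.array([[1.0 + 2.0j, 3.0],
                      [0.0,        4.0 - 1.0j]])

        AAdag = A @ A.conj().T                   # AA†, Hermitian and positive
        eigvals, U = np.linalg.eigh(AAdag)       # diagonalize
        absA = U @ np.diag(np.sqrt(eigvals.clip(min=0))) @ U.conj().T

        assert np.allclose(absA @ absA, AAdag)   # |A| squared gives back AA†
        print(np.linalg.eigvalsh(absA))          # the singular values of A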

    Unlike complex numbers, matrices generally don't commute, and indeed in general A and A† don't commute. Aww, that's unfortunate. This also means that while the sum of two Hermitian matrices is Hermitian, the product of two Hermitian matrices generally isn't Hermitian:
    (AB)† = B†A† = BA, not necessarily equal to AB.
    If we want to get a "product" that takes two Hermitian matrices and spits out a Hermitian matrix, the simplest way to do it is called "Jordan multiplication", in which we say that
    A * B = (AB+BA)/2
    Now note that if A and B are Hermitian, then A * B is also Hermitian; you can work it out by hand if you want to.
    Also note that this kind of multiplication is commutative, A * B = B * A. Nice, right? But unfortunately, it's not associative:
    (A * B) * C = ((AB + BA)/2) * C = (ABC + BAC + CAB + CBA)/4
    A * (B * C ) = A * ((BC + CB)/2) = (ABC + ACB + BCA + CBA)/4
    which aren't quite equal on the nose. And that's kind of awkward. But that's what happens when you try to multiply observables in quantum mechanics.
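    If you'd rather watch the associativity fail than expand it by hand, here's a quick check with random Hermitian matrices (all the names here are my own):

        import numpy as np

        rng = np.random.default_rng(0)

        def random_hermitian(n):
            M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            return (M + M.conj().T) / 2

        def jordan(A, B):
            return (A @ B + B @ A) / 2

        A, B, C = (random_hermitian(3) for _ in range(3))

        AB = jordan(A, B)
        assert np.allclose(AB, AB.conj().T)       # A * B is again Hermitian
        assert np.allclose(AB, jordan(B, A))      # and * is commutative
        print(np.allclose(jordan(jordan(A, B), C),
                          jordan(A, jordan(B, C))))  # False: not associative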
     
    • Like x 1
  8. Exohedron

    Exohedron Doesn't like words

    The product I mentioned in the last post, *, gives what is called a Jordan algebra. In particular, a Jordan algebra is a vector space equipped with a multiplication operation * that is commutative and obeys the Jordan relation:
    (x * y) * (x * x) = x * (y * (x * x))

    If you start with an associative algebra and define a new multiplication by
    x * y = (xy + yx)/2
    you get what is called a special Jordan algebra. So, for example, you can take the algebra of square matrices over the real numbers, or the complex numbers, or the quaternions. Subalgebras of special Jordan algebras, i.e. subspaces that are closed under the Jordan multiplication, are also called special Jordan algebras. In particular, the Hermitian matrices of each type form a special Jordan algebra.
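    You can also check the Jordan relation numerically for this symmetrized product; random complex matrices make a perfectly good special Jordan algebra for this purpose:

        import numpy as np

        rng = np.random.default_rng(0)

        def jordan(x, y):
            return (x @ y + y @ x) / 2

        x = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        y = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

        xx = jordan(x, x)
        # (x * y) * (x * x) == x * (y * (x * x))
        assert np.allclose(jordan(jordan(x, y), xx), jordan(x, jordan(y, xx)))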

    Another example is to take Rn+1, writing elements as pairs (x,s) for x an n-dimensional vector and s a real number. We define the product as
    (x,s) * (y,t) = (tx + sy, x·y + st)
    where x·y indicates the dot product. We call this Jordan algebra a spin factor, since spin factors are related to Clifford algebras, which are related to particle spin. Note that these are also special, since they can be embedded as Hermitian matrices of large enough dimension.
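    Here's a sketch of the spin factor product, encoding the pair (x, s) as a (numpy vector, float) tuple; this is just my own toy encoding:

        import numpy as np

        def spin_product(a, b):
            x, s = a
            y, t = b
            return (t * x + s * y, float(x @ y) + s * t)

        a = (np.array([1.0, 2.0]), 3.0)
        b = (np.array([0.5, -1.0]), 2.0)
        ab, ba = spin_product(a, b), spin_product(b, a)
        assert np.allclose(ab[0], ba[0]) and ab[1] == ba[1]  # commutative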

    You can consider the simple Jordan algebras, i.e. Jordan algebras that don't decompose as a direct sum of two smaller Jordan algebras. We can also consider the formally real Jordan algebras, in which a sum of squares vanishes only if each square is itself 0. Then we get a nice statement about the simple, finite-dimensional, formally real Jordan algebras:
    They're Hermitian matrices over the real, complex, or quaternionic numbers, or they're spin factors, or they're exceptional.

    And yes, that means that they're all special or exceptional, which is a sign that something might be wrong with the naming conventions.

    What are the exceptional Jordan algebras? They come from the octonions, of course.
    Consider a 3-by-3 matrix of octonions that is octonionic-Hermitian, i.e. if you take the transpose and then take the octonionic-conjugate of each entry you get the original matrix back. Note that matrices over the octonions don't form an associative algebra, since the octonions themselves aren't associative. But the 3-by-3 octonionic-Hermitian matrices manage to form a Jordan algebra, barely. 4-by-4 octonionic-Hermitian matrices do not.

    Anyway, I think I've mentioned this before, but that's why there are only finitely many exceptional simple Lie algebras/Dynkin diagrams/Coxeter groups; they can all be described by Jordan algebras and objects derived from Jordan algebras, but since we run out of octonionic Jordan algebras at n = 3, while we can do some funny business to extend the range a bit, we quickly run out of exceptional simple Lie algebras as well.
     
  9. Exohedron

    Exohedron Doesn't like words

    While I understand the reasoning behind the term, I'm still bothered by the fact that it's called "noncommutative geometry" when it's the algebras that are noncommutative; the geometric objects themselves are noncocommutative.
     
  10. Exohedron

    Exohedron Doesn't like words

    Quantum Spaces, Quantum Groups

    In the mathematical variant of the quantum world, we get a lot of mileage out of pointing at things and saying "what if they didn't commute?" For instance, complex numbers commute, in that if you multiply two complex numbers together it doesn't matter which way you order them, but matrices don't commute, in that the order does matter. So in a sense, matrices are quantum numbers (this goes back to my statement a few posts back, about the analogy between complex numbers and Hermitian matrices).

    So let's talk about spaces. One way to talk about a point P in a space is to give its coordinates, like (x, y, z). These are like functions, x(P), y(P), z(P). And from those basic coordinates, you can define more complicated functions, like f(P) = x(P)2 + y(P) - x(P)z(P).
    And one way to describe the shape of spaces is to invert this picture, and instead of starting with coordinates, you take the entire set of functions to start with and ask what shapes can give rise to those kinds of functions.
    For instance, if you have a sphere, you can describe points on it via (x, y, z), but now you have a special function x2 + y2 + z2 that always takes on a constant value regardless of where you are. This is different from being on a plane, where you don't have a coordinate system that will do that for you everywhere. So we can distinguish between spheres and planes via this kind of thing.

    Now, given a point P and two functions f and g, we can look at the function fg, defined by
    (fg)(P) = f(P)g(P)
    Note that since f(P) and g(P) are just numbers, (fg)(P) = (gf)(P), and so we consider fg and gf to be the same. Functions commute under multiplication.

    But what if they didn't commute?

    Well, how are we going to make them not commute? In quantum mechanics, the easy way is to have f and g spit out linear operators rather than complex numbers, and since linear operators don't commute we get that f(P) and g(P) don't commute, so f and g don't commute.
    But for mathematicians, we can go a different route.

    We take the statement

    (fg)(P) = f(P)g(P)

    and break it a little bit. We define a comultiplication operator Δ that sends points to pairs of points (well, tensor products of points). So we write

    Δ(P) = (P(1), P(2))

    And we now write

    (fg)(P) = f(P(1))g(P(2))

    For classical spaces, P(1) = P(2) = P, but for quantum spaces, we don't even need P(1) and P(2) to be the same! Now we compare fg and gf:

    (fg)(P) = f(P(1))g(P(2))
    (gf)(P) = g(P(1))f(P(2))

    Now f and g are acting on different things, so the two expressions yield different results! So now we have that fg and gf aren't the same anymore! It's like if multiplying the x coordinate of a point by the y coordinate yielded different results than if you did it the other way around, despite them both yielding numbers.
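    Here's a toy version of that trick in Python; the three-point "space" and the particular Δ are entirely made up, just to show the mechanism:

        # Functions on a three-point space, as dicts from points to numbers.
        points = ["P", "Q", "R"]
        f = {"P": 1.0, "Q": 2.0, "R": 3.0}
        g = {"P": 5.0, "Q": 7.0, "R": 11.0}

        # Classical, cocommutative comultiplication: Delta(P) = (P, P).
        classical_delta = {p: (p, p) for p in points}
        # A noncocommutative Delta, chosen arbitrarily.
        quantum_delta = {"P": ("Q", "R"), "Q": ("P", "P"), "R": ("R", "Q")}

        def multiply(f, g, delta):
            # (fg)(P) = f(P(1)) g(P(2)), where Delta(P) = (P(1), P(2))
            return {p: f[pair[0]] * g[pair[1]] for p, pair in delta.items()}

        # Classically, fg and gf agree at every point:
        assert multiply(f, g, classical_delta) == multiply(g, f, classical_delta)

        # With the noncocommutative Delta they don't:
        print(multiply(f, g, quantum_delta)["P"])  # f(Q)g(R) = 2 * 11 = 22
        print(multiply(g, f, quantum_delta)["P"])  # g(Q)f(R) = 7 * 3 = 21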

    If P(1) and P(2) are the same, we say that Δ is cocommutative (since it's a comultiplication where the corresponding multiplication of functions is commutative), but if they aren't then we say that Δ is noncocommutative. And just as how we say a group is commutative if its multiplication is commutative, we say that a space is noncocommutative if its comultiplication is noncocommutative.

    In mathematics, the study of a space via its functions is often considered algebraic geometry, where classically the functions are commutative. In the weird quantum setting, where the functions are noncommutative, the study is called noncommutative geometry. I would consider the spaces to be the geometry, and hence that the study should be called noncocommutative geometry, but I don't make the rules.

    Anyway, what do we know about noncocommutative spaces? That depends. We know a lot about functions on said spaces, so if your questions about them can be formulated entirely in terms of those functions, we're good. But if you're asking about "what do the points look like? Are there lines? Are there curves?" and so on, purely geometric or topological questions, then it's quite a bit more mysterious, because the points clearly aren't like classical points, or else the functions would be commutative. So what are they?

    This is honestly my favorite thing about the study of noncocommutative stuff.
    As a representation theorist, I study representations of groups. A quantum group is what you get when you try to make a group out of a noncocommutative space; you can turn a set into a group by sticking a multiplication on it and defining an identity and an inverse map, and you can turn a noncocommutative space into a quantum group by sticking a multiplication on it and defining an identity and an inverse map.
    A representation of a group is just a bunch of functions on the group, so a representation of a quantum group is just a bunch of functions on the quantum group. Thus it is often the case that we can describe representations of a quantum group, since functions on noncocommutative spaces are understood decently well, but when you ask what a quantum group is as an object, you just get shrugs in return.
     
  11. Exohedron

    Exohedron Doesn't like words

    This is actually kind of hilarious, but also kind of cool from a mathematical standpoint, that the parity error suddenly disappears.

     