Math(s)! Currently: Summer of Math Exposition

Discussion in 'General Chatter' started by Exohedron, Mar 14, 2015.

  1. Exohedron

    Exohedron Doesn't like words


    I've probably said it before but this guy makes the coolest stuff.
     
    • Winner x 1
  2. Exohedron

    Exohedron Doesn't like words

    When you're trying to prove that a thing can't happen and you come up with all sorts of restrictions and constraints and end up with one tiny, specific case that you can't manage to eliminate so on a whim you see if you can actually construct that special case and then there it is, mocking you and all of that effort.
     
    • Witnessed x 1
  3. evilas

    evilas Sure, I'll put a custom title here

    I mean, "Can't happen except in this one very specific case" is still a useful thing to prove for many things. Maybe you can figure out a way to construct all the special cases or smth.
     
  4. Exohedron

    Exohedron Doesn't like words

    I'm pretty sure I've got a complete classification + construction method for the counterexample cases. I just want them to not exist at all.
     
  5. Exohedron

    Exohedron Doesn't like words

    Supersymmetry as negative dimensions

    My favorite fact from when I was doing Lie theory Penrose stuff is that the symplectic forms are like inner products on negative dimensional spaces.

    Pretend for a moment that that's the end of this post.


    Consider an n-dimensional vector space V with a symmetric, nondegenerate bilinear inner product g(-,-). The group O(V,g) of linear maps preserving this inner product is isomorphic to the group O(n).
    If we take the trace of the identity on this space, we get n.
    If we take a tensor product of this space with itself some number of times, we can ask about the O(V,g)-invariant subspaces. Each of these subspaces W can be pulled out by a projection operation PW such that for any two spaces W and X, PWPX = PXPW, and we get that PWPW = PW because it's a projection. By the usual notion of projection, the trace of PW is the dimension of W, and it ends up being a polynomial in terms of n.
    For instance, consider the tensor square of V, which I'll write as V2 since I'm too lazy to locate a tensor product symbol.
    V2 is an n2-dimensional space. We can break it into two spaces that I'll call Sym2(V) and Asym2(V) and these are also O(V,g)-invariant. If we take an element (v1,v2) in V2, we get that the symmetrizing projector PS2 sends it to
    (v1,v2) + (v2,v1)
    times some normalization factor and, similarly, the antisymmetrizing projector PA2 sends it to
    (v1,v2) - (v2,v1)
    times some normalization factor.
    The trace of PA2 is n(n-1)/2. For PS2 the trace is n(n+1)/2, and these are in fact the dimensions that you'd get for those spaces.
    But Sym2(V) contains a smaller O(V,g)-invariant space: we make a projection operator Pg that sends (v1,v2) to
    g(v1,v2) ∑i (ei,fi)
    times some normalization factor, where the ei are a basis of V and fi are the dual basis of V with respect to g:
    g(ei, fj) = 1 if i = j, 0 otherwise.
    Pg yields a subspace of dimension 1 while its complement in Sym2(V), the projection
    PS2(1 - Pg)
    yields a subspace of dimension n(n+1)/2 - 1
    So those are our polynomials:
    pA2(V) = n(n-1)/2
    pg(V) = 1
    pS2(V) = n(n+1)/2 - 1
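
    If you want to see those traces come out of actual matrices, here's a rough numpy sketch; to keep it simple I'm taking g to be the standard dot product, so the dual basis fi is just ei again:

    import numpy as np

    n = 5
    I = np.eye(n)

    # Operators on the tensor square V2, written with four indices ((i,j),(k,l))
    # and then reshaped into n^2-by-n^2 matrices.
    sym  = 0.5 * (np.einsum('ik,jl->ijkl', I, I) + np.einsum('il,jk->ijkl', I, I))
    asym = 0.5 * (np.einsum('ik,jl->ijkl', I, I) - np.einsum('il,jk->ijkl', I, I))
    P_g  = np.einsum('ij,kl->ijkl', I, I) / n   # projection onto the span of ∑i (ei,ei)

    def as_matrix(T):
        return T.reshape(n * n, n * n)

    expected = {"P_A2": n*(n-1)//2, "P_g": 1, "P_S2(1 - P_g)": n*(n+1)//2 - 1}
    for name, P in [("P_A2", asym), ("P_g", P_g), ("P_S2(1 - P_g)", sym - P_g)]:
        M = as_matrix(P)
        assert np.allclose(M @ M, M)   # each of these really is a projection
        print(name, "has trace", round(np.trace(M)), "and the formula gives", expected[name])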

    Now, suppose that n is even and that we had instead a vector space U equipped with a symplectic form w, which is like an inner product but instead of having g(a,b) = g(b,a), we have w(a,b) = -w(b,a)
    Again, taking the trace of the identity yields n.
    We can again build tensor products on our space U and look at projections onto Sp(U,w)-invariant subspaces and take the traces of those projection operators.
    So we again start with PA2 and PS2. The trace of PS2 is n(n+1)/2.
    If we look at Asym2(U), we get that we can use w to get a new projection operator Pw that sends (u1,u2) to
    w(u1,u2) ∑i (ei,fi)
    times some normalization factor, where the ei are a basis of U and fi are the dual basis of U with respect to w:
    w(ei, fj) = 1 if i = j, 0 otherwise.
    And we get that the space that Pw projects to has dimension 1, while its complement in Asym2(U), the projection
    PA2(1 - Pw)
    yields a subspace of dimension n(n-1)/2 - 1
    So those are our polynomials:
    pS2(U) = n(n+1)/2
    pw(U) = 1
    pA2(U) = n(n-1)/2 - 1

    But note that n(n+1)/2 = (-n)(-n-1)/2, so we get that pS2(U) gives the same thing as pA2(V) if we swap n with -n. Also, n(n-1)/2 - 1 = (-n)(-n+1)/2 - 1, so we get that pA2(U) gives the same thing as pS2(V) if we swap n with -n. And then neither pg(V) nor pw(U) contain any ns, and are identical.

    So we get that the polynomials that we get for the symmetric stuff for V correspond to the polynomials we get for the antisymmetric stuff for U, and vice-versa, if we also swap n with -n (at least up to some overall signs).

    The more general pattern is that all the O(V,g)-invariant subspaces of tensor powers of V can be created using symmetrizers, antisymmetrizers, and Pg, and similarly for the Sp(U,w)-invariant subspaces of tensor powers of U, and they can be matched up by swapping symmetrizers with antisymmetrizers (and vice-versa) and swapping Pg with Pw. The dimensions of the spaces, as functions of n, are polynomials, and corresponding spaces have matching polynomials only with n replaced by -n.
    So we can view U with symplectic form w as a negative-dimensional version of V with inner product g.
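
    (And a lazy symbolic check of that n ↔ -n matching, in case you don't trust my algebra:)

    import sympy as sp

    n = sp.symbols('n')
    pA2_V = n*(n - 1)/2          # antisymmetric side for (V, g)
    pS2_V = n*(n + 1)/2 - 1      # traceless symmetric side for (V, g)
    pS2_U = n*(n + 1)/2          # symmetric side for (U, w)
    pA2_U = n*(n - 1)/2 - 1      # complement of w inside Asym2(U)

    assert sp.simplify(pS2_U - pA2_V.subs(n, -n)) == 0
    assert sp.simplify(pA2_U - pS2_V.subs(n, -n)) == 0
    print("the (V, g) polynomials and the (U, w) polynomials match under n -> -n")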

    But that's not all!
    In supersymmetry, there is a notion of a "supervector space", by which we mean a vector space which is the direct sum of two components which I'll call U and V, not necessarily the same dimension. The superdimension of this supervector space is the dimension of V minus the dimension of U, and a superinner product on this supervector space acts like an inner product on V, a symplectic form on U, and vanishes if you try it on something from V with something from U.
    So, according to supervector-space theory, the dimension of the symplectic part of the supervector space is also negative.

    And finally, the really physics bit is that we have fermions and bosons; if you have a wavefunction describing a pair of identical bosons and you swap the two particles, the wavefunction for the bosons remains the same, while if you have a wavefunction describing a pair of identical fermions and you swap them, the wavefunction picks up a minus sign.
    Usually we describe this using what are called spin-statistics, saying that bosons have integer spin and fermions have spin that is 1/2 plus an integer.
    According to supersymmetry, every particle has a corresponding superpartner, and the superpartners of fermions are all bosons, while the superpartners of bosons are fermions. And you think "okay, so the superpartner of a spin-0 boson has spin 1/2, while the superpartner of a spin-1/2 fermion has spin...?" and you have to ask why the spins change in a particular way, and if the superpartner of a superpartner has the same spin as the original particle, and it becomes a little messy.
    But instead of that, you could say "superpartners live in negative-dimensional space", and that's why the symmetric stuff on our end, the bosons, have antisymmetric superpartners, and vice-versa.

    Of course, if you dig into the details this falls apart a bit, because the symplectic analogues of the spinor representations are all infinite-dimensional. Oops.
     
    Last edited: Dec 11, 2019
  6. Exohedron

    Exohedron Doesn't like words

    I think the most important thing I learned at the JMM this year was that Cherry Arbor Designs exists. It's a little pricey, but I now have a very nice set of Penrose Ver III tiles, jigsawed so that they have to obey the edge-matching rules that force an aperiodic tiling.
     
    • Winner x 1
  7. Exohedron

    Exohedron Doesn't like words

    I was talking to a colleague today about Hermitian matrices. I usually don't really care about Hermitian matrices because, being a Lie Theorist, I care more about anti-Hermitian matrices. But if you're doing quantum mechanics then you often care about Hermitian matrices, because physics likes putting factors of i in unnecessary places. Also observables.

    For those of us who have forgotten what a Hermitian matrix is and why we care about it, consider a complex, finite-dimensional vector space V = Cn. We can consider linear transformations from V to itself, and we can write them as n-by-n matrices with complex entries. Given such a matrix, we can consider its transpose, i.e. flip it along the upper-left-to-lower-right diagonal, and we can consider its complex conjugate, i.e. take each entry and replace it with the complex conjugate of that entry, and we can consider its conjugate transpose, i.e. we do both operations.
    Given a matrix A, we often write its conjugate transpose as A†, pronounced "A dagger" like you're the villain in a Shakespeare play.
    Given two matrices A and B, (A+B)† = A† + B†, while (AB)† = B†A†; note the order switching.

    A Hermitian matrix is a matrix A such that A = A†. An anti-Hermitian matrix is a matrix A such that A = -A†. Note that if A is Hermitian, then iA is anti-Hermitian, and vice-versa.

    Since we're looking at vectors in V, we can define an inner product on V as
    <v,w> = ∑i viwi*
    Then we get that
    <v, Aw> = <A†v, w>
    and
    <Av, w> = <v, A†w>
    We get that <v, Av> is real for all v in V if and only if A is Hermitian. This is why quantum physicists like Hermitian matrices, because to observe the value of an observable h on a state φ, they compute <φ, Hφ> for H being the linear operator corresponding to the observable h, and since they want real numbers they only consider Hermitian operators H.

    Okay, technically when I say matrix I probably mean "operator" and when I say "Hermitian" I mean self-adjoint, because there's a chance that n is infinity.

    One nice fact about Hermitian matrices is that a Hermitian matrix can be fully diagonalized, and all of its eigenvalues are real; similarly, an anti-Hermitian matrix is fully diagonalizable and all of its eigenvalues are imaginary.
    In fact, we can sort-of analogize Hermitian and anti-Hermitian matrices to real and imaginary numbers, in the following sense:
    Given a matrix A, A + A† is Hermitian, as is AA†. This is akin to the fact that given a complex number z, z + z* is real, as is zz* where z* indicates the complex conjugate of z. Moreover, AA† is "positive", in the sense that all of its eigenvalues are nonnegative real numbers, in the same way that zz* is always a nonnegative real number.
    So we define the "absolute value" of a matrix A as
    |A| = √(AA†)
    akin to how the magnitude of a complex number z is
    |z| = √(zz*)

    Okay, but what is √ of a matrix? Well, that's a little tricky for the general case, but fortunately we're dealing with the nice case of trying to take the square root of a positive matrix. Firstly, we diagonalize the matrix AA†, getting a matrix whose only entries are along the diagonal. These entries match the eigenvalues of AA†, which are all nonnegative real numbers, so we can take a square root and get another diagonal matrix whose entries are all nonnegative real numbers. Then we change the basis back to whatever it was before we diagonalized.
    And that gets us |A|, whose eigenvalues are the singular values of A; when A is normal (in particular, Hermitian or anti-Hermitian), those are just the magnitudes of the eigenvalues of A.
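
    Here's that recipe as a rough numpy sketch, with a random matrix standing in for A:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    AAdag = A @ A.conj().T
    evals, U = np.linalg.eigh(AAdag)                 # AA† is Hermitian, so eigh applies
    evals = np.clip(evals, 0, None)                  # guard against tiny negative round-off
    absA = U @ np.diag(np.sqrt(evals)) @ U.conj().T  # change basis, take square roots, change back

    assert np.allclose(absA, absA.conj().T)          # |A| is Hermitian
    assert np.allclose(absA @ absA, AAdag)           # and it squares to AA†
    print("eigenvalues of |A|:", np.round(np.linalg.eigvalsh(absA), 3))
    print("singular values of A:", np.round(np.sort(np.linalg.svd(A, compute_uv=False)), 3))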

    Unlike complex numbers, matrices generally don't commute, and indeed in general A and A† don't commute. Aww, that's unfortunate. This also means that while the sum of two Hermitian matrices is Hermitian, the product of two Hermitian matrices generally isn't Hermitian:
    (AB)† = B†A† = BA, not necessarily equal to AB.
    If we want to get a "product" that takes two Hermitian matrices and spits out a Hermitian matrix, the simplest way to do it is called "Jordan multiplication", in which we say that
    A * B = (AB+BA)/2
    Now note that if A and B are Hermitian, then A * B is also Hermitian; you can work it out by hand if you want to.
    Also note that this kind of multiplication is commutative, A * B = B * A. Nice, right? But unfortunately, it's not associative:
    (A * B) * C = ((AB + BA)/2) * C = (ABC + BAC + CAB + CBA)/4
    A * (B * C ) = A * ((BC + CB)/2) = (ABC + ACB + BCA + CBA)/4
    which aren't quite equal on the nose. And that's kind of awkward. But that's what happens when you try to multiply observables in quantum mechanics.
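
    If you don't feel like working it out by hand, here's a quick numpy check of those claims on random Hermitian matrices:

    import numpy as np

    def jordan(A, B):
        return (A @ B + B @ A) / 2

    rng = np.random.default_rng(1)
    def random_hermitian(n=4):
        X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return (X + X.conj().T) / 2

    A, B, C = random_hermitian(), random_hermitian(), random_hermitian()
    assert np.allclose(jordan(A, B), jordan(A, B).conj().T)   # A * B is Hermitian again
    assert np.allclose(jordan(A, B), jordan(B, A))            # and commutative
    print("associative?", np.allclose(jordan(jordan(A, B), C), jordan(A, jordan(B, C))))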
     
    • Like x 2
  8. Exohedron

    Exohedron Doesn't like words

    The product I mentioned in the last post, *, gives what is called a Jordan algebra. In particular, a Jordan algebra is a vector space equipped with a multiplication operation * that is commutative and obeys the Jordan relation:
    (x * y) * (x * x) = x * (y * (x * x))

    If you start with an associative algebra and define a new multiplication by
    x * y = (xy + yx)/2
    you get what is called a special Jordan algebra. So, for example, you can take the algebra of square matrices over the real numbers, or the complex numbers, or the quaternions. Subalgebras of special Jordan algebras, i.e. subspaces that are closed under the Jordan multiplication, are also called special Jordan algebras. In particular, the Hermitian matrices of each type form a special Jordan algebra.

    Another example is to take Rn+1, writing elements as pairs (x,s) for x an n-dimensional vector and s a real number. We define the product as
    (x,s)(y,t) = (tx+sy, x·y + st)
    where x·y indicates the dot product. We call these Jordan algebras spin factors, since they're related to Clifford algebras, which are related to particle spin. Note that these are also special, since they can be embedded as Hermitian matrices of large enough dimension.
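
    Here's the spin factor product as a rough Python sketch, with a numerical check that it's commutative and satisfies the Jordan relation from above:

    import numpy as np

    def spin(a, b):
        # a = (x, s), b = (y, t): the product is (tx + sy, x·y + st)
        x, s = a[:-1], a[-1]
        y, t = b[:-1], b[-1]
        return np.append(t * x + s * y, x @ y + s * t)

    rng = np.random.default_rng(2)
    a, b = rng.standard_normal(5), rng.standard_normal(5)   # n = 4, so elements of R^5

    assert np.allclose(spin(a, b), spin(b, a))              # commutative
    aa = spin(a, a)
    assert np.allclose(spin(spin(a, b), aa), spin(a, spin(b, aa)))   # Jordan relation
    print("spin factor product: commutative and satisfies the Jordan relation")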

    You can consider the simple Jordan algebras, i.e. Jordan algebras that as vector spaces don't decompose into two subspaces that are each Jordan algebras. We can also consider the formally real algebras, in which a sum of squares vanishes only if each square is itself 0. Then we get a nice statement about the simple, finite-dimensional formally real Jordan algebras:
    They're Hermitian matrices over the real, complex, or quaternionic numbers, or they're spin factors, or they're exceptional.

    And yes, that means that they're all special or exceptional, which is a sign that something might be wrong with the naming conventions.

    What are the exceptional Jordan algebras? They come from the octonions, of course.
    Consider a 3-by-3 matrix of octonions that is octonionic-Hermitian, i.e. if you take the transpose and then take the octonionic-conjugate of each entry you get the original matrix back. Note that matrices over the octonions don't form an associative algebra, since the octonions themselves aren't associative. But the 3-by-3 octonionic-Hermitian matrices manage to form a Jordan algebra, barely. 4-by-4 octonionic-Hermitian matrices do not.

    Anyway, I think I've mentioned this before, but that's why there are only finitely many exceptional simple Lie algebras/Dynkin diagrams/Coxeter groups; they can all be described by Jordan algebras and objects derived from Jordan algebras, but since we run out of octonionic Jordan algebras at n = 3, while we can do some funny business to extend the range a bit we quickly run out of exceptional simple Lie algebras as well.
     
  9. Exohedron

    Exohedron Doesn't like words

    While I understand the reasoning behind the term, I'm still bothered by the fact that it's called "noncommutative geometry" when it's the algebras that are noncommutative; the geometric objects themselves are noncocommutative.
     
  10. Exohedron

    Exohedron Doesn't like words

    Quantum Spaces, Quantum Groups

    In the mathematical variant of the quantum world, we get a lot of mileage out of pointing at things and saying "what if they didn't commute?" For instance, complex numbers commute, in that if you multiply two complex numbers together it doesn't matter which way you order them, but matrices don't commute, in that the order does matter. So in a sense, matrices are quantum numbers (this goes back to my statement a few posts back, about the analogy between complex numbers and Hermitian matrices).

    So let's talk about spaces. One way to talk about a point P in a space is to give its coordinates, like (x, y, z). These are like functions, x(P), y(P), z(P). And from those basic coordinates, you can define more complicated functions, like f(P) = x(P)2 + y(P) - x(P)z(P).
    And one way to describe the shape of spaces is to invert this picture, and instead of starting with coordinates, you take the entire set of functions to start with and ask what shapes can give rise to those kinds of functions.
    For instance, if you have a sphere, you can describe points on it via (x, y, z), but now you have a special function x2 + y2 + z2 that always takes on a constant value regardless of where you are. This is different from being on a plane, where you don't have a coordinate system that will do that for you everywhere. So we can distinguish between spheres and planes via this kind of thing.

    Now, given a point P and two functions f and g, we can look at the function fg, defined by
    (fg)(P) = f(P)g(P)
    Note that since f(P) and g(P) are just numbers, (fg)(P) = (gf)(P), and so we consider fg and gf to be the same. Functions commute in regards to multiplication.

    But what if they didn't commute?

    Well, how are we going to make them not commute? In quantum mechanics, the easy way is to have f and g spit out linear operators rather than complex numbers, and since linear operators don't commute we get that f(P) and g(P) don't commute, so f and g don't commute.
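
    A toy version of that contrast, just to make it concrete:

    import numpy as np

    # Classical: coordinate functions return numbers, and numbers commute.
    P = {"x": 2.0, "y": 3.0}
    x = lambda point: point["x"]
    y = lambda point: point["y"]
    assert x(P) * y(P) == y(P) * x(P)

    # Quantum-mechanics style: "coordinates" return linear operators instead,
    # and matrices generally don't commute.
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Y = np.array([[1.0, 0.0], [0.0, -1.0]])
    print("do the operator-valued coordinates commute?", np.allclose(X @ Y, Y @ X))  # False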
    But for mathematicians, we can go a different route.

    We take the statement

    (fg)(P) = f(P)g(P)

    and break it a little bit. We define a comultiplication operator Δ that sends points to pairs of points (well, tensor products of points). So we write

    Δ(P) = (P(1), P(2))

    And we now write

    (fg)(P) = f(P(1))g(P(2))

    For classical spaces, P(1) = P(2) = P, but for quantum spaces, we don't even need P(1) and P(2) to be the same! Now we compare fg and gf:

    (fg)(P) = f(P(1))g(P(2))
    (gf)(P) = g(P(1))f(P(2))

    Now f and g are acting on different things, so the two expressions yield different results! So now we have that fg and gf aren't the same anymore! It's like if multiplying the x coordinate of a point by the y coordinate yielded different results than if you did it the other way around, despite them both yielding numbers.

    If P(1) and P(2) are the same, we say that Δ is cocommutative (since it's a comultiplication where the corresponding multiplication of functions is commutative), but if they aren't then we say that Δ is noncocommutative. And just as we say a group is commutative if its multiplication is commutative, we say that a space is noncocommutative if its comultiplication is noncocommutative.

    In mathematics, the study of a space via its functions is often considered algebraic geometry, where classically the functions are commutative. In the weird quantum setting, where the functions are noncommutative, the study is called noncommutative geometry. I would consider the spaces to be the geometry, and hence that the study should be called noncocommutative geometry, but I don't make the rules.

    Anyway, what do we know about noncocommutative spaces? That depends. We know a lot about functions on said spaces, so if your questions about them can be formulated entirely in terms of those functions, we're good. But if you're asking about "what do the points look like? Are there lines? Are there curves?" and so on, purely geometric or topological questions, then it's quite a bit more mysterious, because the points clearly aren't like classical points, or else the functions would be commutative. So what are they?

    This is honestly my favorite thing about the study of noncocommutative stuff.
    As a representation theorist, I study representations of groups. A quantum group is what you get when you try to make a group out of a noncocommutative space; you can turn a set into a group by sticking a multiplication on it and defining an identity and an inverse map, and you can turn a noncocommutative space into a quantum group by sticking a multiplication on it and defining an identity and an inverse map.
    A representation of a group is just a bunch of functions on the group, so a representation of a quantum group is just a bunch of functions on the quantum group. Thus it is often the case that we can describe representations of a quantum group, since functions on noncocommutative spaces are understood decently well, but when you ask what a quantum group is as an object, you just get shrugs in return.
     
    • Like x 1
  11. Exohedron

    Exohedron Doesn't like words

    This is actually kind of hilarious, but also kind of cool from a mathematical standpoint, that the parity error suddenly disappears.

     
  12. Exohedron

    Exohedron Doesn't like words

    You know what, screw it. I'm going to talk more properly about quantum groups, because I like them.

    In quantum mechanics, we take sets and we replace them with vector spaces. A set of possible states becomes a Hilbert space of possible states. This allows us to take linear combinations of things in ways that don't make sense as sets; we call linear combinations of states "superpositions", and some of the fun of quantum mechanics comes from trying to understand superpositions.

    So let's start with a finite set which we'll call G. To turn it into a vector space over C, we take each element g of G and say that we have a basis vector vg corresponding to g; we get a vector space whose dimension is the size of G. We call the resulting vector space CG.
    If G has some structure, we can implement that structure in CG. For example, if G is a magma, i.e. if we can multiply elements of G to get new elements of G, we can multiply elements of CG:

    vgvh = vgh

    and then extend via linearity, so that, for instance,

    vf(vg + vh) = vfg + vfh

    and so on.

    Really what we're doing is looking at CG¤CG, where I'm using ¤ in place of the tensor product symbol, and considering a map called the multiplication

    m: CG¤CG -> CG

    Since we're doing things in vector spaces, we assume that m is linear.

    We say that CG is an algebra, i.e. a vector space that we can multiply in.
    Note that if multiplication in G is associative, then so is multiplication in CG, and if multiplication in G is commutative, then so is multiplication in CG.
    We can write out the definition of associativity in terms of application of maps and tensor products, via:

    (Assoc): m○(m¤id) = m○(id¤m)

    If we define σ to be the map

    σ: CG¤CG -> CG¤CG

    that sends u¤v to v¤u, we get that commutativity looks like

    (Comm): m○σ = m

    If G has an identity, often denoted e, we get that ve is thus the identity element of CG.
    But we go a little bit further, and define a function

    η: C -> CG

    called the unit. This function takes a complex number c and returns

    η(c) = cve

    We'll see why this is important in a bit. The fact that it is a unit map is expressed as two rules, called the left and right unitarity rules:

    (L-Unit): m○(η¤id) = id
    (R-Unit): m○(id¤η) = id

    where we use the fact that C¤CG = CG¤C = CG.

    So if G has a multiplication and an identity element, then (CG, m, η) is a unital algebra.

    Now suppose we have a vector space V over C and a map

    Δ: V -> V¤V

    We call Δ a comultiplication, because it looks like a multiplication in reverse. Note that it doesn't necessarily spit out something of the form v1¤v2; we might end up with a linear combination of such elements, i.e. a superposition.
    We call a vector space with a comultiplication a coalgebra.

    If G is a set, then the natural comultiplication on CG is the diagonal map:

    Δ(vp) = vp¤vp

    extended by linearity: for c in C and p and q in G,

    Δ(cvp+vq) = c(vp¤vp) + vq¤vq

    We can then consider coassociative and cocommutative comultiplications:

    (CoAssoc): (Δ ¤ id)○Δ = (id ¤ Δ)○Δ

    (CoComm): σ○Δ = Δ

    The natural comultiplication for a set G is coassociative and cocommutative.

    We also look at a counit map

    ε: V -> C

    where the counitarity rules looks like:

    (L-coUnit): (ε ¤ id)○Δ = id
    (R-coUnit): (id ¤ ε)○Δ = id

    I.e. taking the coproduct and then applying the counit yields the original element.

    If G is a set, then the natural counit on CG is to send vp to 1 for each point p in G, and extend by linearity.


    Now suppose V is both an algebra and a coalgebra, so it has both a multiplication m and a comultiplication Δ. We say that V is a bialgebra if the multiplication and the comultiplication are compatible:

    Δ○m = (m¤m)○(id¤σ¤id)○(Δ¤Δ)

    In other words, the multiplication map m is a coalgebra homomorphism from V¤V to V, and the comultiplication map Δ is an algebra homomorphism from V to V¤V.
    If we also have a unit map η and a counit map ε, we further want m to be a counital coalgebra homomorphism and Δ to be a unital algebra homomorphism:

    ε○m = ε¤ε
    Δ○η = η¤η

    where here we're using the fact that C¤C = C.

    If G is a group, then CG is naturally a bialgebra with unit and counit. The multiplication is naturally associative and the comultiplication is naturally coassociative. But a group has one more piece of structure, the multiplicative inverse.
    So we end up with a map called the antipode

    S: V -> V

    If G isn't commutative, then the multiplicative inverse swaps things around: (fg)-1 = g-1f-1, so S should do so as well:

    S○m = m○σ○(S¤S)

    And for good measure, the co version:

    Δ○S = (S¤S)○σ○Δ

    For a group, the inverse rule looks like

    gg-1 = e = g-1g

    To talk about this in the bialgebra context, we need to pick it apart. First, we need to duplicate g, but the map v -> v¤v isn't linear. Fortunately, we do have a linear map from V to V¤V, the comultiplication. Secondly, we should probably replace e with the unit map somehow, but the unit map takes a complex number, not a vector, so we need to use the counit as well.

    We end up with

    m○(S¤id)○Δ = η○ε = m○(id¤S)○Δ

    So putting this all together, a unital, associative, counital, coassociative bialgebra with an antipode is called a Hopf algebra. The natural example is CG for any group G; in addition to being a Hopf algebra, CG is also cocommutative; CG is commutative if and only if G is.
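
    To make this a bit more concrete, here's a rough Python sketch of CG for the cyclic group Z/3, with all of the maps above written out as plain linear algebra and a numerical check of the antipode rule m○(S¤id)○Δ = η○ε:

    import numpy as np

    N = 3  # the cyclic group Z/3: basis vectors v_g for g in {0, 1, 2}, identity e = 0

    def m(u, w):
        # multiplication m: CG¤CG -> CG, sending v_g¤v_h to v_{g+h mod N}, extended bilinearly
        out = np.zeros(N, dtype=complex)
        for g in range(N):
            for h in range(N):
                out[(g + h) % N] += u[g] * w[h]
        return out

    def delta(u):
        # comultiplication Δ: CG -> CG¤CG, sending v_g to v_g¤v_g, extended linearly;
        # the result is stored as an N-by-N array of coefficients
        out = np.zeros((N, N), dtype=complex)
        for g in range(N):
            out[g, g] = u[g]
        return out

    def eps(u):
        # counit ε: CG -> C, sending every v_g to 1
        return u.sum()

    def eta(c):
        # unit η: C -> CG, sending c to c·v_e
        out = np.zeros(N, dtype=complex)
        out[0] = c
        return out

    def S(u):
        # antipode S: CG -> CG, sending v_g to v_{-g mod N}
        out = np.zeros(N, dtype=complex)
        for g in range(N):
            out[(-g) % N] += u[g]
        return out

    def antipode_lhs(u):
        # m○(S¤id)○Δ, applied term by term to the coefficients of Δ(u)
        d = delta(u)
        basis = np.eye(N, dtype=complex)
        total = np.zeros(N, dtype=complex)
        for g in range(N):
            for h in range(N):
                total += d[g, h] * m(S(basis[g]), basis[h])
        return total

    u = np.array([1 + 2j, -0.5, 3j])  # an arbitrary element of CG
    assert np.allclose(antipode_lhs(u), eta(eps(u)))
    print("m○(S¤id)○Δ = η○ε checked on CG for Z/3")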
    Another example is C(G), which is the set of functions from G to C. It has a natural multiplication, since you can multiply two functions together, and this natural multiplication is commutative. It has a natural comultiplication derived from G, defined as

    Δ(f)(g¤h) = f(gh)

    The comultiplication is cocommutative if and only if G is commutative.

    CG and C(G) are the natural examples that one gets from a group, and in the finite-dimensional case, from a commutative or a cocommutative Hopf algebra you can extract a group, similarly to how from a quantum mechanical system you can try to extract pure, classical states from all of the superpositions.

    But if your Hopf algebra is neither commutative nor cocommutative, then you can't extract a group; you can't even always extract individual group-like elements other than the identity. So instead we call such a Hopf algebra a quantum group.

    Fun fact: if you have a Hopf algebra that is commutative or cocommutative, then

    (Invol): S○S = id

    but this isn't necessarily the case for quantum groups. So the "inverse of the inverse" isn't necessarily the thing you started with.
     
    Last edited: Apr 10, 2020
  13. Exohedron

    Exohedron Doesn't like words

    One of the good things about Hopf algebras is that you can talk about groups and derivatives as if they were on the same footing, without needing to do any funny business with infinitesimals.

    A group-like element g of a Hopf algebra H has two properties:

    Δ(g) = g ¤ g
    ε(g) = 1

    The first one says that the element is point-like, in that functions evaluated on that element act as if they were being evaluated on an element of a set. The second one says that the element is invertible; we want to avoid funny business like 0, since Δ(0) = 0 ¤ 0, but we definitely don't want 0 to be a group element.
    The two properties also ensure that S(g) = g-1.

    A primitive element d of a Hopf algebra has the following property:

    Δ(d) = d ¤ 1 + 1 ¤ d

    We can consider H acting on Func(H), the set of functions from H to C. The element 1 in H acts by evaluation: 1(X) = X(1) for X in Func(H). A group-like element g acts by sending X to X(g). If we have two functions, X and Y, then g acts by sending the product XY to

    g(XY) = g(X)g(Y)

    Now look at d. d acts by sending X to d(X), and it sends XY to

    d(XY) = d(X)Y(1) + X(1)d(Y)

    This looks kind of like the product-rule for derivatives, and indeed if we denote d(X) = X'(1), we get that

    d(XY) = X'(1)Y(1) + X(1)Y'(1) = (XY)'(1)

    which says that d applies the product rule and then evaluates at 1.

    So a Hopf algebra can contain group-like elements and derivative-like elements.

    Note also that the comultiplication Δ preserves the Lie-bracket, in the sense that

    Δ[d,e] = [d,e] ¤ 1 + 1 ¤ [d,e] = [d ¤ 1 + 1 ¤ d, e ¤ 1 + 1 ¤ e]
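
    (A quick numerical sanity check of that bracket identity, using numpy's kron for ¤ and random matrices for d and e:)

    import numpy as np

    def comm(a, b):
        return a @ b - b @ a

    rng = np.random.default_rng(0)
    n = 3
    d = rng.standard_normal((n, n))
    e = rng.standard_normal((n, n))
    I = np.eye(n)

    lhs = comm(np.kron(d, I) + np.kron(I, d), np.kron(e, I) + np.kron(I, e))
    rhs = np.kron(comm(d, e), I) + np.kron(I, comm(d, e))
    assert np.allclose(lhs, rhs)
    print("the bracket of primitives is again primitive (checked on random 3x3 matrices)")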

    So we get both groups and Lie algebras as one type of object. And indeed, over C, every cocommutative Hopf algebra is the Hopf algebra of a group combined with the Hopf algebra of a Lie algebra. This is one reason to call a noncommutative, noncocommutative Hopf algebra a quantum group, since all of the cocommutative analogs are built from group-theoretic objects.

    This prevalence of group-stuff also means that there is a way to do Fourier-transform stuff here, even in the noncommutative case. You get something rather weird looking when you take the transform of a noncommutative group, because the dual object is noncocommutative. But that's okay, you're in Hopf algebra world, where you're allowed to talk about functions from things that don't have actual points.
     
    • Like x 1
  14. Exohedron

    Exohedron Doesn't like words

    A Musical Paradox

    Okay, so back in the long ago, people realized (okay, a bunch of different people realized at different times) that musical notes differing in pitch by simple ratios sound good together.
    Here by "simple ratios", we mean rational numbers whose numerators and denominators are pretty small.
    The reason for this is Fourier decomposition: a sound wave can be expressed as a sum of sine waves, and if the sound wave repeats then the sine waves all end up having a frequency that is a multiple of a single, fundamental frequency which we usually call the pitch of the note; the higher-frequency stuff is called "harmonics" and causes timbre, the difference between, say, a violin playing a C and a piano playing a C.
    When you play two notes whose pitches are in a simple ratio, the frequencies of the harmonics of one note and the frequencies of the harmonics of the other note overlap, and so the result is much less complicated than if they just ended up all over the place. The result is that it sounds like there is some fundamental frequency for the total set of harmonics, and so you've just got another thing with a definite pitch.

    For instance, suppose you have a note with pitch f, and whose harmonics have frequency 2f, 3f, 4f, etc. If you also have a note with pitch 3f/2, then the harmonics of that are 3f, 9f/2, 6f, etc. So there are overlaps at 3f, 6f, and so on, with 2f, 4f, 9f/2, etc also showing up. Note that these are all multiples of f/2, so you could say that the two notes played together have an implied total pitch of f/2, implied because nothing is actually playing a frequency of f/2, and the rest is just harmonics.

    Anyway, we call this system "just intonation", and for a long time, being "in tune" meant following these small ratios around; two notes were in tune relative to each other if the ratio of their frequencies was small.


    Now for the fun part.

    To make writing music not a terrible hassle, Western musicians decided that between f and 2f they were going to have twelve standard pitches, with the pitches chosen so that they are in small ratios.
    Then the question becomes what the ratios should be.
    So let's call our basic note 0. The simplest nontrivial ratio is 2/1, which we've declared to be going up by twelve notes.
    The next simplest ratio is 3/2, which is about halfway between 1/1 and 2/1, but not exactly half way because we're doing ratios so we should be considering multiplication and not addition when measuring distance. So 3/2 is slightly more than half way, so that's going up by seven notes*. Similarly, going up by five notes should be a ratio of 4/3, since (3/2)*(4/3) = 2/1, i.e. going up by twelve notes.
    The next simplest set of ratios involve 5s. We have 5/4, which is slightly less than 4/3 so maybe that should be going up by four notes. This in turn tells us that 8/5 is going up by eight notes, so that going up by four and going up by eight gives us going up by twelve.
    And now we have a bit of a problem, because going up by eight notes is not the same as going up by four notes twice: (5/4)*(5/4)= 25/16, which is not quite the same as 8/5.
    So should going up by eight notes be the same as going up by four notes twice, or should it just be the simplest ratio after 4/3?
    Well, that ends up depending. 25 and 16 aren't exactly small compared to 8 and 5, so if you are comparing note eight to note zero then you'll want to use 8/5 as your ratio for note eight. But if you're comparing to note four, then if you use 8/5 then you'll end up with note eight and note four having a ratio of 32/25, which is even worse in terms of the size of the numerator and denominator.
    So up until the late 1500s, the general solution in Western music was to just use whatever other notes are playing nearby to determine where the exact pitches should be. Musical instruments, not really being able to adjust the exact pitches of notes on the fly, had to choose which versions of each note they wanted to use, and as a result some instances of being four notes apart sounded different from other instances of being four notes apart, and composers just had to live with that.

    For instruments that could adjust exact pitch on the fly, like the human voice, this led to an effect called "comma pump", where the discrepancies layered on top of each other until the key that the piece was in, the general notion of what pitch note zero should be at, would be forced upward or downward over time by the demands of staying in tune. In long a cappella pieces, you'll often find that the piece ends up in a different key than it started in.
    In other words, with a comma pump you could be on key, or you could be in tune, but not both.

    Eventually Western musicians decided that this wasn't good. They wanted all instances of note eight to sound the same, and they didn't want comma pumps. So instead of using small ratios, they decided on equal temperament, which said that the ratio between two adjacent notes was going to be exactly 2^(1/12). This is not a simple ratio; it's not even rational. But it does mean that going up eight notes is going up four notes twice, and then going up four notes again gives us a ratio of exactly 2/1.
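
    If you want actual numbers, here's a rough Python sketch comparing a few of the just ratios from above to their equal temperament counterparts, measured in cents, i.e. hundredths of an equal-tempered semitone, so 1200 cents to the octave:

    import math

    def cents(ratio):           # 1200 cents per octave, i.e. per factor of 2
        return 1200 * math.log2(ratio)

    just = {4: 5/4, 5: 4/3, 7: 3/2, 8: 8/5}     # notes up -> just-intonation ratio
    for steps, ratio in sorted(just.items()):
        print(f"{steps:2d} notes up: just {ratio:.4f} = {cents(ratio):7.2f} cents,"
              f" equal temperament = {100 * steps} cents")

    # Stacking two just major thirds misses the just ratio for eight notes:
    print("(5/4)*(5/4) = 25/16 differs from 8/5 by", round(cents((8/5) / (25/16)), 2), "cents")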


    Those of us from other musical traditions might recognize that this is a tale of Western musicianship; not all other cultures noticed this effect, and not all other cultures that did notice this effect decided that it was a problem. For instance, equal temperament was actually discovered in China shortly before Europeans found it, but despite knowing about comma pumps and the mathematics behind them, China didn't really care for equal temperament and continued with its incarnation of just intonation.


    Here's a video giving an example of a comma pump and giving a bit of the history of such



    Does this count as applied math?

    * The standard music theory term for what I'm calling a "note" is "semitone", but since standard music theory terminology for pitch classes is almost entirely all bad in the sense of containing at most zero net information, I am going to ignore it. I know it's based on Western classical tradition; if anything that makes me like it even less.

    [Edit] Corrected some of the ratios. 5/3 is nine notes, not eight.
     
    Last edited: Jun 10, 2020
    • Like x 2
  15. Exohedron

    Exohedron Doesn't like words

    I don't like the Cayley-Dickson construction anymore. For n > 3, the basic imaginary units aren't equivalent anymore. If we write the 2^n-ions as pairs of 2^(n-1)-ions (a, b), then any automorphism of the 2^n-ions has to send (0, 1) to either itself or to (0, -1). That's really disappointing.
     
  16. Exohedron

    Exohedron Doesn't like words

    MIP* = RE

    Suppose you're playing a gameshow. The host asks you a question, and you tell the host the answer. The fun part of the game is that the host doesn't know the answer in advance, so in addition to providing the answer you have to convince the host that the answer you give is correct.
    It turns out this is hard, or can be hard, depending on the kinds of questions, and also how much work the host is willing to do to verify what you tell them.
    To reduce the effort of convincing the host, you can do the proof interactively, where the host will come up with some challenge questions whose answers are easy to verify, but whose answers would be hard to figure out unless you actually had the correct answer.

    Suppose there are a bunch of questions, Q1, Q2, ..., Qn,... and the host is only willing to do an amount of work that is a polynomial function of n, but you are willing to do an arbitrarily high amount of work (look, the host is offering fabulous prizes, okay?). We say that a family of such questions is in IP if there is a strategy that you and the host can use to answer any question in the family without the host doing too much work. It turns out that IP is equal to PSPACE, the set of questions a Turing machine can answer if it has a polynomial amount (in n) of memory.

    We can get a broader class of families of questions if there are two players. Now we allow the players to talk to each other after getting the question but before interacting with the host. MIP is the class of families of questions where the players can convince the host if and only if they have the correct answer. MIP is equal to NEXP, the set of questions a Turing machine can verify the answers to if it has an exponential amount of time. The main advantage is that the host can afford to ask different challenge questions to each player and then check them against each other, so the players can't pass off an incorrect answer as a correct one as easily.

    Now suppose that we toss some quantum mechanics into the mix. Now during the phase where the players can talk to each other, they're allowed to set up a bunch of qubits. However many qubits they want, into whatever state they want. Then during the phase of interacting with the host, the players can use the qubits to determine how they answer the challenge questions. We call the class of families of questions that can be proven in this scenario MIP*.

    The Halting Problem is in MIP*. In other words, if a Turing machine M halts, the players can prove to the host that it halts, and if it doesn't halt, the players can't convince the host that it does, and the host doesn't have to do a lot of work to run the verification process compared to the size of the description of M.



    So, rough sketch as to how it works: due to some funny business with entangled quantum states, the host actually can use the challenge questions to get a lot of information about the qubits that the players have, and so can almost force the players to do certain things with their qubits if they want to answer the host's challenge questions. In particular, the host can force the players to simulate the host.
    In fact, the host can also use the players to simulate compressed variants of the host, for instance if the host is willing to do an n-step iterated process, the players can be forced to run that iteration 2^n times. Doing this simulation destroys some of the entanglement, so there's a limit to how much the players can do, and so if they know that the host is going to require them to run such a simulation, they can create enough qubits in advance.

    So given a Turing machine M and a parameter n, the host does the following:
    1: Run M for n steps. If M halts, then accept "yes" as the answer.
    2: If M hasn't halted yet, force the players to simulate running the host with Turing machine M and parameter 2^n.

    But wait, the second step is self-referential! But it turns out that there's a fixed-point theorem that says that you can collapse that into a single process that still requires only polynomial work from the host.

    So if M halts in n or fewer steps, the host accepts "yes". Otherwise, the players compress and simulate:
    If M halts in 2^n or fewer steps, the players return the result "yes" to the host, who accepts. Otherwise, the players compress and simulate:
    If M halts in 2^(2^n) or fewer steps, the players return the result "yes" to the host, who accepts. Otherwise, the players compress and simulate:
    If M halts in 2^(2^(2^n)) or fewer steps, the players return the result "yes" to the host, who accepts. Otherwise...
    ...

    If M does halt, then the compression allows the host to verify that the "yes" was correct. If M doesn't halt then eventually the players run out of entangled qubits and so can't pretend that M halts when it doesn't.



    I'm skipping over the part where we show that MIP* is at most RE, since that's less interesting, and also less funny than forcing the players to simulate increasingly compressed versions of the host until they run out of qubits.


    This result actually has some weird implications for physics, in the sense that it gives an experiment that distinguishes between two models of what it means to have an infinite number of qubits; in one model, the above process still works. In the other model, it would fail.
     
    Last edited: Jun 19, 2020
  17. Exohedron

    Exohedron Doesn't like words

    So it seems that the reason quantum mechanics likes complex numbers is that Noether's theorem doesn't really work as well over the reals or quaternions. I might say more about this later, but it's an interesting result. Basically, complex numbers give you a simple isomorphism between self-adjoint operators (observables) and anti-self-adjoint operators (infinitesimal coordinate changes) such that the Lie bracket of an observable with its image under this isomorphism is 0; in the real or quaternionic case we don't have such an isomorphism.
     
    • Informative x 1
  18. Exohedron

    Exohedron Doesn't like words

    I kind of want to tell the nCatLab people that their whole forgetful-functor paradigm can be mapped to Monopoly, with their property, (infra)structure, and stuff (i.e. money). Which makes me wonder what part of category theory maps to jail.
     
  19. Exohedron

    Exohedron Doesn't like words

    Induction Failure: Borwein Integrals

    Suppose you have a sequence that starts as

    π/2, π/2, π/2, π/2, π/2, π/2, π/2

    What comes next?

    Or to be a bit more explicit, suppose you have a sequence of integrals, that goes

    ∫0∞ ( sin(t)/t ) dt = π/2
    ∫0∞ ( sin(t)/t )( sin(t/3)/(t/3) ) dt = π/2
    ∫0∞ ( sin(t)/t )( sin(t/3)/(t/3) )( sin(t/5)/(t/5) ) dt = π/2
    ∫0∞ ( sin(t)/t )( sin(t/3)/(t/3) )( sin(t/5)/(t/5) )( sin(t/7)/(t/7) ) dt = π/2
    ...
    ∫0∞ ( sin(t)/t )...( sin(t/13)/(t/13) ) dt = π/2

    What would you expect from

    ∫0∞ ( sin(t)/t )...( sin(t/15)/(t/15) ) dt?

    Obviously, the answer is

    π/2 - (6879714958723010531/935615849440640907310521750000)π

    These are called the Borwein integrals, and are a nice example of why computing small examples might give you evidence but won't give you proof.

    So, um, what happened?
    Well, the first thing to notice is that
    1/3 + 1/5 + ... + 1/13 < 1
    but
    1/3 + 1/5 + ... + 1/13 + 1/15 > 1
    So that points out where the transition is. But why?
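
    (A quick check of those partial sums with exact fractions:)

    from fractions import Fraction

    total = Fraction(0)
    for d in range(3, 17, 2):
        total += Fraction(1, d)
        print(f"1/3 + ... + 1/{d} = {total} ≈ {float(total):.4f}  ({'< 1' if total < 1 else '> 1'})")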

    Well, consider the Fourier Transform. We get that the Fourier Transform of sin(ct)/ct is a step function that is 0 outside of [-c, c]. So, for instance, the Fourier Transform of sin(t)/t is a step function S that's 0 outside of [-1, 1] and is 1/2 inside that interval.

    Moreover, the value of the corresponding Borwein integral is related to the value of S at the point 0.

    Since the Fourier Transform turns multiplication into convolution, we get that the Fourier transform of ( sin(t)/t )( sin(t/3)/(t/3) ) ... is a convolution of such step functions, which we can think of as repeatedly taking moving averages of shrinking window size. We can then consider this as a weighted moving average taken over the initial step function S, whose window has size the sum of the window sizes of the individual moving averages.
    Now the value of the corresponding Borwein integral is given by the value of the weighted moving average at 0.

    For the first several steps, the total window size is less than 1, so the weighted moving average centered at 0 is 1/2. But when we get to t/15, the total window size exceeds 1, and so the window centered at 0 is now grabbing chunks outside the interval [-1, 1], and so the moving average drops.
    Hence the rather abrupt change from the integral constantly spitting out π/2 to giving something smaller.
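
    Here's a little numerical cartoon of that moving-average picture. I'm convolving with boxes of half-width 0.4, 0.4, 0.4 instead of 1/3, ..., 1/15, so that the drop is big enough to see on a coarse grid; the mechanism is exactly the same:

    import numpy as np

    dx = 1e-3
    x = np.arange(-4, 4, dx)
    f = np.where(np.abs(x) <= 1, 0.5, 0.0)       # the step function S from above
    mid = np.argmin(np.abs(x))                   # index of the point x = 0

    half_widths = (0.4, 0.4, 0.4)                # 0.4 + 0.4 <= 1, but 0.4 + 0.4 + 0.4 > 1
    running_total = 0.0
    for c in half_widths:
        box = np.ones(2 * int(round(c / dx)) + 1)
        box /= box.size                          # normalized box kernel = moving average
        f = np.convolve(f, box, mode="same")
        running_total += c
        print(f"total half-width {running_total:.1f}: value at 0 is {f[mid]:.6f}")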


    This suggests a more general setup: given a sequence of positive real values a0, a1, a2 ..., we get that
    ∫0∞ ( sin(a0t)/(a0t) )...( sin(akt)/(akt) ) dt
    is equal to π/(2a0) as long as

    a0 > a1 + ... + ak

    and then starts dropping afterwards. By adjusting the sequence, you can make the integrals take the value π/(2a0) for as long as you want before it starts dropping.

    So just having a lot of examples isn't enough to demonstrate that a pattern holds. Logical induction is not mathematical induction.


    So far, we know that the ten trillion smallest nontrivial zeros of the Riemann zeta function all have real part 1/2. That's ten trillion pieces of evidence for the Riemann Hypothesis. But we also know that any aberrant behavior is governed by functions that grow so slowly that our computational methods haven't been able to even touch the regimes in which they're significant.
     
    Last edited: Jul 18, 2020
  20. Exohedron

    Exohedron Doesn't like words

    When I was in high school each summer I went to a camp for math nerds and each year we had an animation night, and most of it was garbage but there were two videos that everyone always went nuts for: Bambi Meets Godzilla, and this guy on how to turn a sphere inside out:
     
    • Like x 2