Math(s)! Currently: the Twin Primes Conjecture

Discussion in 'General Chatter' started by Exohedron, Mar 14, 2015.

  1. Exohedron

    Exohedron Doesn't like words


    I've probably said it before, but this guy makes the coolest stuff.
     
    • Winner x 1
  2. Exohedron

    Exohedron Doesn't like words

    When you're trying to prove that a thing can't happen, and you come up with all sorts of restrictions and constraints and end up with one tiny, specific case that you can't manage to eliminate, so on a whim you see if you can actually construct that special case, and then there it is, mocking you and all of that effort.
     
    • Witnessed x 1
  3. evilas

    evilas Custom title cause all the cool kids are doing it

    I mean, "Can't happen except in this one very specific case" is still a useful thing to prove for many things. Maybe you can figure out a way to construct all the special cases or smth.
     
  4. Exohedron

    Exohedron Doesn't like words

    I'm pretty sure I've got a complete classification + construction method for the counterexample cases. I just want them to not exist at all.
     
  5. Exohedron

    Exohedron Doesn't like words

    Supersymmetry as negative dimensions

    My favorite fact from when I was doing Lie theory Penrose stuff is that the symplectic forms are like inner products on negative dimensional spaces.

    Pretend for a moment that that's the end of this post.


    Consider an n-dimensional vector space V with a bilinear inner product g(-,-). The group O(V,g) preserves this inner product, and is isomorphic to the group O(n).
    If we take the trace of the identity on this space, we get n.
    If we take a tensor product of this space with itself some number of times, we can ask about the O(V,g)-invariant subspaces. Each such subspace W can be pulled out by a projection operator PW such that for any two such subspaces W and X, PWPX = PXPW, and PWPW = PW because it's a projection. By the usual properties of projections, the trace of PW is the dimension of W, and it ends up being a polynomial in n.
    For instance, consider the tensor square of V, which I'll write as V2 since I'm too lazy to locate a tensor product symbol.
    V2 is an n2-dimensional space. We can break it into two spaces that I'll call Sym2(V) and Asym2(V) and these are also O(V,g)-invariant. If we take an element (v1,v2) in V2, we get that the symmetrizing projector PS2 sends it to
    (v1,v2) + (v2,v1)
    times some normalization factor and, similarly, the antisymmetrizing projector PA2 sends it to
    (v1,v2) - (v2,v1)
    times some normalization factor.
    The trace of PA2 is n(n-1)/2. For PS2 the trace is n(n+1)/2, and these are in fact the dimensions that you'd get for those spaces.
    But Sym2(V) contains a smaller O(V,g)-invariant space: we make a projection operator Pg that sends (v1,v2) to
    g(v1,v2) ∑i (ei,fi)
    times some normalization factor, where the ei are a basis of V and fi are the dual basis of V with respect to g:
    g(ei, fj) = 1 if i = j, 0 otherwise.
    Pg yields a subspace of dimension 1, while its complement in Sym2(V), the projection
    PS2(1 - Pg)
    yields a subspace of dimension n(n+1)/2 - 1.
    So those are our polynomials:
    pA2(V) = n(n-1)/2
    pg(V) = 1
    pS2(V) = n(n+1)/2 - 1
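    If you want to sanity-check those traces numerically, here's a quick numpy sketch; nothing canonical, just me checking. I'm taking g to be the plain dot product, so the ei and fi are both the standard basis and ∑i (ei,fi) vectorizes to the identity matrix; the variable names just mirror the notation above.
    Code:
    import numpy as np

    n = 5
    N = n * n
    I = np.eye(N)

    # Swap operator on V2: sends (v1,v2) to (v2,v1)
    S = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            S[i * n + j, j * n + i] = 1.0

    PS2 = (I + S) / 2   # symmetrizer
    PA2 = (I - S) / 2   # antisymmetrizer

    # With g the standard dot product, sum_i (ei,fi) is the vectorized
    # identity matrix; Pg projects onto its span, normalization 1/n.
    T = np.eye(n).reshape(-1)
    Pg = np.outer(T, T) / n

    print(np.trace(PA2), n * (n - 1) / 2)                 # 10.0 10.0
    print(np.trace(Pg))                                   # ~1.0
    print(np.trace(PS2 @ (I - Pg)), n * (n + 1) / 2 - 1)  # ~14.0 14.0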

    Now, suppose that n is even and that we had instead a vector space U equipped with a symplectic form w, which is like an inner product but instead of having g(a,b) = g(b,a), we have w(a,b) = -w(b,a).
    Again, taking the trace of the identity yields n.
    We can again build tensor products on our space U and look at projections onto Sp(U,w)-invariant subspaces and take the traces of those projection operators.
    So we again start with PA2 and PS2. The trace of PS2 is n(n+1)/2.
    If we look at Asym2(U), we get that we can use w to get a new projection operator Pw that sends (u1,u2) to
    w(u1,u2) ∑i (ei,fi)
    times some normalization factor, where the ei are a basis of U and fi are the dual basis of U with respect to w:
    w(ei, fj) = 1 if i = j, 0 otherwise.
    And we get that the space that Pw projects to has dimension 1, while its complement in Asym2(U), the projection
    PA2(1 - Pw)
    yields a subspace of dimension n(n-1)/2 - 1.
    So those are our polynomials:
    pS2(U) = n(n+1)/2
    pw(U) = 1
    pA2(U) = n(n-1)/2 - 1
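    Same numerical check on the symplectic side; here I'm picking the standard block form w(a,b) = aᵀJb with J = [[0, I], [-I, 0]], in which case the dual basis works out to fi = -J ei and ∑i (ei,fi) vectorizes to J itself. Again just my own throwaway sketch.
    Code:
    import numpy as np

    n = 6                               # must be even
    N = n * n
    I = np.eye(N)

    S = np.zeros((N, N))                # swap operator, as before
    for i in range(n):
        for j in range(n):
            S[i * n + j, j * n + i] = 1.0
    PS2, PA2 = (I + S) / 2, (I - S) / 2

    m = n // 2
    J = np.block([[np.zeros((m, m)), np.eye(m)],
                  [-np.eye(m), np.zeros((m, m))]])
    Tw = J.reshape(-1)                  # sum_i (ei,fi) for this w
    Pw = np.outer(Tw, Tw) / n           # rank-1 projection inside Asym2(U)

    print(np.trace(PS2), n * (n + 1) / 2)                  # 21.0 21.0
    print(np.trace(Pw))                                    # ~1.0
    print(np.trace(PA2 @ (I - Pw)), n * (n - 1) / 2 - 1)   # ~14.0 14.0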

    But note that n(n+1)/2 = (-n)(-n-1)/2, so pS2(U) gives the same thing as pA2(V) if we swap n with -n. Also, n(n-1)/2 - 1 = (-n)(-n+1)/2 - 1, so pA2(U) gives the same thing as pS2(V) if we swap n with -n. And neither pg(V) nor pw(U) contains any n at all, so they're identical.

    So we get that the polynomials that we get for the symmetric stuff for V correspond to the polynomials we get for the antisymmetric stuff for U, and vice-versa, if we also swap n with -n (at least up to some overall signs).

    The more general pattern is that all the O(V,g)-invariant subspaces of tensor powers of V can be created using symmetrizers, antisymmetrizers, and Pg, and similarly for the Sp(U,w)-invariant subspaces of tensor powers of U, and they can be matched up by swapping symmetrizers with antisymmetrizers (and vice-versa) and swapping Pg with Pw. The dimensions of the spaces, as functions of n, are polynomials, and corresponding spaces have matching polynomials, just with n replaced by -n.
    So we can view U with symplectic form w as a negative-dimensional version of V with inner product g.

    But that's not all!
    In supersymmetry, there is a notion of a "supervector space", by which we mean a vector space which is the direct sum of two components, which I'll call U and V, not necessarily of the same dimension. The superdimension of this supervector space is the dimension of V minus the dimension of U, and a superinner product on this supervector space acts like an inner product on V, acts like a symplectic form on U, and vanishes if you try it on something from V with something from U.
    So, according to supervector-space theory, the dimension of the symplectic part of the supervector space is also negative.

    And finally, the really physics-y bit: we have fermions and bosons. If you have a wavefunction describing a pair of identical bosons and you swap the two particles, the wavefunction remains the same, while if you swap a pair of identical fermions, the wavefunction picks up a minus sign; equivalently, rotating a single fermion through 360 degrees flips the sign of its wavefunction.
    Usually we describe this using what is called spin-statistics, saying that bosons have integer spin and fermions have spin that is 1/2 plus an integer.
    According to supersymmetry, every particle has a corresponding superpartner, and the superpartners of fermions are all bosons, while the superpartners of bosons are fermions. And you think "okay, so the superpartner of a spin-0 boson has spin 1/2, while the superpartner of a spin-1/2 fermion has spin...?" and you have to ask why the spins change in a particular way, and whether the superpartner of a superpartner has the same spin as the original particle, and it becomes a little messy.
    But instead of all that, you could say "superpartners live in negative-dimensional space", and that's why the symmetric stuff on our end, the bosons, have antisymmetric superpartners, and vice-versa.

    Of course, if you dig into the details this falls apart a bit, because the symplectic analogues of the spinor representations are all infinite-dimensional. Oops.
     
    Last edited: Dec 11, 2019
  6. Exohedron

    Exohedron Doesn't like words

    I think the most important thing I learned at the JMM this year was that Cherry Arbor Designs exists. It's a little pricey, but I now have a very nice set of Penrose Ver III tiles, jigsawed so that they have to obey the edge-matching rules that force an aperiodic tiling.
     
  7. Exohedron

    Exohedron Doesn't like words

    I was talking to a colleague today about Hermitian matrices. I usually don't really care about Hermitian matrices because, being a Lie Theorist, I care more about anti-Hermitian matrices. But if you're doing quantum mechanics then you often care about Hermitian matrices, because physics likes putting factors of i in unnecessary places. Also observables.

    For those of us who have forgotten what a Hermitian matrix is and why we care about it, consider a complex, finite-dimensional vector space V = Cn. We can consider linear transformations from V to itself, and we can write them as n-by-n matrices with complex entries. Given such a matrix, we can consider its transpose, i.e. flip it along the upper-left-to-lower-right diagonal, and we can consider its complex conjugate, i.e. take each entry and replace it with the complex conjugate of that entry, and we can consider its conjugate transpose, i.e. we do both operations.
    Given a matrix A, we often write its conjugate transpose as A†, pronounced "A dagger" like you're the villain in a Shakespeare play.
    Given two matrices A and B, (A+B)† = A† + B†, while (AB)† = B†A†; note the order switching.

    A Hermitian matrix is a matrix A such that A = A†. An anti-Hermitian matrix is a matrix A such that A = -A†. Note that if A is Hermitian, then iA is anti-Hermitian, and vice-versa.
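    If you want to see those rules in action, here's a quick numpy sketch (dag is just my shorthand for the conjugate transpose, and the random matrices are only there to have something concrete to test):
    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

    dag = lambda M: M.conj().T                        # conjugate transpose

    print(np.allclose(dag(A + B), dag(A) + dag(B)))   # True
    print(np.allclose(dag(A @ B), dag(B) @ dag(A)))   # True: the order swaps

    H = A + dag(A)                                    # Hermitian by construction
    print(np.allclose(dag(1j * H), -(1j * H)))        # True: iH is anti-Hermitian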

    Since we're looking at vectors in V, we can define an inner product on V as
    <v,w> = ∑i vi wi*
    Then we get that
    <v, Aw> = <A†v, w>
    and
    <Av, w> = <v, A†w>
    We get that <v, Av> is real for all v in V if and only if A is Hermitian. This is why quantum physicists like Hermitian matrices: the expected value of an observable h in a state φ is <φ, Hφ>, where H is the linear operator corresponding to h, and since measurements yield real numbers they only consider Hermitian operators H.
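    Here's that fact numerically too, with the same convention for the inner product (conjugate on the second slot); again just a quick sketch of mine:
    Code:
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    H = A + A.conj().T                          # Hermitian
    v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

    inner = lambda x, y: np.sum(x * y.conj())   # <v,w> = sum_i vi wi*

    print(inner(v, H @ v).imag)                 # ~0: real for Hermitian H
    print(inner(v, A @ v).imag)                 # generically nonzero otherwise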

    Okay, technically when I say matrix I probably mean "operator" and when I say "Hermitian" I mean self-adjoint, because there's a chance that n is infinity.

    One nice fact about Hermitian matrices is that a Hermitian matrix can be fully diagonalized, and all of its eigenvalues are real; similarly, an anti-Hermitian matrix is fully diagonalizable and all of its eigenvalues are imaginary.
    In fact, we can sort-of analogize Hermitian and anti-Hermitian matrices to real and imaginary numbers, in the following sense:
    Given a matrix A, A + A† is Hermitian, as is AA†. This is akin to the fact that given a complex number z, z + z* is real, as is zz*, where z* indicates the complex conjugate of z. Moreover, AA† is "positive", in the sense that all of its eigenvalues are nonnegative real numbers, in the same way that zz* is always a nonnegative real number.
    So we define the "absolute value" of a matrix A as
    |A| = √(AA†)
    akin to how the magnitude of a complex number z is
    |z| = √(zz*)

    Okay, but what is √ of a matrix? Well, that's a little tricky for the general case, but fortunately we're dealing with the nice case of trying to take the square root of a positive matrix. Firstly, we diagonalize the matrix AA†, getting a matrix whose only entries are along the diagonal. These entries match the eigenvalues of AA†, which are all nonnegative real numbers, so we can take a square root and get another diagonal matrix whose entries are all nonnegative real numbers. Then we change the basis back to whatever it was before we diagonalized.
    And that gets us |A|, whose eigenvalues are the singular values of A; when A commutes with A†, these are just the magnitudes of the eigenvalues of A.
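    In numpy the recipe comes out to just a few lines; a sketch, with eigh doing the diagonalization (which applies because AA† is Hermitian) and a clip to clear round-off:
    Code:
    import numpy as np

    def matrix_abs(A):
        evals, U = np.linalg.eigh(A @ A.conj().T)   # AA† = U diag(evals) U†
        evals = np.clip(evals, 0.0, None)           # clear tiny negative round-off
        return U @ np.diag(np.sqrt(evals)) @ U.conj().T

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    absA = matrix_abs(A)

    print(np.allclose(absA @ absA, A @ A.conj().T))   # True: |A|^2 = AA†
    print(np.linalg.eigvalsh(absA))                   # eigenvalues of |A|...
    print(np.linalg.svd(A, compute_uv=False))         # ...match A's singular values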

    Unlike complex numbers, matrices generally don't commute, and indeed in general A and A† don't commute. Aww, that's unfortunate. This also means that while the sum of two Hermitian matrices is Hermitian, the product of two Hermitian matrices generally isn't Hermitian:
    (AB)† = B†A† = BA, not necessarily equal to AB.
    If we want to get a "product" that takes two Hermitian matrices and spits out a Hermitian matrix, the simplest way to do it is called "Jordan multiplication", in which we say that
    A * B = (AB+BA)/2
    Now note that if A and B are Hermitian, then A * B is also Hermitian; you can work it out by hand if you want to.
    Also note that this kind of multiplication is commutative, A * B = B * A. Nice, right? But unfortunately, it's not associative:
    (A * B) * C = ((AB + BA)/2) * C = (ABC + BAC + CAB + CBA)/4
    A * (B * C) = A * ((BC + CB)/2) = (ABC + ACB + BCA + CBA)/4
    which aren't quite equal on the nose. And that's kind of awkward. But that's what happens when you try to multiply observables in quantum mechanics.
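    You can watch the associativity failure happen with random Hermitian matrices; a last little sketch:
    Code:
    import numpy as np

    rng = np.random.default_rng(3)

    def rand_hermitian(k):
        M = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
        return M + M.conj().T                   # Hermitian by construction

    jordan = lambda X, Y: (X @ Y + Y @ X) / 2   # the Jordan product

    A, B, C = (rand_hermitian(4) for _ in range(3))

    print(np.allclose(jordan(A, B), jordan(A, B).conj().T))   # True: Hermitian
    print(np.allclose(jordan(A, B), jordan(B, A)))            # True: commutative
    print(np.allclose(jordan(jordan(A, B), C),
                      jordan(A, jordan(B, C))))               # False (generically)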
     
    • Like x 1