When you're trying to prove that a thing can't happen and you come up with all sorts of restrictions and constraints and end up with one tiny, specific case that you can't manage to eliminate so on a whim you see if you can actually construct that special case and then there it is, mocking you and all of that effort.
I mean, "Can't happen except in this one very specific case" is still a useful thing to prove for many things. Maybe you can figure out a way to construct all the special cases or smth.
I'm pretty sure I've got a complete classification + construction method for the counterexample cases. I just want them to not exist at all.
Supersymmetry as negative dimensions

My favorite fact from when I was doing Lie theory Penrose stuff is that symplectic forms are like inner products on negative-dimensional spaces. Pretend for a moment that that's the end of this post.

Consider an n-dimensional vector space V with a bilinear inner product g(-,-). The group O(V,g) preserves this inner product, and is isomorphic to the group O(n). If we take the trace of the identity on this space, we get n. If we take a tensor product of this space with itself some number of times, we can ask about the O(V,g)-invariant subspaces. Each of these subspaces W can be pulled out by a projection operator P_{W} such that for any two such spaces W and X, P_{W}P_{X} = P_{X}P_{W}, and we get that P_{W}P_{W} = P_{W} because it's a projection. By the usual notion of projection, the trace of P_{W} is the dimension of W, and it ends up being a polynomial in n.

For instance, consider the tensor square of V, which I'll write as V^{2} since I'm too lazy to locate a tensor product symbol. V^{2} is an n^{2}-dimensional space. We can break it into two spaces that I'll call Sym^{2}(V) and Asym^{2}(V), and these are also O(V,g)-invariant. If we take an element (v_{1},v_{2}) in V^{2}, the symmetrizing projector P_{S2} sends it to (v_{1},v_{2}) + (v_{2},v_{1}) times some normalization factor and, similarly, the antisymmetrizing projector P_{A2} sends it to (v_{1},v_{2}) - (v_{2},v_{1}) times some normalization factor. The trace of P_{A2} is n(n-1)/2. For P_{S2} the trace is n(n+1)/2, and these are in fact the dimensions that you'd get for those spaces.

But Sym^{2}(V) contains a smaller O(V,g)-invariant space: we make a projection operator P_{g} that sends (v_{1},v_{2}) to g(v_{1},v_{2})Σ_{i} (e_{i},f_{i}) times some normalization factor, where the e_{i} are a basis of V and the f_{i} are the dual basis of V with respect to g: g(e_{i}, f_{j}) = 1 if i = j, 0 otherwise.
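If you want to see the trace bookkeeping happen, here's a small numpy sketch. It builds P_{S2}, P_{A2}, and P_{g} on the tensor square for n = 3, taking g to be the standard dot product (an assumption for illustration, so that e_{i} = f_{i}), and checks that the traces come out to the dimension counts above:

```python
import numpy as np

n = 3  # dimension of V; g assumed to be the standard dot product for this sketch
I = np.eye(n * n)

# Swap operator on V⊗V: sends (v1, v2) to (v2, v1)
S = np.einsum('il,jk->ijkl', np.eye(n), np.eye(n)).reshape(n * n, n * n)

P_S2 = (I + S) / 2   # symmetrizer
P_A2 = (I - S) / 2   # antisymmetrizer

# P_g projects onto the span of Σ_i (e_i, e_i) inside Sym²(V)
g_vec = np.eye(n).reshape(n * n)
P_g = np.outer(g_vec, g_vec) / n

print(np.trace(P_S2))          # n(n+1)/2 = 6.0
print(np.trace(P_A2))          # n(n-1)/2 = 3.0
print(np.trace(P_g))           # 1.0
print(np.trace(P_S2 - P_g))    # n(n+1)/2 - 1 = 5.0
```

Swapping in other values of n reproduces the polynomials directly.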
P_{g} projects onto a subspace of dimension 1, while its complement in Sym^{2}(V), the projection P_{S2}(1 - P_{g}), projects onto a subspace of dimension n(n+1)/2 - 1.

So those are our polynomials:

p_{A2}(V) = n(n-1)/2
p_{g}(V) = 1
p_{S2}(V) = n(n+1)/2 - 1

Now, suppose that n is even and that we had instead a vector space U equipped with a symplectic form w, which is like an inner product except that instead of having g(a,b) = g(b,a), we have w(a,b) = -w(b,a). Again, taking the trace of the identity yields n. We can again build tensor products of our space U, look at projections onto Sp(U,w)-invariant subspaces, and take the traces of those projection operators. So we again start with P_{A2} and P_{S2}. The trace of P_{S2} is n(n+1)/2.

If we look at Asym^{2}(U), we can use w to build a new projection operator P_{w} that sends (u_{1},u_{2}) to w(u_{1},u_{2})Σ_{i} (e_{i},f_{i}) times some normalization factor, where the e_{i} are a basis of U and the f_{i} are the dual basis of U with respect to w: w(e_{i}, f_{j}) = 1 if i = j, 0 otherwise. The space that P_{w} projects onto has dimension 1, while its complement in Asym^{2}(U), the projection P_{A2}(1 - P_{w}), projects onto a subspace of dimension n(n-1)/2 - 1.

So those are our polynomials:

p_{S2}(U) = n(n+1)/2
p_{w}(U) = 1
p_{A2}(U) = n(n-1)/2 - 1

But note that n(n+1)/2 = (-n)(-n-1)/2, so p_{S2}(U) gives the same thing as p_{A2}(V) if we swap n with -n. Also, n(n-1)/2 - 1 = (-n)(-n+1)/2 - 1, so p_{A2}(U) gives the same thing as p_{S2}(V) if we swap n with -n. And neither p_{g}(V) nor p_{w}(U) contains any ns, and they are identical. So the polynomials that we get for the symmetric stuff for V correspond to the polynomials we get for the antisymmetric stuff for U, and vice-versa, if we also swap n with -n (at least up to some overall signs).
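The n ↔ -n bookkeeping is easy to check symbolically; here's a sympy sketch (the p names below just mirror the polynomials above):

```python
import sympy as sp

n = sp.symbols('n')

# Orthogonal side (V with inner product g)
p_A2_V = n * (n - 1) / 2
p_S2_V = n * (n + 1) / 2 - 1

# Symplectic side (U with symplectic form w)
p_S2_U = n * (n + 1) / 2
p_A2_U = n * (n - 1) / 2 - 1

# Swapping n -> -n matches symmetric with antisymmetric
print(sp.simplify(p_S2_U - p_A2_V.subs(n, -n)))  # 0
print(sp.simplify(p_A2_U - p_S2_V.subs(n, -n)))  # 0
```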
The more general pattern is that all the O(V,g)-invariant subspaces of tensor powers of V can be created using symmetrizers, antisymmetrizers, and P_{g}, and similarly for the Sp(U,w)-invariant subspaces of tensor powers of U, and they can be matched up by swapping symmetrizers with antisymmetrizers (and vice-versa) and swapping P_{g} with P_{w}. The dimensions of the spaces, as functions of n, are polynomials, and corresponding spaces have matching polynomials once n is replaced by -n. So we can view U with symplectic form w as a negative-dimensional version of V with inner product g.

But that's not all! In supersymmetry, there is a notion of a "supervector space", by which we mean a vector space which is the direct sum of two components, which I'll call U and V, not necessarily of the same dimension. The superdimension of this supervector space is the dimension of V minus the dimension of U, and a superinner product on this supervector space acts like an inner product on V, a symplectic form on U, and vanishes if you try it on something from V with something from U. So, according to supervector-space theory, the dimension of the symplectic part of the supervector space is also negative.

And finally, the really-physics bit: we have fermions and bosons. If you have a wavefunction describing a pair of identical bosons and you rotate the universe by 360 degrees, the wavefunction for the bosons remains the same, while if you have a wavefunction describing a pair of identical fermions and you rotate the universe, the wavefunction picks up a minus sign. Usually we describe this using what are called spin statistics, saying that bosons have integer spin and fermions have spin that is 1/2 plus an integer. According to supersymmetry, every particle has a corresponding superpartner, and the superpartners of fermions are all bosons, while the superpartners of bosons are fermions.
And you think "okay, so the superpartner of a spin-0 boson has spin 1/2, while the superpartner of a spin-1/2 fermion has spin...?" and you have to ask why the spins change in a particular way, and whether the superpartner of a superpartner has the same spin as the original particle, and it becomes a little messy. But instead, you could say "superpartners live in negative-dimensional space", and that's why the symmetric stuff on our end, the bosons, have antisymmetric superpartners, and vice-versa. Of course, if you dig into the details this falls apart a bit, because the symplectic analogues of the spinor representations are all infinite-dimensional. Oops.
I think the most important thing I learned at the JMM this year was that Cherry Arbor Designs exists. It's a little pricey, but I now have a very nice set of Penrose Ver III tiles, jigsawed so that they have to obey the edge-matching rules that force an aperiodic tiling.
I was talking to a colleague today about Hermitian matrices. I usually don't really care about Hermitian matrices because, being a Lie theorist, I care more about anti-Hermitian matrices. But if you're doing quantum mechanics then you often care about Hermitian matrices, because physics likes putting factors of i in unnecessary places. Also observables.

For those of us who have forgotten what a Hermitian matrix is and why we care about it, consider a complex, finite-dimensional vector space V = C^{n}. We can consider linear transformations from V to itself, and we can write them as n-by-n matrices with complex entries. Given such a matrix, we can consider its transpose, i.e. flip it along the upper-left-to-lower-right diagonal; its complex conjugate, i.e. replace each entry with the complex conjugate of that entry; and its conjugate transpose, i.e. do both operations. Given a matrix A, we often write its conjugate transpose as A†, pronounced "A dagger" like you're the villain in a Shakespeare play. Given two matrices A and B, (A+B)† = A† + B†, while (AB)† = B†A†; note the order switching.

A Hermitian matrix is a matrix A such that A = A†. An anti-Hermitian matrix is a matrix A such that A = -A†. Note that if A is Hermitian, then iA is anti-Hermitian, and vice-versa.

Since we're looking at vectors in V, we can define an inner product on V as

<v,w> = ∑_{i} v_{i}w_{i}*

Then we get that <v, Aw> = <A†v, w> and <Av, w> = <v, A†w>. In particular, <v, Av> is real for all v in V if and only if A is Hermitian. This is why quantum physicists like Hermitian matrices: to observe the value of an observable h on a state φ, they compute <φ, Hφ> for H the linear operator corresponding to h, and since they want real numbers they only consider Hermitian operators H.
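Here's a quick numerical sanity check of the adjoint identity and the realness claim, using the conjugate-the-second-argument convention above (numpy, random matrices purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def ip(v, w):
    # The post's convention: conjugate the *second* argument
    return np.sum(v * np.conj(w))

A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T            # Hermitian by construction
v = rng.normal(size=4) + 1j * rng.normal(size=4)
w = rng.normal(size=4) + 1j * rng.normal(size=4)

# <v, Aw> = <A† v, w> holds for any A
print(np.isclose(ip(v, A @ w), ip(A.conj().T @ v, w)))  # True

# <v, Hv> is real when H is Hermitian
print(np.isclose(ip(v, H @ v).imag, 0))  # True
```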
Okay, technically when I say "matrix" I probably mean "operator", and when I say "Hermitian" I mean "self-adjoint", because there's a chance that n is infinity.

One nice fact about Hermitian matrices is that a Hermitian matrix can be fully diagonalized, and all of its eigenvalues are real; similarly, an anti-Hermitian matrix is fully diagonalizable and all of its eigenvalues are purely imaginary. In fact, we can sort of analogize Hermitian and anti-Hermitian matrices to real and imaginary numbers, in the following sense: given a matrix A, A + A† is Hermitian, as is AA†. This is akin to the fact that given a complex number z, z + z* is real, as is zz*, where z* indicates the complex conjugate of z. Moreover, AA† is "positive", in the sense that all of its eigenvalues are nonnegative real numbers, in the same way that zz* is always a nonnegative real number.

So we define the "absolute value" of a matrix A as

|A| = √(AA†)

akin to how the magnitude of a complex number z is

|z| = √(zz*)

Okay, but what is √ of a matrix? Well, that's a little tricky in general, but fortunately we're dealing with the nice case of taking the square root of a positive matrix. Firstly, we diagonalize the matrix AA†, getting a matrix whose only entries are along the diagonal. These entries are the eigenvalues of AA†, which are all nonnegative real numbers, so we can take a square root and get another diagonal matrix whose entries are all nonnegative real numbers. Then we change the basis back to whatever it was before we diagonalized. And that gets us |A|, whose eigenvalues are the singular values of A; when A commutes with A†, these are the magnitudes of the eigenvalues of A.

Unlike complex numbers, matrices generally don't commute, and indeed in general A and A† don't commute. Aww, that's unfortunate. This also means that while the sum of two Hermitian matrices is Hermitian, the product of two Hermitian matrices generally isn't Hermitian: (AB)† = B†A† = BA, not necessarily equal to AB.
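The diagonalize, take square roots, undiagonalize recipe is a few lines of numpy. This sketch builds |A| for a random complex matrix (chosen purely for illustration) and checks that it's Hermitian and squares back to AA†:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# |A| = sqrt(A A†), computed by diagonalizing the positive matrix A A†
M = A @ A.conj().T
evals, U = np.linalg.eigh(M)   # eigh, since M is Hermitian
# Clip guards against tiny negative values from floating-point roundoff
abs_A = U @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ U.conj().T

# |A| is Hermitian, and squaring it recovers A A†
print(np.allclose(abs_A, abs_A.conj().T))   # True
print(np.allclose(abs_A @ abs_A, M))        # True
```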
If we want a "product" that takes two Hermitian matrices and spits out a Hermitian matrix, the simplest way to do it is called "Jordan multiplication", in which we say that

A * B = (AB + BA)/2

Now note that if A and B are Hermitian, then A * B is also Hermitian; you can work it out by hand if you want to. Also note that this kind of multiplication is commutative: A * B = B * A. Nice, right? But unfortunately, it's not associative:

(A * B) * C = ((AB + BA)/2) * C = (ABC + BAC + CAB + CBA)/4
A * (B * C) = A * ((BC + CB)/2) = (ABC + ACB + BCA + CBA)/4

which aren't quite equal on the nose. And that's kind of awkward. But that's what happens when you try to multiply observables in quantum mechanics.
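And a quick numerical check of all three claims — the Jordan product preserves Hermiticity, commutes, and (for generic matrices) fails to associate. The random Hermitian matrices here are just for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_hermitian(n):
    # X + X† is Hermitian for any X
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return X + X.conj().T

def jordan(A, B):
    # Jordan multiplication: the symmetrized matrix product
    return (A @ B + B @ A) / 2

A, B, C = (rand_hermitian(3) for _ in range(3))

print(np.allclose(jordan(A, B), jordan(A, B).conj().T))  # Hermitian: True
print(np.allclose(jordan(A, B), jordan(B, A)))           # commutative: True
print(np.allclose(jordan(jordan(A, B), C),
                  jordan(A, jordan(B, C))))              # associative: False (generically)
```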