TIL that Karl Marx started writing a manuscript on calculus in which, while apparently unaware of the work of people like Cauchy, he nevertheless managed to anticipate some developments in the interpretation of calculus that reached the mainstream only in the 20th century, in particular nonstandard analysis.
Today I learned that, while most mathematicians take for granted that for any infinite set A, A × A has the same size as A, that statement is in fact equivalent to the axiom of choice. Of course, most mathematicians also take the axiom of choice for granted.
Another use for categories: talking about things that don't exist. For instance, quantum groups don't exist. That is to say, there is no set with a binary operation and a specified point such that the group laws hold and where the function ring of the object is noncommutative; sets don't work that way. There is also no field with one element: fields specifically have the axiom that 1 is not 0, and since both need to be in the field, your field needs at least two elements. There is a ring with one element, but it isn't a field, and it doesn't act like a field.

But! Categories allow us to treat such non-objects as if they were actual objects. For instance, we can talk about the category of linear representations of a group. These are the maps from the group to matrices where the group operation becomes matrix multiplication. And these categories have particular properties that allow us to study the groups; for instance, the Feit-Thompson theorem, which says that every group of odd order is solvable, uses representation theory. And we can similarly talk about the category of linear representations of a quantum group! Even though the quantum group doesn't exist as a set, we can use the data describing the quantum group to define a category of representations, and we can use this category to study the quantum group in the same way that we use the category of representations to study a group. We can also use category theory to talk about virtual representations, where the representations themselves don't actually exist, for instance by requiring the matrices to have negative dimension, but we can still talk about their behavior because we have the category structure.

Given a field, we can talk about geometric spaces defined using the field. For instance, we can say "take the vector space F^n and take the space of solutions to the polynomial P(x)", and that gets us a geometric object, and we can study the category of these spaces. For instance, we can define a notion of a "curve", a one-dimensional thing, even when the field isn't the reals, and the category tells us how these one-dimensional things behave. And we can build such a category for the field with one element! It's a strange category, because it's built from a strange field, but we can still do it! And the relations between the spaces in this category look like the relations for spaces built from fields that actually exist, so we can say things about the field with one element based on things we learned from fields that actually exist, by moving through this category language.

This is one way to "prove" the Riemann Hypothesis: there's an analogue of the Riemann Hypothesis that is true for function fields of curves defined over a finite field, and the integers would be the function field of a curve defined over the field with one element, if the field with one element existed. So if the field with one element actually existed, then the Riemann Hypothesis would necessarily be true! The point is, categories allow us to make analogies between things that do exist and things that technically don't, or at least that don't exist in the same sense as the things that do.
For those of us who like innocuous-looking axioms with surprising consequences, here's a set of six axioms that look purely algebraic but end up forcing a category to be (equivalent to) either the category of real Hilbert spaces or the category of complex Hilbert spaces. The tricky bit, in my opinion, is showing that Solèr's theorem applies, which then forces the scalar field to be either the reals or the complex numbers (having first shown that there are scalars and that they do form a field). https://arxiv.org/abs/2109.07418 So, another step closer to putting quantum mechanics on genuinely algebraic foundations. Which I would appreciate, because I don't understand analysis at all.
So nostalgebraist-autoresponder is a GPT-3 bot that generates human-like text via a large neural network trained on a big corpus of stuff from the internet, plus a trawl through nostalgebraist's own tumblr. Anyway, here's some fake category theory.
The Quantum 36 Officers Problem

Apocryphally, Euler was once challenged by Catherine the Great to solve the following problem: there are six regiments, and six ranks that an officer can have, and each regiment sends you 6 officers, one of each rank. So you have 36 officers total. And for some reason, you want to arrange them in a square so that no row and no column contains two officers of the same rank or two officers from the same regiment. Can you find such an arrangement?

Let's look at a simpler problem first. Suppose we only care about regiments, and there are only three regiments, call them A, B and C. Then we have a solution that looks like

A, B, C
B, C, A
C, A, B

We call such an arrangement a Latin square: there are n labels, and no label appears twice in the same row or column. So the condition on regiments is exactly the Latin square condition. The condition on ranks is also a Latin square condition. Suppose there are three ranks, 1, 2 and 3. We have a solution that looks like

1, 2, 3
3, 1, 2
2, 3, 1

Furthermore, we can combine the two solutions to get a solution to the 9 officers problem:

A1, B2, C3
B3, C1, A2
C2, A3, B1

Tada! We call such a thing a Graeco-Latin square: no two officers share both a regiment and a rank, and they're arranged in a square where no rank or regiment shows up twice in the same row or column. You can probably, with a bit of work, show that there isn't any Graeco-Latin square for two regiments and two ranks. Just not enough room to maneuver. So let's scale up. What about, say, 4, or 5?

A1, B2, C3, D4
B4, A3, D2, C1
C2, D1, A4, B3
D3, C4, B1, A2

A1, B2, C3, D4, E5
B5, C1, D2, E3, A4
C4, D5, E1, A2, B3
D3, E4, A5, B1, C2
E2, A3, B4, C5, D1

Do you see a pattern? What about 6? Because I want to get to the real fun part, I'm just going to tell you that there isn't a solution for 6. Euler found solutions for all odd n and all multiples of four, convinced himself that there were no solutions for n = 2 or 6, and conjectured that there were no solutions for any n ≡ 2 (mod 4). It turns out that there are in fact solutions for n ≡ 2 (mod 4) except when n = 2 or 6; the solution for n = 10 was one of the first examples of computer-aided combinatorics.

But! Note the title of this post. Let's get quantum.

We start by rethinking the problem a little bit. Instead of saying "let's arrange officers", we'd rather just say "here's a square grid, put some labels on the squares." So what we want is a map from the (i,j) square on the grid to a pair (k,l), where k indicates regiment and l indicates rank. Call this map φ. We can write φ as a matrix U_{ij,kl}. In this case, we would write the state of the grid as

Σ_{i,j,k,l} U_{ij,kl} |i>|j>|k>|l>

where |i>|j>|k>|l> indicates that the officer in row i and column j is from regiment k and has rank l. A solution to the officers problem says that φ is a permutation of pairs (i,j) to pairs (k,l), since no pair of (regiment, rank) shows up twice. But we can also read φ as a map ρ from pairs (row, regiment) to pairs (column, rank): no regiment shows up twice in the same row, so specifying (row, regiment) also specifies the column that the officer from that regiment shows up in, and also specifies the rank of that officer, and this map ρ is also a permutation. Similarly, we get a map σ from pairs (row, rank) to pairs (column, regiment) for the same reason, and σ is also a permutation.
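Incidentally, the squares above for n = 3 and n = 5 come from a simple algebraic construction that works for any odd n: put the officer from regiment (i + j) mod n with rank (i + 2j) mod n in cell (i, j). Here's a quick sketch of that in Python (my own illustration, not from the original post; the function names are made up):

from itertools import product

def graeco_latin(n):
    # Regiment = (i + j) mod n, rank = (i + 2j) mod n.
    # Each coordinate is a Latin square (this needs 2 invertible mod n,
    # i.e. n odd), and the pairs cover all n^2 combinations because the
    # map (i, j) -> (i + j, i + 2j) has determinant 1 mod n.
    if n % 2 == 0:
        raise ValueError("this construction needs odd n")
    return [[((i + j) % n, (i + 2 * j) % n) for j in range(n)]
            for i in range(n)]

def is_graeco_latin(square):
    n = len(square)
    rows_ok = all(len({c[0] for c in row}) == n and
                  len({c[1] for c in row}) == n for row in square)
    cols = [[square[i][j] for i in range(n)] for j in range(n)]
    cols_ok = all(len({c[0] for c in col}) == n and
                  len({c[1] for c in col}) == n for col in cols)
    pairs_ok = len({cell for row in square for cell in row}) == n * n
    return rows_ok and cols_ok and pairs_ok

print(is_graeco_latin(graeco_latin(5)))  # True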
In terms of matrices, ρ gives the matrix U_{ik,jl} and σ gives the matrix U_{il,jk}, which are basically the same matrix as U_{ij,kl} but with the indices moved around. The point is that for a solution of the officers problem, all of these matrices are permutation matrices, i.e. each row of the matrix has exactly one entry that's 1 and the rest 0, and likewise each column.

If we want to change this to a quantum problem, we replace "permutation matrix" with "unitary matrix", i.e. a matrix with complex entries such that UU^† = I, where † indicates the complex-conjugate-transpose. And so a solution to the quantum officers problem is a unitary matrix U_{ij,kl} such that the corresponding matrices U_{ik,jl} and U_{il,jk} are also unitary.

What does this mean for the contents of the grid? Since we're no longer using permutation matrices, rather than each officer having a well-defined regiment and a well-defined rank, we instead get superpositions of (regiment, rank) pairs. Moreover, the officers are entangled with each other: the (regiment, rank) superpositions for each officer are not completely independent of those for the other officers.

Anyway, it turns out that although the classical 36 officers problem is impossible, the quantum 36 officers problem has a solution! Also, the solution is related to certain quantum error-correcting codes. See this paper for details: Thirty-Six Entangled Officers of Euler: Quantum solution to a classically impossible problem
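If you want to play with the three unitarity conditions yourself, here's a minimal sketch (again my own, not from the paper), assuming we store U as a 4-index numpy array U[i, j, k, l]:

import numpy as np

def is_unitary(m, tol=1e-10):
    # A matrix U is unitary when U @ U^dagger = I.
    return np.allclose(m @ m.conj().T, np.eye(m.shape[0]), atol=tol)

def officers_conditions(U):
    # U has shape (n, n, n, n), read as U[i, j, k, l].
    n = U.shape[0]
    phi = U.reshape(n * n, n * n)                          # rows (i,j), cols (k,l)
    rho = U.transpose(0, 2, 1, 3).reshape(n * n, n * n)    # rows (i,k), cols (j,l)
    sigma = U.transpose(0, 3, 1, 2).reshape(n * n, n * n)  # rows (i,l), cols (j,k)
    return is_unitary(phi), is_unitary(rho), is_unitary(sigma)

# A classical solution (like the n = 5 square above, encoded as a
# permutation matrix) passes all three tests with 0/1 entries; the
# quantum n = 6 solution passes them with genuinely complex entries.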
TIL that algebraic geometers got super lucky terminology-wise, in that abelian varieties are not called that because they're commutative as groups! Rather, Abel was looking at generalizations of elliptic integrals, and those all lead to varieties that admit group structures which all happen to be commutative! From now on, I'm going to pretend that this is the only fact about algebraic geometry that I know. [EDIT] Okay, apparently it's the other way around: commutative groups are called abelian because they're the Galois groups of the polynomials that give us abelian varieties. Source: David A. Cox. This makes the fact a lot less fun, since it means it's not a coincidence.
I'm still extremely happy that we have a complete classification of the finite simple groups, and that it's not too awful. I can even describe all of the infinite families! The sporadics still escape me, though.
Man, learning what the "groups of Lie type" actually are and how to visualize them is one of those things in my "math stuff I wanna learn more about sometime" list. Honestly just, more group theory in general tbh.
The finite simple groups, i.e. the groups that can't be broken down into smaller groups, come in 18 infinite families, plus 26 sporadic groups that don't fit into those families. Of the 18 infinite families, two are fairly easy to describe: one is the cyclic groups of prime order, i.e. the integers modulo p for prime p, and the other is the alternating groups, i.e. the even permutations of n elements. The other 16 infinite families are all derived from the groups of Lie type. I'm going to go through these in roughly ascending order of difficulty, although I definitely don't try very hard at the Suzuki-Ree groups.

Consider the set of n × n invertible matrices, the general linear group GL_n. We can consider matrices with real entries, i.e. GL_n(R), or matrices with complex entries, i.e. GL_n(C). This is a group, since you can multiply two invertible matrices and get another invertible matrix, you can invert invertible matrices, and matrix multiplication is associative. This is one of the first examples of an infinite group you meet, once you've gotten past the integers and vector spaces.

We can impose some basic conditions on the matrices to get subgroups. For instance, we might impose orthogonality: a matrix M is orthogonal if M M^T = I, i.e. if the matrix times its transpose is the identity. The set of orthogonal matrices also yields a group, O_n, and again we can ask for it over R or C. Note: we'll mostly focus on SO_n, the orthogonal matrices of determinant 1 (i.e. the special orthogonal matrices).

Another, slightly weirder condition is symplectic-ness. We define a matrix J such that J^2 = -I. The easiest way is via blocks:

J = [ 0    -I_r ]
    [ I_r   0  ]

where r = n/2; note that we need n to be even here. Then we say that a matrix M is symplectic if M J M^T = J. These matrices again form a group, Sp_r.

For reasons, we often denote GL_{n+1} by A_n, SO_{2n+1} by B_n, Sp_n by C_n, and SO_{2n} by D_n. I've talked a bit about this classification before in my previous blather about ADE classifications and Dynkin diagrams. These are all Lie groups, in that in addition to the algebraic being-a-matrix-with-conditions structure, they also have geometric structure coming from the geometric structure of R and C, and the algebraic and geometric structures play nicely together. Note: for Lie groups, sometimes GL_n is replaced by SL_n, the matrices with determinant 1.

But note that the condition of being an invertible matrix isn't really attached to R or C. Indeed, we can take the entries of our matrices to live in, say, the integers Z. We can still look at invertible matrices with integer entries. We can also look at integer matrices with determinant 1, or orthogonal integer matrices, or symplectic integer matrices, and get A_n(Z) or B_n(Z), etc.

Or we could use a finite field. A finite field has one parameter, q, which says what size it is, so we call it F_q. But otherwise it's just a field in the way that R or C is a field: you can add, subtract, multiply, and divide by everything that isn't 0. So we can look at A_n(q), the group of (n+1) × (n+1) invertible matrices with entries in F_q. Similarly, we can look at special orthogonal matrices over a finite field: the condition M M^T = I is perfectly well-defined over F_q. So we can look at SO_{2n+1} and SO_{2n} (you have to be a little careful if q is a power of 2, but I'm going to ignore that for now). So we get B_n(q) and D_n(q). And we can look at symplectic matrices: M J M^T = J also works fine over F_q. So we get C_n(q).
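None of this is abstract, by the way: these groups are small enough to enumerate directly. Here's a little sketch (my own illustration, with made-up names) that brute-force counts GL_2(F_q) for small prime q and checks it against the standard formula (q^2 - 1)(q^2 - q):

from itertools import product

def gl2_order(q):
    # Count 2x2 matrices over F_q with nonzero determinant.
    # (q is assumed prime here, so arithmetic mod q gives the field.)
    count = 0
    for a, b, c, d in product(range(q), repeat=4):
        if (a * d - b * c) % q != 0:
            count += 1
    return count

for q in [2, 3, 5]:
    formula = (q**2 - 1) * (q**2 - q)
    print(q, gl2_order(q), formula)  # the two counts agree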
The important part is that all of the conditions (invertibility, specialness, orthogonality, symplectic-ness) are algebraic conditions requiring only addition, subtraction and multiplication to define, so we can talk about them over any commutative ring, be it R, C, Z or a finite field F_q. So that gets us 4 of the families of classical groups of Lie type: A_n(q), B_n(q), C_n(q) and D_n(q).

But if you remember back to my talk about Dynkin diagrams, we can also look at the exceptional Lie groups: E_6, E_7, E_8, F_4 and G_2. We can again define these over either R or C, and if you're careful you can also define them over finite fields, to get the Chevalley groups: E_6(q), E_7(q), E_8(q), F_4(q), G_2(q). Again, there's some funny business if q is a power of 2 or 3 (or 5 for E_8), but whatever. Okay, that gets us 9 of the 16 families of groups of Lie type.

Now for something a little bit harder. There is a nice map from C to itself called complex conjugation. The important bits are that it leaves all the real numbers in place, and it is a field automorphism: (x+y)* = x* + y*, and (xy)* = x* y*. We can then define the conjugate transpose M*, which takes the transpose of M and then complex-conjugates all of the entries. We say that a matrix is unitary if M M* = I, and say that it lives in ^2A_n. There's also a map from SO_{2n} to itself. In terms of matrices, we define L to be the identity matrix except that the last 1 in the bottom right corner is replaced by -1. Then we write M' for taking L M^T L and then complex-conjugating all of the entries. We say that a matrix M lives in ^2D_n if M M' = I. I don't know if this group has a name.

What's the corresponding version for a finite field? Well, we need a field with a "complex conjugation". We can go from R to C by taking a quadratic polynomial with coefficients in R (for instance x^2 + 1), taking a root of that polynomial (usually called i), and then completing to a field. We get complex conjugation by sending one of the roots of the polynomial to the other (sending i to -i), and sending anything in R to itself. For a finite field F_q, we get a field F_{q^2} by taking a quadratic polynomial with coefficients in F_q, taking a root of that polynomial, then completing to a field. We get a "conjugation", called the q-Frobenius map, that sends one root of that polynomial to the other and leaves anything in F_q alone. Applying the q-Frobenius map twice gets you back where you started. Fun fact: the q-Frobenius map can be realized as x ↦ x^q. This clearly respects multiplication, but for a field of size q^2 it also respects addition! (In characteristic p, (x + y)^p = x^p + y^p, since all the middle binomial coefficients are divisible by p.)

Hence we can define M* to be "transpose, then apply the q-Frobenius map to each entry" for "unitary" matrices, and define M' to be "transpose, apply L on both sides, then apply the q-Frobenius map to each entry" for whatever we call the ^2D_n version. So we can define ^2A_n(q^2) and ^2D_n(q^2) (usual caveat about when q is a power of 2). So the full set of classical groups of Lie type is: A_n(q), B_n(q), C_n(q), D_n(q), ^2A_n(q^2), ^2D_n(q^2).

The transpose for ^2A_n and the more complicated thing for ^2D_n correspond to the fact that their Dynkin diagrams have a symmetry: A_n has a line for a diagram, so the symmetry is just flipping the line around, and D_n has a line that branches into two 1-segment branches at the end, so the symmetry is just swapping the branches. E_6 has a diagram of a node with three branches, one that is 1 segment long and two that are 2 segments long, so we get a symmetry that swaps the two 2-segment branches.
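To see the Frobenius "conjugation" in action, here's a small sketch of my own (helper names invented) building F_9 = F_3(i) with i^2 = -1 (x^2 + 1 has no root mod 3, so this really is a field) and checking that x ↦ x^3 behaves exactly like complex conjugation:

from itertools import product

# Elements of F_9 are pairs (a, b) meaning a + b*i, with a, b in F_3.
def add(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def mul(x, y):
    a, b, c, d = x[0], x[1], y[0], y[1]
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)  # uses i^2 = -1

def frob(x):  # the 3-Frobenius map: x -> x^3
    return mul(mul(x, x), x)

elems = list(product(range(3), repeat=2))
# Frobenius respects addition (and multiplication, trivially)...
assert all(frob(add(x, y)) == add(frob(x), frob(y)) for x in elems for y in elems)
# ...fixes the base field F_3...
assert all(frob((a, 0)) == (a, 0) for a in range(3))
# ...swaps the two roots of x^2 + 1 (i.e. sends i to -i)...
assert frob((0, 1)) == (0, 2)
# ...and applying it twice is the identity.
assert all(frob(frob(x)) == x for x in elems)
print("all checks pass")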
Hence we get a similar "transpose" for E_6, and can build ^2E_6(q^2), which I'm not going to try to illustrate. For D_4, the diagram is a node with three 1-segment branches, and so we get a symmetry not just by swapping any two branches but by cycling all three. Correspondingly, on F_{q^3} the q-Frobenius map requires 3 applications to get back to the start. So we can define ^3D_4(q^3); this has no analogue over C or for n other than 4, and so is "exceptional" even though D_4 isn't exceptional as a Lie group. Thus we get the Steinberg groups ^2E_6(q^2) and ^3D_4(q^3).

Then there are the weird ones: the Suzuki-Ree groups ^2B_2(2^{2n+1}), ^2G_2(3^{2n+1}) and ^2F_4(2^{2n+1}). Note that in each of these cases the diagrams are almost symmetrical. For B_2, we get two nodes connected by a double line; for G_2, two nodes connected by a triple line; and for F_4, a node connected to a node by a single line, which is connected to a node by a double line, which is connected to a node by a single line. The only reason these diagrams aren't completely symmetrical is that the multi-lines are directed, and normally that means we can't do a swap. But for fields of very specific sizes, i.e. the ones given, we can do the swap, because a bunch of stuff cancels out. As you might expect, this has absolutely no analogue over C, so I'm not going to bother trying.

Honestly, if I were doing this "properly", I'd start not with finite fields but with algebraically closed fields of positive characteristic, define algebraic groups over those fields, and then take subgroups that are invariant under various automorphisms. That would illustrate more clearly why the Frobenius maps matter and how ^2A_n, ^2D_n, ^2E_6 etc. work. But that's also a lot of effort.
You sometimes see the groups defined to be the actual simple groups, which are somewhat harder to deal with. Sometimes they're quotients of the matrix groups (e.g. PSL_n(q) is SL_n(q) modulo its center), or subgroups, or subquotients. Also, a lot of algebraic geometers want to do things the proper way and force you through algebraic groups over algebraically closed fields, which is a nicely uniform presentation and might be good, like, the fifth time you've seen the classical groups. Thanks, Chevalley. But yeah: matrix groups over finite fields. Not necessarily the best way to approach them if you're doing research, but probably the best first pass.
Summer of Math Exposition

Forgot to talk about this last year, but the 2nd Summer of Math Exposition just wound up, and we have videos! Summer of Math Exposition is a "competition" run by the youtube channel 3Blue1Brown, wherein people make math exposition videos over the summer, and eventually these are collated and ranked according to various metrics. There's a lot of good stuff in here; I learn things all the time. Check out the summary video.
I guess I already knew that linear algebra over things that aren't fields is kind of wonky, but here's a neat construction that I figured out while noodling around today. Take the matrix A given by

[0, 1]
[0, 0]

We usually think of A as having one eigenvalue, 0, with one actual eigenvector, [1, 0], and a generalized eigenvector, which is anything that isn't a scalar multiple of [1, 0]. But suppose you move to a ring with a nontrivial order-2 nilpotent, i.e. a nonzero value e such that e^2 = 0. Then the vector [1, e] has eigenvalue e: A[1, e] = [e, 0] = e·[1, e], since e·e = 0. In fact, for any number k, we get that [1, ke] is an eigenvector with eigenvalue ke, since (ke)(ke) = k^2 e^2 = 0; whenever ke ≠ 0, that's a genuinely new eigenvalue. Another way to look at it is that zI - A has determinant z^2, and so any z with z^2 = 0 is in the spectrum of A. Over a field, the only such z is 0, but over a ring with nontrivial order-2 nilpotents, there are plenty of such elements.
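A quick sanity check (my own sketch, using the dual numbers Z[ε]/(ε²) as the ring with a nilpotent; all the names are made up):

# Dual numbers: pairs (a, b) meaning a + b*eps, with eps^2 = 0.
def dadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def dmul(x, y):
    a, b, c, d = x[0], x[1], y[0], y[1]
    return (a * c, a * d + b * c)  # the eps^2 term vanishes

def matvec(M, v):
    return tuple(
        dadd(dmul(M[i][0], v[0]), dmul(M[i][1], v[1])) for i in range(2)
    )

zero, one, eps = (0, 0), (1, 0), (0, 1)
A = [[zero, one],
     [zero, zero]]
v = (one, eps)  # the vector [1, e]

# A v equals e * v, i.e. [e, 0], so v is an eigenvector with eigenvalue e:
assert matvec(A, v) == (eps, zero)
assert (dmul(eps, one), dmul(eps, eps)) == (eps, zero)
print("[1, e] is an eigenvector with eigenvalue e")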
Trying to learn type theory properly. As expected, the difference between judgmental equalities and identifications is giving me trouble, mostly, I think, because of the notation.

In set theory, the logic used is first-order logic, and so there is a notion of "equals" that comes from the logic and is independent of set theory. In particular, equality is not a set, nor is it a set-like thing that is too big to be a set. In type theory, specifically Martin-Löf dependent type theory, there is a notion of "is the same thing" called judgmental equality, and you can say that two objects are the same thing. But there's also a notion of an identification, which is an element of a type. The type is parameterized by two objects, call them x and y for now, and the elements of the type are all the ways that x and y can be identified with each other. If x is judgmentally equal to y, then we can construct an element refl_x, which is an identification of x and y. But there are identifications between things that aren't judgmentally equal. For instance, two one-element sets are not judgmentally equal, because their elements could be different. But they can be identified.

Anyway, the problem I'm having is that the standard notation for "judgmentally equal" is an "=" with a dot over it, or a "≡", while the type of identifications between x and y is written as "x = y". Which I sort of get, given that it's analogous to the type of functions from x to y, which is written "x → y", as in "f : x → y" indicates that f is a function from x to y. So an identification between x and y is written "f : x = y". But I'm having trouble parsing that as "f : (x = y)" and not as "(f : x) = y".

Fun fact: you can build a natural numbers type with various versions of the Peano axioms, but, if you choose the ones I used a bunch of posts ago, where you define + using "m + S(n) ≡ S(m + n)", then while you can use them to prove that m + 0 ≡ m for all m, you can't use them to prove 0 + n ≡ n for all n; the best you can do is construct identifications in the type ∏_n (0 + n) = n.
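This asymmetry is easy to see in a proof assistant. Here's a minimal sketch in Lean 4 (whose definitional equality plays the role of judgmental equality, and whose = type plays the role of identifications; zero_add' is my own name for the proof):

-- Lean's Nat.add is defined by m + S(n) ≡ S(m + n) and m + 0 ≡ m,
-- so `m + 0 = m` holds judgmentally: `rfl` proves it.
example (m : Nat) : m + 0 = m := rfl

-- But `0 + n = n` is NOT a judgmental equality: `rfl` fails, and we
-- have to build an identification by induction on n.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih =>
      -- 0 + S(n) ≡ S(0 + n) holds judgmentally, so `show` accepts it;
      -- then we rewrite along the inductive identification.
      show Nat.succ (0 + n) = Nat.succ n
      rw [ih]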
Still learning about type theory. So far my favorite fact from type theory is that the axiom of choice is analogous to the statement that multiplication distributes over addition.
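To spell out the analogy a little (my own gloss, as I understand the "type-theoretic axiom of choice"): the statement is that a dependent product of sums is equivalent to a sum of dependent products,

∏_{x : A} ∑_{y : B(x)} C(x, y) ≃ ∑_{f : ∏_{x : A} B(x)} ∏_{x : A} C(x, f(x))

and if A has just two elements, the left side is literally a product of two sums and the right side is its expansion as a sum of products, i.e. (c + d)(c' + d') = cc' + cd' + dc' + dd'.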
ChatGPT is getting pretty good at making fake mathematics. I mean, if you actually read it you can tell it's wrong, but it's formatted and styled like real mathematics. It even has some true statements in it! True and possibly relevant statements! Of course it also has a lot of false statements.
Apparently a group of mathematicians has found an aperiodic monotile that's homeomorphic to a disk! In other words, a flat, single-piece polygon that tiles the plane, but such that no tiling by it has any translational symmetry. In the 70s, Roger Penrose found several aperiodic sets of two tiles with this property, and there are cases like the Taylor-Socolar tile where, if you redefine "polygon" to include shapes made of several disconnected pieces, you can get an aperiodic monotile, but now we have an aperiodic monotile that is a single connected piece! Source: Science News. ArXiv preprint: An Aperiodic Monotile