Eh, you can do a lot of it just via character formulas. I mean, they won't necessarily tell you what the basis polynomials look like, but they'll tell you how many there are in any given degree, and you can do some stuff from there.
*Space Filling Curves!* Suppose you had an n-by-n square grid and you want to visit each square on the grid. So you start in one corner, and wander around, and eventually you get to every square in some order. Maybe you just go in rows, one by one. Now suppose you want to visit each square only once. Still, not the hardest problem to figure out; you have to switch direction every once in a while, but not bad. So this gives you an ordering on the squares in terms of which squares are visited before which other squares. Okay, great. Now suppose that each of the squares in the original grid gets divided into four smaller squares, so now you have a 2n-by-2n grid of small squares. Well, again you can go around visiting them, and again you'll get an ordering on the small squares. Does the ordering on the small squares reflect the ordering on the big squares? In other words, if you have two big squares A and B, with A visited before B, do you visit the small squares that you got from A before you visit the small squares you got from B? Maybe! There are certain types of paths that you can use to traverse the grid that will preserve the ordering. And not only for a single subdivision: there is a family of curves you can use so that going from a 1-by-1 to a 2-by-2 to a 4-by-4 to an 8-by-8 to ... will still keep the ordering. How? Using space-filling curves. In general, a space-filling curve is a curve whose range is the entirety of a region of 2- (or higher-) dimensional space, for instance a square. Most of the time, space-filling curves are described in terms of a sequence of non-space-filling, finite-length curves, with the idea that each curve in the sequence looks like the previous curve but wigglier, so that each curve is an approximation of the next curve, like with our idea of ordering. This allows us to consider the space-filling curve as a limit of the sequence of curves. One such space-filling curve is the Hilbert curve. 
You can see that in the first image, the lower-left is visited first, then the upper-left, then the upper-right, and then the lower-right. In the next image, four points in the lower-left are visited, then four points in the upper-left, then four points in the upper-right, and then four points in the lower-right. And so on. You might recognize this curve from xkcd's Map of the Internet as the way that the IP address blocks are ordered. There are of course other kinds of space-filling curves, based on triangular grids rather than square ones for instance, or in more dimensions. Here is some music based on space-filling curves, with the coordinate values corresponding to pitches and the different dimensions corresponding to different registers or instruments. One might ask whether a space-filling curve counts as a fractal, as most of them are highly self-similar. On the other hand, their ranges have an integer dimension, and so do their domains, so if we take a "fractal" to be something with a "fractional" dimension, then this isn't it.
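A sketch of how this ordering works in code, using the standard bit-twiddling conversion from a Hilbert-curve index to grid coordinates (a minimal Python translation; the name d2xy is just the conventional label for this routine):

```python
def d2xy(order, d):
    """Convert an index d along the Hilbert curve into (x, y) coordinates
    on a 2^order by 2^order grid, via the standard bit-manipulation method."""
    x = y = 0
    t = d
    s = 1
    side = 2 ** order
    while s < side:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/reflect the quadrant so the sub-curves join up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The order-1 curve visits lower-left, upper-left, upper-right, lower-right
# (taking y as the vertical axis):
print([d2xy(1, d) for d in range(4)])  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```

The ordering claim is easy to check with this: take the point with index d on the refined grid, halve its coordinates, and you land in the big cell with index d // 4 on the coarser grid, so the coarse and fine orderings agree.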
The wave equation according to a linear algebraist Consider a string floating in the air, initially straight. If we wiggle one end of the string, the wiggle will propagate down the string, forming what we generally call a "wave". There's a simple description of this process, which of course ignores a lot of what's actually going on but gives a decent approximation. Assume instead of a string, we have a lot of weights in a row, each of mass m, attached by massless springs. Let the spring constant of each of the springs be k. Write u(x,t) for the displacement at time t of the mass that's normally at position x from being actually at x. At the point x+h, the force according to Newton is ma(t) = m(d^{2}/dt^{2})u(x+h,t) Hooke's Law states that deformation of a material is linearly related to how much stress is being put on it. That's in particular true for the model of springs we are looking at. So in particular, the force on the weight at point x+h comes from the two springs on either side of it. On the one side, we have the spring connecting to the weight at point x+2h, which, being deformed by an amount u(x+2h,t) - u(x+h,t), exerts a force of k(u(x+2h,t)-u(x+h,t)). On the other side, we have the spring connecting to the weight at point x, which, being deformed by an amount u(x+h,t)-u(x,t), exerts a force of k(u(x+h,t) - u(x,t)), but in the other direction. So the total force is k(u(x+2h,t) + u(x,t) - 2u(x+h,t)). Setting the forces from Newton and Hooke equal to each other and dividing both sides by m gives us: (d^{2}/dt^{2})u(x+h,t) = (k/m)(u(x+2h,t) + u(x,t) - 2u(x+h,t)) Note that u(x+2h,t) + u(x,t) - 2u(x+h,t) is approximately h^{2}(d^{2}/dx^{2})u(x+h,t). Replacing m by Mh/L and k by KL/h (so that a string of length L has total mass M and total stiffness K) and letting h go to 0, i.e. assuming instead of discrete weights and springs we have a continuous system, we get (d^{2}/dt^{2})u(x,t) = (KL^{2}/M)(d^{2}/dx^{2})u(x,t) which we call the wave equation in one (space) dimension. But really what I want to talk about is the wave equation in higher space dimensions. 
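The discrete weights-and-springs model above can be simulated directly; here's a minimal Python sketch (all parameter values are illustrative choices, not from the text). It integrates Newton's law with the Hooke force k(u[i-1] + u[i+1] - 2u[i]) for a chain whose ends are attached to walls, and an initial bump in the displacement does indeed propagate down the chain as a wave while the total energy stays put:

```python
import math

def simulate(n=64, k=1.0, m=1.0, dt=0.01, steps=2000):
    """Integrate a chain of n masses joined by springs (ends fixed to walls)."""
    # Start from rest, with a small displacement bump near the left end.
    u = [0.1 * math.exp(-((i - 8) / 3.0) ** 2) for i in range(n)]
    v = [0.0] * n

    def accel(u):
        # Hooke force from the two neighboring springs; the walls stay at 0.
        return [(k / m) * ((u[i - 1] if i > 0 else 0.0)
                           + (u[i + 1] if i < n - 1 else 0.0)
                           - 2 * u[i]) for i in range(n)]

    def energy(u, v):
        kinetic = 0.5 * m * sum(x * x for x in v)
        stretches = [u[0]] + [u[i + 1] - u[i] for i in range(n - 1)] + [u[n - 1]]
        return kinetic + 0.5 * k * sum(s * s for s in stretches)

    e0 = energy(u, v)
    a = accel(u)
    for _ in range(steps):  # velocity Verlet: good energy behavior
        v = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
        u = [ui + dt * vi for ui, vi in zip(u, v)]
        a = accel(u)
        v = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
    return u, e0, energy(u, v)

u, e0, e1 = simulate()
```

By the end of the run the bump has split and traveled well away from its starting position, which is the "wiggle propagating down the string" in miniature.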
So we're going to take the right side of the wave equation in one dimension and add more variables. Where? Well, u(x,t) becomes u(x,t), where x is now a vector. The important bit is the operator d^{2}/dx^{2}, which now becomes a sum Σ_{i} d^{2}/dx_{i}^{2} which we will write as Δ. So for two dimensions we would have Δ = d^{2}/dx^{2} + d^{2}/dy^{2} Δ is called the Laplace operator. So now our wave equation looks like (d^{2}/dt^{2})u(x,t) = c^{2}Δu(x,t) And now we get to ask what its solutions look like. There's a bunch of analysis you can do to solve this as if it were a boundary-value problem. I'm not an analyst by any stretch, so I'm going to solve this like a linear algebraist. First we perform separation of variables: we pretend that we can write u(x,t) as w(x)f(t). Because why not? Now we write our equation using this separation: (d^{2}/dt^{2})u(x,t) = (d^{2}/dt^{2})w(x)f(t) = w(x)(d^{2}/dt^{2})f(t) = w(x)f''(t) c^{2}Δu(x,t) = c^{2}Δw(x)f(t) = c^{2}f(t)Δw(x) So we have that w(x)f''(t) = c^{2}f(t)Δw(x) Note that on the left side, w(x) has been left alone, and on the right side, f(t) has been left alone. Also note that (d^{2}/dt^{2}) and Δ are linear operators, since, for instance, Δa(x) + Δb(x) = Δ(a(x) + b(x)). So the clear path forward is to look for eigenfunctions. For d^{2}/dt^{2}, we have eigenfunctions of the form e^{ωt}, with eigenvalue ω^{2}. When ω is real, we get positive eigenvalues. When ω is imaginary, we get negative eigenvalues but we also get complex functions, so usually we piece these together to get trigonometric functions, sin(ωt) and cos(ωt). Similarly, we can continue our separation of variables to deal with the Laplacian, writing w(x) = v_{1}(x_{1})v_{2}(x_{2})... And then see that our solution is a bunch of exponentials e^{k1x1}e^{k2x2}... = e^{k•x} and the eigenvalue is |k|^{2}, the square magnitude of k. 
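We can sanity-check this eigenfunction picture numerically. Piecing the imaginary-ω exponentials together into sines and cosines, a typical separated solution in two space dimensions is sin(k₁x)sin(k₂y)cos(ωt), and it satisfies the wave equation when ω matches the Laplacian eigenvalue. A minimal sketch comparing both sides by finite differences (the values of c, k₁, k₂ are arbitrary choices):

```python
import math

# Arbitrary illustrative values: wave speed c and spatial wavenumbers k1, k2.
c, k1, k2 = 2.0, 1.0, 3.0
w = c * math.sqrt(k1 ** 2 + k2 ** 2)  # frequency matching the eigenvalue

def u(x, y, t):
    # A separated solution built from the trigonometric eigenfunctions.
    return math.sin(k1 * x) * math.sin(k2 * y) * math.cos(w * t)

# Compare d^2u/dt^2 with c^2 (d^2u/dx^2 + d^2u/dy^2) by central differences.
h = 1e-3
x, y, t = 0.7, 0.3, 0.5
u0 = u(x, y, t)
utt = (u(x, y, t + h) - 2 * u0 + u(x, y, t - h)) / h ** 2
lap = ((u(x + h, y, t) - 2 * u0 + u(x - h, y, t))
       + (u(x, y + h, t) - 2 * u0 + u(x, y - h, t))) / h ** 2
print(abs(utt - c ** 2 * lap))  # tiny: the two sides agree
```

The same check with a mismatched ω fails, which is exactly the dispersion constraint discussed next.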
So we have our function u(x,t) = e^{k•x}e^{ωt} = e^{k•x+ωt}, and our equation becomes ω^{2}e^{k•x+ωt} = c^{2}|k|^{2}e^{k•x+ωt} Or in other words, we want ω^{2} = c^{2}|k|^{2} What is ω? We normally call that frequency; it's the rate at which our wave goes up and down in time. What is k? Well, in one space dimension, it controls how our wave goes up and down in space, i.e. wavelength. Well, technically 1/wavelength. For multiple space dimensions k is a vector and hence 1/wavelength is certainly not the correct way to think about it, even if the units are correct, but whatever. So we have a relationship: frequency (squared) is proportional to 1 over the wavelength (squared). We usually call this proportionality constant the speed of the wave (squared), hence the presumably evocative designation of c (squared). Of course, to most mathematicians this is a rather boring take on the wave equation. What we want is not the wave equation as seen as a linear algebra problem; what we want is the wave equation as seen as a boundary-value problem, because that's where all the fun is: the constraints. Suppose that we have a drum, i.e. a piece of hide or plastic fitted to some shape so that its boundary is fixed in place but the interior can swing freely. Fixing the boundary prevents us from just setting anything as k, and indeed may prevent us from trying our separation of variables for the space dimensions, if the boundary isn't a rectangle. So the problem of finding the possible eigenfunctions of Δ becomes a lot trickier. Indeed, there is a question "Can you hear the shape of a drum?" I.e. based on the possible eigenvalues (and thus frequencies) that the drum admits, can you determine the shape of the boundary? In general the answer is no; in 1992 Gordon, Webb and Wolpert constructed two shapes that yield the same eigenvalues. 
But these were kind of awful shapes, so then the question started getting constraints put on it: if you know that the boundary is a certain type of shape, can you determine the exact shape? The answer is yes in some cases.
I keep forgetting how super nice de Rham cohomology is. Sure, it only applies to smooth manifolds, but still, it's super pretty.
Super! "In America, algebra graded: 'super'. In Soviet Russia, graded super algebra!" A lot of algebras are interesting mainly in terms of how they deviate from being commutative or associative or both. A particular kind of algebra that shows up in theoretical physics a lot is a superalgebra, which is associative and graded commutative, or supercommutative. We start with a vector space V that can be decomposed into two subspaces, V_{even} and V_{odd}, and say that elements of V_{even} are even and elements of V_{odd} are odd. We make a degree function where |u| is 0 for u even and |u| is 1 for u odd. We'll say that an element of V is homogeneous if it lives entirely in either V_{even} or V_{odd}. We then make an associative multiplication on V to get an algebra such that the product of two even vectors is even, the product of two odd vectors is even, and the product of an even and an odd vector is odd. We'll assume that we have a copy of our base field inside V_{even} just to reduce hassle. We say that this algebra, with the separation into even and odd (called a Z_{2} grading), is a superalgebra. Given a superalgebra, we define the supercommutator: for homogeneous u and v, we have [u, v] = uv - (-1)^{|u||v|}vu If u or v is even, then this is just the regular commutator. If both u and v are odd, then this looks like uv + vu, which is sometimes called the anticommutator. A supercommutative superalgebra is then an algebra where the supercommutator is 0. Even elements then commute with everything, while odd elements anticommute. So let's take a small example. Suppose our V is generated by two odd elements, called x and y. Then a basis for our vector space could be 1 and xy in V_{even} and x and y in V_{odd}. Note that we can't get any more elements, since x^{2} and y^{2} have to be 0. Our multiplication table then looks like:

 *  | 1    x    y    xy
 1  | 1    x    y    xy
 x  | x    0    xy   0
 y  | y   -xy   0    0
 xy | xy   0    0    0

Note the middle square, with that minus sign. 
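Here's a minimal Python sketch of this two-generator example (the representation is my own choice: a basis element is a tuple of strictly increasing generator indices, with x = (0,) and y = (1,)). Multiplying two basis elements sorts the combined generators, flipping the sign once per swap of odd generators, and kills anything with a repeated generator, which reproduces the table above:

```python
def wedge(a, b):
    """Multiply basis elements a, b (tuples of increasing generator indices).
    Returns (sign, product_tuple); sign 0 means the product is zero."""
    merged = list(a) + list(b)
    if len(set(merged)) < len(merged):
        return 0, ()  # a repeated odd generator squares to zero
    sign = 1
    # Bubble sort, flipping the sign once per adjacent transposition.
    for i in range(len(merged)):
        for j in range(len(merged) - 1 - i):
            if merged[j] > merged[j + 1]:
                merged[j], merged[j + 1] = merged[j + 1], merged[j]
                sign = -sign
    return sign, tuple(merged)

x, y = (0,), (1,)
print(wedge(x, y))  # (1, (0, 1)):  x * y = xy
print(wedge(y, x))  # (-1, (0, 1)): y * x = -xy, the minus sign in the table
print(wedge(x, x))  # (0, ()):      x * x = 0
```

Looping over the basis also checks supercommutativity directly: for homogeneous u and v, uv = (-1)^{|u||v|}vu, where |u| is just the number of generators mod 2.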
If you're familiar with differential forms, these form a supercommutative superalgebra via the wedge product. The cohomology of a space with field coefficients also forms a superalgebra using the cup product. We can also consider other super objects: A superring is like a superalgebra, but with the underlying structure being only a ring, not necessarily an algebra. A Lie superalgebra takes the supercommutator as its superbracket, and imposes the super Jacobi condition: (-1)^{|x||z|}[x, [y, z]] + (-1)^{|x||y|}[y, [z, x]] + (-1)^{|y||z|}[z, [x, y]] = 0 We can exponentiate this to get a supergroup. Just as a space like a manifold has a commutative algebra of smooth functions, a supermanifold has a supercommutative superalgebra of smooth functions. The simplest example is a super vector space: For a vector space, the algebra of functions is usually considered to be the polynomials in the coordinates for that vector space. For instance, if our vectors are of the form (a_{1}, a_{2},..., a_{n}) then we end up with polynomials in n variables. For a super vector space, we have two types of coordinates, even and odd, and hence we end up with polynomials in those two types of variables. A more complicated example is the tangent bundle of a manifold, with the function superalgebra being the differential forms. The idea of supersymmetry in physics is based on these objects, where we have spacetime being a supermanifold and the gauge group being a Lie supergroup. Just as in regular physics where the Lie algebra of the gauge group corresponds to bosonic degrees of freedom, in supersymmetry the Lie superalgebra of the gauge supergroup has its even part corresponding to bosonic degrees of freedom, and its odd part corresponding to fermionic. Supersupersupersupersuper
It's almost a trivial statement, but I still really like the fact that every function from a finite-dimensional vector space over a finite field to the same finite field is a multivariate polynomial function, and hence that every map from a finite-dimensional vector space over a finite field to any other finite-dimensional vector space over the same finite field can be written as a vector of multivariate polynomials. Everything's algebraic because you don't have enough room to do anything fancier.
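A quick way to see this concretely: over F_p, the polynomial 1 - (x - a)^{p-1} is 1 at x = a and 0 elsewhere (by Fermat's little theorem), so any function can be assembled as a sum of products of these indicators. A minimal Python sketch for p = 3 (the table of values is an arbitrary example):

```python
p = 3  # any prime works; 3 keeps the example small

def to_polynomial_function(table):
    """Given a function F_p^n -> F_p as a dict keyed by tuples, return the
    evaluation of the interpolating polynomial
        P(x) = sum_a table[a] * prod_i (1 - (x_i - a_i)^(p-1)),
    which agrees with the table everywhere."""
    def P(*x):
        total = 0
        for a, val in table.items():
            term = val
            for xi, ai in zip(x, a):
                # (xi - ai)^(p-1) mod p is 0 if xi == ai and 1 otherwise
                term = term * (1 - pow(xi - ai, p - 1, p)) % p
            total = (total + term) % p
        return total
    return P

# An arbitrary function F_3^2 -> F_3, given purely as a lookup table.
table = {(i, j): (2 * i + i * j + 1) % p for i in range(p) for j in range(p)}
P = to_polynomial_function(table)
print(all(P(i, j) == table[(i, j)] for i in range(p) for j in range(p)))  # True
```

The same construction, applied coordinate-by-coordinate to the output, handles maps between two vector spaces over the same finite field.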
Cayley-Klein's Coparallel Points Consider a pair of lines in the Euclidean plane. Perhaps the lines meet at a point, perhaps they don't. We say that two lines are parallel if the lines do not meet at a point. Given a point and a line that doesn't pass through the point, there is exactly one line through the point that is parallel to the first line. This is the parallel postulate, and you might have seen it broken in places. One place where it is broken is if we work on a sphere instead of a plane. Then a line is a great circle, i.e. a circle whose center is at the center of the sphere. Given such a line and a point not on the line, there are no lines through the point that are parallel to the first line. Indeed, every line intersects every other line in two points. Contrast with hyperbolic space. We can model hyperbolic space as follows: consider the disk (a filled circle) and say that a line in this space is a circle that intersects the edge of the disk at right angles, and that the edge of the disk isn't considered part of the space. Now given such a line and a point not on the line, there are infinitely many lines through the point that don't intersect the first line. So we have three options for how many parallel lines: 0, in the spherical case, 1, in the Euclidean case, and infinity, in the hyperbolic case. You've probably seen at least a bit about these spaces before. So now we do what mathematicians often like to do, and pretend that none of the words mean anything so we can rearrange them as we like. We say that two points are coparallel if the points are not joined by a line. Given a line and a point not on the line, we can ask how many points on the line are coparallel to the first point. We have three options: 0, 1, and infinity. 0 we've seen above. So how about 1? Consider movement. Suppose that you can go any finite speed you want. 
So if you start at some time t1 and some point p1, you can get to any other point p2 at some time t2 as long as t2 - t1 isn't 0. But you can't get to point p2 at time t1, because you'd have to move infinitely fast. So the points (t1, p1) and (t1, p2) are coparallel. This sounds like a not-unreasonable setup, right? This is, in a sense, the intrinsic geometry of Galileo's mechanics, and hence gets the name Galilean geometry. I'm going to call it (1, 1) geometry. In a previous post I also talked about Minkowski geometry, which in the case of only one spatial direction says that you can't go faster than the speed of light. So the points (t1, p1) and (t2, p2) are coparallel if the absolute value of (p2 - p1)/(t2 - t1) is greater than c, which we set to 1. Hence we have an infinite number of coparallel points on a given line. So Minkowski geometry is (1, ∞) geometry. So we've seen five geometries: (0, 0), which is spherical, (1, 0), which is Euclidean, (∞, 0), which is hyperbolic, (1, 1), which is Galilean, and (1, ∞), which is Minkowskian. Now suppose that we want to do a more general setup. We consider some 2-dimensional surface S sitting in 3-dimensional space, and we say that our "points" are the points of S, and our "lines" are the intersections of S with certain types of planes that pass through the origin. For example, spherical geometry is clearly given by S being the surface defined by x^{2} + y^{2} + z^{2} = 1. Euclidean takes the surface S being defined by x^{2} = 1, x > 0. Hyperbolic uses x^{2}-y^{2}-z^{2} = 1, x > 0. Galilean uses x^{2} = 1, x > 0, and then disallows planes that contain the y-axis. Minkowski uses x^{2} = 1, x > 0, and disallows planes containing any vectors (x, y, z) with |y/z| >= 1. What about a (0, 1) geometry? We use the surface S defined by x^{2} + z^{2} = 1, and then disallow planes that contain the y-axis. The intersection of the remaining planes with our surface S are now ellipses. 
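These two notions of coparallelism are simple enough to state as code; a minimal sketch, with events written as (t, p) pairs (the function names are my own labels):

```python
def coparallel_galilean(e1, e2):
    """In Galilean geometry any finite speed is allowed, so two distinct
    events are coparallel exactly when they are simultaneous."""
    (t1, p1), (t2, p2) = e1, e2
    return t1 == t2 and p1 != p2

def coparallel_minkowski(e1, e2, c=1.0):
    """In Minkowski geometry you can't exceed speed c, so two events are
    coparallel when joining them would require |p2 - p1| / |t2 - t1| > c."""
    (t1, p1), (t2, p2) = e1, e2
    return abs(p2 - p1) > c * abs(t2 - t1)

print(coparallel_galilean((0, 0), (0, 1)))   # True: can't move instantly
print(coparallel_minkowski((0, 0), (1, 2)))  # True: would need speed 2 > 1
print(coparallel_minkowski((0, 0), (2, 1)))  # False: speed 1/2 is allowed
```

In the Galilean case there is exactly one coparallel point on any line through distinct times; in the Minkowski case a whole interval of a line can be coparallel to a given point, matching the (1, 1) and (1, ∞) labels.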
Any two ellipses intersect in two points on S, defined by the ways that the corresponding planes intersect. Given a point and an ellipse that doesn't pass through the point, there is exactly one point on the ellipse such that the line between the point and the point on the ellipse is parallel to the y-axis, and hence doesn't count as a "line" in our model. So we have no parallel "lines", and one coparallel point. Appropriately, this is called co-Euclidean geometry, being dual to Euclidean geometry. What about an (∞, 1) geometry? We use the surface S defined by x^{2} - z^{2} = 1, x > 0, and disallow planes containing the y-axis. Now there are plenty of parallel lines, coming from planes whose intersection lies in a direction (x, y, z) where |z/x| >= 1. Also, given a point and a "line" not passing through the point, there is precisely one point on the "line" whose line to the first point is parallel to the y-axis, and hence not a "line". So we have lots of parallel "lines", and one coparallel point. Appropriately, this is called co-Minkowski geometry. What about a (0, ∞) geometry? We use the surface S defined by x^{2} - y^{2} + z^{2} = 1, and then disallow planes containing any vectors (x, y, z) with y^{2} >= (x^{2}+z^{2}). The intersections of these planes with S are loops (I don't recall if they're actually ellipses). Again, no parallel "lines". But because we've disallowed so many planes, we now have plenty of coparallel points. Appropriately, this is called co-hyperbolic geometry. Finally, we have our last guy, the (∞, ∞) geometry. We call this one doubly-hyperbolic geometry. We model this guy by the surface x^{2} - y^{2} + z^{2} = 1, and disallow planes that don't contain any vectors (x, y, z) with y^{2} >= x^{2} + z^{2}. Now we have lots of parallel "lines" and lots of coparallel points, because we've chosen a frankly weird set of "lines". Here we see a problem, though: each plane generates two separate lines on our surface, S. 
So let's revise our setup by "folding it in half": instead of saying that our "points" are the points of S, we'll say that our "points" are pairs of points on S, of the form (x, y, z) and (-x, -y, -z), i.e. antipodal points. Similarly, "lines" might end up being two pieces in our original model, but in our folded model they're really just one piece. This makes things a little bit easier. For instance, we can get rid of those x > 0 conditions for our surfaces. Also, now when we have a pair of "lines" that intersect, they intersect in precisely one "point", i.e. any pair of "lines" are parallel or meet at a unique "point", just as any two "points" are coparallel or are joined by a unique "line". We call these nine geometries the Cayley-Klein geometries. Because we have lines and points, we can define other geometric notions on them, like distance and angle, and you get fun things: theorems about distances in Euclidean geometry become theorems about angles in co-Euclidean geometry (there's a co-Pythagorean theorem!), and there are adaptations of the sine and cosine rules. Indeed, the spherical, Galilean, and doubly-hyperbolic geometries are their own duals, so in each of those spaces you get two theorems for the price of one. This also gives us a funny other thing going on: the difference between spherical, Euclidean and hyperbolic geometry can be considered a form of "curvature", how far from flat the surface is. More negatively curved, more parallel lines. The difference between Euclidean, Galilean, and Minkowski is what your speed limit is: none in Euclidean, finite speeds only in Galilean, and below speed 1 in Minkowski. Lower speed limit, more coparallel points. So we have that your speed limit is a dual to curvature???
The notes I'm reading don't go into any category-theoretic stuff. They just mention some things that don't work in positive characteristic or when you're far away from algebraically closed. Do operads make it more unified or just more abstract?
They let you generalize, and there's usually more than one valid generalization in positive characteristic
So this is fun: A hexagonal tiling of the sphere Notably, it's provable that you can't actually tile a sphere entirely using only hexagons. Yet if you go to the linked site and click and drag the image around, you'll see that it's completely covered in hexagons. So what's happening here? (Don't spoil it for others if you know/figure it out) I might explain later, if I remember to.
I have a guess but googling suggests that this guess was wrong? Spoiler I thought it was just a grid in fish-eye like the game Tetrisphere but apparently there are pentagons hiding somewhere in there?
Quadratic Reciprocity This is basically the only real number theory that I know that isn't Galois theory, so here we go. Consider the integers modulo 7. We have the usual rules of addition and multiplication, with the extra rule of 7 = 0. So we really only have to worry about the numbers 0, 1, 2, 3, 4, 5, and 6. In general, 0 is boring, so we'll focus on nonzero numbers. What are the (nonzero) squares modulo 7?
1^{2} = 1
2^{2} = 4
3^{2} = 9 = 2
4^{2} = 16 = 2
5^{2} = 25 = 4
6^{2} = 36 = 1
So the squares are 1, 2 and 4. So we see that 3 is not a square modulo 7, and 5 is not a square modulo 7, but 11 (being 4 modulo 7) is a square modulo 7, while 13 (being 6 modulo 7) is not. In fact, let's just generate a bit of data:
3 is a square modulo 11 and 13, but not 5 or 7
5 is a square modulo 11, but not 3, 7 or 13
7 is a square modulo 3, but not 5, 11 or 13
11 is a square modulo 5 and 7, but not 3 or 13
13 is a square modulo 3, but not 5, 7 or 11.
So we want to ask: given two odd prime numbers p and q, is there a relationship between whether p is a square modulo q and whether q is a square modulo p? The answer, due to Gauss, is yes! The law of Quadratic Reciprocity states that if either p or q is 1 modulo 4, then p is a square modulo q if and only if q is a square modulo p, while if both p and q are 3 modulo 4, then if p is a square modulo q then q is not a square modulo p, and vice-versa. There are like a million different ways to prove this; Gauss alone came up with eight. There are a few other cases of interest. We have that 2 is a square modulo p if and only if p is 1 or 7 modulo 8, and that p-1 is a square modulo p if and only if p is 1 modulo 4. You should test all of these out on some small cases. Now suppose you have two squares modulo p, a^{2} and b^{2}. Then we get that (ab)^{2} = a^{2}b^{2}, so the product of two squares is a square. If you have a square and a nonsquare, then their product will be a nonsquare. And also, if you have two nonsquares, their product will be a square! 
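All of these statements are easy to machine-check for small primes using Euler's criterion (the standard fact, not derived above, that a^{(p-1)/2} mod p is 1 for squares and p-1 for nonsquares); a sketch in Python:

```python
def is_square_mod(a, p):
    """Euler's criterion: for odd prime p and a not divisible by p,
    a^((p-1)/2) mod p equals 1 exactly when a is a square modulo p."""
    return pow(a, (p - 1) // 2, p) == 1

# Quadratic Reciprocity on the primes from the data above:
for p in [3, 5, 7, 11, 13]:
    for q in [3, 5, 7, 11, 13]:
        if p != q:
            if p % 4 == 1 or q % 4 == 1:
                assert is_square_mod(p, q) == is_square_mod(q, p)
            else:  # both 3 mod 4: exactly one direction gives a square
                assert is_square_mod(p, q) != is_square_mod(q, p)

# Product of two nonsquares modulo 7 (the nonsquares are 3, 5, 6):
print(is_square_mod(3 * 5, 7), is_square_mod(3 * 6, 7))  # True True
```

Bumping the prime list up to everything below a few thousand makes for a satisfying, if unrigorous, confirmation.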
This is special to modulo p, and is not true in the integers or the rational numbers. So we define the Legendre symbols, which I'm going to write as f(q, p), to be 1 if q is a square modulo p, and -1 if q is not a square modulo p.* So we get that if either p or q is 1 modulo 4, then f(p, q)f(q, p) = 1, and if both p and q are 3 modulo 4, then f(p, q)f(q, p) = -1. Of note is that if p is 1 modulo 4, then (p - 1)/2 is even, and if p is 3 modulo 4, then (p - 1)/2 is odd. So we get that if either of p or q is 1 modulo 4, then (p - 1)(q - 1)/4 is even, and if both are 3 modulo 4, then (p - 1)(q - 1)/4 is odd. So (-1)^{(p - 1)(q - 1)/4} is 1 if either of p or q is 1 modulo 4 and is -1 if both are 3 modulo 4. Hence we can write the law of Quadratic Reciprocity as f(p, q)f(q, p) = (-1)^{(p - 1)(q - 1)/4}. Further stuff: Note that while we require p to be prime, we don't actually need q to be prime. Indeed, we can simply say that f(a, p) is 1 if a is a (nonzero) square modulo p, and -1 if a is not a square modulo p; just for completeness, we define f(a, p) to be 0 if a is a multiple of p. The Legendre symbols work nicely with multiplication: f(a, p)f(b, p) = f(ab, p). This gives us a way to talk about squares modulo large primes. Suppose that p is really big, and we want to know if a is a square modulo p. We could just list out all of the squares modulo p and see if a is on the list, but that could take a while. Instead, we first divide out any square factors of a, since they don't change the fact of a being a square or not. Then we factor the remains of a into primes a_{1}, a_{2}, ..., a_{k}, and compute f(a_{i}, p). But f(a_{i}, p) is still asking whether something is a square modulo a big prime. So instead, we compute f(p, a_{i}). Since a_{i} is a factor of a, it is a prime smaller than p, so f(p, a_{i}) is asking whether something is a square modulo a smaller prime. Then we use Quadratic Reciprocity to relate f(a_{i}, p) to f(p, a_{i}). 
If a_{i} is still too big to list the squares for, we do this again. Write b = p mod a_{i}. Then divide out all of the square factors of b, and factor the remains to get b_{1}, ..., b_{l}, and compute f(b_{j}, a_{i}) by instead computing f(a_{i}, b_{j}) and using Quadratic Reciprocity. Since b_{j} is a factor of b, it is smaller than a_{i}. And so on. So that's Quadratic Reciprocity. There are reciprocity laws for higher powers, but they are somewhat more complicated, and the search for these laws led to a lot of the early development of general algebraic number fields. * The Legendre symbol is usually written like a fraction inside parentheses, as if p and q were numerator and denominator. While I understand the utility of simple, concise notation, it also really bothers me, so I'm not using it outside of this footnote.
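The descent described above can be turned into a short program. This is a sketch of the idea under the simplifying assumption that we fully factor at each step by trial division (fine for small numbers; serious implementations use the Jacobi symbol precisely to avoid factoring):

```python
def factor(n):
    """Prime factors of n with multiplicity, by trial division."""
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def legendre(a, p):
    """f(a, p) for odd prime p, computed by the descent described above."""
    a %= p
    if a == 0:
        return 0
    result = 1
    for q in factor(a):  # f is multiplicative in its first argument
        result *= prime_case(q, p)
    return result

def prime_case(q, p):
    """f(q, p) for a prime q < p, flipping via Quadratic Reciprocity."""
    if q == 2:  # the supplementary law for 2
        return 1 if p % 8 in (1, 7) else -1
    sign = -1 if (p % 4 == 3 and q % 4 == 3) else 1
    return sign * legendre(p, q)  # now the modulus is the smaller prime

print(legendre(5, 7), legendre(11, 5))  # -1 1, matching the data above
```

Each recursive call strictly shrinks the modulus, so the descent terminates quickly even when p is large.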
TIL that Europe stopped using infinitesimals in 1632 not because of any particular foundational or logical issues, but because the Jesuit clergy banned them. Source: Principle of Least Action - History and Physics by Rojo, A and Bloch, A.
I don't actually have access to the book itself, but I'm guessing the same reason that Cantor got a lot of flak: something about how talking about infinity is encroaching upon God.
Every once in a while I think about this song. Two thoughts: 1: The real test is not how many of the references you get, but how many of them you can explain. 2: The best joke is that they're the Klein Four Group, but there are five of them.