Math(s)! Currently: Summer of Math Exposition

Discussion in 'General Chatter' started by Exohedron, Mar 14, 2015.

  1. Exohedron

    Exohedron Doesn't like words

    The Riemann sphere is so great, and shows up all over the place. One great place is in Penrose's twistor program, where the basic insight is that the set of light rays through a given point is a 2-sphere, and thus can be viewed as a Riemann sphere. Based on this, a lot of physical law ends up reducing to "Nature is holomorphic".
     
  2. evilas

    evilas Sure, I'll put a custom title here

    I have never heard of this before and am now very very interested. *looks up twistors on Wikipedia*
     
  3. Exohedron

    Exohedron Doesn't like words

    My favorite thing about twistors is that they're blatantly nonlocal and there's really no way to make them local. Instead of the basic geometric objects being spacetime points, the basic geometric object for twistor theory is the lightlike trajectory.
     
  4. evilas

    evilas Sure, I'll put a custom title here

    How are they nonlocal? Could you explain? Does it treat all events at 0 proper distance from each other as equal or something?

    (Also, I, umm, started doing some Googling and it seems like a lot of that theory and what he says is related to "consciousness"? Which sounds a lot like quantum mysticism, unless maybe I'm not getting the full picture.)
     
  5. Exohedron

    Exohedron Doesn't like words

    Penrose is a good physicist who has unfortunately wandered off the deep end with regards to cognition. Which is sad, because he did a lot of really good work in relativity and his twistor program is brilliant even if it turns out to be completely wrong.

    Anyway, nonlocality means that events that are spacelike separated can still influence each other. That was the thing that Einstein objected to in quantum mechanics: spooky action at a distance.
    In the twistor setup, the notion of "spacelike" is kind of contrived, because to a ray of light, "spacelike" and "timelike" aren't meaningful.
     
  6. TheSeer

    TheSeer 37 Bright Visionary Crushes The Doubtful

    I know why you said it that way - my professors put it pretty much the same way - but it's one of the most misleading sentences in physics.

    The kind of nonlocality that's been actually shown to exist is missing most of the features of causality. It's a bit like the puzzle with the hats from the brainteaser thread - one thing isn't causing the other, there are "just" some interesting statistical correlations due to their having a common cause in their (timelike) past. Scarequotes because the physics is actually very cool, despite not living up to the "wow! quantum entanglement is faster than light!" hype.

    Hey! Should I do a post on Bell's Theorem?
     
  7. WithAnH

    WithAnH Space nerd

    Do eeeeeeeeeet!
     
  8. evilas

    evilas Sure, I'll put a custom title here

    YES do it!
    ...hmmm... another thing I always found fascinating: The delayed-choice quantum eraser. Which, I mean, you could technically say it's the same thing only changing space for time. (Does that mean that a description of one should be able to be Lorentz-transformed into the other?)
     
  9. Emu

    Emu :D

    Oh hey I see I was summoned the other day to talk about finite fields and p-adic fields. I'll see if I can come up with something for that soon - probably at least a post about finite fields, I could talk about how you build Z/pZ and why it's a field, and then talk a bit about what people do with finite fields and with projective spaces over them.

    That might also lead naturally into a post about elliptic curves... I was talking about that with @evilas the other day, actually. Could give a vague description of what the Birch and Swinnerton-Dyer conjecture is and why it's interesting (and/or some of the other big conjectures about elliptic curves), perhaps.
     
  10. Exohedron

    Exohedron Doesn't like words

    Yes on Bell's theorem.

    And I tend to forget influence versus correlation because I am more naturally a relativist than a quantum person. So good catch.
     
  11. Exohedron

    Exohedron Doesn't like words

    Hmm. A post about the more accessible Millennium Problems might be a decent idea. I have no idea how to make, say, the Hodge conjecture or the Yang-Mills Mass Gap accessible, but the standard statement of Navier-Stokes isn't too bad, nor is P v NP. I'll leave the Riemann Hypothesis and Birch-Swinnerton-Dyer to our number theorist, though.
     
  12. Emu

    Emu :D

    Yeah, I don't really know as much Hodge theory as I'd like but I don't really see how to say anything about the Hodge conjecture in a down-to-earth way. BSD is already pushing it but I think I could at least get across an idea of what's going on. RH shouldn't be too bad to talk about either.

    And don't forget about the Poincare conjecture just because it's solved! That one also seems reasonable to explain, at least by analogy to the 2-dimensional situation.
     
  13. evilas

    evilas Sure, I'll put a custom title here

    Whoever explains that should definitely try to explain why 4 dimensions were particularly hard to solve.
    ...
    ... @Exohedron , why do I have the feeling that it's intricately linked to quaternions?

    (Seriously, it's almost like there were something special about 3 and 4 dimensions that.... oh. Right.)

    (No but really, I'm still wondering why quaternions haven't been the answer to a Quantum-Relativistic Theory of Everything. 3+1 dimensions is almost too perfect of a coincidence to not be a good answer.)
     
  14. Emu

    Emu :D

    I can answer that one, sort of: 1 and 2 dimensions don't have enough room for much interesting to happen, and dimensions 5 and up have so much room that you can move things around freely enough to solve the problem (at least, if you're Stephen Smale). That leaves the hard cases being dimension 3 (the one solved by Perelman about 10 years ago) and dimension 4 (which was solved by Freedman in the 80's). Studying 3-dimensional and 4-dimensional manifolds is one of the big areas of topology these days, since those are the sweet spot where interesting stuff happens - if you go smaller everything's already been done for a hundred years, and if you go bigger then there's enough flexibility that what you're trying to solve is probably either fairly straightforward or intractably hard.

    Remark: Dimension here is the dimension of the sphere itself, which is one less than the dimension of the space it's sitting in. So to calibrate, the usual sphere you think of (in 3-dimensional space, consisting of the points exactly distance 1 from the origin - not the entire solid ball inside!) is the 2-dimensional sphere - if you're walking on the surface of the sphere you have 2 independent directions you can go. The hardest case of the Poincare conjecture was the 3-sphere, which is something sitting in 4-dimensional space, so it's hard to visualize.

    Remark 2: I didn't actually say what the Poincare conjecture was, and well I don't want to make a detailed post now, but basically the n-dimensional Poincare conjecture says: Any n-dimensional manifold which isn't obviously not a sphere, actually is a sphere (topologically, which means if I give you this n-dimensional thing and you're allowed to continuously deform it as much as you want, you can turn it into a sphere). For instance the surface of a cube is topologically a sphere, since you can imagine pushing in the corners and massaging it into a sphere shape without doing any cutting or gluing. But the surface of a donut is obviously not a sphere, because it has a hole in the middle that you can loop something around and a sphere doesn't have that (and the sort of deforming we're talking about will always keep the hole there).
     
  15. Exohedron

    Exohedron Doesn't like words

    The reason that quaternions haven't been the answer to Everything is that while the quaternions are nice algebraically, they're too rigid to allow for interesting differential stuff to happen: any quaternionic-holomorphic function, for the naive notion of holomorphic, must be linear. But we definitely do not live in a linear world.
    So any theory that uses quaternions to try to talk about our 3+1 dimensional space has to use a weaker notion of holomorphic, but there are a bunch of different weakenings and so it's hard to know which one to use if any.

    EDIT: Also, even without the calculus question, the quaternions simply have the wrong kinds of symmetries. Just as the complex numbers don't make a good model of the Minkowski plane because they're not Lorentz-invariant, neither are the quaternions. The natural symmetry group of the quaternions is SO(4), not SO(3, 1). You'd have to use some form of split-quaternions to get the right kind of behavior.
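
    To see the mismatch at a glance, compare the two quadratic forms (a worked aside of mine, not from the post above):

    ```latex
    % Quaternion norm: positive definite, preserved by (a form of) SO(4).
    % Minkowski form: signature (3,1), preserved by SO(3,1).
    \[
      |a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}|^2 = a^2 + b^2 + c^2 + d^2
      \qquad \text{vs.} \qquad
      t^2 - x^2 - y^2 - z^2 .
    \]
    ```

    The quaternion norm is positive definite, so the symmetries that preserve it are Euclidean-style rotations; Minkowski space needs the indefinite form on the right.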
     
    Last edited: Aug 25, 2016
  16. Exohedron

    Exohedron Doesn't like words

    The Millennium Problems

    At the turn of the century, the Clay Math Institute took a look at the current state of mathematics and, inspired by Hilbert's 23 problems for the 20th century, put together a list of seven problems that it thought should be guiding lights for mathematical research in the 21st century.

    They are
    -The Riemann Hypothesis
    -The Yang-Mills Mass Gap
    -The Navier-Stokes blow-up problem
    -P v NP
    -The Hodge Conjecture
    -The Birch-Swinnerton-Dyer conjecture
    -The Poincare Conjecture

    A 1,000,000 USD prize was attached to each of these problems, to be awarded to the first person or persons to submit a full solution that the Clay Math Institute's reviewers deem complete and correct.
    So far, one of them has been solved: the Poincare Conjecture is now Perelman's theorem, as of 2006. That leaves six.

    We'll talk about these in whatever order seems to work. Which means that the Hodge conjecture and the Yang-Mills Mass Gap might never get talked about, because those require a lot of background to even state.
     
    Last edited: Aug 27, 2016
    • Like x 1
  17. Exohedron

    Exohedron Doesn't like words

    P v NP

    P v NP is, in the usual statement, about how difficult it is to solve types of problems. In particular, how the difficulty of solving increases as the problem gets bigger, versus how the difficulty of checking a given solution increases as the problem gets bigger.

    Firstly, since a lot of people misunderstand this: to write down an n-digit number requires n digits. That sounds really obvious when said like that, but it means that it doesn't take any more digits to write down 9 million than it does to write down 1 million. When we talk about the "size of the input" to a computer program here, we're not talking about the actual value; we're concerned with how many digits (binary digits in particular, i.e. bits) it takes to write down the input. So a number that is of size n has value roughly 2^n; this is very rough, but the point is that "size" grows significantly slower than "value". 111111 and 100000 have the same size, but interpreted as binary, the value of 111111 is almost twice that of 100000.
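
    If you want to poke at the size-versus-value distinction yourself, here's a tiny Python illustration (Python's built-in int.bit_length gives the "size" in exactly this sense):

    ```python
    # "Size" is the number of bits it takes to write the number down;
    # "value" is the number itself.
    for value in [1_000_000, 9_000_000]:
        print(value, "takes", value.bit_length(), "bits")
    # 1000000 takes 20 bits
    # 9000000 takes 24 bits

    # Adding a single bit of size roughly doubles the value:
    print(1 << 20, 1 << 21)  # 1048576 2097152
    ```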

    Okay, now let's talk about multiplication. Suppose we have two n-bit numbers. To multiply these two numbers together is pretty straightforward: for each bit in one number, you multiply it by each bit of the other number, and then add together the appropriate bit-by-bit products and carry where appropriate. So you need to do roughly n^2 multiplications, and roughly n^2 additions of those products, and maybe 2n carries or something. The point is that as n increases, the number of basic arithmetic operations you need to do doesn't blow up wildly.
    In particular, if we write M(n) as the maximum number of arithmetic operations you need to do to multiply two n-bit numbers, then there is some polynomial A(n) such that for large enough n, A(n) > M(n). In this case we have two "roughly n^2" sets of things we need to do and a "roughly 2n" set of things we need to do, so we're probably looking at no more than 2n^2 + 2n arithmetic operations in total. So we can take A(n) to be, say, 3n^2, and once n gets bigger than 2 we're set.
    We say that multiplication can be done in "polynomial time", i.e. there is some algorithm and some polynomial in the size of the input that grows faster than the worst-case-scenario number of steps it would take the algorithm to multiply two numbers of size n. We often shorten the statement to saying that multiplication is "in P".
    Of course, you can always use an algorithm that wastes a lot of time doing unnecessary things, but we only look at the worst-case scenario for each possible algorithm and then look for the most efficient of those, and then see how that is affected by the size of the input.
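
    To make the "roughly n^2" bookkeeping concrete, here's a minimal Python sketch of the schoolbook method with an operation counter bolted on. The bit-list representation and the counting convention are my own choices for illustration, not anything standard:

    ```python
    def schoolbook_multiply(a_bits, b_bits):
        """Multiply two numbers given as bit lists (least significant bit
        first), counting bit-level arithmetic operations along the way."""
        ops = 0
        result = [0] * (len(a_bits) + len(b_bits))
        for i, a in enumerate(a_bits):
            carry = 0
            for j, b in enumerate(b_bits):
                total = result[i + j] + a * b + carry  # one multiply, two adds
                ops += 3
                result[i + j] = total % 2
                carry = total // 2
            result[i + len(b_bits)] += carry  # leftover carry for this row
        return result, ops

    product_bits, ops = schoolbook_multiply([1, 1], [1, 1])  # 3 * 3
    print(sum(bit << k for k, bit in enumerate(product_bits)), ops)  # 9 12
    ```

    For two n-bit inputs the counter comes out to 3n^2, which is exactly the kind of polynomial growth discussed above.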

    Theoretically we like polynomial time problems; they're "easy". Of course, the polynomial could be really bad, it could have a very high degree, like n^(a million), or there could be a really big coefficient out front, like 2^(a million)·n, so that "large enough" is something ridiculous. But in theory polynomial time problems don't grow awful.


    Now let's consider factorization: we have an n-bit number and we want to factor it into two smaller numbers. Or more specifically, we are told that the n-bit number is the product of two particular smaller numbers, and we want to see if this is true. The straightforward way to do this is to take the two smaller numbers, multiply them together, and see if that matches the n-bit number. This checking of whether we got the right answer occurs in polynomial time, because it takes polynomial time to multiply the two smaller numbers, and then maybe checking for equality has to occur bit-by-bit, so that's roughly n more steps. So we can still bound the number of steps to check the proposed answer by some polynomial.
    So we say that factorization is "in NP", where NP stands for "nondeterministic polynomial", because we can check whether an answer is correct in polynomial time.*
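
    As a minimal sketch of what "checking in polynomial time" looks like here (the function name is mine, made up for illustration):

    ```python
    def check_factorization(n, p, q):
        """Verify a proposed answer: is n really the product of the two
        nontrivial factors p and q? This costs one multiplication plus
        comparisons, so polynomially many bit operations in the size of n."""
        return 1 < p < n and 1 < q < n and p * q == n

    print(check_factorization(15, 3, 5))  # True
    print(check_factorization(15, 2, 7))  # False: 2 * 7 is 14, not 15
    ```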

    But now let's look at actually factoring an n-bit number. The straightforward way would be to just check the numbers smaller than the n-bit number (being slightly clever, only up to its square root), but such a brute-force method would, in the worst case, make us check (square root of the value of the number) cases. Recalling that the value of the number is roughly 2^n, that means we need to check 2^(n/2) cases, and that definitely grows faster than any polynomial; there is no polynomial function A(n) such that for all large enough n, A(n) > 2^(n/2).
    So the brute-force way of factoring is not polynomial time. But maybe there is a polynomial time algorithm for factoring!
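
    For contrast with the checker above, here's what the brute-force search looks like; the loop bound is what makes it exponential in the size of the input:

    ```python
    def brute_force_factor(n):
        """Trial division up to sqrt(n). If n has b bits, this loop can run
        about 2^(b/2) times in the worst case, e.g. when n is the product of
        two primes of similar size - exponential in b."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return None  # no factor found: n is prime (or 1)

    print(brute_force_factor(101 * 103))  # (101, 103), found the slow way
    ```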

    So the P v NP question is as follows: suppose that we know that a problem is in NP, i.e. that answers can be checked in polynomial time. Does that mean that the problem is in P, i.e. that answers can be found in polynomial time? Or are there problems that are qualitatively easier to check answers to than to solve?
    Note that the polynomial bound on how long it takes to check an answer is not necessarily the polynomial bound on how long it takes to find an answer; indeed, if you know how to find an answer you can just run that program and compare the results to the supposed answer to be checked.

    There are a number of problems that are known to be NP-hard, which means that if there is some way to solve those problems in polynomial time, then all NP problems can be solved in polynomial time. Basically, any problem in NP can be translated into an NP-hard problem, with the translation itself only taking polynomial time, so a fast solver for the NP-hard problem gives you a fast solver for every NP problem. Note that an NP-hard problem isn't necessarily in NP. There are problems that are much harder than NP, where even checking the answer gets really difficult.

    Fortunately, there are a number of problems that are known to be NP-complete, i.e. are NP-hard and are known to be in NP**. The most famous one is the travelling salesman problem: a salesman wants to travel between n cities by the shortest route, visiting each city exactly once. It's certainly easy to check whether a proposed route is shorter than some given length (there's a quick sketch of such a checker below), but it's hard to find routes shorter than a given length unless the given length is quite large.
    The set of known NP-complete problems is actually quite interesting in itself. For instance, we get NP-complete problems by trying to do specific things in generalizations of many commonly known games like Bejeweled and Minesweeper and Pokemon. There's also the knapsack problem, i.e. stuffing as many things of varying size as possible into a finite-size knapsack.
    Here is Wikipedia's list.
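
    To make the "easy to check" half of the travelling salesman story concrete, here's a minimal Python sketch; the distance matrix and the function name are made up for illustration:

    ```python
    def route_is_short_enough(dist, route, bound):
        """The checking side of travelling salesman: given a proposed tour
        (a list of city indices) and a distance matrix, summing the n edge
        lengths and comparing to the bound is fast. Finding a good tour is
        the part nobody knows how to do in polynomial time."""
        total = sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))
        total += dist[route[-1]][route[0]]  # close the loop back to the start
        return total <= bound

    dist = [[0, 2, 9, 4],   # made-up symmetric distances between 4 cities
            [2, 0, 6, 3],
            [9, 6, 0, 5],
            [4, 3, 5, 0]]
    print(route_is_short_enough(dist, [0, 1, 2, 3], 18))  # 2+6+5+4 = 17: True
    print(route_is_short_enough(dist, [0, 2, 1, 3], 18))  # 9+6+3+4 = 22: False
    ```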

    I think that most people who have an informed opinion on P v NP think that there are problems in NP that are not in P. There is enough of a gap between the problems known to be in P and the problems known to be NP-complete in terms of style and focus and difficulty that it would be very strange if NP were equal to P. But nobody has proven it yet. There are a lot of very promising failed attempts at proving that NP is distinct from P, and I'm sure a lot of promising attempts at showing that they are the same although I've paid less attention to those.
    Our current encryption systems depend on certain NP problems not being in P. For instance, the RSA algorithm depends on factoring being hard. And so far factoring appears to be hard.
    But we also for a long time thought that testing whether a number is prime is hard, because the simple way to test if a number is prime is to try to factor it. However, it turns out that there are polynomial-time algorithms for checking whether a number is prime (the AKS primality test, discovered in 2002, is one) that completely bypass the need to try factoring it! So it is not completely obvious that NP is distinct from P.


    The question of P v NP is part of a field called "computational complexity", which basically asks "how does the amount of resources necessary to solve a type of problem increase as the complexity of the problem increases?" Sometimes those resources aren't time (number of operations needed) but memory, how much the computer needs to store while doing the computation; an algorithm that requires less than some polynomial amount of memory lives in PSPACE. Sometimes you're allowed to involve randomness in your solving algorithm or in the problem; that gives another class of problem.
    Sometimes you allow the difficulty to grow exponentially, but not significantly worse than that. That gets you into EXPTIME and EXPSPACE. And so on. There's a lot of different levels and types of difficulty, but P v NP is the problem that got the most attention because it's kind of at the bottom of the hierarchy and also has some consequences.


    *Technically, there are two relevant classes: NP in which you can check whether an answer to a question is "yes" in polynomial time, e.g. "is there an example of this thing?" and co-NP in which you can check whether an answer to a question is "no" in polynomial time, e.g. "are there no examples of this thing?" The current state is that we believe that co-NP and NP are not equal, but if NP = P then co-NP also equals P and vice-versa.

    **Factorization of integers is not one of these problems. It's in NP, but it's suspected to not be NP-complete. It's known to be in co-NP, and any problem in both NP and co-NP can't be NP-complete or co-NP-complete if NP is not equal to co-NP, as is currently believed.
     
    Last edited: Aug 27, 2016
    • Like x 1
  18. Exohedron

    Exohedron Doesn't like words

    Navier Stokes

    The Navier-Stokes Existence and Smoothness problem is quite far away from my fields of expertise, but that's only stopped me a few times before, so let's do this.

    A lot of laws of physics are stated in the following fashion: "if you know what is happening now, you can figure out what will happen in the future". Of particular interest is the notion of a differential equation. Differential equations relate quantities to how those quantities change in time.
    For instance, suppose you have a ball on a metal spring. We can talk about the position of the ball at a given moment, and we can talk about how stretched the spring is based on that position. Springs exert force depending on how stretched they are, so we get that the force exerted by the spring depends on the position of the ball. Then Newton's second law says that the acceleration of the ball, how the ball's velocity changes in time, is proportional to the force exerted by the spring.
    So our equation now says that "the current change in (the change in the ball's position over time) over time is proportional to the ball's current position". This might be written as d^2x/dt^2 = -kx, where the minus sign reflects the fact that the spring pulls the ball back toward its rest position. Then you get to try to solve for x as a function of t, supposing that you're given the value of k and some data on what the ball is doing at the present moment.
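
    As a toy illustration of "know the present, step into the future" (a sketch of mine, with k = 1 and made-up starting data), you can solve the spring equation numerically by taking lots of tiny steps:

    ```python
    import math

    k, dt = 1.0, 0.001      # spring constant and time step (arbitrary choices)
    x, v = 1.0, 0.0         # present data: position 1, at rest

    for _ in range(int(2 * math.pi / dt)):  # march forward one full period
        a = -k * x          # acceleration from the current position
        x += v * dt         # new position from the current velocity
        v += a * dt         # new velocity from the current acceleration

    print(x)  # close to 1.0: the exact solution is x(t) = cos(t),
              # which comes back to its start after time 2*pi
    ```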

    The Navier-Stokes equation is a differential equation describing the flow of fluids, which covers things like water and oil but also things like air. What kinds of fluids? Let's consider water. We can exert pressure and stress on water, gravity acts on it, and maybe other forces too; the water also has a density and a viscosity. The Navier-Stokes equation uses those parameters the way that our spring equation had the parameter k, so that we can ask "given the values of these parameters, what's the solution?". Some of those parameters are single numbers; some of them are functions of position and time.
    Unlike the position of a ball, you can't describe the position of a body of water by a few numbers. Instead, what you want to describe is how the water at a given position is moving at a given moment. In other words, you want a function that takes in a position and a time and spits out a velocity. We call this function the flow field. In addition, we also want a function that describes the pressure of the fluid at each point; this is called the scalar pressure field.
    The Navier-Stokes equation is then a differential equation relating the flow field and the scalar pressure field to their various derivatives, based on various characteristics of the fluid; one standard way of writing it is shown below. A solution then consists of a flow field and a scalar pressure field that satisfy the differential equation with the given parameters.
    As you might expect, an equation that can model the behavior of water and air and liquids and gases is very useful in things like meteorology or ocean currents, but also in things like aerodynamics and blood flow and anything where liquid or gas moves around. Powerful computers are currently in use to find numerical solutions or approximations to solutions for the Navier-Stokes equation for all of these uses.
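
    For reference, here's one common way to write the incompressible version of the equation, the case the Clay problem treats (symbol conventions vary between books; here u is the flow field, p the scalar pressure field, ρ the density, ν the viscosity, and f an external force such as gravity):

    ```latex
    \[
      \frac{\partial u}{\partial t} + (u \cdot \nabla)\,u
        = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^2 u + f,
      \qquad
      \nabla \cdot u = 0 .
    \]
    ```

    The first equation is Newton's second law applied to a little parcel of fluid; the second says the fluid is incompressible.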

    Mathematicians, however, aren't always simply trying to find solutions. Sometimes solutions are too hard to find. Sometimes individual solutions are kind of boring.
    Thus sometimes mathematicians want to find out what properties all solutions to a given type of differential equation must have in common. Are all the solutions nice, for various notions of "nice"? Is there a unique solution? Are there any solutions at all?

    So the Navier-Stokes existence problem asks "does each set of parameters and initial values give a differential equation that has a solution?" This is currently not known.
    The Navier-Stokes smoothness problem asks "does each set of smooth parameters and initial values give a differential equation that has a smooth solution?" i.e. given parameters where the functions are all differentiable as many times as you like, is there a solution that is also differentiable as many times as you like? This is currently not known.

    We'd like both to be true. We would like there to be solutions because if there aren't, then what is nature doing? If we can set up the parameters to give a situation with no solution, then what should we expect to happen?
    We would also definitely like the solutions to be smooth. We would be unhappy to learn that our currently-reasonable liquid or gas could just blow up without any chemistry or atomic physics going on, just due to things like constant pressure. We expect some resistance, sure, but we don't expect explosions, and we'd like not to have to expect explosions, at least not in a finite amount of time. Things are allowed to go to infinity as time goes to infinity, but not before then.


    Note: the N-S problem as stated by the Clay Math Institute is actually somewhat more restricted than the description I gave. For instance, the fluid is assumed to be incompressible. Also there are two situations being considered. 1): where the initial values for the flow field and scalar pressure field are periodic; 2): where the initial values drop off at some prescribed rate as you head to infinity.
    Also, the N-S equation by itself doesn't actually specify the number of dimensions, just that there is one time dimension and at least one space dimension. The Clay Math Institute assumes that there are three space dimensions.

    The N-S existence and smoothness question is "yes to both" if there are only two space dimensions.
    Other partial results are that the answers are yes if the initial values for the flow field are sufficiently small, or if we only look at a short enough time period.
    On the other hand, Terence Tao showed that an averaged version of the N-S equation has solutions that do blow up in finite time, and claims that his method could help find solutions for the original N-S equation that blow up.

    Beyond that, I have no idea what the general consensus amongst knowledgeable mathematicians is. If anyone here does partial differential equations or fluid dynamics and would like to chime in, feel free. This is the most obviously real-world applicable of the Millennium Problems, so of course I know very little about it.
     
    Last edited: Aug 30, 2016
    • Like x 2
  19. Exohedron

    Exohedron Doesn't like words

    The Poincare Conjecture

    Suppose you have a 2-sphere, like the surface of the Earth. You can wrap a rubber band around the sphere, and then shrink the band to a point without lifting it off the sphere.
    However, if you have, say, the surface of a donut, then if the rubber band goes through the hole in the donut then there's no way to shrink it to a point without moving it off the surface.

    So we say that the sphere is "simply connected" while the donut is not. Another way to think about being simply connected is that if you pick two points on the sphere, then any two paths between the two points can be deformed into each other in a continuous fashion, by making a bunch of small deformations and detours to the paths. In contrast, a path that goes along the outside of the donut and one that goes through the hole can't be deformed into each other without doing something drastic.

    Another example of a non-simply connected surface is the plane with a puncture in it, i.e. the plane minus a point. A loop that goes around the puncture can't be shrunk to a point without passing through the puncture and thus leaving the surface.

    Things other than surfaces can be simply connected or not simply connected. Three-dimensional Euclidean space is simply connected. Three-dimensional Euclidean space minus a point is also simply connected, because you can just move your loop to avoid the puncture.

    We also say that a space X that is simply connected has "trivial first homotopy group", i.e. π1(X) = 0. Never mind what that means, exactly. The important bit there is "first".

    We can also consider not just loops but higher-dimensional spheres.
    For instance, for Euclidean 3-space, we can take any 2-dimensional sphere and shrink it to a point, but if we had a punctured Euclidean 3-space, then any sphere enclosing the puncture can't be shrunk without having to pass through the puncture. π2(R^3) = 0, but π2(R^3 \ {0}) is not 0.

    The Poincare Conjecture asks how strong the condition of having trivial homotopy groups is.

    There are a few more concepts we need: homeomorphism, manifolds, and compactness.

    Homeomorphism is our notion of equivalence; a map between two topological objects is a homeomorphism if it is invertible, and both the map and its inverse are continuous, i.e. sending "nearby" points to "nearby" points. Only "nearby" isn't really a distance-based notion here.
    An open interval is homeomorphic to the entire real line; hopefully you can imagine stretching the interval out to infinity in both directions, and conversely shrinking the entire line to fit in the interval. Both are continuous operations; no cutting, no gluing, no holes. In contrast, a pair of intervals is not homeomorphic to a single interval, because there's a disconnect in the pair that doesn't exist in the single interval, so the map from the single interval to the pair isn't continuous.
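
    For the skeptical, here's one explicit homeomorphism between the open interval and the whole line, written out (any continuous bijection with continuous inverse would do just as well):

    ```latex
    % Stretching (0,1) over the entire real line, and squeezing it back.
    \[
      f : (0,1) \to \mathbb{R}, \qquad
      f(x) = \tan\!\Big(\pi x - \frac{\pi}{2}\Big),
      \qquad
      f^{-1}(y) = \frac{1}{\pi}\Big(\arctan(y) + \frac{\pi}{2}\Big).
    \]
    ```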

    A manifold is an object that looks locally like R^n for some n. It's easier to illustrate than explain: an idealized letter X, where the strokes are infinitely thin, isn't a manifold because while it looks like R^1 along the arms, at the crossing point it doesn't look like R^n for any n. It looks like part of R^2, but it doesn't look like R^2.
    Circles and spheres and Euclidean spaces and punctured any of the above are manifolds, but things with intersections or sharp boundaries tend not to be manifolds. Being a manifold basically means being nice and smooth everywhere.

    Compactness means that everything has a limit in the space in question. The plane isn't compact because you can head off forever in any direction you choose. The open interval also isn't compact because although it's bounded by distance, you can do Zeno's half-the-remaining-distance trick to get closer and closer and closer to one of the ends but never reach it, and there's no point within the space that you're approaching.
    Basically, you can't choose an infinite sequence of points without having an infinite number of them bunch up at a point within the space.
    The spheres are all compact. A closed interval is compact. A region of the plane with its boundary is compact, but that same region minus a point in the middle is not compact, because like the open interval you can approach the missing point without hitting it, and there's no single point within the space that you're bunching up at.
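
    Written out, the Zeno trick in the open interval is a sequence that bunches up only at a point the space is missing:

    ```latex
    \[
      x_k = 1 - 2^{-k} \in (0,1), \qquad x_k \longrightarrow 1 \notin (0,1).
    \]
    ```

    No subsequence of this sequence converges to any point inside (0,1), which is exactly the failure of compactness.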

    So finally, the Poincare Conjecture states that if we have a compact 3-dimensional manifold X such that π1(X) = 0, then X is homeomorphic to a 3-sphere.
    The Generalized Poincare Conjecture states that if we have a compact n-dimensional manifold X such that the homotopy groups of X are equal to the corresponding homotopy groups of the n-sphere, then X is homeomorphic to an n-sphere.

    The generalized version was known for a long time to be true for 2-spheres. 1-spheres, i.e. circles, aren't simply connected. So those are the boring cases, easily proven because there's no real room for anything complicated to happen.

    A combination of work by Smale, Stallings, Zeeman and Newman proved the generalized Poincare conjecture for dimensions 5 and up, by going through the more rigid "piecewise linear" variant of the question. Smale, usually credited with the proof of the 5+ dimensional Poincare conjecture, used a technique called "the Whitney trick". The Whitney trick in n dimensions, used to untangle n/2-dimensional objects from each other, turns the untangling into a problem about 2-dimensional objects. But when n = 4, then n/2 = 2, so the Whitney trick hasn't actually bought you anything there. Hence the technique fails for dimensions less than 5.
    In general, topology in dimension 4 is known to be weird, and this is one of the reasons.

    Freedman then showed that the Poincare Conjecture is true in dimension 4; I don't actually know much about how this was done.

    Finally, in 2003, Grigori Perelman showed the Poincare Conjecture in dimension 3.
    What he actually did was show the Thurston Geometrization conjecture, which stated that 3-dimensional manifolds had to be made of combinations of 8 types of pieces. The only combination of those pieces which then yielded a compact and simply-connected manifold was the 3-sphere.
    Perelman's work came at the problem from an interesting angle, in that he looked at the issue as one of differential equations related to the heat equation from physics, with the equation describing how the manifold would deform according to some law. As the manifold deformed, it would break apart into its pieces, and Perelman showed that those pieces were the ones of the Geometrization conjecture.

    Perelman published his work as several online papers. When the papers were eventually accepted as having a few minor gaps but essentially complete, Perelman was offered both the Fields Medal and the Clay Math Institute's Millennium Prize. He refused both, and the associated prize money, and vanished into seclusion, leaving academia entirely. Mathematics journalists tell tales of having to stake him out at the house where he lives with his mother in order to get reluctant interviews.

    There's actually more to the Poincare question. The original version talked only about continuity, but we can ask for more stringent conditions. What if we wanted to ask about smoothness? We mentioned smooth functions before, as functions that you can differentiate as much as you want.
    Recall that a manifold looks locally like Rn. We can take the differentiation structure of Rn and transfer it to our manifolds, and then talk about smoothness for maps between manifolds. So what if, instead of just homeomorphism we insisted on diffeomorphism, i.e. a map between manifolds that is invertible where both the map and its inverse are smooth?
    The smooth Poincare conjecture is known to be false in general: it's true in dimensions 2, 3, 5 and 6, known to be false in dimension 7 and a number of other dimensions, and unknown in dimension 4. A counterexample to the smooth Poincare conjecture is called an "exotic sphere": an object that is homeomorphic to but not diffeomorphic to the standard n-sphere.

    Dimension 4 is really the weird place. It's large enough that you can get nontrivial things like pairs of surfaces that intersect at isolated points, but not so large that you can necessarily get those surfaces to avoid each other just by moving them a bit. Also the topological and smooth behaviors of things start to diverge; one of my professors said that it was because while the Whitney trick will eventually work if you do it an infinite number of times, that's topologically an okay procedure but not okay from a smoothness perspective.
    Also, there are exotic R^4s, i.e. manifolds that are homeomorphic to standard R^4 but not diffeomorphic; continuous, but not smooth. As a differential geometer, that absolutely blows my mind. It looks like an R^4 as long as you don't try to do any calculus.
    There are no exotic R^n's in any other dimension, only in dimension 4. That means that if you take one of the exotic 7-spheres and remove a point, you get a manifold that is homeomorphic to R^7, and therefore must be diffeomorphic to it. So all the smoothness misbehavior of the exotic 7-sphere can be focused at a single point.

    Anyway, this ends the set of Millennium Prize problems that I think I should talk about, since while I love pretending to be more knowledgeable than I am about math, I try not to do it when there are people who actually know what I'm only pretending to.
     
    Last edited: Feb 10, 2017
  20. Exohedron

    Exohedron Doesn't like words

    Quick detour to hype a book:
    I've always thought that mathematics should be presented as a more hands-on activity. Geometers have to develop the ability to "visualize" high dimensional or non-embeddable objects, but for those without the training, there's so much you can show and so much you can learn just by having a thing in your hands.
    Henry Segerman's videos make it look like he has the best job.
     