Math(s)! Currently: Summer of Math Exposition

Discussion in 'General Chatter' started by Exohedron, Mar 14, 2015.

  1. Exohedron

    Exohedron Doesn't like words

    Fact I had forgotten until today: Given a graph, i.e. a set of points (called vertices) and edges connecting them, we can talk about Hamiltonian circuits: closed paths that visit every vertex exactly once. You can describe a graph with n vertices by specifying for each pair of vertices whether there is an edge between them or not; in other words, a graph on n vertices is given by n(n-1)/2 pieces of information.
    Given that as our measure of the size of a graph, determining whether a given graph has a Hamiltonian circuit or not is NP-complete.
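
    A minimal brute-force sketch of what "has a Hamiltonian circuit" means (the function name and dict-of-sets graph encoding are just for illustration): it tries every ordering of the vertices, which takes exponential time, and the point of the NP-completeness result is that nobody knows how to do fundamentally better.

    from itertools import permutations

    def has_hamiltonian_circuit(adj):
        # adj maps each vertex to the set of vertices it shares an edge with
        vertices = list(adj)
        if len(vertices) < 3:
            return False
        start = vertices[0]
        for perm in permutations(vertices[1:]):
            cycle = [start, *perm, start]
            # check that consecutive vertices in the proposed cycle are joined by edges
            if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
                return True
        return False

    # Example: the 4-cycle (a square) has a Hamiltonian circuit.
    square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(has_hamiltonian_circuit(square))  # True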
     
    • Like x 1
  2. seebs

    seebs Benevolent Dictator

    oh hey, funny thing: gray code coming up, because i'm looking at a rotary encoder that has docs saying it's "sort of like" gray code, which it's not.

    also: Towers of Hanoi is a usable binary coding system.

    https://www.seebs.net/log/articles/246/hanoi-coding
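
    For reference, a minimal sketch of what actual (binary-reflected) Gray code looks like in Python - successive values differ in exactly one bit, which is the property a rotary encoder would actually want; the helper names here are just for illustration, not from the linked article.

    def to_gray(n: int) -> int:
        # binary-reflected Gray code: adjacent values differ in exactly one bit
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        # invert the encoding by folding the bits back down
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    # [to_gray(i) for i in range(8)] == [0, 1, 3, 2, 6, 7, 5, 4]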
     
    • Informative x 1
  3. evilas

    evilas Sure, I'll put a custom title here

    This video also explains it, though it's a lot more basic in how it starts:


    EDIT: Similarly, in part 2 he shows you how the "restricted" Tower of Hanoi (where you can only move the disk to the adjacent stick and not to the furthest one) is a usable ternary coding system
     
    Last edited: Jul 4, 2017
    • Useful x 1
  4. Exohedron

    Exohedron Doesn't like words

    The Liar's paradox, the statement "This sentence is false", can't be true because then it would be false (i.e. not true), but it also can't be false because then it would be true (i.e. not false). So we usually say that it is undecidable, because we can't consistently say that it is true or false.

    Hat tip to Gerard Westendorp on Google Plus for the following sentence:

    "This sentence is either false or undecidable."

    The sentence can't be true, because then it would either be false (i.e. not true) or undecidable (i.e. not true).
    The sentence can't be false, because then it would be true (i.e. not false).
    The sentence can't be undecidable, because then it would be true (i.e. not undecidable).

    So now what do we call it?
     
  5. Exohedron

    Exohedron Doesn't like words

    "The laws of mathematics are very commendable but the only laws that apply in Australia is the law of Australia."
    -Australian Prime Minister Malcolm Turnbull
     
    • Winner x 1
  6. Exohedron

    Exohedron Doesn't like words

    One of my favorite stories about famous talks is about Frank Nelson Cole. But before I describe his talk and why I love it so much, I have to give a bit of background.

    A Mersenne Prime, named after Marin Mersenne, is a prime number of the form 2^n - 1. These are nice primes for a bunch of reasons, not the least of which is that they're easy to specify for both humans and computers. And in our age of digital cryptography, we could always use more primes that are easy for computers to deal with.

    Let's look at the first few numbers of the form 2^n - 1:
    2^1 - 1 = 1
    2^2 - 1 = 3
    2^3 - 1 = 7
    2^4 - 1 = 15
    2^5 - 1 = 31
    2^6 - 1 = 63
    2^7 - 1 = 127
    2^8 - 1 = 255

    You can see that for these first few, when n is prime, 2^n - 1 is prime, and when n is not prime, 2^n - 1 is not prime. Does this hold in general?
    It's not super hard to show that when n is composite, then 2^n - 1 is also composite. If that statement isn't immediately clear to you, it might be a good puzzle to poke at.
    Unfortunately, it is not the case that when n is prime, 2^n - 1 is necessarily prime. For instance, 2^11 - 1 = 2047 = 23*89. So not all primes give us Mersenne primes. It is not even currently known whether there are infinitely many Mersenne primes.

    Some time in the 17th century, Mersenne came up with a list of values of n up to 257 such that 2^n - 1 was prime, but he got a few entries wrong. For instance, his list didn't contain n = 61, even though it turns out that 2^61 - 1 is indeed prime.

    Edouard Lucas, in 1876, showed that 2^67 - 1 is not prime despite Mersenne's belief that it was, but he did so without finding the factors.
    Which brings us to Cole's 1903 talk, wherein, without speaking a word, he went up to the blackboard, computed 2^67 - 1 on one side of the board, computed 193,707,721*761,838,257,287 on the other, and then sat back down while the audience applauded.
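
    As a quick sanity check (just a sketch, not part of the story above), Python's arbitrary-precision integers make it easy to verify Cole's factorization and to see which small exponents actually give Mersenne primes:

    def is_prime(n):
        # simple trial division; fine for the small Mersenne numbers below
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print(2**67 - 1 == 193707721 * 761838257287)             # True
    print([n for n in range(2, 32) if is_prime(2**n - 1)])   # [2, 3, 5, 7, 13, 17, 19, 31]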
     
    Last edited: Jul 15, 2017
    • Like x 1
  7. Exohedron

    Exohedron Doesn't like words

    p-Adics

    Let's talk about the p-adics, because the p-adics are fun. And weird. But fun.

    Let's talk about the absolute value of a rational number. This is just saying that if x is positive, then |x| = x, and if x is negative, then |x| = -x, and if x is 0 then |x| = 0. From this, we can define a distance between two numbers by saying "the distance between x and y is the absolute value of their difference". And this gives us a geometry on numbers. A kind of boring geometry, but a geometry nonetheless.

    But we can do fun things by playing with the absolute value function, and hence the distance function. For instance, given a nonzero integer x, we can ask for the largest power of, say, 5, that divides x, and then say that |x|_5 is 1 divided by that largest power of 5. So |1|_5 = 1, and |5|_5 = 1/5, and |10|_5 = 1/5, and |100|_5 = 1/25. Just to be complete, we say that |0|_5 is still 0.
    Note that this new absolute value function shares an important property with the usual absolute value: |a|_5 |b|_5 = |ab|_5. So it's not entirely different from our usual notion of absolute value.
    Now this gives us the distance function d_5(x, y) = |x - y|_5. And this guy is kind of weird. For instance, the distance between 0 and 5 is 1/5, while the distance between 0 and 25 is 1/25, and the distance between 0 and 125 is 1/125, so it looks like as you get bigger, the distance gets smaller. But! the distance between 0 and 4 is 1, and the distance between 0 and 24 is 1, and the distance between 0 and 124 is 1, so it's not quite that simple.

    What about for rational numbers? Obviously, d(x, y) = |x - y| still works for rational numbers. What about our new d_5?
    Well, what does it mean for 5 to divide a rational number? Not terribly much, because you can always divide any rational number by 5 and get a rational number. So instead we'll be a bit more precise: given a nonzero rational number, we can write it as a/b where a and b are integers that have no factors in common, and this expression is unique if we insist that b has to be positive. Now if a is divisible by 5 then b is not, so we can say that |a/b|_5 is 1 divided by the largest power of 5 that divides a, while if b is divisible by 5 and a is not, then |a/b|_5 is the largest power of 5 that divides b, and if neither is divisible by 5 then |a/b|_5 = 1.
    So |10/3|_5 = 1/5, while |7/50|_5 = 25, and |4/9|_5 = 1.
    The 5-adic distance on rational numbers is then d_5(x, y) = |x - y|_5, just as before.
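
    If it helps to have something to play with, here's a small sketch of |x|_5 and d_5 in Python using exact fractions (the abs5/d5 names are just for illustration); it reproduces the examples above:

    from fractions import Fraction

    def abs5(q):
        # 5-adic absolute value: 1 over the largest power of 5 dividing the numerator,
        # or the largest power of 5 dividing the denominator, as described above
        q = Fraction(q)
        if q == 0:
            return Fraction(0)
        k = 0
        while q.numerator % 5 == 0:
            q, k = q / 5, k + 1
        while q.denominator % 5 == 0:
            q, k = q * 5, k - 1
        return Fraction(1, 5**k) if k >= 0 else Fraction(5**(-k))

    def d5(x, y):
        return abs5(Fraction(x) - Fraction(y))

    print(abs5(Fraction(10, 3)), abs5(Fraction(7, 50)), abs5(Fraction(4, 9)))  # 1/5 25 1
    print(d5(0, 125), d5(0, 124))                                              # 1/125 1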

    The resulting geometry is now really, really weird!

    For instance, if we have three rational numbers x, y, and z, then the normal distance function obeys what is called the triangle rule: d(x, y) + d(y, z) is at least as big as d(x, z): the sum of the lengths of two sides of a triangle can't be smaller than the length of the third side. When we're looking at d_5, however, we get that not only is d_5(x, y) + d_5(y, z) at least as big as d_5(x, z), d_5(x, z) can't even be bigger than whichever of d_5(x, y) or d_5(y, z) is larger. A little fiddling then tells us that all of our triangles are isosceles, i.e. two sides have to be the same length.

    Another fun situation: look at closed disks, by which we mean pick a point and a length and consider all the points no more than that length away from the chosen point. So for instance, if we picked the point 1 and length 1/5, we'd be looking at the interval [4/5, 6/5] in the normal distance function. This is kind of bland, but for now let's just note that there's a definite center and two endpoints at length exactly 1/5 away from 1. Moreover, the center is the only point that looks center-ish, i.e. that is the same distance from both of our endpoints. Every other point in our disk is too close to one endpoint and too far away from the other.
    Now suppose we wanted to look at a closed disk in our new distance function. So our center is 1, and 6 has distance 1/5 away from 1, as does -4, and 11, and 16, and in fact an infinite number of other points. So we have a bunch of endpoints, or perhaps boundary points is the better term. Note that, for instance, 26 is distance 1/25 away from 1, so is in the disk but isn't a boundary point. Similarly, 51 is in the disk but not a boundary point. Call the points in the disk but not on the boundary "interior" points.
    Fun fact: d_5(26, 6) = 1/5, and d_5(26, 11) = 1/5, and indeed, if d_5(1, x) = 1/5, then d_5(26, x) = 1/5. Similarly, if d_5(1, x) = 1/5 then d_5(51, x) = 1/5. Indeed, every point in the interior of the disk is distance 1/5 away from every point on the boundary: every interior point is the center of the disk.

    And this is only the 1-dimensional geometry!

    We can generalize this a bit. The only special fact about 5 that's at work here is that 5 is prime, so we can define a | |_p and a d_p for any prime p. We call these the p-adic absolute value function and the p-adic distance function, respectively. And for any prime p, we get a weird geometry on the rational numbers.

    Puzzle: where did I use the fact that 5 is prime?

    Why do we care? Well, these give us a new way to look at the rational numbers in terms of factorization and primes, and number theorists always like those. But they also give us an alternative to the real numbers.

    One way to construct a real number is as a sequence of rational numbers. Our decimal system does this: pi is approximated by the sequence 3, 3.1, 3.14, 3.141, etc, and we trust that in our system this sequence converges to pi. When we say "converge", we really mean that the distance between successive terms gets small enough fast enough that the terms bunch up at one particular point that we can treat as a number.
    Note the "fast enough". The sequence 1, 1+1/2, 1+1/2+1/3, 1+1/2+1/3+1/4, etc has terms where the difference between consecutive terms gets small, being 1/n, but not fast enough; the sequence actually goes off to infinity.

    The p-adics are nicer. There's no "fast enough" condition: as long as the p-adic distance between consecutive terms goes to 0, no matter how slowly, a sequence of rational numbers converges in the p-adic sense. So we can look at p-adically convergent sequences of rational numbers, getting the set Q_p of p-adic numbers*.
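
    As a concrete illustration of the "no fast enough condition" point (this example is mine, using the same kind of abs5 helper as in the earlier sketch): the partial sums 1, 1+5, 1+5+25, ... blow up in the usual sense, but 5-adically their consecutive differences 5^n shrink to 0, and in fact the partial sums pile up on -1/4.

    from fractions import Fraction

    def abs5(q):
        # same 5-adic absolute value sketch as before
        q = Fraction(q)
        if q == 0:
            return Fraction(0)
        k = 0
        while q.numerator % 5 == 0:
            q, k = q / 5, k + 1
        while q.denominator % 5 == 0:
            q, k = q * 5, k - 1
        return Fraction(1, 5**k) if k >= 0 else Fraction(5**(-k))

    partial = Fraction(0)
    for n in range(5):
        partial += 5**n
        print(partial, abs5(partial - Fraction(-1, 4)))  # 5-adic distances 1/5, 1/25, 1/125, ...
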
    What kinds of numbers do we get? Well, let's look at the p = 5 case again, and look at squares.
    2^2 = 4 and d_5(4, -1) = 1/5.
    7^2 = 49, and d_5(49, -1) = 1/25.
    57^2 = 3249, and d_5(3249, -1) = 1/125.
    In fact, we can construct a sequence of integers whose squares get closer and closer to -1 in the 5-adic sense. So just as we construct √2 in the normal sense as a sequence of rational numbers whose squares approach 2 in the normal sense, in the 5-adics we can construct a square root of -1 via finding numbers whose squares approach -1 in the 5-adic sense.
    However, there is no square root of 3 in the 5-adics, i.e. there is no sequence of rational numbers whose squares approach 3 in the 5-adic sense.
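
    Just to check the numbers above (without giving away the puzzles below), here's a quick verification that those squares really do approach -1 in the 5-adic sense; d_5(x^2, -1) is just the 5-adic absolute value of x^2 + 1, and the helper name is mine.

    from fractions import Fraction

    def abs5_int(n):
        # 5-adic absolute value of a nonzero integer
        k = 0
        while n % 5 == 0:
            n //= 5
            k += 1
        return Fraction(1, 5**k)

    for x in (2, 7, 57):
        print(x, abs5_int(x * x + 1))   # 1/5, 1/25, 1/125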

    Puzzle: I've given you the first three elements of a sequence that approximates √-1 in the 5-adic sense. The next is 182. Find the one that comes after.
    Puzzle: Find a general pattern for approximating √-1 in the 5-adic sense.

    Note that the p-adics are different from the finite fields that I talked about in an earlier post. For one thing, the field of p-adic numbers isn't finite: it contains at least all of the rational numbers, and indeed is uncountable (in fact, has the same cardinality as the real numbers). Also the p-adics are characteristic 0: if you add 1 to itself any number of times you aren't going to get back to 0, whereas for a finite field you eventually will.

    One more thing about the p-adics: just as you can algebraically close the real numbers by adding in a square root of -1, you can algebraically close the p-adics. But unlike the algebraic closure of the real numbers, the algebraic closure of the p-adics has a lot of gaps in the distance-sequences sense: taking convergent sequences from the algebraic closure of Q_p doesn't necessarily yield elements of the algebraic closure of Q_p. So we can add in all the limits of convergent sequences and get a field sometimes denoted by C_p. C_p is now both algebraically closed and has no gaps, so we can't really go any further with these two ideas.
    Fun fact: as a field, i.e. if we only speak of addition, subtraction, multiplication and division, C_p is isomorphic to C; it just has a funny notion of distance attached to it. Unlike the axioms for the natural numbers, the axioms for an algebraically closed field of a given characteristic form a logically decidable theory (in the Gödel sense), and moreover any two algebraically closed fields of characteristic 0 with the same uncountable cardinality are isomorphic as fields, which is what gives the isomorphism here.

    * We have to do the usual thing of saying that two convergent sequences converge to the same thing if the distance between corresponding terms goes to 0, i.e. if we have two convergent sequences x_1, x_2, x_3, etc. and y_1, y_2, y_3, etc. and d_p(x_n, y_n) goes to 0 as n goes to infinity, then we say that the two sequences converge to the same thing.
     
    Last edited: Jul 30, 2017
  8. Ben

    Ben Not entirely unlike a dragon

    Oh fuck, that's what the p-adics are! That makes so much sense!!!!

    I have nothing useful to contribute because I am currently being very slow and bad at writing analysis proofs, mostly bc I can't rub more than one brain cell together at a time.

    (I'm currently trying to prove that if a set has no finite cover of epsilon balls, then there exists at least one sequence in it that has no Cauchy subsequences.
    I can draw the answer (make a subset with all the center points of some cover's balls, then just keep picking ones that are not close to each other, preferably in one 'direction'), it's just not terribly helpful for figuring out how to write it down coherently.)
     
  9. Exohedron

    Exohedron Doesn't like words

    I really wish I had done more with the p-adics in grad school, because I've always disliked real analysis but p-adic analysis sounds super interesting, since the convergence properties are so much better than for the real numbers. On the other hand, you can't take exponentials of numbers that don't have p-adic absolute value at most 1/p, which makes a lot of the Lie stuff I like somewhat tricky.
     
    Last edited: Jul 30, 2017
  10. syntheme

    syntheme Active Member

    you just gotta define everything as an algebraic group and use the p-adic/K[[t]] analogy a lot
     
  11. Exohedron

    Exohedron Doesn't like words

    I actually didn't know what an algebraic group actually was until about a year ago; I avoided anything that sounded like algebraic geometry because I found commutative algebra really boring. Since I've learned that you can skip most of the commutative algebra I've gotten more interested.
     
  12. Exohedron

    Exohedron Doesn't like words

    Not enough people are aware of this groundbreaking paper where two leading mathematicians, in a scant 35 pages, prove that it is possible to divide by three.

    It's not quite as silly as most descriptions of the paper make it sound, because it's trying to do something involving infinite sets and trying to do it constructively rather than via the Axiom of Choice.
     
  13. Emu

    Emu :D

    Interestingly enough, you can actually go a bit farther here - C_p is "complete" (which means it has no gaps), but it's not "spherically complete", which is a related but stronger property. So instead sometimes you need to take something even bigger, the "spherical completion" Ω_p. So what on earth is this and why is C_p not good enough?

    The definition of "complete" we have is that all sequences that 'look like they should converge' actually converge to a point. You can rephrase this in terms of nested sequences of discs - it's equivalent to saying that any nested sequence of closed discs where the radii go to zero all have a common point. So if I stick down a disc of radius 1 in the plane, then a disc of radius 1/2 inside that, a disc of radius 1/3 inside of that, and so on, they'll all stack up and there will be a single point in the middle underneath them all (which the sequence of centers of discs converge to!). And it turns out asking "any nested sequence of discs with radii going to zero has a common point" is equivalent to what we mean by "complete". So for instance the rational numbers are not complete because I can take the disc of "all rational numbers q satisfying |q-π| < 1", stick the disc of "all rational numbers q satisfying |q-π| < 1/2" inside, stick the disc of "all rational numbers q satisfying |q-π| < 1/3" inside that, and so on, and these discs won't have a common point because they're narrowing in on π which isn't rational. But if I do the same thing with real numbers, they do all have π as a common element.

    But the definition "any nested sequence of discs with radii going to zero has a common point" looks a bit weird - why the talk about radii, and why can't I just say "a nested sequence of discs has a common point"? After all if I have a disc of radius 1+1/2 with a disc of radius 1+1/3 inside with a disc of radius 1+1/4 inside that and so on, then I have a bunch of big discs stacked up, nothing is shrinking to zero, and it seems like I should end up with lots of points contained in all of them. And if you're in Euclidean world then of course that's true. If I take the center point of my disc of radius 1+1/2, and I have any disc of radius 1 or bigger inside of it, that has to contain the center because a disc of radius 1 has a diameter of 2 that has to fit inside my slightly bigger disc.

    This is where things get weird with the p-adic numbers. The property that Exohedron was talking about being really nice (that there's no "fast enough" condition) also makes things really weird compared to our geometric intuition. In the p-adic world, a disc of radius 1 also has diameter 1 (precisely: if I take a starting point x and make my disc "all points y satisfying |y-x| <= 1", then any two points y,z in this disc actually also satisfy |y-z| <= 1). This is basically the same thing Exohedron said about "every point of a disc is a center of the disc". This throws the geometric argument above out the window - I can take a disc of radius 1+1/2, cram in a disc of radius 1+1/3 that avoids any single point I want, cram a disc of radius 1+1/4 inside of that which avoids another point of my choice, and so on. So the question is: can I make a nested sequence of discs of radius 1+1/n such that there isn't anything that ends up inside of all of them, that for any point I look at in one of the discs it ends up outside of a later one?

    For Q_p the answer is no, every sequence of nested discs will actually have a common point. Roughly speaking this is because Q_p isn't big enough to have enough wiggle room to make a nested sequence of discs that avoids everything. But C_p is big enough, and it turns out the answer is yes for that. So it has this strange property that any nested sequence of discs of radii 1/2, 1/3, 1/4, ... will always have a common point, but I can cook up a nested sequence of discs of radii 1+1/2, 1+1/3, 1+1/4, ... that has no common point. Thus it's "complete" but not "spherically complete"! To get something that is spherically complete you need to take an even bigger field that's called Ω_p.

    So okay, who cares? Not all that many people, but it ends up being an important distinction when you get into functional analysis (studying infinite-dimensional things like Banach spaces) over p-adic fields. There's a whole bunch of nice theorems in functional analysis for our usual world of R and C which get used a lot in math, and it turns out most of them have analogues over 'nonarchimedean fields' like the p-adics - but some of them (notably the Hahn-Banach theorem) require spherical completeness of the base field to actually be true.
     
    Last edited: Aug 4, 2017
  14. Exohedron

    Exohedron Doesn't like words

    Wow. I had no idea that was a thing that could happen. Cool.
     
  15. Exohedron

    Exohedron Doesn't like words

    [image not preserved]
    From a talk on magnitude homology by Tom Leinster.
     
  16. Exohedron

    Exohedron Doesn't like words

    So for those of you who don't follow certain parts of math news, there's a new, not-obviously-wrong attempt at proving that P is not NP by Norbert Blum, who is the chair of the CS department at the University of Bonn. arXiv preprint, CS stackexchange discussion.
    I am completely unequipped to evaluate either the paper itself or the surrounding discussion, but it's exciting because apparently the paper isn't obviously wrong and the author has good academic credentials, which makes the paper much more interesting than the vast majority of attempts to solve P v NP. Even if it isn't a solid proof, or even a holey proof, it should still bring some interesting stuff to the table.
    Of course, if it is a proof, or even just most of a proof, that would be the second Millennium Problem solved, the first in over a decade.


    More news as this develops, modulated by my ability or lack thereof to understand said developments.

    Update 08/18: Apparently there is a fatal flaw, which I don't understand but am reporting on anyway because I am a good journalist. From what I can tell, it boils down to the existence of a function proven to be in P to which the reasoning that Blum used to show various things to not be in P should also apply. Hence Blum's reasoning must be wrong.
     
    Last edited: Aug 18, 2017
  17. syntheme

    syntheme Active Member

    Not surprised that even a solid non-cranky P!=NP claim turned out to be mistaken. Honestly, of the Millennium Problems, that's the one most likely to take an actual millennium, if you ask me
     
  18. Exohedron

    Exohedron Doesn't like words

    I do like that we seem to be getting good stuff out of the attempts, though. There was one a few years back that tried translating things into statistical mechanics that, even though it failed, had a bunch of interesting ideas in it. This one didn't seem to have a lot of really new stuff, but some of the uses of old stuff seem a bit novel.
    I actually don't really know what's happening in terms of the other problems. Have there been notable attempts at BSD recently? What about Hodge?
    Or is it that people might be more content to get incremental results toward them (which might make them appear easier), whereas the attempts at P v NP seem to be all or nothing (which might make it appear harder in comparison)? I can't even recall the last time I heard of a serious attempt at the Riemann Hypothesis, to be honest, despite it supposedly being the most well-known of the Millennium problems.
    Of the other problems, I have the completely unjustified gut feeling that Navier-Stokes will be the next to fall, and I really, really want Yang-Mills to not have a solution because I hate the Standard Model.
     
  19. Exohedron

    Exohedron Doesn't like words

    Anyway, I figured that if the attempt didn't work out, it's good to show people that even mathematicians struggle at doing math.
     
  20. syntheme

    syntheme Active Member

    Well, I think P=NP is the hardest because in a vague sense (and certain precise senses that I don't know much about tbh) P!=NP makes it a lot harder to solve problems like the P vs NP question itself, whereas P=NP would make it easier to solve such problems. So if P=NP you'd expect the proof to be easy, but if P!=NP you'd expect the proof to be very hard, whereas the other ones don't have their own likely truth value making themselves harder.
     