John Forbes Nash, Jr., known for the Nash Equilibrium and for having schizophrenia, died in a car crash on Sunday along with his wife. Much sads. So I'm going to talk about his Equilibrium.

Suppose you have two players playing a game. It's a specific type of game, wherein each player has a pre-specified, finite set of possible moves, and the payoff from any particular combination of moves is known in advance. In particular, if player 1 makes move a and player 2 makes move ii, then both players know how many points or whatever each player will get as a result of those choices. We further assume that both players are completely rational, both want to win, and they both know that they both know that blah blah blah etc. Everybody knows everything except for which particular choice the other player is going to make. We call this a game of complete information.

Furthermore, we allow what are called "mixed strategies": instead of picking a particular move, each player is allowed to assign a probability to each move, so maybe player 1 picks move a with probability 1/2, move b with probability 1/3, and move c with probability 1/6. The expected payoff is then the total of the payoffs for each possible pair of moves from the two players, weighted by the probability of the players picking those moves.

So what can player 1 do? Without knowing what player 2 is going to do, player 1 wants a strategy that gets her an expected win no matter what player 2 does. But that doesn't always exist. In lieu of that, player 1 maybe wants a strategy that is expected to do better than any other strategy player 1 could pick, regardless of what player 2 does. But that also might not exist. So maybe player 1 wants a strategy that is expected to do better than any other strategy player 1 could pick, assuming that player 2 picks the best strategy as far as player 2 is concerned. But what's the best strategy for player 2? That depends on what player 1 does! So this is kind of circular.
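As a tiny worked example of the expected-payoff computation, here's a sketch in Python. Player 1's probabilities are the ones from the paragraph above; the payoff matrix and player 2's strategy are made up purely for illustration:

```python
# payoff[i][j] = points for player 1 when she plays move i and player 2 plays move j.
# These numbers are invented for the example.
payoff = [[3, -1],   # player 1's move a vs player 2's moves i, ii
          [0,  2],   # move b
          [1,  1]]   # move c

p = [1/2, 1/3, 1/6]  # player 1's mixed strategy (from the text)
q = [1/4, 3/4]       # an assumed mixed strategy for player 2

# expected payoff: each payoff weighted by the probability of that pair of moves
expected = sum(p[i] * q[j] * payoff[i][j]
               for i in range(3) for j in range(2))
```

Every pair of moves contributes its payoff times the chance of both players picking it, which is all "expected payoff" means here.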
Player 1's best strategy depends on what player 2's best strategy is, and player 2's best strategy depends on what player 1's best strategy is. So we need a new way of looking at this.

Suppose that if player 2 picks strategy IV, then player 1's best strategy is strategy A. And suppose that if player 1 picks strategy A, then player 2's best choice is strategy IV. Then if player 1 picks strategy A, player 2 is going to pick strategy IV, and vice-versa. So we have here an equilibrium: a pair of strategies such that, if the two players both think that pair is going to be chosen, neither will want to deviate from it.

Of course, this kind of situation can arise with any number of players, not just 2. As long as everybody knows the payoffs for everybody else, everybody can figure out what their optimal choice is supposing that they know what all the other players are going to do, so they can figure out if there is an equilibrium. Nash's theorem is that such an equilibrium (allowing mixed strategies) has to exist. It may not be unique, but it must exist.

This might not be the best payoff for all, or even any, of the players! Like a compromise that nobody wants but is the only thing that will make it through the committee, it could be that there are other sets of choices that would net both players more points, but that are only possible if everyone cooperates. But if the players can't trust one another, then the equilibrium might only give a small or even negative payoff to each player, and the only reason it isn't worse is that neither player wants to be the only one to deviate. Think of global thermonuclear war: nobody wants to be the only one not launching nukes, even though the best option would be for nobody to launch nukes. Moreover, if you think that the other person is going to launch a nuke, you want to launch first.
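To make the equilibrium idea concrete, here's a minimal sketch in Python that finds the pure-strategy equilibria of a two-player game by checking every pair of moves for profitable unilateral deviations. The payoff numbers are the classic prisoner's-dilemma values, my choice of example rather than anything from the post:

```python
# payoffs[(m1, m2)] = (points for player 1, points for player 2)
# 'C' = cooperate, 'D' = defect; standard prisoner's dilemma numbers.
payoffs = {
    ('C', 'C'): (-1, -1), ('C', 'D'): (-3, 0),
    ('D', 'C'): (0, -3),  ('D', 'D'): (-2, -2),
}
moves = ['C', 'D']

def is_equilibrium(m1, m2):
    # An equilibrium: neither player can do strictly better by deviating alone.
    p1, p2 = payoffs[(m1, m2)]
    no_dev_1 = all(payoffs[(alt, m2)][0] <= p1 for alt in moves)
    no_dev_2 = all(payoffs[(m1, alt)][1] <= p2 for alt in moves)
    return no_dev_1 and no_dev_2

equilibria = [(m1, m2) for m1 in moves for m2 in moves
              if is_equilibrium(m1, m2)]
```

Note that (C, C) gives both players more points than the lone equilibrium (D, D): exactly the committee-compromise situation described above, where cooperation pays better but isn't stable.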
So Nash equilibria sometimes lead to what appears to be irrational behavior, even though everyone is acting rationally, taking the best option available according to what they know of everybody else.

This theorem has obvious consequences for game theory and economics and population biology and anything else that can be modeled as a bunch of agents making choices. It depends on having complete information and assuming rationality of everyone involved, and also on the threats of deviation being credible. Subsequent work by Nash and others generalized his original idea to the cases of incomplete information and non-credible threats: what if you don't know how much value other countries put on not being nuked, and how likely they are to actually launch nukes once they start threatening to? There is also work describing when we should expect there to be a unique Nash equilibrium, and when that unique equilibrium is actually the best outcome for everyone.

Nash also made several large contributions to other branches of mathematics. For instance, the Nash-Moser theorem concerns the existence of solutions of certain types of partial differential equations useful in celestial mechanics, and the Nash Embedding theorem allows us to move from descriptions of geometric objects as patched-together chunks of R^{m} to descriptions as subsets of R^{n}. He was actually on his way home from receiving the Abel Prize, the analogue of the Nobel Prize for mathematics, for his work on partial differential equations when he died.
I was wondering if you guys could help me? Like I said some time ago, I want to try the entrance exams for med school and I just suck at math. I've been trying to think of it as a language, because I am good at languages, but... There's no such thing as a math-Portuguese dictionary. I learned English on my own with an English grammar book, a dictionary, and every single CD booklet my grubby little hands managed to touch; I can't do that with math. I practice English every day just by acting normally on the internet; I can't do that with math. Or physics, for that matter. My tutor is an angel and really good at explaining the very basics, but I'm still feeling terribly humiliated by taking so damn long to understand; I don't feel like I understand math with the absolute certainty I understand Portuguese or English, like, I can barely understand it at all. And even the basic Venn-diagram-groups-thing exercises make me a bit angry. For some reason exact sciences don't sound rational at all in my brain?? The examples used to... Exemplify just make me confused. Idk, I wanted to know if you guys knew anywhere with a teaching method that reads like "basic sentence structure is subject plus verb plus object and X verb is conjugated Y way" instead of. Idk. Those stupid lesson-based second language textbooks where chapters are themed and grammar is taught out of order and they think it's teaching if you memorize half a dozen colors and objects and can say that the blue book is on the table but not that it should be in the fucking bookcase. Only like, the math equivalent? Like I learned best by reading the structure of the language and then getting the vocabulary and I hate when it's taught the other way around and it's the same feeling I get anywhere near ~exact sciences~ because I guess my brain and numbers just have a weird relationship?
Okay, now for something that a lot of people don't immediately recognize as math: knots! By which I mean bits of string curled up and twisted, like when you'd really like to listen to music but you made the mistake of putting your earbuds in your pocket for like ten seconds and haha look at them now.

Well, not quite. Mathematicians are mostly concerned with knots made of one piece of string, no branching or splitting, where you take the ends of the string and fuse them together, leaving you with a complicated loop with no ends. And the question is "given this knotted loop, can you, by tangling and untangling, turn it into this other knotted loop, assuming you can stretch it all you want but you can't cut it?" If the answer is yes, then we say that the two knots are "equivalent", and the game is to figure out how to tell whether two knots are equivalent without going through the effort of actually turning one into the other. We fuse the ends together because, with sufficient patience, any knot where you don't fuse the ends together can be undone, by taking one end and pushing it back through any loops it passes through. If you fuse the ends together, then the question becomes more interesting.

There is, of course, a mathematically formal way of stating the question: given two topological embeddings f and g of S^{1} into R^{3}, is there an orientation-preserving homeomorphism h: R^{3} -> R^{3} such that h(f(S^{1})) = g(S^{1})?

This is one of those math problems that, when it was first turned into a math problem, everyone thought was going to be easy. We have plenty of hands-on experience with knots, so figuring out this kind of thing should just be a matter of playing with some string and observing some patterns, right? But it turns out to be very difficult, prompting developments in many, many branches of mathematics, ranging from algebra to geometry and even having implications for statistics and abstract quantum theory.
In fact, one of the earliest motivations for studying knots was a theory of Lord Kelvin's in the 1860s that the various types of atoms came from different types of knots in the aether. That theory didn't work out, but it did prompt mathematicians and physicists to seriously consider the question of what kinds of knots there are, and now knots are coming back into physics.

One thing that mathematicians do to make the question more tractable is to reduce it from questions about string in R^{3} to questions about diagrams in the plane, which at the very least can be drawn on paper. We hold the knot up to a light source and record the shadow it makes. We want specific kinds of shadows, just for the sake of clarity. We don't want the shadow of our knot to run along itself anywhere, so if it overlaps itself, it should do so at an angle, so we can tell where each bit of it is heading. We also don't want more than two bits crossing at any given point; otherwise it's kind of hard to tell what's going on. Also, when we do have a crossing, we mark which strand is above the other (or at least, closer to the light source) by cutting the lower one and leaving a bit of a gap between the two ends we just created and the over strand. Our diagrams end up looking like this: Spoiler

So what does our question look like in terms of these knot diagrams? How do the tanglings and untanglings of strings floating in the air translate to their shadows on the plane? In 1927, Alexander, Briggs and Reidemeister came up with what are called the Reidemeister moves, a set of three things you can do to a knot diagram that give you the diagram of an equivalent knot; indeed, any diagram of any equivalent knot can be reached through some combination of these moves, plus things like shifting strands around without changing any of the crossings.
Spoiler Now instead of making a movie of us turning one knot into another, we just draw two diagrams and make a list of Reidemeister moves that will turn one diagram into the other. This is still a hard problem, but now it's a bit easier to talk about than the somewhat airy "tangling and untangling". So now we can start trying to extract information from these diagrams.

We can do things like count the number of crossings, which is easy for any given diagram. The circle has 0 crossings, and the trefoil knot, the simplest non-trivial knot, has 3 crossings. Of course, we can always add loops via twisting, like in the first Reidemeister move, so really we should count the minimum number of crossings over all diagrams of the knot. This is harder, because it's not immediately clear what all the diagrams of a given knot are. So what we want is something that can be computed from a diagram and that doesn't change when we apply the Reidemeister moves to get an equivalent diagram. We call such things "knot diagram invariants".

For example, if you look at the diagrams above, at each crossing it looks like we have one strand cutting another strand in two. Suppose that actually happens, so we end up with a bunch of segments. We say that such a diagram is 3-colorable if we can color each segment red, green or blue such that at least two colors are used and, at each crossing, the three segments, one going over and the two that we got from cutting the under one, are either all the same color or all different colors. If a diagram is 3-colorable, then all equivalent diagrams are also 3-colorable, so if one knot has a diagram that is 3-colorable, and another knot has a diagram that is not 3-colorable, then the two knots can't be equivalent.
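The 3-colorability check is easy to brute-force for small diagrams. Here's a sketch in Python; encoding a diagram as a list of (over, under, under) segment triples is my own convention for this example, and the trefoil data below is the standard 3-crossing diagram, where each crossing has one over-segment and the two pieces of the cut under-strand:

```python
from itertools import product

def tricolorable(num_segments, crossings):
    """crossings: list of (over, under1, under2) segment indices.
    Try every assignment of the 3 colors to the segments."""
    for colors in product(range(3), repeat=num_segments):
        if len(set(colors)) < 2:
            continue  # rule: at least two colors must actually be used
        # rule: at each crossing, the three segments are all the same
        # color or all different colors (1 or 3 distinct colors)
        if all(len({colors[o], colors[u1], colors[u2]}) in (1, 3)
               for o, u1, u2 in crossings):
            return True
    return False

# Standard trefoil diagram: 3 segments, each one passing over the next.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
```

The trefoil comes out 3-colorable, while the crossing-free circle diagram has only one segment and so can never use two colors; since 3-colorability is preserved by the Reidemeister moves, this already proves the trefoil is genuinely knotted.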
There are a bunch of diagram invariants known so far, and they can be used to distinguish some knots from some other knots, but a complete set, capable of showing that any two inequivalent knots are actually inequivalent and that any two equivalent knots are actually equivalent, is currently unknown.

There are also ways of dealing with knots other than diagrams. For instance, we can build what is called a "Seifert surface", which is the kind of thing you'd get if you made the knot out of metal, dipped it in soapy water and looked at the soap film you got when you pulled it back up. We can examine the properties of these surfaces, asking, for instance, how many holes they have. Other methods include looking at the knot complement, the shape you get when you take R^{3} and delete the knot, leaving a strangely shaped void in space. The geometry of things forced to stay away from that void leads to information about the knot.

Closely related to knots are objects called "braids". Yes, like the hair. We start with n points in a row, and to each point we attach a piece of string, and then we do some stuff, and then we attach the ends of the strings to another row of n points. That gives us a braid. And then we again ask when two braids can be turned into one another without any cutting, only now we can't move any of the points, and we can't move our strings around the points. We have a very similar notion of a diagram for a braid, again showing the over-under behavior at each crossing. We can turn a braid into a knot by joining each of the n points at the start to the corresponding point at the end with an unknotted piece of string, giving us some number of loops. Braids are a lot easier than knots to work with, but many different braids can give the same knot, so knowing about braids doesn't always tell us much about knots.

There are also objects called "tangles", which are a bunch of strands with no restrictions on where the ends are,
but again, for any given tangle, you can't move the ends and you can't have the strands go around the ends.

A related but much less well-developed idea is higher-dimensional knots. A string is essentially 1-dimensional: you can go along the string one way, or you can go back along the string in the other direction, and that's really all that matters. Two pieces of string that aren't parallel give two dimensions, one for each string, and can cross each other by having one be displaced along the third dimension, "above" the other piece of string. Now suppose you had surfaces: two surfaces that aren't parallel give four dimensions total, since each surface has two dimensions, and you can separate two "crossing" surfaces by moving one along a fifth dimension. Now instead of loops we have things like knotted spheres, and you can ask when two knotted spheres are equivalent. There has been relatively little work done on this, at least partially because most people want an answer to the strings-in-R^{3} question first.

Edit: none of the pictures are showing up for me. Is this the case for other people? If it doesn't work, clicking on the broken images should take you to the images themselves, which I took from Wikipedia.
@witchknights I wouldn't say that the main issue is linguistic. Math mostly uses the same grammar as whatever language it's being taught in, so it would be mostly a vocabulary problem. Except I don't think it's that simple. There could be two things going on here.

One is dyscalculia. Numbers don't always make sense to everyone, and sometimes that's a brain issue rather than an education issue or a personality issue. I have no advice in this case, as I don't know much about dyscalculia other than that it exists and can get pretty cumbersome.

The other is that math requires, to some extent, thinking in a particular way, which may tie into the dyscalculia. Rather than learning a language, mathematics is more like learning how languages work in the abstract, without real connection to anything that anyone actually speaks; instead of learning an order "subject verb object", you learn what parts of speech look like, what subjects look like, what verbs look like, what objects look like, what ergatives look like, according to a person who doesn't know any actual languages. You learn syntax tree theory, you learn government and binding theory, you learn minimalism. And if you like that stuff, then you might be good at math if you try to view it that way: mathematics as a metalanguage. If you want to learn how to speak a language, however, it can be frustrating, because none of this looks like it will help, and indeed most of it won't.
Sorry I hadn't seen this before! The ping didn't go through. I think a lot of my problems with math can be tied up with attention issues, which... I don't think are that simple to solve beyond "pay more attention". It works like this, sorta: I read relatively fast - nothing to write home about, but I can average one The Graveyard Book, in English, every two or three hours; a good traffic jam's worth of time. A lot of the time, though, I don't read carefully. I can't quite describe it, but I have a good enough vocabulary that I can sort of get the beginning and ending of words instead of reading sound-by-sound? And I know if words are written wrong or right because of their general shape-thing, how they "feel". That's really hard to do with math. So I usually forget a lot of concepts because I just skim things - and I don't actually read the symbols on things. It's like any symbols or abbreviations of things just become this... Wiggly gibberish. I have trouble remembering the processes and logic because I just skip right over them. It's my problem, of course, but it's still frustrating. I only started getting better at things when I started writing out all the logic to everything, but it takes a while and it's sorta... Sad.
Yeah, I have that problem too. I'm also a speed reader, which is okay for fiction but totally doesn't work for mathematics, at least partially because math is really fucking dense and there are all sorts of symbols and everything has meaning but only within the given text and there isn't much syntactic fluff. So I speed through a math paper and fifty pages later I haven't absorbed anything. Ugh. It takes practice, it takes patience, it takes fortitude. Learning math is a skill; it can be trained but it needs to be developed and it needs to be practiced. If you don't like reading math, and plenty of mathematicians don't like reading math, you might have more luck doing math. Get yourself some triangular grid paper and just start coloring stuff (triangular grid paper is the best thing ever). Or if you have a printer print some of the shapes from http://www.korthalsaltes.com/ and make some polyhedra. Or check out some of Roice Nelson's stuff: https://plus.google.com/+RoiceNelson/posts; don't worry about the words, just look at the pictures and try to figure out what's going on. Find or make a picture of Pascal's triangle and color all the even numbers red, or color all the numbers divisible by three green, and look at what kind of patterns you get. Or if you're more of an audio person, here's a rhythm generator based on the method we use to divide numbers. Play with the controls and listen to the things you get out. The important part is to get your hands dirty. It won't make it easier to read texts, necessarily, but it will at least make the subject itself less dry.
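The Pascal's-triangle coloring suggested above takes only a few lines of Python; printing `#` for odd entries and `.` for even ones draws the Sierpinski-triangle-style pattern you'd get by coloring the even numbers red:

```python
def pascal_rows(n):
    """First n rows of Pascal's triangle."""
    rows, row = [], [1]
    for _ in range(n):
        rows.append(row)
        # each interior entry is the sum of the two entries above it
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return rows

# '#' marks odd entries, '.' marks even ones; the odd entries trace out
# a Sierpinski-triangle pattern
for row in pascal_rows(16):
    print(''.join('#' if x % 2 else '.' for x in row).center(31))
```

Swapping `x % 2` for `x % 3 == 0` gives the divisible-by-three coloring instead, with its own pattern.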
For a moment I was worried about tooting my own horn, but this entire thread is me tooting like I'm the worst trumpet in a high school band, so whatever: if you want more advanced mathematics than what I post here, and want to see more of what research-level mathematicians (or at least one research-level mathematician) do, check out the #LayraExplains hashtag on Google Plus. That's me, talking about all sorts of nonsense. Most recently I've been going through the linear algebra behind quark colors, inspired by the recent pentaquark evidence. The prerequisites are usually linear algebra and multivariable calculus, and the patience to dig through an unmapped dependency tree of posts I've made, but I should have each individual post at least linking the post that it most immediately relies on. The best one is probably this one about Rubik's noncubes. I'm going to stick to mostly nontechnical stuff here when possible. G+ is more advanced because I talk to math profs there and don't want to sound too handwavy.
Prompted by some discussions on @seebs tumblr, I'm going to talk about something a little different today: Nonstandard logic! What I mean by logic is stuff like "if A then B", "A and B implies B" etc. The nitty-gritty stuff that underlies our abilities to reason formally. Just a quick reminder: single capital letters are supposed to be statements, sentences that assert something or other; other capitalized things are technical terms that look a lot like their colloquial usages but which I am capitalizing to distinguish them since they do behave slightly differently. We can do various things to statements, for instance from statements A and B we can make the statement A AND B. The statement A AND B is TRUE when both A and B are TRUE, and is FALSE if A is FALSE, B is FALSE, or both are FALSE. Similarly we can make the statement A OR B, which is TRUE if A is TRUE, B is TRUE, or if both are TRUE, and is FALSE if both A and B are FALSE. Finally, we can build NOT A, which is TRUE if A is FALSE and is FALSE if A is TRUE. I look like I'm yelling. So what do I mean by nonstandard logic? In particular I'm going to talk about Intuitionism. Normally, in classical (propositional) logic, if-then constructions can be turned into AND/OR/NOT constructions. Specifically, the statement "if A then B" is equivalent to B OR (NOT A). Either B is TRUE, or NOT A must be TRUE, since we can't have both A and NOT B be TRUE if the statement "if A then B" is TRUE. For example, "if it is sunny, then the birds sing" is equivalent to saying "the birds sing OR it is not sunny". We can't have both the sun out and the birds not singing. Note that the birds sing all the goddamn time anyway, regardless of the weather, and this is also covered since OR allows both statements to be TRUE. So suppose I claim to be the Pope. I am not the Pope, but suppose that statement A is "I am the Pope". 
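The claimed equivalence between "if A then B" and B OR (NOT A) can be checked mechanically by running through all four truth assignments; a quick Python sketch:

```python
from itertools import product

def implies(a, b):
    """Material implication: FALSE only when a is TRUE and b is FALSE."""
    return not (a and not b)

# "if A then B" agrees with B OR (NOT A) on every truth assignment
equivalent = all(implies(a, b) == (b or not a)
                 for a, b in product([True, False], repeat=2))
```

In particular `implies(False, b)` is TRUE no matter what `b` is, which is exactly the I-am-the-Pope situation below: a FALSE hypothesis makes the whole implication TRUE.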
And we can let B be anything we want, for instance, "the moon is made of cheese". So we end up with the statement "If I am the Pope then the moon is made of cheese." Is this statement TRUE? According to standard propositional logic, yes! In fact, it's equivalent to "the moon is made of cheese OR I am not the Pope." And since the statement "I am not the Pope" is TRUE, "the moon is made of cheese OR I am not the Pope" is also TRUE, and thus "If I am the Pope then the moon is made of cheese" is TRUE. And in fact we can put in anything we want for B. The statement "If I am the Pope then the moon is not made of cheese" is also TRUE, for the same reason.

From this kind of nonsense we get what is called the Principle of Explosion (PoE). Suppose that we have a contradiction, by which we mean a statement of the form A AND (NOT A): "I am the Pope AND I am not the Pope". If A is TRUE then NOT A can't be TRUE, and if NOT A is TRUE then A can't be TRUE. So the statement A AND (NOT A) must be FALSE. So from the FALSE statement A AND (NOT A), we can construct "if A AND (NOT A) then B", which by our previous argument is TRUE for any A or B, since A AND (NOT A) is always FALSE. So this is the PoE: in standard propositional logic, from a contradiction you can prove anything. Indeed, from a FALSE statement you can prove anything, but in standard propositional logic we usually refrain from declaring things to be TRUE or FALSE unless we really have to, so we usually only apply the PoE to things we know to be FALSE.

To a lot of people, this is kind of awful. We don't like contradictions anyway, but it feels rather extreme that you can prove absolutely anything, TRUE, FALSE or meaningless, from a contradiction. And so a lot of very foundational work on logic is about trying to get around this construction in various ways.

So what is Intuitionism? Intuitionism says that we're going to remove what is called the Law of the Excluded Middle (LEM).
We've seen the problems with the statement A AND (NOT A), but what about the statement A OR (NOT A)? A OR (NOT A) asserts that at least one of A or NOT A must be TRUE, which generally seems pretty reasonable, but it's one of the things that gets us into trouble with the PoE. So Intuitionism says to throw it out; it may be true for some statements A, but it's not necessarily true for all statements A, so we can't use it as a general law. The LEM is equivalent to the Law of Double Negation: NOT(NOT A) implies A. Again, this seems reasonable, but this is actually more directly part of the PoE, in particular the translation of "if A then B" to B OR (NOT A). So we toss it out.

This means that we get a weaker system: we can prove fewer things. Which might be good, since before we could prove too many things. But we may have now tossed out too much, and can't prove things that we want to be able to prove (this should remind you of the "which things are sets?" problem). In particular, proof by contradiction doesn't quite work anymore: showing that NOT A leads to a contradiction doesn't yield a proof of A, it only yields a proof of NOT(NOT A), which is strictly weaker than A.

We also get that not all contrapositives are equivalent to their original statements. Given "If A then B", taking the contrapositive yields "If NOT B then NOT A", which is reasonable, but taking the contrapositive again yields "If NOT(NOT A) then NOT(NOT B)", which is strictly weaker than the original statement "If A then B", since NOT(NOT B) does not imply B. Similarly, starting with "If NOT A then B" and taking the contrapositive yields "If NOT B then NOT(NOT A)", which is strictly weaker than the classical contrapositive, "If NOT B then A".
Again, this is necessary to get around the PoE, since the statement "If A then B" is always TRUE if B is TRUE, so its contrapositive would be "If NOT B then NOT A", and thus if A were the statement NOT C we'd get "If NOT B then NOT(NOT C)"; since B is TRUE, NOT B is FALSE, so we'd have a proof of NOT(NOT C); with the LEM we'd then be able to prove C regardless of what C is.

Having lost the LEM, we can't assert that any statement A has to be TRUE or FALSE as in classical logic, since if we could, we'd get the LEM immediately. Instead we get things called Heyting algebras. One model uses open sets of the real line: AND is intersection, OR is union, and NOT is the interior of the complement (the complement of a bunch of open intervals is a bunch of closed intervals, and taking the interior of a closed interval just removes the endpoints to get an open interval). The empty set corresponds to FALSE and the entire real line corresponds to TRUE, but there are plenty of values in between. If A is the set {y: y > 0}, then NOT A is the set {y: y < 0}, and then A OR (NOT A) is the set {y: y > 0 or y < 0}, which is not the entire real line, since it's missing a point.

Fun fact: without the LEM, NOT(NOT A) does not imply A, but NOT(NOT(NOT A)) still implies NOT A. Triple negation still works. You can see this in the real-line model.

A lot of computer-proof systems use Intuitionistic logic rather than standard classical logic, because Intuitionism aligns better with the notion of computability: Intuitionistic proofs of the existence of things always come with explicit algorithms for finding examples of those things. The guy who founded Intuitionism was named Luitzen Brouwer. The other thing Brouwer is famous for is his Fixed-Point Theorem in topology. The Fixed-Point Theorem is one of the main ingredients in, for instance, Nash's proof of the existence of Nash Equilibria. Fun fact: Brouwer's own Fixed-Point Theorem is not provable in Intuitionistic logic.
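The open-sets model is a full Heyting algebra, but the failure of the LEM and of double negation already shows up in a tiny three-element Heyting algebra on the values {0, 1/2, 1}, where AND is min, OR is max, and NOT sends 0 to 1 and everything else to 0. This is a standard small example, not from the post (think of 1/2 as playing the role of the set {y: y > 0}):

```python
# Truth values: 0 = FALSE, 1 = TRUE, 0.5 = something strictly in between.
def NOT(a):
    # pseudo-complement: the largest value whose min with a is 0
    return 1 if a == 0 else 0

a = 0.5
lem        = max(a, NOT(a))         # A OR (NOT A): comes out 0.5, not TRUE
double_neg = NOT(NOT(a))            # comes out 1, which does not give back a
triple_neg = NOT(NOT(NOT(a)))       # comes out 0, which equals NOT a
```

So the LEM fails, NOT(NOT A) fails to imply A, but triple negation collapses back to single negation, just as in the real-line model.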
Quick link because it's fun: A die that acts as a d2, a d3 or a d4 (or in fact as simultaneously rolling a d2 and a d3, or a d3 and a d4, but not a d2 and a d4). Henry Segerman has the best job and I envy him so much.
my god, this post is full of intriguing walls of text. major kudos for the spiders georg meme, and the interesting stuff on definitions and sets. i want to come back and try to parse other things.
I am a wall-of-text generator. The tricky bit is figuring out how to write a wall of text that is understandable to a somewhat-lay audience but not large enough to keep out the Mongols. I'm sure anyone who goes into research has this problem, except for maybe combinatorialists, because combinatorics is a lot of simple puzzles that have no business being as hard as they are.
The Erdős discrepancy problem

Suppose you're on a game show. You're placed on a path made of equally sized squares. Two squares to your left (not counting the square you're on) is a red square; two squares to your right (not counting the square you're on) is a blue square. The rest are black. Then you make a string of Ls and Rs, standing for Left and Right. So you might pick LL, or LRRRLRLRLR, or LRRRRRRRRRRRLRRRL, or something. Then the host, who knows what string you made, picks a number k. And then you need to look at every kth letter in your premade string and move one square according to those letters; when you run out of letters, you stop where you are. So if your string is LRRRLRLRLR and the host picks 1, then you need to move left, then right three times, then left, then right, etc, whereas if the host picks 2, then you need to move right five times and then stop, since all the Ls are in odd positions. And if the host picks 3, then you need to move right, then right, then left, and then stop. You get a dollar for each letter in your string (regardless of what number the host picks) unless you happen to ever step on one of the colored squares. What's the largest amount you can make?

This game isn't as easy as you might think at first. For example, you might think to just alternate left and right, to balance out your movements. If you go left first, you don't want to go left again, so then you go right, but then you can go left again without losing, and then you go right again: LRLR. But then the host picks 2 and makes you go right twice and you lose. So you have to start with LRR, and then what?

Spoiler You can make 11 dollars. See if you can find a sequence of 11 letters that doesn't land on a colored square regardless of what the host picks.

Suppose now that the colored squares are three squares to your left and right respectively, the rest of the squares being black. How many dollars can you make?
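The 11-dollar answer can be verified by brute force. Here's a sketch in Python: encode L as -1 and R as +1, note that stepping on a colored square means some running total along positions k, 2k, 3k, ... reaching ±2, and search for the longest string where every such total stays within ±1:

```python
from itertools import product

def survives(seq, C=1):
    """True if every partial sum along positions d, 2d, 3d, ...
    stays within [-C, C], i.e. you never reach a colored square."""
    n = len(seq)
    for d in range(1, n + 1):
        total = 0
        for j in range(d, n + 1, d):
            total += seq[j - 1]    # L = -1, R = +1
            if abs(total) > C:
                return False       # stepped on a colored square
    return True

def longest_surviving(C=1, cap=12):
    """Longest winning string of length <= cap, by exhaustive search."""
    best = 0
    for n in range(1, cap + 1):
        if any(survives(s, C) for s in product((-1, 1), repeat=n)):
            best = n
        else:
            break
    return best
```

With the colored squares two away (C = 1), the search confirms the spoiler: some length-11 string survives and no length-12 string does. Raising C to 2 models the three-squares-away variant, but exhaustive search at that scale is exactly the massive computation discussed below.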
Spoiler By massive computation, it is known, due to Konev and Lisitsa, that you can't make more than 1160 dollars. (Their search also turned up a 1160-letter string that works, so 1160 is exactly the maximum.) At the time of the announcement, a lot of the journalistic reports claimed that it indicated a new age of computer-aided and computer-driven mathematics, which is totally not the case. I'll probably make a post about this, but using massive computation to solve these kinds of search and case-checking problems is not new, nor is it generally considered mathematics proper; there is a rising and exciting field of actual computers-doing-mathematics, in the sense of computers constructing proofs, performing logical deductions, etc, but this is not considered one of those instances.

Finally, the Erdős discrepancy problem in its full form: if the two colored squares are n squares to your left and right respectively, the rest of the squares being black, is there always a finite limit to the amount of money you can win in this game? More formally, suppose you pick an infinite sequence a(1), a(2), a(3), etc, where each term is either +1 or -1. Then the Erdős discrepancy conjecture is that for any finite positive C, sup_{d, n in N} |Σ_{j = 1}^{n} a(jd)| > C.

Paul Erdős was an interesting guy. He wasn't attached to a university. He would travel around to various institutions, sit down with people there, and do math: a wandering sage, an itinerant mathematician who survived on guest lecturing and awards, who would stay in one place only long enough to collaborate on a few papers before moving on. And he wrote a lot of papers with these collaborators. Actors have a Bacon number based on appearing in movies; mathematicians have an Erdős number based on collaborating on papers. Mine is unfortunately infinite, since I never bothered to collaborate with anyone, but I know some people with low Erdős numbers, which isn't super hard since Erdős died in 1996.
Anyway, Erdős formulated the discrepancy problem in 1932. It was recently solved* by the closest thing that mathematics has to a rock star right now, Terence Tao, who is also an interesting guy, but who tends to stay in mostly one physical location. Fittingly, it was solved via Tao combining several group efforts, taking inspiration from the Polymath Project set up by Timothy Gowers to suggest attacks on notable open problems as well as from discussions on Tao's own blog.** *At least, Tao put up a preprint of a paper claiming to have an answer. It is currently under review in the sense that interested mathematicians are poking at it. ** Spoiler The answer appears to be yes, that for any finite number n, there is a finite limit to how long your string can be before the host can pick a k that makes you lose. Also I don't think Tao actually said anything about what the limit might be, but I haven't read the paper very carefully.
I have just realized that math might exist from a Buddhist standpoint whereas a dog doesn't exist. Someone must know of this horrific realization, because it makes math seem... horror. Cool and intriguing. Now I am going to collapse in a corner while hugging my copy of the Vibhanga to myself feeling lost and afraid.
Can you elaborate on that? I know a lot of mathematicians have latched onto, consciously or unconsciously, a Platonist or neo-Platonist notion of the "existence" of mathematics, but I'd love to hear a Buddhist perspective.
In Buddhism existence is defined in two ways: conventionally and absolutely. A dog conventionally exists because, circumstantially, it is currently around and we have dubbed it a dog. However, absolutely the dog does not exist, because dog is not dog, is not of dog, and does not possess dog. So basically the dog is not always a dog, will not always be a dog, hasn't always been a dog, and can change. Whereas something that absolutely exists is immutable and unchanging no matter what. So while a dog isn't a dog, isn't of dog or possessing dog, two is two, is of two, and possesses two. Whatever we might call it or wherever it is, two is always the same.
There's an interesting debate within philosophy of mathematics about whether "two is two, is of two, and possesses two". In particular there are the strains of nominalism and conceptualism, which deny the existence of "two" beyond human convention. You can have two dogs, but from an existence standpoint that twoness is firmly attached to the dogs in question, rather than being an entity in its own right. Plato and his philosophical descendants placed mathematical objects as absolute existences, or at least more absolute than physical objects. The world of mathematics was placed second only to the world of Ideal Forms, for while we can conceive of a "triangle" we cannot produce a true triangle. So of course there are people who ask if we can actually conceive of a triangle, or whether we are just pretending that the symbol "triangle" actually has a referent. There are also Formalists, who don't necessarily believe in existence at all; what they do believe in is rules, that mathematics is pushing meaningless symbols around and that if you follow the same rules then you get the same results. But rules are subject to human convention, and the symbols all hold no semantic content. But I do think that most of the mathematicians who I talk to, when asked about it, would say that mathematics has some sort of independent existence and that this existence is timeless, contextless and beyond social convention. But even among them there are questions of what kinds of mathematical objects exist. Does infinity exist? Do infinite sets exist? Can we claim that something exists without knowing where it is or what it looks like or how to build it? To most mathematicians the answer to these questions is "yes", if they bother to think about it at all, but the question of whether mathematics is or can be absolute is not resolved.
Thank you for explaining that more. I am not a mathy person at all. Just a religious, linguistic, and philosophyey person. I doubt I'll ever be a mathy person but it is interesting. At least from that right. Everything is more interesting once I can apply either religion or language to it because I am just...That sort of person. Thankfully pretty much everything can have those things applied to it!
There's a lot of room for philosophy in mathematics, but most mathematicians and philosophers don't know about it or ignore it. Most mathematicians aren't even aware that the philosophy of "what is mathematics" has undergone vast changes in the past hundred or so years, and looks like it will continue to change. For all of the claims of lofty abstraction and idealization, mathematics is still very much like engineering, seeing what can be done with tools known to work, questioning the tools only when ambition overcomes possibility. Also because most attempts to formulate a philosophy of mathematics or foundations for mathematics get too bogged down in details before leading anywhere interesting or useful. If it takes you hundreds of pages to justify the notion of addition, nobody's going to care about your work, but if you skip a lot of steps and don't check for consistency, no one's going to take you seriously. Which is why mathematicians don't really care about the big hole at the bottom of mathematics, because everyone agrees that patching it would be too hard and too costly, so we all pretend it isn't there.
Equality and equivalence Inspired by @EulersBidentity asking about isomorphism, even though the word never gets used from here on in. But examination and extrapolation of the notion of equivalence and sameness has brought about a bunch of exciting changes in mathematics, physics, computer science and philosophy, even if only among a small community, of which I am an enthusiastic observer. So we have the notion of something being itself, I am I and all of that. And really when we say that mathematically two things are equal, we're actually saying that we're looking at only one thing, and maybe it's just described differently on the two sides of the equal sign. 1 + 1 = 2. There is also the notion of two things being equivalent. There are many notions of equivalence, but one of the most important is that if two things are equivalent, you should be able to substitute one for the other. "1 + 1" and "2" are distinct as sequences of symbols, but mathematically they're equivalent because anywhere you see "2" you can replace it with "1 + 1" and not screw anything up. We usually just say that 1 + 1 = 2 and hope for the best. Things are obviously equivalent to themselves: replacing something by itself certainly shouldn't break anything, although it might annoy some people. So suppose you have a square block in a square hole. You can remove the block and put it back in the hole, and now we're back where we started. Or! You can remove the block, spin it by 90 degrees, and put it back in the hole, and since the block is square it should fit just fine despite being rotated. So the block is equivalent to itself rotated by 90 degrees. It's also equivalent to itself rotated 180 degrees, or 270 degrees. And I'm sure that there are philosophers who will argue with you about whether rotating the block 90 degrees and putting it back in gets you the same block, but it's certainly equivalent. So we can say that rotation by 90 degrees is an equivalence for square blocks.
So now consider rotating the block 180 degrees clockwise, and then another 180 degrees clockwise. You are back to your starting position. All of that rotation did nothing to the overall position of the block. So rotating by 180 degrees twice is equivalent to doing nothing. That's right, equivalences can be equivalent to other equivalences. Okay, yeah, that's kind of awful. But we can say it, and it's true, and that's all a mathematician needs to build a tower: equivalences of equivalences of equivalences of... But what's the point? Well, mathematicians study this kind of thing for its own sake, so forget them. What about physics? Particle physics these days is couched in a language known as "gauge theory", where in addition to observable things like position and momentum and energy, the underlying physical world is described using a whole bunch of hidden other stuff known as "gauge degrees of freedom" that are entirely unobservable. In other words, we can have two distinct descriptions that give rise to identical physical situations, and we would say that these are gauge equivalent. Yes, there's that word again. Except there's actually some physics encoded in these gauge degrees of freedom and gauge equivalences, and so we have to ask about which gauge equivalences give rise to the same gauge physics, i.e. are equivalent gauge equivalences. Yeah. Anyway, moving on to computer science. This is more along the lines of the replacement question, which is important because computers, as far as we know, don't actually know stuff. All they can do is manipulate strings of symbols according to rules, and the most important of those rules is which strings of symbols can be replaced with other strings of symbols. This is hard, often because choices need to be made and computers aren't great at making choices. And then once a choice is made somewhere you need to make sure that the same choice is made everywhere else. 
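As a toy model (my own, nothing standard), you can represent each equivalence of the square block as a rotation angle, with composition being addition of angles mod 360. Then "two half-turns do nothing" becomes a one-line computation:

```python
# Equivalences of the square block: rotations by multiples of 90 degrees.
ROTATIONS = [0, 90, 180, 270]

def compose(r1, r2):
    """Apply rotation r1, then rotation r2: angles add, 360 wraps to 0."""
    return (r1 + r2) % 360

# Rotating 180 degrees twice is equivalent to doing nothing:
print(compose(180, 180))   # 0

# Composing any two equivalences of the block gives another equivalence
# of the block -- the rotations are closed under composition:
print(all(compose(a, b) in ROTATIONS for a in ROTATIONS for b in ROTATIONS))  # True
```

Mathematicians would call this the cyclic group of order 4, but the point here is just that "doing nothing" (rotation by 0) sits among the equivalences, and other equivalences can compose down to it.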
Or wait, maybe not the same choice, but an equivalent choice. And there we go again. Equivalences of equivalences. Also of interest is the formalization of mathematics so that computers can be used, maybe not to come up with but to assist in verifying, proofs. But then we get to the question of encoding mathematics in a way that computers, which don't know things but can manipulate strings of symbols, can actually deal with without driving the mathematicians involved more nuts than they currently are. Computers don't like equality of things that look different. What they like are equivalences, but once you've gone that route you need to keep track of which equivalences are the same equivalence, and which are merely equivalent equivalences. Finally philosophy. All of this started at the bottom with the notion of what it means for two things to be the same. We have a thing being itself. I am I. But what does it mean when we say "X = Y"? What does it mean for one thing to be another thing? Going back to the block, are the block and the block rotated by 90 degrees the same thing? They're equivalent, and there's a way to get from one to the other, but not a unique way. We would think that there should be some obviously best identification of a thing with itself, one that shouldn't require more description beyond "it is what it is". So the existence of multiple, equally good but distinct ways to get from the block to the block rotated 90 degrees seems to indicate that no, they're not exactly the same block. But the various ways to get from one to the other are equivalent, and so there's really only one way, right? As far as anyone is concerned, all of these different ways might as well be just the ideal "being itself" way, right? Unless there are multiple equivalences going from one of those various ways to another, in which case we run up against the same problem. Unless those multiple equivalences are themselves equivalent. Etc.
Anyway, I'm sure most philosophers would view this as a lot of overblown, hairsplitting drivel. Which is true, until we get to questions like continuity of identity and discernibility of identicals and ontology versus description versus perception. Finally, back to why mathematicians might care. Beyond the philosophy and the computer science, mathematicians have models of these equivalences of equivalences of... in ways that seem geometrical in nature. You take your things that are equivalent and represent them as dots, and for each equivalence between them you draw a line between the corresponding dots. And then for two equivalences that are equivalent, you stretch a sheet so that the edges of the sheet are the two lines corresponding to the equivalences. And then for any two {equivalences of equivalences} that are equivalent, you fill the space between the corresponding sheets. And so on upwards in dimension, because mathematicians are bad at stopping. And then you have a geometric object. Well, topological really, because you don't really care how long or wide your equivalences are; that's not a meaningful question. But now you've built a valid mathematical object out of that equivalence-of-equivalence garbage. And so of course the first question any mathematician asks is "when are two of these things the same? When are they equivalent?"