As you might be able to tell, I am slowly losing the battle against my desire to talk about algebraic topology and homological algebra. Which is bad, because algebraic topology is not one of those subjects that I'm good at.
Okay, so orbitals. If you've taken a chemistry class or cracked open a chemistry textbook, you might have seen those pictures of electron orbitals, with the p and the s and the d and the f and maybe some other ones. They look like little lobes or spheres or whatnot, usually brightly colored, and they're supposed to be "instead" of the Bohr model, where the electrons orbited the nucleus in nice circular orbits. So what does it mean when we say that electron orbitals look like those pictures?

Firstly, what can we say about electrons? Suppose we have a single proton, a single point of positive electric charge. We get an electric field that extends outward radially, completely symmetrically, if the proton isn't moving and there isn't anything else relevant around it. So if we have a physically-feasible description of an electron orbiting that proton, we should be able to rotate that description around the proton and get another physically-feasible description.

From a quantum mechanical standpoint, a physical description consists of a function from points in 3-space to the complex numbers, called the wavefunction ψ. So if we want to draw a picture of the orbit, we need to figure out what kind of wavefunctions we're looking at.

We want to talk about the angular momentum of the electron. So what is angular momentum in quantum mechanics? Well, the angular momentum operators are

L_{x} = -i(y(∂/∂z) - z(∂/∂y))
L_{y} = -i(z(∂/∂x) - x(∂/∂z))
L_{z} = -i(x(∂/∂y) - y(∂/∂x))

and we actually want to look at

L^{2} = L_{x}^{2} + L_{y}^{2} + L_{z}^{2}

since we want a real number, not a vector, to get the squared orbital angular momentum. So we get a big thing with some derivatives and second derivatives that will take a function and spit out a function, and the functions that correspond to electrons with well-defined orbital angular momentum, i.e.
the eigenfunctions, end up being the harmonic homogeneous polynomials, with squared total angular momentum d(d+1), where d is the degree of the polynomial.

How many homogeneous polynomials of a given degree are there? Well, lots. But how many independent harmonic homogeneous polynomials of degree d are there? Well, let's see. The number of independent homogeneous polynomials of degree d in three variables is (d+2)(d+1)/2 (check this!). But we only want the harmonic polynomials. Recalling a previous post, every polynomial is a sum of terms of the form C^{k} times a harmonic piece, where C has degree 2, and so stuff from degree d-2 contributes non-harmonic parts to polynomials of degree d. So we remove them:

(d+2)(d+1)/2 - (d-2+2)(d-2+1)/2 = 2d + 1

So in degree d, there are 2d+1 independent homogeneous harmonic polynomials. For instance, in degree 2 we have

xy, xz, yz, x^{2}-y^{2}, 2z^{2} - x^{2} - y^{2}

as a set of independent homogeneous harmonic polynomials.

Unfortunately, a polynomial function P can't be a wavefunction, not directly. Instead, we change coordinates so that P is written in terms of the radius r and angular coordinates φ and θ. P then decomposes into R(r)Y(φ, θ), and we can then replace R(r) with another function of r to get a genuine wavefunction; the exact replacement depends on what kind of physics we're trying to model and isn't really important. All the fun stuff is contained in Y, which in particular indicates very non-circular "orbits". The functions Y are called spherical harmonics, because we can compute them by taking a harmonic function and pretending that it only matters on the unit sphere.

The point is that you can translate between homogeneous harmonic polynomials and wavefunctions corresponding to well-defined squared total angular momentum. So the same statement that there are 2d+1 independent homogeneous harmonic polynomials of degree d gives that there are 2d+1 independent wavefunctions with squared orbital angular momentum d(d+1).
We can further ask for wavefunctions with well-defined angular momentum in the z direction, i.e. the eigenfunctions of L_{z}, and we'll get particular functions; often those particular wavefunctions are the ones that get pictured in the textbooks, even though actual electrons don't care which direction is the z direction unless you ask them nicely.
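If you want to double-check the claims above, here's a small sketch using sympy (assuming sympy is available): it verifies that the five degree-2 polynomials listed earlier are killed by the Laplacian, and that the dimension count comes out to 2d+1.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

# the degree-2 basis from the post: each should be harmonic
degree2 = [x*y, x*z, y*z, x**2 - y**2, 2*z**2 - x**2 - y**2]
assert all(laplacian(f) == 0 for f in degree2)

# homogeneous polynomials of degree d in 3 variables, minus the
# non-harmonic contribution coming from degree d-2
def harmonic_dim(d):
    return (d + 2)*(d + 1)//2 - d*(d - 1)//2

print([harmonic_dim(d) for d in range(5)])  # [1, 3, 5, 7, 9]
```

So degree 0 gives the single s orbital, degree 1 the three p orbitals, degree 2 the five d orbitals, matching the chemistry-textbook counts.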
The Permutohedra and the Associahedra

Suppose you have an object A. There is one way to order the list {A}, i.e. as A. Draw a single point.

Suppose you have two objects, A and B. There are two ways to order the list {A, B}, i.e. as AB and BA. Draw a point for each ordering, and connect them. You end up with a line segment.

Suppose you have three objects, A, B and C. There are six ways to order the list {A, B, C}, i.e. as ABC, ACB, etc. Draw a point for each ordering, and connect a pair of points if the corresponding orderings differ by swapping two adjacent elements of the ordering, i.e. ABC connects to ACB and to BAC, but not to anything else. You end up with a hexagon.

Suppose you have four objects, A, B, C and D. There are twenty-four ways to order the list {A, B, C, D}. Draw a point for each ordering, connect a pair of points if they differ by swapping two adjacent elements of the ordering, and fill in the faces. You end up with a shape that's usually called a truncated octahedron. Note that some of the faces of this shape are square and some are hexagonal. The squares represent swaps that are disjoint, i.e. a swap of the first two objects and a swap of the last two objects. We can go from ABCD to BADC by stopping first at either BACD or ABDC, without having to go through any other steps. Etc.

Suppose you have n objects. There are n! ways to order the list of objects. Draw a point for each ordering, and connect the pairs whose orderings differ by a swap of two adjacent elements. If we fill in the surfaces and volumes and so on, we get a shape called the permutohedron on n objects. It tracks how one gets from one ordering of n objects to another ordering.
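The vertex-and-edge part of this construction is easy to brute-force. A minimal sketch in pure Python, just to see the counts:

```python
# Build the permutohedron's vertex-edge graph: vertices are orderings,
# edges connect orderings that differ by one adjacent swap.
from itertools import permutations

def permutohedron_graph(n):
    verts = list(permutations(range(n)))
    edges = set()
    for v in verts:
        for i in range(n - 1):
            w = list(v)
            w[i], w[i+1] = w[i+1], w[i]   # swap two adjacent elements
            edges.add(frozenset([v, tuple(w)]))
    return verts, edges

verts, edges = permutohedron_graph(4)
print(len(verts), len(edges))  # 24 36, matching the truncated octahedron
```

Each of the 24 orderings has 3 adjacent swaps available, and each edge gets counted from both ends, giving 24·3/2 = 36 edges.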
If a k-dimensional face can be written as the Cartesian product of an m-dimensional face and a (k-m)-dimensional face, then the relevant set of swaps for the edges of the k-dimensional face can be decomposed as swaps on a set of m+1 adjacent objects and swaps on a set of k-m+1 adjacent objects that's disjoint from the first set. So the shape of the faces tells us about disjoint suborderings.

You can build a permutohedron on n objects in n-dimensional space by looking at all of the points whose coordinates are orderings of the first n positive integers and then connecting the appropriate edges.

Honestly, though, I find this guy a little boring.

Consider an ordered list of objects that belong to a set with a multiplication operation. We don't ask for the multiplication to be associative, so we need parentheses.

If we have one object A, no big deal, our parenthesized list is just A. Draw a single point. Similarly, if we have two objects, our parenthesized list is just AB. Draw a single point.

If we have three objects, we have two ways to put in parentheses: (AB)C and A(BC). We draw a point for each, and connect them by a line.

If we have four objects, we have five ways to put in parentheses: ((AB)C)D, (A(BC))D, (AB)(CD), A((BC)D), and A(B(CD)). Draw a point for each, and connect two points if you can get from one parenthesization to the other by a move that looks like (XY)Z -> X(YZ), i.e. the associativity rule, where X, Y and Z can themselves be parenthesizations of lists. So we would connect ((AB)C)D to (A(BC))D and to (AB)(CD), but not to anything else. We end up with a pentagon; in particular, there are two ways to get from ((AB)C)D to A(B(CD)):

((AB)C)D -> (AB)(CD) -> A(B(CD))
((AB)C)D -> (A(BC))D -> A((BC)D) -> A(B(CD))

If we have five objects, we have fourteen ways to put in parentheses. I'll let you write them out.
We end up with a shape with six five-sided faces and three four-sided ones; there isn't a good way to embed this guy into 3-dimensional Euclidean space with all the faces regular.

If we have n objects in our list, we end up with what's called the associahedron on n objects. This is also called a Stasheff polytope, after the guy who (re)discovered them.

I like the associahedra because they're a little weirder than the permutohedra. For instance, the number of vertices in the associahedron on n objects is given by what is called the (n-1)-st Catalan number, given by

C_{n-1} = (2n-2 choose n-1)/n

The combinatorics of other parts of the associahedra is similarly more complicated than the corresponding parts of the permutohedra. As noted above, you can't usually embed the associahedra in Euclidean space with the faces being regular-ish.

I mostly approach these guys from a (higher)-algebraic standpoint, where we say "this product is commutative up to equivalence" or "this product is associative up to equivalence" and then the associated -hedra tell us how loose that notion of "up to equivalence" is.

Suppose that AB = BA, i.e. we can shrink our 2-object permutohedron to a single point. Then for all higher permutohedra, we shrink their edges to points and we get a single point, i.e. the ordering doesn't matter, our product is commutative.

Now suppose that while AB is not equal to BA, there is a map f_{A,B}: AB -> BA. Then the permutohedron on three objects measures the difference between

f_{B,C}I_{A}◦I_{B}f_{A,C}◦f_{A,B}I_{C} and I_{C}f_{A,B}◦f_{A,C}I_{B}◦I_{A}f_{B,C}

where I_{X} means the identity on X; these are the two ways of going around the hexagon from ABC to CBA. Does your algebraic structure care about the difference between those two paths?

So for instance, in our usual notion of a noncommutative multiplication, the permutohedron on 3 objects always yields the identity map, i.e.
that the difference between ABC and CBA doesn't depend on how we get from ABC to CBA. This corresponds to flattening the hexagon to a single line going from ABC to CBA.

Now take the permutohedron on 4 objects and flatten the A,B,C hexagons to single lines going from ABCD to CBAD and from DABC to DCBA, flatten the A,B,D hexagons to single lines going from ABDC to DBAC and from CABD to CDBA, and so on. You get a funny-looking shape that's basically just a single line from ABCD to DCBA, with some lobes attached that you can ignore. So you get that there's only one way to get from ABCD to DCBA, i.e. no path dependence for four-object products. And similarly no path dependence for more objects.

But what if there were a path dependence for three objects, but no path dependence for four objects? Then again we get no path dependence for five objects and so on. These are called "coherence" theorems: if your -hedra on n objects all collapse to single lines, then so do all of your -hedra on more than n objects.

These kinds of path-dependent algebraic properties show up in topology and higher category theory and string theory and basically a lot of places that are trying to do algebra in a "higher-dimensional" setting. For me, personally, I focus on the analogues of Lie algebras, where the Jacobi identity doesn't hold exactly, and the discrepancy is controlled by various extra structures that can be integrated to get analogues of Lie groups; these are the actual automorphism objects of string theories, not Lie groups.

Since it might come up: no, as far as I know, "exohedron" doesn't actually refer to anything other than me.
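Incidentally, the vertex counts of the associahedra (1, 1, 2, 5, 14, ...) are easy to generate: the number of ways to fully parenthesize n objects satisfies the Catalan recursion, splitting at the outermost multiplication. A minimal sketch:

```python
# Count full parenthesizations of a product of n objects.
from functools import lru_cache

@lru_cache(maxsize=None)
def parenthesizations(n):
    if n <= 2:
        return 1                  # A, or AB: nothing to choose
    # the outermost multiplication splits the list into a left block
    # of k objects and a right block of n - k objects
    return sum(parenthesizations(k) * parenthesizations(n - k)
               for k in range(1, n))

print([parenthesizations(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```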
A formal power series is like a polynomial with infinite degree. In calculus class you often care about where and when a power series converges, but for formal power series you don't worry about convergence.

Consider formal power series of the following type:

C(x) = x + c_{1}x^{2} + c_{2}x^{3} + ...

You can consider what happens when you plug one such formal power series into another. If C(x) and D(x) both have the above form, then their composition C(D(x)) also has the above form. You can also ask about the inverse with respect to composition: given C(x), find D(x) such that

C(D(x)) = x

How do you relate the coefficients of C and D?

d_{1} = -c_{1}
d_{2} = -c_{2} + 2c_{1}^{2}
d_{3} = -c_{3} + 5c_{2}c_{1} - 5c_{1}^{3}
d_{4} = -c_{4} + 6c_{3}c_{1} + 3c_{2}^{2} - 21c_{2}c_{1}^{2} + 14c_{1}^{4}

Where do these numbers come from? Well, for a given coefficient d_{k}, the terms on the right side all have total index k, where we consider c_{k1}...c_{kl} to have total index k1 + ... + kl. Also, the sign of a term is given by how many factors there are in it: one factor gives a minus sign, two give a plus, and so on. But what about the numbers themselves?

Consider A_{1} to be a point, A_{2} to be a line, A_{3} to be a pentagon, and in general, A_{k} to be the associahedron on k+1 objects.

Then A_{1} is a point and has one point, i.e. one A_{1}.

A_{2} is a line segment, and thus has one line, i.e. one A_{2}, and two points; we view the points as A_{1}^{2} to get the correct total index.

A_{3} is a pentagon, and thus has one pentagon A_{3}, five lines which we view as A_{2}A_{1}, and five points which we view as A_{1}^{3}, again multiplying by points to get the correct total index.

A_{4} has one 3-dimensional cell A_{4}, six pentagonal faces A_{3}A_{1}, three square faces A_{2}^{2}, twenty-one lines A_{2}A_{1}^{2}, and fourteen points A_{1}^{4}.

So you can see a match between the coefficients in D and the pieces of the associahedra:

d_{k} = Σ_{k1 + ... + kl = k} (# of pieces of A_{k} of the form A_{k1}A_{k2}...A_{kl})(-c_{k1})...(-c_{kl})
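You can verify the reversion coefficients above mechanically: write D with unknown coefficients, expand C(D(x)), and force the coefficients of x^2 through x^5 to vanish one at a time. A sympy sketch (assuming sympy is available):

```python
# Solve C(D(x)) = x order by order for the coefficients of D.
import sympy as sp

x = sp.symbols('x')
c = sp.symbols('c1:5')   # c1, c2, c3, c4
d = sp.symbols('d1:5')   # d1, d2, d3, d4

C = x + sum(c[i] * x**(i + 2) for i in range(4))
D = x + sum(d[i] * x**(i + 2) for i in range(4))

comp = sp.Poly(sp.expand(C.subs(x, D)), x)

# each coefficient equation is linear in one new unknown, so solve in order
sol = {}
for k in range(2, 6):
    eq = comp.coeff_monomial(x**k).subs(sol)
    dk = d[k - 2]
    sol[dk] = sp.solve(eq, dk)[0]

c1, c2, c3, c4 = c
print(sp.expand(sol[d[1]]))   # matches -c2 + 2*c1**2
```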
Not exactly math but I figure that a bunch of people here would appreciate twocubes trying to make manga in LaTeX. Also twocubes has some interesting discussions about how to cut an infinite deck of cards, but that's somewhat less hilarious.
If you occasionally allow yourself digressions, I hope you'll allow me this one -- Vectors! Zombies!! https://ricochet.com/665296/fun-with-vectors-and-the-zombie-apocalypse/
Oh, sure, digressions are super allowed! I mean, the only reason it looks like I'm actually running this thread is that I get to change the name and I make the majority of the posts, but that's because I like...whatever is the text equivalent of the sound of my own voice.
Today a bunch of coworkers and I were joking about using your fingers to count in binary and why we didn't teach that to kids, and I mentioned that kids naturally count in base 1 rather than base 2. One of the group expressed shock at the notion of base 1. I'm going to have fun presenting next week.
How does... Can you represent 0 in base 1? Can you represent decimals? I mean, yes I know you're talking about kids but I imagine it's a properly defined base, right? Basically tallies? Like. I can see negative numbers and fractions but. Is it ever used for stuff that isn't natural numbers?
No, it's not really a well-developed base at all. It's just tallies. The issues are probably related to the fact that the field with 1 element doesn't exist.
Yeah, having a place value system at all is a fairly sophisticated idea. A young beginner's number system is just a one-to-one correspondence between the group to be counted and an ordered list of words. "Ten" has no special significance, and is just the number after "nine." I suppose if you think of the names of place values in base ten as the same sort of ordered list, you could roughly describe the basic beginner's number system as "base 1". But the only real utility of the term is to troll other nerds.
Speaking of fun stuff with fields, I was reading about the q-analog of the Moebius inversion formula when I was struck with the thought of "why don't they just call it inqlusion-exqlusion?"
Spent some time trying to prove from Euclid's axioms that a line that intersects a circle at one point without being tangent there must intersect the circle again at a different point. Using algebra is cheating. I've managed to prove that a line segment that intersects a triangle transversely at some point that isn't a vertex must intersect the triangle again. Circles are harder.
The Twin Primes Conjecture

Prime numbers are the building blocks for multiplication, the fundamental objects for which there is no meaningful decomposition into smaller objects, and from which all other numbers can be made*. But the relationship between multiplication and addition is subtle and complicated, and in particular the way that the prime numbers interact with addition has spawned many a research career.

One statement that relates prime numbers and addition is the twin prime conjecture. Some pairs of prime numbers, like 3 and 5 or 5 and 7, differ by 2, while others, like 7 and 11, have larger gaps between them. We call a pair of primes that differ by 2 a pair of "twin primes". So 3 and 5 make a pair of twin primes, and 5 and 7 make a pair of twin primes.

The twin prime conjecture is that there are an infinite number of pairs of twin primes, i.e. that for any pair of twin primes, you can always find a larger pair if you look long enough. So after 5 and 7 we have 11 and 13, and then 17 and 19, and then 29 and 31, and... But while we can enumerate a lot of pairs, we still haven't actually proven that there are an infinite number of pairs of twin primes!

There has been some recent progress, though! In 2013, Yitang Zhang made the first piece of significant progress in a while, by showing that there are an infinite number of pairs of primes that differ by no more than 70 million. So that means that there is some finite number k, with k at most 70 million, such that there are an infinite number of pairs of primes that differ by k. This was done by encoding the primes as a function and examining the behavior of the function. Since then, people have pushed down the bound further and further; I think it's currently sitting at 246, which means that for some k no larger than 246, there are an infinite number of pairs of primes that differ by k.

More recently, however, we have an algebraic result! There is a nice analogy between integers and polynomials over finite fields.
You can add polynomials and multiply polynomials, and you can define a notion of a prime polynomial as a polynomial that cannot be factored into a pair of nonconstant polynomials. Taking things a little further gives us a nice translation between statements about integers and statements about polynomials over finite fields, first outlined by Andre Weil and put to good use ever since. In particular, finite fields tend to be simple enough and well-behaved enough that questions about polynomials over finite fields tend to be easier to answer than the analogous questions about the integers. For instance, there is an analogue of the Riemann Hypothesis for finite fields that was proven back in the 40s**.

So we can ask about gaps between prime polynomials: given a fixed polynomial f(x), how many pairs of prime polynomials g(x) and h(x) are there such that g(x) - h(x) = f(x)?

A few weeks ago, two mathematicians, Will Sawin and Mark Shusterman, proved a formula that tells us, for a given prime p, a given positive integer d and a given polynomial f(x), how many such pairs of prime degree-d polynomials g(x) and h(x) over the finite field Z/pZ differ by f(x). If this formula spits out a positive number for an infinite number of degrees d, then we get that there are an infinite number of pairs of prime polynomials that differ by f(x)!

The analogue for integers would be a formula where given integers a, b and k, we can say how many pairs of primes between a and b differ by exactly k. This would be even better than the twin prime conjecture, which just says that there's at least one pair no matter how far up you go; this would tell us exactly how many pairs. But the analogy only goes so far; after all, the Riemann Hypothesis for integers is still open***. Only time will tell if this analogue of the twin prime conjecture will yield an analogous proof.

* Allowing for multiplication by units and assuming that you allow the trivial product of 1.
** Technically for zeta functions of curves defined over finite fields. *** If there were a field with only one element, then we'd be able to turn the proof of the Riemann Hypothesis for (zeta functions of curves over) finite fields into a proof for the Riemann Hypothesis over the integers.
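To make the analogy concrete, here's a brute-force count of "twin" pairs in both worlds: twin primes in Z below 100, and pairs of prime (irreducible) monic cubics over Z/5Z that differ by f(x) = 1. For degree 2 or 3, a monic polynomial over a finite field is prime exactly when it has no root, which makes the check easy. The field, the degree, and the choice f = 1 here are arbitrary small examples, not the setup from the Sawin-Shusterman paper.

```python
# Twin primes in Z, by sieve.
def primes_below(n):
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, ok in enumerate(sieve) if ok]

ps = set(primes_below(100))
twins = [(a, a + 2) for a in sorted(ps) if a + 2 in ps]
print(twins)  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]

# "Twin" prime polynomials over F_5: monic cubics x^3 + a2 x^2 + a1 x + a0.
# A monic cubic over F_p is irreducible iff it has no root in F_p.
from itertools import product
p = 5

def is_prime_poly(a0, a1, a2):
    return all((t**3 + a2*t**2 + a1*t + a0) % p != 0 for t in range(p))

cubics = list(product(range(p), repeat=3))
# h prime with h + 1 also prime, i.e. g - h = 1 for g = h + 1
pairs = [(a0, a1, a2) for a0, a1, a2 in cubics
         if is_prime_poly(a0, a1, a2) and is_prime_poly((a0 + 1) % p, a1, a2)]
print(len(pairs))
```

As a sanity check, the number of monic irreducible cubics over F_5 is (5^3 - 5)/3 = 40, and the pair count above is what a Sawin-Shusterman-style formula would predict for this tiny case.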
I know that technically the term is "partially-defined composition" but I always think "like multiplication but with failure".
If only there were a field with only one element

Consider the real numbers. Suppose you had a function f defined on the real number line, and at some point p it vanishes, i.e. f(p) = 0. Then for any other function g, we have that the product of f and g gives a function fg that vanishes at p, i.e.

(fg)(p) = f(p)g(p) = 0*g(p) = 0

Also, if g also vanishes at p, then the function f+g vanishes at p, i.e.

(f+g)(p) = f(p) + g(p) = 0 + 0 = 0

We say that the set of functions that vanish at p is an ideal with respect to the ring of functions that are defined on the real line: the product of something in the ideal with anything else is also in the ideal, and the sum of two things in the ideal is in the ideal. We'll call this ideal I_{p}.

Now if we had two points p and q, we could ask about the functions that vanish at both p and q. These also form an ideal I_{{p,q}}, for the same reasons that the functions that vanish at p form an ideal. And similarly, for any collection C of points, we could ask about the set of functions that vanish at those points, and again get an ideal I_{C}.

But the ideal I_{p} is more special. Suppose we had two functions, g and h, and we know that gh vanishes at p. In other words,

g(p)h(p) = 0

This tells us that either g(p) = 0 or h(p) = 0, i.e. either g or h is in the ideal I_{p}. Hence if the product of two functions is in the ideal I_{p}, then at least one of the two functions must be in I_{p}.

This isn't true for the ideal I_{{p,q}}, since we could have that g(p) = 0 but g(q) = 1 and h(q) = 0 but h(p) = 1; now neither g nor h vanishes at both p and q, and so neither of them is in I_{{p,q}}, but their product vanishes at both points and so is in I_{{p,q}}.

Now let's talk about numbers, and in particular integers. Consider the set of integers that are multiples of 6. If you multiply a multiple of 6 by an integer, you get another multiple of 6, and if you add two multiples of 6, you get a multiple of 6.
So the multiples of 6 form an ideal with respect to the integers. We could call it I_{6}. But consider 10 and 3. Neither is a multiple of 6, but their product 30 is a multiple of 6. So I_{6} isn't as special as I_{p}; it's more like I_{{p,q}}.

But if we looked at 5, then we can look at the multiples of 5, I_{5}, and if we have two numbers a and b such that ab is a multiple of 5, then either a or b has to be a multiple of 5, because 5 is prime. So we might call the ideal I_{5} a prime ideal. And for the same reason, we would say that I_{p} is also a prime ideal. For technical reasons, I'm going to say that the ideal containing just the 0 function and the ideal containing all functions are not prime.

So now we can associate a set of points to each integer: a prime corresponds to a single point, and each other integer corresponds to the set of its prime factors. We have to be a little bit careful about factors that appear multiple times; does the set for 9 just consist of the point for 3, or somehow include that point twice? But we nevertheless get some sort of 1-dimensional thing. Not really a line, per se, but a 1-dimensional object of some sort, for some appropriate notion of dimension*.

Usually, we do this analogy not with the real line, but with 1-dimensional objects (i.e. curves) over finite fields. I'm not going to explain exactly what that means, but the relevant bit is that when we do this localizing, we get fields that are all extensions of the finite field, the way that the complex numbers are an extension of the real numbers.

But wait, you might ask: if we're really being serious here, for the integers we get prime fields, and you can't extend a prime field to get a different prime field. Well, true. But! Suppose that F_{1} were a field! Then F_{p} would be an extension of F_{1}, and so would F_{q}, sort of. Not literally, but close enough for geometry. In particular, we would get that Z is the ring of functions of a curve over F_{1}.

Why do we care?
Well, because there's a bunch of stuff that we know about rings of functions of curves over finite fields! For instance, the (analogue of the) twin-prime conjecture is true for rings of functions of curves over finite fields, and so is the (analogue of the) Riemann hypothesis! So if we were just allowed to say that Z is the ring of functions of a curve over a finite field, we'd be done with both of those problems instantly. But we can't, because F_{1} isn't a field. * The appropriate notion is Krull dimension: how long a chain of prime-ideal-contained-in-different-prime-ideal can you make.
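The prime/non-prime distinction between I_{5} and I_{6} from earlier is easy to check by brute force over a finite range; a minimal sketch:

```python
# Look for pairs (a, b) with ab in I_n but neither a nor b in I_n.
# For a prime ideal there should be no such pairs.
def counterexamples(n, bound=50):
    return [(a, b) for a in range(1, bound) for b in range(1, bound)
            if (a * b) % n == 0 and a % n != 0 and b % n != 0]

print((10, 3) in set(counterexamples(6)))  # True: 30 is a multiple of 6
print(counterexamples(5))                  # []: if 5 divides ab, 5 divides a or b
```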
Not a proof of the Riemann Hypothesis

Suppose you have a polynomial P(x,y,z) which is homogeneous, i.e. there is some number n such that P(x,y,z) is a sum of terms of the form p_{a,b,c}x^{a}y^{b}z^{c} where p_{a,b,c} is a constant and a+b+c = n. The equation P(x,y,z) = 0 gives us a bunch of triples, and the reason we asked for homogeneity is that if P(X,Y,Z) = 0 for a triple of numbers (X, Y, Z), then for any constant r, P(rX, rY, rZ) = 0 as well. So we projectivize, and say that (X, Y, Z) and (rX, rY, rZ) are the same projective point for any nonzero r.

Since we have three variables and one equation and we're also saying that scaling by a constant doesn't change what point we have, we end up with a 1-dimensional object, i.e. a curve, which we'll call C.

So now we can ask about how many points this curve has. Wait a minute, you say, isn't it an infinite number? Since we can probably pick x and y randomly and then solve for z? Well, I haven't told you where x, y, and z live! So suppose that the coefficients of the polynomial live in some field K. Then we can look at C(K), by which we mean all the triples X, Y, Z in K that satisfy P(X,Y,Z) = 0. If K is infinite, then C(K) probably has an infinite number of points, but if K is finite, then C(K) has only a finite number of points. How many?

Suppose that the coefficients live in F_{q}, i.e. a finite field with q elements, where q is a power of a prime number. Then for any n, we can look at the field F_{q^n}, which contains F_{q}, so we can consider C(F_{q^n}) for all n. We define the zeta function for C to be

ζ(C, s) = exp(Σ_{n=1}^{∞} #(C(F_{q^n}))q^{-ns}/n)

the exponential of an infinite series. It might not converge, but we don't really care; we're just trying to put all of the values together into a single object. Andre Weil showed that there is a polynomial p such that

ζ(C, s) = p(q^{-s})/((1-q^{-s})(1-q^{1-s}))

where

p(T) = Π_{i=1}^{b} (1 - a_{i}T)

for some integer b and some complex numbers a_{1}, ..., a_{b} with |a_{i}| = q^{1/2}.
This tells us, for instance, that the number of points of C that live in F_{q} is approximately q + 1, plus or minus a term that's proportional to q^{1/2}. We also get that p(q^{-s}) = 0 implies that q^{-s} = 1/a_{i} for some i, so ζ(C, s) = 0 implies that s has real part 1/2.

That might sound a little familiar. The Riemann zeta function ζ(s) has a bunch of zeros with real part 1/2, and there is a million dollars waiting for anyone who can prove that, other than s = -2n for positive integer n, all the solutions to ζ(s) = 0 have real part 1/2. And indeed, you can consider the integers to be a curve defined over the field F_{1}, and then its zeta function would indeed be the Riemann zeta function, and so would have zeros only at real part 1/2 (other than the pesky negative even integer points).

Exercise: why doesn't the previous statement constitute a proof of the Riemann Hypothesis?
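As a sanity check on the "approximately q + 1" estimate, here's a brute-force point count for a sample curve. The curve y^{2}z = x^{3} - xz^{2} over F_5 is just an arbitrary small example, not one from the post:

```python
# Count points of the projective curve y^2 z = x^3 - x z^2 over F_5:
# the affine points (z = 1) plus the single point at infinity (0 : 1 : 0).
q = 5

affine = sum(1 for X in range(q) for Y in range(q)
             if (Y*Y - (X**3 - X)) % q == 0)
count = affine + 1

print(count)                               # 8 points
print(abs(count - (q + 1)) <= 2 * q**0.5)  # True: within the 2*sqrt(q) error term
```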
Zero-Knowledge Proof

You have discovered a thing. A brilliant, important thing, and you want everyone to know! Except you don't want to reveal the thing, because someone will steal it. So how do you prove that you know a thing without telling anyone what it is?

The classical parable here is Ali Baba's cave: a cave entrance leads to a branching path that eventually closes up into a loop. But blocking the way around the loop is a door that only opens if you speak the password. So you, being you, get chased into the cave. You pick a branch and head down it before your pursuers enter the cave. Your pursuer, being the brilliant Disney-esque henchman that they are, loudly declares which branch they're going to head down and then heads down that path. If they picked the branch that wasn't the one you went down, you can just backtrack and head out of the cave. But if they picked the branch that you went into, then you can't escape into the other branch unless you know the password!

Now suppose that you've been thieving on the regular and you keep getting chased into this cave. Eventually the goon, or more likely their boss, is going to figure out that since you keep escaping, you must either be able to guess which direction the goon is going to head, or you know the password. Since the goon picks randomly (the one thing the goon is good at), it is much more likely that you know the password. But neither the boss nor the goon ever learns anything about the password itself, only that you know it, and that they should probably stop trying to chase you into this stupid cave.

As this is not the fleeing-from-possibly-lawful-pursuit thread but is instead the math thread, let's talk some math. We give a zero-knowledge proof scenario as follows: a prover, say Peggy, wants to convince a verifier, Victor, that she knows the answer to a question without letting Victor know what that answer is.
So instead Victor asks her to answer a related question, using some randomness to prevent her from simply figuring out the answer to the related question in advance.

For example, suppose we have a group G and two group elements g and h, and Peggy claims that she knows a value x such that g^{x} = h. If she tells Victor the value of x then he can confirm that g^{x} = h, but then he knows x. So instead we have Schnorr's protocol:

Peggy picks a random number r, computes C = g^{r}, and sends C to Victor. C is called the commitment.
Victor picks a random number c and sends that to Peggy. c is called the challenge.
Peggy computes a value y and sends that to Victor. y is called the response.
Victor computes g^{y} and compares that to Ch^{c}.

If Peggy knows x, then she can compute y = r+cx and Victor would confirm that

g^{y} = g^{r+cx} = Ch^{c}

However (depending on G), figuring out what y to send without knowing x is difficult.

We can show that Victor doesn't learn anything about x: suppose that Victor had told Peggy the challenge before she computed the commitment. Then she could pick a random response s and compute the commitment as C = g^{s}h^{-c}. Then Victor would check

g^{s} = g^{s}h^{-c}h^{c} = Ch^{c}

and would accept. Since Peggy didn't need to know x to do this, we see that Victor ultimately can't have learned anything about x. So this proof is zero-knowledge, in that Victor learns that Peggy knows x but doesn't learn what x is.

Of course, if Peggy had managed to guess the challenge beforehand, she could do the same thing, picking a random response and crafting the commitment to fit. So there is always a chance that Peggy got lucky. But if they repeat this process a bunch of times, Peggy making a random commitment, Victor sending a challenge, and then Peggy sending a response, then Victor will eventually decide that the probability that Peggy guessed the challenge in advance is low enough that she probably just knows x instead.
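Here's a toy run of Schnorr's protocol with tiny, insecure parameters (the subgroup of order 11 generated by 2 mod 23, chosen purely for illustration; a real deployment uses a group of cryptographic size):

```python
# One round of Schnorr's protocol in the order-11 subgroup of (Z/23Z)*.
import random

p, n, g = 23, 11, 2          # g = 2 has order n = 11 mod 23, since 2**11 % 23 == 1
x = 7                        # Peggy's secret
h = pow(g, x, p)             # public: Peggy claims to know x with g^x = h

r = random.randrange(n)
C = pow(g, r, p)             # commitment, sent to Victor
c = random.randrange(n)      # Victor's challenge
y = (r + c * x) % n          # response; computing this uses the secret x

# Victor's check: g^y should equal C * h^c
print(pow(g, y, p) == (C * pow(h, c, p)) % p)  # True
```

Exponents are reduced mod the group order n, which is why y = r + cx can be sent in the clear without directly exposing x.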
Zero-knowledge proofs are used to allow for authentication of secret things, like knowledge of a password, or identities or monetary values in anonymous settings like cryptocurrencies. A (cryptographic) digital signature is akin to a zero-knowledge proof, in that you're supposed to be able to create the signature only if you know a special key, but the signature itself shouldn't give away what the key is. There are ways to make it so that Victor doesn't have to send a challenge between the commitment and the response, requiring some computational hardness assumptions.