Or: Why you should forget about the vector cross product!
So titled because, while both are spoken of in tales and legends, neither unicorns nor cross products actually exist. Forget what you think you know about vector geometry!
A question for beginner physics students: consider two vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^3$ whose coefficients have units of length. You know the dot product
$$\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + u_3 v_3,$$
which is a number, has units of area, and equals the product of the length of one vector with the length of the other's projection onto it. You probably also know the other interesting operation, the cross product
$$\mathbf{u} \times \mathbf{v} = (u_2 v_3 - u_3 v_2,\ u_3 v_1 - u_1 v_3,\ u_1 v_2 - u_2 v_1),$$
which is a vector. Its length is the area of the parallelogram defined by $\mathbf{u}$ and $\mathbf{v}$. But its coefficients have units of area. How can it still be a vector in the same space as the other two if its coefficients have different units?
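The units puzzle aside, both operations are easy to compute. A quick sketch (plain Python, helper names mine) verifying the two geometric facts just quoted: the cross product is perpendicular to both factors, and its length is the parallelogram's area.

```python
from math import sqrt, sin, acos

def dot(u, v):
    # u . v = u1*v1 + u2*v2 + u3*v3
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    # u x v; each coefficient is a 2x2 minor (hence units of area)
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def norm(u):
    return sqrt(dot(u, u))

u, v = (2.0, 0.0, 0.0), (1.0, 3.0, 0.0)
w = cross(u, v)

# Perpendicularity: w . u = w . v = 0
assert dot(w, u) == 0 and dot(w, v) == 0

# |u x v| = |u| |v| sin(theta) = area of the parallelogram
theta = acos(dot(u, v) / (norm(u) * norm(v)))
assert abs(norm(w) - norm(u) * norm(v) * sin(theta)) < 1e-9
```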
There has been a lot of excitement recently with the news that physicists at LIGO have directly detected gravitational waves. Many wonderful people have done popular science blog posts or videos about what gravitational waves are, but I haven’t really seen anyone talk much about the mathematics of it. For example, why do Einstein’s equations mean that gravitational waves exist? How did Einstein predict gravitational waves?
It turns out that starting with Einstein’s equations and a few simplifying assumptions, it’s relatively easy to derive the necessity of gravitational waves. That’s what this post will try to do.
Suppose we’re running a particle physics/neurobiology simulation, or solving some machine learning/data mining problem, and we need to do some linear algebra. Specifically, we want to multiply two real $n \times n$ matrices $A$, $B$. To compute the $(i,j)$-th entry of the product we go along the $i$th row of $A$, multiplying entrywise with the $j$th column of $B$, so that for each entry of $AB$ we perform $n$ real multiplications (and $n-1$ real additions, but this makes no difference to the “average” number of operations needed, so we’ll be slack and gloss over such details until we formalize things in the next few sections). Since there are $n^2$ entries in the product, this gives a total of $n^3$ operations required to calculate the product, as one would intuitively expect. As a slightly unrealistic example, let us generously assume that our matrices are small, with $n$ just outside the largest matrix size allowed by GNU Octave on my computer; then, at the number of operations per second a “standard” PC processor can manage, one matrix multiplication would take slightly over a day. In real-world applications such as high-performance computing or big data problems the dimension $n$ gets much bigger, so naive multiplication is still too slow, even given the extra processing power we gain by switching to supercomputers.
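For concreteness, here is a sketch of the naive algorithm just described (the names are mine, not from any particular library), with a counter confirming the cubic number of multiplications:

```python
def naive_matmul(A, B):
    """Multiply two n x n matrices given as lists of rows,
    returning (product, number_of_real_multiplications)."""
    n = len(A)
    mults = 0
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):          # row of A
        for j in range(n):      # column of B
            s = 0.0
            for k in range(n):  # walk along row i and down column j
                s += A[i][k] * B[k][j]
                mults += 1
            C[i][j] = s
    return C, mults

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C, mults = naive_matmul(A, B)
assert C == [[19.0, 22.0], [43.0, 50.0]]
assert mults == 2 ** 3   # n^3 multiplications for n = 2
```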
The Cosmic Censorship Hypothesis (CSH) was put forward by Roger Penrose in 1969, and (roughly) states
“there are no naked singularities”.
The hypothesis proposes that whenever a singularity occurs, such as in the center of a black hole, it must occur behind an event horizon. A singularity outside of an event horizon is termed a naked singularity, and the CSH says that such singularities do not exist. Cosmic Censorship has pretty profound implications for fundamental physics. For instance, a failure of the Cosmic Censorship Hypothesis leads to a failure of determinism in classical physics, since one can’t predict the behaviour of spacetime in the causal future of a singularity. On the other hand, if the Cosmic Censorship Hypothesis holds then (outside the event horizon) the singularity does not affect determinism. In 1991, Stephen Hawking made a wager with physicists Kip Thorne and John Preskill that the Cosmic Censorship Hypothesis is true. Six years later in 1997, Hawking conceded the bet “on a technicality”, after computer calculations showed that naked singularities could exist, albeit only in exceptional (not physically realistic) circumstances. In this post, we’re going to be looking at extremal black holes, and their relationship to the Cosmic Censorship Hypothesis.
Today I’m going to be talking about an interesting little toy model in statistical mechanics – the Primon Gas.
Consider a physical system with a discrete energy spectrum
$$E_p = E_0 \log p, \qquad p = 2, 3, 5, 7, 11, \dots$$
with one energy for each prime $p$. Each energy in the spectrum corresponds to a particle with that energy. If we second quantize this system, we obtain a creation operator $a_p^\dagger$ for each of these particles. Using these operators, we can act on a vacuum state (zero energy state), denoted $|0\rangle$, to obtain new states. We get the following ‘tower’ of states with corresponding energies:
$$|n\rangle = (a_{p_1}^\dagger)^{k_1} \cdots (a_{p_r}^\dagger)^{k_r}\,|0\rangle, \qquad E_n = E_0 \log n, \qquad n = p_1^{k_1} \cdots p_r^{k_r}.$$
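As a numerical sanity check (a sketch; the function names are mine, and I take $E_0 = 1$ so each prime $p$ carries energy $\log p$): the state labelled by $n = p_1^{k_1} \cdots p_r^{k_r}$ has total energy $\log n$, summed over its prime factorization.

```python
from math import log, isclose

def prime_factors(n):
    """Trial-division factorization: returns a list of (prime, exponent)."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            k = 0
            while n % p == 0:
                n //= p
                k += 1
            factors.append((p, k))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def energy(n):
    # Energy of the state built from n's factorization:
    # sum of k * log(p) over the prime powers p^k dividing n.
    return sum(k * log(p) for p, k in prime_factors(n))

# The tower: the state labelled n has energy log n.
for n in range(2, 200):
    assert isclose(energy(n), log(n))
```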
Two chess variants, played on the chess board.
I can’t lay claim to the first, but I’m pretty sure that the second is my own creation – I certainly came up with it myself, but I don’t know if anyone else has thought of it in this form before. There is mention of something similarly-inspired on the Chess Variants Pages, but the game is very different to mine, and you need special equipment!
These variants both use the same board and pieces as regular chess. The movement and capture rules are the same as chess. The difference lies in the geometry of the board, where we make edge identifications to alter the space the game is played on. The first variant is called Cylindrical Chess, the second is Möbius Chess.
I like sequences that have non-conventional definitions. For example, there is the very non-equational Look and Say Sequence made famous by Conway:
$$1,\ 11,\ 21,\ 1211,\ 111221,\ 312211,\ \dots$$
This starts with the seed value 1, and each successive term is generated by looking at the previous term, saying the numbers appearing there out loud, and writing down the numbers you find yourself saying. For instance, the term ‘111221’ becomes “Three ones, two twos, one one”, which transliterates to ‘312211’.
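The generating rule fits in a few lines; a sketch using Python’s `itertools.groupby` (the function name is mine):

```python
from itertools import groupby

def look_and_say(term):
    """Read off runs of identical digits: '111221' -> '312211'."""
    return ''.join(str(len(list(run))) + digit
                   for digit, run in groupby(term))

# The first few terms from the seed '1':
seq = ['1']
for _ in range(5):
    seq.append(look_and_say(seq[-1]))
assert seq == ['1', '11', '21', '1211', '111221', '312211']
```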
Despite this crazy generating rule, there is actually a lot of structure to be found in this sequence. Conway found that certain strings of digits, once created, never interact with those to their left or right again, instead going through an internal ‘life cycle’, growing and changing until they reach a point where they are a string of several such atomic strings joined together; each of these then goes off on its own life cycle like some strange numerical mitosis. Conway actually named these atomic strings after the elements, since he found 92 such atomic strings containing the numbers 1, 2, and 3 alone, and two ‘transuranic’ strings for each other natural number.
Conway also found that the ratio of the lengths of successive terms approaches a constant, and gave a degree-71 polynomial of which this constant is the unique positive real root.
The Look and Say Sequence is surprisingly fruitful, given how non-mathematical its rule seems.
An ellipse is a set of points in the plane which satisfy a certain equation, namely,
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.$$
In this sense, an ellipse appears to just be a squashed circle. But there are dozens of ways to think about ellipses – as conic sections, for example – and in this post the idea I’d like to use is that of foci:
You can make an ellipse with two pins and a piece of string tied in a circle. Stick the two pins into a page and loop the string over them. Stretch the string out tight with a pen and draw all around. If you make sure to keep the string taut, you will have drawn an ellipse.
It’s a common enough geometric construction, and I’m sure many readers would know of it. I myself remember my dad showing it to me when I was a little boy. What it tells us is that an ellipse can be thought of as the set of points whose total distance from two distinct points, the foci, is constant (plus the distance between those two points, but that’s always constant).
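The string construction can be checked numerically. A sketch, assuming the standard form $x^2/a^2 + y^2/b^2 = 1$ with $a \ge b$ and foci at $(\pm c, 0)$ where $c = \sqrt{a^2 - b^2}$: the sum of the distances from any point of the ellipse to the two foci is the constant $2a$.

```python
from math import sqrt, cos, sin, pi, isclose, hypot

a, b = 5.0, 3.0                 # semi-major and semi-minor axes
c = sqrt(a * a - b * b)         # focal distance from the centre
f1, f2 = (-c, 0.0), (c, 0.0)    # the two foci (the 'pins')

for i in range(360):
    t = 2 * pi * i / 360
    x, y = a * cos(t), b * sin(t)   # a point on the ellipse
    total = hypot(x - f1[0], y - f1[1]) + hypot(x - f2[0], y - f2[1])
    assert isclose(total, 2 * a)    # the 'string' has constant length
```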
Naturally, a mathematician asks: “How can I generalise this idea?”
Suppose we were to take the plane, $\mathbb{R}^2$, and turn it into a graph – not the function plotting kind, the edge and vertex kind.
We let every point in the plane be a vertex, and we draw an edge between two vertices if they are at a distance of exactly $1$ from each other (with the Euclidean metric; we could take other metrics or other base spaces, but that is perhaps a topic for a further post).
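The adjacency rule is simple to state in code; a sketch (the helper name is mine, and with floating-point coordinates the “exactly 1” test needs a tolerance):

```python
from math import isclose, sqrt

def adjacent(p, q):
    """Edge iff the Euclidean distance between p and q is exactly 1
    (up to floating-point tolerance)."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return isclose(dx * dx + dy * dy, 1.0)

# An equilateral triangle of side 1: three mutually adjacent vertices,
# so the graph contains triangles.
p, q, r = (0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)
assert adjacent(p, q) and adjacent(q, r) and adjacent(p, r)

# Each vertex's neighbourhood is the whole unit circle around it.
assert adjacent((0.0, 0.0), (0.6, 0.8))
assert not adjacent((0.0, 0.0), (1.0, 1.0))
```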
We’ve now got a pretty formidable object! Uncountably many vertices and uncountably many edges at each vertex! But this graph is amenable enough after a little thought. Three quick ideas:
Suppose we were to take a finite alphabet of $k$ letters, say $\{0, 1, \dots, k-1\}$, and we considered all the possible ‘words’ of a given length $n$. For instance, the words of length 2 over $\{0, 1\}$ are:
$$00, \quad 01, \quad 10, \quad 11,$$
and there are $k^n$ of these in general. Let’s denote the set of length-$n$ words using the alphabet of $k$ letters by $W(n, k)$.
What I am interested in now is “For a given $n$ and $k$, what is the minimum length string of letters I can write down and still be sure that every word in $W(n, k)$ appears somewhere in it as a substring?”. Why might I be interested in such a thing? Because I really should be working on my PhD, that’s why.
The string $01100$ contains every length-2 word of 2 letters, in order 01, 11, 10, 00. It’s of optimal length, too, because if we start with just ’01’ and successively add ‘1’, ‘0’, and ‘0’, we find that at every stage the newly added letter creates a new sub-word not previously seen, and so every element of $W(2, 2)$ appears exactly once. We cannot delete a letter to make a shorter string without losing one of the words. Similarly,
$$0001011100 \qquad \text{and} \qquad 0010211220$$
are examples of optimal tours through $W(3, 2)$ and $W(2, 3)$ respectively, as they both have this property that each letter added (once the first word is complete) adds a previously unseen word.
Notice I’ve introduced the terminology tour for any string that contains all words in a given $W(n, k)$, with optimal tour being a tour of minimum possible length for that $n$ and $k$.
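In other language, optimal tours are linearised de Bruijn sequences. A sketch of one standard construction, the classic recursive Lyndon-word method (the function name is mine, and this is not the construction used above), together with a brute-force check that every word appears:

```python
def de_bruijn_tour(k, n):
    """Return a shortest string over {0,...,k-1} containing every
    length-n word as a substring (length k**n + n - 1)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        # Standard recursive generation of a cyclic de Bruijn sequence
        # via Lyndon words whose lengths divide n.
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    cyclic = ''.join(map(str, seq))
    return cyclic + cyclic[:n - 1]   # linearise the cyclic sequence

for k, n in [(2, 2), (2, 3), (3, 2)]:
    tour = de_bruijn_tour(k, n)
    assert len(tour) == k ** n + n - 1
    words = {tour[i:i + n] for i in range(len(tour) - n + 1)}
    assert len(words) == k ** n   # every word appears, each exactly once
```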