# Unicorns of Geometry: Getting Cross About Products

Or: Why you should forget about the vector cross product!

So titled because, while both are spoken of in tales and legends, neither unicorns nor cross products actually exist. Forget what you think you know about vector geometry!

A question for beginner physics students: consider two vectors $u = u_x \hat{x} + u_y \hat{y}+ u_z \hat{z}$ and $v = v_x\hat{x} + v_y \hat{y} + v_z \hat{z}$. The coefficients $u_i, v_i$ have units of length. You know the dot product

$u \cdot v = u_xv_x + u_yv_y + u_z v_z$,

which is a number, has units of area, and equals the length of one vector multiplied by the length of the other's projection onto it. You probably also know the other interesting operation, the cross product

$u \times v = (u_yv_z - u_zv_y)\hat{x} + (u_zv_x - u_xv_z)\hat{y} + (u_xv_y - u_yv_x)\hat{z}$

which is a vector. Its length is the area of the parallelogram defined by $u$ and $v$. But its coefficients have units of area. How can it still be a vector in the same space as the other two if its coefficients have different units?
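A quick numerical check of these two facts (a sketch using NumPy; the particular vectors are arbitrary examples):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])

# Dot product: |u||v|cos(theta) -- units of area if the coefficients are lengths.
dot = u @ v

# Cross product: its length is the area of the parallelogram spanned
# by u and v, i.e. |u||v|sin(theta).
cross = np.cross(u, v)

cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
sin_theta = np.sqrt(1 - cos_theta**2)
area = np.linalg.norm(u) * np.linalg.norm(v) * sin_theta

assert np.isclose(np.linalg.norm(cross), area)
```

Note that the check only confirms the *magnitude* relation; the puzzle about the units of the coefficients remains.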

# Predicting gravitational waves

There has been a lot of excitement recently with the news that physicists at LIGO have directly detected gravitational waves. Many wonderful people have done popular science blog posts or videos about what gravitational waves are, but I haven’t really seen anyone talk much about the mathematics of it. For example, why do Einstein’s equations mean that gravitational waves exist? How did Einstein predict gravitational waves?

It turns out that starting with Einstein’s equations and a few simplifying assumptions, it’s relatively easy to derive the necessity of gravitational waves. That’s what this post will try to do.

# Matrix multiplication is faster than you expect (part I)

Suppose we’re running a particle physics or neurobiology simulation, or solving some machine learning or data mining problem, and we need to do some linear algebra. Specifically, we want to multiply two real $n \times n$ matrices $A$, $B$. To compute the $i,j$-th entry of the product we go along the $i$th row of $A$, multiplying entrywise with the $j$th column of $B$, so that for each entry of $AB$ we perform $n$ real multiplications (and $n-1$ real additions, but this makes no difference to the “average” number of operations needed, so we’ll be slack and gloss over such details until we formalise things in the next few sections). Since there are $n^2$ entries in the product, this gives a total of $n^3$ operations required to calculate the product, as one would intuitively expect.

As a slightly unrealistic example, let us generously assume that our matrices are small, of dimension $n \approx 10^5$ (which incidentally is just outside the largest matrix size allowed by GNU Octave on my computer). Then, assuming the ability to perform $10^{10}$ operations a second on a “standard” PC processor, one matrix multiplication would take on the order of $10^5$ seconds, or slightly over a day. In real-world applications such as high-performance computing or big data problems the dimension gets much bigger, so naive multiplication is far too slow, even given the extra processing power we gain by switching to supercomputers.
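The schoolbook algorithm just described, as a sketch in plain Python (NumPy's `A @ B` of course dispatches to much faster routines under the hood):

```python
import numpy as np

def naive_matmul(A, B):
    """Multiply n x n matrices the schoolbook way: O(n^3) operations."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Entry (i, j): i-th row of A dotted with j-th column of B
            # -- n multiplications (and n - 1 additions).
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
assert np.allclose(naive_matmul(A, B), A @ B)
```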

# Primon Gas

Today I’m going to be talking about an interesting little toy model in statistical mechanics – the Primon Gas.

Consider a physical system with a discrete energy spectrum

$E = \{ \ln 2, \ln 3, \ln 5, \ldots\} = \{ \ln p : p \text{ is prime} \}$

Each energy in the spectrum corresponds to a particle with that energy. If we second quantize this system, we obtain a creation operator $\alpha_{p}$ for each of these particles. Using these operators, we can act on a vacuum state (zero energy state), denoted $| 1 \rangle$, to obtain new states. We get the following ‘tower’ of states with corresponding energies:

$\begin{array}{rcl} \textbf{State} & \to & \textbf{Energy} \\ \alpha_2 | 1 \rangle & \to & \ln 2\\ \alpha_3 | 1 \rangle & \to & \ln 3 \\ \alpha_2 \alpha_2 | 1 \rangle & \to & \ln 4 \\ \alpha_5 | 1 \rangle & \to & \ln 5 \\ \vdots \end{array}$
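By unique factorisation, every state in this tower corresponds to a positive integer $n$ with energy $\ln n$, so the partition function of the primon gas is $Z(\beta) = \sum_n e^{-\beta \ln n} = \sum_n n^{-\beta} = \zeta(\beta)$, the Riemann zeta function. A quick numerical check (a sketch; the truncation at $10^6$ states is an arbitrary choice):

```python
import math

def partition_function(beta, n_max=10**6):
    """Z(beta) = sum over states of e^{-beta E} = sum_n n^{-beta}."""
    # Each state alpha_{p1} ... alpha_{pk} |1> has energy
    # ln(p1 * ... * pk) = ln n, with one state per positive integer n.
    return sum(n ** (-beta) for n in range(1, n_max + 1))

# At beta = 2 this converges to zeta(2) = pi^2 / 6 ~ 1.6449.
print(partition_function(2.0))
```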

# Fun With Sequences

I like sequences that have non-conventional definitions. For example, there is the very non-equational Look and Say Sequence made famous by Conway:

$\{1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, 31131211131221, \ldots \}$

This starts with the seed value 1, and each successive term is generated by looking at the previous term, saying the numbers appearing there out loud, and writing down the numbers you find yourself saying. For instance, the term ‘111221’ becomes “Three ones, two twos, one one”, which transliterates to ‘312211’.
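The generating rule is short enough to sketch in a few lines of Python (using `itertools.groupby` to find the runs of equal digits):

```python
from itertools import groupby

def look_and_say(term):
    """Read off runs of digits: '111221' -> '312211'."""
    return ''.join(str(len(list(run))) + digit
                   for digit, run in groupby(term))

seq = ['1']
for _ in range(7):
    seq.append(look_and_say(seq[-1]))
print(seq)  # ['1', '11', '21', '1211', '111221', '312211', ...]
```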

Despite this crazy generating rule, there is actually a lot of structure to be found in this sequence. Conway found that certain strings of digits, once created, never interacted with those to their left or right again, instead going through an internal ‘life cycle’, growing and changing until they reached a point where they were several such atomic strings joined together; each of these then went off on its own life cycle like some strange numerical mitosis. Conway actually named these atomic strings after the elements, since he found 92 such atomic strings containing the numbers 1, 2, and 3 alone, and two ‘transuranic’ strings for each other natural number.

Conway also found that the ratio of the lengths of successive terms approaches a constant, and gave a degree-71 polynomial of which this constant is the unique positive real root.

The Look and Say Sequence is surprisingly fruitful, given how non-mathematical its rule seems.

# Higher Dimensional Ellipsoids

An ellipse is a set of points in the plane which satisfy a certain equation, namely,

$\frac{(x - x_0)^2}{r_1^2} + \frac{(y - y_0)^2}{r_2^2} = 1$

In this sense, an ellipse appears to just be a squashed circle. But there are dozens of ways to think about ellipses – as conic sections, for example – and in this post the idea I’d like to use is that of foci:

You can make an ellipse with two pins and a piece of string tied into a loop. Stick the two pins into a page and loop the string over them. Stretch the string out tight with a pen and draw all the way around. If you make sure to keep the string taut, you will have drawn an ellipse.

It’s a common enough geometric construction, and I’m sure many readers would know of it. I myself remember my dad showing it to me when I was a little boy. What it tells us is that an ellipse can be thought of as the set of points whose total distance from two distinct points, the foci, is constant (plus the distance between those two points, but that’s always constant).
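A quick check of this characterisation (a sketch; the axis lengths are arbitrary examples). Parametrising the ellipse as $x = a\cos t$, $y = b\sin t$ with foci at $(\pm c, 0)$, $c = \sqrt{a^2 - b^2}$, the distance sum should be constant at $2a$:

```python
import math

a, b = 5.0, 3.0                      # semi-major and semi-minor axes
c = math.sqrt(a**2 - b**2)           # distance from centre to each focus
f1, f2 = (-c, 0.0), (c, 0.0)         # the two foci

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

for k in range(100):
    t = 2 * math.pi * k / 100
    p = (a * math.cos(t), b * math.sin(t))
    # Total distance to the two foci is constant around the curve: 2a.
    assert abs(dist(p, f1) + dist(p, f2) - 2 * a) < 1e-9
```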

Naturally, a mathematician asks: “How can I generalise this idea?”

# The Unit Plane Graph

Suppose we were to take the plane, $\mathbb{R}^2$, and turn it into a graph – not the function plotting kind, the edge and vertex kind.

We let every point in the plane be a vertex, and we draw an edge between two vertices if they are at a distance of exactly $1$ from each other (using the Euclidean metric; we could take other metrics or other base spaces, but that is perhaps a topic for a further post).

We’ve now got a pretty formidable object! Uncountably many vertices and uncountably many edges at each vertex! But this graph is amenable enough after a little thought. Three quick ideas:

# Complete Word Tours

Suppose we were to take a finite alphabet $\mathbb{A}_n$ of $n$ letters, say $\{ 0, 1, \ldots, n-1\}$, and we considered all the possible ‘words’ of a given length. For instance, the words of length 2 are:

$\{00, 01, \ldots, 0(n-1), 10, 11, \ldots, (n-1)(n-1)\}$

and there are $n^2$ many of these. Let’s denote the set of length-$m$ words using the alphabet $\mathbb{A}_n$ by $\mathbb{W}^m_n$.

What I am interested in now is “For a given $n$ and $m$, what is the minimum length string of letters I can write down and still be sure that every word in $\mathbb{W}^m_n$ appears somewhere in it as a substring?”. Why might I be interested in such a thing? Because I really should be working on my PhD, that’s why.

Example:

01100

contains every length-2 word of 2 letters, in order 01, 11, 10, 00. It’s of optimal length, too, because if we start with just ’01’ and successively add ‘1’, ‘0’, and ‘0’, we find that at every stage the newly added letter creates a new sub-word not previously seen, and so every element of $\mathbb{W}^2_2$ appears exactly once. We cannot delete a letter to make a shorter string without losing one of the words. Similarly,

0011101000

and

0201221100

are examples of optimal tours through $\mathbb{W}^3_2$ and $\mathbb{W}^2_3$ respectively, as they both have this property that each letter added after the first $m$ adds a previously unseen word.
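A small checker for this property (a sketch; strings like these are closely related to de Bruijn sequences). A string contains all of $\mathbb{W}^m_n$ exactly when its distinct length-$m$ substrings number $n^m$, and an optimal tour containing each word exactly once has length $n^m + m - 1$:

```python
def is_tour(s, n, m):
    """Does string s contain every length-m word over {0, ..., n-1}?"""
    words = {s[i:i + m] for i in range(len(s) - m + 1)}
    return len(words) == n ** m

# Each example from the text is a tour of optimal length n^m + m - 1.
assert is_tour("01100", 2, 2) and len("01100") == 2**2 + 2 - 1
assert is_tour("0011101000", 2, 3) and len("0011101000") == 2**3 + 3 - 1
assert is_tour("0201221100", 3, 2) and len("0201221100") == 3**2 + 2 - 1
```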

Notice I’ve introduced the terminology tour for any string $\tau$ that contains all words in a given $\mathbb{W}^m_n$, with optimal tour being the tour $\tau^m_n$ that is of minimum possible length for that $m$ and $n$.

# Avian Flow: Continuum Boid Dynamics

I’m excited for this, if only because of the enormous amount of humorous nomenclature that will come of it. Ok, here we go.

Have you ever heard of Boids? Boids (‘Birds’ with a Brooklyn accent) refers to a certain type of algorithm for simulating the flocking of large numbers of animals – for example, the eponymous birds. The word refers both to the algorithm itself and to the individual elements being simulated, which are rendered onscreen and whose motion through simulated space evolves according to a set of three rules. The particular details of the algorithm may vary from implementation to implementation, but these three main features of the boids’ behaviour are usually the same.

Boids, as entities, possess a position and a velocity, and obey the following physical principles:

1. Alignment. Boids wish to conform to the flight paths of their neighbours, and so will rotate to align their own velocity with the average velocity of neighbouring boids which are sufficiently close.
2. Cohesion. Boids like to fly in groups, and they like to be as close to the centre of the group as possible. For this reason they also steer towards the mean position of those sufficiently close neighbours.
3. Separation. Boids don’t, however, like to collide, so they will actively avoid any of their neighbours that come too close.
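The three rules can be sketched as a single update step (a minimal sketch using NumPy; the weights, radii, and time step are arbitrary assumptions, not part of any canonical implementation):

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=1.0, avoid_radius=0.3,
               w_align=0.05, w_cohere=0.01, w_separate=0.1):
    """One Euler step of the three boid rules, for N boids in 2D."""
    acc = np.zeros_like(vel)
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < radius) & (d > 0)
        if near.any():
            # 1. Alignment: steer towards neighbours' mean velocity.
            acc[i] += w_align * (vel[near].mean(axis=0) - vel[i])
            # 2. Cohesion: steer towards neighbours' mean position.
            acc[i] += w_cohere * (pos[near].mean(axis=0) - pos[i])
        too_close = (d < avoid_radius) & (d > 0)
        if too_close.any():
            # 3. Separation: steer away from boids that are too close.
            acc[i] += w_separate * (pos[i] - pos[too_close].mean(axis=0))
    return pos + dt * vel, vel + dt * acc

rng = np.random.default_rng(0)
pos = rng.uniform(0, 2, size=(50, 2))
vel = rng.uniform(-1, 1, size=(50, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```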

If you want to have a look at an actual implementation and some wonderful, in-depth visualisations of all of these things, then you can do no better than this post over at Harry Brundage’s blog. Seriously, go over there and have a play. I’ll wait here.

Neat, huh? But why should we care? Why am I talking about boids? Well, the first thing that came into my head when I learned of boids was:

Why not make this a continuous system?

# Sum-Product Cycles of Positive Integers Pt. 3

To recap, we are attempting to find general solutions to the system of $n$ coupled Diophantine equations of the form:

$x_1 + y_1 = x_2 y_2$

$x_2 + y_2 = x_3 y_3$

$\vdots$

$x_n + y_n = x_1 y_1$.

We have already found that a solution is either compound (the concatenation of two shorter solutions), or it is simple (no consecutive subset is a solution to any shorter cycle). We have a canonical simple cycle, which for any pair $(i, j)$ is the $(i - 1)(j - 1)$-cycle given by

$[(i, j)]_{Can} = [(i, j), (1, i+j), (1, i+j+1), \ldots, (1, ij-1)]$
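A quick verification that the canonical cycle really solves the system above (a sketch; the pairs $(i, j)$ tested are arbitrary examples):

```python
def canonical_cycle(i, j):
    """The (i-1)(j-1)-cycle [(i, j), (1, i+j), (1, i+j+1), ..., (1, ij-1)]."""
    return [(i, j)] + [(1, s) for s in range(i + j, i * j)]

def is_cycle(pairs):
    """Check x_k + y_k = x_{k+1} * y_{k+1} around the whole cycle."""
    n = len(pairs)
    return all(pairs[k][0] + pairs[k][1]
               == pairs[(k + 1) % n][0] * pairs[(k + 1) % n][1]
               for k in range(n))

for i, j in [(2, 3), (3, 4), (2, 5)]:
    cyc = canonical_cycle(i, j)
    assert is_cycle(cyc) and len(cyc) == (i - 1) * (j - 1)
```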

As we shall see in this third (and final) post, the canonical cycle is very significant in the classification of simple solutions, and hence of all solutions.