
# Unicorns of Geometry: Getting Cross About Products

**Or: Why you should forget about the vector cross product!**

So titled because, while both are spoken about in tales and legends, neither unicorns nor cross products actually exist. Forget what you think you know about vector geometry!

A question for beginner physics students: consider two vectors $\mathbf{a} = (a_1, a_2, a_3)$ and $\mathbf{b} = (b_1, b_2, b_3)$. The coefficients have units of length. You know the *dot product*

$$\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3,$$

which is a number, has units of area, and is the square of the length of one vector projected onto the other. You probably also know the other interesting operation, the *cross product*

$$\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1),$$

which is a vector. Its length is the area of the parallelogram defined by $\mathbf{a}$ and $\mathbf{b}$. But its coefficients have units of area. How can it still be a vector in the same space as the other two if its coefficients have different units?
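To make the units puzzle concrete, here is a quick numerical check (a minimal Python sketch, not part of the original post): each coefficient of the cross product is a difference of products of coordinates, so it carries units of length squared, and the resulting vector's length equals the parallelogram's area.

```python
import math

def dot(a, b):
    # a . b = a1*b1 + a2*b2 + a3*b3 -- units of length x length = area
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # each coefficient is a difference of products of coordinates,
    # so it has units of area, not length
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a = (2.0, 0.0, 0.0)   # coefficients in, say, metres
b = (0.0, 3.0, 0.0)

c = cross(a, b)
length = math.sqrt(dot(c, c))
# |a x b| is the area of the parallelogram spanned by a and b: 2 * 3 = 6
print(c, length)      # -> (0.0, 0.0, 6.0) 6.0
```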

# Matrix multiplication is faster than you expect (part I)

Suppose we’re running a particle physics/neurobiology simulation, or solving some machine learning/data mining problem, and we need to do some linear algebra. Specifically, we want to multiply two real matrices , . To compute the -th entry of the product we go along the th row of , multiplying entrywise with the th column of , so that for each entry of we perform real multiplications (and real additions, but this makes no difference to the “average” number of operations needed, so we’ll be slack and gloss over such details until we formalize things in the next few sections). Since there are entries in the product, this gives a total of operations required to calculate the product, as one would intuitively expect. As a slightly unrealistic example, let us generously assume that our matrices are small, of dimension (which incidentally is just outside the largest matrix size allowed by GNU Octave on my computer), then assuming the ability to perform operations a second on a “standard” PC processor, one matrix multiplication would take on the order of seconds, or slightly over a day. In real-world applications such as high-performance computing or big data problems the dimension gets much bigger, so naive multiplication is still too slow, even given the extra processing power we gain by switching to supercomputers. Continue reading