Category: Expository posts

Unicorns of Geometry: Getting Cross About Products

Or: Why you should forget about the vector cross product!

So titled because, while both are spoken of in tales and legends, neither unicorns nor cross products actually exist. Forget what you think you know about vector geometry!

A question for beginner physics students: consider two vectors u = u_x \hat{x} + u_y \hat{y} + u_z \hat{z} and v = v_x \hat{x} + v_y \hat{y} + v_z \hat{z}. The coefficients u_i, v_i have units of length. You know the dot product

u \cdot v = u_xv_x + u_yv_y + u_zv_z,

which is a number, has units of area, and equals the length of one vector times the length of the other’s projection onto it. You probably also know the other interesting operation, the cross product

u \times v = (u_yv_z - u_zv_y)\hat{x} + (u_zv_x - u_xv_z)\hat{y} + (u_xv_y - u_yv_x)\hat{z}

which is a vector. Its length is the area of the parallelogram defined by u and v. But its coefficients have units of area. How can it still be a vector in the same space as the other two if its coefficients have different units?
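
For readers who want to check this numerically, here is a minimal sketch in Python (the particular vectors are made-up examples, not from the post; NumPy’s dot, cross and norm do the work):

import numpy as np

# Two made-up example vectors; imagine their coefficients carrying units of length.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# The dot product is a single number with units of area (length times length).
print(np.dot(u, v))  # 32.0

# The cross product's coefficients also have units of area...
w = np.cross(u, v)
print(w)  # [-3.  6. -3.]

# ...and its length equals the area |u||v|sin(theta) of the parallelogram:
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
area = np.linalg.norm(u) * np.linalg.norm(v) * np.sqrt(1 - cos_theta**2)
print(np.linalg.norm(w), area)  # both print 7.3484...

# It is also orthogonal to both inputs:
print(np.dot(w, u), np.dot(w, v))  # 0.0 0.0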

Continue reading


Matrix multiplication is faster than you expect (part I)

Suppose we’re running a particle physics/neurobiology simulation, or solving some machine learning/data mining problem, and we need to do some linear algebra. Specifically, we want to multiply two real n \times n matrices A, B. To compute the i,j-th entry of the product we go along the ith row of A, multiplying entrywise with the jth column of B, so for each entry of AB we perform n real multiplications (and n-1 real additions, but these make no difference to the order of growth of the operation count, so we’ll be slack and gloss over such details until we formalize things in the next few sections). Since there are n^2 entries in the product, this gives a total of n^3 operations required to calculate the product, as one would intuitively expect.

As a slightly unrealistic example, let us generously assume that our matrices are small, of dimension n \approx 10^5 (which incidentally is just outside the largest matrix size allowed by GNU Octave on my computer). Assuming the ability to perform 10^{10} operations a second on a “standard” PC processor, one matrix multiplication would take on the order of 10^{15}/10^{10} = 10^5 seconds, or slightly over a day. In real-world applications such as high-performance computing or big data problems the dimension gets much bigger, so naive multiplication is still too slow, even given the extra processing power we gain by switching to supercomputers.
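
To make the operation count concrete, here is a minimal sketch of the schoolbook algorithm in Python (the function name and the toy 50 \times 50 dimensions are my own; NumPy is used only for storage and a correctness check, and these pure-Python loops would of course take far longer than the estimate above at large n):

import numpy as np

# Schoolbook multiplication of two n x n matrices:
# n multiply-adds per entry, n^2 entries, so on the order of n^3 operations.
def naive_matmul(A, B):
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):          # walk along the ith row of A...
        for j in range(n):      # ...and down the jth column of B
            for k in range(n):  # n multiplications (and n-1 additions) per entry
                C[i, j] += A[i, k] * B[k, j]
    return C

# Sanity check against NumPy's built-in product on a small random example.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
B = rng.standard_normal((50, 50))
assert np.allclose(naive_matmul(A, B), A @ B)

Continue reading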