# Matrix multiplication is faster than you expect (part I)

Suppose we’re running a particle physics or neurobiology simulation, or solving some machine learning or data mining problem, and we need to do some linear algebra. Specifically, we want to multiply two $n \times n$ real matrices $A$, $B$. To compute the $(i,j)$-th entry of the product we go along the $i$-th row of $A$, multiplying entrywise with the $j$-th column of $B$, so that for each entry of $AB$ we perform $n$ real multiplications (and $n-1$ real additions, but this makes no difference to the “average” number of operations needed, so we’ll be slack and gloss over such details until we formalize things in the next few sections). Since there are $n^2$ entries in the product, this gives a total of $n^3$ operations required to calculate the product, as one would intuitively expect.

As a slightly unrealistic example, let us generously assume that our matrices are small, of dimension $n = 10^5$ (which incidentally is just outside the largest matrix size allowed by GNU Octave on my computer). Then, assuming the ability to perform $10^{10}$ operations a second on a “standard” PC processor, one matrix multiplication would take on the order of $10^{15}/10^{10} = 10^5$ seconds, or slightly over a day. In real-world applications such as high-performance computing or big data problems the dimension $n$ gets much bigger, so naive multiplication is still too slow, even given the extra processing power we gain by switching to supercomputers.
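The row-times-column rule above can be sketched directly as three nested loops; the sketch below is a minimal pure-Python illustration of the cubic cost, not how one would multiply matrices in practice (real code would call an optimized BLAS routine):

```python
def matmul(A, B):
    """Naive matrix product of two square n x n matrices,
    given as lists of rows: n multiplications per entry,
    n^2 entries, hence on the order of n^3 operations."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):            # row i of A
        for j in range(n):        # column j of B
            for k in range(n):    # entrywise products along row/column
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

Counting the loop iterations makes the estimate concrete: the innermost statement runs exactly $n \cdot n \cdot n$ times, which is where the cubic operation count comes from.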