This new method will be marginally faster if you ever need to multiply two numbers on a scale where writing down the bits fills every Planck-length cube in the known universe.
Otherwise the “practical” benefit is limited to saving a few symbols when writing abstract math papers.
> This new method will be marginally faster if you ever need to multiply two numbers on a scale where writing down the bits fills every Planck-length cube in the known universe.
If we're talking about physical implementations, then you would also have to take the signal delay between those Planck volumes into account.
From the last time this came up: the number of bits in the numbers involved is 2^(9^12). Merely storing a count of that many bits takes about 85 billion decimal digits. Actually storing the bits or digits of the numbers themselves takes a whole lot more, to say the least.
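A quick sanity check on the size of that count, taking the 2^(9^12) figure at face value: the number of decimal digits needed to write down 2^(9^12) is its base-10 logarithm, which is easy to compute without ever materializing the number.

```python
from math import log10

# The claimed bit count is 2**(9**12); we never construct it,
# we only ask how many decimal digits it would take to write.
exponent = 9 ** 12                 # 282,429,536,481
digits = exponent * log10(2)       # digits of 2**exponent

print(f"{digits:.3e}")             # about 8.5e10, i.e. ~85 billion digits
```

So writing out just the *count* of bits already needs tens of billions of digits; the numbers themselves are unimaginably larger.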
Yes, this result will be used to save a few printed symbols each time the topic comes up in future mathematics papers, and the beauty of the slightly simpler mathematical expressions will spark joy in mathematicians reading them.
Karatsuba is a beautiful algorithm and as good as it gets for "small" numbers (on the order of a few thousand bits or less). But when you get bigger than that you need the FFT methods, which are impractical for "small" numbers. So the two algorithms complement each other nicely.
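For anyone who hasn't seen it, Karatsuba's trick is that splitting each operand in half naively costs four half-size multiplies, but one of them can be traded for a few additions. A minimal sketch in Python (assuming non-negative integers; the 64-bit cutoff for the base case is an arbitrary illustrative choice, not a tuned threshold):

```python
def karatsuba(x, y):
    # Base case: fall back to the builtin multiply for small operands.
    if x < (1 << 64) or y < (1 << 64):
        return x * y
    # Split both numbers around the middle bit position.
    n = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << n) - 1
    xh, xl = x >> n, x & mask
    yh, yl = y >> n, y & mask
    a = karatsuba(xh, yh)                      # high * high
    b = karatsuba(xl, yl)                      # low * low
    # The cross terms xh*yl + xl*yh come from ONE extra multiply:
    c = karatsuba(xh + xl, yh + yl) - a - b
    return (a << (2 * n)) + (c << n) + b
```

Three recursive multiplies instead of four is what drops the cost from O(n^2) to O(n^1.585), and the low constant overhead is why it beats FFT methods on small inputs.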
I admire the perseverance, grit, and intelligence needed to crack tough nuts like this one.