> It is not generally known that to provide seven-decimal accuracy of the sine function, allowing third-order interpolation, only 15 entries are required [*] ...
The following commentary is interesting.
> In spite of these developments in table construction, it turned out two decades ago that the technology was such that table look-up was too slow for arithmetic. The stored-program type of machine displaced this approach. By this method the value of a function, such as sin x, is computed afresh every time. One cannot but wonder how many times such a number has been re-computed in the last decade, and inquire whether technological developments may make tables efficient again.
> Over the years the emphasis in machines has slowly changed from the central processing unit to memory. From the point of view of table look-up, the memory is the central feature of a machine. Indeed the central processing unit is unnecessary.
The [*] is Krawitz, E., "Proc. Industrial Comp. Seminar," IBM, p. 66, 1950. I could not find this publication with a simple search.
> It is not generally known that to provide seven-decimal accuracy of the sine function, allowing third-order interpolation, only 15 entries are required [*] ...
That got me thinking: is that so? How do you do that?
Luckily, scipy has all sorts of interpolation tools, so one can just experiment a little.
Third-order interpolation means cubic splines. How many points do you need for a cubic spline approximation of the sine function to have 7-decimal-place accuracy (in other words, the error should be at most about 5e-8)? Well, the sine function being anti-symmetric and periodic, you only need to approximate it on the interval [0, pi/2]. It turns out you need 42 points, not counting the ends of the interval (where we don't need to tabulate the values of sine; it is simply 0 at 0 and 1 at pi/2).
import numpy as np
from scipy.interpolate import CubicSpline

xs = np.linspace(0, np.pi/2, 44, endpoint=True)  # 42 interior points plus the two ends
sines = np.sin(xs)
spline = CubicSpline(xs, sines)
ts = np.linspace(0, 1, 10000, endpoint=True) * np.pi/2
print("Max Error is ", np.max(np.abs(spline(ts) - np.sin(ts))))
>> Max Error is 5.02852541828247e-08
But we can do better. If we tabulate values of sine, then we get for free the values of the cosine at the complementary angles (cos(x) = sin(pi/2 - x)). And the derivative of sine is the cosine, so we can use a cubic spline that matches both the values and the derivatives at each point. This is called a cubic Hermite spline. It turns out we need only 23 points.
from scipy.interpolate import CubicHermiteSpline

xs = np.linspace(0, np.pi/2, 25, endpoint=True)  # 23 interior points plus the two ends
sines = np.sin(xs)
cosines = sines[::-1]  # note that we reuse the sines; we don't invoke np.cos
spline = CubicHermiteSpline(xs, sines, dydx=cosines)
ts = np.linspace(0, 1, 10000, endpoint=True) * np.pi/2
print("Max Error is ", np.max(np.abs(spline(ts) - np.sin(ts))))
>> Max Error is 4.775709550042251e-08
Now that we've started doing Hermite interpolation, why stop there? The second derivative of the sine function is the sine with the sign flipped (oh man, how do you avoid this pun?). So you don't need to tabulate any new values. In scipy you can use BPoly.from_derivatives to build this higher-order spline. Now you only need to tabulate 4 values of the sine function.
from scipy.interpolate import BPoly

xs = np.linspace(0, np.pi/2, 6, endpoint=True)  # 4 interior points plus the two ends
sines = np.sin(xs)
cosines = sines[::-1]
spline = BPoly.from_derivatives(xs, np.vstack((sines, cosines, -sines)).T)
ts = np.linspace(0, np.pi/2, 10000)
print("Max Error is ", np.max(np.abs(spline(ts) - np.sin(ts))))
>> Max Error is 2.057940906574629e-08
Well, we can go all the way now. If we use the values of the sine function and its first 4 derivatives only at the two ends of the [0, pi/2] interval, we can find a 9th-degree polynomial that matches them.
from scipy.interpolate import BPoly, PPoly
from numpy.polynomial import Polynomial

spline = BPoly.from_derivatives([0, np.pi/2],
                                [[0, 1, 0, -1, 0], [1, 0, -1, 0, 1]])
PSpline = PPoly.from_bernstein_basis(spline)
P = Polynomial(PSpline.c.T[0][::-1])
ts = np.linspace(0, np.pi/2, 10000)
print("Max Error is ", np.max(np.abs(P(ts) - np.sin(ts))))
>> Max Error is 1.700470619869776e-08
And here's that 9th degree polynomial, if you are curious:
x − 0.16666666666666785 x^3 + 8.881784197001252e-15 x^4 + 0.008331651428065356 x^5 + 5.167900831715144e-06 x^6 − 0.000204626778274708 x^7 + 3.5525642158307225e-06 x^8 + 1.8946044087406189e-06 x^9
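If you want to check that printout, you can paste the coefficients straight back into numpy's Polynomial and measure the error yourself (the coefficients below are just the ones printed above, in increasing degree order):

```python
import numpy as np
from numpy.polynomial import Polynomial

# coefficients copied from the printed polynomial, lowest degree first
P = Polynomial([0.0, 1.0, 0.0,
                -0.16666666666666785,
                8.881784197001252e-15,
                0.008331651428065356,
                5.167900831715144e-06,
                -0.000204626778274708,
                3.5525642158307225e-06,
                1.8946044087406189e-06])

ts = np.linspace(0, np.pi/2, 10000)
err = np.max(np.abs(P(ts) - np.sin(ts)))
print("Max Error is ", err)  # about 1.7e-08, comfortably under 5e-08
```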
In my prior comment I used equidistant points for the interpolation.
In scipy you can use smoothing splines to get something close to optimal approximating splines with non-equidistant knots. If you aim for a maximum error of 5e-8 (so that after rounding you have 7 exact decimal places), then with cubic splines you need 33 knots (including the ends). Now, the sine values never exceed one, so the digit before the decimal point is 0 (or 1 exactly at pi/2). If you count that digit towards the accuracy, then you are looking for an error not exceeding 5e-7, in which case scipy finds that you need 17 knots including the ends, so 15 interior knots.
from scipy.interpolate import UnivariateSpline

xs = np.linspace(0, np.pi/2, 1000, endpoint=True)
s = UnivariateSpline(xs, np.sin(xs), k=3, s=0.75e-11)
ts = np.linspace(0, np.pi/2, 10000)
print("Max error: ", np.max(np.abs(s(ts) - np.sin(ts))))
print("Number knots: ", len(s.get_knots()))
>> Max error: 4.761484727611176e-07
>> Number knots: 17
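The 33-knot figure for the tighter 5e-8 target can be checked the same way, by shrinking the smoothing factor until the error drops below the threshold. A crude search like this works (the starting value and halving step are ad-hoc choices of mine, not tuned numbers):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

xs = np.linspace(0, np.pi/2, 1000, endpoint=True)
ts = np.linspace(0, np.pi/2, 10000)

# halve the smoothing factor s until the max error is below 5e-8
s_val = 1e-11
while True:
    spl = UnivariateSpline(xs, np.sin(xs), k=3, s=s_val)
    err = np.max(np.abs(spl(ts) - np.sin(ts)))
    if err < 5e-8:
        break
    s_val /= 2

print("s =", s_val, " max error =", err, " knots =", len(spl.get_knots()))
```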
I wish I could find a published list of that table of 15 values.
In my search, I found https://www.jstor.org/stable/2002889?Search=yes&resultItemCl... which comments that an optimum interval table of sines and cosines for 7 place values has 2700 cards instead of 9000. "Each time this table is used, more than two hours of sorting and gang punching time is saved."
> This is the first time that such a table has ever been printed for use with a hand calculating machine. However, the computer will find that it is easier to use than an ordinary table which requires second difference interpolation, since no interpolating coefficients are needed; and it is not necessary to pay any attention to the irregular intervals.
One of the worked-out examples is:
r^2 = 4.043172
F = (0.04686041 − 0.043172 × 0.01420874) × (−0.043172) + 0.124999851
  = 0.12300327
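That arithmetic is easy to replay; the constants below are copied from the printed example (I make no claim about how the table derived them):

```python
# replay the worked example's arithmetic with its printed constants
F = (0.04686041 - 0.043172 * 0.01420874) * (-0.043172) + 0.124999851
print(F)  # agrees with the printed 0.12300327 to 7 decimal places
```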