I wish we could come up with a catchier name, but I LOVE the idea of calling this programming, because that is precisely what we do when we compose deep neural nets.
For example, here's how you compose a neural net consisting of two "dense" layers (linear transformations), using Keras's functional API, and then apply these two layers to some tensor x to obtain a tensor y:
from keras.layers import Dense

f = Dense(n)  # first dense (linear) layer
g = Dense(n)  # second dense (linear) layer
y = f(g(x))   # apply g, then f, to the tensor x
This looks, smells, and tastes like programming (in this case with a strong functional flavor), doesn't it?
Imagine how interesting things will get once we have nice facilities for composing large, complex applications made up of lots of components and subcomponents that are differentiable, both independently and end-to-end.
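To make that concrete, here's a minimal sketch, assuming Keras, of what composing such subcomponents could look like today (the names make_block, block_a, and block_b are illustrative, not from the original): each block is a small model that is differentiable on its own, and chaining them yields a single model that is differentiable end-to-end.

from keras.layers import Input, Dense
from keras.models import Model

# Two subcomponents, each independently differentiable and reusable.
def make_block(n, name):
    inp = Input(shape=(n,))
    out = Dense(n, activation="relu")(Dense(n, activation="relu")(inp))
    return Model(inp, out, name=name)

n = 32
block_a = make_block(n, "block_a")
block_b = make_block(n, "block_b")

# Compose them functionally into one end-to-end differentiable model.
x = Input(shape=(n,))
y = block_b(block_a(x))
model = Model(x, y)
model.compile(optimizer="adam", loss="mse")
# model.fit(...) now trains both blocks jointly, end to end.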
NNs are _just_ transfer functions. Lookup tables. Or, really, dense maps. So f(g(x)) makes total sense. But I don't think these are the interesting combinations. I think the interesting case is giving one NN the training experience of another, plus feedback on "correct inference": that's when one NN trains its replacement.
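For what it's worth, that idea is close to what the literature calls knowledge distillation (Hinton et al., 2015). A minimal sketch, assuming Keras plus an already-trained teacher model and training inputs x_train (both hypothetical names here):

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# Assumes `teacher` is an already-trained Keras classifier and `x_train`
# holds the training inputs. The student fits the teacher's soft
# predictions (its "training experience") instead of ground-truth labels.
soft_targets = teacher.predict(x_train)

inp = Input(shape=(x_train.shape[1],))
out = Dense(soft_targets.shape[1], activation="softmax")(inp)
student = Model(inp, out)
student.compile(optimizer="adam", loss="categorical_crossentropy")
student.fit(x_train, soft_targets, epochs=10)

# Full distillation also softens the teacher's logits with a temperature
# before training the student; this sketch omits that for brevity.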
Andrej Karpathy has a great post about this: https://medium.com/@karpathy/software-2-0-a64152b37c35