1) any input vector can be approximated by a "similar" vector quantized to {+1, 0, -1}, e.g. [.4, -.3, .1] => [+1, -1, 0]
INIT:
2) generate a set of random input vectors using only values from {+1, 0, -1}
3) for every neuron: compute its activation for every random input vector
4) combine those results into one large hash table where the keys are the random vectors and the values are the indices of the neurons that were active for them (sketch below)
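A minimal NumPy sketch of INIT under those assumptions (one dense layer, ReLU-style thresholding; quantize, build_table, and the eps/threshold values are illustrative, not from the original):

    import numpy as np

    def quantize(v, eps=0.25):
        # Map each component to {+1, 0, -1}: its sign if |x| > eps, else 0.
        q = np.sign(v) * (np.abs(v) > eps)
        return tuple(q.astype(int))

    def build_table(W, b, dim, n_samples=10_000, threshold=0.0, seed=0):
        # Steps 2)-4): for many random ternary inputs, record which neurons fire.
        rng = np.random.default_rng(seed)
        table = {}
        for _ in range(n_samples):
            key = tuple(rng.integers(-1, 2, size=dim))     # random {+1, 0, -1} vector
            acts = W @ np.array(key) + b                   # full computation, but only at init
            table[key] = np.flatnonzero(acts > threshold)  # indices of active neurons
        return table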
FEED-FORWARD:
5) for every real input vector, find its nearest neighbor among the precomputed ternary vectors, e.g.: [.4, -.3, .1] => [+1, -1, 0]
6) use this vector as the hash key to look up the indices of the (likely) active neurons
7) compute activations for only these neurons, using their weights and the original input vector
8) repeat for all layers, feeding each layer's sparse output into the next (sketch below)
etc.
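A matching sketch of the feed-forward pass. Here the nearest-neighbor search of step 5 is approximated by the same cheap per-component quantizer (a real implementation would likely use a proper LSH scheme), and an unseen key falls back to the full dense computation, which is just one possible policy:

    def forward_layer(x, W, b, table):
        key = quantize(x)                  # 5) snap the input to a ternary key
        active = table.get(key)
        if active is None:
            # Key never sampled at INIT: fall back to computing every neuron.
            return np.maximum(W @ x + b, 0.0)
        # 6)-7) compute only the rows of W for the (likely) active neurons.
        out = np.zeros(W.shape[0])
        out[active] = np.maximum(W[active] @ x + b[active], 0.0)
        return out

    def forward(x, layers, tables):
        # 8) repeat for all layers: each sparse output feeds the next layer.
        for (W, b), table in zip(layers, tables):
            x = forward_layer(x, W, b, table)
        return x

For example, with a single 3-input layer the lookup reproduces the example above:

    rng = np.random.default_rng(1)
    W, b = rng.normal(size=(64, 3)), np.zeros(64)
    table = build_table(W, b, dim=3)
    y = forward_layer(np.array([.4, -.3, .1]), W, b, table)  # key is (+1, -1, 0)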
This way you avoid a lot of computation, but the key is to have a good hash function and a lot of memory for the table.
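To put a number on "a lot of memory": a d-dimensional ternary key space has 3^d possible keys, which is already about 1.9 * 10^15 at d = 32, so for realistic layer widths you can only ever sample a tiny fraction of it (as step 2 does) or bucket similar keys together with an LSH-style hash.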