I am curious why the network isn't always in an "interference" state, but sometimes collapses into a "restricted" ONB?
> the neural networks we observe in practice are in some sense noisily simulating larger, highly sparse networks
This seems somewhat related to a point made by Ilya Sutskever here [1]: NNs can be thought of as an approximation to the Kolmogorov compressor. Speculating, one could say any network is a projection of the ideal compressor (which arguably perfectly represents all n features in an n-dimensional ONB) into a lower-dimensional space, hence the interference. But why is there not always such interference?
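To make the projection intuition concrete, here is a toy sketch (my own illustration, not from the post): in d = n dimensions, n features can sit on a true ONB with zero interference, but once n > d, some pair of feature directions must have a nonzero dot product. The pentagon arrangement below is the classic 5-features-in-2-dimensions superposition geometry.

```python
import math

def interference(vectors):
    """Max |dot product| between distinct feature directions."""
    worst = 0.0
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            dot = sum(a * b for a, b in zip(vectors[i], vectors[j]))
            worst = max(worst, abs(dot))
    return worst

# n = 5 features in d = 5 dims: a true ONB, zero interference.
onb = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]

# n = 5 features squeezed into d = 2 dims: unit vectors at the
# vertices of a regular pentagon.
pentagon = [[math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5)]
            for k in range(5)]

print(interference(onb))       # 0.0 -- a restricted ONB has no interference
print(interference(pentagon))  # |cos(144°)| ≈ 0.809 -- interference is unavoidable
```

So the "restricted ONB" regime would correspond to the network deciding it only needs d (or fewer) features and giving each its own direction, while the interference regime is what it is forced into when it tries to represent more features than it has dimensions.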
[1] https://www.youtube.com/watch?v=AKMuA_TVz3A