Ah I see, that's quite interesting. So the idea is a system that could explain its own decision-making and inferences?
Neural networks definitely are black boxes, at least at the level of an individual model. Sure, the general concept stays the same, but the learned internals differ from model to model and are hidden from inspection.
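Here's a quick sketch of what "same concept, different internals" looks like in practice (a hypothetical example, assuming PyTorch): two networks with identical architecture, trained on the same data from different random seeds, converge to nearly identical input-output behavior while their internal weights diverge completely.

```python
import torch
import torch.nn as nn

def train_xor(seed):
    # Train a small MLP on XOR from a given random seed (illustrative setup).
    torch.manual_seed(seed)
    X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = torch.tensor([[0.], [1.], [1.], [0.]])
    model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(),
                          nn.Linear(8, 1), nn.Sigmoid())
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    loss_fn = nn.BCELoss()
    for _ in range(2000):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model, X

m1, X = train_xor(seed=0)
m2, _ = train_xor(seed=1)

# Nearly identical external behavior on the same inputs...
print(m1(X).detach().flatten())
print(m2(X).detach().flatten())

# ...but the internals are very different: same concept, different hidden parameters.
w1 = torch.cat([p.flatten() for p in m1.parameters()])
w2 = torch.cat([p.flatten() for p in m2.parameters()])
print("weight distance between the two models:", (w1 - w2).norm().item())
```

Both models solve the task, but you can't read the "reasoning" off the weights of either one, which is exactly the black-box problem.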