One of the "failures" of modern AI is "human-friendly explanation": how do you explain the process behind a decision when it was based on a very complex model? At least, without having to explain the entire model being employed.
It doesn't matter which model or algorithm was used; what matters is which information about this particular user the decision was based on. If you only have their IP, then fine, but if you have their browsing history, you should disclose that your targeting was based on that browsing history, or on whatever other data about the user was used.
Yeah, but if the entire history (or some large part of it) is available, then it's in the advertisers' interest to run their algorithm against all of it, so a disclosure like that isn't informative. Needless to say, there is a lot of work being done in academia on making AI decisions comprehensible to humans. As far as I know, it's not yet clear whether any ML techniques (even the most prominent approaches) will ever admit a simple explanation.
Couldn't you just pick the top N paths (maybe ordered by how sensitive they are to changes in input) back through the neural net, then list the data sources they ultimately trace to?
"Visited aaa.example one day ago, .34; visited abc.example three days ago, .27; therefore display ad for zzz.example"
I think there was a recent paper that explored this.
Essentially, the system showed how it came to a conclusion by, for example, highlighting the parts of an image according to how much they contributed to the result (demonstrating that a husky was mistaken for a wolf because of the surrounding snow), or highlighting the words that drove a classification (showing that a model distinguishing atheist from theist texts was relying on email headers instead of the body).
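The core trick behind that kind of highlighting is perturbation: occlude each input piece, re-score, and treat the drop in score as that piece's contribution. A minimal sketch, with a toy stand-in for the black-box classifier (the trigger words and scores are invented):

```python
# Toy stand-in for a black-box wolf classifier: in reality you'd call
# the real model here; the trigger words below are purely illustrative.
def score(words):
    triggers = {"snow": 0.6, "husky": 0.3, "wolf": 0.5}
    return sum(triggers.get(w, 0.0) for w in words)

words = ["a", "husky", "in", "deep", "snow"]
base = score(words)

# Occlude one word at a time; the score drop is its contribution.
importance = {}
for i, w in enumerate(words):
    occluded = words[:i] + words[i + 1:]
    importance[w] = base - score(occluded)

for w, imp in sorted(importance.items(), key=lambda t: -t[1]):
    print(f"{w}: {imp:+.2f}")
```

Here "snow" comes out as the biggest contributor, which is precisely the kind of spurious cue the husky/wolf example exposed.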
This doesn't, of course, explain the entire process, but it means that a doctor, for example, can see that the AI concluded it's not the flu because, while symptoms A and B were present, C wasn't, and symptom X from the "not flu" category was. The doctor can then apply human intuition and decide it's the flu after all, because according to the medical record the patient is prone to X anyway.
One of the "failures" of modern AI is "human-friendly explanation": how do you explain the process behind a decision when it was based on a very complex model? At least, without having to explain the entire model being employed.