The journal Behavioral and Brain Sciences will be publishing this paper, “Building Machines That Learn and Think Like People,” by Brenden Lake, Tomer Ullman, Joshua Tenenbaum, and Samuel Gershman. Here’s the abstract:
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
The journal solicited discussions, with the rule being that you say what you’re going to talk about and give a brief abstract of what you’ll say. I wrote the following:
What aspect of the target article or book you would anticipate commenting on:
The idea that a good model of the brain’s reasoning should use Bayesian inference rather than predictive machine learning.
Proposal for commentary:
Lake et al. argue in this article that atheoretical machine learning has limitations, and they make the case for more substantive models that would better simulate human-brain-like AI. As a practicing Bayesian statistician, I'm sympathetic to this view, but I'm actually inclined to argue something somewhat different: I'd claim that it could make sense to do AI via black-box machine learning algorithms such as the famous program that plays Pong, or various automatic classification algorithms, and then have the Bayesian model be added on, as a sort of "consciousness" or "executive functioning organ" that attempts to make sense of all these inferences. That seems to me to be possibly a better description of how our brains operate, and on some deeper level I think it comes closer to fitting my view of how we learn from data.
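To make the division of labor concrete, here's a toy sketch in Python (my own illustration, not anything from the Lake et al. paper or the commentary): two "black-box" predictors each emit a noisy report about a binary state of the world, and a simple Bayesian layer on top combines their reports into a posterior, playing the role of the "executive" that tries to make sense of what the boxes say. The accuracy numbers and the independence assumption are made up for the example.

```python
import random

random.seed(1)  # fixed seed so the toy example is reproducible

def black_box(truth, accuracy):
    """Stand-in for an opaque learned classifier: reports the true
    binary label with probability `accuracy`, else the wrong one."""
    return truth if random.random() < accuracy else 1 - truth

def bayes_layer(reports, accuracies, prior=0.5):
    """The Bayesian 'executive': treats each black-box report as
    evidence and updates the prior P(state = 1) by Bayes' rule,
    assuming (for simplicity) that the boxes err independently."""
    odds = prior / (1 - prior)
    for r, a in zip(reports, accuracies):
        # likelihood ratio P(report | state=1) / P(report | state=0)
        odds *= a / (1 - a) if r == 1 else (1 - a) / a
    return odds / (1 + odds)

truth = 1
accuracies = [0.8, 0.7]
reports = [black_box(truth, a) for a in accuracies]
p = bayes_layer(reports, accuracies)
print("reports:", reports, "posterior P(state=1):", round(p, 3))
```

The point of the sketch is only the structure: the black boxes do the pattern recognition, and the Bayesian layer, which knows nothing about how the boxes work internally, just weighs their outputs as evidence. When the boxes disagree, the posterior leans toward the more reliable one.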
The editors decided they didn't have space for my comment, so I did not write anything more. Making the call based on the abstract is an excellent, non-wasteful system, much better than the process at another journal (which I will not name), where they requested I write an article for them on a specific topic, I wrote the article, and then they told me they didn't want it. That's just annoying, cos then I have this very specialized article that I can't do anything with.
Anyway, I still find the topic interesting and important; I’d been looking forward to writing a longer article on it. In the meantime, you can read the above paragraph along with this post from a few months ago, “Deep learning, model checking, AI, the no-homunculus principle, and the unitary nature of consciousness.” And of course you can read the Lake et al. article linked to above.