Abstract: Contemporary deep learning models build on the neural network modeling framework introduced and applied to capturing aspects of human cognition, language processing, and memory in the 1980s and '90s. At the time, these models were often criticized as 'toy' models that would not scale up, or as systems that lacked characteristics that had to be built in to succeed. Today, they have surpassed my own expectations in many domains, including games like chess and Go and natural language processing, and they are beginning to show signs of human-level capabilities (as well as shortcomings) in logical and analogical reasoning. Yet current models still fall short in many ways and are clearly not human-like. In this talk I will discuss a framing of a central aspect of what we will need to do to capture human-like intelligence: we need to orient our models to what I will call 'the level of thought'. I will discuss what I think thoughts are and steps that might be taken to help our models form, understand, retain, retrieve, express, and rely on thoughts in support of mathematical and scientific discovery as well as everyday problem solving and reasoning.
Noyce Conference Room
US Mountain Time
Our campus is closed to the public for this event.
Jay McClelland, Lucie Stern Professor in the Social Sciences, Stanford University
Hosts: Melanie Mitchell, Arseny Moskvichev