Noyce Conference Room
  US Mountain Time
Kyle Mahowald

Our campus is closed to the public for this event.

Abstract: Today’s large language models (LLMs) generate coherent, grammatical text, which makes it easy to see them as “thinking machines” capable of performing tasks that require abstract knowledge and reasoning. In some ways they are, and in some ways they are not. In this talk, I evaluate LLMs using a distinction between formal competence (knowledge of linguistic rules and patterns) and functional competence (understanding and using language in the world). I ground this distinction in human neuroscience, showing that these skills recruit different cognitive mechanisms. I argue that LLMs have achieved formal linguistic competence, a feat with major implications for linguistic theory, but they remain interestingly uneven at functional linguistic tasks.
SFI Host: Melanie Mitchell