Language Models and Human Language Acquisition

title: Language Models and Human Language Acquisition
start_date: 2023/06/14
schedule: 17:00-18:00
online: no
location_info: Online
summary: Children's remarkable ability to learn language has been an object of fascination in science for millennia. In just the last few years, neural language models (LMs) have also proven to be incredibly adept at learning human language. In this talk, I discuss scientific progress that uses recent developments in natural language processing to advance linguistics, and vice versa. My research explores this intersection from three angles: evaluation, experimentation, and engineering. Using linguistically motivated benchmarks, I provide evidence that LMs share many aspects of human grammatical knowledge and probe how this knowledge varies across training regimes. I further argue that, under the right circumstances, we can use LMs to test key hypotheses about language acquisition that have been difficult or impossible to evaluate with human subjects. As a proof of concept, I use LMs to experimentally test the long-controversial claim that direct disambiguating evidence is necessary to acquire the structure-dependent rule of subject-auxiliary inversion in English. Finally, I describe ongoing work to engineer learning environments and objectives for LM pretraining inspired by human development, with the goal of making LMs more data-efficient and more plausible models of human learning.
responsibles: Bernard