Sharon Goldwater and colleagues take a computational approach to understanding how infants learn language
Sharon Goldwater and colleagues use computational modelling to gain a deeper understanding of early language acquisition.
Collaborating with colleagues from the University of Maryland and Ecole Normale Supérieure in Paris, Professor of Computational Language Learning Sharon Goldwater has presented a computational modelling approach to early language acquisition that offers a deeper understanding of how infants acquire language. The team’s innovative approach shifts the focus from what infants learn about language to how they learn it, yielding new evidence that challenges previous scientific accounts of early language learning.
The study introduces a quantitative modelling framework based on a large-scale simulation of how infants learn language, and was published in the Proceedings of the National Academy of Sciences (PNAS).
Hypotheses about what is being learned by infants have traditionally driven researchers’ attempts to understand this surprising phenomenon.
We propose to start from hypotheses about how infants might learn.
Understanding how infants learn language, rather than what they learn
Infants become attuned to the sounds used in their native language(s) before they even speak. For example, between 6-8 months and 10-12 months of age, infants learning American English get better at distinguishing the sounds ‘r’ and ‘l’ (as in “rock” vs. “lock”) relative to infants learning Japanese, a language in which the ‘r’ and ‘l’ sounds do not distinguish words.
Researchers have traditionally assumed that this phenomenon occurs because infants learn to group sounds into phonetic categories, such as the vowels and consonants of their native language, at a very young age. The new study questions this assumption by focusing on how, rather than what, infants are learning as they acquire the tools needed to speak their native language.
Based on realistic input, the framework allows learning mechanisms to be systematically linked to testable predictions about infants’ attunement to their native language(s). From the evidence obtained through this framework, the team has developed an account of infants’ attunement that challenges established theories of early language acquisition and of what infants learn about language before they are able to speak.
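The article does not detail the simulations themselves, but the general logic of this kind of modelling work can be illustrated with a toy sketch. The sketch below is a hypothetical example, not the team’s actual model: it stands in for real speech input with a synthetic two-dimensional “acoustic” space, for the learning mechanism with simple k-means clustering, and for the behavioural prediction with an ABX-style discrimination test (given tokens A and B from two sound categories, is a new token X closer in the learned representation to the category it belongs to?). A learner trained on “English-like” input, where ‘r’ and ‘l’ tokens form distinct clusters, should discriminate them better than a learner trained on “Japanese-like” input, where the same acoustic region is a single category.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tokens(n, centre, spread):
    # Synthetic "acoustic" tokens: 2-D points around a category centre.
    return rng.normal(centre, spread, size=(n, 2))

R_CENTRE, L_CENTRE = np.array([0.0, 0.0]), np.array([2.0, 0.0])

# "English-like" input: 'r' and 'l' tokens form two distinct clusters.
english_train = np.vstack([sample_tokens(200, R_CENTRE, 0.3),
                           sample_tokens(200, L_CENTRE, 0.3)])
# "Japanese-like" input: one category spanning the same region, whose
# main variation lies along a dimension irrelevant to the r/l contrast.
japanese_train = sample_tokens(400, (R_CENTRE + L_CENTRE) / 2, (0.3, 1.5))

def kmeans(data, k=2, iters=50):
    # Plain Lloyd's algorithm: the stand-in "learning mechanism".
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(data[:, None] - centroids[None], axis=2)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centroids[j] = data[assign == j].mean(0)
    return centroids

def represent(tokens, centroids, tau=0.25):
    # Soft assignment over learned clusters = the learner's representation.
    d = np.linalg.norm(tokens[:, None] - centroids[None], axis=2)
    e = np.exp(-d / tau)
    return e / e.sum(1, keepdims=True)

def abx_score(reps_a, reps_b, n_triples=2000):
    # Fraction of (A, B, X) triples where X (same sound as A) lies
    # closer to A than to B in representation space.
    correct = 0
    for _ in range(n_triples):
        a, x = reps_a[rng.choice(len(reps_a), 2, replace=False)]
        b = reps_b[rng.choice(len(reps_b))]
        correct += np.linalg.norm(x - a) < np.linalg.norm(x - b)
    return correct / n_triples

# Held-out 'r' and 'l' test tokens, identical for both simulated learners.
r_test = sample_tokens(100, R_CENTRE, 0.3)
l_test = sample_tokens(100, L_CENTRE, 0.3)

scores = {}
for name, train in [("English", english_train), ("Japanese", japanese_train)]:
    c = kmeans(train)
    scores[name] = abx_score(represent(r_test, c), represent(l_test, c))
    print(f"{name}-trained learner, r/l ABX score: {scores[name]:.2f}")
```

In this toy setup the English-trained learner discriminates the held-out ‘r’ and ‘l’ tokens far better than the Japanese-trained one, mirroring the cross-linguistic attunement pattern described above; the study's actual framework applies the same test-the-prediction logic to models trained on realistic speech input.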
Collaborative trans-national research
The project involves a multi-institutional team of cognitive scientists and computational linguists from the University of Maryland, Ecole Normale Supérieure in Paris and the School of Informatics, University of Edinburgh. The lead author of the study is Thomas Schatz, a postdoctoral associate in the University of Maryland Institute for Advanced Computer Studies; other authors include Naomi Feldman (University of Maryland); Xuân-Nga Cao and Emmanuel Dupoux (Ecole Normale Supérieure); and Sharon Goldwater of the School of Informatics.