How Infants Learn Language Using Speech Rhythm and Neuronal Oscillations

If you have a question about this talk, please contact Gabriela Pavarini.

Young children spontaneously develop awareness of “big” phonological (speech sound) units such as prosodic stress patterns, syllables and rhymes. By 7.5 months, infants can use prosodic rhythm (motifs of strong and weak syllables) to segment words from continuous speech. This is a complex feat of speech engineering, requiring the child to “hack” the acoustic signal for its implicit phonological structure. In this talk, I will present converging computational and experimental evidence which suggests that infants could perform this feat through speech-to-brain coupling: a process by which endogenous neuronal oscillations in the cortex entrain to a temporally matched hierarchy of rhythmic patterns in the speech signal. Nursery rhymes and other forms of infant-directed speech have an enhanced and exaggerated rhythmic architecture which provides a rich substrate for acoustic-phonological extraction by the infant brain. Finally, I will provide preliminary evidence that brain-to-brain coupling between adults and infants could provide an early neural mechanism for the development of joint attention, which plays an important social modulatory role in early language learning.
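
As a rough illustration of what speech-to-brain coupling means in practice (a minimal sketch, not the speaker's actual analysis), the Python code below simulates a speech amplitude envelope carrying a ~2 Hz strong-weak stress rhythm and a noisy cortical oscillation entrained to it, then quantifies their coupling with the phase-locking value in the delta band. All signals, frequencies and parameters are illustrative assumptions.

# Minimal sketch of quantifying speech-to-brain coupling on simulated data.
# Assumed, illustrative parameters throughout; not the speaker's pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200.0                      # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)    # 30 s of simulated data

# Simulated speech envelope: ~2 Hz stress rhythm plus a faster ~4 Hz
# syllable-rate modulation riding on top.
envelope = 1.0 + 0.6 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.sin(2 * np.pi * 4.0 * t)

# Simulated cortical signal: an oscillation entrained to the stress rate
# with a small phase lag, plus noise standing in for unrelated activity.
eeg = np.sin(2 * np.pi * 2.0 * t - 0.5) + 0.8 * np.random.randn(t.size)

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_locking_value(x, y, lo, hi, fs):
    """PLV between two signals within a band (1 = perfect phase locking)."""
    px = np.angle(hilbert(bandpass(x, lo, hi, fs)))
    py = np.angle(hilbert(bandpass(y, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

# Coupling in the delta band (~1-3 Hz), where prosodic stress patterns live.
print("delta-band PLV:", round(phase_locking_value(envelope, eeg, 1.0, 3.0, fs), 3))

In real studies the simulated envelope would be replaced by the amplitude envelope extracted from recorded speech, and the simulated oscillation by infant EEG or MEG; the same phase-based coupling logic applies.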

This talk is part of The Centre for Music and Science (CMS) series.
