
The neural correlates of intonation


If you have a question about this talk, please contact Jamie Douglas.

Neurolinguistic studies of intonation have drawn widely diverging conclusions about its neural underpinnings. What seems particularly baffling, at least to the intonational phonologist, is that they appear to show that intonational information is processed quite unlike other types of linguistic information: it elicits more extensive activation of right hemisphere brain structures, whereas abstract linguistic information typically engages a predominantly left hemisphere system comprising temporal and frontal areas (as observed for syntactic, morphological, and phonological information, including lexical tone). This apparent discrepancy between intonation processing and other types of linguistic processing is difficult to reconcile with current theories of intonational analysis, Autosegmental-Metrical theory in particular, which assume that intonational information is in part phonological in nature and therefore predict at least some parallels with other types of phonological processing.

In this paper, I will present a series of experiments in which we investigated whether a linguistically informed approach might help resolve this conundrum. We combined acoustic analysis, perception testing, functional Magnetic Resonance Imaging, and Event-Related Potentials to test the hypothesis that different types of intonational information have distinct neural correlates, depending on whether the information is phonological or phonetic (either para- or extra-linguistic) in nature. The former should engage a neural system typically involved in abstract linguistic processing, whereas the latter should be more right-lateralised; this would also account for previous findings.

The results supported the hypothesis: distinct brain areas are activated at different points in time as different aspects of the acoustic signal are processed in the course of abstraction from the incoming signal, with activation becoming more strongly left-lateralised in temporal and inferior frontal areas for ‘phonological intonation’ at later stages of processing. By contrast, ‘phonetic intonation’ (here, paralinguistic) predominantly recruits right hemisphere structures.

A key implication is that intonation is supported by two distinct cognitive and neural systems: one supporting information that is encoded in the linguistic system, the other reflecting perceptual processing more generally (e.g. Marslen-Wilson and Tyler 2007; cf. Gussenhoven 2002).

This talk is part of the Cambridge University Linguistic Society (LingSoc) series.


© 2006-2021 Talks.cam, University of Cambridge.