
An efficiency theory of complexity and related phenomena


If you have a question about this talk, please contact Napoleon Katsos.

In this talk I discuss the relationship between efficiency and complexity in language structure and language use. The notion of complexity has received more attention than efficiency, and there have been serious attempts to define metrics for it. I argue that these metrics are currently of limited value: they do not succeed in capturing the structural preferences of language use and of grammars that they are designed to predict. Instead, I argue that efficiency is the primary concept that we should be defining; I outline some of the different ways in which efficiency is achieved, and I link degrees of complexity in performance and grammars to this larger theory of efficiency.

Theories of linguistic complexity share the guiding intuition that “more [structural units/rules/representations] means more complexity”. This intuition has proved hard to formalize. See, for example, the lively debate in the papers of Linguistic Typology (2001) Vol. 5-2/3 responding to McWhorter’s (2001) analysis of creoles as “the … simplest” linguistic systems. The discussion went to the heart of the fundamental questions: what exactly is complexity, and how do we define it?

Some problems include the following:

Trade-offs: simplicity in one part of the grammar often results in complexity in another; several illustrations will be given.

Overall complexity: the trade-offs make it difficult to give an overall assessment of complexity, resulting in unresolvable debates over whether some grammars are more complex than others, when there is no clear metric of overall complexity for deciding the matter.

Defining grammatical properties: the (smaller) structural units of grammars are often clearly definable, but the rules and representations are anything but, and theories differ over whether they assume “simplicity” in their surface syntactic structures (see e.g. Culicover & Jackendoff 2005) or in derivational principles (as in Chomsky 1995), making quantification of complexity difficult in the absence of agreement over what to quantify.

Defining complexity itself: should our definition be stated in terms of the rules or principles that generate the structures of each grammatical area (i.e. in terms of the “length” of the description or grammar, as discussed most recently and extensively in Dahl 2004), or in terms of the structures themselves (the outputs of the grammar)? Definitions of this latter kind are inherent in metrics such as Miller & Chomsky’s (1963), which uses non-terminal-to-terminal node ratios, and in Frazier’s (1985), Hawkins’ (1994/2004) and Gibson’s (1998) metrics. Do these rule-based and structure-based definitions give the same complexity ranking or not?
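As a toy illustration (my own sketch, not part of the talk), a structure-based metric in the spirit of Miller & Chomsky’s (1963) node-ratio idea can be computed directly over a parse tree; the tree encoding and function name below are invented for the example:

```python
# Toy structure-based complexity metric: the ratio of non-terminal
# to terminal nodes in a parse tree (after Miller & Chomsky 1963).
# A tree is a (label, children) pair; a bare string is a terminal (word).

def count_nodes(tree):
    """Return (non_terminals, terminals) for a nested parse tree."""
    if isinstance(tree, str):           # a bare string is a terminal
        return 0, 1
    label, children = tree
    non_terminals, terminals = 1, 0     # count this non-terminal node
    for child in children:
        nt, t = count_nodes(child)
        non_terminals += nt
        terminals += t
    return non_terminals, terminals

# "the dog barked": S -> NP VP, NP -> Det N, VP -> V
tree = ("S", [("NP", [("Det", ["the"]), ("N", ["dog"])]),
              ("VP", [("V", ["barked"])])])

nt, t = count_nodes(tree)
print(nt, t, nt / t)   # 6 non-terminals, 3 terminals, ratio 2.0
```

A “flatter” analysis of the same sentence, with fewer intermediate non-terminal nodes, would yield a lower ratio, which is exactly why such structure-based metrics are sensitive to the theoretical assumptions discussed above.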

I argue that we can solve some of these problems if metrics of complexity are embedded in a larger theory of efficiency. Efficiency relates to the basic function of language, which is to communicate information from the speaker (S) to the hearer (H). I propose the following definition:

Communication is efficient when the message intended by S is delivered to H in rapid time and with minimal processing effort;

and the following hypothesis:

Acts of communication between S and H are generally optimally efficient; those that are not occur in proportion to their degree of efficiency.

Complexity metrics, by contrast, are defined on the grammar and structure of language. An important component of efficiency often involves structural and grammatical simplicity. But sometimes efficiency results in greater complexity. And it also involves additional factors that determine the speaker’s structural selections, leading to the observed preferences of performance, including:

speed in delivering linguistic properties in on-line processing;

fine-tuning structural selections to (i) frequency of occurrence and (ii) accessibility;

few on-line errors or garden paths.

These factors interact, sometimes reinforcing and sometimes opposing one another. In Hawkins (1994, 2004) I have presented evidence that grammatical conventions across languages conventionalize these performance factors and reveal a similar interaction and competition between them. Comparing grammars in terms of efficiency, rather than complexity alone, gives us a more complete picture of the forces that have shaped grammars and of the resulting variation (including creoles). Cross-linguistic variation patterns also provide quantitative data that can be used to determine the relative strength of different factors and the manner of their interaction with one another.

References

Chomsky, N. (1995). The Minimalist Program. Cambridge, Mass.: MIT Press.

Culicover, P.W. & Jackendoff, R. (2005). Simpler Syntax. Oxford: Oxford University Press.

Dahl, Ö. (2004). The Growth and Maintenance of Linguistic Complexity. Amsterdam: John Benjamins.

Frazier, L. (1985). Syntactic complexity. In: D. Dowty, L. Karttunen & A. Zwicky, eds., Natural Language Parsing. Cambridge: Cambridge University Press.

Gibson, E. (1998). Linguistic complexity: Locality of syntactic dependencies. Cognition 68: 1-76.

Hawkins, J.A. (1994). A Performance Theory of Order and Constituency. Cambridge: Cambridge University Press.

Hawkins, J.A. (2004). Efficiency and Complexity in Grammars. Oxford: Oxford University Press.

McWhorter, J. (2001). The world’s simplest grammars are creole grammars. Linguistic Typology 5: 125-166.

Miller, G.A. & Chomsky, N. (1963). Finitary models of language users. In: R.D. Luce, R.R. Bush & E. Galanter, eds., Handbook of Mathematical Psychology, Vol. II. New York: John Wiley.

This talk is part of the RCEAL Tuesday Colloquia series.


© 2006-2017 Talks.cam, University of Cambridge.