THE UNIVERSITY of  EDINBURGH · School of Philosophy, Psychology and Language Sciences
Linguistics and English Language

Simultaneous and sequential structure in language

D. R. Ladd

Individual research fellowship (January 2007 - July 2008) funded by the Leverhulme Trust

The following brief outlines of this project are taken from my original application to the Leverhulme Trust.

Abstract

We usually think of spoken sentences as strings of words and of words as strings of sounds, but many aspects of spoken language happen simultaneously, not sequentially (e.g. the same words can be spoken as a question or statement, angrily or happily). Languages use simultaneous signals in very different ways, which may mean that simultaneity and sequentiality are partially competing options in human language design. The goal of this project is to investigate the relation between simultaneous and sequential structure. This is important for current research on the origins of language and on the processing of speech in the brain.

Detailed research proposal

The goal of this research project is to write a monograph on simultaneous structure in human language. I have long been involved in some aspects of this topic (notably intonation, tone, and "gradience"), while other aspects have been stimulated by developments in neighbouring fields, including animal communication, neuroimaging, language evolution and human genetics, and sign language. The monograph will be addressed to specialists in all these fields. Its purpose is to clarify several long-standing issues that will be central to the way neighbouring fields investigate linguistic questions.

Several kinds of simultaneity can be distinguished, though the existence of borderline cases between every adjacent pair in the following list shows that the issue is far from clear-cut:

- communicative simultaneity: simultaneous transmission of propositional, paralinguistic (emotional, etc.), and indexical (speaker identity, etc.) information;

- prosodic simultaneity: transmission of cues to structure (e.g. phrasing) and/or pragmatic function (e.g. question intonation) simultaneously with sequentially-arranged words;

- morphological simultaneity: simultaneous transmission of elements of morphemes (as in e.g. sign language morphology, ablaut, grammatical tone);

- phonemic simultaneity: simultaneous transmission of phonemes (e.g. lexical tone and vowels);

- featural simultaneity: bundling of phonological features (e.g. the vowel /i/ in Turkish is simultaneously high, front and unrounded);

- gestural simultaneity: articulators work independently but in concert in speech production.

The first of these forms of simultaneity is very old in the primate lineage. Many primate call systems allow conspecifics to identify three types of information at once, which we may call "propositional" (e.g. aerial predator warning), "paralinguistic" (e.g. high urgency), and "indexical" (speaker identity) [cf. R. Seyfarth & D. Cheney (2003) Annual Review of Psychology 54:145-73]. There is evidence [P. Belin, R. Zatorre, et al. (2000) Nature 403:309-12] that these three kinds of vocal information are processed by distinct brain structures, similar to the three analogous kinds of visual information involved in face perception [V. Bruce & A. Young (1986) British Journal of Psychology 77:305-27]. This appears to warrant abstracting away from paralinguistic and indexical signalling when describing language structure, and treating the propositional content (the strictly linguistic, and therefore presumably uniquely human, aspect of communication) as one of three parallel streams of information.

However, there are many problems with this idealisation. These include the fact that many paralinguistic and indexical cues (esp. speech rate) are intrinsically bound up with the linguistic signal, and that all human languages have many purely linguistic devices to enhance the effective transmission of paralinguistic and indexical information (e.g. sociolects, honorifics). Such intermingling of information streams means that experiments looking for neurological correlates of paralinguistic and indexical processing risk either oversimplification or serious confounds. It is possible that a model based on the notion of modulation [H. Traunmüller, Phonetica 51:170-83] could be used to decompose the signal into mathematically tractable distinct information streams, but at the very least there is no simple equation between e.g. voice source characteristics and paralinguistic information. It is likely that Bolingerian "gradience" plays a role here: communicative and prosodic simultaneity involve gradient signalling (e.g. louder = more urgent) while morphological and phonemic simultaneity involve discrete categories (e.g. high vs. mid tone). Language acquirers must discover how cues such as pitch and voice quality are used in their language, and any brain centres involved in processing paralinguistic or indexical cues must be tuned during language acquisition.

That is one theoretical reason for not simply abstracting the paralinguistic and the indexical away from the propositional. The other is the existence of morphological, phonemic and featural simultaneity. Modern linguistics has generally idealised language structure as hierarchical and linear, i.e. as involving bracketed strings. Problems with this idealisation have been acknowledged but never resolved: simultaneous morphology has always sat uneasily with definitions of the morpheme [e.g. E. Nida (1948) Language 24:414-41], and phonemic tone has always posed conceptual problems for phonologists. By treating tone as a feature of a vowel, orthodox Jakobsonian/generative phonology equated "featural simultaneity" with "phonemic simultaneity"; early autosegmental phonology [e.g. W. Leben 1973, Suprasegmental Phonology, MIT PhD thesis] groped toward a distinction between "autosegments" and "features", but quickly moved back to the more Jakobsonian notion when it began to treat assimilation and vowel harmony in the same way as tone.

The central idea I wish to explore in the monograph is that simultaneity and sequentiality are complementary tendencies of the human language faculty, with good reason to believe that simultaneity is phylogenetically older. Assuming that proto-language evolved from a primate call system, and given that "communicative simultaneity" predates the human lineage, then an obvious early strategy for adding complexity to spoken messages would be to develop categorically distinct modifications of calls (i.e. "morphological" and "phonemic" simultaneity). At the same time, the (independent?) development of the ability to produce complex bracketed strings would provide a competing strategy: in effect, structure can be built up or out. All languages, of course, now make use of bracketed strings, presumably because of perceptual limits on simultaneous discrete categories, but simultaneous morphology and phonology are still common, and regularly appear when circumstances are favourable (notably in sign language; see M. Aronoff et al. [2005, Language 81:301-44], who argue that sequential morphology requires many generations to develop).

As part of the research I will explore various typological hypotheses - for example, that languages making more use of simultaneous structure (e.g. tone, ablaut, phonemic voice quality, etc.) make less use of sequential structure (e.g. agglutinative morphology). Prima facie, ablaut and tone have nothing in common; if they tend to co-occur, that would constitute evidence for thinking of simultaneity as a unified phenomenon. I will also explore hypotheses about both the cultural and genetic evolution of language, including patterns of tonogenesis and tone loss. The striking geographic concentrations of tone languages in sub-Saharan Africa and East and Southeast Asia, and of strongly non-tonal languages in much of the rest of Eurasia and North Africa and in Australia, demand explanation; there are several possibilities worth considering, even including genetic effects. (It has recently been shown that attachment preferences in ambiguous sentences like She saw the man with the binoculars are correlated with verbal working memory abilities [B. Swets, T. Desmet, D.Z. Hambrick, F. Ferreira (2006) Journal of Experimental Psychology: General]; it is therefore conceivable that some heritable difference in cognitive processing ability predisposes language acquirers to be better at sequential structure or at simultaneous structure.) However, no genetic hypothesis can stand up to scrutiny until we have ruled out language spread and cultural evolution as explanations, and - more importantly - until we have clarified the typological facts and cleared up the long-standing theoretical issues about the place of simultaneous structure in a predominantly sequential system.

posted February 2007
