21 March: Tillmann Vierkant


Communication or control? The role of expressive behaviours in Non-Gricean routes to language evolution

Tillmann Vierkant (Edinburgh)

Tuesday 21 March 2017, 11:00–12:00
1.17 Dugald Stewart Building

Orthodoxy has it that language evolution requires Gricean communicative intentions and therefore an understanding of nested metarepresentations. The problem with the orthodoxy is that it is hard to see how non-linguistic creatures could have such a sophisticated understanding of mentality. Philosophers like Bar-On (2013) have therefore recently attempted to develop a non-Gricean account of language acquisition building on the information-rich and subtle communicative powers of expressive behaviours. This paper aims to sketch an alternative (additional) account of why expressive behaviours might be crucial in language evolution. On this account, expressive behaviours are important not only because of their role in animal communication but also because they are enablers of early forms of mindshaping.

17 March: Yasamin Motamedi


Artificial sign language learning: A method for evolutionary linguistics

Yasamin Motamedi (MPI Nijmegen)

Friday 17 March 2017, 15:00–15:30
3.10 Dugald Stewart Building

Previous research in evolutionary linguistics has made wide use of artificial language learning (ALL) paradigms, where learners are taught artificial languages in laboratory experiments and are subsequently tested on the language they have learnt. The ALL framework has proved particularly useful in the study of the evolution of language, allowing the manipulation of specific linguistic phenomena that cannot be isolated for study in natural languages. Furthermore, using ALL in populations of learners, for example with iterated learning methods, has highlighted the importance of cultural evolutionary processes in the evolution of linguistic structure.
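
To make the iterated learning idea concrete, here is a minimal, purely illustrative sketch (not the paradigm used in this thesis): a toy language mapping shape–colour meanings to two-letter words is passed along a chain of simulated learners, each of whom observes only half of the previous generation's language (the transmission bottleneck) and generalises the rest. The meaning space, labels, generalisation rule and bottleneck size are all invented for illustration.

```python
import random

random.seed(0)

# Toy meaning space: every combination of two shapes and three colours.
MEANINGS = [(shape, colour) for shape in "SC" for colour in "RGB"]

def learn(teacher_language, bottleneck=0.5):
    """One generation: the learner observes a random subset of the
    teacher's meaning-word pairs and reconstructs the rest by reusing
    the first letter seen with each shape and the second letter seen
    with each colour (a crude generalisation bias)."""
    items = sorted(teacher_language.items())
    seen = dict(random.sample(items, k=int(len(items) * bottleneck)))
    shape_labels = {m[0]: w[0] for m, w in seen.items()}
    colour_labels = {m[1]: w[1] for m, w in seen.items()}
    language = {}
    for meaning in MEANINGS:
        if meaning in seen:
            language[meaning] = seen[meaning]
        else:
            s = shape_labels.get(meaning[0], random.choice("xyz"))
            c = colour_labels.get(meaning[1], random.choice("xyz"))
            language[meaning] = s + c
    return language

# Generation 0: a holistic language of random two-letter words.
language = {m: "".join(random.choices("abcdef", k=2)) for m in MEANINGS}
for generation in range(10):
    language = learn(language)
```

Because unseen meanings are reconstructed by recombining observed part-labels, regular (compositional) mappings survive the bottleneck better than holistic ones, which is the basic logic behind iterated-learning accounts of linguistic structure.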

In my thesis, I present a novel methodology for studying the evolution of language in experimental populations. In the artificial sign language learning (ASLL) methodology I develop, participants learn manual signalling systems that are used to interact with other participants. The ASLL methodology combines features of previous ALL methods with those of silent gesture, where hearing participants must communicate using only gesture and no speech. However, ASLL provides several advantages over previous methods. Firstly, reliance on the manual modality reduces the interference of participants’ native languages, exploiting a modality with linguistic potential that is not normally used linguistically by hearing language users. Secondly, cultural evolutionary research in the manual modality offers comparability with the only current evidence of language emergence and evolution in natural languages: emerging sign languages that have evolved over the last century.

The implementation and development of ASLL in the present work provides an experimental window onto the cultural evolution of language in the manual modality. I detail a set of experiments that manipulate both linguistic features (investigating category structure and verb constructions) and cultural context, to understand precisely how the processes of interaction and transmission shape language structure. The findings from these experiments offer a more precise understanding of the roles that different cultural mechanisms play in the evolution of language, and further build a bridge between data collected from natural languages in the early stages of their evolution and the more controlled environments of experimental linguistic research.

14 March: Kearsy Cormier


Pronouns, agreement, classifiers and role shift: What sign languages can tell us about linguistic diversity and linguistic universals

Kearsy Cormier (UCL)

Tuesday 14 March 2017, 11:00–12:30
1.17 Dugald Stewart Building

The search for linguistic universals (and understanding universals in the face of diversity) is one of the key issues in linguistics today. Yet the vast majority of linguistic research has focused only on spoken languages. Sign languages constitute an important test case for theories on universals and diversity, since a language “universal” only deserves this name if it holds both for signed and spoken languages, and languages in a different modality surely have much to teach us about the full range of diversity within human language. In this talk I consider four morphosyntactic/discourse phenomena found in sign languages that have traditionally been assumed to be the same as in spoken languages but which, on closer inspection, reveal some fundamental differences relating to particular affordances of the visual-spatial modality. In order to understand these differences in more detail, linguists must consider the multimodal nature of human language (including gesture) rather than just the classic linguistic characteristics which are the exclusive focus of much work in mainstream approaches to the study of language.

7 March: Anna Jon-And


Modeling the role of acquisition in contact-induced language change

Anna Jon-And (Stockholm University)

Tuesday 7 March 2017, 11:00–12:30
1.17 Dugald Stewart Building

Accelerated language change in contact settings, especially language shift, has commonly been attributed to innovations during the second language acquisition process. Negative correlations have also been attested between proportions of non-native speakers and morphosyntactic complexity in cross-linguistic data. At the same time, cultural evolution experiments and computational models have revealed learnability as a general constraint in language evolution, suggesting that more learnable features, such as morphological simplicity, would be favored by all language acquisition and not only by second language acquisition. Here, I use agent-based computational simulations to test whether diffusion of linguistic innovation in a language shift setting may result from a general acquisition effect reinforced by large proportions of learners, or whether special weight needs to be attributed to second language acquisition. The simulations are informed by chronological demographic and linguistic data from the ongoing language shift from Bantu languages to Portuguese in Maputo, Mozambique. Parameters are set to proportions of native and non-native speakers over time and the model’s predictions are compared to variation in verbal morphology and use of locative prepositions in Portuguese.
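
As a rough illustration of the competing hypotheses (this is not Jon-And's actual model; the rates, population size and generation count below are all invented), an agent-based toy model can let both first- and second-language learners occasionally replace a conservative variant with a more learnable innovative one, with L2 learners doing so at a higher rate:

```python
import random

random.seed(1)

def simulate(p_l2, generations=20, pop_size=200,
             l1_innovation=0.02, l2_innovation=0.20):
    """Toy model: each generation, every agent is replaced by a learner
    who acquires a variant from a randomly chosen model speaker.
    Learners sometimes replace the conservative variant (0) with the
    more learnable innovative variant (1); second-language (L2)
    learners do so at a higher rate than first-language (L1) learners.
    Returns the final frequency of the innovative variant."""
    population = [0] * pop_size            # everyone starts conservative
    for _ in range(generations):
        new_population = []
        for _ in range(pop_size):
            variant = random.choice(population)   # learn from a random model
            rate = l2_innovation if random.random() < p_l2 else l1_innovation
            if variant == 0 and random.random() < rate:
                variant = 1                # learnability-driven innovation
            new_population.append(variant)
        population = new_population
    return sum(population) / pop_size

low_contact = simulate(p_l2=0.05)   # few L2 learners
high_contact = simulate(p_l2=0.60)  # language-shift scenario
```

A non-zero L1 innovation rate means some change occurs even without contact (the general acquisition effect), while raising the proportion of L2 learners accelerates diffusion; comparing runs like these against real chronological data is the general strategy the abstract describes.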

28 February: Alex Papadopoulos-Korfiatis


An autopoietic approach to cultural transmission chains

Alex Papadopoulos-Korfiatis (Edinburgh)

Tuesday 28 February 2017, 11:00–12:30
1.17 Dugald Stewart Building

One of the problems of autopoiesis as a biological, bottom-up, non-representational theory of cognition is that it struggles with scaling up to high-level cognitive behaviour such as language. The Iterated Learning model, a theory of language evolution based on its transmission from agent to agent in cultural chains, is a promising candidate for providing a first step towards a non-representational account of language; our goal in this work is to combine these two approaches. In order to do that, we introduce a minimal joint action “left/right dancing” task that can be solved in multiple ways. Through individual episodes of reinforcement learning between simulated robotic agents, we show that an initial expert agent’s behaviour persists in cultural transmission chains; we investigate the conditions under which these chains break down and re-emerge, drawing interesting parallels to existing Iterated Learning research.
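
The chain dynamic can be caricatured in a few lines (a sketch only: the actual task is a joint “dance” between simulated robots, not the bandit-style learner below). Here each naive learner acquires a left/right convention from the previous expert through simple reinforcement, then serves as the expert for the next learner:

```python
import random

random.seed(2)

ACTIONS = ["left", "right"]

def train_learner(expert_move, episodes=200, alpha=0.3, eps=0.1):
    """A naive learner repeatedly plays a coordination episode with an
    expert whose move is fixed; matching the expert is rewarded, and
    the learner's action values are updated by simple reinforcement.
    Returns the move the trained learner now prefers."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < eps:
            action = random.choice(ACTIONS)   # occasional exploration
        else:
            action = max(q, key=q.get)        # otherwise exploit
        reward = 1.0 if action == expert_move else 0.0
        q[action] += alpha * (reward - q[action])
    return max(q, key=q.get)

# Transmission chain: each trained learner becomes the next expert.
convention = "left"                 # the initial expert's convention
chain = [convention]
for _ in range(10):
    convention = train_learner(convention)
    chain.append(convention)
```

With enough training episodes the initial convention typically persists down the chain; shortening training or raising the exploration rate eps makes chains break down and new conventions emerge, which is the regime the abstract investigates.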

21 February: Fredrik Jansson


Modelling the evolution of creoles

Fredrik Jansson (Stockholm University)
Joint work with Mikael Parkvall and Pontus Strimling

Tuesday 21 February 2017, 11:00–12:30
1.17 Dugald Stewart Building

We are interested in the contact situation in which several existing languages converge into one: a creole. Various theories have been proposed regarding the origin of creole languages. Describing a process for which only the end result is documented involves several methodological difficulties. In this paper we address some of these issues by combining a novel mathematical model with detailed empirical data on the origin and structure of Mauritian Creole. Our main focus is on whether Mauritian Creole may have originated solely from a mutual desire to communicate, without targeted learning, and we show that a minimal model can generate good predictions. With a confirmation bias towards learning from successful communication, the model’s output matches Mauritian Creole more closely than it matches any of the input languages, including the lexifier French, thus providing a compelling and specific hypothetical model of how creoles emerge. The results also show that it may be possible for a creole to develop quickly after first contact, and that it was created mostly from material found in the input languages, but without inheriting their morphology.
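
A minimal sketch of the kind of dynamic described (not the authors' actual model: the feature inventory, update weights and population proportions are all invented): agents who start with the feature variants of different input languages update usage counts after every exchange, weighting exchanges that already succeeded more heavily, a confirmation bias towards successful communication.

```python
import random

random.seed(3)

# Three toy input languages: a variant (0 or 1) for each of four features.
LANGS = {"A": [0, 0, 1, 1], "B": [1, 0, 1, 0], "C": [0, 1, 0, 0]}
N_FEATURES = 4

def simulate(pop_sizes, rounds=30000, w_success=2.0, w_failure=1.0):
    """Each agent keeps usage counts per feature and produces its
    currently dominant variant.  After each exchange the hearer adds
    weight to the variant it heard, adding more when the exchange
    succeeded (the variants already matched)."""
    agents = []
    for lang, n in pop_sizes.items():
        for _ in range(n):
            counts = [{0: 0.0, 1: 0.0} for _ in range(N_FEATURES)]
            for f, v in enumerate(LANGS[lang]):
                counts[f][v] = 5.0          # initial weight on the L1 variant
            agents.append(counts)
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        f = random.randrange(N_FEATURES)
        said = max(speaker[f], key=speaker[f].get)
        success = said == max(hearer[f], key=hearer[f].get)
        hearer[f][said] += w_success if success else w_failure
    # The emergent system: the population-majority variant per feature.
    majority = []
    for f in range(N_FEATURES):
        ones = sum(max(a[f], key=a[f].get) for a in agents)
        majority.append(1 if ones > len(agents) / 2 else 0)
    return majority

creole = simulate({"A": 40, "B": 35, "C": 25})
```

Under success-weighted updating the population tends to converge feature by feature, yielding a system assembled entirely from input-language material that need not coincide with any single input language.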

14 February: Jon W. Carr


Informativeness: A review of work by Regier and colleagues

Jon W. Carr (Edinburgh)

Tuesday 14 February 2017, 11:00–12:30
1.17 Dugald Stewart Building

A growing body of work from Terry Regier’s lab at Berkeley suggests that semantic variation is grounded in efficient communication: well-adapted semantic systems should be both simple and informative. This has parallels with work done here at the Centre for Language Evolution, although we typically use the words ‘compressible’ and ‘expressive’ to refer to roughly the same ideas.

In their view, a language is simple if it uses few words or rules; for us, a language is compressible if structure inherent to the system allows for a compressed cognitive representation. Whatever we choose to call it, this pressure for a compact representation is countered by a pressure to be, in their words, informative or, in ours, expressive; for a language to be communicatively useful, it must be able to make useful meaning distinctions. Regier and colleagues define ‘informativeness’ in terms of how effectively a meaning can be transmitted from one individual to another: how much information will be lost every time a meaning is transmitted. Our framework, on the other hand, defines expressivity as the number of words available to interlocutors to make meaning distinctions.
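
The information-loss notion can be made concrete with a small sketch (an illustration of the general idea only, not Regier and colleagues' exact formulation, which also weights meanings by need probabilities): informativeness is inversely related to the expected surprisal of the intended meaning once the listener has heard its word.

```python
import math

def communicative_cost(lexicon, meanings):
    """Average surprisal, in bits, of the intended meaning given its
    word: the listener spreads belief uniformly over the word's
    extension, so each use of a word covering k meanings costs
    log2(k) bits.  Lower cost = more informative."""
    extensions = {}
    for meaning in meanings:
        extensions.setdefault(lexicon[meaning], []).append(meaning)
    total = 0.0
    for meaning in meanings:
        k = len(extensions[lexicon[meaning]])
        total += math.log2(k)
    return total / len(meanings)

meanings = list(range(8))
one_word = {m: "w" for m in meanings}                 # maximally simple
two_words = {m: "w0" if m < 4 else "w1" for m in meanings}
unique_words = {m: "w%d" % m for m in meanings}       # maximally expressive

cost_one = communicative_cost(one_word, meanings)         # 3.0 bits
cost_two = communicative_cost(two_words, meanings)        # 2.0 bits
cost_unique = communicative_cost(unique_words, meanings)  # 0.0 bits
```

A one-word language is maximally simple but costly; a fully distinct vocabulary is costless but least compressible; the trade-off between these poles is exactly the simple/informative (compressible/expressive) tension described above.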

In this talk I will synthesize the findings from several of their papers with a view to highlighting the similarities and differences between their work and ours. In particular I want to focus on an iterated learning experiment they have conducted (Carstensen et al., 2015), and I will also describe their information-theoretic model of informativeness and the predictions I believe it should make. I want to suggest that a fruitful way forward could be to combine their formalization of informativeness with our formalization of compressibility. Finally, I’ll top this off with some experiments we have conducted that look at the differences between two ways of partitioning a space into categories.

31 January: Thom Scott-Phillips


The evolution of ostensive communication (or: Why chimpanzees tend to fail the object choice task (maybe))

Thom Scott-Phillips (Durham)
Work with Christophe Heintz

Tuesday 31 January 2017, 11:00–12:30
1.17 Dugald Stewart Building

How can an intention to inform another individual be satisfied? One possible answer is to provide directly perceptible evidence of the content. Think, for instance, of a primate display of size and strength: standing on two legs, thumping chest, etc. Another way is to provide not direct evidence for the content itself, but rather evidence for the intention to express the content. If, for instance, Mary eats berries, and suffers no ill consequences, she provides Peter with directly perceptible evidence that the berries are edible. Alternatively, however, Mary might just mime eating berries. In and of itself, miming provides no directly perceptible evidence for the content – for Mary does not eat the berries. Miming does however provide evidence of Mary’s intention that Peter believe that the berries are edible. This change, from evidence for the content to evidence for the intention, allows for an extremely dynamic form of communication, the existence of which helps to explain why humans are such inveterate communicators, and why the size and complexity of human cultures differ from those of any other species by several orders of magnitude. This form of communication was first described by Paul Grice, and receives its most precise and cognitively plausible description in the work of Dan Sperber and Deirdre Wilson.

A key question, particularly from ontogenetic and phylogenetic perspectives, is: In what ecological conditions might communication of this sort emerge? Building on my previous work, I will highlight the important role that cooperative ecologies play in facilitating the emergence of this type of communication. In so doing, I will propose a possible explanation of chimpanzee performance in pointing tasks; and also describe an important but under-discussed feature of ostensive communication that could be profitably used in future experimental studies, particularly those focused on the question of whether any non-human species ever communicates in this way.

24 January: Isabelle Dautriche


Weaving an ambiguous lexicon

Isabelle Dautriche (Edinburgh)

Tuesday 24 January 2017, 11:00–12:30
1.17 Dugald Stewart Building

Modern cognitive science of language concerns itself with (at least) two fundamental questions: how do humans learn language? — the learning problem — and why do the world’s languages exhibit some properties and not others? — the typology problem. In this dissertation, I attempt to link these two questions by looking at the lexicon, the set of word-forms and their associated meanings, and ask: why do lexicons look the way they do? And can the properties exhibited by the lexicon be (in part) explained by the way children learn their language?

One striking observation is that the set of words in a given language is highly ambiguous and confusable. Words may have multiple senses (e.g., homonymy, polysemy) and are built from a finite set of sounds arranged in ways that potentially increase their confusability (e.g., minimal pairs). Lexicons bearing such properties present a problem for children learning their language, who seem to have difficulty learning similar-sounding words and resist learning words with multiple meanings. Using lexical models and experimental methods with toddlers and adults, I present quantitative evidence that lexicons are, indeed, more confusable than would be expected by chance alone. I then present empirical evidence suggesting that toddlers have the tools to bypass these problems, given that ambiguous or confusable words are constrained to appear in distinct contexts. Finally, I submit that the study of ambiguous words reveals factors that are missing from current accounts of word learning.
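
The chance-baseline comparison can be illustrated with a toy sketch (the mini-lexicon is invented; real analyses use full lexicons and phonotactically matched baselines): count minimal pairs in a lexicon and compare against null lexicons built by resampling segments while preserving word lengths.

```python
import itertools
import random

random.seed(4)

def minimal_pairs(lexicon):
    """Count word pairs of equal length differing in exactly one segment."""
    count = 0
    for w1, w2 in itertools.combinations(lexicon, 2):
        if len(w1) == len(w2) and sum(a != b for a, b in zip(w1, w2)) == 1:
            count += 1
    return count

def resampled_lexicon(lexicon):
    """Null lexicon: rebuild each word by drawing segments at random
    from the pooled segment inventory, preserving word lengths."""
    pool = [segment for word in lexicon for segment in word]
    return ["".join(random.choice(pool) for _ in word) for word in lexicon]

lexicon = ["cat", "cap", "can", "cot", "mat", "map", "man", "dog"]
observed = minimal_pairs(lexicon)
baseline = sum(minimal_pairs(resampled_lexicon(lexicon))
               for _ in range(200)) / 200
```

The toy lexicon packs far more minimal pairs than its resampled baselines do on average, the same direction of effect the dissertation reports for natural lexicons.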

Taken together, this research suggests that ambiguous and confusable words, while present in the language, may be restricted in their distribution in the lexicon, and that these restrictions reflect (in part) how children learn languages.

17 January: Andy Wedel


Signal evolution within the word

Andy Wedel (University of Arizona)

Tuesday 17 January 2017, 11:00–12:30
1.17 Dugald Stewart Building

Languages have been shown to optimize their lexicons over time with respect to the amount of signal allocated to words relative to their informativity: words that are on average less predictable in context tend to be longer, while those that are on average more predictable tend to be shorter (Piantadosi et al. 2011, cf. Zipf 1935). Further, psycholinguistic research has shown that listeners are able to incrementally process words as they are heard, progressively updating inferences about what word is intended as the phonetic signal unfolds in time. As a consequence, phonetic cues early in the signal for a word are more informative about word-identity because they are less constrained by previous segmental context. This suggests that languages should not only optimize the total amount of signal allocated to different words, but optimize the distribution of that information across the word. Specifically, words that are on average less predictable in context should preferentially target highly informative phonetic cues early in the word, while preserving a ‘long tail’ of redundant cues later in the word. In this talk I will review recent evidence that this is the case in English. Further, languages show a strong tendency to develop phonological patterns which support phonetic cue informativity at the beginnings of words, while reducing cue informativity later in words. I will argue that this typological tendency plausibly arises from this word-level phenomenon.
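
The incremental-processing point can be illustrated with a toy computation (an illustration only: the three-word lexicon is contrived so that word identity is settled at the first segment, whereas real analyses use phonological corpora with frequency-weighted probabilities): the surprisal of each segment given the words compatible with the prefix heard so far.

```python
import math

LEXICON = ["singing", "ringing", "winging"]

def segment_surprisal(word, lexicon):
    """Surprisal (bits) of each segment given the preceding prefix,
    with all lexicon words assumed equally probable: -log2 of the
    fraction of prefix-compatible words continuing with that segment."""
    profile = []
    for i, segment in enumerate(word):
        compatible = [w for w in lexicon if w.startswith(word[:i])]
        matching = sum(1 for w in compatible if w[i] == segment)
        profile.append(-math.log2(matching / len(compatible)))
    return profile

profile = segment_surprisal("singing", LEXICON)
```

Here the first segment carries all the discriminating information (about 1.58 bits) and the ‘-inging’ tail is fully redundant (0 bits per segment): a miniature version of the early-informative-cue, long-redundant-tail profile described above.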