18 October: Marieke Woensdregt


The co-evolution of language and mindreading

Marieke Woensdregt (Edinburgh)

Tuesday 18 October 2016, 11:00–12:30
1.17 Dugald Stewart Building

The hypothesis that mindreading (theory of mind) and language have facilitated or driven each other’s evolution has been put forward in several different ways and on several different grounds. The evolution of explicit mindreading skills may have been important for the evolution of language because it allows us to express and recognize communicative intentions. The evolution of linguistic conventions may have been important for the evolution of mindreading because it allows us to make our mental states explicit, and transmit our understanding of mental states to others. The same arguments have been made on the developmental timescale – based on studies correlating mindreading skills with word learning ability, and studies showing that certain linguistic input can facilitate the development of mindreading.

In this talk I will present an agent-based model that explores under what circumstances language and mindreading can co-evolve – where language is implemented as lexicon-learning, and mindreading as the ability to learn another agent’s perspective on the world. Agents can learn both the lexicon and the perspective of another agent through Bayesian inference, but these two skills crucially bootstrap each other during development – meaning that neither skill could be learned without the ability to learn the other.
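To make this bootstrapping concrete, here is a minimal sketch of joint Bayesian inference over a toy lexicon and a speaker perspective. It is an illustration under invented assumptions, not the model from the talk: the referent locations, the two-point perspective hypothesis space, the salience function and the noise parameter are all placeholders.

```python
import itertools
import numpy as np

# Toy world: three referents on a line; a "perspective" is a point on that line.
referents = np.array([0.0, 0.5, 1.0])
perspectives = [0.0, 1.0]            # hypothesis space for the speaker's viewpoint
words = ["a", "b"]

# A lexicon maps each referent to a word; enumerate all deterministic lexica.
lexica = list(itertools.product(words, repeat=len(referents)))

def salience(perspective):
    """Referents closer to the speaker's perspective are more likely to be talked about."""
    w = 1.0 / (np.abs(referents - perspective) + 0.1)
    return w / w.sum()

def likelihood(word, lexicon, perspective, noise=0.05):
    """P(word | lexicon, perspective): the speaker picks a salient referent and names it."""
    p = 0.0
    for r, sal in enumerate(salience(perspective)):
        p_word = 1 - noise if lexicon[r] == word else noise / (len(words) - 1)
        p += sal * p_word
    return p

def joint_posterior(observed_words):
    """Grid posterior over (lexicon, perspective) pairs given the observed words."""
    log_post = {(lex, persp): sum(np.log(likelihood(w, lex, persp)) for w in observed_words)
                for lex in lexica for persp in perspectives}
    logs = np.array(list(log_post.values()))
    probs = np.exp(logs - logs.max())
    probs /= probs.sum()
    return dict(zip(log_post.keys(), probs))

# A speaker at perspective 1.0 mostly names the referents nearest to them:
posterior = joint_posterior(["b", "b", "a", "b"])
best_lexicon, best_perspective = max(posterior, key=posterior.get)
print(best_lexicon, best_perspective)
```

The sketch shows why the two skills are intertwined: the posterior is defined over (lexicon, perspective) pairs, so observed words cannot be interpreted as evidence about the lexicon without hypotheses about the speaker’s perspective, and vice versa.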

Over the course of iterated learning this co-development results in interesting evolutionary dynamics. Populations of agents that develop in this way are only able to establish an informative language if there is either a strong pressure in favour of expressivity, or a weak pressure that favours successful perspective-takers as cultural parents. Given such a weak pressure in favour of perspective-taking, an informative language can be established even when learners have a strong cognitive bias in favour of simple (compressible) languages. I will discuss these simulation results in the light of theories about the evolution of Gricean communication and social coordination.
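For a picture of the transmission side, the skeleton below sketches iterated learning with weighted selection of cultural parents. It is only an illustration of the general scheme; the learn, produce and score functions are placeholders rather than the ones used in these simulations.

```python
import random

def iterated_learning(initial_language, learn, produce, score,
                      population_size=20, generations=50, selection_strength=1.0):
    """Generic iterated-learning skeleton with weighted parent selection (a sketch).

    learn(data) -> language      e.g. Bayesian inference over lexicon and perspective
    produce(language) -> data    utterances observed by the next generation
    score(language) -> float     weight for being chosen as a cultural parent, e.g.
                                 communicative success or perspective-inference accuracy;
                                 selection_strength = 0 gives unbiased transmission.
    """
    population = [initial_language] * population_size
    for _ in range(generations):
        weights = [score(lang) ** selection_strength for lang in population]
        parents = random.choices(population, weights=weights, k=population_size)
        population = [learn(produce(parent)) for parent in parents]
    return population
```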

14 October: Rachael Bailes (pre-viva talk)


An adaptationist psycholinguistic approach to the pragmatics of reference

Rachael Bailes (Edinburgh)

Friday 14 October 2016, 10:00–10:30
1.17 Dugald Stewart Building

This thesis formalises two broadly polarised approaches to the question of context integration in linguistic comprehension, and makes explicit the adaptationist particulars that each mechanistic account may imply. The Social Adaptation Hypothesis (SAH) holds that linguistic comprehension is performed by relevance-oriented inferential mechanisms that have been selected for by a social environment (i.e. inference-using conspecifics). In particular, the SAH holds that linguistic conventions are attended to in the same way as other ostensive stimuli, and comprehended on the basis of contextual information. The Linguistic Adaptation Hypothesis (LAH) holds that linguistic comprehension is performed by specialised cognition that has been selected for by a cultural, linguistic environment (i.e. language-using conspecifics). The LAH holds that linguistic conventions may constitute a privileged domain of input for the comprehension system, and in particular that the nature of linguistic representations can support comprehension without mediation by inferential cognition or contextual integration.

The remainder of the thesis investigates referential comprehension with a series of four reaction-time experiments, using a conversational precedent paradigm, that bear on the contrastive predictions of these two adaptationist accounts. Two additional production experiments measured the effect of visual context on whether speakers maintained their linguistic precedents. The broad question that covers all of these experiments is: how sensitive is the comprehension process to linguistic input qua linguistic input, relative to various other grades of contextual information?

In light of the evidence presented and its limitations, I conclude that there is empirical support for the LAH, and that this alternative account should be considered as part of the unfolding conversation on pragmatics in evolutionary linguistics. More broadly, the thesis attempts to demonstrate that psycholinguistic processing is a valuable object of study for evolutionary linguistics, and that evolutionary theory can be a useful conceptual tool in psycholinguistics.

11 October: Dan Dediu


Vocal tract anatomy and language: Non-linguistic factors may influence language diversity and evolution

Dan Dediu (MPI Nijmegen)

Tuesday 11 October 2016, 11:00–12:30
1.17 Dugald Stewart Building

Several strands of evidence have emerged recently that seem to suggest that language is also shaped by non-linguistic factors. In this talk I will explore several such proposals, focusing on a particular subtype: biases with a biological component. I will give a glimpse of work in progress exploring the link between vocal tract anatomy and physiology on the one hand and phonetic/phonological diversity on the other, and discuss the relevance of such “biased cultural evolution” for understanding language evolution, language change and present-day linguistic diversity.

4 October: Nick Barton


Limits to the accumulation of information by natural selection

Nick Barton (Institute of Science and Technology Austria)

Tuesday 4 October 2016, 11:00–12:30
1.17 Dugald Stewart Building

How complex can an organism evolve to be, and how much complexity can be maintained? How should we measure complexity? I will discuss the fundamental constraints set by mutation rate and population size, and also (more speculatively) the corresponding constraints on the evolution of language.
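As general background (textbook population-genetic relations, not results from the talk): drift swamps selection on an allele with advantage s unless roughly 2·Ne·s ≳ 1 in a population of effective size Ne, so fitness differences finer than about 1/(2·Ne) are invisible to selection; and with a genomic deleterious mutation rate U, Haldane’s mutation-load argument puts equilibrium mean fitness near exp(−U), independently of how strongly each individual mutation is selected against. Relations of this kind are why mutation rate and population size set the scale for how much adaptive information selection can accumulate and maintain.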

A preprint of the forthcoming paper in Heredity is available.

27 September: Raquel Alhama


What do Neural Networks need in order to generalize?

Raquel Alhama (Universiteit van Amsterdam)

Tuesday 27 September 2016, 11:00–12:30
1.17 Dugald Stewart Building

In an influential paper, reporting on a combination of artificial language learning experiments with babies, computational simulations and philosophical arguments, Marcus et al. (1999) claimed that connectionist models cannot account for human success at learning tasks that involved generalization of abstract knowledge such as grammatical rules. This claim triggered a heated debate, centered mostly around variants of the Simple Recurrent Network model (Elman, 1990).

In our work, we revisit this unresolved debate and analyze the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa; rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two methods that aim to provide such an initial state: a manipulation of the initial connections of the network in a cognitively plausible manner (concretely, by implementing a “delay-line” memory), and a pre-training algorithm that incrementally challenges the network with novel stimuli. We implement these techniques in an Echo State Network (Jaeger, 2001), and we show that only when the two techniques are combined is the ESN able to succeed at the grammar discrimination task suggested by Marcus et al.
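For orientation, here is a minimal Echo State Network sketch with an optional delay-line reservoir and a ridge-regression readout. It illustrates the general architecture only, not the implementation used in this work, and it omits the incremental pre-training component; the reservoir size, input scaling, spectral radius and the particular delay-line wiring are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class EchoStateNetwork:
    """Minimal ESN sketch (after Jaeger, 2001); sizes and constants are illustrative."""

    def __init__(self, n_in, n_res=100, spectral_radius=0.9, delay_line=False):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        if delay_line:
            # A "delay-line" reservoir: a shift matrix that passes each unit's
            # activation to the next unit on every time step (one possible
            # reading of the delay-line memory mentioned above).
            W = np.eye(n_res, k=-1)
        else:
            W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale towards the desired spectral radius (echo state property).
        rho = np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W * (spectral_radius / rho) if rho > 0 else W
        self.n_res = n_res

    def run(self, inputs):
        """Collect reservoir states for a sequence of input vectors."""
        x = np.zeros(self.n_res)
        states = []
        for u in inputs:
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x.copy())
        return np.array(states)

    def train_readout(self, inputs, targets, ridge=1e-6):
        """Fit a linear readout by ridge regression (standard ESN training)."""
        X = self.run(inputs)
        self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(self.n_res), X.T @ targets)

    def predict(self, inputs):
        return self.run(inputs) @ self.W_out

# Hypothetical usage with one-hot "syllable" sequences of dimension 3:
# esn = EchoStateNetwork(n_in=3, delay_line=True)
# esn.train_readout(train_inputs, train_targets)   # arrays of shape (T, 3) and (T, k)
# predictions = esn.predict(test_inputs)
```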

22 September: James Winters (pre-viva talk)


Context, cognition and communication in language

James Winters (MPI Jena)

Thursday 22 September 2016, 16:00–16:30
1.17 Dugald Stewart Building

Questions pertaining to the unique structure and organisation of language have a long history in the field of linguistics. In recent years, researchers have explored cultural evolutionary explanations, showing how language structure emerges from weak biases amplified over repeated patterns of learning and use. One outstanding issue in these frameworks is accounting for the role of context. In particular, many linguistic phenomena are said to be context-dependent; interpretation does not take place in a void, and requires enrichment from the current state of the conversation, the physical situation, and common knowledge about the world. Modelling the relationship between language structure and context is therefore crucial for developing a cultural evolutionary approach to language.

One approach is to use statistical analyses to investigate large-scale, cross-cultural datasets. However, due to the inherent limitations of statistical analyses, especially with regard to the inadequacy of these methods for testing hypotheses about causal relationships, I argue that experiments are better suited to address questions pertaining to language structure and context. From here, I present a series of artificial language experiments, with the central aim of testing how manipulations to context influence the structure and organisation of language.

Experiment 1 builds upon previous work in iterated learning and referential communication games by demonstrating that the emergence of optimal communication systems is contingent on the contexts in which languages are learned and used. The results show that language systems gradually evolve to encode only information that is informative for conveying the intended meaning of the speaker, resulting in markedly different systems of communication. Whereas Experiment 1 focused on how context influences the emergence of structure, Experiments 2 and 3 investigate under what circumstances manipulations to context result in the loss of structure. While the results are inconclusive across these two experiments, there is tentative evidence that manipulations to context can disrupt structure, but only when interacting with other factors.

Lastly, Experiment 4 investigates whether the degree of signal autonomy (the capacity for a signal to be interpreted without recourse to contextual information) is shaped by manipulations to contextual predictability: the extent to which a speaker can estimate and exploit the contextual information a hearer uses in interpreting an utterance. When the context is predictable, speakers organise languages to be less autonomous (more context-dependent) by combining linguistic signals with contextual information to reduce effort in production and minimise uncertainty in comprehension. When contextual predictability is decreased, speakers increasingly rely on strategies that promote more autonomous signals, as these signals depend less on contextual information to discriminate between possible meanings.

Overall, these experiments provide a proof of concept for investigating the relationship between language structure and context, showing that the organisational principles underpinning language are the result of competing pressures from context, cognition, and communication.

19 July: Thomas Brochhagen


Learning biases may prevent the lexicalization of pragmatic inferences: A case study combining iterated learning and replicator dynamics

Thomas Brochhagen (Universiteit van Amsterdam)

Tuesday 19 July 2016, 11:00–12:30
1.17 Dugald Stewart Building

In linguistics, it is common to draw a distinction between semantics and pragmatics. However, the information conveyed by expressions is seldom, if ever, determined by their semantics alone; rather, it arises through pragmatic enrichment. From a functional perspective, the distinction between semantics and pragmatics thus raises the challenge of justifying the former’s structure in light of the latter.

My aim for this talk is twofold. First, I will lay out a model that integrates iterated learning into the replicator-mutator dynamics commonly used in evolutionary game theory. The innovation of the model lies in its combination of functional pressure for successful communication (via the replicator dynamics), effects of learning biases (via iterated Bayesian language learning), and probabilistic models of language use in a population with distinct lexica. Second, I will discuss an application of the model to the analysis of the lack of upper bounds in the literal meaning of (weak) scalar expressions. This application showcases how a learning bias towards simpler semantics, in tandem with functional pressure, may prevent the lexicalization of pragmatic inferences.
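For readers unfamiliar with the framework, here is a minimal numerical sketch of the replicator-mutator dynamics. The payoff matrix F and learning matrix Q below are invented for illustration; in the model described here, Q would instead be derived from iterated Bayesian learning, which is where the learning bias enters.

```python
import numpy as np

def replicator_mutator_step(x, F, Q, dt=0.01):
    """One Euler step of the replicator-mutator dynamics.

    x : frequencies of the competing lexica/types (sums to 1)
    F : payoff matrix, F[i, j] = communicative success of type i against type j
    Q : learning matrix, Q[j, i] = P(learner acquires type i | teacher of type j)
    """
    fitness = F @ x                # expected payoff of each type in this population
    mean_fitness = x @ fitness
    inflow = (x * fitness) @ Q     # learners produced by type-j teachers who end up type i
    return x + dt * (inflow - mean_fitness * x)

# Illustrative two-type example: type 1 communicates slightly better,
# but the (made-up) learning matrix is biased towards type 0.
x = np.array([0.5, 0.5])
F = np.array([[1.0, 1.0],
              [1.0, 1.1]])
Q = np.array([[0.9, 0.1],
              [0.3, 0.7]])
for _ in range(5000):
    x = replicator_mutator_step(x, F, Q)
print(x)  # equilibrium frequencies reflect both functional pressure and the learning bias
```

Whether the functionally superior type prevails depends on the balance between the payoff advantage encoded in F and the bias built into Q, which is exactly the kind of interaction examined here for scalar expressions.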

12 July: Ashley Micklos


Innovating a communication system interactively: Negotiation for conventionalization

Ashley Micklos (UCLA)

Tuesday 12 July 2016, 11:00–12:30
1.17 Dugald Stewart Building

The study I will present demonstrates how interaction – specifically negotiation and repair – can facilitate the emergence, evolution, and conventionalization of a silent gesture communication system (Goldin-Meadow et al., 2008; Schouwstra, 2012). In a modified iterated learning paradigm (Kirby, Cornish, & Smith, 2008), partners communicated noun-verb meanings using only silent gesture. The need to disambiguate similar noun-verb pairs (e.g. “a hammer” and “hammering”) drove these “new” language users to develop a morphology that allowed for quicker processing, easier transmission, and improved accuracy. The specific morphological system that emerged came about through a process of negotiation within the dyad. Negotiation involved reusing elements of prior gestures, even if temporally distant, to communicate a meaning. This parallels the phenomenon that occurs in speech produced over multiple turns (Goodwin, 2013). The face-to-face, contingent interaction of the experiment allows participants to build from one another’s prior gestures as a means of developing systematicity over generations. Once a gesture has been performed, it is available for future use and manipulation. Transformative operations on prior gestures can emerge through repair as well. Immediate modification of a gesture can involve a reference to the gesture space or a particular element of the gesture. We see examples of this in other-initiated repair sequences (Jefferson, 1974) within the communication game. Over simulated generations, participants modified and systematized prior gestures to conform to emergent conventions in the silent gesture system. By applying a discourse analytic approach to the use of repair in an experimental methodology for language evolution, we are able to determine not only whether interaction facilitates the emergence and learnability of a new communication system, but also how interaction affects such a system.

6 July: Mark Atkinson (pre-viva talk)


Sociocultural determination of linguistic complexity

Mark Atkinson (Stirling)

Wednesday 6 July 2016, 16:00–16:30
Seminar Room 6, Chrystal McMillan Building

Languages evolve, adapting to pressures arising from their learning and use. As these pressures may be different in different sociocultural environments, non-linguistic factors relating to the group structure of the people who speak a language may influence features of the language itself.

My thesis focuses on two key hypotheses which connect group structure to complex language features and evaluates them through a series of experiments. Firstly, I consider the claim that languages spoken by greater numbers of people are morphologically less complex than those employed by smaller groups, and assess two candidate mechanisms by which population size could have such an effect. Secondly, I assess the claim that esoteric communication (communication carried out by smaller groups in which large amounts of information are shared and in which adult learning is absent) leads to the generation and maintenance of more complex language features.

In this talk, I will summarise the main findings of my thesis, and briefly introduce some of the experiments that support those findings. I conclude that adult language learning is the most plausible explanation for how morphological complexity is determined by population size, but that native speaker accommodation to adult learners may be a crucial linking mechanism, and that more esoteric communicative contexts lead to the development of more opaque lexical items.

21 June: Piera Filippi


From emotional arousal to interactive communication: A comparative approach to prosody in language evolution and acquisition

Piera Filippi (Vrije Universiteit Brussel)

Tuesday 21 June 2016, 11:00–12:30
1.17 Dugald Stewart Building

Writing over a century ago, Darwin hypothesized that vocal emotional expression had ancient evolutionary roots, perhaps dating back to some of our earliest terrestrial ancestors. This suggests that fundamental characteristics of vocal emotional expressions are widely shared among terrestrial vertebrates. Recent studies support this possibility, showing that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. In a recent empirical study, we showed that human participants use specific acoustic correlates (differences in fundamental frequency and spectral center of gravity) to judge the emotional content of vocalizations of nine vertebrate species: hourglass treefrog, American alligator, black-capped chickadee, common raven, domestic pig, giant panda, African elephant, Barbary macaque, and human. These species represent three different biological classes – amphibia, reptilia (non-aves and aves), mammalia – that diverge in size, ecological habitat, and social structure. These results suggest that fundamental mechanisms of vocal emotional expression are widely shared among vertebrates and could represent an ancient signaling system.

But what is the evolutionary link between the ability to interpret emotional arousal across vertebrate species and the ability for human linguistic communication? I suggest that this link lies in the ability to actively modulate emotional sounds within communicative interactions. Specifically, within a comparative approach to sound modulation in human and nonhuman vocal communication systems, I propose a new perspective on the ability for interactional prosody (AIP), which includes the following processes: (i) actively controlling and modulating the frequency, tempo and amplitude of vocalizations; (ii) coordinating sound production with one or more individuals; (iii) expressing or evoking emotional information. I hypothesize that AIP paved the way for the evolution of language and continues to play a vital role in the acquisition of language.

In support of this hypothesis, I review empirical studies on the adaptive value of AIP in nonhuman primates and mammals, and on the beneficial effects of AIP in scaffolding verbal language acquisition. I emphasize the key role of the social and interactive aspect of AIP in relation to the evolution and ontogenetic development of language. Finally, I describe recent empirical data on humans, showing that the prosodic modulation of the voice is dominant over verbal content and faces in emotion communication. This finding aligns with the hypothesis that prosody is evolutionarily older than the emergence of segmental articulation and might have paved the way to its origin.