17 October: Alexander Martin


Biases in phonological processing and learning

Alexander Martin (University of Edinburgh)

Tuesday 17 October 2017, 11:00–12:30
G32, 7 George Square

During speech perception, listeners are biased by a great number of factors, including cognitive limitations such as memory and attention, and linguistic limitations such as their native language. In this talk, I will present my PhD work addressing two of these factors: processing bias during word recognition, and learning bias during the transmission process. These factors combine and can, over time, affect the way languages evolve. First, I will detail a study focusing on the importance of phonological features in word recognition, at both the perceptual and lexical levels, and discuss how speakers integrate information from these different sources. Second, I will present a series of experiments addressing the question of learning bias and its implications for linguistic typology. Specifically, I will present artificial language learning experiments showing better learning of the typologically common pattern of vowel harmony compared to the exceedingly rare but logically equivalent pattern of vowel disharmony. I will also present a simple simulation of the transmission of these patterns over time, showing better survival of harmonic patterns compared to disharmonic ones.
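As a rough illustration of the kind of transmission simulation mentioned at the end of the abstract, the sketch below passes a pattern down a chain of learners, with harmonic patterns acquired slightly more reliably than disharmonic ones. The accuracy values, chain length, and function names are illustrative assumptions, not figures from the talk.

```python
# Hypothetical sketch of a transmission chain: each generation acquires
# the pattern from the previous one, and harmonic patterns are learned
# slightly more reliably. All parameter values are illustrative.
import random

LEARNING_ACCURACY = {"harmony": 0.95, "disharmony": 0.85}

def transmit(pattern: str, generations: int) -> int:
    """Pass a pattern down a chain; return how many generations it survives."""
    for generation in range(generations):
        if random.random() > LEARNING_ACCURACY[pattern]:
            return generation  # mislearned: the pattern is lost here
    return generations

# Average survival over many chains: harmony should outlast disharmony.
for pattern in ("harmony", "disharmony"):
    runs = [transmit(pattern, 50) for _ in range(10_000)]
    print(pattern, sum(runs) / len(runs))
```

Even a small per-generation learning advantage compounds over repeated transmission, which is the intuition behind the better survival of harmonic patterns.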

10 October: Andres Karjus


Topical advection as a baseline for corpus-based evolutionary dynamics

Andres Karjus (University of Edinburgh)

Tuesday 10 October 2017, 11:00–12:30
G32, 7 George Square

Distinguishing genuine linguistic change (selection) from neutral evolution, and linguistic changes from those stemming from language-external factors (cultural drift), remains an important and interesting question in evolutionary lexical dynamics. A commonly used proxy for the popularity or selective fitness of a linguistic element is its corpus frequency. However, a number of recent works have pointed out that raw frequencies can often be misleading, as they are affected by shifting discourse topics and cultural trends of different periods. In other words, the changing frequency of an element might simply be the result of the rise or fall of its associated topics. In this talk, I cover the basics of the topical-cultural advection model, a computationally simple method designed to control for topical drift and serve as a baseline in models of linguistic variant selection. Initial results show that the method is capable of describing a considerable amount of variability in word frequency changes over time. The talk will be accompanied by examples from diachronic corpora, a simulation of language change, and a dataset on cultural evolution.
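As a rough sketch of the advection idea, the snippet below computes a word's expected ("advected") frequency change as the weighted mean of its context words' frequency changes, weighted by association strength. The function names and the choice of weights (e.g. PPMI scores) are assumptions for illustration, not the talk's exact implementation.

```python
# Minimal sketch: a word's advection baseline is the weighted mean of its
# topic words' log-frequency changes, weighted by association strength.
# The residual (observed minus baseline) is the topic-controlled change.
import numpy as np

def advection(change_in_log_freq: np.ndarray, assoc: np.ndarray) -> float:
    """Weighted mean of the context words' log-frequency changes.

    change_in_log_freq: per-context-word change in log frequency
    assoc: association strengths (e.g. PPMI) of those context words
    """
    return float(np.average(change_in_log_freq, weights=assoc))

# Toy example: three context words, two rising in frequency, one falling.
topic_changes = np.array([0.40, 0.10, -0.20])   # delta log frequency
weights = np.array([3.0, 1.0, 1.0])             # association strengths
baseline = advection(topic_changes, weights)
observed = 0.55                                 # the target word's own change
print("advection baseline:", baseline, "residual:", observed - baseline)
```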

5 September: Savithry Namboodiripad


Language contact and language evolution: Integrating approaches

Savithry Namboodiripad (University of California, San Diego)

Tuesday 5 September 2017, 11:00–12:30
1.17 Dugald Stewart Building

What explains the typological distributions observed in the world’s languages? Approaches from language evolution rely on cognitive biases such as processing preferences as a causal factor in language creation and change (e.g. Christiansen & Chater). In this talk, I argue for the inclusion of language contact in models of language evolution, and discuss ways in which language contact can enhance current methodologies in language evolution, as well as help to explain the typological distributions seen in the world’s languages.
The empirical domain of this talk is “flexible” languages, in which every logical ordering of the major constituents — subject, object and verb — has the same truth-conditional meaning. I present formal acceptability judgment experiments which show similarities in how experience with English reduces speakers’ flexibility in Malayalam (Dravidian) and Korean, even under different circumstances of contact. I describe post-colonial contact between English and Malayalam in India as an illustrative (and relatively common) example of a case in which contact cannot be sidelined. Finally, I discuss an upcoming experiment designed to model these outcomes in the lab, with the intention of comparing groups of participants whose L1s differ in flexibility.

29 August: Stella Frank


Modelling L1-L2 speaker interactions

Stella Frank (University of Edinburgh)

Tuesday 29 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

This talk will present ongoing work on modelling the interaction of L1 and L2 speakers in a Bayesian framework. In particular, we’re interested in whether native speakers accommodating to non-native speakers can drive language change, i.e., serve as a mechanism for the types of population-level correlations between language complexity and L2 speakers found by Lupyan & Dale (2010) and Bentz & Winter (2013). Accommodation requires the speaker to have a “Theory of Language”, analogous to a “Theory of Mind”, regarding their interlocutors. In our model, this means that agents reason about the likely linguistic knowledge of their partner and update their beliefs after encountering evidence (e.g. words spoken by their partner). First results show that accommodation in interaction can lead to more regular languages in our model, but that under only slightly different circumstances it can also lead to greater variation.
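The snippet below is a minimal sketch, of my own construction rather than the talk's actual model, of the kind of belief updating described: each agent maintains a Beta distribution over its partner's rate of using one variant, updates it on every utterance it hears, and accommodates by producing that variant at its current estimate of the partner's rate. All distributional choices and parameter values are illustrative.

```python
# Hypothetical sketch: Bayesian agents that accommodate by matching their
# estimate of the partner's usage rate for variant "A" vs. variant "B".
import random

class Agent:
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(alpha, beta) belief about the partner's rate of variant A
        self.alpha, self.beta = alpha, beta

    def observe(self, utterance: str) -> None:
        """Update beliefs after hearing the partner."""
        if utterance == "A":
            self.alpha += 1
        else:
            self.beta += 1

    def speak(self) -> str:
        """Accommodate: produce A at the posterior-mean estimated rate."""
        p_a = self.alpha / (self.alpha + self.beta)
        return "A" if random.random() < p_a else "B"

# Two agents in interaction: each hears the other's production each round.
l1, l2 = Agent(9, 1), Agent(3, 7)   # different starting beliefs
for _ in range(100):
    u1, u2 = l1.speak(), l2.speak()
    l1.observe(u2)
    l2.observe(u1)
print(l1.alpha / (l1.alpha + l1.beta), l2.alpha / (l2.alpha + l2.beta))
```

Depending on the starting beliefs, mutual accommodation of this kind can converge on one variant (regularisation) or settle on an intermediate mix, which loosely mirrors the regularity/variation contrast the abstract describes.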

22 August: William Hamilton


Negativity and Semantic Change

William L. Hamilton (Stanford University)

Tuesday 22 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

It is often argued that natural language is biased towards negative differentiation, meaning that there is more lexical diversity in negative affectual language, compared to positive language. However, we lack an understanding of the diachronic linguistic mechanisms associated with negative differentiation. In this talk, I will review key concepts related to negative differentiation and discuss how I am using diachronic word embeddings to test whether negative lexical items are more semantically unstable than positive ones. Preliminary results suggest that rates of semantic change are faster for negative affectual language, compared to positive language. I will finish my talk by discussing some practical consequences of this positive/negative asymmetry for sentiment analysis tools.
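The abstract does not spell out the pipeline, but a common way to quantify semantic instability with diachronic embeddings is to align the embedding spaces of different periods (e.g. with orthogonal Procrustes) and then measure each word's cosine distance between its aligned vectors; the sketch below illustrates that approach on toy data and should be read as an assumption about the method, not a description of it.

```python
# Illustrative sketch: align two embedding matrices over a shared
# vocabulary with orthogonal Procrustes, then score each word's
# semantic change as the cosine distance between its two vectors.
import numpy as np

def align(emb_old: np.ndarray, emb_new: np.ndarray) -> np.ndarray:
    """Rotate emb_old onto emb_new (rows = shared vocabulary)."""
    u, _, vt = np.linalg.svd(emb_old.T @ emb_new)
    return emb_old @ (u @ vt)

def change_rate(v_old: np.ndarray, v_new: np.ndarray) -> float:
    """Cosine distance between a word's aligned vectors in two periods."""
    cos = v_old @ v_new / (np.linalg.norm(v_old) * np.linalg.norm(v_new))
    return 1.0 - float(cos)

# Toy example: 5 words in a 4-dimensional space, two time slices.
rng = np.random.default_rng(0)
emb_1900 = rng.normal(size=(5, 4))
emb_2000 = emb_1900 + rng.normal(scale=0.3, size=(5, 4))
aligned = align(emb_1900, emb_2000)
print([change_rate(aligned[i], emb_2000[i]) for i in range(5)])
# Comparing average rates for negative vs. positive word lists would
# test the instability asymmetry discussed in the talk.
```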

15 August: Molly Flaherty


Multi-verb descriptions to describe single events

Molly Flaherty (University of Edinburgh)

Tuesday 15 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

Though communication in the manual modality allows for iconically motivated descriptions of physical events, sign languages, like spoken languages, employ conventionalized units and conventions for their combination to relate events in the world. In my previous work on Nicaraguan Sign Language (NSL), I observed a verb construction that breaks an event into two components: a primary verb depicting the action from the agent’s perspective (e.g. TICKLE), paired with a secondary verb from the patient’s perspective (e.g. GET-TICKLED). These constructions were found in signers of a variety of ages, each exposed to NSL at a different point in its development since its first emergence 30 years ago. Further, these constructions appeared more often in events with animate patients than in events with inanimate patients. Homesigners (deaf Nicaraguans not exposed to NSL) also produced this construction, though to a lesser degree.

In my current work with Simon, we are exploring the learning and use of these constructions among hearing gesturers in the lab. In our study, we will expose participants to silent gesture languages containing these constructions in different proportions (e.g. multi-verb descriptions for 2/3 of animate-patient events vs. for 1/3 of them) to see if patient animacy affects the ease of learning of multi-verb constructions. I’d love input on study design from the group.

4 July: James Winters & Thomas Müller


Asynchronous information transfer as a constraint on the emergence of graphic codes

James Winters & Thomas Müller (MPI Jena)

Tuesday 4 July 2017, 11:00–12:30
1.17 Dugald Stewart Building

Humans commit information to graphic symbols for three basic reasons: as a memory aid, as a recording device, and as a means of communication. Yet, despite the benefits afforded by transmitting information graphically, writing stands out as a unique and compelling mystery: it emerged relatively late in human evolution, and it is the only graphic code which matches the power, precision, and versatility of signed and spoken languages. We argue in this talk that the difficulty of arriving at a graphic code like writing arises because asynchronous communication imposes hard constraints on information transfer: access to shared contextual information is circumscribed, and recourse to conversational repair mechanisms is removed. To investigate this claim, we present two referential communication experiments. The first experiment shows that graphic codes only reach a stable, accurate and optimal state when used for synchronous communication. By contrast, codes fail to emerge for asynchronous communication, with the systems becoming stuck in an unstable, inaccurate, and sub-optimal configuration. The second experiment singles out the aspect of shared perceptual context from the general characteristics of synchronous communication, and demonstrates its importance for accurate graphic codes. Taken together, these results suggest that the paucity and late arrival of stable, powerful, and accurate graphic codes in human history is (partly) due to strong constraints on information transfer.

27 June: Cathleen O’Grady


The dot perspective task revisited: Do we automatically process what other people see?

Cathleen O’Grady (University of Edinburgh)

Tuesday 27 June 2017, 11:00–12:30
1.17 Dugald Stewart Building

The ability to reason about other individuals’ mental states (“mindreading”) is thought to be a central component of social cognition in humans, and particularly essential for language. For mindreading to be useful in social interaction, it must also be highly efficient. Samson et al.’s (2010) dot perspective task (DPT) provides evidence that taking another individual’s visual perspective (a very simple form of mindreading) is both rapid and involuntary.

However, variants of the DPT suggest that the task’s headline effect is due not to perspective-taking, but rather to simpler processes that do not entail mindreading. In this talk, I will discuss these competing explanations, and present a new variant of the task that replicates the central finding of the DPT, but suggests that involuntary perspective-taking is not the best explanation for this effect. I will argue that the non-mentalistic account of the DPT may still be useful for understanding the apparent role of mindreading in communication.

13 June: Myrte Vos


Word order, naturalness and conventionalisation: Evidence from silent gesture

Myrte Vos (University of Amsterdam)

Tuesday 13 June 2017, 11:00–12:30
1.17 Dugald Stewart Building

Of the six possible ways to order Subject, Object and Verb, two – SOV and SVO – account for the constituent word order in nearly 80% of the world’s languages. Why? The pragmatic principle ‘Agent first’ accounts for the dominance of S-initial word orders; and recent work in word order typology, creoles, emerging sign languages, and improvised silent gesture suggests that SOV is the natural ‘default’ in nonverbal event representation and early language structure. But if that is so, why is SVO word order nearly as prominent as SOV?

One improvised silent gesture study, by Schouwstra & de Swart (2014), suggests that in improvised communication the choice of SOV versus SVO is conditioned on the semantic content of the verb. Another study, by Marno, Langus and Nespor (2015), posits that SVO is preferred by the syntax-governing ‘computational system’ of cognition, and that while improvised communication favours SOV, access to a lexicon frees up the cognitive resources needed to employ syntax, and “consequently SVO, the more efficient word order to express syntactic relations, emerges.” In their improvised silent gesture task, in which half the participants had to improvise their gesturing of simple transitive events and the other half were first taught a gesture lexicon before being asked to communicate, participants trained on a lexicon did indeed favour SVO. We replicated this experiment with stimuli restricted to event-types found to elicit SOV, and also ran an adapted condition using a lexicon of randomly assigned, arbitrary gestures, to further investigate Marno et al.’s hypothesis.

23 May: Sean Roberts


Language adapts to interaction

Sean Roberts (Bristol)

Tuesday 23 May 2017, 11:00–12:30
1.17 Dugald Stewart Building

Language appears to be adapted to constraints from many domains such as production, transmission, memory, processing and acquisition. These adaptations and constraints have formed the basis for theories of language evolution, but arguably the primary ecology of language is face-to-face conversation. Taking turns at talk, repairing problems in communication and organising conversation into contingent sequences seem completely natural to us, but are in fact highly organised, tightly integrated systems which are not shared by any other species.

In this talk I discuss how one might link features of real-time interaction to different levels of language evolution: the evolution of a capacity for language; the initial emergence of linguistic systems; and the ongoing cultural evolution of languages. I will illustrate the links at each level using computational models, lab experiments and corpus analyses. I argue that a full explanation of the origin and structures of languages needs to take into account the ecology in which language is used: face-to-face interactive communication.