29 August: Stella Frank

Modelling L1-L2 speaker interactions

Stella Frank (University of Edinburgh)

Tuesday 29 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

This talk will present ongoing work on modelling the interaction of L1 and L2 speakers in a Bayesian framework. In particular, we’re interested in whether native speakers accommodating to non-native speakers can drive language change, i.e., serve as a mechanism for the population-level correlations between language complexity and the prevalence of L2 speakers found by Lupyan & Dale (2010) and Bentz & Winter (2013). Accommodation requires the speaker to have a “Theory of Language”, analogous to a “Theory of Mind”, regarding their interlocutors. In our model, this means that agents reason about the likely linguistic knowledge of their partner and update their beliefs after encountering evidence (e.g. words spoken by their partner). First results show that accommodation in interaction can lead to more regular languages in our model, but that under only slightly different circumstances it can also lead to higher variation.
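
The belief-updating step described here can be sketched as a conjugate Beta-Bernoulli update. The agent class, the two-variant language, and the accommodation weight below are illustrative assumptions, not the talk’s actual model:

```python
import random

class Agent:
    """Agent with a Beta belief over how often its partner uses variant 'A'.
    Illustrative sketch only; the talk's model is not specified in this abstract."""

    def __init__(self, own_prob_a):
        self.own_prob_a = own_prob_a  # the agent's native preference for 'A'
        self.alpha = self.beta = 1.0  # uniform Beta prior over partner's use of 'A'

    def observe(self, token):
        # Conjugate Beta-Bernoulli update on evidence from the partner
        if token == 'A':
            self.alpha += 1
        else:
            self.beta += 1

    def partner_estimate(self):
        # Posterior mean probability that the partner produces 'A'
        return self.alpha / (self.alpha + self.beta)

    def speak(self, accommodation=0.5):
        # Accommodate: mix own preference with the inferred partner preference
        p = ((1 - accommodation) * self.own_prob_a
             + accommodation * self.partner_estimate())
        return 'A' if random.random() < p else 'B'

# An L1 speaker (strong preference for 'A') interacting with an L2 speaker
l1, l2 = Agent(0.9), Agent(0.3)
for _ in range(100):
    utt_l1, utt_l2 = l1.speak(), l2.speak()
    l1.observe(utt_l2)
    l2.observe(utt_l1)
```

With a high accommodation weight both agents drift toward their estimate of the partner, which is one way interaction could either regularise a language or sustain variation, depending on the initial preferences.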

22 August: William Hamilton

Negativity and Semantic Change

William L. Hamilton (Stanford University)

Tuesday 22 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

It is often argued that natural language is biased towards negative differentiation, meaning that there is more lexical diversity in negative affectual language, compared to positive language. However, we lack an understanding of the diachronic linguistic mechanisms associated with negative differentiation. In this talk, I will review key concepts related to negative differentiation and discuss how I am using diachronic word embeddings to test whether negative lexical items are more semantically unstable than positive ones. Preliminary results suggest that rates of semantic change are faster for negative affectual language, compared to positive language. I will finish my talk by discussing some practical consequences of this positive/negative asymmetry for sentiment analysis tools.
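
The core measurement can be sketched as follows: align embedding spaces from two time periods with an orthogonal Procrustes rotation, then score each word by the cosine distance between its aligned vectors. This mirrors the general approach of diachronic word embeddings; the toy matrices below are stand-ins, not real embeddings:

```python
import numpy as np

def align(emb_old, emb_new):
    """Orthogonal Procrustes: rotate the old-period embedding matrix
    (rows = shared vocabulary) into the new period's space."""
    u, _, vt = np.linalg.svd(emb_old.T @ emb_new)
    return emb_old @ (u @ vt)

def semantic_change(vec_t1, vec_t2):
    """Cosine distance between a word's vectors at two time points:
    a standard proxy for how much its meaning has shifted."""
    cos = np.dot(vec_t1, vec_t2) / (np.linalg.norm(vec_t1) * np.linalg.norm(vec_t2))
    return 1.0 - cos

# Toy example: two 2-d embedding spaces that differ only by a rotation
rng = np.random.RandomState(0)
old = rng.randn(5, 2)
theta = 0.5
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
new = old @ rotation          # same meanings, rotated space
aligned = align(old, new)     # after alignment, change scores should be ~0
changes = [semantic_change(a, n) for a, n in zip(aligned, new)]
```

In a real analysis, words with large post-alignment distances are the semantically unstable ones, so comparing the score distributions of negative versus positive lexical items would test the hypothesis above.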

15 August: Molly Flaherty

Multi-verb descriptions to describe single events

Molly Flaherty (University of Edinburgh)

Tuesday 15 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

Though communication in the manual modality allows for iconically motivated descriptions of physical events, sign languages, like spoken languages, employ conventionalized units and conventions for their combination to relate events in the world. In my previous work on Nicaraguan Sign Language (NSL), I observed a verb construction that breaks an event into two components: a primary verb depicting the action from the agent’s perspective (i.e. TICKLE), paired with a secondary verb from the patient’s perspective (i.e. GET-TICKLED). These constructions were found in signers of a variety of ages, each exposed to Nicaraguan Sign Language at a different point in its development since its first emergence 30 years ago. Further, these constructions appeared more often in events with animate patients than in events with inanimate patients. Deaf Nicaraguans not exposed to NSL, homesigners, also evidenced this construction to a lesser degree.

In my current work with Simon, we are exploring the learning and use of these constructions among hearing gesturers in the lab. In our study, we will expose participants to silent gesture languages containing these constructions in different proportions (e.g. 2/3 vs. 1/3 multi-verb descriptions for animate-patient events) to see whether patient animacy affects the ease of learning multi-verb versus single-verb constructions. I’d love input on study design from the group.

4 July: James Winters & Thomas Müller

Asynchronous information transfer as a constraint on the emergence of graphic codes

James Winters & Thomas Müller (MPI Jena)

Tuesday 4 July 2017, 11:00–12:30
1.17 Dugald Stewart Building

Humans commit information to graphic symbols for three basic reasons: as a memory aid, as a recording device, and as a means of communication. Yet, despite the benefits afforded by transmitting information graphically, writing stands out as a unique and compelling mystery: it emerged relatively late in human evolution, and it is the only graphic code which matches the power, precision, and versatility of signed and spoken languages. We argue in this talk that graphic codes like writing are difficult to arrive at because asynchronous communication imposes hard constraints on information transfer: access to shared contextual information is circumscribed and recourse to conversational repair mechanisms is removed. To investigate this claim, we present two referential communication experiments. The first experiment shows that graphic codes only reach a stable, accurate and optimal state when used for synchronous communication. By contrast, codes fail to emerge for asynchronous communication, with the systems becoming stuck in an unstable, inaccurate, and sub-optimal configuration. The second experiment singles out the aspect of shared perceptual context from the general characteristics of synchronous communication, and demonstrates its importance for accurate graphic codes. Taken together, these results suggest that the paucity and late arrival of stable, powerful, and accurate graphic codes in human history is (partly) due to strong constraints on information transfer.

27 June: Cathleen O’Grady

The dot perspective task revisited: Do we automatically process what other people see?

Cathleen O’Grady (Edinburgh)

Tuesday 27 June 2017, 11:00–12:30
1.17 Dugald Stewart Building

The ability to reason about other individuals’ mental states (“mindreading”) is thought to be a central component of social cognition in humans, and particularly essential for language. In order for mindreading to be useful in social interaction, it seems necessary that it also be highly efficient. Samson et al.’s (2010) dot perspective task (DPT) provides evidence that taking another individual’s visual perspective (a very simple form of mindreading) is both rapid and involuntary.

However, variants of the DPT suggest that the task’s headline effect is due not to perspective-taking, but rather to simpler processes that do not entail mindreading. In this talk, I will discuss these competing explanations, and present a new variant of the task that replicates the central finding of the DPT, but suggests that involuntary perspective-taking is not the best explanation for this effect. I will argue that the non-mentalistic account of the DPT may still be useful for understanding the apparent role of mindreading in communication.

13 June: Myrte Vos

Word order, naturalness and conventionalisation: Evidence from silent gesture

Myrte Vos (University of Amsterdam)

Tuesday 13 June 2017, 11:00–12:30
1.17 Dugald Stewart Building

Of the six possible ways to order Subject, Object and Verb, two – SOV and SVO – account for the constituent word order in nearly 80% of the world’s languages. Why? The pragmatic principle ‘Agent first’ accounts for the dominance of S-initial word orders; and recent work in word order typology, creoles, emerging sign languages, and improvised silent gesture suggests that SOV is the natural ‘default’ in nonverbal event representation and early language structure. But if that is so, why is SVO word order nearly as prominent as SOV?

One improvised silent gesture study, from Schouwstra & de Swart (2014), suggests that in improvised communication, the usage of SOV versus SVO is conditioned on the semantic content of the verb. Another study, by Marno, Langus and Nespor (2015), posits that SVO is preferred by the syntax-governing ‘computational system’ of cognition, and that while improvised communication favours SOV, access to a lexicon frees up the cognitive resources needed to employ syntax, and “consequently SVO, the more efficient word order to express syntactic relations, emerges.” In their improvised silent gesture task, wherein half the participants had to improvise their gesturing of simple transitive events and the other half were first taught a gesture lexicon before being asked to communicate, participants trained on a lexicon did indeed favour SVO. We replicated this experiment with stimuli restricted to event types found to elicit SOV, as well as running an adapted condition using a lexicon of randomly assigned, arbitrary gestures, to further investigate Marno et al.’s hypothesis.

23 May: Sean Roberts

Language adapts to interaction

Sean Roberts (Bristol)

Tuesday 23 May 2017, 11:00–12:30
1.17 Dugald Stewart Building

Language appears to be adapted to constraints from many domains such as production, transmission, memory, processing and acquisition. These adaptations and constraints have formed the basis for theories of language evolution, but arguably the primary ecology of language is face-to-face conversation. Taking turns at talk, repairing problems in communication and organising conversation into contingent sequences seem completely natural to us, but are in fact highly organised, tightly integrated systems which are not shared by any other species.

In this talk I discuss how one might link features of real time interaction to different levels of language evolution: the evolution of a capacity for language; the initial emergence of linguistic systems; and the ongoing cultural evolution of languages. I will illustrate the links in each level using computational models, lab experiments and corpus analyses. I argue that a full explanation of the origin and structures of languages needs to take into account the ecology in which language is used: face to face interactive communication.

9 May: Kenny Smith

Acquiring variation in artificial languages

Kenny Smith (Edinburgh)

Tuesday 9 May 2017, 11:00–12:30
1.17 Dugald Stewart Building

I will present four experiments using artificial language learning paradigms to study the ability of adults and children to acquire conditioned and unconditioned variation, looking in particular at their ability and predisposition to condition variation on social and semantic cues.

2 May: Olga Fehér

The effect of semantic cues on the regularisation of unpredictable variation

Olga Fehér (Edinburgh)

Tuesday 2 May 2017, 11:00–12:30
1.17 Dugald Stewart Building

Variation in natural language is constrained: languages tend to lose competing variants over time, and where variation persists, its use is conditioned on linguistic or sociolinguistic context. When learners acquire languages that exhibit unpredictable variation (unnatural, unconditioned probabilistic variation), they often eliminate the variation by regularising to one of the competing variants or by conditioning it on context. We previously found that, in addition to individual learning and transmission, interaction can lead to regularisation through convergence and priming between interlocutors. In this experiment, we investigated the influence of semantic cues on regularisation and conditioning during interaction and transmission. Adult participants learned artificial languages that exhibited unconditioned variation in plural marking, and then used them to communicate. The languages described images belonging to one or two semantic categories. We found that interacting Dyads regularised in the one-category condition by eliminating one of the markers. In the two-category condition, however, Dyads maintained variation without conditioning it on semantic category. Semantic conditioning occurred only in Singles, gradually, over episodes of transmission. The lack of conditioning in Dyads was probably due to strong priming between communicating partners, present both within and across semantic categories. This suggests that the pattern of restricted, conditioned variation in natural language reflects the combined influences of biases in learning, recall and interaction.
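
One common way to quantify regularisation and semantic conditioning in data of this kind is Shannon entropy over marker use, computed overall and within each semantic category; a within-category entropy much lower than the marginal entropy indicates conditioned variation. The production counts below are invented for illustration, not the experiment’s data:

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as a Counter of counts."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c)

# Invented production data: (semantic category, plural marker) pairs
productions = ([('animal', 'ka')] * 9 + [('animal', 'po')] * 1
               + [('object', 'po')] * 8 + [('object', 'ka')] * 2)

# Marginal entropy: how variable is marker use overall?
overall = entropy(Counter(marker for _, marker in productions))

# Within-category entropies: values well below the marginal entropy
# indicate that the variation is semantically conditioned
within = {cat: entropy(Counter(m for c, m in productions if c == cat))
          for cat in ('animal', 'object')}
```

On this measure, full regularisation drives the marginal entropy toward zero, while the Singles’ pattern above would show high marginal entropy but low within-category entropy.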