5 December: Jennifer Culbertson


Children’s sensitivity to phonological and semantic cues during noun class learning: evidence for a phonological bias

Jennifer Culbertson (University of Edinburgh)

Tuesday 5 December 2017, 11:00–12:30
1.17 Dugald Stewart Building

Previous research on natural language acquisition of noun classification systems, such as grammatical gender, has shown that child learners appear to rely disproportionately on phonological cues (e.g., Gagliardi & Lidz, 2014; Karmiloff-Smith, 1981). Surprisingly, this occurs even when competing semantic cues are more reliable predictors of class. Culbertson, Gagliardi & Smith (2017) present evidence from artificial language learning experiments with adults suggesting that the over-reliance on phonology may be due to the fact that phonological cues are generally available earlier than semantic cues; learners acquire early representations of phonological dependencies (e.g., between a gendered determiner and a noun) before acquiring the semantic referents of nouns. In other words, Culbertson et al. (2017) suggest there is no a priori bias in favor of phonological cues to noun class. In this talk, I will present follow-up work investigating whether our results hold for child learners. In a series of experiments, we show that two cues, one semantic and one phonological, to which children are equally sensitive in isolation are in fact treated differently when they conflict. In particular, unlike adults, children prioritize phonological cues regardless of when the cues become available. This suggests the possibility that children are in fact biased to attend to phonological cues when acquiring noun classification systems.

21 November: Fiona Kirton


Visual saliency and word order in improvised gesture

Fiona Kirton (University of Edinburgh)

Tuesday 21 November 2017, 11:00–12:30
G32, 7 George Square

A commonly cited observation is that the distribution of basic word orders across the world’s languages is highly non-uniform. Although all six possible orders are attested, around 88% of languages with a dominant order use either SOV or SVO. In recent years there has been increasing interest in the improvised gesture paradigm as a way of investigating this asymmetry. In one of the earliest studies of this kind, Goldin-Meadow et al. (2008) argued that SOV is the default order used in developing communication systems and suggested that other orders emerge later in response to some pressure or combination of pressures. More recent studies suggest a more complex picture: SOV is the default order only for certain types of event, and structural choices in improvised gesture are influenced by properties of the participating entities and actions and/or the relations between them. In a recently published study, Meir et al. (2017) argue that saliency is a key determinant of constituent order in improvised gesture, such that more salient entities, typically human agents, tend to be mentioned first.

In this talk, I will present an improvised gesture study that investigates the role of saliency in more detail. Results from this study suggest that manipulating the visual saliency of the Agent influences the relative order of the Patient and the Action. I will propose that the relative visual saliency of the Agent and the Patient affects the way participants mentally construe events, which in turn determines their choice of constituent order.

14 November: Inbal Arnon


More than words: developmental and psycholinguistic investigations of the building blocks of language

Inbal Arnon (Hebrew University of Jerusalem) 

Tuesday 14 November 2017, 11:00–12:30
G32, 7 George Square

Why are children better language learners than adults despite being worse at a range of other cognitive tasks? Many accounts focus on the cognitive or neurological differences between children and adults. Here, I focus on the way prior knowledge impacts the building blocks children and adults use. I explore the role of multiword sequences in explaining L1–L2 differences in learning and language use more generally, and argue that children are more likely than adults to rely on such sequences in learning. While words are often seen as the basic building blocks of language (e.g., Pinker, 1991), there is growing theoretical interest and empirical evidence for the role of multiword units in language. I draw on developmental, psycholinguistic and computational findings to show that children use multiword units in learning; that such units facilitate learning of certain grammatical relations; and that adult learners rely on them less, a pattern that can explain some of the differences between child and adult language learning. I will then present findings on the emergence of structure in child and adult learners and discuss implications for models of L1 and L2 learning.

7 November: Christine Cuskley


Gamifying language evolution

Christine Cuskley (University of Edinburgh)

Tuesday 7 November 2017, 11:00–12:30
G32, 7 George Square
 
At the heart of language are shared conventions: from the rules we use and how we inflect them, to the vast lexicon we use to describe the world, language works because conventions are shared across a population of speakers. Thus, a crucial question for language evolution is how we come to have shared conventions: how they emerge, how they change over time, and how they decline. In this talk, I will present some early studies of a novel virtual signal modality called Ferro, which allows for truly “alien” artificial language learning. Early results suggest that while Ferro signals are difficult to learn, they share some interesting features with linguistic articulation spaces, which may have strong effects on learning biases. The end goal of the Ferro palette is a multi-player game called FerroCell, which will allow us to see conventions emerging and evolving in player populations with realistic interaction networks. I’ll outline what FerroCell will look like, and what we hope to learn from the first “petri-dish” experimental game in language evolution.

31 October: Jon Carr


Simplicity priors and conceptual structure

Jon Carr (University of Edinburgh)

Tuesday 31 October 2017, 11:00–12:30
G32, 7 George Square

Languages are shaped by competing pressures from learning and communication. Learning favours simple languages, while communication favours informative ones, giving rise to the simplicity–informativeness tradeoff.

In this talk I will pay special attention to the simplicity part of this tradeoff. I argue that learning is best viewed as a model selection problem in which a simplicity prior plays an essential role in allowing agents to reason about unseen items and to avoid overfitting noise in the data stream.

I show that simple, structured, learnable concepts can emerge from this very general principle in a Bayesian iterated learning model. And I show that an experimental analogue of this model returns strikingly similar results.
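To make the idea concrete, here is a toy Bayesian iterated learning chain of my own devising; the hypothesis space, prior, and parameters are illustrative assumptions, not the model from the talk. Hypotheses label four ordered stimuli "a" or "b", the prior halves in probability with each category boundary, and each learner selects the MAP hypothesis from noisy productions by the previous learner:

```python
import math
import random
from itertools import product

random.seed(0)

# Hypotheses: all ways of labelling four ordered stimuli "a" or "b".
HYPOTHESES = list(product("ab", repeat=4))
NOISE = 0.1  # production/transmission noise

def complexity(h):
    # Crude description length: number of category boundaries.
    return sum(h[i] != h[i + 1] for i in range(3))

def log_prior(h):
    # Simplicity prior: each extra boundary halves prior probability.
    return -complexity(h) * math.log(2)

def log_likelihood(data, h):
    lp = 0.0
    for stim, label in data:
        lp += math.log(1 - NOISE if h[stim] == label else NOISE)
    return lp

def learn(data):
    # Model selection: the MAP hypothesis trades fit against simplicity.
    return max(HYPOTHESES, key=lambda h: log_prior(h) + log_likelihood(data, h))

def produce(h, n=8):
    # Noisy productions from the current hypothesis.
    data = []
    for _ in range(n):
        stim = random.randrange(4)
        label = h[stim]
        if random.random() < NOISE:
            label = "a" if label == "b" else "b"
        data.append((stim, label))
    return data

# Transmit a maximally complex system down a chain of learners.
h = ("a", "b", "a", "b")
for _ in range(10):
    h = learn(produce(h))
print(h, complexity(h))
```

With sparse, noisy data the prior breaks ties and absorbs noise, so chains tend to drift toward lower-complexity systems over generations.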

Finally, I consider another hypothesis that could explain the results – that learners have a prior bias for informativeness – and I show why this explanation is unlikely.

17 October: Alexander Martin


Biases in phonological processing and learning

Alexander Martin (University of Edinburgh)

Tuesday 17 October 2017, 11:00–12:30
G32, 7 George Square

During speech perception, listeners are biased by a great number of factors, including cognitive limitations such as memory and attention, and linguistic limitations such as their native language. In this talk, I will present my PhD work addressing two of these factors: processing bias during word recognition, and learning bias during the transmission process. These factors interact and can, over time, affect the way languages evolve. First, I will detail a study focusing on the importance of phonological features in word recognition, at both the perceptual and lexical levels, and discuss how speakers integrate information from these different sources. Second, I will present a series of experiments addressing the question of learning bias and its implications for linguistic typology. Specifically, I will present artificial language learning experiments showing better learning of the typologically common pattern of vowel harmony compared to the exceedingly rare, but logically equivalent, pattern of vowel disharmony. I will also present a simple simulation of the transmission of these patterns over time, showing better survival of harmonic patterns compared to disharmonic ones.
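As an illustration of how a small learning bias can compound over transmission (a toy sketch with assumed numbers, not the simulation from the talk): each generation acquires its input pattern faithfully with a pattern-specific probability, and mislearning flips harmony to disharmony or vice versa.

```python
import random

random.seed(1)

# Assumed learning bias: harmonic patterns are acquired faithfully
# slightly more often than disharmonic ones.
P_LEARN = {"harmony": 0.95, "disharmony": 0.85}

def next_generation(population):
    new = []
    for pattern in population:
        if random.random() < P_LEARN[pattern]:
            new.append(pattern)  # learned faithfully
        else:
            # Mislearning flips the pattern type.
            new.append("harmony" if pattern == "disharmony" else "disharmony")
    return new

# Start from an even split and transmit for 100 generations.
pop = ["harmony"] * 50 + ["disharmony"] * 50
for _ in range(100):
    pop = next_generation(pop)

print(pop.count("harmony"), pop.count("disharmony"))
```

With these flip rates the chain settles near a 0.15 / (0.05 + 0.15) = 75% harmony equilibrium, so even a modest asymmetry in learnability yields markedly better survival of harmonic patterns.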

10 October: Andres Karjus


Topical advection as a baseline for corpus-based evolutionary dynamics

Andres Karjus (University of Edinburgh)

Tuesday 10 October 2017, 11:00–12:30
G32, 7 George Square

Distinguishing genuine linguistic change (selection) from neutral evolution, and linguistic changes from those stemming from language-external factors (cultural drift), remains an important and interesting question in evolutionary lexical dynamics. A commonly used proxy for the popularity or selective fitness of a linguistic element is its corpus frequency. However, a number of recent works have pointed out that raw frequencies can often be misleading, as they are affected by shifting discourse topics and cultural trends of different periods. In other words, the changing frequency of an element might simply be the result of the rise or fall of its associated topic. In this talk, I cover the basics of the topical-cultural advection model, a computationally simple method designed to control for topical drift and serve as a baseline in models of linguistic variant selection. Initial results show that the method is capable of describing a considerable amount of the variability in word frequency changes over time. The talk will be accompanied by examples from diachronic corpora, a simulation of language change, and a dataset on cultural evolution.
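The core computation is simple enough to sketch. The snippet below is my own minimal illustration of the advection idea, not the talk's implementation: a word's expected ("advected") frequency change is the association-weighted mean of the log frequency changes of its top context words, and the residual is what remains after subtracting that topical baseline. All frequencies and association scores here are invented.

```python
import math

def log_change(f1, f2):
    """Log ratio of a word's frequency between two corpus periods."""
    return math.log(f2 / f1)

def advection(context_weights, freqs_t1, freqs_t2):
    """Weighted mean log frequency change of a word's top context words.

    context_weights: {context word: association with the target word},
      e.g. PPMI scores estimated from the earlier period.
    freqs_t1, freqs_t2: {word: per-million frequency} in the two periods.
    """
    total = sum(context_weights.values())
    return sum(w * log_change(freqs_t1[c], freqs_t2[c])
               for c, w in context_weights.items()) / total

# Toy example (invented numbers): "sail" keeps company with a declining
# nautical topic, so most of its observed decline is expected drift.
freqs_1900 = {"ship": 120.0, "mast": 30.0, "harbour": 60.0, "sail": 40.0}
freqs_2000 = {"ship": 60.0, "mast": 6.0, "harbour": 30.0, "sail": 15.0}
ctx = {"ship": 3.0, "mast": 2.0, "harbour": 1.0}

baseline = advection(ctx, freqs_1900, freqs_2000)
observed = log_change(freqs_1900["sail"], freqs_2000["sail"])
residual = observed - baseline  # change left after controlling for topic
print(baseline, observed, residual)
```

In this toy case the topical baseline accounts for nearly all of the word's observed decline, leaving only a small residual as a candidate for genuine selection.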

5 September: Savithry Namboodiripad


Language contact and language evolution: Integrating approaches

Savithry Namboodiripad (University of California, San Diego)

Tuesday 5 September 2017, 11:00–12:30
1.17 Dugald Stewart Building

What explains the typological distributions observed in the world’s languages? Approaches from language evolution rely on cognitive biases such as processing preferences as a causal factor in language creation and change (e.g. Christiansen & Chater). In this talk, I argue for the inclusion of language contact in models of language evolution, and discuss ways in which language contact can enhance current methodologies in language evolution, as well as help to explain the typological distributions seen in the world’s languages.

The empirical domain of this talk is “flexible” languages, in which every logical ordering of the major constituents — subject, object and verb — has the same truth-conditional meaning. I present formal acceptability judgment experiments which show similarities in how experience with English reduces speakers’ flexibility in Malayalam (Dravidian) and Korean, even under different circumstances of contact. I describe post-colonial contact between English and Malayalam in India as an illustrative (and relatively common) example of a case in which contact cannot be sidelined. Finally, I discuss an upcoming experiment designed to model these outcomes in the lab, with the intention of comparing groups of participants whose L1s differ in terms of flexibility.

29 August: Stella Frank


Modelling L1-L2 speaker interactions

Stella Frank (University of Edinburgh)

Tuesday 29 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

This talk will present ongoing work on modelling the interaction of L1 and L2 speakers in a Bayesian framework. In particular, we’re interested in whether native speakers accommodating to non-native speakers can drive language change, i.e., serve as a mechanism for the population-level correlations between language complexity and the proportion of L2 speakers found by Lupyan & Dale (2010) and Bentz & Winter (2013). Accommodation requires the speaker to have a “Theory of Language”, analogous to a “Theory of Mind”, regarding their interlocutors. In our model, this means that agents reason about the likely linguistic knowledge of their partner, and they update their beliefs after encountering evidence (e.g. words spoken by their partner). First results show that accommodation in interaction can lead to more regular languages in our model, but under only slightly different circumstances can also lead to higher variation.
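A minimal sketch of the accommodation mechanism, under my own assumptions rather than the talk's actual model: each agent keeps Beta pseudo-counts both for its own grammar over a binary variant and for its estimate of its partner's grammar (the “Theory of Language”); an accommodating speaker produces from the partner estimate, and hearers update their partner beliefs on every token.

```python
import random

random.seed(2)

class Agent:
    def __init__(self, a, b):
        self.own = [a, b]      # Beta pseudo-counts for the agent's grammar
        self.partner = [1, 1]  # beliefs about the partner's grammar

    def p(self, counts):
        # Posterior mean probability of variant 0.
        return counts[0] / (counts[0] + counts[1])

    def speak(self, accommodate):
        # An accommodating speaker produces from its partner estimate.
        p = self.p(self.partner) if accommodate else self.p(self.own)
        return 0 if random.random() < p else 1

    def hear(self, variant):
        self.partner[variant] += 1  # evidence about the partner's grammar

native = Agent(9, 1)    # strongly prefers variant 0
learner = Agent(3, 3)   # L2 speaker: unsettled, variable grammar

# 100 conversational turns: the native accommodates, the learner does not.
for _ in range(100):
    learner.hear(native.speak(accommodate=True))
    native.hear(learner.speak(accommodate=False))

print(round(native.p(native.partner), 2), round(learner.p(learner.partner), 2))
```

Because the accommodating native speaks from its (initially uncertain) partner estimate, its output drifts toward the learner's variable usage, illustrating how accommodation can either regularise or amplify variation depending on the agents' starting beliefs.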

22 August: William Hamilton


Negativity and Semantic Change

William L. Hamilton (Stanford University)

Tuesday 22 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

It is often argued that natural language is biased towards negative differentiation, meaning that there is more lexical diversity in negative affectual language, compared to positive language. However, we lack an understanding of the diachronic linguistic mechanisms associated with negative differentiation. In this talk, I will review key concepts related to negative differentiation and discuss how I am using diachronic word embeddings to test whether negative lexical items are more semantically unstable than positive ones. Preliminary results suggest that rates of semantic change are faster for negative affectual language, compared to positive language. I will finish my talk by discussing some practical consequences of this positive/negative asymmetry for sentiment analysis tools.
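One standard way to operationalise "rate of semantic change" with diachronic embeddings (a generic sketch, not necessarily the talk's exact pipeline) is to align the embedding spaces of two periods with orthogonal Procrustes and score each word by the cosine distance between its aligned vectors. The data below are invented toy vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def procrustes_align(X, Y):
    """Orthogonal matrix W minimising ||XW - Y||_F (rows = shared vocab)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def cosine_distance(u, v):
    return 1 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy data: 5 words, 4 dimensions; period 2 is a rotation of period 1,
# except word 0, which genuinely moves in the space.
X = rng.normal(size=(5, 4))
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random rotation
Y = X @ Q
Y[0] += rng.normal(scale=2.0, size=4)         # simulated semantic change

W = procrustes_align(X, Y)
scores = [cosine_distance(X[i] @ W, Y[i]) for i in range(5)]
print(np.argmax(scores))  # the genuinely moved word should stand out
```

Run per decade over real corpora, these change scores can then be compared between negative and positive lexical items to test whether negative vocabulary is more semantically unstable.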