23 April: Fausto Carcassi


The semantic structure of gradable adjectives: an experiment and a Bayesian model

Fausto Carcassi (CLE, University of Edinburgh)

Tuesday, April 23
11:00am – 12:00pm
Room 1.17, DSB

We describe a systematic difference between the semantic structure of nouns and that of gradable adjectives in their bare use, couched in terms of Gärdenfors' conceptual spaces theory. We propose that this difference can be explained by a difference in the structure of the conceptual spaces underlying nouns and gradable adjectives, and we present an experiment designed to test this proposal. The data did not support the hypothesis. We then present a cognitive Bayesian model of learning that encodes the theoretical proposal, and nest it within a hierarchical Bayesian model to perform exploratory analysis.
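
To give a flavour of this style of model, here is a minimal Python sketch of Bayesian threshold inference for a gradable adjective. It is purely illustrative and is not the model from the talk: the degree scale, the flat prior, and the strong-sampling likelihood are all assumptions made for the example.

```python
import numpy as np

# Illustrative only: a generic Bayesian threshold learner for a gradable
# adjective such as "tall", NOT the model presented in the talk.
# Semantics assumed: "x is tall" is true iff degree(x) > theta.

thetas = np.linspace(0.0, 1.0, 101)             # candidate thresholds
posterior = np.ones_like(thetas) / len(thetas)  # flat prior over theta

def update(posterior, d):
    """Condition on hearing 'tall' used of an item with degree d.
    Strong sampling: the example is drawn uniformly from the region
    above the threshold, so p(d | theta) = 1/(1 - theta) for theta < d."""
    likelihood = np.where(thetas < d, 1.0 / (1.0 - thetas + 1e-9), 0.0)
    unnorm = posterior * likelihood
    return unnorm / unnorm.sum()

for d in [0.8, 0.75, 0.9]:   # degrees of items described as "tall"
    posterior = update(posterior, d)

print("posterior mean threshold:", (thetas * posterior).sum())
```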


4 April: Douwe Kiela


Grounded Multi-Agent Language Games

Douwe Kiela (Facebook AI Research)

Thursday, April 4
11:00am – 12:30pm
4.31, Informatics Forum

I will talk about recent work done at FAIR on novel directions for natural language processing research. While a lot of progress has recently been made in natural language understanding, e.g. by using (contextualized) word and sentence embeddings, big challenges remain. I will discuss fresh perspectives on natural language learning, in the shape of grounded multi-agent language games. While Wittgenstein is often invoked as the godfather of the distributional hypothesis, I will argue that he has rather different lessons to teach us, leading to a new research program for true natural language understanding, centered on active language use in grounded multi-agent language games. I will give some examples of research we have done at FAIR that goes in that direction.

2 April: Mora Maldonado and Jenny Culbertson


Person of interest: Learnability and naturalness of person systems

Mora Maldonado and Jenny Culbertson (CLE, University of Edinburgh)

Tuesday, April 2
11:30am – 12:30pm
G.32, 7 George Square

Person systems—typically exemplified in pronoun paradigms (e.g. me, you, us)—describe how languages categorize entities as a function of their role in the speech context (i.e. speaker(s), addressee(s), other(s)). As with other linguistic category systems (e.g. color and kinship terms), not all ways of partitioning the person space into different forms are equally likely cross-linguistically: while some partitions are extremely frequent, others are very rare or do not occur at all (Cysouw 2003).
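
As a concrete illustration of what "partitioning the person space" means, the Python sketch below enumerates every way of grouping a toy three-way person space into forms. This is a deliberate simplification: the typological space Cysouw (2003) considers is much richer, distinguishing, for example, number and inclusive vs exclusive "we".

```python
def partitions(items):
    """Yield every way of grouping items into non-empty, unordered cells."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # add `first` to an existing cell
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]              # or give `first` its own cell

persons = ["speaker", "addressee", "other"]
for p in partitions(persons):
    print(p)    # five partitions in total (the Bell number B3 = 5)
```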

Morpho-semantic approaches to person systems have aimed to provide an inventory of person features that generates all and only the attested partitions (Harley & Ritter 2002, Harbour 2016, Ackema & Neeleman 2018, among others). One potential problem with all these accounts is that the typological data they rely on are rather weak: not only is the sample of languages quite small, but there are also often inconsistencies in how paradigms are classified.

To address this issue, we aim to provide a method for investigating person systems experimentally. Such an approach allows us not only to extend the presently sparse typological data, but also to test whether typologically attested partitions are more natural and easier to learn than unattested ones.

In this talk, we will present a series of artificial language learning experiments where we test whether typological frequency correlates with learnability of person paradigms.

We will start by focusing on first person systems (e.g., ‘I’ and ‘we’ in English), and test the general predictions of theories that posit a universal set of features to capture this space. Our results provide the first experimental evidence for feature-based theories of person systems. We will then present some ongoing research where we take a similar approach to investigate potential asymmetries between the first, second and third person(s).


1 April: Marieke Woensdregt (pre-viva talk)


Co-evolution of language and mindreading: A computational exploration

Marieke Woensdregt (CLE, University of Edinburgh)

Monday April 1
10:00am – 10:30am
Room S1, 7 George Square

Language relies on mindreading (a.k.a. theory of mind), as language users have to entertain and recognise communicative intentions. Mindreading skills in turn profit from language, as language provides a means of expressing mental states explicitly and of talking about them. Given this interdependence, it has been hypothesised that language and mindreading have co-evolved. I will present an agent-based model that formalises this hypothesis by combining referential signalling with perspective-taking.

This model treats communicative behaviour as an outcome of an interplay between the context in which communication occurs, the agent’s individual perspective on the world, and the agent’s lexicon. However, each agent’s perspective and lexicon are private mental representations, not directly observable by other agents. Language learners are therefore confronted with the task of jointly inferring both the lexicon and the perspective of their cultural parent. Simulation results show that Bayesian learners can solve this task by bootstrapping one from the other, but only if the speaker uses a lexicon that is at least somewhat informative.
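
To make the joint-inference problem concrete, here is a toy Python sketch, not Woensdregt's actual model: a learner assigns a posterior over candidate (lexicon, perspective) pairs given observed signals. The referent locations, salience function, and hypothesis spaces are all invented for the example.

```python
import itertools
import numpy as np

# Toy illustration only, NOT the model from the talk: a Bayesian learner
# jointly infers a speaker's lexicon and perspective from observed signals.

referents = np.array([0.1, 0.9])   # locations of two referents in context
perspectives = [0.0, 1.0]          # candidate speaker perspectives

# Candidate lexicons: rows = referents, cols = signals, entries = p(signal | referent)
lexicons = {
    "informative": np.array([[0.9, 0.1], [0.1, 0.9]]),
    "ambiguous":   np.array([[0.5, 0.5], [0.5, 0.5]]),
}

def p_signal(lexicon, perspective):
    """p(signal): the speaker tends to talk about the referent nearest
    their perspective, then chooses a signal according to the lexicon."""
    salience = 1.0 / (np.abs(referents - perspective) + 0.1)
    p_referent = salience / salience.sum()
    return p_referent @ lexicon      # marginalise over the intended referent

observed = [1, 1, 0, 1, 1, 1]        # signals produced by the speaker

posterior = {}
for (name, lex), persp in itertools.product(lexicons.items(), perspectives):
    likelihood = np.prod([p_signal(lex, persp)[s] for s in observed])
    posterior[(name, persp)] = likelihood          # flat prior over hypotheses
total = sum(posterior.values())
for hypothesis, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(hypothesis, round(p / total, 3))
```

Note that under the ambiguous lexicon the two perspectives receive identical posteriors, echoing the finding that perspective can only be bootstrapped from a lexicon that is at least somewhat informative.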

This raises the question of the circumstances under which a population of agents can evolve such an informative lexicon from scratch. In this talk I will explore the effects of two different selection pressures: a pressure for successful communication and a pressure for accurate perspective-inference. I will also compare two different types of agents: literal communicators and pragmatic communicators. Pragmatic speakers optimise their communicative behaviour by maximising the probability that their interlocutor will interpret their signals correctly. Iterated learning results show that populations of literal agents evolve an informative lexicon not just when they're under a pressure to communicate, but also when they're under a pressure to infer each other's perspectives. Populations of pragmatic agents show similar evolutionary dynamics, except that they can achieve improvements in communication and perspective-inference while maintaining more ambiguous lexicons.
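
The literal/pragmatic contrast can be illustrated with a small Rational-Speech-Act-style computation in Python (a standard formulation, used here for illustration; the talk's exact model may differ):

```python
import numpy as np

# Literal vs pragmatic speakers, RSA-style; illustrative, not the talk's model.
lexicon = np.array([[1.0, 1.0, 0.0],   # rows = meanings, cols = signals;
                    [0.0, 1.0, 1.0]])  # 1 = the signal truthfully expresses the meaning

def normalise(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

literal_speaker = normalise(lexicon, axis=1)    # any truthful signal, at random
literal_listener = normalise(lexicon, axis=0)   # literal interpretation of signals
# Pragmatic speaker: choose signals in proportion to the probability that
# a literal listener would recover the intended meaning.
pragmatic_speaker = normalise(literal_listener, axis=1)

print(literal_speaker)    # meaning 0: signals 0 and 1 equally likely
print(pragmatic_speaker)  # meaning 0: the unambiguous signal 0 is preferred
```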


26 March: Arturs Semenuks


When are simpler languages easier to learn?

Arturs Semenuks (University of California San Diego)

Tuesday March 26
11:00am – 12:00pm
Lecture Theatre 2 (room no. G.07, ground floor), Appleton Tower

Languages with more L2 speakers tend to be morphologically simpler. The evidence for this comes mostly from qualitative and quantitative analyses of typological and diachronic data, as well as computational modelling. More experimental work, however, is necessary to (i) make causal claims about the influence of non-native speakers on language structure and (ii) fully understand the mechanism of that influence, i.e. how exactly the presence of L2 speakers in a population leads to language simplification down the line.

One frequently entertained explanation assumes that morphological simplification is caused primarily by, in the words of Peter Trudgill, “the lousy language learning abilities” of adults and that languages adapt to become more learnable for L2 speakers. In the talk I will present results from four experiments testing common assumptions of this hypothesised mechanism. In experiment 1, we find that imperfect learning does amplify the erosion of complex features in an iterated artificial language learning setup. In experiments 2-4, we test the assumption that descriptively simpler languages are also more learnable using artificial language learning. Surprisingly, we don’t find evidence for that being the case, except when the participants’ L1 structure matches the artificial language structure. I discuss the seeming tension between the results of the experiments, argue that descriptively simpler languages are not always easier to learn and propose some conjectures for when they are.
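
As a sketch of the kind of dynamic tested in experiment 1, the toy Python simulation below shows how imperfect learning can erode a redundant affix over generations of iterated learning. Everything here (the language, the error model, the regularisation strategy) is invented for illustration and does not reproduce the experiments.

```python
import random

# Toy iterated-learning sketch, not the talk's experiments: each generation
# learns form-meaning pairs through a noisy bottleneck; imperfect learning
# gradually erodes an affix that marks half of the meanings.

random.seed(1)
MEANINGS = range(8)
language = {m: ("stem%d" % m) + ("-AFF" if m % 2 else "") for m in MEANINGS}

def learn(teacher, n_exposures=12, error_rate=0.3):
    learner = {}
    for _ in range(n_exposures):
        m = random.choice(list(MEANINGS))
        form = teacher[m]
        if random.random() < error_rate:      # imperfect learning: affix dropped
            form = form.replace("-AFF", "")
        learner[m] = form
    for m in MEANINGS:                        # unseen meanings: bare stem
        learner.setdefault(m, "stem%d" % m)
    return learner

for generation in range(6):
    marked = sum("-AFF" in form for form in language.values())
    print("generation", generation, "affixed forms:", marked)
    language = learn(language)
```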

19 March: Sharon Goldwater


Do infants really learn phonetic categories?

Sharon Goldwater (University of Edinburgh) (joint work with Naomi Feldman, Thomas Schatz, Emmanuel Dupoux, Xuan-Nga Cao)

Tuesday March 19
11:00am – 12:30pm
G.32, 7 George Square

Early changes in infants' ability to perceive native and non-native speech sound contrasts are typically attributed to their developing knowledge of phonetic categories. I will argue, however, that there is little direct evidence of early category knowledge, and that alternative accounts of early perceptual changes should be considered. I will propose a general account, unsupervised representation learning, that draws on approaches standardly used in machine learning. I will then describe a specific model within this framework that successfully simulates the different developmental trajectories of Japanese-learning and American-English-learning infants with respect to the [r]-[l] contrast. Nevertheless, the representations learned by this model lack several necessary properties of phonetic categories. These results demonstrate that the observed changes in infant perception could occur in the absence of phonetic categories, prompting a potential re-examination of the timeline of early language acquisition.

26 February: Jia Loy


Adaptation may depend on perceived linguistic knowledge: Evidence from priming with native and nonnative interlocutors

Jia Loy (CLE, University of Edinburgh)

Tuesday, February 26
11:30am – 12:30pm
G.32, 7 George Square

It has been proposed that languages with more nonnative speakers are simpler due to native speakers' adjustments towards nonnative interlocutors. However, experimental evidence of the adaptive mechanisms at play in natural language is lacking. In this talk I present a set of experiments investigating the degree of adaptation in native English speakers towards their nonnative conversation partner. I discuss two mechanisms to which speaker adaptation has been attributed — priming, which emphasises an automatic, unconscious tendency to repeat recent information; and listener-oriented processes, which propose that speakers strategically adapt to specific interlocutors. Our results suggest that native speakers exhibit greater adaptation towards nonnative interlocutors only when the communicative context induces an inference about their partner's linguistic ability. I discuss the implications of these results with respect to the two mechanisms.

12 February: Chris Cummins


Efficient meanings for numerals

Chris Cummins (University of Edinburgh)

Tuesday February 12
11:30am – 12:30pm
G.32, 7 George Square

The use of number in natural language gives rise to various ambiguities that are difficult to characterise precisely: should reference to “200 people” be understood to invoke an exact interpretation, a lower bound, an upper bound, an approximate interpretation, or some combination of these? In practical terms, this is potentially consequential because of how numerical quantity information feeds into our decision-making. In this talk I aim to explore how subtleties of number interpretation bear upon our subsequent reasoning, but also what governs our interpretative decisions at a more abstract level: does the meaning of number reflect rational principles about how we should use simple signals to convey complex information?


5 February: Tamar Johnson


Assessing Integrative Complexity as a Measure of Morphological Learning

Tamar Johnson (Centre for Language Evolution, University of Edinburgh)

Tuesday 5 February
11:30am – 12:30pm
G.32, 7 George Square

Morphological paradigms differ widely across languages: some feature relatively few contrasts, others dozens. A key question in understanding this broad cross-linguistic variation is what makes such paradigms learnable. Recent work on morphological complexity has argued that certain features of even very large paradigms make them easy to learn and use. Specifically, Ackerman & Malouf (2013) propose an information-theoretic measure, i-complexity, which captures the extent to which forms in one part of a paradigm predict forms in another, and show that languages which differ widely in surface complexity exhibit similar i-complexity; in other words, morphological paradigms with many contrasts reduce the learnability challenge by having predictive relationships between inflections. This talk presents a set of artificial language learning experiments testing whether i-complexity in fact predicts the learnability of paradigms inflecting for noun class and number. Results reveal only weak evidence that low i-complexity paradigms are easier to learn. We suggest that alternative measures of complexity likely have a much larger impact on learning.
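
For concreteness, here is a small Python sketch of an i-complexity calculation in the spirit of Ackerman & Malouf (2013): the average conditional entropy of one paradigm cell's exponent given another's. The toy paradigm is invented for illustration.

```python
import math
from collections import Counter

# rows = lexemes, keys = paradigm cells, values = exponents (toy data)
paradigm = [
    {"SG": "-a", "PL": "-i"},
    {"SG": "-a", "PL": "-i"},
    {"SG": "-o", "PL": "-e"},
    {"SG": "-o", "PL": "-e"},
]

def cond_entropy(cell_out, cell_in):
    """H(cell_out | cell_in) in bits, estimated from the paradigm counts."""
    joint = Counter((row[cell_in], row[cell_out]) for row in paradigm)
    marginal = Counter(row[cell_in] for row in paradigm)
    n = len(paradigm)
    h = 0.0
    for (x, y), count in joint.items():
        h -= (count / n) * math.log2(count / marginal[x])
    return h

cells = ["SG", "PL"]
pairs = [(a, b) for a in cells for b in cells if a != b]
i_complexity = sum(cond_entropy(b, a) for a, b in pairs) / len(pairs)
print("i-complexity:", i_complexity)  # 0.0 here: each cell fully predicts the other
```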

29 January: Christine Cuskley


Frequency, stability, and regularity in language evolution

Christine Cuskley (Centre for Language Evolution, University of Edinburgh)

Tuesday 29 January
11:30am – 12:30pm
G.32, 7 George Square

Highly frequent linguistic units are more stable over time: for example, high-frequency words are more robust against change than lower-frequency words. This trend has a functional explanation: forms with high usage frequency are less free to vary, because variation in them is more likely to cause communicative failure. This is analogous to the dynamics of purifying and stabilising selection in biology: traits with acute survival relevance show strong selection against deleterious alleles (purifying selection), resulting in less variation across the population. This talk will examine analogous frequency-stability dynamics in language using agent-based models, along with experiments on (ir)regularisation behaviour in native and non-native speakers of English. Results suggest that stability in linguistic form across a population is particularly favoured for high-frequency meanings, but that the strength of this effect is mediated by dynamic properties of the population.
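
A minimal agent-based sketch of the frequency-stability dynamic (illustrative only; not the models from the talk): meanings used more often trigger more alignment events, so innovations are purged faster and fewer variants survive, analogous to purifying selection.

```python
import random

# Toy model: two meanings differing only in usage frequency. Use causes
# alignment (listener copies speaker); innovation is frequency-independent.

random.seed(0)
POP, STEPS = 50, 2000
freqs = {"hi": 0.9, "lo": 0.1}                   # meaning usage frequencies
pop = {m: ["formA"] * POP for m in freqs}        # everyone starts aligned

for _ in range(STEPS):
    for meaning, f in freqs.items():
        if random.random() < f:                  # the meaning gets used
            speaker, listener = random.sample(range(POP), 2)
            pop[meaning][listener] = pop[meaning][speaker]   # alignment
    innovator = random.choice(list(freqs))       # frequency-independent innovation
    pop[innovator][random.randrange(POP)] = "form" + random.choice("ABCD")

for meaning in freqs:
    print(meaning, "distinct variants:", len(set(pop[meaning])))
```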