June 17: Adam King


The lexicon is shaped for incremental processing

Adam King, University of Arizona

Monday, June 17
11:00am – 12:30pm
DSB, room 1.17

In this talk, I will present data showing that the lexicon is shaped for efficient word recognition and ask how this shaping came to pass. A cornerstone in the study of language as an efficient communication system is Zipf’s law of abbreviation: probable words are shorter, less probable words are longer. On one hand, short, probable words benefit the speaker, while long, less probable words benefit the listener, as listeners likely need more information from the acoustics of a less probable word to accurately identify it. However, not all parts of a word contribute equal disambiguating information to word identification. Spoken word processing is incremental and competitive, meaning that sounds that distinguish a particular word from many competitors are qualitatively more informative.
Drawing on a diverse set of languages, I will show that less probable words contain qualitatively higher-information sounds and that these sounds are positioned where they contribute most to word identification, i.e., early. In addition, I will present simulation data showing that the lexicon can develop the patterns mentioned above through simple generation-to-generation changes based on the words themselves, rather than through a lexicon-wide optimization to a global maximum.
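
As a rough illustration of the kind of measure at stake here, the following minimal Python sketch computes the surprisal of each segment of a word given the cohort of lexicon entries consistent with the preceding segments. It is not the speaker's actual model; the words and frequency counts are invented for illustration.

import math

# Toy frequency-weighted lexicon (invented words and counts, for illustration only).
lexicon = {"cat": 50, "cap": 30, "can": 20, "dog": 60, "dot": 10}

def segment_surprisals(word, lexicon):
    """Surprisal (in bits) of each segment, given the cohort of words
    sharing the preceding prefix, weighted by word frequency."""
    surprisals = []
    for i, seg in enumerate(word):
        prefix = word[:i]
        cohort = {w: f for w, f in lexicon.items() if w.startswith(prefix)}
        total = sum(cohort.values())
        # Probability mass of the cohort that continues the prefix with this segment.
        p_seg = sum(f for w, f in cohort.items() if len(w) > i and w[i] == seg) / total
        surprisals.append(-math.log2(p_seg))
    return surprisals

for w in lexicon:
    print(w, [round(s, 2) for s in segment_surprisals(w, lexicon)])

In this toy, a segment that rules out many frequent competitors early in the word carries more bits than one that is fully predictable from its cohort, which is the sense in which early, distinguishing sounds are qualitatively more informative.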

June 11: Jonas Nölle


Why left/right rather than uphill/downhill? An experimental approach to the evolution of spatial referencing

Jonas Nölle (CLE, University of Edinburgh)

Tuesday, June 11
11:00am – 12:00pm
DSB, room 1.17

There is considerable variation in how languages express spatial relations between objects. Strikingly, in many globalized and WEIRD societies (hence “GEIRD”), an egocentric system is preferred to express figure-ground relations (e.g., “the ball is to the left of the car”), while many non-GEIRD societies prefer perspective-independent geocentric systems that are often directly grounded in the environment (e.g., “the ball is uphill of the car”), even for expressing relations on a smaller scale such as in tabletop configurations. These strategies are associated with different underlying conceptualizations, and there has been considerable debate about their origin. More recent fieldwork lends support to the idea that spatial language could be an example of linguistic adaptation, where linguistic features are motivated by the social or physical environment. However, there are many confounding factors that are hard to disentangle, such as topography, language contact, subsistence style, etc., making it difficult to uncover straightforward causal relationships from fieldwork data alone. For a more mechanistic understanding of how spatial referencing strategies emerge and evolve, I propose to complement this line of research with laboratory experiments that allow contributing variables to be isolated. I will present two virtual reality (VR) experiments where we tested participants’ preference for egocentric/geocentric strategies in large-scale VR environments such as a mountain slope or a dense forest. Experiment 1 showed that, using their native language (which has a preference for egocentric solutions such as left/right), dyads solving a spatial coordination game were more likely to produce geocentric utterances (such as uphill/downhill) in a VR environment that strongly afforded geocentric solutions, suggesting that language is potentially adaptive to such salient cues, which could give rise to geocentric systems. By contrast, one of the reasons for the relative success of egocentric systems could be their flexibility. Experiment 2 thus tested whether, when switching VR environments, dyads were more likely to abandon a geocentric strategy in favour of an egocentric strategy than vice versa. However, the results did not support this prediction. This might have been due to there being no cost for establishing a new system, as participants had both strategies readily available in their native language. I will discuss the design of a third experiment (currently at the piloting stage) that tries to overcome this issue by using an Experimental Semiotics approach in which strategies have to be grounded first. We predict that introducing this cost will enable us to observe differences in the flexibility of egocentric and geocentric systems in the lab.

May 21: Angeliki Lazaridou


Multi-agent language games for language learning

Angeliki Lazaridou (Deepmind)

Tuesday, May 21
11:00am – 12:30pm
Room 1.17, DSB

Distributional models and other supervised models of language focus on the structure of language and are an excellent way to learn general statistical associations between sequences of symbols. However, they do not capture the functional aspects of communication, i.e., that humans have intentions and use words to coordinate with others and make things happen in the real world. In this talk, I will present my research program on using multi-agent language games towards achieving data-efficient (functional) natural language learning.

Bio: Angeliki Lazaridou is a senior research scientist at DeepMind. She obtained her PhD from the University of Trento under the supervision of Marco Baroni, where she worked on predictive grounded language learning. Currently, she is working on interactive methods for language learning that rely on multi-agent communication, as a means of minimizing the use of supervised language data.

May 14: Greville G. Corbett


Categorisation: what languages – and linguists – do with it

Greville G. Corbett
with Sebastian Fedden, Mike Franjieh, Alexandra Grandison & Erich Round
(Surrey Morphology Group, University of Surrey)

Tuesday, May 14
11:00am – 12:30pm
1.17 DSB

Fascinating new systems of nominal classification keep being found, but the tools for analysis have not kept pace. We therefore propose a typology of nominal classification, encompassing gender and classifier systems of categorisation. Earlier it made sense to oppose gender and classifiers (Dixon 1982), but the opposition cannot be maintained. Miraña has characteristics of gender and of classifiers (Seifart 2005); Reid’s (1997) account of Ngan’gityemerri provides further evidence against a sharp divide, since classifiers can grammaticalize into gender, through intermediate types. Relinquishing the opposition of gender vs classifiers allows a clearer picture of the possibilities. We pull apart traditional gender characteristics, and traditional classifier characteristics, and see that these characteristics combine in many ways. This motivates a canonical perspective: we define the notion of canonical gender, and use this idealization as a baseline from which to calibrate the theoretical space of nominal classification. This allows us to situate the interesting combinations we find.

Against this typological background, we may approach the origin and nature of gender. Here the possessive classifier systems of Oceanic languages can provide a unique insight. Typically, a noun can occur with different classifiers, depending on how the possessed item is used by the possessor. But we also find, in marked contrast, languages like North Ambrym (Vanuatu), where particular nouns typically occur with a given classifier (Franjieh 2016). We argue that North Ambrym’s innovative system resembles a gender system: a noun must occur with a particular classifier regardless of contextual interactions. We seek to establish empirically whether gender systems can indeed emerge from possessive classifiers in this way. We must also uncover how and why languages would relinquish a useful, meaningful classificatory system, and adopt a rigid, apparently unmotivated gender system.

We have designed and will run seven novel experiments to compare possessive classifier systems in six Oceanic languages of Vanuatu and New Caledonia. Each of these six languages has a different inventory size of classifiers — from a simple two-way distinction to a more complex inventory of twenty-three. This combination of typology with psycholinguistics promises to shed new light on the development and functioning of systems of nominal classification. We are keen to have feedback before a round of psycholinguistic experiments in the field this summer. The Oceanic data obtained so far suggest that, in this instance, we find an interesting parallelism: diachronic change is running in the direction of canonicity.

May 7: Isabelle Dautriche


Some constraints on the lexicons of human languages have cognitive roots present in baboons

Isabelle Dautriche (CLE, University of Edinburgh)

Tuesday, May 7
11:00am – 12:00pm
DSB, room 1.17

There are constraints on what a lexical element may denote: there is no word for ‘cat or parrot’, intuitively because this would lump together two “separate” classes of objects. I will present experiments showing that, even in non-linguistic settings, human and non-human animals tend to group objects into classes following a “connectedness constraint”. This result suggests that the cognitive roots responsible for (at least some) regularities across the lexicons of human languages are present in a similar form in other species.

30 April: Serhii Havrylov


Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols

Serhii Havrylov (Institute for Language, Cognition and Computation, University of Edinburgh)

Tuesday, April 30
11:00am – 12:30pm
Room 1.17, DSB

Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop the communication protocol necessary to succeed in this game. Unlike previous work, we require that the messages they exchange, both at train and test time, take the form of sequences of discrete symbols. We compare a reinforcement learning approach with one using a differentiable relaxation. We also observe that the protocol we induce by optimising communication success exhibits a degree of compositionality and variability (i.e., the same information can be phrased in different ways), both properties characteristic of natural languages. As the ultimate goal is to ensure that communication is accomplished in natural language, we also perform experiments where we inject prior information about natural language into our model and study the properties of the resulting protocol.
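
For readers unfamiliar with the differentiable relaxation mentioned above, the following PyTorch sketch shows the general technique (straight-through Gumbel-Softmax sampling of a sequence of discrete symbols, so that gradients can flow through the sender's choices). It is not the authors' actual architecture; the layer sizes, vocabulary size, and message length are invented for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Sender(nn.Module):
    """Toy sender: encodes a target vector and emits a sequence of discrete
    symbols via straight-through Gumbel-Softmax, keeping the model differentiable."""
    def __init__(self, input_dim=64, hidden_dim=128, vocab_size=10, max_len=5):
        super().__init__()
        self.encode = nn.Linear(input_dim, hidden_dim)
        self.rnn = nn.GRUCell(vocab_size, hidden_dim)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)
        self.vocab_size, self.max_len = vocab_size, max_len

    def forward(self, target, tau=1.0):
        h = torch.tanh(self.encode(target))                  # initialise hidden state from the target
        inp = torch.zeros(target.size(0), self.vocab_size)   # dummy start symbol
        message = []
        for _ in range(self.max_len):
            h = self.rnn(inp, h)
            logits = self.to_vocab(h)
            # hard=True: one-hot symbols on the forward pass, soft gradients on the backward pass
            symbol = F.gumbel_softmax(logits, tau=tau, hard=True)
            message.append(symbol)
            inp = symbol
        return torch.stack(message, dim=1)                   # (batch, max_len, vocab_size)

sender = Sender()
print(sender(torch.randn(4, 64)).shape)                      # torch.Size([4, 5, 10])

A receiver network would consume such a message and pick the target out of a set of distractors, with the whole pipeline trained end-to-end on communication success; the reinforcement learning alternative instead samples hard symbols and scores them with a reward.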

23 April: Fausto Carcassi


The semantic structure of gradable adjectives: an experiment and a Bayesian model

Fausto Carcassi (CLE, University of Edinburgh)

Tuesday, April 23
11:00am – 12:00pm
Room 1.17, DSB

We describe a systematic difference between the semantic structure of nouns and that of gradable adjectives in their bare use, in terms of Gärdenfors’ conceptual spaces theory. We propose that this difference can be explained by a difference in the structure of the conceptual spaces underlying nouns and gradable adjectives. We present an experiment designed to test this proposal; the data did not support the hypothesis. We then present a cognitive Bayesian model of learning that encodes the theoretical proposal, and nest it within a hierarchical Bayesian model to perform exploratory analysis.


4 April: Douwe Kiela


Grounded Multi-Agent Language Games

Douwe Kiela (Facebook AI Research)

Thursday, April 4
11:00am – 12:30pm
4.31, Informatics Forum

I will talk about recent work done at FAIR on novel directions for natural language processing research. While a lot of progress has recently been made in natural language understanding, e.g. by using (contextualized) word and sentence embeddings, big challenges remain. I will discuss fresh perspectives on natural language learning, in the shape of grounded multi-agent language games: while Wittgenstein is often invoked as the godfather of the distributional hypothesis, I argue that he has rather different lessons to teach us. This leads to a new research program for true natural language understanding, centering on active language usage in “grounded multi-agent language games”. I will give some examples of research we have done at FAIR that goes in that direction.

2 April: Mora Maldonado and Jenny Culbertson


Person of interest: Learnability and naturalness of person systems

Mora Maldonado and Jenny Culbertson (CLE, University of Edinburgh)

Tuesday, April 2
11:30am – 12:30pm
G.32, 7 George Square

Person systems—typically exemplified in pronoun paradigms (e.g. me, you, us)—describe how languages categorize entities as a function of their role in speech context (i.e., speaker(s), addressee(s), other(s)). Like other linguistic category systems (e.g. color and kinship terms), not all ways of partitioning the person space into different forms are equally likely cross-linguistically. Indeed, while some partitions are extremely frequent, others are very rare or do not occur at all (Cysouw 2003).

Morpho-semantic approaches to person systems have aimed to provide an inventory of person features that generates all and only the attested partitions (Harley & Ritter 2002, Harbour 2016, Ackema & Neelman 2018, among others). One potential problem with all these accounts is that the typological data they rely on are rather weak: not only is the sample of languages quite small, but there are also often inconsistencies in the way paradigms are classified.

To address this issue, we aim to provide a method for investigating person systems experimentally. Such an approach allows us not only to extend the presently sparse typological data, but also to test whether typologically attested partitions are more natural and easier to learn than unattested ones.

In this talk, we will present a series of artificial language learning experiments where we test whether typological frequency correlates with learnability of person paradigms.

We will start by focusing on first person systems (e.g., ‘I’ and ‘we’ in English), and test the general predictions of theories that posit a universal set of features to capture this space. Our results provide the first experimental evidence for feature-based theories of person systems. We will then present some ongoing research where we take a similar approach to investigate potential asymmetries between the first, second and third person(s).


1 April: Marieke Woensdregt (pre-viva talk)


Co-evolution of language and mindreading: A computational exploration

Marieke Woensdregt (CLE, University of Edinburgh)

Monday April 1
10:00am – 10:30am
Room S1, 7 George Square

Language relies on mindreading (a.k.a. theory of mind), as language users have to entertain and recognise communicative intentions. Mindreading skills in turn profit from language, as language provides a means for expressing mental states explicitly, and for talking about mental states. Given this interdependence, it has been hypothesised that language and mindreading have co-evolved. I will present an agent-based model to formalise this hypothesis, which combines referential signalling with perspective-taking.

This model treats communicative behaviour as an outcome of an interplay between the context in which communication occurs, the agent’s individual perspective on the world, and the agent’s lexicon. However, each agent’s perspective and lexicon are private mental representations, not directly observable by other agents. Language learners are therefore confronted with the task of jointly inferring both the lexicon and the perspective of their cultural parent. Simulation results show that Bayesian learners can solve this task by bootstrapping one from the other, but only if the speaker uses a lexicon that is at least somewhat informative.

This leads to the question under what circumstances a population of agents can evolve such an informative lexicon from scratch. In this talk I will explore the effects of two different selection pressures: a pressure for successful communication and a pressure for accurate perspective-inference. I will also compare two different types of agents: literal communicators and pragmatic communicators. Pragmatic speakers optimise their communication behaviour by maximising the probability that their interlocutor will interpret their signals correctly. Iterated learning results show that populations of literal agents evolve an informative lexicon not just when they’re under a pressure to communicate, but also when they’re under a pressure to infer each other’s perspectives. Populations of pragmatic agents show similar evolutionary dynamics, except that they can achieve improvements in communication and perspective-inference while maintaining more ambiguous lexicons.