22 August: William Hamilton

16 August 2017  •  Svenja Wagner

Negativity and Semantic Change

William L. Hamilton (Stanford University)

Tuesday 22 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

It is often argued that natural language is biased towards negative differentiation: there is more lexical diversity in negative affectual language than in positive language. However, we lack an understanding of the diachronic linguistic mechanisms associated with negative differentiation. In this talk, I will review key concepts related to negative differentiation and discuss how I am using diachronic word embeddings to test whether negative lexical items are more semantically unstable than positive ones. Preliminary results suggest that rates of semantic change are faster for negative affectual language than for positive language. I will finish by discussing some practical consequences of this positive/negative asymmetry for sentiment analysis tools.
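
The abstract does not spell out the method, but one standard pipeline, used in Hamilton's earlier work on statistical laws of semantic change, aligns the embedding spaces of successive time periods with an orthogonal Procrustes rotation and measures each word's cosine distance across the alignment. A minimal numpy sketch; the matrices, vocabulary and affect lexicons here are hypothetical placeholders:

```python
import numpy as np

def procrustes_align(X, Y):
    # Find the orthogonal map that best rotates Y onto X (least squares),
    # so embeddings trained separately on two periods become comparable.
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return Y @ (U @ Vt)

def change_scores(X_old, X_new, vocab):
    # Cosine distance between each word's aligned old and new vectors:
    # a larger distance indicates more semantic change.
    X_new = procrustes_align(X_old, X_new)
    cos = (X_old * X_new).sum(axis=1) / (
        np.linalg.norm(X_old, axis=1) * np.linalg.norm(X_new, axis=1))
    return dict(zip(vocab, 1.0 - cos))

# Hypothetical usage: X_1900 and X_1990 hold row-aligned embeddings for
# `vocab`, trained on corpora from each period; neg/pos are seed lexicons.
# scores = change_scores(X_1900, X_1990, vocab)
# neg_rate = np.mean([scores[w] for w in neg_lexicon if w in scores])
# pos_rate = np.mean([scores[w] for w in pos_lexicon if w in scores])
```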

15 August: Molly Flaherty


Multi-verb descriptions to describe single events

Molly Flaherty (University of Edinburgh)

Tuesday 15 August 2017, 11:00–12:30
1.17 Dugald Stewart Building

Though communication in the manual modality allows for iconically motivated descriptions of physical events, sign languages, like spoken languages, employ conventionalized units and conventions for their combination to relate events in the world. In my previous work on Nicaraguan Sign Language (NSL), I observed a verb construction that breaks an event into two components: a primary verb depicting the action from the agent’s perspective (e.g. TICKLE), paired with a secondary verb from the patient’s perspective (e.g. GET-TICKLED). These constructions were found in signers of a range of ages, each exposed to NSL at a different point in its development since its first emergence 30 years ago. Further, the constructions appeared more often in events with animate patients than in events with inanimate patients. Deaf Nicaraguans who were never exposed to NSL (homesigners) also produced this construction, though to a lesser degree.

In my current work with Simon, we are exploring the learning and use of these constructions among hearing gesturers in the lab. In our study, we will expose participants to silent gesture languages containing these constructions in different proportions (e.g. multi-verb descriptions for 2/3 of animate-patient events vs. for only 1/3 of them), to see whether patient animacy affects how easily multi-verb constructions are learned. I’d love input on study design from the group.
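
Since the design is still open for input, the following is purely illustrative: a sketch of how one might sample an exposure language in which the rate of multi-verb descriptions is conditioned on patient animacy. All names, proportions and the seeding are placeholders, not the study's actual materials.

```python
import random

def build_training_set(animate_events, inanimate_events,
                       p_multi_animate, p_multi_inanimate, seed=0):
    # Pair each event with a single-verb or multi-verb description,
    # with multi-verb frequency conditioned on patient animacy.
    rng = random.Random(seed)
    items = []
    for ev in animate_events:
        form = "multi" if rng.random() < p_multi_animate else "single"
        items.append((ev, form))
    for ev in inanimate_events:
        form = "multi" if rng.random() < p_multi_inanimate else "single"
        items.append((ev, form))
    rng.shuffle(items)
    return items

# e.g. a 2/3 vs. 1/3 exposure language for animate-patient events:
# train = build_training_set(animate, inanimate, 2/3, 1/3)
```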

4 July: James Winters & Thomas Müller


Asynchronous information transfer as a constraint on the emergence of graphic codes

James Winters & Thomas Müller (MPI Jena)

Tuesday 4 July 2017, 11:00–12:30
1.17 Dugald Stewart Building

Humans commit information to graphic symbols for three basic reasons: as a memory aid, as a recording device, and as a means of communication. Yet, despite the benefits afforded by transmitting information graphically, writing stands out as a unique and compelling mystery: it emerged relatively late in human evolution, and it is the only graphic code which matches the power, precision, and versatility of signed and spoken languages. We argue in this talk that the difficulty of arriving at a graphic code like writing arises because asynchronous communication imposes hard constraints on information transfer: access to shared contextual information is circumscribed, and recourse to conversational repair mechanisms is removed. To investigate this claim, we present two referential communication experiments. The first experiment shows that graphic codes only reach a stable, accurate and optimal state when used for synchronous communication. By contrast, codes fail to emerge for asynchronous communication, with the systems becoming stuck in an unstable, inaccurate, and sub-optimal configuration. The second experiment singles out shared perceptual context from the general characteristics of synchronous communication, and demonstrates its importance for accurate graphic codes. Taken together, these results suggest that the paucity and late arrival of stable, powerful, and accurate graphic codes in human history is (partly) due to strong constraints on information transfer.

27 June: Cathleen O’Grady


The dot perspective task revisited: Do we automatically process what other people see?

Cathleen O’Grady (Edinburgh)

Tuesday 27 June 2017, 11:00–12:30
1.17 Dugald Stewart Building

The ability to reason about other individuals’ mental states (“mindreading”) is thought to be a central component of social cognition in humans, and particularly essential for language. For mindreading to be useful in social interaction, it must also be highly efficient. Samson et al.’s (2010) dot perspective task (DPT) provides evidence that taking another individual’s visual perspective (a very simple form of mindreading) is both rapid and involuntary.
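
For readers unfamiliar with the paradigm: on each DPT trial the participant verifies a number of dots, from either their own or an on-screen avatar's perspective, and the avatar either sees all of the dots or only a subset; the headline finding is slower self-perspective judgements when the two views differ. A hypothetical trial generator, with cell sizes and dot counts that are illustrative rather than Samson et al.'s actual parameters:

```python
import itertools
import random

def make_trials(n_per_cell=20, seed=1):
    # Cross perspective (judge own view vs. the avatar's view) with
    # consistency (avatar sees all the dots vs. only a subset).
    rng = random.Random(seed)
    trials = []
    for judge, consistent in itertools.product(["self", "other"],
                                               [True, False]):
        for _ in range(n_per_cell):
            total = rng.randint(1, 3)            # dots in the room
            avatar_sees = total if consistent else rng.randint(0, total - 1)
            trials.append({"judge": judge,
                           "total_dots": total,
                           "avatar_sees": avatar_sees,
                           "consistent": consistent})
    rng.shuffle(trials)
    return trials
```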

However, variants of the DPT suggest that the task’s headline effect is due not to perspective-taking, but rather to simpler processes that do not entail mindreading. In this talk, I will discuss these competing explanations, and present a new variant of the task that replicates the central finding of the DPT, but suggests that involuntary perspective-taking is not the best explanation for this effect. I will argue that the non-mentalistic account of the DPT may still be useful for understanding the apparent role of mindreading in communication.

13 June: Myrte Vos


Word order, naturalness and conventionalisation: Evidence from silent gesture

Myrte Vos (University of Amsterdam)

Tuesday 13 June 2017, 11:00–12:30
1.17 Dugald Stewart Building

Of the six possible ways to order Subject, Object and Verb, two – SOV and SVO – account for the constituent word order in nearly 80% of the world’s languages. Why? The pragmatic principle ‘Agent first’ accounts for the dominance of S-initial word orders, and recent work in word order typology, creoles, emerging sign languages, and improvised silent gesture suggests that SOV is the natural ‘default’ in nonverbal event representation and early language structure. But if that is so, why is SVO nearly as prominent as SOV?

One improvised silent gesture study, by Schouwstra & de Swart (2014), suggests that in improvised communication the choice of SOV versus SVO is conditioned on the semantic content of the verb. Another study, by Marno, Langus and Nespor (2015), posits that SVO is preferred by the syntax-governing ‘computational system’ of cognition: while improvised communication favours SOV, access to a lexicon frees up the cognitive resources needed to employ syntax, and “consequently SVO, the more efficient word order to express syntactic relations, emerges.” In their silent gesture task, half the participants improvised gestures for simple transitive events, while the other half were first taught a gesture lexicon before being asked to communicate; participants trained on a lexicon did indeed favour SVO. To investigate Marno et al.’s hypothesis further, we replicated this experiment with stimuli restricted to event types found to elicit SOV, and also ran an adapted condition using a lexicon of randomly assigned, arbitrary gestures.

23 May: Sean Roberts


Language adapts to interaction

Sean Roberts (Bristol)

Tuesday 23 May 2017, 11:00–12:30
1.17 Dugald Stewart Building

Language appears to be adapted to constraints from many domains such as production, transmission, memory, processing and acquisition. These adaptations and constraints have formed the basis for theories of language evolution, but arguably the primary ecology of language is face-to-face conversation. Taking turns at talk, repairing problems in communication and organising conversation into contingent sequences seem completely natural to us, but are in fact highly organised, tightly integrated systems which are not shared by any other species.

In this talk I discuss how one might link features of real-time interaction to different levels of language evolution: the evolution of a capacity for language; the initial emergence of linguistic systems; and the ongoing cultural evolution of languages. I will illustrate the links at each level using computational models, lab experiments and corpus analyses. I argue that a full explanation of the origin and structures of languages needs to take into account the ecology in which language is used: face-to-face interactive communication.

9 May: Kenny Smith


Acquiring variation in artificial languages

Kenny Smith (Edinburgh)

Tuesday 9 May 2017, 11:00–12:30
1.17 Dugald Stewart Building

I will present four experiments using artificial language learning paradigms to study the ability of adults and children to acquire conditioned and unconditioned variation, looking in particular at their ability and predisposition to condition variation on social and semantic cues.

2 May: Olga Fehér


The effect of semantic cues on the regularisation of unpredictable variation

Olga Fehér (Edinburgh)

Tuesday 2 May 2017, 11:00–12:30
1.17 Dugald Stewart Building

Variation in natural language is constrained: languages tend to lose competing variants over time, and where variation persists, its use is conditioned on linguistic or sociolinguistic context. When learners acquire languages that exhibit unpredictable variation (unnatural, unconditioned probabilistic variation), they often eliminate the variation by regularising to one of the competing variants or by conditioning on context. We previously found that, in addition to individual learning and transmission, interaction can lead to regularisation through convergence and priming between interlocutors. In this experiment, we investigated the influence of semantic cues on regularisation and conditioning during interaction and transmission. Adult participants learned and communicated with artificial languages exhibiting unconditioned variation in plural marking; the languages described images belonging to one or two semantic categories. We found that interacting dyads regularised in the one-category condition by eliminating one of the markers. In the two-category condition, however, dyads maintained variation without conditioning it on the semantic categories. Semantic conditioning occurred only in individuals learning alone, gradually, across episodes of transmission. The lack of conditioning in dyads was probably due to strong priming between communicating partners, which operated both within and across semantic categories. This suggests that the pattern of restricted, conditioned variation in natural language reflects the combined influence of biases in learning, recall and interaction.
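
The abstract does not say how regularisation and conditioning were quantified, but a common pair of measures is the overall entropy of marker use and the conditional entropy of marker given semantic category: regularisation drives the former towards zero, while conditioning keeps it high but drives the latter towards zero. A small self-contained sketch of that idea (not necessarily the authors' actual analysis):

```python
import math
from collections import Counter, defaultdict

def entropy(counts):
    # Shannon entropy (bits) of a frequency distribution.
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c)

def variation_measures(productions):
    # productions: list of (semantic_category, plural_marker) pairs.
    # Overall entropy near 0            -> regularised to one marker;
    # overall high but conditional low  -> variation conditioned on category.
    overall = entropy(Counter(m for _, m in productions))
    by_cat = defaultdict(Counter)
    for cat, m in productions:
        by_cat[cat][m] += 1
    n = len(productions)
    conditional = sum(sum(c.values()) / n * entropy(c)
                      for c in by_cat.values())
    return overall, conditional

# e.g. markers perfectly conditioned on two equal-sized categories:
# variation_measures([("cat1", "-ka")] * 10 + [("cat2", "-po")] * 10)
# returns (1.0, 0.0)
```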

25 April: Rebecca Morley


Representational considerations in models of sound change

Rebecca Morley (Ohio State)

Tuesday 25 April 2017, 11:00–12:30
1.17 Dugald Stewart Building

One view of phoneme split takes it to be the result of divergent phonetic variants (e.g., Janda and Joseph 2003). Closely tied to this view is the hypothesis of iterativity: socially motivated phonetic exaggeration accumulating over successive generations (e.g., Labov 1972, Guy 1980), or progressive reduction of frequent words over time (Phillips 1984, Bybee 2002). Iterativity is often assumed to be an inherent property of exemplar models. In a typical scenario, production starts with the selection of a token from the desired category. The token is then reduced, lenited, or otherwise altered in some way, yielding a new phonetic token. The new token is added back to the cloud of stored tokens, and the process starts over again (see Pierrehumbert 2001). Via this production-perception loop, words can be reduced two or more times with respect to the originating token. Since more frequent words are produced more often, the chances of multiply reduced tokens are higher. However, contrary to expectation, this mechanism does not consistently result in shorter word lengths for high-frequency than for low-frequency words. If frequency of occurrence is expressed in number of tokens, and sampling for production is random, then producing a less reduced token is also more likely in high-frequency than in low-frequency categories. And regardless of whether tokens decay or are replaced, the low-frequency category will eventually ‘catch up’ with the high-frequency category, and all words will achieve some optimal length.

In fact, the production side of this model makes even more problematic predictions. If phoneme-level tokens are selected at random from a phonetically detailed exemplar cloud, then egregious mismatches are possible: e.g., an [æ] originally followed by an [m] being selected for a pre-[b] context. The same holds at the word level: a token originally produced in a frequent collocation may be selected for a low-frequency context, and so on. Indexing exemplar clouds with all the necessary contextual information, however, results in an explosion of categories and a depletion of category members. In the limit, each category would contain a single member.
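
To make the catch-up argument concrete, here is a deliberately bare-bones simulation of the production-perception loop described above. It is not any published model, and every parameter (reduction rate, target length, cloud size, tick count) is an illustrative assumption. Under random sampling it shows the behaviour the abstract points out: the low-frequency word merely lags behind the high-frequency word on the way to the same optimal length.

```python
import random

def mean_duration(production_prob, ticks=20000, reduction=0.98,
                  target=60.0, max_store=500, seed=2):
    # Production-perception loop: sample a stored token at random, reduce
    # it (bounded below by an assumed articulatory target), store the
    # reduced token back, and let the oldest exemplars decay out.
    rng = random.Random(seed)
    cloud = [100.0] * 50            # initial token durations (arbitrary units)
    for _ in range(ticks):
        if rng.random() < production_prob:   # is the word produced this tick?
            token = rng.choice(cloud)
            cloud.append(max(target, token * reduction))
            if len(cloud) > max_store:
                cloud.pop(0)                 # oldest exemplar decays
    return sum(cloud) / len(cloud)

# A frequent word reduces sooner, but the rare word eventually 'catches up':
# mean_duration(0.9) and mean_duration(0.1) both approach the target length.
```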

Developing a model for the interaction of synchronic variation and diachronic change requires resolving these and other representational issues, some of which only surface when the entire trajectory of change is considered. Thus, while existing models can capture category shift and merger (Pierrehumbert 2001), or contrast stability and dispersion (Garrett and Johnson 2013, Wedel 2004), few can capture both. The model of Sóskuthy (2013) can generate phoneme split, no change, and no split with phonetic shift, as the result of vowel lengthening before voiced obstruents. However, these outcomes require a representational structure in which vowel categories contain at least two sub-categories: pre-voiced-obstruent and pre-voiceless-obstruent. Crucially, these sub-categories are semi-permeable, and greater frequency of occurrence can cause one sub-category to subsume the other. This scenario raises another unresolved question in exemplar modeling: the interaction between higher- and lower-level categories. Most models work exclusively at one level and assume the others. But the process by which the necessary categories at the sub-word level are generated from the word level (or vice versa) is non-trivial, and may not be consistent with model assumptions. A category as abstract as “vowels followed by a voiced obstruent” requires a massive amount of generalization over words with different syllable structures, over obstruents at different places of articulation, and so on. And if speakers create categories such as this, they can be expected to create categories such as “vowels before coronals” as well. It is not at all clear that existing models will be able to ‘scale up’ adequately under this added complexity.

This work gives a formal account of the representational commitments and assumptions of a range of models, and an assessment of their self-consistency. The claim is that the resolution of outstanding problems lies in determining the division between representations and processes. I argue, on the one hand, that phonetic effects such as “vowel lengthening”, or “vowel nasalization” are not processes themselves, but reside at the representational level. On the other hand, speaking rate must be able to apply after exemplar selection to compress or expand tokens as necessary to match speed of production. I consider prosodic effects, such as phrase-final lengthening to be necessarily processual as well. The ramifications of these representational choices are discussed with respect to the necessary constraints on a model deriving categorical sound change from existing synchronic variation.