May 21: Angeliki Lazaridou

Multi-agent language games for language learning

Angeliki Lazaridou (DeepMind)

Tuesday, May 21
11:00am – 12:30pm
Room 1.17, DSB

Distributional models and other supervised models of language focus on the structure of language and are an excellent way to learn general statistical associations between sequences of symbols. However, they do not capture the functional aspects of communication, i.e., that humans have intentions and use words to coordinate with others and make things happen in the real world. In this talk, I will present my research program on using multi-agent language games to achieve data-efficient (functional) natural language learning.

Bio: Angeliki Lazaridou is a senior research scientist at DeepMind. She obtained her PhD from the University of Trento under the supervision of Marco Baroni, where she worked on predictive grounded language learning. Currently, she is working on interactive methods for language learning that rely on multi-agent communication as a means of minimizing the use of supervised language data.