Language in Reinforcement Learning

ICML 2020 Workshop, 18 July 2020

The workshop will take place virtually due to the COVID-19 pandemic. There is no workshop-specific registration; you will be able to attend LaReL by registering for ICML.


Language is one of the most impressive human accomplishments and is believed to be core to our ability to learn, teach, reason and interact with others (Gopnik and Meltzoff, 1987; Spelke and Kinzler, 2007; Shusterman et al., 2011). It is hard to imagine teaching a child any complex task or skill without, at some point, relying on language to communicate. Written language has also given humans the ability to store information and insights about the world and pass them across generations and continents. Yet, current state-of-the-art reinforcement learning agents are unable to use or understand human language.

Practically speaking, the ability to integrate and learn from language, in addition to rewards and demonstrations, has the potential to improve the generalization, scope and sample efficiency of agents. For example, agents that are capable of transferring domain knowledge from textual corpora might be able to explore a given environment much more efficiently, or to perform zero- or few-shot learning in novel environments. Furthermore, many real-world tasks, including personal assistants and general household robots, require agents to process language by design, whether to enable interaction with humans or simply to use existing interfaces.

To support this emerging field of research, we are interested in fostering discussion around these challenges.

The aim of the first workshop on Language in Reinforcement Learning is to steer discussion and research of these problems by bringing together researchers from several communities, including reinforcement learning, robotics, NLP, computer vision and developmental psychology. Through this workshop, we aim to identify the main challenges, exchange ideas and lessons learned across the different research threads, and establish requirements for evaluation benchmarks for approaches that integrate language with sequential decision making.