This position is tied to the research project ‘Interpreting and Grounding Pre-trained Representations for Natural Language Processing’, a collaboration between Linköping University, Chalmers University of Technology, and Recorded Future AB.
Building computers that understand human language is one of the central goals of artificial intelligence. A recent breakthrough towards this goal is the development of neural models that learn deep contextualized representations of language. However, while these models have substantially advanced the state of the art in natural language processing (NLP) across a wide range of tasks, our understanding of the learned representations, and our repertoire of techniques for integrating them with other knowledge representations and reasoning facilities, remain severely limited. To address these gaps, the project will develop new methods for the interpretation, grounding, and integration of deep contextualized representations of language.