Linguistics and English Language

Language evolution seminar

Speaker: R. Tom McCoy (Department of Cognitive Science, Johns Hopkins University)

Title: How do neural networks represent compositional symbolic structure?

Abstract: Neural networks excel at processing language, yet their inner workings are poorly understood. One particular puzzle is how these models can represent compositional structures (e.g., sequences or trees) within the continuous vectors that they use as their representations. We introduce an analysis technique called DISCOVER and use it to show that, when neural networks are trained to perform symbolic tasks, their vector representations can be closely approximated using a simple, interpretable type of symbolic structure. That is, even though these models have no explicit compositional representations, they still implicitly implement compositional structure. We verify the causal importance of the discovered symbolic structure by showing that, when we alter a model’s internal representations in ways motivated by our analysis, the model's output changes accordingly.
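The abstract does not spell out the symbolic scheme involved, but one simple, interpretable structure of the kind it describes is a tensor product representation, in which a sequence is encoded as the sum of outer products of filler (symbol) vectors and role (position) vectors. The sketch below is a hypothetical illustration of that idea, not the speaker's DISCOVER method; the filler and role vectors here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: a tensor product representation (TPR) encodes
# a sequence as sum_i filler(seq[i]) (x) role(i). These vectors are
# made up for illustration, not taken from any trained network.
fillers = {s: rng.normal(size=4) for s in "abc"}
# A square Q from QR decomposition is orthogonal, so its rows are
# orthonormal and can serve as mutually distinguishable role vectors.
roles = np.linalg.qr(rng.normal(size=(3, 3)))[0]

def encode(seq):
    # Sum of outer products, flattened into a single continuous vector,
    # analogous to a network's hidden-state representation of the sequence.
    return sum(np.outer(fillers[s], roles[i]) for i, s in enumerate(seq)).ravel()

def unbind(vec, i):
    # Orthonormal roles let us recover the filler bound to position i
    # by multiplying the (reshaped) representation with role vector i.
    return vec.reshape(4, 3) @ roles[i]

v = encode("cab")
# Unbinding position 0 recovers the filler for "c".
assert np.allclose(unbind(v, 0), fillers["c"])
```

The unbinding step mirrors the kind of causal intervention the abstract mentions: if the representation really has this compositional form, swapping the vector bound to one role changes the decoded symbol at that position and nothing else.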

Contact

Seminars are organised by the Centre for Language Evolution

Lauren Fletcher

Centre for Language Evolution

28 September 2021


Online via link invitation