Linguistics and English Language

Language evolution seminar

Speaker: Raquel Alhama (Universiteit van Amsterdam)

Topic: What do Neural Networks need in order to generalize?

Abstract: In an influential paper reporting on a combination of artificial language learning experiments with babies, computational simulations, and philosophical arguments, Marcus et al. (1999) claimed that connectionist models cannot account for human success at learning tasks that involve the generalization of abstract knowledge, such as grammatical rules. This claim triggered a heated debate, centred mostly on variants of the Simple Recurrent Network model (Elman, 1990).
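To make the task concrete: in these experiments, infants were familiarized with syllable triples generated by one template (e.g. ABA, as in "ga ti ga") and then tested on their ability to distinguish items consistent with that template from items following another (e.g. ABB, "ga ti ti"). The sketch below uses a hypothetical syllable inventory, chosen only for illustration, to show how such stimuli are structured; it does not reproduce the original materials.

```python
# Illustrative generator for Marcus et al. (1999)-style stimuli:
# syllable triples following an ABA or ABB template.
# The syllable inventories here are hypothetical.
import itertools

A_SYLLABLES = ["ga", "li", "ni", "ta"]
B_SYLLABLES = ["ti", "na", "gi", "la"]

def make_items(grammar):
    """Generate all syllable triples for the given template."""
    items = []
    for a, b in itertools.product(A_SYLLABLES, B_SYLLABLES):
        if grammar == "ABA":
            items.append((a, b, a))
        elif grammar == "ABB":
            items.append((a, b, b))
        else:
            raise ValueError(f"unknown grammar: {grammar}")
    return items

if __name__ == "__main__":
    print(make_items("ABA")[:3])  # e.g. [('ga', 'ti', 'ga'), ...]
    print(make_items("ABB")[:3])  # e.g. [('ga', 'ti', 'ti'), ...]
```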

In our work, we revisit this unresolved debate and analyse the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa; rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two methods that aim to provide such an initial state: a manipulation of the initial connections of the network in a cognitively plausible manner (concretely, by implementing a “delay-line” memory), and a pre-training algorithm that incrementally challenges the network with novel stimuli. We implement these techniques in an Echo State Network (Jaeger, 2001), and we show that only when both techniques are combined is the ESN able to succeed at the grammar discrimination task suggested by Marcus et al.
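As a rough illustration of the first method: a "delay-line" memory can be realized in an ESN reservoir by replacing the usual random recurrent weights with a chain of feed-forward connections between reservoir units, so the reservoir holds a decaying copy of recent inputs, much like a shift register. The sketch below is a minimal, hypothetical rendering of that idea, not the authors' implementation; all sizes and weight values are illustrative.

```python
# Minimal Echo State Network sketch (Jaeger, 2001) with delay-line
# reservoir wiring: each reservoir unit passes its state to the next,
# so the reservoir stores a fading history of recent inputs.
# Hyperparameters are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES = 3, 20          # input and reservoir sizes (illustrative)

# Delay-line wiring: sub-diagonal connections only, scaled below 1
# so that past inputs decay rather than explode.
W_res = np.zeros((N_RES, N_RES))
for i in range(N_RES - 1):
    W_res[i + 1, i] = 0.9

W_in = rng.uniform(-0.5, 0.5, size=(N_RES, N_IN))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence; return all states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# In an ESN only the readout is trained, e.g. by ridge regression
# from collected reservoir states to targets (not shown here).
seq = rng.standard_normal((5, N_IN))
print(run_reservoir(seq).shape)   # (5, 20)
```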

Contact

Seminars are organised by the Centre for Language Evolution

Jon Carr

Centre for Language Evolution


27 Sep 2016: What do Neural Networks need in order to generalize?

Room 1.17, Dugald Stewart Building, 3 Charles Street, Edinburgh, EH8 9AD