Speaker: Asli Ozyurek (Donders Institute for Brain, Cognition and Behavior/Radboud University/Max Planck Institute for Psycholinguistics)
Title: Multimodality as a design feature of human language capacity
Abstract: One of the unique aspects of human language is that in face-to-face communication it is universally multimodal (e.g., Holler and Levinson, 2019; Perniss, 2018). All hearing and deaf communities around the world use vocal and/or visual modalities (e.g., hands, body, face) with different affordances for semiotic and linguistic expression (e.g., Goldin-Meadow and Brentari, 2015; Vigliocco et al., 2014; Özyürek and Woll, 2019). Unlike speech, visible articulators in both co-speech gesture and sign have unique affordances for visible iconic, indexical (e.g., pointing) and simultaneous representations due to the use of multiple articulators. Traditional linguistics has treated such expressions as "external" to the language system. I will, however, argue and present evidence that both spoken languages and sign languages combine such modality-specific expressions with arbitrary, categorical and sequential expressions in their language structures, in cross-linguistically different ways (e.g., Kita and Özyürek, 2003; Özyürek, 2018, 2021). Furthermore, they modulate language processing, interaction and dialogue (Rasenberg, Özyürek, and Dingemanse, 2020) as well as language acquisition (e.g., Furman, Küntay, and Özyürek, 2014), suggesting that they are part of a design feature of a unified multimodal language system. I will end my talk with a discussion of how a multimodal (but not a unimodal) view can explain the dynamic, adaptive and flexible aspects of our language system, enabling it to optimally bridge human biological, cognitive and learning constraints with the interactive, culturally varying communicative requirements of face-to-face contexts.