Edinburgh Psychology Keynote Talk
CANCELLED (Bank Holiday)
Speaker: Adam Steel (Dartmouth College)
Title: Separation and integration of scene perception and visuospatial memory
Abstract: Effective real-world behaviors entail integrating sensory input with our memories: as we explore our world, memory of the environment that is currently out of sight informs what we will see next as we move our bodies and shift our gaze. Yet the neural mechanisms allowing memory to influence perception are poorly understood. How can sensory and mnemonic representations interact while also avoiding interference? Here, I address this question in the context of scene perception using fine-grained individual-participant fMRI.

In the first study, participants learned a set of real-world visuospatial environments in virtual reality in which we manipulated the extent of visuospatial context associated with a scene image, spanning from a single field-of-view to a city street. We then investigated which brain areas support memory of visuospatial context during recall and perception. Across the whole brain, three patches of cortex represented visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. These anterior patches corresponded to the place memory areas (Steel et al., 2021), which selectively respond when visually recalling personally familiar places.

In the second study, we investigated the mechanism underpinning the interaction between the scene perception and place memory areas. Using population receptive field (pRF) mapping, we found that mnemonic areas exhibit retinotopic coding and share visual field representations with their paired perceptual areas, as would be predicted if retinotopy provides a structure for information exchange. Strikingly, the pRF populations in perceptual and mnemonic cortex exhibit opponent responses during top-down memory recall and bottom-up visual perception, suggesting a push-pull dynamic between these areas. Thus, retinotopy may provide a common reference frame aligning visual and memory representations, thereby scaffolding their dynamic interplay.
Together, these studies show that perceptual and mnemonic representations are maintained in distinct, spatially separated populations, but within a shared reference frame that enables functional interaction. This organization may allow predictive signals from memory to bias perception towards probable scene views, enabling efficient visually guided actions.
For further information, please contact Edward Silson.