Our percept of the space around us is both stable and dynamic. We see how objects or people move through space, but when we ourselves move, the world appears at rest even though its image in the eye is actually moving. How is a stable and consistent percept of the world achieved in a dynamic visual context? We study this question by analyzing visual illusions that break the stable and consistent percept of the world. A particular focus is the perception of space during saccadic eye movements, the quick changes of gaze that align the axis of sight with objects of interest. Every saccade shifts the image of the world on the retina. We perform more than a hundred thousand saccades every day but never experience the world as moving. However, for some tens of milliseconds before and during an eye movement this spatial stability breaks down: visual stimuli that are briefly flashed during this time are seen at grossly distorted positions. These effects reveal the dynamic process of space representation at work. We study the role of visual references, attention, and trans-saccadic memory in this process.
The link between saccadic eye movements and perception of space is also seen when one manipulates the eye movements. Saccades can be made artificially larger by a process called saccadic adaptation. Visual space perception then becomes plastic and changes along with the change in saccade length. Visual size perception is also affected. We study the mechanisms of saccadic adaptation and its effects on spatial perception. The methods we use include high-speed eye tracking, psychophysics, neurocomputational modelling, neurophysiology, and investigations of psychiatric and neurological patients.
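The logic of error-driven saccadic adaptation, in which consistent post-saccadic visual errors gradually change saccade gain, can be illustrated with a minimal delta-rule sketch. The function name, learning rate, and gain-update form below are illustrative assumptions, not the actual model used in our work:

```python
def adapt_gain(target_ecc, shift, trials=200, gain=1.0, rate=0.05):
    """Minimal delta-rule sketch of saccadic gain adaptation.

    On each trial the saccade lands at gain * target_ecc; the target is
    displaced by `shift` during the saccade, so a consistent post-saccadic
    visual error drives a small gain update (rate is a placeholder value).
    """
    history = []
    for _ in range(trials):
        landing = gain * target_ecc
        error = (target_ecc + shift) - landing    # post-saccadic error
        gain += rate * error / target_ecc         # normalized delta rule
        history.append(gain)
    return history

# hypothetical experiment: 10-deg target stepped +2 deg during the saccade
gains = adapt_gain(target_ecc=10.0, shift=2.0)
print(round(gains[-1], 3))  # gain converges toward 1.2, i.e. larger saccades
```

With a consistent forward target step, the gain settles at the value that lands the eye on the displaced target, which is the qualitative signature of adaptation making saccades artificially larger.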
Current Financial Support:
- DFG-ANR Cooperation Project La 952-8 ‘EyeSee: eye movements and vision: coupling between saccadic adaptation and visuo-spatial perception’ (2016-2019)
- EU MSCA-RISE Project Platypus ‘Plasticity of perceptual space under sensorimotor interactions’ (2017-2021)
Collaborators:
- Denis Pelisson, INSERM, Lyon
- Patrizia Fattori, University of Bologna
- Michele Rucci, University of Rochester
- Tamara Watson, Western Sydney University
- Frank Bremmer, Philipps-Universität Marburg
- Rebekka Lencer, Universitätsklinikum Münster
- Adam Morris, Monash University, Melbourne
- Katharina Rifai, ZEISS Vision Science Lab, Tübingen
Selected Publications:
Langbehn, E., Steinicke, F., Lappe, M., Welch, G. F., & Bruder, G. (2018). In the blink of an eye: leveraging blink-induced suppression for imperceptible position and orientation redirection in virtual reality. ACM Transactions on Graphics, 37(4).
Zimmermann, E., & Lappe, M. (2016). Visual space constructed by saccade motor maps. Frontiers in Human Neuroscience, 2016.00225.
Bosco, A., Lappe, M., & Fattori, P. (2015) Adaptation of saccades and perceived size after trans-saccadic changes of object size. Journal of Neuroscience 35(43):14448-14456
Havermann, K., Cherici, C., Rucci, M., & Lappe, M. (2014). Fine-scale plasticity of microscopic saccades. The Journal of Neuroscience, 34(35):11665-11672.
Havermann, K., & Lappe, M. (2010). The influence of the consistency of postsaccadic visual errors on saccadic adaptation. Journal of Neurophysiology 103(6):3302-3310.
Zimmermann, E., & Lappe, M. (2010). Motor signals in visual localization. Journal of Vision 10(6):2:1–11.
Hamker, F. H., Zirnsak, M., Calow, D., & Lappe, M. (2008). The peri-saccadic perception of objects and space. PLoS Computational Biology, 4(2):e31.
Awater, H., Burr, D., Lappe, M., Morrone, M. C., & Goldberg, M. E. (2005). The effect of saccadic adaptation on the localization of visual targets. Journal of Neurophysiology, 93:3605-3614.
Kaiser, M., & Lappe, M. (2004). Perisaccadic mislocalization orthogonal to saccade direction. Neuron, 41(2):293-300.
Lappe, M., Awater, H., & Krekelberg, B. (2000). Postsaccadic visual references generate presaccadic compression of space. Nature, 403:892-895.
How do we perceive and control our movements within the environment? When we walk around, ride a bike, or drive a car, the image of the world that we see is moving. The pattern of image motion experienced during self-motion is called optic flow. The visual system uses optic flow to control our movement through space. We try to understand how optic flow is analyzed by the visual system and how it is used to estimate self-motion. We combine a computer model of how populations of neurons in the brain process and analyze complex flow patterns with experimental studies of how human subjects perceive optic flow. We have investigated how the brain combines the many visual, motor, and vestibular cues to self-motion that it has available. A specific question is the relationship between optic flow and eye movements: how optic flow induces eye movements, how eye movements affect the structure of the optic flow that arrives in the eye, and how neurons may exploit the statistical structure of the flow to optimally extract self-motion. We also study how travel distance can be estimated from the visual input during self-motion. Beyond our interest in the underlying mechanisms of optic flow analysis in the visual system, we also put these mechanisms to work in technical settings of computer vision and virtual reality.
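The idea that travel distance is estimated by leaky integration of visual self-motion can be sketched in a few lines of code. The gain and leak constants below are placeholders for illustration, not fitted parameters:

```python
def leaky_path_integration(speeds, dt=0.1, gain=1.0, leak=0.02):
    """Sketch of leaky integration of visual self-motion speed.

    Each time step adds gain * speed * dt to the distance estimate, while a
    leak proportional to the current estimate drains it for every unit of
    distance travelled, so longer distances are increasingly underestimated
    (parameter values are illustrative).
    """
    estimate = 0.0
    for v in speeds:
        dx = v * dt                       # distance travelled this step
        estimate += gain * dx - leak * estimate * dx
    return estimate

# constant 2 m/s simulated forward motion for 30 s (60 m true distance)
est = leaky_path_integration([2.0] * 300)
print(round(est, 1))  # clearly less than the true 60 m
```

Because the leak grows with the accumulated estimate, the output saturates for long distances, which qualitatively matches the underestimation pattern that motivates leaky path-integration accounts.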
A second type of retinal motion that we experience every day is the motion produced by the movement of other people. The recognition of the actions and movements of people is among the most important tasks of vision. But it is also one of the most difficult, because the movement of the body has many degrees of freedom and is non-rigid. Yet, the brain has developed exquisite capabilities to recognize this 'biological motion'. It is possible to infer the actions, gender, or even identity of a person from the movement of only a few points on the body. How can such a rich description be obtained from so little information? We study how biological motion perception is achieved in the visual system and develop a computer model using similar strategies. A particular question is whether biological motion perception is derived from motion or from form signals. We have developed a variant of Gunnar Johansson's 'point-light display' which prohibits the direct use of motion signals. This stimulus demonstrates that dynamic form cues alone can support the perception of biological motion. We have developed a neurocomputational model that shows how biological motion can be inferred from form cues in a sequence of body postures. Our model assumes that the visual system matches incoming visual information about body posture against body shape templates, presumably contained in areas of the cortical form pathway. The distribution of activity over these template neurons indicates body posture and orientation. The perception of body movement is achieved in a second stage in which the temporal sequence of body postures is analyzed.
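The first, template-matching stage of such a model can be sketched as nearest-neighbour comparison of an incoming point-light frame against stored posture templates; the toy 2-D postures, labels, and distance metric below are invented for illustration and are not the model's actual templates:

```python
import math

def posture_distance(frame, template):
    """Summed Euclidean distance between corresponding marker points."""
    return sum(math.dist(p, q) for p, q in zip(frame, template))

def match_posture(frame, templates):
    """Return the label of the stored posture template nearest the frame.

    This corresponds to the feed-forward matching stage; analyzing the
    temporal sequence of winning labels would form the second stage.
    """
    return min(templates, key=lambda label: posture_distance(frame, templates[label]))

# toy two-marker 'postures' (hypothetical coordinates)
templates = {
    "stand": [(0.0, 0.0), (0.0, 1.0)],
    "lean":  [(0.0, 0.0), (0.5, 0.9)],
}
print(match_posture([(0.05, 0.0), (0.1, 1.0)], templates))  # → stand
```

In the full model the hard `min` is replaced by a graded distribution of activity over template neurons, so that posture and orientation are encoded by the population rather than a single winner.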
We study the differences between optic flow and biological motion, both in terms of their neurobiological pathways and their computational properties, and how both types of motion processing interact when they are encountered together, as, for example, when we walk along a crowded street. The methods we use include psychophysical and behavioral experiments, virtual reality, neurocomputational modeling, neurophysiology, and fMRI.
Current Financial Support:
- La 952-7 ‘Visual ecology of motion’ (2016-2021)
- La 952-4-3 ‘Interactive locomotion user interfaces for real walking through virtual worlds - from perception to application’ (2016-2019)
- RGC-DAAD Cooperation Project ‘The contribution of information beyond optic flow to object motion perception during self-motion’
Collaborators:
- Li Li, NYU Shanghai
- Frank Steinicke, Universität Hamburg
- Frank Bremmer, Philipps-Universität Marburg
- Diederick Niehorster, Lund University
- Gerd Bruder, University of Central Florida
Selected Publications:
Mayer, K. M., Riddell, H., & Lappe, M. (2019). Concurrent processing of optic flow and biological motion. Journal of Experimental Psychology: General (in press).
Masselink, J., & Lappe, M. (2015). Translation and articulation in biological motion perception. Journal of Vision, 15(11):10, 1-14.
Theusner, S., de Lussanet, M., & Lappe, M. (2014). Action recognition by motion detection in posture space. The Journal of Neuroscience, 34(3):909-921.
Bruder, G., Steinicke, F., Bolte, B., Wieland, P., Frenz, H., & Lappe, M. (2013). Exploiting perceptual limitations and illusions to support walking through virtual environments in confined physical spaces. Displays, 35(8):1847-1871.
Wittinghofer, K., de Lussanet, M. H. E., & Lappe, M. (2010). Category-specific interference of object recognition with biological motion perception. Journal of Vision 10(13):16:1–11.
Steinicke, F., Bruder, G., Jerald, J., Frenz, H., & Lappe, M. (2010). Estimation of detection thresholds for redirected walking techniques. IEEE Transactions on Visualization and Computer Graphics 16(1):17-27.
Lappe, M., Jenkin, M., & Harris, L. R. (2007). Travel distance estimation from visual motion by leaky path integration. Experimental Brain Research, 180:35-48.
Lange, J., & Lappe, M. (2006). A model of biological motion perception from configural form cues. Journal of Neuroscience, 26(11):2894-2906.
Beintema, J. A., & Lappe, M. (2002). Perception of biological motion without local image motion. Proceedings of the National Academy of Sciences, 99:5661-5663.
Lappe, M., Bremmer, F., & van den Berg, A. V. (1999). Perception of self-motion from visual flow. Trends in Cognitive Sciences, 3:329-336.
Lappe, M., & Rauschecker, J. P. (1994). Heading detection from optic flow. Nature 369:712-713.
Lappe, M., & Rauschecker, J. P. (1993). A neural network for the processing of optic flow from egomotion in man and higher mammals. Neural Computation 5:374-391.