JOURNAL OF VISION

Spatiotemporal predictability of saccades modulates postsaccadic feature interference
Chiu TY, Jaen I and Golomb JD
Spatial attention and eye movements jointly contribute to efficient sampling of visual information in the environment, but maintaining precise spatial attention across saccades becomes challenging due to the drastic retinal shifts. Previous studies have provided evidence that spatial attention may remap imperfectly across saccades, incurring systematic feature interference with ongoing perception, yet the role of saccade predictability remains largely untested. In the current study, we investigated whether the spatiotemporal predictability of saccades influences postsaccadic remapping and feature perception. In two preregistered experiments, we implemented the postsaccadic feature report paradigm and manipulated the spatiotemporal predictability of saccades. Experiment 1 manipulated spatial and temporal saccade predictability together, whereas Experiment 2 dissociated the roles of spatial and temporal predictability in separate conditions. In addition to spatial and temporal saccade predictability both improving general task performance, we found that spatial saccade predictability specifically modulated postsaccadic feature interference. When saccades were spatially unpredictable, "swap errors" occurred at the early postsaccadic time point, where participants misreported the retinotopic color instead of the spatiotopic target color. However, these swap errors were reduced when saccades were made spatially predictable. These results suggest that the systematic feature interference associated with postsaccadic remapping is sensitive to expectations about the upcoming saccade target location, highlighting the role of predictions in maintaining perceptual stability across saccades.
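A rough sketch of the trial classification implied by this abstract, not the authors' analysis code: each color-wheel report is labeled as a spatiotopic (target) report, a retinotopic swap, or a guess. The hue values, 30° criterion, and variable names are invented for illustration.

```python
import numpy as np

def classify_reports(reported, spatiotopic, retinotopic, criterion=30.0):
    """Label each trial by the candidate hue its report falls nearest,
    within `criterion` degrees on a 360° color wheel; otherwise a guess."""
    def ang_dist(a, b):
        d = np.abs(a - b) % 360.0
        return np.minimum(d, 360.0 - d)

    d_target = ang_dist(reported, spatiotopic)
    d_swap = ang_dist(reported, retinotopic)
    labels = np.full(reported.shape, "guess", dtype=object)
    labels[(d_target < d_swap) & (d_target <= criterion)] = "target"
    labels[(d_swap < d_target) & (d_swap <= criterion)] = "swap"
    return labels

# Simulated demo: 100 uniform-random reports against fixed candidate hues.
rng = np.random.default_rng(0)
reported = rng.uniform(0, 360, 100)
labels = classify_reports(reported, spatiotopic=np.full(100, 90.0),
                          retinotopic=np.full(100, 210.0))
print({k: int((labels == k).sum()) for k in ("target", "swap", "guess")})
```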
Saccade direction modulates the temporal dynamics of presaccadic attention
Kwak Y, Hanning NM and Carrasco M
Presaccadic attention enhances visual perception at the upcoming saccade target location. While this enhancement is often described as obligatory and temporally stereotyped, recent studies indicate that its strength varies depending on saccade direction. Here, we investigated whether the time course of presaccadic attention also differs across saccade directions. Participants performed a two-alternative forced-choice orientation discrimination task during saccade preparation. Tilt angles were individually titrated in a fixation baseline condition to equate task difficulty across the upper and lower vertical meridians. Sensitivity was then assessed at different time points relative to saccade onset and cue onset, allowing us to characterize the temporal dynamics of attentional enhancement. We found that presaccadic attention built up faster and reached higher levels preceding downward than upward saccades. Linear model fits revealed significant slope differences but no differences in intercepts, suggesting that the observed asymmetries reflect differences in attentional deployment during saccade preparation rather than preexisting differences in sensitivity. Saccade parameters did not account for these asymmetries. Our findings demonstrate that the temporal dynamics of presaccadic attention vary with saccade direction, which may underlie previously observed differences in presaccadic benefit at the upper and lower vertical meridians. This temporal flexibility challenges the view of a uniform presaccadic attention mechanism and suggests that presaccadic attentional deployment is shaped by movement goals. Our results provide new insights into how the visual and oculomotor systems coordinate under direction-specific demands.
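The slope-versus-intercept logic can be illustrated with a few lines of fitting code; the sensitivity values below are invented, not the authors' data.

```python
import numpy as np

# Fit a line to d' as a function of time after cue onset for each saccade
# direction, then compare slopes (buildup rate) and intercepts (baseline).
t = np.array([0, 25, 50, 75, 100, 125])                   # ms after cue onset
d_down = np.array([0.92, 1.15, 1.38, 1.66, 1.90, 2.14])   # faster buildup
d_up = np.array([0.88, 1.03, 1.14, 1.28, 1.39, 1.53])     # slower buildup

slope_down, icpt_down = np.polyfit(t, d_down, 1)
slope_up, icpt_up = np.polyfit(t, d_up, 1)
print(f"downward: slope = {slope_down:.4f} d'/ms, intercept = {icpt_down:.2f}")
print(f"upward:   slope = {slope_up:.4f} d'/ms, intercept = {icpt_up:.2f}")
# Different slopes with similar intercepts mirror the pattern reported above:
# direction-dependent deployment rates atop equated baseline sensitivity.
```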
Emergence of form-independent direction selectivity in human V3A and MT
Hong SW and Tong F
A fundamental challenge in motion perception lies in the fact that the global motion of any translating object will generate a heterogeneous set of local motion signals that vary depending on the oriented contour information within each local region. This so-called aperture problem illustrates how the visual system must integrate diverse types of motion signals to attain direction selectivity that is invariant to visual form. Here, we investigated how form-invariant motion selectivity emerges across the human visual pathway by using functional magnetic resonance imaging (fMRI) to measure direction-selective responses to drifting gratings and random-dot motion, and then testing for reliable generalization across stimulus types. All visual areas of interest showed highly reliable direction-selective classification performance for a given stimulus type, but early areas V1 and V2 showed chance-level generalization between motion types. Indeed, V1 responses tended to confuse random-dot motion with orthogonal grating motion, implying that drifting random dots generated oriented motion-streak responses in V1. By contrast, motion-sensitive areas MT+ and V3A showed reliable cross-generalization performance in our fMRI experiments that tested both linear and spiral motion trajectories. Our findings provide compelling evidence that motion direction selectivity becomes more invariant to stimulus form in higher visual areas, particularly along the dorsal visual pathway.
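A toy illustration of the cross-generalization test, not the authors' pipeline: a decoder trained on simulated "grating" patterns is tested on simulated "dot" patterns, with form-specific versus form-invariant direction signals built in by construction.

```python
import numpy as np
from sklearn.svm import LinearSVC

# "V1-like" responses carry only form-specific direction signals;
# "MT-like" responses carry a shared, form-invariant signal.
rng = np.random.default_rng(1)
n_trials, n_voxels = 100, 150
y = rng.integers(0, 2, n_trials)               # two motion directions
sign = (y * 2 - 1)[:, None].astype(float)

def simulate(axis):
    """Noisy voxel patterns with a direction-dependent signal along `axis`."""
    return rng.normal(size=(n_trials, n_voxels)) + sign * axis

for name, (shared_w, specific_w) in [("V1-like", (0.0, 0.4)),
                                     ("MT-like", (0.4, 0.0))]:
    shared = rng.normal(size=n_voxels)
    grating_axis = shared_w * shared + specific_w * rng.normal(size=n_voxels)
    dots_axis = shared_w * shared + specific_w * rng.normal(size=n_voxels)
    clf = LinearSVC(dual=False).fit(simulate(grating_axis), y)
    print(f"{name}: within-type {clf.score(simulate(grating_axis), y):.2f}, "
          f"cross-type {clf.score(simulate(dots_axis), y):.2f}")
```

Both regimes decode well within-type, but only the form-invariant signal generalizes across stimulus types, mirroring the V1/V2 versus V3A/MT+ contrast described above.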
Temporal dynamics and readout latency in perception and iconic memory
Matic K, Tafech I, König P and Haynes JD
After the offset of complex visual stimuli, rich stimulus information remains briefly available to the observer, reflecting a rapidly decaying iconic memory trace. Here we found that even when cues are presented during the final stage of stimulus presentation, the reportable information has already begun to decay. Using closely spaced readout cues and a theoretical model of information availability, we observed that a cue has to be presented around 10 to 30 milliseconds before stimulus offset to access the full sensory information. We suggest that this does not reflect an early loss in sensory encoding but is instead a consequence of a latency in the processing of the cue that postpones the readout of the sensory representation by 10 to 30 milliseconds. Our analysis also shows that spatial proximity of items in complex arrays impacts sensory representation during both perceptual encoding and initial memory decay. Overall, these results provide a theoretical and empirical characterization of the readout from visual representations and offer detailed insight into the transition from perception into iconic memory.
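A minimal version of the information-availability idea, assuming (purely for illustration) exponential decay after offset and a fixed cue-processing latency:

```python
import numpy as np

# Reportable information is full until stimulus offset, then decays
# exponentially; the cue accesses the trace only after a processing latency.
# The latency and decay constant are assumptions for illustration.
def available_information(cue_time_ms, latency_ms=20.0, tau_ms=200.0):
    """Information read out by a cue at `cue_time_ms` relative to stimulus
    offset (negative = cue before offset)."""
    readout_time = cue_time_ms + latency_ms
    return 1.0 if readout_time <= 0.0 else float(np.exp(-readout_time / tau_ms))

for cue in (-40, -20, 0, 50, 150):
    print(f"cue at {cue:+4d} ms -> information {available_information(cue):.2f}")
# Only cues arriving ~20 ms (here) before offset access the full trace,
# mirroring the latency account described above.
```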
Allocentric spatial representations dominate when switching between real and virtual worlds
McManus M, Seifert F, Schütz I and Fiehler K
After removing a virtual reality headset, people can be surprised to find that they are facing a different direction than expected. Here, we investigated whether people can maintain spatial representations of one environment while immersed in another. In the first three experiments, stationary participants were asked to point to previously seen targets in one environment, either the real world or a virtual environment, while in the other environment. We varied the amount of misalignment between the two environments (detectable or undetectable), the virtual environment itself (lab or kitchen), and the instructions (general or egocentric priming). Pointing endpoints were based primarily on the locations of objects in the currently seen environment, suggesting a strong reliance on allocentric cues. In the fourth experiment, participants moved in virtual reality while keeping track of an unseen real-world target. We confirmed that the pointing errors were due to a reliance on the currently seen environment. It appears that people hardly ever keep track of object positions in a previously seen environment and instead rely primarily on currently available spatial information to plan their actions.
Disentangling decision uncertainty and motor noise in curved movement trajectories
Chapman WG and Ludwig CJH
When a manual reaching target is selected from a number of alternatives, decision uncertainty can often result in curvature of movement trajectories toward a nonchosen alternative. This curvature in the two-dimensional object plane is typically attributed to competitive interactions between different movement goals. Several models of action selection assume an explicit link between the momentary position of the hand and the state of the underlying decision process. Under this assumption, tracking the position of the hand can be used to infer the temporal evolution of the decision. However, even without a selection requirement, movements show variable amounts of curvature due to motor noise. We assessed the relative contributions of decision uncertainty and motor noise to the variability in curvature in naturalistic reach-to-grasp actions. Participants had to pick up one of two blocks (the brighter or dimmer block), and we manipulated decision uncertainty by varying the luminance difference between the two blocks. Single-target baseline reaches were included to model the variability in curvature without a choice requirement. We assessed to what extent this baseline model can account for the curvature distributions observed under choice conditions and tested several modifications of the model to capture any effects of decision uncertainty. The best model of the curvature distributions under choice conditions involved a mixture of the baseline component along with a separate choice component. The weight of this choice component, together with an analysis of the likelihood of observed reaches under the choice and baseline components, suggests that the majority of reaches were unaffected by decision uncertainty and were compatible with the natural variability in movement trajectories due to motor noise. Unless the variability induced by factors unrelated to the decision process is adequately accounted for, the role of decision uncertainty may be overstated when it is inferred from reach trajectories.
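The mixture idea can be sketched as follows; the distributions, parameter values, and the 30% mixing weight are invented, and in practice the baseline component would be fit to the single-target reaches.

```python
import numpy as np
from scipy import optimize, stats

# Two-component mixture for choice-trial curvature: a baseline component
# (standing in for the model fit to single-target reaches) plus a shifted
# choice component, combined by maximum likelihood.
rng = np.random.default_rng(2)
baseline = stats.norm(loc=0.0, scale=1.0)
curvature = np.concatenate([baseline.rvs(700, random_state=3),
                            rng.normal(2.5, 1.0, 300)])   # 30% choice-affected

def neg_log_lik(params):
    w, mu, sigma = params
    mix = (1 - w) * baseline.pdf(curvature) + w * stats.norm.pdf(curvature, mu, sigma)
    return -np.sum(np.log(mix + 1e-12))

res = optimize.minimize(neg_log_lik, x0=[0.5, 1.0, 1.0],
                        bounds=[(0.01, 0.99), (-5.0, 5.0), (0.1, 5.0)])
w, mu, sigma = res.x
print(f"estimated choice weight {w:.2f} (true 0.30), shift {mu:.2f} (true 2.5)")
```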
Contrast negation increases face pareidolia rates in natural scenes
Balas B, Morton M, Setchfield M, Roshau L and Westrick E
Face pareidolia, the phenomenon of seeing face-like patterns in non-face images, has a dual nature: Pareidolic patterns are experienced as face-like, even while observers can recognize the true nature of the stimulus (Stuart et al., 2025). Although pareidolic faces seem to result largely from the canonical arrangement of eye spots and a mouth, we hypothesized that competition between veridical and face-like interpretations of pareidolic patterns may constrain face pareidolia in natural scenes and textures. Specifically, we predicted that contrast negation, which disrupts multiple aspects of mid- to high-level recognition, may increase rates of face pareidolia in complex natural textures by weakening the veridical, non-face stimulus interpretation. We presented adult participants (n = 27) and 5- to 12-year-old children (n = 67) with a series of natural images depicting textures such as grass, leaves, shells, and rocks. We asked participants to circle any patterns in each image that looked face-like, with no constraints on response time or on pattern size, position, or orientation. We found that, across our adult and child samples, contrast-negated images yielded more pareidolic face detections than positive images. We conclude that disrupting veridical object and texture recognition enhances pareidolia in children and adults by compromising half of the dual nature of a pareidolic pattern.
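Contrast negation itself is a one-line image operation, inverting intensities about the midpoint of the range while leaving spatial structure intact:

```python
import numpy as np

# Contrast negation reverses polarity while preserving spatial structure.
def negate(image_u8):
    """Contrast-negate an 8-bit grayscale image."""
    return 255 - image_u8

texture = np.random.default_rng(4).integers(0, 256, (64, 64), dtype=np.uint8)
negated = negate(texture)
assert np.array_equal(negate(negated), texture)   # negation is self-inverse
print(texture[0, :4], "->", negated[0, :4])
```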
Dynamic contrast sensitivity during human locomotion
Szekely B and MacNeilage PR
Locomotion poses a challenge to clear and stable vision. Reflexive head and eye movements act to stabilize the retinal image, but these do not act perfectly, so retinal image motion is increased during walking compared with standing. We nevertheless perceive the world as clear and stable during locomotion, suggesting that the visual system is well-adapted to meet the challenges posed by locomotion. To better understand these processes, we assessed dynamic contrast sensitivity during locomotion by presenting brief (24 ms) foveal Gabor targets (6°, 11 cpd) at threshold contrast to observers walking on a treadmill in an otherwise darkened room. Head and ankle motion were tracked, and presentation time was randomized, which allowed post hoc binning of responses according to stride-cycle timing to investigate how sensitivity is impacted by head motion and stride-cycle timing. Contrast sensitivity was improved during walking compared with standing over large portions of the stride cycle, except for epochs aligned with heel strikes, which drive large and unpredictable perturbations. This resulted in periodicity in contrast sensitivity at two cycles per stride, with additional oscillations observed at four and six cycles per stride. Pupil size was found to be moderately larger during walking compared with standing and also exhibited periodic fluctuations that were phase-locked to the stride cycle. Perceptual oscillations reflect the entrainment of visual processing by active behaviors. Robust contrast sensitivity during walking may be supported by action-contingent effects of locomotion on visual cortical activity that have been observed in several animal models.
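A sketch of the post hoc phase-binning analysis described above, with simulated stride phases and responses; the dip locations and magnitudes are invented.

```python
import numpy as np

# Each trial's Gabor presentation is assigned a stride-cycle phase, and
# accuracy is aggregated per phase bin. Dips are placed at the two heel
# strikes per stride, yielding the two-cycles-per-stride periodicity.
rng = np.random.default_rng(5)
n = 2000
phase = rng.uniform(0, 1, n)                 # 0 = heel strike, 1 = next stride
p_correct = 0.75 + 0.10 * np.sin(2 * np.pi * 2 * phase - np.pi / 2)
correct = rng.uniform(0, 1, n) < p_correct

bins = np.linspace(0, 1, 9)                  # eight phase bins per stride
idx = np.digitize(phase, bins) - 1
for b in range(8):
    print(f"phase {bins[b]:.3f}-{bins[b + 1]:.3f}: "
          f"P(correct) = {correct[idx == b].mean():.2f}")
```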
Visual field of the ferret (Mustela putorius furo), rat (Rattus norvegicus), and tree shrew (Tupaia belangeri)
Morris JM, Fernández-Juricic E, Plummer CE and Moore BA
We describe the visual field of three common model species in vision science to understand the organization of their visual perceptual experience and to contribute to continued studies of visual processing. Visual fields were measured using an ophthalmoscopic reflex technique in four common ferrets, four albino rats, and six northern tree shrews. Animals were anesthetized to avoid stress, and the midpoint between their eyes was centered inside a spherical space. A rotating perimetric arm was manipulated in 10° increments around the head. At each increment, direct ophthalmoscopy was used to visualize the limits of the retinal reflex for each eye, the overlap being the binocular visual field. Mean binocularity in the horizontal plane was 63.7 ± 5.1°, 79.1 ± 7.4°, and 53.6 ± 12.0° in the ferret, rat, and shrew, respectively. Maximum mean binocularity was 69.0 ± 1.6° in the ferret, 90.0 ± 3.1° in the rat, and 53.6 ± 12.2° in the shrew, located 10° above, 40° above, and at the horizontal plane, respectively. Binocularity extended to 160°, 200°, and 180° in the sagittal plane in the ferret, rat, and shrew, respectively, from at least below the nose to above the head in all animals. Establishing the extent of the visual field accessible to the retina provides insight into the egocentric perceptual experience of animals. In describing the visual field, we provide a reference for the representation of visual space in different cortical and retinal regions, many of which represent specific subregions of the visual field.
Distinct temporal dynamics of judging scene-relative object motion and estimating heading from optic flow
Xie M and Li L
During self-movement, the visual system can identify scene-relative object motion via flow parsing and estimate the direction of self-movement (heading) from optic flow. However, the temporal dynamics of these two processes have not been examined and compared using matched displays. In this study, we examined how the accuracy of flow parsing and heading estimation changed with stimulus duration. Participants viewed a stereo optic flow display simulating forward translational self-movement through a cloud composed of wireframe objects, with stimulus durations of 100, 200, 400, 700, and 1000 ms. In Experiment 1, a yellow dot probe moved vertically in the scene for 100 ms near the end of the trial. A nulling motion component was added to the probe's image motion through an adaptive staircase to determine when the probe was perceived to move vertically in the scene, which was then used to compute the accuracy of flow parsing. In Experiment 2, participants viewed the same optic flow display without the moving probe object. The simulated heading was randomly varied in each trial, and participants were asked to estimate heading at the end of the trial. As stimulus duration increased, the accuracy of flow parsing decreased, whereas the accuracy of heading estimation increased. These contrasting temporal dynamics suggest that, despite both processes relying on optic flow, flow parsing and heading estimation involve distinct processing mechanisms with different temporal characteristics. This divergence, together with previous neurophysiological findings, led us to propose two potential neural mechanisms subserving these two processes, offering directions for future research.
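A minimal 1-up/1-down staircase of the kind used for the nulling procedure, with a simulated observer; the units, step size, and "true" nulling value are invented.

```python
import numpy as np

# The nulling component added to the probe's image motion is adjusted trial
# by trial until the probe appears to move purely vertically in the scene.
rng = np.random.default_rng(6)
true_null, step, null_component = 0.8, 0.05, 0.0   # arbitrary velocity units

track = []
for trial in range(80):
    # Simulated judgment: the probe appears tilted "rightward" when the
    # current nulling component under-compensates the flow-induced motion.
    p_right = 1.0 / (1.0 + np.exp(-(true_null - null_component) / 0.2))
    null_component += step if rng.uniform() < p_right else -step
    track.append(null_component)

estimate = np.mean(track[-30:])    # average over the late, converged trials
print(f"staircase estimate: {estimate:.2f} (simulated true value {true_null})")
```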
Violated expectations during locomotion through virtual environments: Age effects on gaze guidance
Meissner S, Miksch J, Würbach L, Feder S, Grimm S, Einhäuser W and Billino J
Gaze behavior during locomotion must balance the sampling of relevant information and the need for a stable gait. To maintain a safe gait in the light of declining resources, older adults might shift this balance toward the uptake of gait-related information. We investigated how violations of expectations affect gaze behavior and information uptake across age groups by asking younger and older adults to locomote through a virtual hallway, where they encountered expected and unexpected objects. We found that older adults looked more at the floor, even though the translational, though not the rotational, component of locomotion was virtual. In both age groups, dwell times were longer on unexpected than on expected objects. Although older adults showed shorter dwell times on expected objects, dwell times on unexpected objects were similar across age groups. Thus, the difference between expected and unexpected objects was greater in older adults. Gaze distributions were more influenced by cognitive control capacities than by motor control capacities. Our findings indicate that unexpected information attracts attention during locomotion, particularly in older adults. However, during actual locomotion in the real world, increased information processing might come at the cost of reduced gait safety if processing resources are shifted away from stabilizing gait.
Representational dynamics of the main dimensions of object space: Face/body selectivity aligns temporally with animal taxonomy but not with animacy
Leys G, Chen CY, von Leupoldt A, Ritchie JB and Op de Beeck H
Object representations are organized according to multiple dimensions, with an important role for the distinction between animate and inanimate objects and for selectivity for faces versus bodies. For other dimensions, questions remain about how they relate to these two primary dimensions. One such dimension is a graded selectivity for the taxonomic level to which an animal belongs. Earlier research suggested that animacy can be understood as a graded selectivity for animal taxonomy, although a recent functional magnetic resonance imaging study suggested that taxonomic effects are instead due to face/body selectivity. Here we investigated the temporal profile at which these distinctions emerge with multivariate electroencephalography (N = 25), using a stimulus set that dissociates taxonomy from face/body selectivity and from animacy as a binary distinction. Our findings reveal a very similar temporal profile for taxonomy and face/body selectivity, with a peak around 150 ms. The binary animacy distinction has a more continuous and delayed temporal profile. These findings strengthen the conclusion that effects of animal taxonomy are in large part due to face/body selectivity, whereas selectivity for animate versus inanimate objects is delayed when it is dissociated from these other dimensions.
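Time-resolved multivariate decoding of the kind reported here can be sketched as follows; the "EEG" data are simulated, with a transient signal injected near 150 ms.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Train and test a classifier at every time point to trace when a stimulus
# distinction (e.g., face vs. body) becomes decodable.
rng = np.random.default_rng(7)
n_trials, n_channels, n_times = 120, 32, 60
times = np.arange(n_times) * 10                    # 0-590 ms, 10-ms steps
y = rng.integers(0, 2, n_trials)
envelope = np.exp(-0.5 * ((times - 150) / 40.0) ** 2)   # transient at 150 ms
topography = rng.normal(size=n_channels)
X = rng.normal(size=(n_trials, n_channels, n_times))
X += 0.4 * (y[:, None, None] * 2 - 1) * topography[None, :, None] * envelope

acc = [cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
       for t in range(n_times)]
print(f"peak decoding accuracy {max(acc):.2f} at {times[int(np.argmax(acc))]} ms")
```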
Temporal recalibration to delayed visual consequences of saccades
Nörenberg W, Schweitzer R and Rolfs M
The accurate inference of causality between actions and their sensory outcomes requires determining their temporal relationship correctly despite variable delays within and across sensory modalities. Temporal recalibration, the perceptual realignment of actions with delayed sensory feedback, has been demonstrated across various sensorimotor domains. Here, we investigate whether this mechanism extends to saccadic eye movements and sensory events contingent on them. In three experiments, participants made horizontal saccades that triggered high-contrast flashes at varying delays. They then reported whether the flashes occurred during or after the saccade, allowing us to track perceived event timing. Exposure to consistent delays between saccade onset and the flash led to a shift in perceptual reports: flashes presented after saccade offset were more often judged as occurring during the movement. This recalibration effect was robust even when we manipulated relevant visual cues such as the presence of a structured background or the continuity of the saccade target. In a replay condition, we found a significant but much smaller recalibration effect between replayed saccades and flashes, demonstrating the importance of action execution for visuomotor temporal recalibration. These findings highlight the visual system's remarkable adaptability to temporal delays between eye movements and their sensory consequences. A similar recalibration mechanism may support perceptual stability in natural vision by dynamically realigning saccades with their resulting visual input, even amid changing visual conditions.
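Recalibration of this kind is typically quantified as a shift of the psychometric function relating flash delay to the proportion of "during the saccade" reports; the delays and proportions below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Fit a descending cumulative Gaussian before and after delay exposure and
# compare the 50% points (points of subjective simultaneity).
delays = np.array([0.0, 20, 40, 60, 80, 100])            # ms after saccade onset
p_pre = np.array([0.95, 0.85, 0.55, 0.25, 0.10, 0.05])
p_post = np.array([0.97, 0.92, 0.75, 0.50, 0.25, 0.10])  # after delay exposure

def p_during(delay, mu, sigma):
    """P("during the saccade") falls as the flash delay increases."""
    return 1.0 - norm.cdf(delay, mu, sigma)

(mu_pre, _), _ = curve_fit(p_during, delays, p_pre, p0=[40.0, 20.0])
(mu_post, _), _ = curve_fit(p_during, delays, p_post, p0=[60.0, 20.0])
print(f"point of subjective simultaneity shifted by {mu_post - mu_pre:.1f} ms")
```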
Attention-induced perceptual traveling waves in binocular rivalry
Cardoso JVX, Li HH, Heeger DJ and Dugué L
Cortical traveling waves, smooth changes of phase over time across the cortical surface, have been proposed to modulate perception periodically as they travel through retinotopic cortex, yet little is known about the underlying computational principles. Here, we make use of binocular rivalry, a phenomenon in which illusory perceptual waves are experienced when a shift in dominance occurs between two rival images. First, we assessed these perceptual waves using psychophysics. Participants viewed a stimulus restricted to an annulus around fixation, with orthogonal orientations presented to each eye. The stimulus presented to one eye was of greater contrast, thus generating perceptual dominance. When a patch of greater contrast was flashed briefly at one position in the other eye, it created a change in dominance that started at the location of the flash and expanded progressively, like a wave, as the previously suppressed stimulus became dominant. We found that the duration of the perceptual propagation increased with both distance traveled and eccentricity of the annulus. Diverting attention away from the annulus drastically reduced the occurrence and the speed of the wave. Second, we developed a computational model of traveling waves in which competition between the neural representations of the two stimuli is driven by both attentional modulation and mutual inhibition. We found that the model captured the key features of wave propagation dynamics. Together, these findings provide new insights into the functional relevance of cortical traveling waves and offer a framework for further experimental investigation into their role in perception.
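A toy rate model with standard rivalry-wave ingredients (mutual inhibition between eye-specific populations, slow adaptation, and lateral excitation along the annulus) illustrates how a local flash can seed a spreading dominance switch. Every parameter below is an illustrative guess, not the authors' fitted model, and the attentional modulation of their full model is omitted here.

```python
import numpy as np

# At each of N annulus locations, two eye-specific populations mutually
# inhibit, slowly adapt, and excite their ring neighbors. A brief contrast
# boost ("flash") to the suppressed eye at one location triggers a dominance
# switch that then spreads along the ring.
N, T = 40, 2000                        # ring locations, simulated ms
tau, tau_a = 10.0, 600.0               # rate and adaptation time constants
beta, alpha, gamma = 1.8, 0.3, 1.4     # inhibition, lateral coupling, adaptation
I_A, I_B = 1.0, 0.95                   # eye A carries the higher contrast

rA, aA = np.full(N, 0.556), np.full(N, 0.778)   # A-dominant equilibrium
rB, aB = np.zeros(N), np.zeros(N)
first_switch = np.full(N, np.nan)

relu = lambda x: np.clip(x, 0.0, None)
lateral = lambda r: alpha * (np.roll(r, 1) + np.roll(r, -1))

for t in range(T):
    flash = 1.0 * (np.arange(N) == 0) if 200 <= t < 260 else 0.0
    rA_next = rA + (-rA + relu(I_A - beta * rB - aA + lateral(rA))) / tau
    rB_next = rB + (-rB + relu(I_B + flash - beta * rA - aB + lateral(rB))) / tau
    aA += (gamma * rA - aA) / tau_a
    aB += (gamma * rB - aB) / tau_a
    rA, rB = rA_next, rB_next
    switched = np.isnan(first_switch) & (rB > rA)
    first_switch[switched] = t

for d in (0, 4, 8, 12):                # ring distance from the flashed location
    print(f"distance {d:2d}: dominance switch at t ≈ {first_switch[d]:.0f} ms")
```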
Keeping your eye, head, and hand on the ball: Rapidly orchestrated visuomotor behavior in a continuous action task
Schroeger A, Goettker A, Braun DI and Gegenfurtner KR
In everyday life, we must adapt our behavior to a continuous stream of tasks, timing motor responses and periods of rest accordingly. To mimic these challenges, we used a continuous interception computer game (Pong) on an iPad. This allowed us to measure the coordination of eye, hand, and head movements during natural sequential behavior while maintaining the benefits of experimental control. Participants intercepted a moving ball by sliding a paddle at the bottom of the screen so that the ball bounced back and moved toward the computerized opponent. We tested (i) how participants adapted their eye, hand, and head movements to this dynamic, continuous task, (ii) whether these adaptations are related to interception performance, and (iii) how their behavior changed under different conditions and (iv) over time. We showed that all movements are carefully adapted to the upcoming action. Pursuit eye movements provide crucial motion information and are emphasized shortly before participants must act, a strategy associated with better performance. Participants also increasingly used pursuit eye movements under more difficult conditions (fast targets and small paddles). Saccades, blinks, and head movements, which would lead to information loss, are minimized at critical times of interception. These strategic patterns are intuitively established and maintained over time and across manipulations. We conclude that humans carefully orchestrate their full repertoire of movements to aid performance and finely adjust them to the changing demands of the environment.
Visual-tactile shape perception in Argus II participants: The impact of prolonged device use and blindness on performance
Saltzmann S and Stiles N
In Stiles et al. (2022), we showed that experienced Argus II retinal prosthesis users could accurately match visual and tactile shape stimuli (n = 6; ≤42 months of use). In this follow-up paper, we studied participants with longer device use (n = 5; ≤121 months of use) to evaluate visual and multisensory performance over prolonged visual restoration. With the combined cohort of participants from both studies (N = 11), we found a significant positive correlation between multisensory performance and duration of use up to the median duration (42 months), and a positive slope but no significant correlation at and beyond the median duration. Therefore, there seems to be evidence for initial performance improvement with Argus II use. Nevertheless, there is also evidence for substantial individual differences with more extended device use, supported by a participant self-evaluation/questionnaire. Variations in the frequency of device usage, device functionality, or neurostructural plasticity could contribute to these individual differences. We also found a negative correlation in Argus II participants (N = 11) between task performance and the duration of blindness, potentially indicating the deleterious effects of atrophy and neurostructural changes during blindness on visual restoration functionality. Finally, a d' analysis showed that Argus II participants differed significantly from controls in sensitivity and bias on all tasks (including tactile-tactile matching), highlighting variation in shape task strategy. Overall, these data highlight individual differences in performance over prolonged device use and the negative impact of prolonged blindness on visual restoration.
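The d' analysis follows the standard equal-variance signal detection computation; the counts below are invented, not participant data.

```python
from scipy.stats import norm

# Sensitivity d' and criterion c from hit and false-alarm counts, with a
# log-linear correction to avoid infinite z-scores at proportions of 0 or 1.
def d_prime(hits, misses, fas, crs):
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(h) - norm.ppf(f), -0.5 * (norm.ppf(h) + norm.ppf(f))

d, c = d_prime(hits=34, misses=6, fas=12, crs=28)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```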
The effects of aging on directionally selective masking
Tsotsos LE, Roudaia E, Sekuler AB and Bennett PJ
Motion perception is degraded in older adults. Previous studies suggest that this effect of aging may be due in part to an increase in the bandwidth of directionally selective mechanisms. We tested this idea by measuring directional masking in younger and older adults. Experiments 1-3 measured the contrast needed to discriminate the direction of coherently moving signal dots embedded in high-contrast mask dots. The distribution of mask dot directions was varied with notch filters, and directional selectivity was indexed by the slope of the threshold-versus-notch function. Thresholds were higher and directional selectivity of masking was lower in older compared to younger adults. However, age differences were eliminated when signal and mask contrast were expressed as multiples of discrimination thresholds for unmasked motion. Experiments 4-5 measured direction discrimination thresholds by varying the proportion of coherently moving dots embedded in a mask consisting of dots whose directions varied across conditions. All dots were high contrast, so age differences in masking are unlikely to be caused by differences in contrast sensitivity in these conditions. Nevertheless, Experiments 4-5 also found higher discrimination thresholds and reduced directional selectivity in older adults. The results are consistent with the hypothesis that directionally selective mechanisms become more broadly tuned during senescence.
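Indexing directional selectivity by the slope of the threshold-versus-notch function can be sketched as follows; the threshold values are invented for illustration.

```python
import numpy as np

# Widening the notch (the range of mask directions excluded around the
# signal direction) releases masking, so log thresholds fall; a steeper
# decline implies narrower directional tuning.
notch_width = np.array([0, 15, 30, 45, 60])                 # degrees
thr_younger = np.array([0.200, 0.140, 0.090, 0.060, 0.045])
thr_older = np.array([0.300, 0.250, 0.210, 0.180, 0.160])

slope_y = np.polyfit(notch_width, np.log(thr_younger), 1)[0]
slope_o = np.polyfit(notch_width, np.log(thr_older), 1)[0]
print(f"younger: {slope_y:.4f} log-units/deg (steeper = more selective)")
print(f"older:   {slope_o:.4f} log-units/deg (shallower = broader tuning)")
```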
Facial feature representations in visual working memory: A reverse correlation study
Kuuramo C and Kurki I
For humans, storing facial identities in visual working memory (VWM) is crucial. Despite vast research on VWM, it is not well known how face identity and physical features (e.g., eyes) are encoded in VWM representations. Moreover, while it is widely assumed that VWM face representations efficiently encode the subtle individual differences in facial features, this assumption has been difficult to investigate directly. Finally, it is not known how facial representations are forgotten: some facial features could be more susceptible to forgetting than others, or, conversely, all features could decay randomly. Here, we use a novel application of psychophysical reverse correlation, enabling us to estimate how various facial features are weighted in VWM representations, how statistically efficient these representations are, and how representations decay with time. We employed the same-different task with two retention times (1 s and 4 s) with morphed face stimuli, enabling us to control the appearance of each facial feature independently. We found that only a few features, most prominently the eyes, had high weighting, suggesting that face VWM representations are based on storing a few key features. A classifier using stimulus information near-optimally showed weightings markedly similar to those of human participants (albeit weighting the eyes less and other features more), suggesting that human VWM face representations are surprisingly close to statistically optimal encoding. There was no difference in weightings between retention times; instead, internal noise increased, suggesting that forgetting in face VWM works as a random process rather than as a change in remembered facial features.
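The reverse-correlation logic can be sketched in a few lines: contrasting per-feature morph perturbations on "different" versus "same" responses recovers the relative feature weights. The features, "true" weights, and noise are all simulated.

```python
import numpy as np

# Each comparison face differs from the memorized face by random per-feature
# morph offsets; the observer's decision weights those offsets unevenly.
rng = np.random.default_rng(9)
features = ["eyes", "brows", "nose", "mouth", "jaw"]
true_w = np.array([0.9, 0.3, 0.2, 0.4, 0.1])      # eyes dominate (assumed)

n_trials = 4000
offsets = rng.normal(0, 1, (n_trials, len(features)))
evidence = np.abs(offsets) @ true_w + rng.normal(0, 0.5, n_trials)
said_different = evidence > np.median(evidence)

# Estimated weights: mean |offset| on "different" minus "same" trials.
est = (np.abs(offsets[said_different]).mean(axis=0)
       - np.abs(offsets[~said_different]).mean(axis=0))
for f, w in zip(features, est / est.max()):
    print(f"{f:>5}: relative weight {w:.2f}")
```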
Detection and identification of monocular, binocular, and dichoptic stimuli are mediated by binocular sum and difference channels
Georgeson MA, Sato H, Chang R and Kingdom FAA
How are signals from the two eyes combined? We asked whether the mechanisms that limit detectability of simple binocular and dichoptic stimuli also set the limits for their identification. For example, at low contrasts, can we (a) identify monocular versus binocular stimulation and/or (b) identify stimuli that are the same in both eyes (e.g., both light discs or both dark) versus stimuli with opposite polarity (light disc in one eye, dark disc in the other)? For the same- versus opposite-polarity tasks, mean proportions of correct trials for detection and for identification were almost identical. This is the classic signature of separate mechanisms for the two stimuli in question. For the monocular versus binocular task, however, identification (one eye or two?) was notably worse than detection, but these very different outcomes do not demand fundamentally different explanations. We developed a model with binocular sum and difference channels and formulated the identification task in a two-dimensional decision space whose coordinates were the sum and difference channel responses. This space was ideally suited to the same- versus opposite-polarity tasks, having orthogonal response axes (90° apart) for these stimuli. But monocular discs stimulated both channels, with greater overlap of monocular and binocular response distributions, hence greater perceptual confusion and poorer identification. When bias and uncertainty were also accounted for, the model fit to the identification data was excellent. We conclude that the same binocular sum and difference channels are used in stimulus detection and in perceptually encoding the degree of difference between the inputs to the two eyes.
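The channel geometry is easy to make concrete: with left- and right-eye contrasts L and R, the two channels respond to L + R and L - R, so same-polarity, opposite-polarity, and monocular stimuli occupy distinct directions in the decision space.

```python
# Same-polarity stimuli isolate the sum axis, opposite-polarity stimuli the
# difference axis (orthogonal, 90° apart), while a monocular disc drives
# both channels equally.
stimuli = {
    "binocular, same polarity": (1.0, 1.0),
    "dichoptic, opposite polarity": (1.0, -1.0),
    "monocular (left eye only)": (1.0, 0.0),
}
for name, (L, R) in stimuli.items():
    print(f"{name:>29}: sum = {L + R:+.1f}, diff = {L - R:+.1f}")
# The monocular stimulus lies at 45° between the axes, overlapping both
# binocular response distributions, hence the poorer identification.
```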
Local motion governs visibility and suppression of biological motion in continuous flash suppression
Swann W, Davidson M, Clouston G and Alais D
Presenting unique visual stimuli to each eye induces a dynamic perceptual state where only one image is perceived at a time, and the other is suppressed from awareness. This phenomenon, known as interocular suppression, has allowed researchers to probe the dynamics of visual awareness and unconscious processing in the visual system. A key result is that different categories of visual stimuli may not be suppressed equally, but there is still wide debate as to whether low- or high-level visual features modulate interocular suppression. Here we quantify and compare the strength of suppression for various motion stimuli in comparison with biological motion stimuli that are rich in high-level semantic information. We employ the tracking continuous flash suppression method, which recently demonstrated uniform suppression depth for a variety of static images that varied in semantic content. The cumulative findings of our three experiments show that suppression depth varies not with the strength of the suppressor alone but with different low-level visual motion features, in contrast to the uniform suppression depth previously shown for static images. Notably, disrupting high-level semantic information via the inversion or rotation of biological motion did not alter suppression depth. Ultimately, our data support the dependency of suppression depth on local motion information, further supporting the low-level local-precedence hypothesis of interocular suppression.
In the eye of the beholder? Gaze perception and the external morphology of the human eye
Alting C and Horstmann G
A well-known finding from research on gaze perception in triadic gaze tasks is the overestimation of horizontal gaze directions. In general, a looker model's gaze appears to deviate more from the straight line of sight than is objectively the case. Although there is, up to now, a substantial amount of evidence for what Anstis et al. (1969) termed the overestimation effect, results vary regarding the absolute overestimation factor. Starting from the occlusion hypothesis of Anstis et al. (1969), the present study examines the influence of horizontal iris movement range, operationalized as the sclera size index, on overestimation factors acquired for a sample of 40 looker models. The study yielded two main findings. First, horizontal iris movement range (sclera size index: M = 2.02, SD = 0.11, min = 1.79, max = 2.25) did not explain variance in the overestimation factors (M = 1.79, SD = 0.16, min = 1.49, max = 2.24) obtained separately for each looker model. Second, intraclass correlations revealed that variance in perceived gaze directions between observers was roughly 10 times larger (ICC = 0.189) than variance between looker models (ICC = 0.019). The results strongly emphasize the need for larger and more diverse observer samples and may serve as a post hoc justification for using only a few or no different looker models in triadic gaze judgment tasks.
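The intraclass correlations can be reproduced in form with a one-way random-effects ICC computed from ANOVA mean squares; the simulated variance components below are chosen to roughly mimic the reported values, and are not the study's data.

```python
import numpy as np

# One-way random-effects ICC(1) from ANOVA mean squares.
def icc1(data):
    """data: groups x measurements; returns ICC(1)."""
    k, n = data.shape
    grand = data.mean()
    ms_between = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    ms_within = (((data - data.mean(axis=1, keepdims=True)) ** 2).sum()
                 / (k * (n - 1)))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

rng = np.random.default_rng(10)
n_obs, n_look = 30, 40
judgments = (1.8
             + rng.normal(0, 0.15, (n_obs, 1))   # sizeable observer differences
             + rng.normal(0, 0.05, (1, n_look))  # small looker-model differences
             + rng.normal(0, 0.30, (n_obs, n_look)))
print(f"ICC over observers:     {icc1(judgments):.3f}")
print(f"ICC over looker models: {icc1(judgments.T):.3f}")
```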