Restoration of Sight with Electronic Retinal Prostheses
Retinal prostheses aim to restore sight to patients blinded by atrophy of photoreceptors using electrical stimulation of the inner retinal neurons. Bipolar cells can be targeted using subretinal implants, and their responses are then relayed to the central visual pathways via the retinal neural network, preserving many features of natural signal processing. Epiretinal implants stimulate the retinal output layer, the ganglion cells, and encode visual information directly in spiking patterns. Several companies and academic groups have demonstrated that electrical stimulation of the degenerate retina can elicit visual percepts. However, most failed to consistently and safely achieve an acceptable level of performance. Recent clinical trials demonstrated that subretinal photovoltaic arrays in patients visually impaired by age-related macular degeneration can provide letter acuity matching their 100 μm pixel pitch, corresponding to 20/420 acuity. Electronic zoom enabled patients to read smaller fonts. This review describes the concepts, technologies, and clinical outcomes of current systems and provides an outlook into future developments.
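As a back-of-the-envelope check of the pitch-to-acuity correspondence stated above (not taken from the review), one can assume the conventional approximation of roughly 288 μm of retina per degree of visual angle and letter acuity limited to one pixel per stroke width:

```python
# Sketch: Snellen acuity implied by a given pixel pitch, assuming ~288 um of
# retina per degree of visual angle (a standard approximation for the human
# eye) and resolution limited to one pixel per letter stroke width (MAR).

UM_PER_DEGREE = 288.0  # assumed retinal magnification

def snellen_from_pitch(pitch_um: float) -> float:
    """Snellen denominator (20/x) implied by a pixel pitch in micrometers."""
    mar_arcmin = pitch_um / UM_PER_DEGREE * 60.0  # minimum angle of resolution
    return 20.0 * mar_arcmin                      # 20/20 corresponds to 1 arcmin

print(f"20/{snellen_from_pitch(100):.0f}")  # -> 20/417, i.e., roughly 20/420
```

Under these assumptions, a 100 μm pitch works out to about 20/417, consistent with the 20/420 figure quoted above.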
Innate Immune Pathways Regulating Retinal Cell Development and Regeneration
Development of the vertebrate retina involves the interaction of multiple signaling pathways and cell types, and there is growing appreciation of the role of innate immune pathways in this process. Resident innate immune cells, particularly microglia, play myriad roles in retinal development, disease, and regeneration. Here we aim to highlight what is known about innate immune cell populations and pathways in retinal cell development and regeneration. Resident innate immune cells are present from the earliest stages of retinal development and regulate developmental cell elimination, synapse refinement, angiogenesis, and recovery from retinal damage. We discuss the signaling pathways mediating immune cell interactions with other cell populations in developing and regenerating retina and highlight species-specific differences in retinal innate immune cell function, which are particularly evident in retinal cell regeneration.
The Role of Vision Science in Understanding Animal Camouflage
Animal camouflage in the natural world has been studied for over a century, with early research often relying on descriptive accounts of patterning as perceived by human observers. Recent advances, however, have leveraged a deeper understanding of visual processing across a wide range of predators. This review examines literature illustrating how insights from vision science have enriched research on camouflage. We focus on three areas: color and texture, motion processing, and the perception of shape and depth. We discuss findings from vision research that show how animals seeking to remain undetected optimize their camouflage. We also explore how predator visual systems have evolved to break that camouflage. Last, we highlight gaps where vision science has yet to be applied to research on camouflage, with the hope of encouraging further interdisciplinary work.
What Do Visual Neural Networks Learn?
Over the past decade, artificial neural networks trained to classify images downloaded from the internet have achieved astounding, almost superhuman performance and have been suggested as possible models for human vision. In this article, we review experimental evidence from multiple studies elucidating the classification strategy learned by successful visual neural networks (VNNs) and how this strategy may be related to human vision as well as previous approaches to computer vision. The studies we review evaluate the performance of VNNs on carefully designed tasks that are meant to tease out the cues they use. The use of this method shows that VNNs are often fooled by image changes to which human object recognition is largely invariant (e.g., the change of a few pixels in the image or a change of the background or illumination), and, conversely, that the networks can be invariant to very large image manipulations that disrupt human performance (e.g., randomly permuting the patches of an image). Taken together, the evidence suggests that these networks have learned relatively low-level cues that are extremely effective at classifying internet images but are ineffective at classifying many other images that humans can classify effortlessly.
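To make the patch-permutation manipulation concrete, here is a minimal sketch of the kind of probe described above; it is not code from the reviewed studies, and `model` is a hypothetical image classifier:

```python
# Sketch: scramble an image's patches and ask whether a classifier's
# prediction survives. If it does across many images, the network is relying
# on local texture cues rather than global shape.
import numpy as np

def permute_patches(image: np.ndarray, patch: int, seed=None) -> np.ndarray:
    """Randomly permute the non-overlapping patch x patch tiles of an HxWxC image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[0] // patch, image.shape[1] // patch
    tiles = [image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
             for i in range(h) for j in range(w)]
    out = image.copy()
    for k, idx in enumerate(rng.permutation(len(tiles))):
        i, j = divmod(k, w)
        out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = tiles[idx]
    return out

# Hypothetical usage:
# same_label = model(image) == model(permute_patches(image, patch=56))
```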
Bio-Inspired Computational Imaging: Components, Algorithms, and Systems
Artificial vision has advanced significantly on the basis of insights from human and animal vision. Still, biological vision retains advantages over mainstream computer vision, notably in terms of robustness, adaptability, power consumption, and compactness. Natural vision also demonstrates a great diversity of solutions to problems, adapted to specific tasks. Biological vision best corresponds to the subfield of computational imaging, in which optics and algorithms are codesigned to uncover scene information. We review current progress and opportunities in optics, sensors, algorithms, and joint designs that enable computational cameras to mimic the power of natural vision.
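A minimal sketch of the optics/algorithm codesign idea, assuming a textbook formulation rather than any specific system from the review: image formation is modeled as blur by the optics' point-spread function plus noise, and the scene is recovered algorithmically by Wiener deconvolution. The PSF and noise level here are illustrative:

```python
# Sketch: forward model (blur + noise) and Wiener deconvolution recovery.
import numpy as np

rng = np.random.default_rng(4)
scene = rng.uniform(size=(64, 64))
psf = np.zeros((64, 64)); psf[:3, :3] = 1 / 9   # small box blur as the "optics"

K = np.fft.fft2(psf)
measured = np.real(np.fft.ifft2(np.fft.fft2(scene) * K))  # circular convolution
measured += rng.normal(scale=0.01, size=measured.shape)   # sensor noise

snr = 1e3                                         # assumed signal-to-noise ratio
wiener = np.conj(K) / (np.abs(K) ** 2 + 1 / snr)  # Wiener deconvolution filter
restored = np.real(np.fft.ifft2(np.fft.fft2(measured) * wiener))
print(f"RMS error: {np.sqrt(np.mean((restored - scene) ** 2)):.3f}")
```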
Perceptual and Cognitive Foundations of Information Visualization
Information visualization is central to how humans communicate. Designers produce visualizations to represent information about the world, and observers construct interpretations based on the visual input as well as their heuristics, biases, prior knowledge, and beliefs. Several layers of processing go into the design and interpretation of visualizations. This review focuses on processes that observers use for interpretation: perceiving visual features and their interrelations, mapping those visual features onto the concepts they represent, and comprehending information about the world based on observations from visualizations. Observers are more effective at interpreting visualizations when the design is well-aligned with the way their perceptual and cognitive systems naturally construct interpretations. By understanding how these systems work, it is possible to design visualizations that play to their strengths and thereby facilitate visual communication.
Neural Control of Vergence and Ocular Accommodation
We review the current state of our knowledge of the neural control of vergence and ocular accommodation in primates, including humans. We first describe the critical need for these behaviors for viewing in a three-dimensional world. We then consider the sensory stimuli that drive vergence eye movements and lens accommodation and describe models of the sensorimotor transformations required to drive these motor systems. We discuss the interaction of vergence with saccades to produce high-speed shifts in gaze between objects at different distances and eccentricities. We also cover the normal development of these eye movements as well as the sequelae associated with their maldevelopment. In particular, we examine the neural substrates that produce vergence and lens accommodation, including motoneurons, immediate premotor circuitry, cerebellar and precerebellar regions, and cerebral cortical areas.
Following the Tradition of the Italian School of Visual Science: Intellectually Challenging but also Incredibly Exciting
I enjoy studying the brain, a passion I inherited from my Italian mentors (Lamberto Maffei and Adriana Fiorentini) and Australian colleagues (John Ross and David Burr) when I was a young physics student. Looking back on the development of my career, I believe that my motivation in pursuing challenging research came from the great excitement that arose when we were close to understanding a problem. When this happened, I did not care whether I was without a real job or that I lacked the recognition I deserved as a female scientist in a highly competitive, male-dominated field. This professional joy, mixed also with family joys, such as being married to a scientist with whom I shared the same passion, was my strength. In this review, I briefly outline the values instilled by my family, teachers, peers, and research tutors, from the perspective of a woman from then-underdeveloped Southern Italy approaching a science career in the 1970s.
The Role of Layer 6 Corticothalamic Circuits in Vision: Plasticity, Sensory Processing, and Behavior
Layer 6 corticothalamic (L6 CT) pyramidal neurons send feedback projections from the primary visual cortex to both first- and higher-order visual thalamic nuclei. These projections provide direct excitation and indirect inhibition through thalamic interneurons and neurons in the thalamic reticular nucleus. Although the diversity of L6 CT pathways has long been recognized, emerging evidence suggests multiple subnetworks with distinct connectivity, inputs, gene expression gradients, and intrinsic properties. Here, we review the structure and function of L6 CT circuits in development, plasticity, visual processing, and behavior, considering computational perspectives on their functional roles. We focus on recent research in mice, where a rich arsenal of genetic and viral tools has advanced the circuit-level understanding of the multifaceted roles of L6 CT feedback in shaping visual thalamic activity.
Decoding Covert Visual Attention in Space and Time from Neural Signals
Visual attention prioritizes relevant stimuli in complex environments through top-down (goal-directed) and bottom-up (stimulus-driven) mechanisms within cortical networks. This review explores the neural mechanisms underlying visual attention, focusing on how attentional control is encoded and decoded from prefrontal signals in both spatial and temporal domains. Decoding methods enable real-time tracking of covert visual attention from prefrontal activity with high spatial and temporal resolution, as a neurophysiological proxy of the attentional spotlight. This research provides insights into stimulus selection mechanisms, proactive and reactive suppression of irrelevant stimuli, the rhythmic nature of attentional shifts and attentional saccades, the balance between focus and flexibility, and the variation of these processes along epochs of sustained attention. Additionally, the review highlights how recurrent neural networks in the prefrontal cortex contribute to supporting these attention dynamics. These findings collectively offer a comprehensive model of attention that integrates dynamic prioritization processes at short and longer timescales.
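For concreteness, here is a minimal sketch of decoding an attended location from population activity, in the spirit of the decoding methods discussed above. The data are synthetic, and the linear decoder is an illustrative stand-in for the richer models (e.g., recurrent networks) the review covers:

```python
# Sketch: decode the cued location from simulated prefrontal firing rates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_locations = 400, 50, 4
attended = rng.integers(0, n_locations, n_trials)        # cued location per trial
tuning = rng.normal(size=(n_locations, n_neurons))       # each cell's spatial bias
rates = tuning[attended] + rng.normal(scale=2.0, size=(n_trials, n_neurons))

decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, rates, attended, cv=5).mean()
print(f"decoded attended location at {acc:.0%} accuracy (chance = 25%)")
```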
Visual Crowding
Crowding is ubiquitous: When objects are surrounded by other elements, their perception may be impaired depending on factors such as the proximity of the surrounding elements and the grouping of elements and targets. Crowding research aims to identify these factors, for instance, which elements interfere with one another and how close they need to be to cause crowding. Traditionally, crowding was thought to occur only within narrow temporal and spatial limits around the target. Recent studies, however, reveal that crowding may result from both low- and high-level processes, such as perceptual grouping and timing, as well as the arrangement of complex visual stimuli. This review highlights these new insights, suggesting that overall organization, as well as both feedforward and feedback processes, plays a role. Crowding emerges as a highly complex and dynamic phenomenon, underscoring the need for a more integrated approach to fully capture its intricacies, which may carry broader implications not only for crowding but also for vision science as a whole.
Computational and Neuronal Basis of Visual Confidence
The primate brain excels at transforming photons into knowledge. When light strikes the back of the eye, opsin molecules within rods and cones absorb photons, triggering a change in membrane potential. This energy transfer initiates a cascade of neural events that endows us with useful knowledge. This knowledge manifests as subjectively experienced perceptual interpretations and mostly pertains to the 3D structure of the visual environment and the affordances of the objects within the scene. However, some of this knowledge instead pertains to the quality of these interpretations and contributes to our sense of confidence in perceptual decisions. Because such confidence reflects knowledge about knowledge, psychologists consider this the domain of metacognition. Here, we examine what is known about the neuronal basis of perceptual decision confidence, with a focus on vision. We review the crucial computational processes and neural operations that underlie and constrain the transformation of photons into visual metacognition.
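One standard computational account in this literature treats confidence as the posterior probability that the perceptual choice was correct, given the decision variable. A minimal signal-detection sketch of this idea follows; the parameters are illustrative, not drawn from the review:

```python
# Sketch: confidence as the posterior probability of a correct choice in a
# two-alternative Gaussian signal-detection model with equal priors.
from scipy.stats import norm

d_prime = 1.5  # illustrative observer sensitivity

def confidence(x: float) -> float:
    """P(choice correct | evidence x) for Gaussian signals at +/- d'/2."""
    p_right = norm.pdf(x, loc=+d_prime / 2)
    p_left = norm.pdf(x, loc=-d_prime / 2)
    posterior = p_right / (p_right + p_left)  # belief that "right" is true
    return max(posterior, 1 - posterior)      # confidence in the chosen option

print(confidence(0.1))  # near 0.5: weak evidence, low confidence
print(confidence(2.0))  # near 1.0: strong evidence, high confidence
```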
A Critical Look at Critical Periods
Over the past decade and a half, a new understanding has emerged of the role of vision during the critical period in the primary visual cortex. Rather than driving competition for cortical space, vision is now understood to inform the establishment of feature conjunctions that cannot be constructed intrinsically. Longitudinal imaging studies reveal that the establishment of these higher-order feature detectors is a remarkably dynamic process involving the gain and elimination of neurons from functional groups (e.g., binocular neurons with nonlinear response tuning). Experience exerts its influence selectively on this developing circuitry; some pathways require experience for normal development, while others appear to be intrinsically established. This difference drives the network dynamism that is exploited to construct novel cortical representations that best encode our local environment and inform our actions in it.
Development of Retinal Astroglia
Müller cells and retinal nerve fiber layer astrocytes are the major astroglia of the mammalian retina. They have numerous important functions in adulthood for maintaining neuronal homeostasis as well as in the developing retina, where they facilitate key events in the assembly of the retinal tissue. Recent years have seen substantial progress in understanding how these astroglial cells develop and how their development shapes the cells around them. We review the mechanisms underlying the formation, maturation, and spatial patterning of Müller glia and retinal astrocytes, with an emphasis on how they acquire their functional properties. We focus on developmental events that have a major impact on overall retinal integrity, such as the formation of neuro-glial junctions at the outer limiting membrane and the patterning of retinal astrocytes into a template that guides angiogenesis. Finally, we discuss examples of retinal diseases that originate in developmental defects affecting Müller cells or retinal astrocytes. These include certain classes of inherited retinal degenerations, as well as retinopathy of prematurity.
Seeing the Light: Perception and Discrimination of Illumination Color
The contributions of surface reflectance and incident illumination are entangled in the light reflected to the eye. The extent to which the perception of one determines the other has long been debated, particularly in empirical studies of surface lightness and color constancy. Despite enormous progress in physical measurements of the spatial, spectral, and temporal properties of natural illumination, and in the ability to generate and control in real time artificial light of an almost infinite variety of spectra, the questions of whether and how people perceive the illumination as a distinct entity with its own color, and the dependence of perceived surface color on perceived illumination, remain open. Given the rise in novel lighting interventions that modulate illumination spectra in order to improve health, well-being, productivity, and culture, it has become increasingly important to understand the two-way interaction between the visual and nonvisual sensing of illumination.
Seeing a Three-Dimensional World in Motion: How the Brain Computes Object Motion and Depth During Self-Motion
Humans and other animals move their eyes, heads, and bodies to interact with their surroundings. While essential for survival, these movements produce additional sensory signals that complicate visual scene analysis. However, these self-generated visual signals offer valuable information about self-motion and the three-dimensional structure of the environment. In this review, we examine recent advances in understanding depth and motion perception during self-motion, along with the underlying neural mechanisms. We also propose a comprehensive framework that integrates various visual phenomena, including optic flow parsing, depth from motion parallax, and coordinate transformation. The studies reviewed here begin to provide a more complete picture of how the visual system carries out a set of complex computations to jointly infer object motion, self-motion, and depth.
Visual Perception of Self-Motion
Visual perception of self-motion is essential for navigation and environmental interaction. This review examines the mechanisms by which we perceive self-motion, highlighting recent progress and significant findings. It first evaluates optic flow and its critical role in the perception of self-motion, then considers nonflow visual cues that contribute to this process. Key aspects of self-motion perception are discussed, including the perception of instantaneous direction (i.e., heading) and future trajectory (i.e., path) of self-motion. It then addresses two closely linked topics: the perception of independent object motion during self-motion and the perception of heading with independent object motion. While these processes occur concurrently, research indicates that they involve separate perceptual mechanisms. In light of recent neurophysiological findings, potential neural mechanisms underlying these two processes are proposed. Finally, it discusses how studies often conflate unreal optic flow with real optic flow, raising questions for future research to better understand how the brain processes optic flow for the perception of self-motion.
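To illustrate why optic flow is such a powerful heading cue, here is a minimal sketch (not from the review) assuming pure observer translation: every flow vector points away from the focus of expansion (FOE), so the FOE, and hence the heading direction, drops out of a least-squares fit. The flow field is synthetic:

```python
# Sketch: recover heading as the focus of expansion of a radial flow field.
import numpy as np

rng = np.random.default_rng(3)
foe = np.array([0.3, -0.1])                    # true heading point (image coords)
pts = rng.uniform(-1, 1, size=(200, 2))
flow = (pts - foe) * rng.uniform(0.5, 1.5, (200, 1))  # radial flow, varying speed
flow += rng.normal(scale=0.02, size=flow.shape)       # measurement noise

# Each vector v at point p satisfies v x (p - e) = 0, which is linear in e:
#   v_y * e_x - v_x * e_y = v_y * p_x - v_x * p_y
A = np.stack([flow[:, 1], -flow[:, 0]], axis=1)
b = flow[:, 1] * pts[:, 0] - flow[:, 0] * pts[:, 1]
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)  # approximately [0.3, -0.1], the simulated heading
```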
Visual Image Reconstruction from Brain Activity via Latent Representation
Visual image reconstruction, the decoding of perceptual content from brain activity into images, has advanced significantly with the integration of deep neural networks (DNNs) and generative models. This review traces the field's evolution from early classification approaches to sophisticated reconstructions that capture detailed, subjective visual experiences, emphasizing the roles of hierarchical latent representations, compositional strategies, and modular architectures. Despite notable progress, challenges remain, such as achieving true zero-shot generalization for unseen images and accurately modeling the complex, subjective aspects of perception. We discuss the need for diverse datasets, refined evaluation metrics aligned with human perceptual judgments, and compositional representations that strengthen model robustness and generalizability. Ethical issues, including privacy, consent, and potential misuse, are underscored as critical considerations for responsible development. Visual image reconstruction offers promising insights into neural coding and enables new psychological measurements of visual experiences, with applications spanning clinical diagnostics and brain-machine interfaces.
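A minimal sketch of the latent-representation pipeline described above, with synthetic data: voxel patterns are mapped to DNN feature vectors by regularized linear regression, and the predicted features would then be handed to a generative model. Here `generator` is a hypothetical stand-in for a pretrained image generator, and the dimensions are arbitrary:

```python
# Sketch: decode DNN latent features from simulated brain data via ridge
# regression, the first stage of a two-stage reconstruction pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_trials, n_voxels, n_latent = 300, 2000, 512
latents = rng.normal(size=(n_trials, n_latent))  # DNN features of seen images
weights = rng.normal(scale=0.1, size=(n_latent, n_voxels))
voxels = latents @ weights + rng.normal(scale=0.5, size=(n_trials, n_voxels))

decoder = Ridge(alpha=100.0).fit(voxels[:250], latents[:250])
predicted = decoder.predict(voxels[250:])        # latent features from brain data
# image = generator(predicted[0])  # hypothetical: invert features to pixels
```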
Ocular Accommodation: The Autofocus Mechanism of the Human Eye
Ocular accommodation, the autofocus mechanism of the human eye, is fundamental for the achievement and maintenance of clear vision across viewing distances. Together with its close ally, vergence eye movements, this mechanism also ensures that binocular single vision is achieved at all these distances. Several dimensions of this mechanism have been investigated for well over a century. The present article summarizes this large volume of work under three themes: (a) biomechanics and neural control of the accommodative apparatus, (b) its behavioral properties, and (c) control-engineering modeling endeavors that offer a theoretical framework for gaining insights into the functioning of this mechanism. Built into these themes is a discussion on the development of accommodation, its loss with aging (presbyopia), sensory cues that aid the generation of these responses, and the technologies available for the measurement of these responses. The article also raises several unresolved questions for future research.
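A minimal sketch of the control-engineering framing mentioned above, assuming a discrete-time negative-feedback loop in which retinal defocus drives a leaky-integrator controller. The gains and time constants are illustrative, not taken from the review:

```python
# Sketch: leaky-integrator feedback control of accommodation. The residual
# steady-state error mimics the well-known lag of accommodation.
def simulate_accommodation(target_d=3.0, gain=2.0, leak=0.99,
                           steps=100, dt=0.1):
    response, state = 0.0, 0.0
    trace = []
    for _ in range(steps):
        error = target_d - response               # retinal defocus (diopters)
        state = leak * state + gain * error * dt  # leaky integration of error
        response = state                          # accommodative output (diopters)
        trace.append(response)
    return trace

trace = simulate_accommodation()
print(f"steady-state response = {trace[-1]:.2f} D for a 3.00 D target")
```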
What V1 Damage Can Teach Us About Visual Perception and Learning
In humans, occipital strokes invariably damage the primary visual cortex (V1), causing a loss of conscious vision over large portions of the visual field. This unfortunate experiment of nature affects a significant proportion of all stroke victims, yet clinically accepted vision restoration therapies are still lacking, despite a rich history of studies into the resulting visual deficit and the perceptual abilities that paradoxically survive in affected portions of the visual field. Over the last two decades, the clinical dogma that V1-damaged adult visual systems cannot recover has been challenged by accumulating evidence that visual retraining to detect or discriminate stimuli in the blind field can restore perceptual abilities. This review summarizes key developments in training approaches, some of the mechanistic insights they have revealed, and limitations and opportunities that have emerged.
Active Filtering: A Predictive Function of Recurrent Circuits of Sensory Cortex
Our brains encode many features of the sensory world into memories: We can sing along with songs we have heard before, interpret spoken and written language composed of words we have learned, and recognize faces and objects. Where are these memories stored? Each area of cerebral cortex has a huge number of local recurrent excitatory-excitatory synapses, as many as 500 million per cubic millimeter. Here I outline evidence for the theory that cortical recurrent connectivity in sensory cortex is a substrate for sensory memories. Evidence suggests that the local recurrent network encodes the structure of natural sensory input and that it does so via active filtering, transforming network inputs to boost or select those associated with natural sensation. Active filtering is a form of predictive processing, in which the cortical recurrent network selectively amplifies some input patterns and attenuates others, and a form of memory.
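The selective-amplification idea can be illustrated with a minimal linear recurrent network (a sketch, not the review's model): the steady state of r' = -r + Wr + h is r = (I - W)^(-1) h, so inputs aligned with strongly recurrent ("stored") patterns are boosted relative to unfamiliar ones. The Hebbian construction of W below is an illustrative assumption:

```python
# Sketch: a linear recurrent network amplifies the stored pattern ~10x
# (1 / (1 - 0.9)) while passing an orthogonal pattern through unamplified.
import numpy as np

rng = np.random.default_rng(2)
n = 100
stored = rng.normal(size=n)
stored /= np.linalg.norm(stored)
W = 0.9 * np.outer(stored, stored)        # recurrent weights favor one pattern

def steady_state(h):
    """Fixed point of r' = -r + W r + h."""
    return np.linalg.solve(np.eye(n) - W, h)

other = rng.normal(size=n)
other -= (other @ stored) * stored        # orthogonal to the stored pattern
other /= np.linalg.norm(other)
print(np.linalg.norm(steady_state(stored)))  # ~10: familiar pattern amplified
print(np.linalg.norm(steady_state(other)))   # ~1: unfamiliar pattern unamplified
```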
