German Translation and Validation of the Visually Induced Motion Sickness Susceptibility Questionnaire Short (VIMSSQ-short)
Motion sickness is a condition characterized by symptoms such as dizziness, nausea, and vomiting, especially during transportation or immersive visual experiences such as gaming and virtual reality (VR). Visually Induced Motion Sickness (VIMS) is of particular concern due to its increasing relevance with the rise of immersive technologies. The 6-item version of the Visually Induced Motion Sickness Susceptibility Questionnaire (VIMSSQ-short), a modified version of the established Motion Sickness Susceptibility Questionnaire, was developed to quickly assess individual susceptibility to VIMS. This study focuses on the translation of the VIMSSQ-short into German and the validation of this German-language version. The translation process included independent translations by experts and a back-translation to identify and resolve discrepancies. An online survey collected normative data from 200 participants, revealing a mean score of 5.85 (SD = 3.31) for the translated VIMSSQ-short. The results indicated significant gender differences, with females exhibiting higher susceptibility scores than males. Additionally, a significant negative correlation between age and susceptibility was observed. An experimental study involving 70 participants further confirmed these findings in terms of mean scores, gender, and age. The findings also demonstrate that higher VIMSSQ scores predict symptom severity during VR exposure (r_s = 0.58 with the Simulator Sickness Questionnaire total score). Overall, the translated VIMSSQ-short shows promise as a reliable tool for assessing VIMS susceptibility in German-speaking populations, contributing to the understanding of motion sickness in immersive environments. The identification of susceptible individuals is relevant both for practical applications (e.g., in the training of emergency forces) and in experimental settings for the randomization or screening of participants.
Investigating the Correspondence between Accuracy, Confidence, and Metacognitive Insight for Olfactory Processing
Olfaction is an important yet often overlooked sense, relevant to cognition, emotion, mental health, and ageing. Research into sensory domains such as interoception indicates discrepancies between behaviourally assessed accuracy and self-report measures, such as confidence. Initial evidence suggests a similar discrepancy between behavioural and self-report measures may also be present for olfaction. This study employed a multidimensional framework to examine the relationships among behavioural accuracy, self-reported confidence, and metacognitive insight for olfactory processing, using the Sniffin' Sticks Test across three olfactory tasks: threshold, discrimination, and identification. Participants exhibited varying metacognitive insight into their olfactory performance, with insight being significantly lower for the threshold task. In a complementary analysis, accuracy and confidence were not related in the threshold task, but were aligned during both the discrimination and identification tasks. These results suggest that individuals with the most sensitive olfactory threshold performance do not display correspondingly high levels of confidence, supporting the finding of reduced insight for this type of olfactory task specifically. Behavioural accuracy was correlated across tasks, as was self-reported confidence; metacognitive insight, however, was not associated across tasks. Together, these results highlight that metacognitive insight into olfactory ability may differ depending on the type of olfactory task. These initial findings require replication with alternative methods to account for differences in task performance, and further investigation to ascertain patterns of results in old age, where olfactory function can decline. A more comprehensive understanding of olfactory processing across accuracy, self-report, and insight measures, and across a range of different olfactory tests, may provide further insight into how specific aspects of olfactory ability relate to mental health and neurological conditions.
The Identification of Haptic Invariants in Humans and Their Applications to Robotics
When searching for an object in my pocket, I can clearly perceive the edges, shape, and hardness of the items I touch - regardless of which part of my fingers makes contact and despite large sensorimotor noise. What accounts for these perceptual invariants? Vincent Hayward introduced key elements for investigating this question, by (i) analyzing mechanical spatiotemporal invariants emerging from the constraints of skin-body-environment interactions, alongside corresponding neural invariants, and (ii) introducing the underlying concept of a plenhaptic function encompassing all haptic effects that could arise from interaction with the environment. This article presents this framework and some of the key insights into the nature of human haptics it has enabled. Specifically, it reviews evidence that haptic neural sensing relies on distributed dynamic effects produced within the skin and musculoskeletal system, describes how these effects may be encoded in the brain, and demonstrates how this approach can inform the design of versatile and effective haptic sensing for robots.
On Haptic Cues, Dimensions, and Stability in Touch
Haptic perception plays an integral role in how we interact with and make sense of the physical world. As research in this domain continues to expand across disciplines, conceptual ambiguities in terminology and theoretical frameworks persist. In this review, I address several of these ambiguities by proposing explicit accounts of notions often used implicitly in haptic research. Following a brief overview of the physiological and mechanical foundations of skin-object interactions and the somatosensory system, I introduce working definitions for terms such as haptic cues, stimuli, and perceptual dimensions, along with related assumptions and misconceptions. I argue that the distinction between distal and proximal stimuli or cues - more commonly applied in visual perceptual psychology - can serve as a helpful framework in haptic research for distinguishing physical (object-level) and mechanical (interaction-level) inputs. I then review some of the most commonly studied haptic dimensions, including stimulus magnitude, shape, material and texture, motion and time, weight, and size, along with the cues thought to inform them. This is followed by an examination of the notion of stability in haptic perception - often referred to as constancy or invariance - and its relevance for understanding how the sensory system creates consistent and behaviorally relevant percepts under variable conditions and cues. The specific phenomenon of perceptual and sensory metamers is explored as a particularly compelling example of cue integration and perceptual stability. I then address the various levels at which haptic stimuli and cues can be manipulated in research. Finally, I consider implications for haptic technologies and virtual environments as well as future directions. This conceptual framework aims to reduce terminological ambiguity and advance a clearer account of how stable haptic percepts are achieved.
An Illusion of Tactile Slip
Tactile slip is a common sensation that is interesting from the perspective of perceptual neuroscience, since it is susceptible to illusions. Here we create the illusion of slipping in a particular direction, despite net zero movement, using a simple haptic interface consisting of only two moving parts. We further show that by superimposing net-zero motions of different frequencies we can also control the illusion of movement speed. The latter extends previous observations in which differential local skin displacement has been reported to cause illusions of surface roughness.
How Scratch-and-Sniff Books Encode Smell Across Development
Scratch-and-sniff books combine olfactory cues with textual and visual narratives, thus offering multisensory reading experiences. Despite their popularity since their emergence in the 1970s, these books have received little scholarly attention, particularly regarding their cognitive and developmental relevance. This paper presents a mixed-methods analysis of 334 scratch-and-sniff books, investigating how smell content varies by target age, conceptual complexity, and linguistic framing. Quantitative analyses reveal systematic differences in smell types, labelling strategies, and descriptive complexity depending on target readership age. Books targeted at younger readers tend to feature simple, explicitly labelled smells (e.g., "apple", "chocolate"), while those targeting an older audience rely more on implicit references, humour, and/or socially transgressive smells (e.g., "fart", "smelly feet"). Qualitative readings show how olfactory cues are embedded in broader narrative structures that support inferencing, emotional resonance, and social norm acquisition. We argue that scratch-and-sniff books may be designed to use smell not only to entertain, but also to prompt crossmodal mapping, encourage inferential reasoning, and scaffold early engagements with sensory and social meaning. These findings indicate that scratch-and-sniff books systematically adapt their olfactory content and linguistic framing to developmental stages, suggesting that smell may scaffold emerging sensory, cognitive, and social capacities in early reading.
Exploring Tactile Perception: Similarities and Differences between Sighted and Blind Individuals
This study explores how visual experience and sensory compensation shape tactile perception in two groups: sighted participants (SP) and visually impaired participants (VIP). A total of 100 participants (60 SP, 40 VIP) evaluated 37 material samples using a semantic differential scale. The study also proposes a novel methodological approach: visually impaired participants evaluated tactile materials using descriptors based exclusively on visual imagery. Despite group differences, both groups shared core perceptual invariants, specifically roughness, hardness, and temperature, which are essential for consistent and reliable tactile interaction. However, VIPs demonstrated heightened sensitivity to fine textures, likely due to sensory compensation mechanisms involving tactile acuity enhancement and neural plasticity. In contrast, SPs relied predominantly on macroscopic tactile cues. Clarifying these invariants and compensatory strategies is critical for inclusive and universally accessible product design, enabling products to be precisely tailored to users' sensory abilities. These findings offer significant societal value by providing concrete guidelines for improving tactile-based accessibility and enhancing everyday tactile navigation, interaction, and overall quality of life, especially for visually impaired populations.
The Role of Multisensory Integration in Postural Stability Regulation among 5- to 7-Year-Old Children
While postural control in preschool children relies on visual, proprioceptive, and vestibular inputs, the hierarchical contribution of multisensory integration - particularly the role of tactile feedback - remains undercharacterised. Few studies have systematically mapped the developmental trajectory of sensory weighting strategies in early childhood. We randomly selected 128 preschool children from a kindergarten in Suzhou in June 2025. Sensors measured the angular velocity modulus (ω) of body centre-of-mass sway under eight sensory conditions. Paired-samples t-tests and one-way repeated-measures analysis of variance were used to analyze differences in ω across sensory combinations. The ω for vestibular integration with proprioception was smaller than for vestibular integration with vision or touch (P < 0.001). The ω for vestibular integration with vision and proprioception was smaller than for vestibular integration with proprioception and touch, or with vision and touch (P < 0.001). The ω decreased significantly (P < 0.001) when proprioception was added to the other sensory conditions, and under vestibular-visual, vestibular-tactile, and vestibular-proprioceptive integration. No significant changes (P > 0.05) occurred when vision was added to vestibular-tactile-proprioceptive integration. The ω decreased significantly (P < 0.001) under vestibular-tactile integration. Proprioceptive integration consistently reduced postural sway, with vestibular-proprioceptive coupling demonstrating the strongest stabilizing effect, followed by visual integration. Tactile input enhanced stability only in the absence of visual and proprioceptive cues, highlighting its compensatory role in sensory-deprived developmental contexts.
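For readers unfamiliar with this analysis pipeline, the following is a minimal sketch of the paired-samples t-test and one-way repeated-measures ANOVA described above, assuming Python with numpy, scipy, pandas, and statsmodels. The ω values, sample size, and condition labels are made-up placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Made-up angular-velocity moduli (omega) for 10 children under three of
# the eight sensory conditions -- illustrative values, not the study's data.
rng = np.random.default_rng(1)
omega_vest_prop = rng.normal(0.8, 0.1, 10)  # vestibular + proprioception
omega_vest_vis = rng.normal(1.0, 0.1, 10)   # vestibular + vision
omega_vest_tact = rng.normal(1.1, 0.1, 10)  # vestibular + touch

# Paired-samples t-test: the same child is measured in both conditions.
t_stat, p_val = stats.ttest_rel(omega_vest_prop, omega_vest_vis)
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way repeated-measures ANOVA across the within-child conditions.
df = pd.DataFrame({
    "child": np.repeat(np.arange(10), 3),
    "condition": np.tile(["vest+prop", "vest+vis", "vest+tact"], 10),
    "omega": np.column_stack(
        [omega_vest_prop, omega_vest_vis, omega_vest_tact]).ravel(),
})
print(AnovaRM(df, depvar="omega", subject="child",
              within=["condition"]).fit())
```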
Mid-Level Audiovisual Crossmodal Correspondences: A Narrative Review
This paper critically reviews the literature on mid-level audiovisual crossmodal correspondences, that is, those associations that emerge between structured, often dynamic stimuli in vision and audition. Unlike basic correspondences (involving perceptually simple, or unitary, features) or complex ones (involving semantically rich combinations of stimuli), mid-level correspondences occur between temporally and/or spatially patterned stimuli that are perceptually structured but are typically not inherently meaningful (e.g., melodic contours and moving shapes). Taken together, the literature published to date suggests that such correspondences often rely on structural or analogical mappings, reflecting shared spatiotemporal organization across the senses rather than the direct similarity of low-level features or emotional content. Drawing on evidence from developmental, comparative, and experimental studies, we discuss the possible mechanisms underpinning these mappings - including perceptual scaffolding, amodal dimensions, and metaphorical mediation - and outline open questions regarding their perceptual, cognitive, and neural bases. We also evaluate key methodological approaches and provide suggestions for future research aiming to understand the hierarchy of crossmodal correspondences across levels of perceived stimulus complexity. Besides advancing theoretical models, our paper offers practical insights for domains such as multimedia design and crossmodal art.
Mood Music: Studying the Impact of Background Music on Film
In this narrative historical review, we both summarize and critically evaluate the experimental literature that has emerged over the last century or so investigating the various ways in which the addition of music influences people's perception of, and response to, film. While 'sensation transference', whereby the mood of the background music carries over to influence the viewer's feeling about the film content, has often been documented, background music can also affect a viewer's visual attention, their interpretation, and their memory for whatever they happen to have seen. The use of sound in film (no matter whether its use is diegetic or non-diegetic - that is, part of the recounted story or not) is interesting inasmuch as simultaneously presented auditory and visual inputs do not necessarily have to be integrated perceptually for crossmodal effects to occur. The literature published to date highlights the multiple ways in which music affects people's perception of semantically meaningful film clips. Nevertheless, despite the emerging body of rigorous scientific research, the professional addition of music to film would still appear to be as much an art as a science. Furthermore, a number of potentially important questions remain unresolved, including the extent to which habituation, sensory overload, distraction, film type (i.e., fictional or informational), and/or context modulates the influence of background music. That said, this emerging body of empirical literature provides a number of relevant insights for those thinking more generally about sensory augmentation and multisensory experience design. Looking to the future, the principles uncovered in this work have growing relevance for emerging domains such as immersive media, virtual reality, multisensory marketing, and the design of adaptive audiovisual systems.
3-D Reconstruction of Fingertip Deformation During Contact Initiation
Dexterous manipulations rely on tactile feedback from the fingertips, which provides crucial information about contact events, object geometry, interaction forces, friction, and more. Accurately measuring skin deformations during tactile interactions can shed light on the mechanics behind such feedback. To address this, we developed a novel setup using 3-D digital image correlation (DIC) to reconstruct both the bulk deformation and the local surface skin deformation of the fingertip under natural loading conditions. Here, we studied the local spatiotemporal evolution of the skin surface during contact initiation. We showed that, as soon as contact occurs, the skin surface deforms very rapidly and exhibits high compliance at low forces (<0.05 N). As the loading and thus the contact area increase, a localized deformation front forms just ahead of the moving contact boundary. Consequently, substantial deformation extending beyond the contact interface was observed, with maximal amplitudes ranging from 5% to 10% at 5 N, close to the border of the contact. Furthermore, we found that friction influences the partial slip caused by these deformations during contact initiation, as previously suggested. Our setup provides a powerful tool for gaining new insights into the mechanics of touch and opens avenues for a deeper understanding of tactile afferent encoding.
Temporal Window of Integration XOR Temporal Window of Synchrony
One of the most extensively studied constructs in multisensory research is the temporal window of integration. Its extent has been variously estimated by measuring the temporal boundaries within which stimuli in different sensory modalities are perceived as simultaneous or elicit multisensory integration effects. However, there is ample evidence that these two approaches produce distinct psychometric outcomes, as the widths of the windows they yield differ even when estimated with equivalent designs and stimuli. In fact, these two estimates can sometimes even be negatively correlated. What is more, the perception of synchrony has been found to be neither necessary nor sufficient for the occurrence of multisensory illusions. This suggests that subjective simultaneity and integration phenomena are dissociable, undermining the conclusions of studies that use them interchangeably. Failing to disentangle the temporal windows in which they occur has led to contradictory findings and considerable confusion in basic research, a confusion that has begun to extend to other domains. In clinical studies, for example, it has affected work ranging from neuropsychological conditions (such as schizophrenia, mild cognitive impairment, dyslexia, and autism) to more general health factors (such as obesity and inflammation); in applied research, it is seen in studies using virtual reality, human-computer interfaces, and warning systems in vehicles. In this brief review, we discuss the importance of distinguishing these two constructs. We propose that, while the temporal boundaries of integration phenomena are aptly described as the temporal window of integration (TWI), the temporal boundaries of simultaneity judgements should be referred to as the temporal window of synchrony (TWS).
Older Adults with Clinically Normal Sensory and Cognitive Abilities Perceive Audiovisual Simultaneity and Temporal Order Differently than Younger Adults
It is well established that individual sensory and cognitive abilities often decline with older age; however, previous studies examining whether multisensory processes and multisensory integration also change with older age have been inconsistent. One possible reason for these inconsistencies may be differences across studies in how sensory and cognitive abilities have been characterized and controlled for in older adult participant groups. The current study examined whether multisensory (audiovisual) synchrony perception differs between younger and older adults using audiovisual simultaneity judgement (SJ) and temporal order judgement (TOJ) tasks, and explored whether performance on these audiovisual tasks was associated with unisensory (hearing, vision) and cognitive (global cognition and executive functioning) abilities within clinically normal limits. Healthy younger and older adults completed audiovisual SJ and TOJ tasks. Auditory-only and visual-only SJ tasks were also completed to independently assess temporal processing in hearing and vision. Older adults completed standardized assessments of hearing, vision, and cognition. Results showed that, compared to younger adults, older adults had wider temporal binding windows in the audiovisual SJ and TOJ tasks and larger points of subjective simultaneity in the TOJ task. No significant associations were found among the unisensory (standard baseline and unisensory SJ), cognitive, or audiovisual (SJ, TOJ) measures. These findings suggest that audiovisual integrative processes change with older age, even when sensory and cognitive abilities are within clinically normal limits.
Temporal Coherence in Crossmodal Perceptual Binding: Implications for the Design of a Real-Time Multisensory Speech Recognition Algorithm
The inputs delivered to different sensory organs provide complementary information about the environment. Many previous studies have demonstrated that presenting complementary information from another modality (e.g., vision) can improve auditory perception, especially in noisy environments. Understanding temporal asynchrony between different sensory modalities is fundamentally important for processing and delivering multisensory information in real time with minimal delay. The purpose of this study was to quantify the average limit of temporal asynchrony within which multisensory stimuli are likely to be perceptually integrated. Twenty adults participated in simultaneity judgment measurements using 100-ms stimuli in three different sensory modalities (auditory, visual, and tactile), and the test-retest reliability of their simultaneity judgments was verified across three separate tests conducted at weekly intervals. Two crossmodal temporal coherence cues were examined: the temporal binding window (TBW), denoting the time frame within which two sensory modalities are perceptually integrated, and the point of subjective simultaneity (PSS), denoting a perceptual lead of one modality over another. On average, the TBWs were 389 ms (auditory-visual, AV), 324 ms (auditory-tactile, AT), and 299 ms (visual-tactile, VT), and the PSSs were shifted 105 ms toward the visual cue, 16 ms toward the tactile cue, and 77 ms toward the visual cue for the AV, AT, and VT conditions, respectively. Across all three crossmodal pairings, the test-retest reliability averaged less than 50 ms for the TBW and 30 ms for the PSS. The findings of this study may specify a minimum time delay for real-time multisensory processing, suggesting temporal parameters for future developments in multisensory hearing assistive devices.
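To make the two temporal coherence cues concrete, here is a minimal sketch (not the study's analysis code) of how a PSS and TBW can be derived from simultaneity-judgement data: the proportion of 'simultaneous' responses is fitted with a Gaussian function of stimulus-onset asynchrony (SOA), whose peak location estimates the PSS; the TBW is then read off as the width of the fitted curve at a chosen criterion. The data points, the 0.75-of-peak criterion, and all names are illustrative assumptions, and conventions for defining the window differ across studies.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative SOAs (ms; negative = auditory lead) and proportions of
# "simultaneous" responses -- made-up data, not from the study.
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sim = np.array([0.05, 0.20, 0.55, 0.85, 0.95, 0.90, 0.70, 0.30, 0.10])

def gaussian(t, amp, pss, sigma):
    """Amplitude-scaled Gaussian psychometric function for SJ data."""
    return amp * np.exp(-0.5 * ((t - pss) / sigma) ** 2)

(amp, pss, sigma), _ = curve_fit(gaussian, soa, p_sim, p0=[1.0, 0.0, 150.0])

# One common convention: the TBW is the span of SOAs over which the fitted
# curve exceeds a fixed criterion (here 0.75 of its peak); conventions vary.
criterion = 0.75
half_width = sigma * np.sqrt(-2.0 * np.log(criterion))
print(f"PSS = {pss:.0f} ms (sign indicates which modality must lead)")
print(f"TBW = {2 * half_width:.0f} ms at {criterion:.0%} of peak")
```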
Is It Different to Touch Oneself Than to Touch Others? A Scientific Journey with Vincent Hayward
In this article, we wish to share a scientific journey with our colleague and dear friend Vincent Hayward. The question of the extent to which it is different to touch one's own skin versus someone else's brought us to many experimental studies and scientific discoveries. We present some of them here. It started with the use of a tactile device to investigate whether the reference frames specific to the hand differ depending on its position, towards or away from oneself. We then developed a technique allowing us to record skin-to-skin touch by means of an accelerometer fixed at a short distance from the touching skin. We used this technique to probe specific parameters involved in skin-to-skin touch, such as speed and pressure, as well as the differences that arise in the signal when touching our own versus someone else's skin. Finally, the same methodology was used to record social touch and convey it at a distance through the auditory channel. Through this short piece we wish to show how Vincent Hayward inspired this new field of research, opening it up to a myriad of applications.
Preceding Flavor Cues Modulate Visual Search via Color-Flavor Associations: Evidence for Top-Down Working-Memory Mechanisms
Although crossmodal interactions between vision and other modalities have been extensively studied, the reverse influence of nonvisual cues on visual processing remains underexplored. Through three experiments, we demonstrate how flavor cues bias visual search via color-flavor associations, with this modulation critically dependent on working-memory engagement. In Experiment 1, participants performed a shape-based visual search task after tasting either a predictive flavor (e.g., target consistently appeared in red after strawberry flavor) or an unpredictive flavor (e.g., target appeared in any of four colors with equal probability after pineapple flavor). Results showed that only predictive cues biased attention, whereas unpredictive cues had no effect. In Experiment 2, when participants performed a working-memory task, even unpredictive flavor cues shortened reaction times and accelerated fixations on targets appearing in the flavor-associated color. Experiment 3 further generalized these effects to ecologically valid product search scenarios. Collectively, these findings demonstrate that flavor cues modulate visual search through top-down mechanisms rather than bottom-up attentional capture, highlighting the essential role of working memory in driving this crossmodal attentional bias.
Haptic Microscopy: Tactile Perception of Small Scales
Over three PhD theses co-supervised with Vincent Hayward, we developed a technique to scale up microscale force interactions to a user's hand with near-perfect linear amplification. While this challenge could be approached through robotic teleoperation - using a precise robot manipulator with force sensing controlled via a haptic device - the required bilateral coupling between different physical scales demands extremely large homothetic gains (typically ×10 000 to ×100 000) in both displacement and force. These large gains compromise transparency, as device imperfections and stability requirements mask the faithful perception of microscale phenomena. To overcome this limitation, we developed the concept of haptic microscopy. We designed a complete microscale teleoperation system from the ground up, featuring a custom robotic manipulator and novel haptic device, implementing direct bilateral coupling with pure gains. This electromechanical system successfully amplifies microscale forces several thousand times, enabling operators to better understand the physical landscape they are manipulating. Our paper details the design process for both the microtool and haptic device, and presents experiments demonstrating users' ability to tactilely explore microscale interactions.
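To illustrate what direct bilateral coupling with pure gains amounts to (a schematic sketch under assumed values, not the control code of the system described above): the operator's displacement is divided by a homothetic gain on the way to the microtool, and the measured microscale force is multiplied by a gain on the way back to the hand. With equal displacement and force gains, stiffness is rendered unchanged, which is what preserves the shape of the manipulated physical landscape.

```python
# Schematic direct bilateral coupling with pure homothetic gains
# (illustrative values; the actual system's gains and control law may differ).
G_DISP = 1.0e4    # hand displacement is scaled down 10,000x to the microtool
G_FORCE = 1.0e4   # microscale force is scaled up 10,000x to the haptic device

def tool_position(master_position_m: float) -> float:
    """Scale the operator's displacement down to the microscale."""
    return master_position_m / G_DISP

def master_force(tool_force_n: float) -> float:
    """Scale the measured microscale force up to the operator's hand."""
    return tool_force_n * G_FORCE

# With equal gains, a microscale stiffness k is felt at the hand as
# k * (G_FORCE / G_DISP) = k, so the rendered landscape keeps its shape.
k_micro = 50.0     # N/m, hypothetical stiffness of a microscale sample
x_master = 0.01    # m, a 1-cm hand movement
f_tool = k_micro * tool_position(x_master)
print(f"Force rendered to the hand: {master_force(f_tool):.2f} N")
```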
Pitch-Color Associations Are Context-Dependent and Driven by Lightness
Pitch-color associations have been widely explored in the context of crossmodal correspondences. Previous research indicates that pitch height maps onto lightness, and that high pitches are often associated with yellow and low pitches with blue. However, whether these associations are absolute or relative remains unclear. This study investigated the effect of context on pitch-color associations by presenting seven pitch stimuli (C4-B4) in randomized, ascending, and descending orders. A large sample (N = 6626) was asked to select colors for each pitch using a color wheel. Results revealed that pitch height was linearly mapped onto lightness, with higher pitches associated with lighter colors. Notably, this mapping was influenced by context, as ascending sequences produced lighter colors and descending sequences darker colors compared to randomized presentations. Furthermore, lightness associations developed progressively, going from binary to linear as trials progressed. Saturation, on the other hand, did not follow a linear pattern but peaked at mid-range pitches and was not influenced by context. Additionally, compared to randomized presentation, color associations showed a downward shift (i.e., reported for lower pitches) in the ascending presentation, and an upward shift (i.e., reported for higher pitches) in the descending presentation. These findings suggest that pitch-color associations are relative rather than absolute, possibly reflecting the general population's limited ability to categorize pitches, with lightness emerging as the primary factor in color choices. This study contributes to the understanding of associations across sensory modalities, which may be a promising avenue for investigating hidden cognitive processes such as sensory illusions.
Tingle-Eliciting Audiovisual Properties of Autonomous Sensory Meridian Response (ASMR) Videos
In autonomous sensory meridian response (ASMR), certain audiovisual stimuli can evoke a range of spontaneous sensations, in particular a pleasant tingling that often originates across the scalp and spreads down the spine toward the shoulders ('tingles'). Major drivers of tingle elicitation in ASMR stimuli are often 'crisp' sounds created by whispering or manipulating an object, as well as social-attentional features such as implied direct attention to the viewer. However, the relationships between specific stimulus properties and ASMR-typical subjective responses remain to be fully mapped. In two studies, we therefore sought to isolate specific tingle-eliciting stimulus features by comparing tingle reports for ASMR video clips between ASMR experiencers and control participants. The first study compared intact versus desynchronized video clips to probe whether the mere presence of audiovisual features would be sufficient to elicit tingles, or whether these features needed to be presented in a coherent sequence. The second study compared clips with filtered and unfiltered audio, demonstrating that 'crisp' sounds had greater tingle efficacy than 'blunted' sounds. Overall, the presence of stimulus features in both synchronized and desynchronized clips was effective in eliciting self-reported subjective responses (tingle frequency), while intact clips involving object manipulation and speech sounds were most effective. An exploratory analysis suggested that viewer-oriented implied attention also influenced tingle ratings. These findings further pinpoint the importance of object and speech sounds in eliciting ASMR tingle responses, supporting the proposition that audiovisual stimulus features implying proximity to the viewer play a key role.
Introduction to the Special Issue on The Merging of the Senses
The 1993 book The Merging of the Senses has proven to be a profoundly impactful text that has shaped research programs studying the interaction between the senses for the last three decades. The book combines skillful and approachable narration with engaging illustrations and was received with rave reviews on publication as one of the first comprehensive treatments of the subject. It captures the impressive breadth of domains in which multisensory integration affects the daily life of all animals and promotes a systematic approach to understanding its underlying operation by interrogating the nervous system at multiple levels, from the peripheral organ, through convergence, integration, and decision-making, to effected behavior. Thirty years later, the multiple generations of scientists inspired by the text have built an amazing structure on this foundation, through refinements in theory and experimental technique, the investigation of new domains and species, an understanding of the origins, maturation, and plasticity of the process, the translation of biological principles to artificial systems, and the discovery of new applications of multisensory research in clinical and rehabilitative domains.
CART: The Comprehensive Analysis of Reaction Times - GUI for Multisensory Processes and Race Models
Multisensory integration (MSI) is a core neurobehavioral operation that enhances our ability to perceive, decide, and act by combining information from different sensory modalities. This integrative capability is essential for efficiently navigating complex environments and responding to their multisensory nature. A powerful behavioral benefit of MSI is the speeding of responses. To evaluate this speeding, traditional MSI research often relies on so-called race models, which predict reaction times (RTs) based on the assumption that information from the different sensory modalities is initially processed independently. When observed RTs are faster than those predicted by these models, this indicates true convergence and integration of multisensory information prior to the initiation of the motor response. Despite the strong applicability of race models in MSI research, the analysis of multisensory RT data often poses challenges for researchers, particularly in managing, interpreting, and modeling large datasets or collections of datasets. To surmount these challenges, we developed a user-friendly graphical user interface (GUI) packaged into a freely available software application that is compatible with both Windows and macOS and requires no programming expertise. This tool simplifies the processes of data loading, filtering, and statistical analysis. It allows the calculation and visualization of RTs across different sensory modalities, the performance of robust statistical tests, and the testing of race model violations. By integrating these capabilities into a single platform, the CART-GUI facilitates MSI analyses and makes them accessible to a wider range of users, from novice researchers to experts in the field. The GUI's user-friendly design and advanced analytical features will allow for valuable insights into the mechanisms underlying MSI and contribute to the advancement of research in this domain.
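As a generic illustration of the race-model test that the CART-GUI automates (a minimal sketch under made-up data, not the tool's implementation): Miller's race-model inequality bounds the multisensory RT distribution by the sum of the unisensory ones, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t); wherever the observed multisensory distribution exceeds this bound, the speed-up cannot be explained by independent parallel processing.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a latency grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

# Illustrative RT samples (ms) -- made-up data, not from the paper.
rng = np.random.default_rng(0)
rt_a = rng.gamma(shape=20, scale=15, size=500)   # auditory-only trials
rt_v = rng.gamma(shape=22, scale=15, size=500)   # visual-only trials
rt_av = rng.gamma(shape=16, scale=15, size=500)  # audiovisual trials

t = np.linspace(100, 800, 200)
# Miller's inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
violation = ecdf(rt_av, t) - bound

if np.any(violation > 0):
    t_viol = t[violation > 0]
    print(f"Race-model violation from {t_viol.min():.0f} to "
          f"{t_viol.max():.0f} ms: evidence for multisensory integration")
else:
    print("No violation: the speed-up is explainable by an independent race")
```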
