PSYCHONOMIC BULLETIN & REVIEW

Understanding Navon: A detailed structural and conceptual analysis of a basic local-global task
Schweigkofler F, Stuit S, Wagemans J, Nijboer T, van Maanen L and van der Stigchel S
When perceiving visual information, either the local parts or more commonly the global whole can dominate on a perceptual-cognitive level (local/global bias). Using hierarchical figures consisting of smaller local elements forming a larger global shape, researchers have tried to understand how this local-global bias emerges. However, despite extensive research, local-global biases remain an elusive concept (possibly partially due to inadequacies of the task design itself), with implicit assumptions untested and different effect metrics being in use. To provide conceptual clarity, this study presents a detailed description of metrics and effects (content validity) and discusses four major assumptions in the current literature. Where possible, these assumptions are tested on a basic local-global task (the so-called Navon task) across 18 datasets (> 8,000 participants). The metric reliability is tested on nine datasets (> 7,000 participants). Our conceptual reasoning attempts to disentangle the complexity of interference effects in hierarchical figure tasks and discusses a potential facilitation effect. Our empirical results most importantly indicate: (1) the independence of the biased precedence and biased interference metrics, underlining that local-global biases do not reflect a unitary construct, (2) the independence of local-to-global and global-to-local interference, and (3) a low split-half reliability for interference metrics. Based on these insights we argue for further re-evaluating the concept (and theory) of local-global biases, prioritizing research on a thorough conceptual (and theoretical) understanding of local-global tasks, a more consistent use of bias-related metrics in future studies, and a possible need for a more mechanistic approach to facilitate effective future research on hierarchical figure tasks.
Does your surname undermine your research impact?
Cai YL, Wong KFE and Kwong JYY
Citation frequency is widely recognized as a crucial metric for assessing academic impact. Previous studies analyzing data from citation databases have observed a surname order bias - a phenomenon where the alphabetical ordering of researchers' surnames negatively impacts their citation counts. However, the underlying mechanisms driving this bias, the causality behind it, and its implications for in-text citation practices remain poorly understood. Therefore, the present research aims to address these gaps through two preregistered studies. Study 1 replicates and extends the work of Stevens and Duque (Psychonomic Bulletin & Review, 26, 1020-1026, 2019), using a larger sample of 446,755 articles and controlling for surname initial frequency and publication year. Study 2 is an experiment with 307 valid responses from academics holding doctoral degrees, manipulating both citation systems and surname alphabetical order. Consistent and robust findings emerged across both studies: articles authored by individuals with surnames appearing earlier in the alphabet were more likely to be cited. This effect was especially pronounced in the context of alphabetical citation systems, compared with numerical citation systems. The current research provides a testable, reliable explanation for the surname order bias and establishes a causal link between surname alphabetical order and citation frequency. Implications for theory and academic practice are discussed.
Name-face congruence: Mechanisms and influences on social perception
Yan X, Pat N, Jiang Z, Wu Q, Liu Y and Halberstadt J
Previous studies have revealed that people judge the suitability of a person's name based in part on the congruence between the mouth shape required to pronounce the name and the roundness of the named person's face. To investigate the neuro-cognitive mechanisms and downstream implications of this effect for social judgments, we recorded participants' event-related brain potentials (Study 1) and personality judgments (Study 2) associated with name-face pairs. The results revealed that a face incongruent with its name evoked more negative N300 and more positive P3b amplitudes (but not more positive P1 and more negative N170), relative to a congruent face. This suggests that names and faces are integrated at the late stage of cognitive processing (i.e., N300 and P3b time windows), likely through connotative meaning, rather than at the earlier stage (i.e., P1 and N170 time windows) which may reflect perceptual similarity. In addition, participants judged congruently named individuals as more likeable, extroverted, and relatable, implying tangible social benefits for those whose names fit their appearance.
Human turn-taking development: A multi-faceted review of turn-taking comprehension and production in the first years of life
Cosper SH and Pika S
Human communication builds on a highly cooperative and interactional infrastructure - conversational turn-taking. Turn-taking is characterized by reciprocal, alternating exchanges between two or more interactants, avoidance of overlap, and relatively short response times. Although the behavioral principles governing turn-taking in spoken interactions of human adults have been investigated for decades, relatively little is known about the acquisition of conversational turn-taking skills and the developmental trajectories of turn-taking comprehension and production. The aim of the present review is to provide a comprehensive overview of turn-taking development, enabling the extrapolation of developmental milestones and investigations across species and taxa. It thus aims to serve as a crucial guide to our current understanding of turn-taking in childhood and to instigate a better understanding of turn-taking phylogeny, its evolutionary roots, as well as systematic, quantitative applications across and between species, thereby possibly bridging the existing gap between linguistic and nonlinguistic species.
Switching the motor response weakens confidence serial dependence
Bocheva M and Rahnev D
"Confidence leak" (i.e., confidence serial dependence) is a phenomenon where confidence from a previous trial predicts confidence in a current trial independent of current choice or accuracy. Confidence leak has been shown to robustly occur across various cognitive domains and tasks. However, it remains unclear what factors, if any, modulate the strength of the confidence serial dependence. Here we investigate whether switching the motor response in a perceptual decision-making task influences the strength of the confidence leak effect. Subjects indicated the orientation of a Gabor patch using their left or right hand, with the response hand being randomly cued on each trial. We found that switching the response substantially weakened the confidence leak effect. We further replicated this finding in a second experiment in which left-hand responses were given using a keyboard and right-hand responses were given with a mouse. In both experiments, we also found that confidence leak was weaker whenever the left hand was used in the previous trial, suggesting that lack of motor fluency reduces the strength of confidence serial dependence. These results demonstrate that switching the motor response weakens serial dependencies and imply that the action required to make a choice can impact one's metacognitive evaluations.
Sentence processing by humans and machines: Large language models as a tool to better understand human reading
Kaye NG and Gordon PC
Online measures of reading have been studied with the goal of understanding how humans process language incrementally as they progress through a text. A focus of this research has been on pinpointing how the context of a word influences its processing. Quantitatively measuring the effects of context has proven difficult, but with advances in artificial intelligence, large language models (LLMs) have become increasingly capable of generating humanlike language, drawing solely on information about the probabilistic relationships of units of language (e.g., words) occurring together. LLMs can be used to estimate the probability of any word in the model's vocabulary occurring as the next word in a given context. These next-word probabilities can be used in the calculation of information-theoretic metrics, such as entropy and surprisal, which can be assessed as measures of word-by-word processing load. This is done by analyzing whether entropy and surprisal derived from language models predict variance in online measures of human reading comprehension (e.g., eye-movement, self-paced reading, and ERP data). The present review synthesizes empirical findings on this topic and evaluates their methodological and theoretical implications.
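As a concrete illustration of these quantities, surprisal is the negative log probability of the observed word given its context, and entropy is the expected surprisal over the model's full next-word distribution. A minimal sketch, assuming the Hugging Face transformers library and GPT-2 as one possible causal language model (the review does not prescribe a specific model or toolkit):

```python
# Minimal sketch: per-token surprisal and next-word entropy from GPT-2.
# Assumes the Hugging Face transformers library; any causal LM would work similarly.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The cat sat on the mat"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits          # shape: (1, seq_len, vocab_size)
log_probs = torch.log_softmax(logits, dim=-1)

# Logits at position i predict token i+1, so surprisal of token i+1 is
# -log2 p(token i+1 | tokens 1..i); entropy is taken over the whole distribution.
for i in range(ids.shape[1] - 1):
    next_id = ids[0, i + 1]
    surprisal = -log_probs[0, i, next_id] / math.log(2)          # in bits
    p = log_probs[0, i].exp()
    entropy = -(p * log_probs[0, i]).sum() / math.log(2)         # in bits
    print(tokenizer.decode(next_id), round(surprisal.item(), 2), round(entropy.item(), 2))
```

In reading studies, per-word values like these are typically entered as predictors of fixation durations, self-paced reading times, or ERP amplitudes.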
Visual cocktail party effect? Self-referencing facilitates object recognition in visual crowding
Gong M, Huangfu B, Yang J, Chen S and Gao J
Visual crowding impedes the recognition of object identity. However, some studies have demonstrated that stimuli with significant meaning (e.g., threats) are processed preferentially under crowded conditions. Here we examined whether self-associated objects, which are personally significant, similarly facilitate object recognition in crowding. To this end, objects were rendered either self-related or other-related by an imagined ownership procedure. Subsequently, a memory test was conducted, along with an assessment of object recognition under crowding conditions. The results replicated the self-reference effect (SRE) in memory. More crucially, objects only transiently associated with self showed reduced crowding interference compared to other-related objects. This finding demonstrates that self-referencing enhances the recognition of objects in crowded scenarios. It supports the notion that personally significant stimuli are prioritized in processing under crowding, a phenomenon that can be conceptualized as a visual cocktail party effect.
Effects of emergent and planned interpersonal synchronization on individual spatiotemporal variability
Uccelli S, Bacchini B, Paulesu E and Sacheli LM
In real-life social exchanges, people synchronize movements via visuomotor information. How synchronizing voluntarily ('planned' synchronization) or involuntarily ('emergent' synchronization) differentially affects the variability of spatiotemporal movement parameters remains unclear. Here, we explored changes in the kinematics of pairs of participants performing a finger-tapping task in four fully within-subject experimental conditions. In solo-pre and solo-post conditions, participants listened to a target tempo and individually reproduced it (unpaced) while blindfolded. In two social conditions, both participants had full vision of the partner's hand and concomitantly reproduced the target tempo while voluntarily synchronizing together (Synch condition) or resisting synchronization with the partner (Resist condition). Results revealed that participants co-adjusted taps and correlated finger movement peaks spatiotemporally in the social conditions and, crucially, individual variability decreased compared with the solo-pre condition. Moreover, the Synch condition revealed larger correlations and lower variability than the Resist condition. Last, the partners' parameters no longer correlated in the solo-post condition, and variability was similar to that of the solo-pre condition. This work unveils the importance of minimizing spatiotemporal variability for facilitating perception-action coupling during both emergent and planned interpersonal synchronization.
Individuals who are 'super recognisers' show superior performance on independent measures of face perception, face memory, and face matching
Stantić M, Pounder Z, Bate S, Catmur C and Bird G
Individuals who are superior at face recognition are described as 'super recognisers' (SRs). On standard face recognition tasks SRs outperform individuals who have typical face recognition ability. However, high accuracy on face recognition tasks may be driven by superior ability in one or more of several component processes including face perception, face matching, and face memory. The present study utilised the Oxford Face Matching Test (OFMT) and a novel analysis strategy to derive independent measures of face perception, face matching, and face memory. Thirty-two SRs and the same number of matched controls with typical face recognition ability undertook three face processing tasks: the OFMT, the Glasgow Face Matching Test, and the Cambridge Face Memory Test. At the group level, SRs were more accurate than controls across all tasks, and they reported greater face recognition ability. Of most importance, however, was the finding that SRs exhibited superior face perception, face matching, and face memory. Collectively, these results suggest that SRs have superior ability across multiple independent face-related processes.
Is rate-dependent perception affected by linguistic information about the intended syllable rate?
Severijnen GGA, Bosker HR and McQueen JM
Speech is highly variable in rate, challenging the perception of sound contrasts that are dependent on duration. Listeners deal with such variability by perceiving incoming speech relative to the rate in the surrounding context. For instance, the same ambiguous vowel is more likely to be perceived as being long when embedded in a fast sentence, but as short when embedded in a slow sentence. However, it is still debated to what extent domain-general and domain-specific mechanisms (i.e., language- or speech-specific mechanisms) contribute to rate-dependent perception. Here we examined the role of domain-specific mechanisms in an implicit rate-normalization task in which we manipulated linguistic knowledge about how many syllables words have. Dutch participants were presented with lists of Dutch words that were acoustically ambiguous with regard to having one or two syllables (e.g., /kᵊ'lɔm/ can be monosyllabic klom, /klɔm/, or bisyllabic kolom, /ko.'lɔm/). While being presented with these ambiguous word lists, they saw monosyllabic or bisyllabic transcriptions of the lists on the screen. We predicted that the same acoustic stimulus would be perceived as faster (more syllables per second) when combined with bisyllabic orthography compared to monosyllabic orthography. In turn, this would lead to downstream influences on vowel length perception in target words embedded within the word lists (rate-dependent perception of Dutch /ɑ/ vs. /aː/). Despite evidence of successful orthographic disambiguation of the ambiguous word lists, we did not find evidence that linguistic knowledge influenced participants' rate-dependent perception. Our results are best accounted for by a domain-general account of rate-dependent perception.
Animacy effect in the mnemonic advantage: A three-level meta-analysis
Cheng S, Zhao X, Yang X and Liu Y
Over the last decade, the animacy effect has emerged as a significant advantage in the processing of adaptive memory, illustrating that individuals tend to have superior memory for animate over inanimate stimuli. Despite this, a systematic analysis of the effect remains absent. A quantitative review is needed to assess the stability of the animacy effect, and the reverse animacy effect observed in some studies also requires investigation. Employing a three-level meta-analytic approach, this study provides a comprehensive evaluation and synthesis of the animacy effect's influence on memory processing. Through the integration of 45 primary studies, we conclusively demonstrate the consistent presence of the animacy effect within the context of enhanced memory processing, characterized by a large effect size (η= .19, 95% CI [.38, .55]). Our findings indicate that the impact of the animacy effect is robust across variations in study year and geographical location, confirming its stability across diverse cultural and temporal frameworks. Importantly, our analysis revealed that the animacy effect was moderated by the type of memory task. Specifically, the animacy effect was stronger in free recall compared to recognition and cued recall, with the latter two showing a less consistent animacy effect. This insight underscores the necessity of considering the memory task type in research on the animacy effect, particularly in experimental designs investigating the underlying mechanisms of its influence. In sum, although the magnitude of the animacy effect may vary across memory tasks, it represents a stable memory advantage shaped by evolutionary pressures. Continued research is needed to uncover its cognitive underpinnings and to translate these findings into practical domains such as education and marketing.
Category learning difficulties in ADHD across modalities and multiple learning systems
Roark CL, Ben-Anat Y and Gabay Y
Attention-deficit/hyperactivity disorder (ADHD) has been associated with suboptimal functioning of both the prefrontal cortex and the striatum. These abnormalities may impede the acquisition of perceptual categories, important for fundamental abilities such as object recognition and speech perception. While prior research has shown that children with ADHD perform comparably to neurotypical peers in visual category learning despite using suboptimal strategies, much remains unknown about how adults with ADHD acquire perceptual categories, where more mature functions may shape learning processes differently than in childhood. To address this gap, we investigated auditory and visual category learning in adults with ADHD compared with neurotypical controls. Specifically, we focused on two types of category structures: rule-based categories, which are believed to rely on hypothesis-testing mechanisms mediated by the prefrontal cortex, and information-integration categories, thought to depend on reinforcement learning processes governed by the striatum. Our findings revealed consistent impairments in both rule-based and information-integration category learning among adults with ADHD across sensory modalities. Furthermore, category learning performance was negatively associated with ADHD symptom severity. Computational modeling analyses showed that individuals with ADHD were slower to adopt optimal learning strategies than their neurotypical counterparts, regardless of the category type or sensory modality. These findings point to disruptions in multiple learning systems in young adults with ADHD that extend across sensory modalities and arise from impairments in domain-general mechanisms.
From shape to number: Shape-from-dots homogeneity boosts groupitizing enumeration
Adriano A and Velde MV
Enumerating a large set of objects (e.g., more than four items) can be accomplished more quickly and/or accurately when the objects are grouped into clusters based on Gestalt principles such as proximity and color similarity, a phenomenon known as "groupitizing." However, whether other visuospatial features can similarly influence this mechanism remains unclear. This study investigated the impact of a novel feature: shape-from-dots homogeneity. Participants performed a simple enumeration task involving dot patterns, ranging from four to 20 items, spatially arranged in small clusters. In Experiment 1, the dots within the clusters were placed to form either homogeneous patterns of regular quadrilaterals (e.g., squares) or heterogeneous patterns of irregular, randomly shaped quadrilaterals. To test whether the effect was simply due to symmetry/canonicity, in Experiment 2, the dots within the clusters were placed to form either homogeneous patterns of regular quadrilaterals (e.g., squares) or homogeneous patterns of irregular, randomly shaped quadrilaterals. The results revealed that enumeration reaction times were significantly faster when clusters formed homogeneous shapes compared to heterogeneous ones (Experiment 1), while no difference was found when both patterns contained homogeneous arrays independently of the shapes (Experiment 2), ruling out that the effect was merely driven by spatial symmetry or canonicity. These findings indicate a close interaction between general shape processing and numerosity perception in the groupitizing mechanism. This suggests that shape-from-dots homogeneity can facilitate numerosity processing akin to other Gestalt principles, likely promoting a multiplication mechanism.
The effects of contextual diversity on lexical processing: A scoping review
Norman R, Taylor JSH and Rodd JM
Research into the effects of contextual diversity on lexical processing has flourished in the past 20 years, encompassing different tasks, populations, and languages, and informing influential theories of word learning. This review provides a comprehensive synthesis of the field. Eighty-six articles (145 experiments) composed of three distinct study types (behavioural [N = 111], computational modelling [N = 20], and corpus validations [N = 14]) met preregistered inclusion criteria. Across experiments, the terminology used for different diversity metrics has been inconsistently applied. We classify all metrics into four categories (count-based, computational, composite, unspecified) to standardise comparisons. Four key findings emerge from this review: Experiments that assessed the impact of diversity on word-form processing (N = 85) show a consistent high-diversity advantage, possibly because high-diversity words are more likely to be 'needed' in the future. Effects of diversity on word-meaning processing (N = 13) were more mixed, showing both low- and high-diversity benefits. We attribute these inconsistencies to varying task demands. Specifically, we conclude that selecting highly precise semantic information can be challenging for words that occur in variable contexts. Computational modelling studies indicate that diversity metrics that quantify the distinctiveness of contexts in which words occur better predict behaviour than simple context counts. Corpus validations show that diversity effects are consistent across languages. This review confirms that diversity in linguistic experience is a key organizational principle of the lexicon but indicates that current theories lack specificity when describing the underlying mechanisms. We make specific recommendations for future research within a structured research cycle.
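To make the metric categories concrete: the simplest count-based diversity measure is the number of distinct documents (contexts) in which a word occurs, whereas computational metrics additionally weight how distinct those contexts are. A toy sketch of the count-based case (the corpus and words are illustrative only, not materials from the reviewed studies):

```python
# Minimal sketch of a count-based contextual diversity metric: the number of
# distinct documents (contexts) in which a word occurs, computed on a toy corpus.
from collections import defaultdict

toy_corpus = [                        # each string stands in for one document/context
    "the cat sat on the mat",
    "the dog chased the cat",
    "a quantum of solace",
]

doc_counts = defaultdict(int)
for doc in toy_corpus:
    for word in set(doc.split()):     # count each word at most once per document
        doc_counts[word] += 1

print(doc_counts["the"])       # 2: appears in two documents (higher diversity)
print(doc_counts["quantum"])   # 1: appears in one document (lower diversity)
```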
The influence of perceptual load on behavioral interference of simultaneous positive and negative emotional distractors
Thyagaraj Y and Padmala S
Prioritized processing of emotional stimuli could be detrimental in contexts where the affective information is not goal relevant. To examine whether the behavioral interference of task-irrelevant emotional stimuli depends on the primary task demands, several past studies varied the perceptual load, thus manipulating the resources available for processing emotional distractors. In particular, some previous work reported valence asymmetry in the effects of high (vs. low) perceptual load where negative distraction was reduced but positive distraction was resistant to the perceptual load manipulation. However, previous studies have investigated the impact of perceptual load on attentional capture by positive and negative distractors in isolation. Given real-life scenarios where positive and negative emotional stimuli co-occur, it becomes essential to understand how perceptual load modulates the behavioral interference of simultaneous positive and negative distractors. To address this critical gap, we conducted three behavioral experiments using a letter-search task involving low and high perceptual load, during which simultaneous positive-negative, positive-positive, negative-negative, and neutral-neutral emotional scene distractors were presented. We tested several competing hypotheses regarding how high (relative to low) perceptual load could impact the attentional capture of simultaneous emotional distractors. Across three experiments, in the reaction time data, we consistently found evidence that during high (relative to low) perceptual load, the interference effects of simultaneous emotional distractors were diminished regardless of their valence combination. Our findings do not support the proposed special status of positive distractors under high perceptual load and instead indicate that the effects of perceptual load on simultaneous emotional distractors are valence insensitive.
Statistical learning prioritizes abstract over item-specific representations
Zhou M and Tong SX
Statistical learning optimizes limited working memory by abstracting probabilistic associations among specific items. However, the cognitive mechanisms responsible for the working memory representation of abstract and item-specific information remain unclear. This study developed a learning-memory representation paradigm and tested three participant groups across three conditions: control (Experiment 1), item-specific encoding (Experiment 2), and abstract encoding (Experiment 3). All groups were first shown picture-artificial-character pairs that contained abstract semantic categories at high (100%), moderate (66.7%), and low (33.3%) probability levels and item-specific information (16.7%). Participants then completed an online visual search task that simultaneously assessed statistical learning and memory representation by examining how abstract or item-specific distractors influenced their speed for searching artificial characters. In the control condition, participants spent more time searching abstract than item-specific distractors across all probability levels, indicating abstract prioritization. In the item-specific condition, abstract prioritization was absent. In the abstract condition, enhanced prioritization of abstract information was observed for moderate and low, but not high, probability items. These findings suggest that statistical learning is central to the abstraction process, with input probabilities and encoding strategies jointly shaping the formation of abstract and item-specific representations. This process depends on a flexible working memory system that dynamically adjusts prioritization, particularly when inputs are uncertain.
Attending to attention: Reverse correlation reveals subtle cues to attentiveness in others' faces
Colombatto C and Scholl BJ
Some of the most foundational properties we can perceive from others' faces involve cognitive states, such as how attentive (vs. distracted) they seem - an important ability, since the likelihood of someone in our local environment affecting our fitness is enhanced when they are attentive. But how can we tell whether another person is attentive? This study reveals that the way in which we perceive attentiveness in others' faces is straightforward in some ways, but deeply counterintuitive in others. We explored this using reverse correlation, a data-driven approach that can reveal the nature of internal representations without prior assumptions. In two online studies (n = 200 each), observers viewed pairs of faces created by adding randomly generated noise (across many spatial frequencies) to a constant base face, and had to select which appeared to be most attentive. Analyses of automatically extracted facial landmarks from the resulting "classification images" revealed the determinants of perceived attentiveness. Some cues were straightforward: attentive faces indeed had more direct eye gaze, and larger pupils. But other novel and equally robust cues were subtle and surprising; for example, attentive faces reliably had darker (as if more flared, or retroussé) nostrils. These powerful and subtle effects of facial cues on impressions of attentiveness highlight the importance of attention not just as a perceptual process, but as an object of perception itself.
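In reverse correlation, the internal representation is typically estimated by averaging the noise fields of the chosen stimuli and subtracting the average of the rejected ones, yielding a "classification image." A minimal sketch of that averaging step with simulated noise and choices (illustrative only; the study's base-face construction and facial-landmark analyses are not reproduced here):

```python
# Minimal sketch of the reverse-correlation averaging step with simulated data.
# Real studies add the noise fields to a constant base face and collect human
# choices; here the choices are random, purely to illustrate the computation.
import numpy as np

rng = np.random.default_rng(0)
n_trials, size = 200, 64                              # illustrative values
noise = rng.normal(size=(n_trials, 2, size, size))    # two noise fields per trial
chosen = rng.integers(0, 2, n_trials)                 # index of the face picked as "more attentive"

# Classification image: mean chosen noise minus mean rejected noise.
chosen_noise = noise[np.arange(n_trials), chosen]
rejected_noise = noise[np.arange(n_trials), 1 - chosen]
classification_image = chosen_noise.mean(axis=0) - rejected_noise.mean(axis=0)
print(classification_image.shape)   # (64, 64): the noise pattern driving the judgments
```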
Bits of confidence: Metacognition as uncertainty reduction
Fitousi D
How do people know when they are right? Confidence judgments - the ability to assess the correctness of one's own decisions - are a key aspect of human metacognition. This self-evaluative act plays a central role in learning, memory, consciousness, and group decision-making. In this paper, I reframe metacognition as a structured exchange of information between stimulus, decision-maker (the actor), and confidence judge (the rater), akin to a multi-agent communication system. Within this framework, the actor aims to resolve stimulus uncertainty, while the rater seeks to infer the accuracy of the actor's response. Applying techniques from information theory, I develop three novel measures of metacognitive efficiency. These indices are derived from entropy and divergence principles and quantify how effectively confidence judgments transmit information about both external stimuli and internal decisions. Simulations show that these measures possess several advantages over traditional signal detection theory metrics such as meta-d′ and the M-ratio, including more interpretable scaling, robustness to performance imbalances, and sensitivity to structural constraints. By formalizing metacognitive sensitivity as an information-processing problem, this framework offers a unified, theoretically grounded approach to studying confidence and sheds light on the sources of metacognitive inefficiency across individuals and contexts.
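For reference, the entropy and divergence principles invoked here are the standard Shannon quantities (these are the generic building blocks only; the three meta-measures themselves are defined in the article):

```latex
% Shannon entropy of a discrete variable X, and the Kullback-Leibler divergence
% between distributions P and Q: the generic information-theoretic building blocks.
H(X) = -\sum_{x} p(x) \log_2 p(x), \qquad
D_{\mathrm{KL}}(P \parallel Q) = \sum_{x} p(x) \log_2 \frac{p(x)}{q(x)}
```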
Breaking down prefixed words is unaffected by morphological boundary opacity: Evidence from behavioral and MEG experiments
Cayado DKT, Wray S, Lai MC, Chong AJ and Stockall L
Previous experiments support an initial stage of early, form-based visual word recognition, where morphologically complex words like adorable are segmented into morphemes {adore}+{-able}, despite an orthographic change in the stem. However, most experiments have focused on words with clear boundaries between the affix and stem, making decomposition more straightforward. We investigate whether obscured boundaries between the prefix and stem affect morphological decomposition. Using Tagalog as a test case, we compare the processing of prefixed words [1] without morphophonological changes (e.g., {mang}+{hila} becomes manghila "to pull"), [2] with nasal assimilation obscuring prefix identity (e.g., {mang}+{bulag} becomes mambulag "to blind"), and [3] with nasal substitution obscuring both prefix and stem identities and their morphological boundary at orthographic and phonological levels (e.g., {mang}+{tulak} becomes manulak "to push"). Crucially, these morphophonological changes exhibit variability: nasal substitution is more likely than assimilation for voiceless-initial stems, while the opposite holds for voiced-initial stems. Experiment 1 presents behavioral masked priming data showing that prefixed words are decomposed into morphemes, even with obscured {prefix}+{stem} boundaries. Experiment 2 further supports these results with magnetoencephalography data showing that neural activity is modulated by stem:whole-word transition probability, which indicates morphological decomposition. Findings from both experiments unambiguously show that early, form-based decomposition is robust and flexible enough to recognize morphemes, despite morphophonological changes obscuring the {prefix}+{stem} boundary.
Benefits from sketching for improving comprehension monitoring from illustrated texts
Wiley J, George T and Griffin TD
Although illustrations are frequently used with instructional expository text, it has been suggested that they can lead to illusions of understanding (beliefs that we understand better than we actually do). In this study using geoscience texts, relative metacomprehension accuracy (the ability to monitor one's own understanding across a set of topics) was found to be particularly poor when only some topics were illustrated. However, when readers were prompted to generate sketches while reading, relative accuracy was improved, and was more similar across illustration conditions. Consistent with the situation-model approach to metacognition, sketching activities may help readers to generate valid and diagnostic cues on which to base their judgments of understanding and avoid reliance on heuristic cues or superficial processing.
Perceiving exertion in others: From interoception to exteroception
Liu M, Dudarev V, de Brouwer AJ and Enns JT
Physical activities are commonly associated with exertion. Yet most of the research to date has focused on the first-person, interoceptive questions of "What are the internal signals associated with exertion?" and "How well do subjective reports correlate with objective measures of energy expenditure?" Here we aim to broaden the scope of this research by asking "How closely are observations of exertion in other people correlated with first-person reports of exertion and objective measures of energy expenditure?" and "What factors influence the accuracy of exertion perception in others?" Although exertion often occurs in the company of other people, there is little research on these questions. This is somewhat surprising, since the accurate perception of other people's exertion is often critical, whether that be to cooperate with them, to compete with them, or to encourage them to go on. In this review, we first briefly review the extensive literature on perceived exertion in oneself before turning to our central question of the perception of exertion in others. The small literature we review in the second section offers some clues about the potential exteroceptive signals available from individuals undergoing exertion. A third section in the review considers potential behavioral and neural mechanisms underlying the social perception of exertion, by considering the broader literature on action perception and social perception. In a final section, we offer suggestions for future research in this area, with the goal of including the perception of exertion as but one of the many facets of social perception more broadly.
Within-subject confidence intervals for pairwise differences in scatter plots
Schütz AC and Gegenfurtner KR
Scatter plots are a standard tool to illustrate the covariation of bivariate data. For paired observations of the same variable, they can also be used to illustrate differences in the central tendency. For these differences, it would be useful to draw confidence intervals (CIs) that correctly align with statistical analyses. Here, we describe a method to compute and draw a diagonal CI for pairwise differences in scatter plots. This CI can be compared to the identity line that marks coordinates with identical values in both observations. Such CIs offer advantages for both authors and readers: for authors, the CI is simple to compute and to draw; for readers, the CI is less ambiguous and more informative than other types of illustrations, because the three CIs of the standalone effects of x, y and their pairwise differences can be plotted simultaneously along horizontal, vertical and diagonal axes, respectively. A survey testing the interpretation of standalone effects and pairwise differences in bar and scatter plots by scientists showed that such effects can be interpreted with high certainty and accuracy from scatter plots containing horizontal and vertical CIs for standalone effects and diagonal CIs for pairwise differences.
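A minimal sketch of one plausible construction, assuming a t-based CI for the mean pairwise difference drawn as a segment perpendicular to the identity line (the published recipe may differ in details; NumPy, SciPy, and Matplotlib are assumed):

```python
# Minimal sketch: scatter plot of paired observations with a diagonal CI for the
# mean pairwise difference, drawn against the identity line. Illustrative data.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(10, 2, 40)            # condition A
y = x + rng.normal(0.8, 1.5, 40)     # condition B, paired with A

d = y - x
ci_lo, ci_hi = stats.t.interval(0.95, df=len(d) - 1,
                                loc=d.mean(), scale=stats.sem(d))

fig, ax = plt.subplots()
ax.scatter(x, y)
lims = [min(x.min(), y.min()), max(x.max(), y.max())]
ax.plot(lims, lims, "k--", label="identity line (y = x)")

# Points with y - x = c lie on a line parallel to the identity line. Mapping the
# CI bounds onto the anti-diagonal through the grand mean m yields a segment
# centered on the bivariate mean (x-bar, y-bar).
m = (x.mean() + y.mean()) / 2
ax.plot([m - ci_lo / 2, m - ci_hi / 2], [m + ci_lo / 2, m + ci_hi / 2],
        lw=4, label="95% CI of mean pairwise difference")
ax.set_xlabel("Observation 1")
ax.set_ylabel("Observation 2")
ax.legend()
plt.show()
```

If the diagonal segment does not cross the identity line, the mean pairwise difference is reliably different from zero at the chosen confidence level.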
Demotivated, but still attentive: Text disfluency does not affect mind-wandering and reading comprehension, but reduces motivation
Tietz S, Müller M, Rummel J and Steindorf L
Studies on the relationship between text-processing difficulty, mind wandering, and reading comprehension have yielded mixed results. Whereas most studies found mind-wandering frequency to be increased and reading comprehension to be decreased when text processing became more difficult, Faber et al. (Psychonomic Bulletin & Review, 24(3), 914-919, 2017) reported an opposite effect when manipulating text difficulty via different font types (i.e., Arial vs. Comic Sans). This effect may reflect the potential of mildly disfluent fonts, such as Comic Sans, to introduce desirable difficulties during reading, thereby enhancing focus on the text. Strongly disfluent fonts, however, may contribute to the commonly observed disadvantages in text focus under conditions of increased text-processing difficulty. To test this idea, we conducted a new study (N = 151, student sample) in which we manipulated disfluency at three levels (i.e., fluent, mildly disfluent, strongly disfluent) by using different font types, and compared mind-wandering frequency, reading comprehension, and reading motivation between conditions. The disfluency manipulation affected motivation but not mind wandering or reading comprehension. Additional Bayesian analyses strongly supported the null hypothesis for the latter two. These results suggest that the positive effects of reading disfluency may be less robust than previously assumed and that further research is needed to explore to what extent text-processing difficulty effects on mind wandering depend on sample and text characteristics.
Controlling unwanted memories: A conceptual review grounded in the process model of emotion regulation
Bachfischer A and Harris IM
Autobiographical memories are a crucial source of emotional states in our daily lives. While remembering negative events in the past is important to guide future behaviours and steer us away from harm, being reminded of unpleasant events too often or too intensely can have a serious impact on our wellbeing. A solution that may reconcile these positive and negative effects of negative memories is memory control. Being able to control when, how, and which memories to remember, based on our current goals, is similar to being able to control our emotions. This taps into the well-established field of emotion regulation (ER), where the ER Process Model (Gross, Journal of Personality and Social Psychology, 74(1), 224-237, 1998b; Psychological Inquiry, 26(1), 1-26, 2015) has been extensively used as a theoretical framework. The memory control field lacks such an overarching model that would provide a guiding framework and new insights for emotional memory control research and practice. In this conceptual review, we bring together three lines of well-established research - on Emotion Regulation, Involuntary Autobiographical Memories, and Memory Control - to demonstrate how the Process Model of ER can be applied to memories. The application of the ER model to emotional memories enhances conceptual clarity of the field of memory control, helps to organise existing findings, reveals meaningful similarities and differences between various memory control strategies, identifies the most potentially effective strategies, and points to the most promising future research directions.
Feedback based on simple strategies facilitates strategy execution and selection in foraging
Lin HY and von Helversen B
Previous research has shown that human participants performed suboptimally in patch-leaving behavior during foraging tasks. This suboptimal performance stemmed from two primary sources: Participants often adopted a strategy unsuited to the environment and failed to apply it optimally. The current study investigates whether providing feedback on participants' patch-leaving behavior can improve their performance by facilitating either a switch to a more effective strategy or an enhanced application of their existing strategy. All participants completed a patch-leaving task across three sessions: pre-feedback, feedback, and post-feedback. Their patch-leaving strategies in each session were identified through computational modeling. During the feedback session, participants received feedback based on either the fixed-time (FT) or giving-up-time (GUT) strategy. Most participants employed the GUT strategy in the pre-feedback session and showed improved performance in the post-feedback session. In the FT feedback condition, many participants switched to using the FT strategy in the post-feedback session. Participants who switched improved in performance, whereas those who continued using the GUT strategy did not. In contrast, in the GUT feedback condition, most participants continued using the GUT strategy but benefited from feedback due to a more precise execution of the GUT strategy in the post-feedback session. These results suggest that participants can adapt to a better-suited strategy or improve their application of a suboptimal strategy with appropriate feedback.
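For reference, the two strategies can be stated as simple leaving rules; the sketch below uses standard definitions (FT: leave after a fixed time in the patch; GUT: leave once the time since the last reward exceeds a threshold), with illustrative thresholds rather than the study's fitted parameters.

```python
# Minimal sketch of the two patch-leaving rules under standard definitions.
# Thresholds are illustrative placeholders, not fitted values from the study.

def leave_fixed_time(time_in_patch: float, ft_threshold: float = 12.0) -> bool:
    """FT rule: leave once total time in the current patch exceeds a fixed threshold."""
    return time_in_patch > ft_threshold

def leave_giving_up_time(time_since_last_reward: float, gut_threshold: float = 4.0) -> bool:
    """GUT rule: leave once the time elapsed since the last reward exceeds a threshold."""
    return time_since_last_reward > gut_threshold

# Example: 10 s spent in the patch, 5 s since the last reward.
print(leave_fixed_time(10.0))         # False: stay under the FT rule
print(leave_giving_up_time(5.0))      # True: leave under the GUT rule
```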
Talker-specificity beyond the lexicon: Recognition memory for spoken sentences
Clapp W and Sumner M
Over the past 35 years, it has been established that mental representations of language include fine-grained acoustic details stored in episodic memory. The empirical foundations of this claim were laid through a series of word recognition experiments showing that participants were better at remembering words repeated by the same talker than words repeated by a different talker (the talker-specificity effect). This effect has been widely replicated, but exclusively with isolated, generally monosyllabic words as the object of study. Whether fine-grained acoustic detail plays a role in the encoding and retrieval of larger structures, such as spoken sentences, has important implications for theories of language understanding in natural communicative contexts. In this study, we extended traditional recognition memory methods to use full spoken sentences rather than individual words as stimuli. Additionally, we manipulated attention at the time of encoding in order to probe the automaticity of fine-grained acoustic encoding. Participants were more accurate for sentences repeated by the same talker than by a different talker. They were also faster and more accurate in the Full Attention than in the Divided Attention condition. The specificity effect was more pronounced for the Divided Attention than the Full Attention group. These findings provide evidence for specificity at the sentence level. They also highlight the implicit, automatic encoding of fine-grained acoustic detail and point to a central role for cognitive resource allocation in shaping memory-based language representations.
Signal suppression 2.0: An updated account of attentional capture and suppression
Gaspelin N, Ma X and Luck SJ
The signal suppression account of attentional capture was proposed in 2010 to resolve a longstanding debate between bottom-up and top-down theories of capture by proposing that a top-down suppressive mechanism can eliminate bottom-up capture of attention. Since its original proposal, the signal suppression account has garnered much support and has also been challenged in important ways. The current article reviews how the signal suppression account has survived several challenges but has also been updated to account for new findings. The primary updates are that (a) suppression operates on specific feature values and locations rather than squashing a generalized "attend-to-me" signal produced by salient distractors, and (b) suppression reflects implicit learning that is triggered when attention is captured. This revised hypothesis predicts that initial instances of attentional capture are needed to drive the implicit learning processes that lead to distractor suppression. Because high-salience distractors are more likely to capture attention than low-salience distractors prior to this implicit learning process, the revised hypothesis predicts that it will be easier to learn to suppress high-salience distractors than low-salience distractors. It also predicts that explicit attempts to override capture may (ironically) lead to increased rather than decreased distraction.
An investigation of the feeling that an inter-turn silence has lasted too long
Thomas AM and Kaschak MP
During conversations, there is often a short silence between the end of one turn and the beginning of the next. These silences tend to be brief. If a speaker waits too long before starting their turn, it may trigger a negative interpretation (e.g., that the speaker is lying). We investigated whether the sense that a response took too long is related to the time it typically takes speakers in general to respond to a given question, participants' tendency to over- or underestimate temporal durations, and participants' level of general and social anxiety. Average response time for individual questions was related to variation in participants' sense that a response had taken too long, but biases in time perception, general anxiety, and social anxiety were not.
Hyper-speed meaning and form predictions: An EEG-based representational similarity analysis
Angulo-Chavira AQ, Castellón-Flores AM, Carrasco-Ortiz H and Arias-Trejo N
Language comprehension involves predictive processing, in which comprehenders anticipate both semantic and form-related attributes of upcoming words. This predictive mechanism is crucial as it enables efficient real-time language processing, allowing listeners and readers to keep pace with rapid information streams and quickly correct potential errors. Using electroencephalography (EEG) and representational similarity analysis (RSA), we investigated whether predictions follow a hierarchical, top-down process or occur in parallel, facilitated by associative mechanisms. In this study, native Spanish-speaking undergraduate students read highly constrained sentences designed to elicit specific target words. RSA was applied to evaluate the similarity between all possible pairs of expected words, and signals were classified into semantic, form-related, or specific-word effects based on their relationship to the expected word. The results revealed a rapid transition between effects, with semantic predictions consistently preceding form-related and specific-word predictions. While the sequential order aligns with hierarchical processing, this rapid predictive transition is better understood in the context of associative mechanisms and predictive coding.
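As a brief illustration of the analysis logic, RSA compares the pattern of pairwise (dis)similarities among neural responses with the (dis)similarities predicted by a model (e.g., semantic or form-based features). A minimal sketch with simulated data (the sizes, feature sets, and correlation-distance metric are illustrative assumptions, not the study's pipeline):

```python
# Minimal sketch of representational similarity analysis (RSA) with simulated data:
# build a neural representational dissimilarity matrix (RDM) and compare it with a
# model RDM via rank correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_words, n_channels = 30, 64
neural_patterns = rng.normal(size=(n_words, n_channels))   # e.g., EEG patterns per expected word
model_features = rng.normal(size=(n_words, 10))            # e.g., semantic or form features

# Pairwise dissimilarities (correlation distance) between all word pairs.
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# How well does the model RDM predict the neural RDM?
rho, p = spearmanr(neural_rdm, model_rdm)
print(round(rho, 3), round(p, 3))
```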
How time shapes letter position flexibility: Testing positional uncertainty and open bigram accounts
Romero-Ortells I, Perea M, Baciero A, Gómez P and Marcet A
One of the critical benchmarks for understanding orthographic processing during word recognition and reading is the transposed-letter effect (e.g., in lexical decision, CHOLOCATE [created by transposing two letters from CHOCOLATE] produces slower and more error-prone responses than CHOTONATE). Two main theoretical frameworks explain this phenomenon: positional uncertainty models, which attribute the effect to uncertainty in letter position encoding that diminishes over time, and open bigram models, which propose a level of ordered pairs of letters between the letter and word levels that may be more resilient to decay. We designed two delayed lexical decision experiments to test whether the transposed-letter effect vanishes or persists at two time delays (750 ms and 1,500 ms). In Experiment 1, a robust transposed-letter effect in accuracy emerged at 750 ms (9.6%) but diminished to a small (2.9%) yet reliable effect at 1,500 ms. Experiment 2 replicated this pattern with a contrast manipulation on the critical letters (e.g., CHOLOCATE vs CHOTONATE), yielding a slightly smaller transposed-letter effect (2.0%) at 1,500 ms. These findings demonstrate that positional uncertainty diminishes over time, yet residual orthographic overlap persists, particularly for a subset of participants, supporting hybrid accounts that combine bottom-up perceptual refinement with top-down contributions from shared sublexical codes (e.g., open bigrams).
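By way of illustration, open bigram schemes code a word as the set of ordered letter pairs it contains (usually within a limited distance), which is why a transposition such as CHOLOCATE preserves most of the bigrams of CHOCOLATE while a substitution such as CHOTONATE does not. A minimal sketch (the distance limit of 3 is an assumption; specific models differ):

```python
# Minimal sketch of open-bigram coding: ordered letter pairs within a maximum
# distance. The distance limit (3) is illustrative; specific models differ.
def open_bigrams(word: str, max_dist: int = 3) -> set[str]:
    return {word[i] + word[j]
            for i in range(len(word))
            for j in range(i + 1, min(i + 1 + max_dist, len(word)))}

base = open_bigrams("CHOCOLATE")
transposed = open_bigrams("CHOLOCATE")    # transposition of two internal letters
substituted = open_bigrams("CHOTONATE")   # substitution control

print(len(base & transposed) / len(base))    # high overlap with the transposition
print(len(base & substituted) / len(base))   # lower overlap with the substitution
```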