Biophysical Circuit Modeling of Electro-Quasistatic Multi-Human Body Communication
Human body communication, particularly of the electro-quasistatic variety, has gained traction among low-power wireless circuit designers due to its benefits in power and physical security compared to conventional electromagnetic or radio-frequency communication. Yet its theory and practice have thus far been limited to a single person, without a clear understanding of how the channel behaves when multiple individuals or human bodies are added. To the author's knowledge, this work analyzes the limit of the quasistatic approximation in a multiple-human-body scenario for the first time and demonstrates how the approximation changes with length scale. Furthermore, the channel gain for varying numbers of human bodies is measured to be -35 dB (one body), -41 dB (two bodies), and -44 dB (three bodies) for a ground-connected transmitter. A complete biophysical circuit model is developed that accurately predicts the received voltage as a function of how many human bodies are connected serially. This work can aid designers in realizing applications such as secure key exchange, authentication, and music sharing using electro-quasistatic human body communication techniques.
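As a quick numerical check on these gains, the sketch below converts the reported channel gains to received voltage amplitudes; the 1 V transmit amplitude is an illustrative assumption, and this is a back-of-the-envelope view rather than the paper's biophysical circuit model.

```python
# Hedged sketch: convert the measured channel gains (dB) to received
# voltage amplitudes. Gains are the values reported in the abstract;
# the 1 V transmit amplitude is an assumption for illustration.
measured_gain_db = {1: -35.0, 2: -41.0, 3: -44.0}  # bodies in chain -> gain (dB)
v_tx = 1.0                                         # assumed transmit amplitude (V)

for n_bodies, gain_db in measured_gain_db.items():
    v_rx = v_tx * 10 ** (gain_db / 20.0)           # dB -> linear voltage ratio
    print(f"{n_bodies} body chain: {gain_db:+.0f} dB -> v_rx = {v_rx * 1e3:.2f} mV")
```

Note the roughly 3-6 dB drop per added body, which is exactly the behavior the serial circuit model is built to predict.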
Assessing the robustness of deep learning-based brain age prediction models across multiple EEG datasets
The increasing availability of large electroencephalography (EEG) datasets enhances the potential clinical utility of deep learning (DL) for cognitive and pathological decoding. However, dataset shifts due to variations in the population and acquisition hardware can considerably degrade model performance. We systematically investigated the generalisation of DL models to unseen datasets with different characteristics, using age as the target variable. Five datasets were used in two different experimental setups: (1) leave-one-dataset-out (LODO) and (2) leave-one-dataset-in (LODI) cross-validation. A comprehensive set of 1805 different hyperparameter configurations was tested, including variations in the DL architectures and data pre-processing. The performance varied across source/target dataset pairs. Using LODO, we obtained Pearson's r values of {0.63, 0.84, 0.75, 0.23, 0.10} and $R^{2}$ values of {-0.01, 0.63, 0.41, -4.66, -70.98}. For LODI, the results varied in Pearson's r from -0.11 to 0.84 and in $R^{2}$ from -704.89 to 0.65, depending on the source and target dataset. Adjusting the model intercepts using the average age of the target dataset substantially improved some $R^{2}$ scores. Our results show that DL models can learn age-related EEG patterns which generalise with strong correlations to datasets with broad age spans. The most important hyperparameter choice was using the full frequency range between 1 and 45 Hz rather than a single frequency band. The effect of the second most important hyperparameter depended on the experimental setup. Our findings highlight the challenges of dataset shifts in EEG-based DL models and establish a benchmark for future studies aiming to improve the robustness of DL models across diverse datasets.
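The intercept adjustment mentioned above amounts to re-centering predictions on the target dataset's mean age. A minimal sketch, with toy data and variable names as assumptions:

```python
# Hedged sketch of intercept adjustment: shift predictions so their mean
# matches the target dataset's average age, then recompute R^2.
# The toy data below is an assumption for illustration only.
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
y_true = rng.uniform(20, 80, size=200)               # target-dataset ages
y_pred = 0.8 * y_true + 25 + rng.normal(0, 5, 200)   # biased cross-dataset predictions

print("R^2 before:", round(r2_score(y_true, y_pred), 3))
y_adj = y_pred - y_pred.mean() + y_true.mean()       # re-anchor the intercept
print("R^2 after :", round(r2_score(y_true, y_adj), 3))
```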
Compression-Enabled Joint Entropy Estimation for Seizure Detection on Human Intracortical Electroencephalography
Of the 1% of the world population with epilepsy, one-third have drug-resistant epilepsy and often turn to surgical intervention. Current epilepsy treatment relies on manual review by epileptologists and could benefit from reliable quantitative electroencephalography (qEEG) approaches to speed up evaluation, minimize inter-reviewer variance, and deliver higher quality and more equitable care.
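The abstract does not spell out the estimator, but the compression-enabled joint entropy idea named in the title can be illustrated generically: compress a quantized, interleaved pair of channels and read the compressed size as an upper-bound proxy for joint entropy. A hedged sketch (all choices below are assumptions, not the paper's method):

```python
# Generic compression-based joint entropy proxy: quantize two channels,
# interleave them, and treat zlib-compressed size (bits per joint sample)
# as an upper-bound entropy estimate. Illustrative only; not the paper's
# specific estimator.
import zlib
import numpy as np

def joint_entropy_proxy(x, y, levels=16):
    edges = np.linspace(0, 1, levels + 1)[1:-1]
    qx = np.digitize(x, np.quantile(x, edges))       # quantize to small alphabet
    qy = np.digitize(y, np.quantile(y, edges))
    data = np.column_stack([qx, qy]).astype(np.uint8).tobytes()
    return 8 * len(zlib.compress(data, 9)) / len(x)  # ~bits per joint sample

rng = np.random.default_rng(0)
a = rng.normal(size=5000)
print("independent:", joint_entropy_proxy(a, rng.normal(size=5000)))
print("coupled    :", joint_entropy_proxy(a, a + 0.1 * rng.normal(size=5000)))
```

Coupled channels compress better, so the proxy drops: that redundancy signal is the kind of quantitative feature a qEEG seizure detector could exploit.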
Deep Transfer Learning in Intra-subject and Inter-subject for Intracortical Brain Machine Interface Decoding
This study proposes an Improved Deep Transfer Network (IDTN) to enhance decoding accuracy, calibration efficiency, and adaptability of intracortical brain machine interface (iBMI) systems while reducing the reliance on new labeled samples.
ZS-KAN: Zero-shot Image Denoising with Lightweight Kolmogorov-Arnold Networks
Current learning-based image denoising methods have achieved impressive performance. However, their reliance on deep neural architectures and large paired datasets limits their applicability in data-limited or edge computing scenarios. Motivated by the expressive functional approximation power of Kolmogorov-Arnold networks (KANs), we present ZS-KAN, a lightweight yet highly effective and computationally efficient zero-shot denoising method. ZS-KAN combines the computational efficiency of convolutional neural networks with the representational flexibility of KANs, achieving competitive denoising performance while requiring only 1%-25% of the parameters used by other recent zero-shot approaches. Experimental results on synthetic and real-world noisy data demonstrate that ZS-KAN achieves comparable or even superior performance to state-of-the-art zero-shot methods while maintaining significantly lower model complexity. These advantages highlight the potential of ZS-KAN for practical deployment. The PyTorch implementation is publicly available at: https://github.com/Jayx-Wang/ZS-KAN.
Full-Spectrum Analysis with Machine Learning for Quantitative Assessment of Lateral Flow Immunoassays: A Platform Approach
Lateral flow immunoassays (LFIAs) provide rapid point-of-care results but lack quantitative capabilities. This study presents a platform technology integrating full-spectrum analysis (400-700 nm) with machine learning to enhance qualitative LFIAs with semi-quantitative assessment capabilities. We analyzed 241 clinical nasopharyngeal specimens using portable spectrometry to capture gold nanoparticle optical signatures from SARS-CoV-2 rapid tests as a validation model. Systematic evaluation of normalization strategies revealed that T-C differential normalization outperformed T/C ratio normalization. Signal processing through Savitzky-Golay filtering, standard normal variate transformation, and principal component analysis reduced dimensionality from 601 to 4 features while retaining 97.26% variance. Among five evaluated algorithms, random forest achieved optimal performance (R² = 0.961, RMSE = 2.235 Ct) across clinically relevant ranges (PCR Ct 10.8-35.0). Bland-Altman analysis revealed measurement uncertainty of ±4.2 Ct, indicating suitability for population surveillance rather than precise individual quantification. Feature importance analysis identified 520-570 nm as the critical spectral region, consistent with gold nanoparticle surface plasmon resonance. This platform approach demonstrates that standard LFIAs contain extractable semi-quantitative information accessible through spectral-machine learning integration. While validated using COVID-19, the framework's modular design enables adaptation to diverse analytes including biomarkers, therapeutic drugs, and environmental contaminants without fundamental architectural changes. The methodology establishes a foundation for enhanced lateral flow diagnostics, particularly valuable in resource-limited settings where rapid semi-quantitative results provide greater utility than delayed laboratory measurements.
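The described processing chain maps directly onto standard scientific Python tooling. A minimal sketch on synthetic spectra (the toy data and hyperparameters are assumptions; only the pipeline stages come from the abstract):

```python
# Sketch of the pipeline: Savitzky-Golay smoothing -> standard normal
# variate (SNV) -> PCA (601 -> 4 features) -> random forest regression
# of PCR Ct. Synthetic data and hyperparameters are illustrative.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
spectra = rng.normal(1.0, 0.05, (241, 601))   # 241 specimens, 400-700 nm grid
ct = rng.uniform(10.8, 35.0, 241)             # Ct range reported in the study

smoothed = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)
snv = (smoothed - smoothed.mean(axis=1, keepdims=True)) / smoothed.std(axis=1, keepdims=True)

features = PCA(n_components=4).fit_transform(snv)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(features, ct)
print(model.predict(features[:3]))
```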
Stability strategy restrictions do not elicit compensatory mechanisms during mediolaterally perturbed slow walking
Healthy individuals are able to overcome perturbations during walking without falling. They have multiple stability strategies at their disposal, but it remains unclear how these strategies compensate for one another when one of them is limited by external factors. The objective of the current study was to determine how the different stability strategies compensate for one another when mediolateral perturbations are applied.
Fast 3D Partial Boundary Data EIT Reconstructions using Direct Inversion CGO-based Methods
Objective: To develop and test a fast 3D image reconstruction method for partial boundary data electrical impedance tomography (EIT), covering both absolute and time-difference imaging.
Smartphone-Based Blood Pressure Monitoring via the Oscillometric Finger Pressing Method: Investigation of the DC Component of PPG
Oscillometric finger pressing is a potential method for smartphone-based blood pressure (BP) monitoring. A photoplethysmography (PPG)-force sensor unit measures the slowly increasing finger pressure applied by the user under visual guidance and the resulting variable blood volume oscillations ("AC PPG"). BP can then be estimated from the oscillation height versus finger pressure function. The non-oscillating component of PPG ("DC PPG") during oscillometric finger pressing was investigated.
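For context, the classic oscillometric readout takes mean pressure at the peak of the oscillation-height curve and systolic/diastolic pressures at fixed fractions of that peak. The sketch below shows that textbook principle on a synthetic envelope; the ratios and envelope shape are assumptions, not the paper's estimator:

```python
# Textbook-style oscillometric sketch: MAP at the envelope peak,
# SBP/DBP at fixed amplitude ratios (0.55 / 0.75 assumed here).
# Synthetic envelope; not the paper's algorithm.
import numpy as np

pressure = np.linspace(40, 180, 300)                 # finger pressure ramp (mmHg)
envelope = np.exp(-((pressure - 95) / 30) ** 2)      # synthetic oscillation height

map_est = pressure[np.argmax(envelope)]
sbp_est = pressure[(pressure > map_est) & (envelope <= 0.55 * envelope.max())][0]
dbp_est = pressure[(pressure < map_est) & (envelope >= 0.75 * envelope.max())][0]
print(f"MAP ~{map_est:.0f}, SBP ~{sbp_est:.0f}, DBP ~{dbp_est:.0f} mmHg")
```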
A Dynamic Local-Global Spatiotemporal Transformer Network for Pain Intensity Estimation in Patients With Disorders of Consciousness
Clinical diagnosis of disorders of consciousness (DOC) suffers from a high misdiagnosis rate, particularly in differentiating the minimally conscious state (MCS) from the vegetative state/unresponsive wakefulness syndrome (VS/UWS). Recent studies have linked pain perception to the level of consciousness. This study proposes a dynamic local-global spatiotemporal transformer (DLGSTT) network for estimating pain intensity from facial expressions. The DLGSTT network integrates a global multi-scale feature extraction module with a local attention feature extraction module to efficiently capture diverse features in facial expressions and enhance the perception of expression changes. Additionally, a discrete cosine transform (DCT) enhanced temporal transformer module is incorporated to extract temporal features from the dynamic changes in facial expressions, with pain intensity scores used to quantify pain perception. Experimental results demonstrate that the DLGSTT network outperforms state-of-the-art algorithms on public datasets. Furthermore, when applied to a self-collected dataset of 33 DOC patients, the results show a significant correlation between pain intensity and levels of consciousness, and reveal gender-based differences in pain perception thresholds. Our method is validated as a feasible clinical tool for the auxiliary diagnosis of DOC patients, serving as a valuable complement to behavioral scales and potentially improving diagnostic accuracy.
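The DCT enhancement can be pictured as compressing each feature's temporal trajectory into a few low-frequency coefficients before the temporal transformer. A hedged sketch (shapes and the number of retained coefficients are assumptions):

```python
# Hedged sketch of the DCT step: apply a DCT along the time axis of
# per-frame facial features and keep low-frequency coefficients as a
# compact temporal descriptor. Shapes are illustrative assumptions.
import numpy as np
from scipy.fft import dct

features = np.random.default_rng(0).normal(size=(64, 128))  # (frames, feat_dim)
coeffs = dct(features, type=2, norm="ortho", axis=0)         # DCT along time
compact = coeffs[:16]                                        # keep 16 low-freq terms
print(compact.shape)                                         # (16, 128)
```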
A Pulsed Tumor Treating Fields Protocol to Improve Glioblastoma Therapy
Tumor Treating Fields (TTFields) utilize alternating electric fields (AC fields) within the 100-300 kHz range and electric field strengths above 1 V/cm for glioblastoma (GBM) treatment. However, the applied field strength is often reduced to a relatively low value (below 1 V/cm) due to the unavoidable thermal effects induced by Joule heating on the patient's skin.
Ex-vivo Prostate Evaluation of Fused-data TREIT Using Only Biopsy-probe Electrodes
This study evaluates a fused-data transrectal electrical impedance tomography (TREIT) method for prostate cancer imaging on a set of 22 ex vivo prostates. A previously optimized TREIT algorithm is utilized, and novel validation and fusion approaches leveraging pathology information are considered. Overall, the aim was to increase the sensed volume of a standard 12-core prostate biopsy by adding TREIT imaging. Two TREIT approaches were considered: (1) including prostate boundary information (EIT-P) and (2) including prostate and tumor boundary information (EIT-P+T). Both simple electrical impedance spectroscopy (EIS) metrics and the two imaging approaches (EIT-P and EIT-P+T) were evaluated with respect to biopsy core, 3D (EIT-P) image, and tumor-grade data. Best AUCs of 0.85, 0.84, and 0.83 were found when considering increasing volumes of tissue (0.8%, 2.7%, and 15% of the prostate). The largest measurement volume (15%), which utilized EIT-P, sensed significantly more prostate tissue than the standard biopsy-only approach (<1%). These represent large improvements compared to prior clinical EIS biopsy and TREIT studies. Tumor-grade analysis (via EIT-P+T) appears to show promise, but more data are required to confirm this. Overall, the study made important strides in developing the TREIT technique, and further investigation, likely in an in vivo study, appears merited.
Data Augmentation Via Digital Twins to Develop Personalized Deep Learning Glucose Prediction Algorithms for Type 1 Diabetes in Poor Data Context
Accurately predicting glucose levels is essential for effectively managing type 1 diabetes (T1D), a chronic condition in which the body cannot produce insulin. Although deep learning approaches have shown promise, their training requires extensive datasets that capture a wide range of physiological and behavioral variations. However, obtaining such datasets can be challenging and impractical, especially when their collection demands significant patient effort. To overcome this limitation, we propose a data augmentation strategy that leverages digital twins of individuals with T1D (DT-T1D) to generate personalized synthetic data mirroring real-world glucose-insulin dynamics.
Joint-Shrinkage Pattern Matching for Small-Sample and Imbalanced ERP Decoding in Brain-Computer Interfaces
Event-related potential (ERP)-based brain-computer interface (BCI) systems are approaching sub-microvolt-level resolution, enabling detailed decoding of sophisticated cognitive processes. This progress has increased the demand for robust classifiers. Current algorithms encounter two fundamental challenges when decoding ERPs: data scarcity and class imbalance. To address these challenges, we propose a joint-shrinkage pattern matching (JSPM) algorithm consisting of two modules. First, a novel joint-shrinkage spatial filter is constructed by integrating shrinkage-based regularization with the $\ell_{2,p}$ norm. This regularization approach effectively bridges the gap between complex structured regularization and implementation simplicity, which introduces automated regularization to enhance module robustness under data-scarce conditions. The $\ell_{2,p}$ norm provides a flexible feature distance measurement, enabling adaptation to data quality variability. Second, a weighted template matching module mitigates decision boundary shift caused by class imbalance. Using error-related potentials (ErrPs) as representative signals, we validated the algorithm through comprehensive comparisons. JSPM significantly outperformed 14 state-of-the-art classifiers on one self-collected and two public ErrP datasets. With only 40 imbalanced training samples, it achieved up to 14.84% higher average balanced accuracy (bAcc) than competing methods, maintaining a 4.88% average bAcc advantage over its nearest competitor. Notably, JSPM significantly enhanced inter-class discriminability for ErrP features with approximately 1 μV amplitude, achieving a maximum bAcc enhancement of 8.80% compared to deep learning methods. Overall, JSPM effectively addresses small-sample and imbalanced ERP decoding in BCI systems, facilitating the transition from laboratory research to real-world applications.
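For reference, the standard definition of the $\ell_{2,p}$ matrix norm behind such regularizers, for a filter matrix $W$ with rows $w_{i}$, is $\|W\|_{2,p} = \left(\sum_{i=1}^{m} \|w_{i}\|_{2}^{p}\right)^{1/p}$ with $0 < p \le 2$; small $p$ promotes row sparsity, while $p$ near 2 behaves like ridge-style shrinkage. (This is the textbook definition; the abstract does not fix the exact row/column convention.)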
Leveraging Swin Transformer for enhanced diagnosis of Alzheimer's disease using multi-shell diffusion MRI
This study aims to support early diagnosis of Alzheimer's disease and detection of amyloid accumulation by leveraging the microstructural information available in multi-shell diffusion MRI (dMRI) data, using a vision transformer-based deep learning framework.
EMI Cancellation for Shielding-Free Ultra-Low-Field MRI
Ultra-Low-Field Magnetic Resonance Imaging (ULF MRI) offers low cost and portability but suffers from electromagnetic interference (EMI) in unshielded environments. This study developed a deep learning-based active EMI suppression method to overcome these limitations.
BDFM: Foundation Model for Segmentation and Classification Tasks of Brain Diseases
The lack of high-quality annotated images and the limited transferability of task-specific models hamper the practical of AI-assisted diagnosis for brain diseases. Developing self-supervised foundation model is a promising solution to address this problem.
Computerized Assessment of Motor Imitation for Distinguishing Autism in Video (CAMI-2DNet)
Motor imitation impairments are commonly reported in individuals with autism spectrum conditions (ASCs), suggesting that motor imitation could be used as a phenotype for addressing autism heterogeneity. Traditional methods for assessing motor imitation are subjective and labor-intensive, and require extensive human training. Modern Computerized Assessment of Motor Imitation (CAMI) methods, such as CAMI-3D for motion capture data and CAMI-2D for video data, are less subjective. However, they rely on labor-intensive data normalization and cleaning techniques, and human annotations for algorithm training. To address these challenges, we propose CAMI-2DNet, a scalable and interpretable deep learning-based approach to motor imitation assessment in video data, which eliminates the need for ad hoc normalization, cleaning and annotation. CAMI-2DNet uses an encoder-decoder architecture to map a video to a motion representation that is disentangled from nuisance factors such as body shape and camera views. To learn a disentangled representation, we employ synthetic data generated by motion retargeting of virtual characters through the reshuffling of motion, body shape, and camera views, as well as real participant data. To automatically assess how well an individual imitates an actor, we compute a similarity score between their motion encodings, and use it to discriminate individuals with ASCs from neurotypical (NT) individuals. Our comparative analysis demonstrates that CAMI-2DNet has a strong correlation with human scores while outperforming CAMI-2D in discriminating ASC vs NT children. Moreover, CAMI-2DNet performs comparably to CAMI-3D while offering greater practicality by operating directly on video data and without the need for ad hoc normalization and human annotations.
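The scoring step reduces to a similarity between two motion encodings. A minimal sketch, with cosine similarity as an assumed metric (the abstract does not name the exact function) and the encoder abstracted away:

```python
# Hedged sketch: score imitation as cosine similarity between the actor's
# and participant's motion encodings. The encoder and metric choice are
# assumptions for illustration.
import numpy as np

def imitation_score(z_actor, z_participant):
    num = float(np.dot(z_actor, z_participant))
    den = float(np.linalg.norm(z_actor) * np.linalg.norm(z_participant))
    return num / den  # 1.0 means identical encodings

rng = np.random.default_rng(0)
z_a = rng.normal(size=256)                        # actor's motion encoding
z_p = z_a + 0.3 * rng.normal(size=256)            # imperfect imitation
print(round(imitation_score(z_a, z_p), 3))
```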
From Speech to Sonography: Spectral Networks for Ultrasound Microstructure Classification
The frequency dependence of backscattered radiofrequency (RF) signals produced by ultrasound scanners carries rich information related to the tissue microstructure (e.g., scatterer size, attenuation). This information can be used to classify tissues based on microstructural changes associated with disease onset and progression. Conventional convolutional neural networks (CNNs) can learn this information directly from RF data, but they often struggle to achieve adequate frequency selectivity. This increases model complexity and convergence time, and limits generalization. To overcome these challenges, SincNet, originally developed for speech processing, was adapted to classify RF data based on differences in frequency properties. Rather than learning every filter coefficient, SincNet only learns each filter's low cutoff frequency and bandwidth, dramatically reducing the number of parameters and improving frequency resolution. For model interpretability, a Gradient-Weighted Filter Contribution is introduced, which highlights the importance of spectral bands. The approach was validated on three datasets: simulated data with different scatterer sizes, experimental phantom data, and in vivo data from rats fed a methionine- and choline-deficient diet to develop liver steatosis, inflammation, and fibrosis. The modified SincNet consistently achieved the best results in material/tissue classification.
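The two-parameters-per-filter idea is concrete: each band-pass kernel is the difference of two sinc low-pass responses, fully determined by a low cutoff and a bandwidth. A hedged sketch (sampling rate, kernel length, and window are illustrative assumptions):

```python
# Sketch of a SincNet-style parametrized filter: a band-pass kernel built
# from two scalars, low cutoff f1 and bandwidth b, as the difference of
# two sinc low-pass responses. fs, length, and window are assumptions.
import numpy as np

def sinc_bandpass(f1_hz, band_hz, kernel_len=129, fs=40e6):
    f1, f2 = f1_hz / fs, (f1_hz + band_hz) / fs       # normalized cutoffs
    n = np.arange(kernel_len) - (kernel_len - 1) / 2
    h = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    return h * np.hamming(kernel_len)                 # window to reduce ripple

kernel = sinc_bandpass(f1_hz=3e6, band_hz=4e6)        # e.g. a 3-7 MHz band
print(kernel.shape)
```

In a learnable setting, only the cutoff and bandwidth carry gradients, which is what cuts the parameter count relative to learning all 129 taps per filter.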
SDDA: Spatial Distillation based Distribution Alignment for Cross-Headset EEG Classification
A non-invasive brain-computer interface (BCI) enables direct interaction between the user and external devices, typically via electroencephalogram (EEG) signals. This paper tackles the problem of decoding EEG signals across different headsets, which is challenging due to differences in the number and locations of the electrodes.
Leveraging Rich Mechanical Features and Long-Range Physical Constraints for Lumbar Spine Stress Analysis
The biomechanical properties of the lumbar spine are crucial for assisting the diagnosis, treatment, and prevention of spinal diseases. Traditional biomechanical analysis methods, especially finite element analysis, require extensive computational resources, precise material property definitions, and complex meshing processes to accurately model the biomechanical behavior of the lumbar spine. While deep learning has been introduced to enhance efficiency and accuracy, challenges such as data dependency and a lack of physical consistency remain.
