121. Neural mechanisms by which attention modulates the comparison of remembered and perceptual representations.
Article. Published 2014-01-01. “…A no-cue condition was also included. When attention cannot be effectively deployed in advance (i.e. following the simultaneous cues), we observed a distributed and extensive activation pattern in the prefrontal and parietal cortices in support of successful change detection. …”
122. MMG-Based Motion Segmentation and Recognition of Upper Limb Rehabilitation Using the YOLOv5s-SE
Article. Published 2025-04-01.
123. Lightweight Deep Learning Model for Fire Classification in Tunnels
Article. Published 2025-02-01. “…This model integrates MobileNetV3 for spatial feature extraction, Temporal Convolutional Networks (TCNs) for temporal sequence analysis, and advanced attention mechanisms, including Convolutional Block Attention Modules (CBAMs) and Squeeze-and-Excitation (SE) blocks, to prioritize critical features such as flames and smoke patterns while suppressing irrelevant noise. …”
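The SE blocks named in record 123 can be hard to picture from the snippet alone. As a rough illustration only (not the paper's actual implementation), a Squeeze-and-Excitation block pools each channel to a scalar, passes the result through a small bottleneck, and rescales the channels with sigmoid gates. The weights `w1`/`w2`, the reduction ratio, and all shapes below are invented for the sketch:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) feature map."""
    s = x.mean(axis=(1, 2))               # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)           # excitation: bottleneck FC + ReLU -> (C/r,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # expansion FC + sigmoid gates -> (C,)
    return x * g[:, None, None]           # reweight each channel by its gate

rng = np.random.default_rng(0)
C, r = 8, 2                               # hypothetical channel count / reduction
x = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C // r, C)) * 0.1   # made-up "learned" weights
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_block(x, w1, w2)
```

Because the gates come from a sigmoid, every output channel is the input channel scaled by a factor strictly between 0 and 1.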
124. Separable sustained and selective attention factors are apparent in 5-year-old children.
Article. Published 2013-01-01. “…Here we examine whether this pattern is detectable in 5-year-old children from the healthy population. …”
125. Attention-fused residual transformer CNN for robust lower limb movement recognition
Article. Published 2025-07-01. “…The AF-RT-CNN architecture combines residual blocks, an attention mechanism, and a Transformer encoder, aiding robust feature extraction, good generalization capability, and pattern recognition. …”
126. Spatial attention-guided pre-trained networks for accurate identification of crop diseases
Article. Published 2025-07-01. “…To address these challenges, we introduce an enhanced crop disease classification framework that incorporates EfficientNet-B3 with an ancillary convolutional layer and a spatial attention module (ACSA). EfficientNet-B3 offers a strong foundation for feature extraction due to its compound scaling and efficient computation, while the spatial attention module improves classification accuracy by directing the model to focus on critical regions of diseased leaves. …”
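The snippet for record 126 does not detail the ACSA module, but spatial attention modules of this kind typically follow the CBAM spatial branch: pool across channels, combine the pooled maps, and gate every spatial location. A minimal sketch, with a hypothetical fixed weighting standing in for the learned convolution:

```python
import numpy as np

def spatial_attention(x, w_avg=0.5, w_max=0.5):
    """CBAM-style spatial gate on a (C, H, W) feature map."""
    avg = x.mean(axis=0)                   # channel-average descriptor (H, W)
    mx = x.max(axis=0)                     # channel-max descriptor (H, W)
    logits = w_avg * avg + w_max * mx      # stand-in for the learned 7x7 conv
    gate = 1.0 / (1.0 + np.exp(-logits))   # sigmoid attention map in (0, 1)
    return x * gate[None, :, :]            # emphasize salient spatial locations

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 5, 5))
y = spatial_attention(x)
```

Every channel is multiplied by the same spatial map, so the module highlights *where* to look (e.g. lesioned leaf regions) rather than which channel to trust.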
127. Segmentation of Low-Grade Brain Tumors Using Mutual Attention Multimodal MRI
Article. Published 2024-11-01. “…This study focuses on enabling multimodal MRI sequences to advance the automatic segmentation of low-grade astrocytomas, a challenging task due to their diffuse and irregular growth patterns. A novel mutual-attention deep learning framework is proposed, which integrates complementary information from multiple MRI sequences, including T2-weighted and fluid-attenuated inversion recovery (FLAIR) sequences, to enhance the segmentation accuracy. …”
128. Nonparametric analysis of inter-individual relations using an attention-based neural network
Article. Published 2021-08-01. “…The high interpretability of the attention mechanism and flexibility of the entire neural network allow for automatic detection of inter-individual relations included in the raw data, without requiring prior knowledge/assumptions about what modes/types of relations are included in the data. …”
129. Can attention-deficit/hyperactivity disorder be considered a form of cerebellar dysfunction?
Article. Published 2025-01-01. “…We suggest considering more rigorous assessments in future ADHD studies, including cerebellar-associated skill evaluations to correlate with symptom severity and other detected outcomes, such as executive dysfunction, and studying possible associative patterns that may serve as more objective measures for this diagnosis. …”
130. Optimized classification of potato leaf disease using EfficientNet-LITE and KE-SVM in diverse environments
Article. Published 2025-05-01. “…EfficientNet-LITE improves the model's emphasis on pertinent features through Channel Attention (CA) and 1-D Local Binary Pattern (LBP), while preserving computational economy with a reduced model size of 12.46 MB, fewer parameters at 3.11M, and a diminished FLOP count of 359.69 MFLOPs. Results: Before optimization, the SVM classifier attained an accuracy of 79.38% on uncontrolled data and 99.07% on laboratory-controlled data. …”
131. Statistical learning re-shapes the center-surround inhibition of the visuo-spatial attentional focus
Article. Published 2025-03-01. “…Abstract: To effectively navigate a crowded and dynamic visual world, our neurocognitive system possesses the remarkable ability to extract and learn its statistical regularities to implicitly guide the allocation of spatial attention resources in the immediate future. The way through which we deploy attention in the visual space has been consistently outlined by a “center-surround inhibition” pattern, wherein a ring of sustained inhibition is projected around the center of the attentional focus to optimize the signal–noise ratio between goal-relevant targets and interfering distractors. …”
132. SDMA-Net: Swin Transformer-Based Dynamic Memory-Attention Network for Endoscopic Navigation
Article. Published 2025-01-01. “…Nevertheless, endoscopic video data often exhibit low texture, variable lighting, and dynamic motion patterns, which pose significant challenges to existing methods. …”
133.
134. Revealing Depression Through Social Media via Adaptive Gated Cross-Modal Fusion Augmented With Insights From Personality Traits
Article. Published 2025-01-01. “…To bridge this gap, we introduce DeXMAG, a novel personalized depression detection framework that integrates a Cross-Modal Attention mechanism with an Adaptive Gated Fusion strategy. …”
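Record 134's snippet names an "Adaptive Gated Fusion" strategy without specifying it. A common gated-fusion pattern, sketched here with made-up weights and not DeXMAG's actual method, learns a per-dimension sigmoid gate from both modality embeddings and mixes them convexly:

```python
import numpy as np

def gated_fusion(text_emb, image_emb, wg, bg):
    """Convex gated fusion of two modality embeddings of equal dimension."""
    pair = np.concatenate([text_emb, image_emb])        # joint descriptor (2d,)
    gate = 1.0 / (1.0 + np.exp(-(wg @ pair + bg)))      # per-dim gate in (0, 1)
    return gate * text_emb + (1.0 - gate) * image_emb   # element-wise mixture

rng = np.random.default_rng(2)
d = 6                                      # hypothetical embedding size
t = rng.standard_normal(d)                 # e.g. text-branch embedding
v = rng.standard_normal(d)                 # e.g. image-branch embedding
wg = rng.standard_normal((d, 2 * d)) * 0.1 # made-up gate weights
bg = np.zeros(d)
fused = gated_fusion(t, v, wg, bg)
```

Because the gate is strictly between 0 and 1, each fused coordinate lies between the corresponding text and image coordinates, so the fusion is "adaptive" in that the mixing proportion is computed from the inputs themselves.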
135. A Novel Framework for Whole-Slide Pathological Image Classification Based on the Cascaded Attention Mechanism
Article. Published 2025-01-01. “…We developed a framework incorporating a cascaded attention mechanism, enhancing meaningful pattern recognition while suppressing irrelevant background information. …”
136. Attention-enhanced hybrid CNN–LSTM network with self-adaptive CBAM for COVID-19 diagnosis
Article. Published 2025-07-01. “…To address this, we propose Dual-Attention CNN-LSTM, an innovative hybrid deep learning model designed to enhance COVID-19 detection from CXR images. …”
137.
138. Neuropsychological Performance: How Mental Health Drives Attentional Function in University-Level Football Athletes
Article. Published 2025-02-01. “…QEEG data revealed patterns associated with burnout, chronic pain, and insomnia among the athletes. …”
139. An interpretable XAI deep EEG model for schizophrenia diagnosis using feature selection and attention mechanisms
Article. Published 2025-07-01. “…The study proposes a novel automated technique for diagnosing Schizophrenia based on electroencephalogram (EEG) sensor data, aiming to enhance interpretability and prediction performance. Methods: This research utilizes Deep Learning (DL) models, including the Deep Neural Network (DNN), Bi-Directional Long Short-Term Memory–Gated Recurrent Unit (BiLSTM-GRU), and BiLSTM with Attention, for the detection of Schizophrenia based on EEG data. …”
140. OXSeg: Multidimensional Attention UNet-Based Lip Segmentation Using Semi-Supervised Lip Contours
Article. Published 2025-01-01. “…A further challenge with lip segmentation is its reliance on image quality, lighting, and skin tone, leading to inaccuracies in the detected boundaries. To address these challenges, we propose a sequential lip segmentation method that integrates attention UNet and multidimensional input. …”