Showing 1,341 - 1,360 results of 16,436 for search 'Model performance features', query time: 0.26s
  1. 1341

    Methane Concentration Inversion Based on Multi-Feature Fusion and Stacking Integration by Yanling Han, Wei Li, Congqin Yi, Ge Song, Yun Zhang

    Published 2025-03-01
    “…The method leverages the series-parallel cascade structure between multiple base models and meta-models to learn different feature representations and patterns in the original data, fully exploring the intrinsic relationships between various feature factors and methane concentration. …”
    Get full text
    Article
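    A minimal stacking sketch for the entry above, assuming a scikit-learn setup on synthetic data; the base learners (random forest, SVR) and the ridge meta-model are illustrative choices, not the paper's exact configuration.

        # Base models learn different views of the data; a meta-model combines
        # their out-of-fold predictions (the cascade idea in the entry above).
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor, StackingRegressor
        from sklearn.linear_model import Ridge
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)

        stack = StackingRegressor(
            estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                        ("svr", SVR(C=10.0))],
            final_estimator=Ridge(alpha=1.0),
        )
        print(cross_val_score(stack, X, y, cv=5, scoring="r2").mean())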
  2. 1342

    Emotion Recognition in the Eye Region Using Textural Features, IBP and HOG by Laura Jalili, Josue Espejel, Jair Cervantes, Farid Lamont

    Published 2024-01-01
    “…Results: Our experimental methodology involved employing various classification techniques to assess performance across different models. Among these, SVM exhibited exceptional performance, boasting an impressive accuracy rate of 99.2%. …”
    Get full text
    Article
  3. 1343

    Feature Engineering for the Prediction of Scoliosis in 5q‐Spinal Muscular Atrophy by Tu‐Lan Vu‐Han, Vikram Sunkara, Rodrigo Bermudez‐Schettino, Jakob Schwechten, Robin Runge, Carsten Perka, Tobias Winkler, Sebastian Pokutta, Claudia Weiß, Matthias Pumberger

    Published 2025-02-01
    “…To test the predictive performance of the selected features, we trained a Random Forest Classifier and evaluated model performance using standard metrics. …”
    Get full text
    Article
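    A hedged sketch of the evaluation step described in the entry above: train a Random Forest on an already-selected feature subset and report standard classification metrics; the synthetic data and split are placeholders for the clinical features.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=400, n_features=12, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        proba = clf.predict_proba(X_te)[:, 1]
        print("accuracy:", accuracy_score(y_te, pred))
        print("F1:      ", f1_score(y_te, pred))
        print("ROC AUC: ", roc_auc_score(y_te, proba))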
  4. 1344

    In-Season Potato Nitrogen Prediction Using Multispectral Drone Data and Machine Learning by Ehsan Chatraei Azizabadi, Mohamed El-Shetehy, Xiaodong Cheng, Ali Youssef, Nasem Badreldin

    Published 2025-05-01
    “…This study evaluated the performance of three machine learning (ML) models—Random Forest (RF), Support Vector Machine (SVM), and Gradient Boosting Regression (GBR)—for predicting potato N status and examined the impact of feature selection techniques, including Partial Least Squares Regression (PLSR), Boruta, and Recursive Feature Elimination (RFE). …”
    Get full text
    Article
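    As a sketch of one model/selector pairing from the entry above, Recursive Feature Elimination (RFE) wrapped around a random forest regressor; the synthetic data stands in for the multispectral drone features and is not the study's dataset.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.feature_selection import RFE

        X, y = make_regression(n_samples=300, n_features=20, n_informative=6, random_state=1)

        # RFE repeatedly drops the least important feature until 6 remain.
        selector = RFE(RandomForestRegressor(n_estimators=200, random_state=1),
                       n_features_to_select=6, step=1).fit(X, y)
        print("selected feature indices:", np.flatnonzero(selector.support_))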
  5. 1345

    SAR Target Recognition With Image Generation and Azimuth Angle Feature Constraints by Deliang Xiang, Ye Liu, Jianda Cheng, Xinyu Lu, Yuzhen Xie, Dongdong Guan

    Published 2025-01-01
    “…Experiments conducted on the moving and stationary target acquisition and recognition and a self-collected dataset demonstrate that our method outperforms existing baselines in structural detail preservation and feature distribution quality. When the generated data are used to retrain the recognition model, target recognition accuracy improves by 1.69% and 3.03%, respectively, validating the effectiveness of the proposed generation strategy in boosting downstream automatic target recognition performance.…”
    Get full text
    Article
  6. 1346

    A Linguistic Features-Based Approach for the Functional Analysis of Disinformation in Spanish by Eduardo Puraivan, Fabian Riquelme, Rene Venegas

    Published 2025-01-01
    “…Moreover, these models demonstrate strong performance when the same features are applied to a different dataset and continue to perform well when the feature selection is adjusted to fit the new context.…”
    Get full text
    Article
  7. 1347

    Consensus Guided Multi-View Unsupervised Feature Selection with Hybrid Regularization by Yifan Shi, Haixin Zeng, Xinrong Gong, Lei Cai, Wenjie Xiang, Qi Lin, Huijie Zheng, Jianqing Zhu

    Published 2025-06-01
    “…A hybrid regularization strategy incorporating the L2,1-norm and the Frobenius norm is introduced into the feature selection objective function, which not only promotes feature sparsity but also effectively prevents overfitting, thereby improving the stability of the model. …”
    Get full text
    Article
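    As a rough illustration of what such a hybrid regularizer typically looks like (a generic form under my assumption, not necessarily the authors' exact objective, with W a feature-projection matrix and X, Y the data and pseudo-label matrices):

        \min_{W}\; \lVert XW - Y \rVert_F^2 \;+\; \alpha \lVert W \rVert_{2,1} \;+\; \beta \lVert W \rVert_F^2,
        \qquad \lVert W \rVert_{2,1} = \sum_{i} \Big( \sum_{j} W_{ij}^2 \Big)^{1/2}

    The L2,1 term drives entire rows of W toward zero (discarding the corresponding features), while the Frobenius term keeps the remaining weights small, which is the overfitting control mentioned in the snippet.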
  8. 1348

    Multiscale Feature Reconstruction and Interclass Attention Weighting for Land Cover Classification by Zongqian Zhan, Zirou Xiong, Xin Huang, Chun Yang, Yi Liu, Xin Wang

    Published 2024-01-01
    “…In addition, compared with other state-of-the-art models, our method can achieve similar or even better classification results, yet offer superior inference performance.…”
    Get full text
    Article
  9. 1349
  10. 1350

    Feature selection based on Mahalanobis distance for early Parkinson disease classification by Mustafa Noaman Kadhim, Dhiah Al-Shammary, Ahmed M. Mahdi, Ayman Ibaida

    Published 2025-01-01
    “…Significant improvements in classification performance were observed across all models. On the "Parkinson Disease Classification Dataset", the feature set was reduced from 22 to 11 features, resulting in accuracy improvements ranging from 10.17% to 20.34%, with the K-Nearest Neighbors (KNN) classifier achieving the highest accuracy of 98.31%. …”
    Get full text
    Article
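    One simple way to read the entry above as code, assuming a per-feature filter: score each feature by the Mahalanobis distance between the two class-conditional distributions (for a single feature this reduces to |mu1 - mu0| / pooled std), keep the top 11 of 22, and cross-validate a KNN classifier. This is an illustrative reading, not the authors' exact algorithm.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=500, n_features=22, n_informative=8, random_state=0)

        mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        pooled_std = np.sqrt(0.5 * (X[y == 0].var(axis=0) + X[y == 1].var(axis=0))) + 1e-12
        scores = np.abs(mu1 - mu0) / pooled_std      # per-feature class separation
        keep = np.argsort(scores)[::-1][:11]         # 22 -> 11 features, as in the entry

        knn = KNeighborsClassifier(n_neighbors=5)
        print(cross_val_score(knn, X[:, keep], y, cv=5).mean())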
  11. 1351

    Adaptive deep feature representation learning for cross-subject EEG decoding by Shuang Liang, Linzhe Li, Wei Zu, Wei Feng, Wenlong Hang

    Published 2024-12-01
    “…Methods: We propose a novel adaptive deep feature representation (ADFR) framework to improve the cross-subject EEG classification performance through learning transferable EEG feature representations. …”
    Get full text
    Article
  12. 1352

    Two-level feature selection method based on SVM for intrusion detection by Xiao-nian WU, Xiao-jin PENG, Yu-yang YANG, Kun FANG

    Published 2015-04-01
    “…To select optimized features for intrusion detection, a two-level feature selection method based on support vector machines was proposed. The method defines a feature evaluation value, the ratio of detection rate to false alarm rate, as its selection criterion. First, in the filter stage, noise and irrelevant features are removed to reduce the feature dimension using Fisher score and information gain, respectively. A crossing feature subset is then obtained from the two filtered feature sets. Finally, in the wrapper stage, the sequential backward selection algorithm combined with a support vector machine selects the optimal feature subset from the crossing subset. Simulation results show that the selected optimal feature subset yields better classification performance and effectively reduces the system's modeling and testing time.…”
    Get full text
    Article
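    A rough sketch of the two-level pipeline in the entry above, with the ANOVA F-score standing in for the Fisher score and mutual information for information gain; the thresholds, subset sizes, and data are placeholders rather than the paper's settings.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import f_classif, mutual_info_classif, SequentialFeatureSelector
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

        # Level 1: two filter rankings, then the "crossing" (intersected) subset.
        top_f  = np.argsort(f_classif(X, y)[0])[::-1][:20]
        top_mi = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1][:20]
        crossed = np.intersect1d(top_f, top_mi)

        # Level 2: wrapper-mode backward selection with an SVM.
        sbs = SequentialFeatureSelector(SVC(kernel="rbf"), direction="backward",
                                        n_features_to_select=8, cv=3).fit(X[:, crossed], y)
        print("final features:", crossed[sbs.get_support()])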
  13. 1353

    YOLO-SWD—An Improved Ship Recognition Algorithm for Feature Occlusion Scenarios by Ruyan Zhou, Mingkang Gu, Haiyan Pan

    Published 2025-03-01
    “…This study aims to enhance the accuracy and robustness of ship recognition by improving deep learning-based object detection models, enabling the algorithm to perform ship detection and recognition tasks effectively in feature-occluded scenarios. …”
    Get full text
    Article
  14. 1354

    Research on filter-based adversarial feature selection against evasion attacks by Qimeng HUANG, Miaomiao WU, Yun LI

    Published 2023-07-01
    “…With the rapid development and widespread application of machine learning technology, its security has attracted increasing attention, leading to a growing interest in adversarial machine learning. In adversarial scenarios, machine learning techniques are threatened by attacks that manipulate a small number of samples to induce misclassification, with serious consequences in domains such as spam detection, traffic signal recognition, and network intrusion detection. An evaluation criterion for filter-based adversarial feature selection was proposed, based on the minimum redundancy and maximum relevance (mRMR) method, while considering security metrics against evasion attacks. Additionally, a robust adversarial feature selection algorithm named SDPOSS was introduced, based on the decomposition-based Pareto optimization for subset selection (DPOSS) algorithm. SDPOSS does not depend on subsequent models and effectively handles large-scale, high-dimensional feature spaces. Experimental results demonstrate that as the number of decompositions increases, the runtime of SDPOSS decreases linearly while it maintains excellent classification performance. Moreover, SDPOSS exhibits strong robustness against evasion attacks, providing new insights for adversarial machine learning.…”
    Get full text
    Article
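    For context, a minimal greedy mRMR selector (the relevance-minus-redundancy criterion the entry above builds on); the evasion-attack security metric used by SDPOSS is not modeled here, and the data, discretization, and subset size k are placeholders.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.metrics import mutual_info_score
        from sklearn.preprocessing import KBinsDiscretizer

        X, y = make_classification(n_samples=400, n_features=15, n_informative=5, random_state=0)
        Xd = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile").fit_transform(X)

        relevance = mutual_info_classif(Xd, y, discrete_features=True, random_state=0)
        selected, remaining, k = [], list(range(Xd.shape[1])), 5
        while len(selected) < k:
            # Pick the feature maximizing relevance minus average redundancy
            # with the features already selected.
            def mrmr_score(j):
                red = np.mean([mutual_info_score(Xd[:, j], Xd[:, s]) for s in selected]) if selected else 0.0
                return relevance[j] - red
            best = max(remaining, key=mrmr_score)
            selected.append(best)
            remaining.remove(best)
        print("mRMR subset:", selected)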
  15. 1355

    CNN-based salient features in HSI image semantic target prediction by Vishal Srivastava, Bhaskar Biswas

    Published 2020-04-01
    “…Therefore in this work, we have extracted the informative features from different CNN models for the benchmark HSI datasets. …”
    Get full text
    Article
  16. 1356

    Bio inspired feature selection and graph learning for sepsis risk stratification by D. Siri, Raviteja Kocherla, Sudharshan Tumkunta, Pamula Udayaraju, Krishna Chaitanya Gogineni, Gowtham Mamidisetti, Nanditha Boddu

    Published 2025-05-01
    “…Using the MIMIC-IV dataset, we employ the Wolverine Optimization Algorithm (WoOA) to select clinically relevant features, followed by a Generative Pre-Training Graph Neural Network (GPT-GNN) that models complex patient relationships through self-supervised learning. …”
    Get full text
    Article
  17. 1357

    Keywords, morpheme parsing and syntactic trees: features for text complexity assessment by Dmitry A. Morozov, Ivan A. Smal, Timur A. Garipov, Anna V. Glazkova

    Published 2024-06-01
    “…Using an extensive set of syntactic features improved, in most cases, the performance of the neural network models compared with the previously described feature set.…”
    Get full text
    Article
  18. 1358

    Automated prediction of fibroblast phenotypes using mathematical descriptors of cellular features by Alex Khang, Abigail Barmore, Georgios Tseropoulos, Kaustav Bera, Dilara Batan, Kristi S. Anseth

    Published 2025-03-01
    “…We train and validate models on features extracted from over 3000 primary heart valve interstitial cells and test their predictive performance on cells treated with the small molecule drugs 5-azacytidine and bisperoxovanadium (HOpic), which inhibited and promoted myofibroblast activation, respectively. …”
    Get full text
    Article
  19. 1359

    Physically Based Dimensionless Features for Pluvial Flood Mapping With Machine Learning by Mark S. Bartlett, Jared VanBlitterswyk, Martha Farella, Jinshu Li, Curtis Smith, Anthony J. Parolari, Lalitha Krishnamoorthy, Assaad Mrad

    Published 2025-04-01
    “…This is demonstrated by incorporating them as features in a logistic regression model for delineating flood extents. …”
    Get full text
    Article
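    A small sketch of the final step mentioned in the entry above: dimensionless predictors feeding a logistic regression that outputs per-cell flood probability; the two features and the synthetic labels are invented placeholders, not the paper's physically based features.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 3, n),   # e.g. rainfall depth / storage capacity (dimensionless)
            rng.uniform(0, 1, n),   # e.g. normalized topographic index
        ])
        y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.5, n) > 2.0).astype(int)  # synthetic flood label

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))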
  20. 1360

    An Indoor Scene Classification Method for Service Robot Based on CNN Feature by Shaopeng Liu, Guohui Tian

    Published 2019-01-01
    “…To further evaluate our method, test experiments were conducted on unknown scene images from the SUN 397 dataset; the models trained on different datasets achieved 94.34% and 79.80% test accuracy, respectively, showing that the proposed method performs well in indoor scene classification.…”
    Get full text
    Article