Showing 1,081 - 1,100 results of 16,436 for search 'Model performance features', query time: 0.25s
  1. 1081

    Enhancing maize LAI estimation accuracy using unmanned aerial vehicle remote sensing and deep learning techniques by Zhen Chen, Weiguang Zhai, Qian Cheng

    Published 2025-09-01
    “…Moreover, the combination of spectral features, texture features, and crop height using the CNN model achieved the highest accuracy in LAI estimation, with the R2 ranging from 0.83 to 0.88, the RMSE ranging from 0.35 to 0.46, and the rRMSE ranging from 8.73 % to 10.96 %.…”
    Get full text
    Article
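    The entry above reports R2, RMSE, and relative RMSE (rRMSE). As a minimal sketch of the usual definitions (rRMSE taken as RMSE normalised by the mean observed value, expressed in percent), and assuming nothing about the paper's own implementation, the metrics can be computed roughly as follows; all names are illustrative.

        import numpy as np

        def regression_metrics(y_true, y_pred):
            """R^2, RMSE, and relative RMSE (RMSE / mean of observations, in %)."""
            y_true = np.asarray(y_true, dtype=float)
            y_pred = np.asarray(y_pred, dtype=float)
            residuals = y_true - y_pred
            rmse = np.sqrt(np.mean(residuals ** 2))           # root mean squared error
            ss_res = np.sum(residuals ** 2)                   # residual sum of squares
            ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
            r2 = 1.0 - ss_res / ss_tot                        # coefficient of determination
            rrmse = 100.0 * rmse / y_true.mean()              # relative RMSE in percent
            return r2, rmse, rrmse

        # Toy LAI-like example
        r2, rmse, rrmse = regression_metrics([3.1, 4.0, 5.2, 2.8], [3.3, 3.8, 5.0, 3.0])
        print(f"R2={r2:.2f}  RMSE={rmse:.2f}  rRMSE={rrmse:.1f}%")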
  2. 1082

    Numerical Investigation on the Thermo‐Mechanical Performance and Structural Mechanisms of Glass–Glass PV Modules in Standard Fire Conditions by Chiara Bedon, Yu Wang

    Published 2025-08-01
    “…Compared to structural needs under ordinary loads, the fire performance is highly demanding and requires appropriate modeling assumptions, as well as sound performance limits. …”
    Get full text
    Article
  3. 1083
  4. 1084
  5. 1085

    Lightweight rice leaf spot segmentation model based on improved DeepLabv3+ by Jianian Li, Long Gao, Xiaocheng Wang, Jiaoli Fang, Zeyang Su, Yuecong Li, Shaomin Chen

    Published 2025-08-01
    “…First, the lightweight feature extraction network MobileNetV3_Large (MV3L) was adopted as the backbone of the model. …”
    Get full text
    Article
  6. 1086

    Balancing accuracy and cost in machine learning models for detecting medial vascular calcification in chronic kidney disease: a pilot study by Urszula Bialonczyk, Malgorzata Debowska, Lu Dai, Abdul Rashid Qureshi, Leon Bobrowski, Magnus Soderberg, Bengt Lindholm, Peter Stenvinkel, Tomasz Lukaszuk, Jan Poleszczuk

    Published 2025-05-01
    “…However, modest performance improvements may not justify higher costs, underscoring the importance of considering cost-effectiveness when selecting classification models.…”
    Get full text
    Article
  7. 1087
  8. 1088

    Exploring the Impact of Species Participation Levels on the Performance of Dominant Plant Identification Models in the Sericite–Artemisia Desert Grassland by Using Deep Lear... by Wenhao Liu, Guili Jin, Wanqiang Han, Mengtian Chen, Wenxiong Li, Chao Li, Wenlin Du

    Published 2025-07-01
    “…The optimal index factor (OIF) was employed to synthesize feature band images, which were subsequently used as input for the DeepLabv3p, PSPNet, and UNet deep learning models in order to assess the influence of species participation on classification accuracy. …”
    Get full text
    Article
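    The optimal index factor (OIF) named in the entry above is a standard band-selection heuristic in remote sensing: for a candidate band combination it divides the sum of the bands' standard deviations by the sum of the absolute pairwise correlation coefficients, and the combination with the highest OIF is preferred. A minimal sketch under that common definition (not the paper's code; names are illustrative):

        import numpy as np
        from itertools import combinations

        def oif(bands):
            """Optimal Index Factor of one band combination.

            bands: list of 2-D arrays (spectral or texture bands).
            OIF = sum of band standard deviations / sum of |pairwise correlations|.
            A higher value suggests more information with less redundancy.
            """
            flat = [b.ravel() for b in bands]
            std_sum = sum(f.std() for f in flat)
            corr_sum = sum(abs(np.corrcoef(a, b)[0, 1]) for a, b in combinations(flat, 2))
            return std_sum / corr_sum

        # Example: rank every 3-band combination of a synthetic 5-band stack.
        rng = np.random.default_rng(0)
        stack = [rng.normal(size=(64, 64)) for _ in range(5)]
        best = max(combinations(range(5), 3), key=lambda idx: oif([stack[i] for i in idx]))
        print("Best 3-band combination by OIF:", best)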
  9. 1089

    Classification Based on Brain Storm Optimization With Feature Selection by Yu Xue, Yan Zhao, Adam Slowik

    Published 2021-01-01
    “…Therefore, this paper aims at improving the structure of the evolutionary classification model to improve classification performance. Feature selection is an effective method to deal with large datasets. Firstly, we introduce the concept of feature selection and use the different feature subsets to construct the structure of the evolutionary classification model. …”
    Get full text
    Article
  10. 1090

    Encrypted traffic classification method based on convolutional neural network by Rongna XIE, Zhuhong MA, Zongyu LI, Ye TIAN

    Published 2022-12-01
    “…Aiming at the problems of low accuracy, weak generality, and easy privacy violation of traditional encrypted network traffic classification methods, an encrypted traffic classification method based on convolutional neural network was proposed, which avoided relying on original traffic data and prevented overfitting of specific byte structure of the application. According to the data packet size and arrival time information of network traffic, a method to convert the original traffic into a two-dimensional picture was designed. Each cell in the histogram represented the number of packets with corresponding size that arrive at the corresponding time interval, avoiding reliance on packet payloads and privacy violations. The LeNet-5 convolutional neural network model was optimized to improve the classification accuracy. The inception module was embedded for multi-dimensional feature extraction and feature fusion, and the 1×1 convolution was used to control the feature dimension of the output. Besides, the average pooling layer and the convolutional layer were used to replace the fully connected layer to increase the calculation speed and avoid overfitting. The sliding window method was used in the object detection task, and each network unidirectional flow was divided into equal-sized blocks, ensuring that the blocks in the training set and the blocks in the test set in a single session do not overlap and expanding the dataset samples. The classification experiment results on the ISCX dataset show that for the application traffic classification task, the average accuracy rate reaches more than 95%. The comparative experimental results show that the traditional classification method has a significant decrease in accuracy or even fails when the types of training set and test set are different. However, the accuracy rate of the proposed method still reaches 89.2%, which proves that the method is universally suitable for encrypted traffic and non-encrypted traffic. All experiments are based on imbalanced datasets, and the experimental results may be further improved if balanced processing is performed.…”
    Get full text
    Article
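    The core idea in the entry above is converting a flow's packet metadata (size and arrival time only, never payload bytes) into a 2-D histogram image that a CNN can classify. A rough sketch of one plausible packet-to-image conversion follows; the bin counts, window length, and maximum packet size are assumptions for illustration, not the paper's settings.

        import numpy as np

        def flow_to_image(arrival_times, sizes, time_bins=32, size_bins=32,
                          window_seconds=1.0, max_size=1500):
            """Turn one unidirectional flow into a 2-D histogram 'picture'.

            Each cell counts packets whose arrival time falls in a given interval
            and whose size falls in a given range; payloads are never inspected.
            """
            t = np.asarray(arrival_times, dtype=float)
            s = np.asarray(sizes, dtype=float)
            t = t - t.min()                              # times relative to flow start
            hist, _, _ = np.histogram2d(
                t, s,
                bins=[time_bins, size_bins],
                range=[[0.0, window_seconds], [0.0, float(max_size)]],
            )
            return hist.astype(np.float32)               # shape: (time_bins, size_bins)

        # Example with a small synthetic flow
        times = [0.00, 0.01, 0.05, 0.20, 0.21, 0.80]
        sizes = [60, 1500, 1200, 60, 600, 1500]
        img = flow_to_image(times, sizes)
        print(img.shape, img.sum())                      # (32, 32) 6.0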
  11. 1091

    Integrating Shallow and Deep Features for Precision Evaluation of Corn Grain Quality: A Novel Fusion Approach by Kunal Mishra, Santi Kumari Behera, A. Geetha Devi, Prabira Kumar Sethy, Aziz Nanthaamornphong

    Published 2025-06-01
    “…We evaluated 13 pre-trained CNN models, including AlexNet, VGG19, and ResNet, with AlexNet emerging as the top performer, achieving 71% accuracy validated through statistical analysis using Duncan’s multiple range test. …”
    Get full text
    Article
  12. 1092

    Performance of an AI prediction tool for new-onset atrial fibrillation after coronary artery bypass grafting by Hualong Ma, Dalong Chen, Weitao Lv, Qiuying Liao, Jingyi Li, Qinai Zhu, Ying Zhang, Lizhen Deng, Xiaoge Liu, Qinyang Wu, Xianliang Liu, Qiaohong Yang

    Published 2025-03-01
    “…The stacking model achieved superior performance with AUCs 0·931 and F1 scores 0·797 in the independent external validation, outperforming CHA2DS2-VASc, HATCH, and POAF scores (AUC 0·931 vs. 0·713, 0·708, and 0·667; p < 0·05). …”
    Get full text
    Article
  13. 1093

    Feature-based enhanced boosting algorithm for depression detection by Muhammad Sadiq Rohei, Kasturi Dewi Varathan, Shivakumara Palaiahnakote, Nor Badrul Anuar

    Published 2025-07-01
    “…The proposed model covers two pipelines: the feature engineering pipeline, which improves feature quality by selecting the most relevant features, and the classification pipeline, which uses an ensemble approach designed to boost the model’s performance. …”
    Get full text
    Article
  14. 1094

    Contrastive Feature Bin Loss for Monocular Depth Estimation by Jihun Song, Yoonsuk Hyun

    Published 2025-01-01
    “…Recently monocular depth estimation has achieved notable performance using encoder-decoder-based models. These models have utilized the Scale-Invariant Logarithmic (SILog) loss for effective training, leading to significant performance improvements. …”
    Get full text
    Article
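    For reference, the Scale-Invariant Logarithmic (SILog) loss named in the entry above is usually written in the form introduced by Eigen et al. (2014). With d_i the difference between predicted and ground-truth log depth over n valid pixels and a balancing factor lambda (commonly 0.5 or 0.85), one common form is:

        \mathcal{L}_{\mathrm{SILog}}
            = \frac{1}{n}\sum_{i=1}^{n} d_i^{2}
              - \frac{\lambda}{n^{2}}\Bigl(\sum_{i=1}^{n} d_i\Bigr)^{2},
        \qquad d_i = \log \hat{y}_i - \log y_i .

    The second term discounts errors that are consistent across all pixels in log space, which for lambda = 1 makes the loss invariant to a global scaling of the predicted depth.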
  15. 1095

    Effects of feature selection and normalization on network intrusion detection by Mubarak Albarka Umar, Zhanfang Chen, Khaled Shuaib, Yan Liu

    Published 2025-03-01
    “…The RF models also achieved an excellent performance compared to recent works. …”
    Get full text
    Article
  16. 1096

    Research on Chinese patent classification based on structured features by Ran Li, Wangke Yu, Shuhua Wang

    Published 2025-05-01
    “…The proposed PMDI model, MIP model, and classification method based on structured patent text features collectively contribute to a substantial improvement in classification performance, offering significant support for knowledge-driven services such as knowledge retrieval and patent management.…”
    Get full text
    Article
  17. 1097

    Aligning to the teacher: multilevel feature-aligned knowledge distillation by Yang Zhang, Pan He, Chuanyun Xu, Jingyan Pang, Xiao Wang, Xinghai Yuan, Pengfei Lv, Gang Li

    Published 2025-08-01
    “…Knowledge distillation is a technique for transferring knowledge from a (large) teacher model to a (small) student model. Usually, the features of the teacher model contain richer information, while the features of the student model carry less information. …”
    Get full text
    Article
  18. 1098

    Embedded feature selection using dual-network architecture by Abderrahim Abbassi, Arved Dörpinghaus, Niklas Römgens, Tanja Grießmann, Raimund Rolfes

    Published 2025-09-01
    “…This mask is applied to a shifted version of the original features, serving as input to the task model. The task model then uses the selected features to perform the target supervised task. …”
    Get full text
    Article
  19. 1099
  20. 1100

    Affective feature knowledge interaction for empathetic conversation generation by Ensi Chen, Huan Zhao, Bo Li, Xupeng Zha, Haoqian Wang, Song Wang

    Published 2022-12-01
    “…In this paper, we propose a novel affective feature knowledge interactive model named AFKI, to enhance response generation performance, which enriches conversation history to obtain emotional interactive context by leveraging fine-grained emotional features and commonsense knowledge. …”
    Get full text
    Article