Showing 1 - 20 results of 772 for search 'Deep knowledge training'
  1.

    Domain knowledge-infused pre-trained deep learning models for efficient white blood cell classification by P. Jeneessha, Vinoth Kumar Balasubramanian

    Published 2025-05-01
    “…This paper aims to utilize domain knowledge and image data to improve the classification performance of pre-trained models, namely Inception V3, DenseNet 121, ResNet 50, MobileNet V2, and VGG 16. …”
    Article
  2.

    Intra-day dispatch method via deep reinforcement learning based on pre-training and expert knowledge by Yanbo Chen, Qintao Du, Huayu Dong, Tao Huang, Jiahao Ma, Zitao Xu, Zhihao Wang

    Published 2025-08-01
    “…At the same time, expert knowledge is embedded in the deep reinforcement learning to guide the training of the agent. …”
    Article
  5.

    Optimizing Knowledge Transfer Graph for Deep Collaborative Learning by Soma Minami, Naoki Okamoto, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi

    Published 2025-01-01
    “…Knowledge transfer among multiple networks, using predicted probabilities or intermediate-layer activations, has evolved significantly through extensive manual design, ranging from simple teacher-student approaches (for example, knowledge distillation) to bidirectional cohort methods (for example, deep mutual learning). …”
    Article
  6.

    The Impact of Integrating Shallow and Deep Information on Knowledge Distillation by Yilin Miao, Yuhong Tang, Huangliang Ren, Jianjun Li

    Published 2025-01-01
    “…It effectively addresses issues such as gradient vanishing and degradation in deep neural networks, making the training process more manageable. …”
    Article
  7.

    Boosting Deep Reinforcement Learning with Semantic Knowledge for Robotic Manipulators by Lucía Güitta-López, Vincenzo Suriani, Jaime Boal, Álvaro J. López-López, Daniele Nardi

    Published 2025-06-01
    “…Our architecture combines KGEs with visual observations, enabling the agent to exploit environmental knowledge during training. Experimental validation with robotic manipulators in environments featuring both fixed and randomized target attributes demonstrates that our method achieves up to 60% reduction in learning time and improves task accuracy by approximately 15 percentage points, without increasing training time or computational complexity. …”
    Article
  8.

    Knowledge-Based Deep Learning for Time-Efficient Inverse Dynamics by Shuhao Ma, Yu Cao, Ian D. Robertson, Chaoyang Shi, Jindong Liu, Zhi-Qiang Zhang

    Published 2025-01-01
    “…In this paper, we propose a knowledge-based deep learning framework for time-efficient inverse dynamic analysis, which can predict muscle activation and muscle forces from joint kinematic data directly while not requiring any label information during model training. …”
    Article
  10.

    Deep Learning for Automatic Image Captioning in Poor Training Conditions by Caterina Masotti, Danilo Croce, Roberto Basili

    Published 2018-06-01
    “…The disadvantage that comes with this straightforward result is that this approach requires the existence of large-scale corpora, which are not available for many languages. This paper introduces a simple methodology to automatically acquire a large-scale corpus of 600 thousand image/sentence pairs in Italian. To the best of our knowledge, this corpus has been used to train one of the first neural captioning systems for the same language. …”
    Article
  11.

    Asymmetrical estimator for training encapsulated deep photonic neural networks by Yizhi Wang, Minjia Chen, Chunhui Yao, Jie Ma, Ting Yan, Richard Penty, Qixiang Cheng

    Published 2025-03-01
    “…Despite backpropagation (BP)-based training algorithms being the industry standard for their robustness, generality, and fast gradient convergence for digital training, existing PNN-BP methods rely heavily on accurate intermediate state extraction or extensive computational resources for deep PNNs (DPNNs). …”
    Article
  12.

    Enhanced intelligent train operation algorithms for metro train based on expert system and deep reinforcement learning. by Yunhu Huang, Wenzhu Lai, Dewang Chen, Geng Lin, Jiateng Yin

    Published 2025-01-01
    “…In this paper, expert knowledge is combined with a deep reinforcement learning algorithm (Proximal Policy Optimization, PPO), and two enhanced intelligent train operation algorithms (EITO) are proposed. …”
    Article
  13.

    Exercise Semantic Embedding for Knowledge Tracking in Open Domain by Zhi Cheng, Jinlong Li

    Published 2025-04-01
    Subjects: “…deep learning…”
    Article
  15.

    Osteosarcoma knowledge graph question answering system: deep learning-based knowledge graph and large language model fusion by Lulu Zhang, Weisong Zhao, Zhiwei Cheng, Yafei Jiang, Kai Tian, Jia Shi, Zhenyu Jiang, Yingqi Hua

    Published 2025-05-01
    “…The extracted elements were synthesized to create the OSKG, resulting in a deep learning-based knowledge base to explore osteosarcoma pathogenesis and molecular mechanisms. …”
    Article
  16.

    Nurses’ knowledge, attitudes, and practices regarding deep vein thrombosis and the nursing management by Suzhen Hu, Jinyin Huang, Hua Wang, Yan Wang

    Published 2025-05-01
    “…To assess the knowledge, attitudes, and practices (KAP) of nurses regarding deep vein thrombosis (DVT) and its nursing management. …”
    Article
  17.

    Short video preloading via domain knowledge assisted deep reinforcement learning by Yuhong Xie, Yuan Zhang, Tao Lin, Zipeng Pan, Si-Ze Qian, Bo Jiang, Jinyao Yan

    Published 2024-12-01
    “…In this paper, we propose an end-to-end Deep reinforcement learning framework with Action Masking called DAM that leverages domain knowledge to learn an optimal policy for short video preloading. …”
    Article
  19.

    Optimizing Deep Learning Models for Resource‐Constrained Environments With Cluster‐Quantized Knowledge Distillation by Niaz Ashraf Khan, A. M. Saadman Rafat

    Published 2025-05-01
    “…To address these issues, we propose Cluster‐Quantized Knowledge Distillation (CQKD), a novel framework that integrates structured pruning with knowledge distillation, incorporating cluster‐based weight quantization directly into the training loop. …”
    Article
  20.

    Method of adaptive knowledge distillation from multi-teacher to student deep learning models by Oleksandr Chaban, Eduard Manziuk, Pavlo Radiuk

    Published 2025-08-01
    “…In this work, we improve multi-teacher knowledge distillation by developing a holistic framework, enhanced multi-teacher knowledge distillation (EMTKD), that synergistically integrates three components: domain adaptation within teacher training, an instance-specific adaptive weighting mechanism for knowledge fusion, and semi-supervised learning to leverage unlabeled data. …”
    Article