Distilling Diverse Knowledge for Deep Ensemble Learning

Bidirectional knowledge distillation improves network performance by sharing knowledge among networks while they are trained together. Performance is further improved by ensembling the trained networks at inference. However, the performance improvement achieved by...
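The two mechanisms the abstract names can be sketched concretely. Below is a minimal, framework-free illustration (not the paper's exact formulation): bidirectional distillation is assumed here to be a symmetric KL divergence that pulls each network's predictive distribution toward the other's during training, and the inference-time ensemble is assumed to average the networks' softmax outputs. All function names are illustrative.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q, eps=1e-12):
    # KL(p || q) for two discrete probability distributions.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def mutual_distillation_loss(logits_a, logits_b):
    # Bidirectional (symmetric) distillation term: each network is
    # pulled toward the other's predictive distribution. In practice
    # this is added to each network's task loss during joint training.
    p, q = softmax(logits_a), softmax(logits_b)
    return kl_div(p, q) + kl_div(q, p)

def ensemble_predict(all_logits):
    # Inference-time ensemble: average the softmax outputs of all networks.
    probs = [softmax(l) for l in all_logits]
    n = len(probs)
    return [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]
```

The symmetric loss is zero when the two networks already agree, so gradient pressure comes only from disagreement; the averaged-softmax ensemble is the standard way the abstract's "ensemble of multiple networks during inference" is realized.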


Bibliographic Details
Main Authors: Naoki Okamoto, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11028994/