Method for Knowledge Transfer via Multi-Task Semi-Supervised Self-Paced Learning

Bibliographic Details
Main Authors: Yao Zhao, Hongying Liu, Huaxian Pan, Zhen Song, Chunting Liu, Anni Wei, Baoshuang Zhang, Wei Lu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access:https://ieeexplore.ieee.org/document/11017642/
Description
Summary: Adequate labeled data is essential for learning a reliable and generalizable model in many machine learning tasks. However, labeled data is becoming scarce and costly to obtain, which has spurred consistent interest in knowledge transfer techniques. Semi-supervised and multi-task learning are therefore combined to alleviate this challenge, but the complexity of the task must also be considered. To achieve more effective knowledge transfer with limited labeled data, we propose a unified multi-task semi-supervised self-paced learning (MSSP) scheme in this paper. MSSP naturally integrates the common structures shared by multiple related tasks with the manifold structure regularized by unlabeled data, enabling the knowledge transferred from the feature space and the instance space to complement and constrain each other. This leads to faster and more accurate searches in the underlying hypothesis space. We adopt the alternating convex search (ACS) method to solve MSSP: each iteration first trains the prediction model on a fixed set of labeled instances and then updates the labeled training set by adding more complex instances. With the aid of a self-controlled learning pace, a more robust and globally optimal model can be gradually constructed. Experimental results on several benchmark datasets show that our method achieves a performance gain of 3%-15% in classification accuracy over baseline algorithms, along with significant advantages in convergence speed.
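The alternating loop the summary describes (train on the current set of "easy" labeled instances, then grow that set with more complex ones) can be sketched as below. This is a minimal illustrative self-paced routine using plain logistic regression and a hard easiest-k selection rule, not the authors' MSSP solver; the function name, the `frac0` pace parameter, and the linear schedule are all hypothetical choices for the sketch.

```python
import numpy as np

def self_paced_train(X, y, n_rounds=5, frac0=0.3, lr=0.1, epochs=200):
    """Sketch of self-paced learning: in each round, rank samples by their
    current loss ("complexity"), train only on the easiest fraction, and
    relax the pace so harder samples are admitted in later rounds."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for r in range(n_rounds):
        # per-sample logistic loss under the current model measures "easiness"
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1.0 - p + 1e-12))
        # pace schedule: start with the easiest frac0 of samples, end with all
        frac = frac0 + (1.0 - frac0) * r / (n_rounds - 1)
        k = max(1, int(frac * n))
        v = np.zeros(n)
        v[np.argsort(loss)[:k]] = 1.0  # hard (0/1) self-paced weights
        # gradient descent on the currently selected subset only
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            g = v * (p - y)
            w -= lr * (X.T @ g) / k
            b -= lr * g.sum() / k
    return w, b
```

A hard 0/1 weighting is the simplest self-paced regularizer; soft weightings that shade between 0 and 1 follow the same alternating pattern of fixing the model to pick instances, then fixing the instances to update the model.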
ISSN: 2169-3536