An Overview of Deep Neural Networks for Few-Shot Learning

Recent advancements in deep learning have led to significant breakthroughs across various fields. However, these methods often require extensive labeled data for optimal performance, posing challenges and high costs in practical applications. Few-Shot Learning (FSL) is introduced to address this issue. FSL aims to learn effectively from limited labeled samples and to generalize well during testing. This paper provides a comprehensive survey of FSL, reviewing prominent deep learning-based approaches. We define FSL through a literature review in machine learning and specify the "N-way K-shot" paradigm to distinguish it from related learning challenges. Next, we classify FSL methods by analyzing the Vapnik–Chervonenkis dimension of neural networks. This analysis underscores that a model needs abundant labeled examples and a finite hypothesis space to generalize well to new, unseen instances. We categorize FSL methods into three types based on strategies that increase labeled samples or reduce the hypothesis space: data augmentation, model-based methods, and algorithm-optimized approaches. Using this taxonomy, we review various methods and evaluate their strengths and weaknesses, and we compare these techniques on benchmark datasets. Moreover, we delve into specific sub-tasks within FSL, such as applications in computer vision and robotics. Lastly, we examine the limitations, unique challenges, and future directions of FSL, aiming to offer a thorough understanding of this rapidly evolving field.
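The "N-way K-shot" paradigm mentioned in the abstract refers to evaluating a model on episodes that each contain N classes with only K labeled (support) examples per class, plus held-out query examples. A minimal, hypothetical sketch of episode sampling, assuming a dataset represented as a dict from class label to examples (all names and the toy data are illustrative, not from the paper):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=3, seed=None):
    """Draw one N-way K-shot episode.

    dataset: dict mapping class label -> list of examples.
    Returns (support, query) as lists of (example, label) pairs.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)  # choose N distinct classes
    support, query = [], []
    for label in classes:
        # Draw K support + n_query query examples per class, without overlap.
        examples = rng.sample(dataset[label], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# Toy dataset: 10 classes, 20 examples each.
data = {f"class_{c}": [f"img_{c}_{i}" for i in range(20)] for c in range(10)}
support, query = sample_episode(data, n_way=5, k_shot=1, n_query=3, seed=0)
print(len(support), len(query))  # 5 support pairs (5x1), 15 query pairs (5x3)
```

A model is adapted (or conditioned) on the support set and evaluated on the query set; averaging accuracy over many such episodes yields the standard few-shot benchmark score.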

Bibliographic Details
Main Authors: Juan Zhao, Lili Kong, Jiancheng Lv (College of Computer Science, Sichuan University, Chengdu 610065, China)
Format: Article
Language: English
Published: Tsinghua University Press, 2025-02-01
Series: Big Data Mining and Analytics, Vol. 8, No. 1, pp. 145-188
ISSN: 2096-0654
DOI: 10.26599/BDMA.2024.9020049
Subjects: few-shot learning (FSL); meta-learning; data augmentation; prior knowledge; parameter optimization
Online Access: https://www.sciopen.com/article/10.26599/BDMA.2024.9020049