Text Embedding Augmentation Based on Retraining With Pseudo-Labeled Adversarial Embedding

Pre-trained language models (LMs) have been shown to achieve outstanding performance on various natural language processing tasks; however, these models contain a very large number of parameters in order to handle large-scale text corpora during pre-training, and thus they entail the risk...

Bibliographic Details
Main Authors: Myeongsup Kim, Pilsung Kang
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9680703/