MoCoUTRL: a momentum contrastive framework for unsupervised text representation learning
This paper presents MoCoUTRL: a Momentum Contrastive Framework for Unsupervised Text Representation Learning. The model improves two aspects of recently popular contrastive learning algorithms in natural language processing (NLP). First, MoCoUTRL employs multi-granularity semantic contrastive lea...
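The abstract describes a MoCo-style setup: a key encoder updated as an exponential moving average of the query encoder, with an InfoNCE loss over a queue of negative keys. The sketch below is a minimal, generic illustration of those two ingredients, not the paper's actual implementation; the function names, the momentum value `m=0.999`, and the temperature `0.07` are assumptions borrowed from the original MoCo recipe.

```python
import numpy as np

def momentum_update(query_params, key_params, m=0.999):
    """EMA update: the key encoder slowly tracks the query encoder."""
    return [m * k + (1.0 - m) * q for q, k in zip(query_params, key_params)]

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """InfoNCE loss for one query.

    q:     (d,)  query embedding
    k_pos: (d,)  positive key embedding (e.g. an augmented view of q's text)
    queue: (K, d) queued negative keys from earlier batches
    """
    # Cosine similarity via L2 normalisation.
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    queue = queue / np.linalg.norm(queue, axis=1, keepdims=True)

    l_pos = q @ k_pos          # similarity to the positive key
    l_neg = queue @ q          # similarities to the K negatives
    logits = np.concatenate(([l_pos], l_neg)) / temperature
    logits -= logits.max()     # numerical stability before softmax

    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])   # cross-entropy with the positive at index 0
```

A well-trained encoder pulls the positive key toward the query, so the loss for an aligned positive is far lower than for a random one; the momentum update keeps the queued keys consistent across batches.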
Saved in:
| Main Authors: | Ao Zou, Wenning Hao, Dawei Jin, Gang Chen, Feiyan Sun |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Taylor & Francis Group, 2023-12-01 |
| Series: | Connection Science |
| Subjects: | |
| Online Access: | http://dx.doi.org/10.1080/09540091.2023.2221406 |
Similar Items
- Unsupervised Canine Emotion Recognition Using Momentum Contrast
  by: Aarya Bhave, et al.
  Published: (2024-11-01)
- Dual Context Representation Learning Framework for Entity Alignment
  by: Bo Cheng, et al.
  Published: (2025-04-01)
- Swin Transformer and Momentum Contrast (MoCo) in Leukemia Diagnostics: A New Paradigm in AI-Driven Blood Cell Cancer Classification
  by: Eshika Jain, et al.
  Published: (2025-01-01)
- Momentum and calendar effects
  by: Tomasz Wojtowicz
  Published: (2012-11-01)
- CGRclust: Chaos Game Representation for twin contrastive clustering of unlabelled DNA sequences
  by: Fatemeh Alipour, et al.
  Published: (2024-12-01)