Efficient Structured Prediction with Transformer Encoders

Finetuning is a useful method for adapting Transformer-based text encoders to new tasks but can be computationally expensive for structured prediction tasks that require tuning at the token level. Furthermore, finetuning is inherently inefficient in updating all base model parameters, which prevent...

Bibliographic Details
Main Author: Ali Basirat
Format: Article
Language: English
Published: Linköping University Electronic Press, 2024-12-01
Series: Northern European Journal of Language Technology
Online Access: https://nejlt.ep.liu.se/article/view/4932