Labelling Training Samples Using Crowdsourcing Annotation for Recommendation
Supervised learning-based recommendation models, which depend on a sufficient supply of high-quality training samples, have been widely applied in many domains. In the era of big data, with the explosive growth of data volume, training samples must be labelled in a timely and accurate manner to guaran...
| Main Authors: | Qingren Wang, Min Zhang, Tao Tao, Victor S. Sheng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2020-01-01 |
| Series: | Complexity |
| Online Access: | http://dx.doi.org/10.1155/2020/1670483 |
Similar Items
- Speech Emotion Recognition and Serious Games: An Entertaining Approach for Crowdsourcing Annotated Samples
  by: Lazaros Matsouliadis, et al.
  Published: (2025-03-01)
- Boosting Crowdsourced Annotation Accuracy: Small Loss Filtering and Augmentation-Driven Training
  by: Yanming Fu, et al.
  Published: (2024-01-01)
- Developer Recommendation and Team Formation in Collaborative Crowdsourcing Platforms
  by: Yasir Munir, et al.
  Published: (2025-01-01)
- Incorporating Crowdsourced Annotator Distributions into Ensemble Modeling to Improve Classification Trustworthiness for Ancient Greek Papyri
  by: Graham West, et al.
  Published: (2024-02-01)
- Your Cursor Reveals: On Analyzing Workers’ Browsing Behavior and Annotation Quality in Crowdsourcing Tasks
  by: Pei-Chi Lo, et al.
  Published: (2025-01-01)