Creation of Reliable Relevance Judgments in Information Retrieval Systems Evaluation Experimentation through Crowdsourcing: A Review
A test collection is used to evaluate information retrieval systems in laboratory-based evaluation experiments. In the classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still being challenged in per...
| Main Authors: | Parnia Samimi, Sri Devi Ravana |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2014-01-01 |
| Series: | The Scientific World Journal |
| Online Access: | http://dx.doi.org/10.1155/2014/135641 |
Similar Items

- Improving the accuracy of the information retrieval evaluation process by considering unjudged document lists from the relevant judgment sets
  by: Minnu Helen Joseph, et al.
  Published: (2024-09-01)
- Information Retrieving Through Sensors for Smart Parking
  by: Ibrahim Mekawy, et al.
  Published: (2022-03-01)
- Evaluation criteria for information retrieval systems
  by: Julian Warner
  Published: (1999-01-01)
- Lasater clinical judgment rubric in nursing education: a Turkish validity and reliability study
  by: Esra Sezer, et al.
  Published: (2025-01-01)
- Task dimensions of user evaluations of information retrieval systems
  by: F. C. Johnson, et al.
  Published: (2003-01-01)