Sensitivity Analysis of a BERT-based scholarly recommendation system
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2022-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Subjects: | |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/130595 |
| Summary: | With the exponential growth of publicly available datasets, a scholarly recommendation system for datasets would be an essential tool in the field of information filtering. Recommending datasets to users can be formulated as a classification problem on which deep learning models can be carefully trained. In such a case, when preparing training data for the learning models, one needs to consider different ratios of false and true pairs; therefore, a sensitivity analysis is necessary. In this work, we conduct a sensitivity analysis using different class ratios on a deep learning model (BERT) for recommending datasets. We found that our BERT-based recommender model is relatively robust under recommender metrics such as Mean Reciprocal Rank (MRR)@k, Recall@k, etc., except in the extreme class imbalance case (1:5000). Therefore, we conclude that a moderate ratio in the random negative sampling scheme (1:10 in our case) is reasonable, sufficient, and time-efficient for training the recommendation system. |
| ISSN: | 2334-0754, 2334-0762 |
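The summary's training setup and evaluation metrics can be sketched in code. The snippet below is an illustrative reconstruction, not the authors' implementation: `build_training_pairs` implements random negative sampling at a configurable true:false ratio (e.g. the 1:10 the abstract recommends), and `mrr_at_k` / `recall_at_k` compute the standard MRR@k and Recall@k metrics named in the summary. All function and variable names are hypothetical.

```python
import random

def build_training_pairs(true_pairs, all_datasets, neg_ratio=10, seed=0):
    """Random negative sampling: for each true (user, dataset) pair,
    sample `neg_ratio` datasets the user has not interacted with.
    A 1:neg_ratio positive-to-negative class ratio results."""
    rng = random.Random(seed)
    positives = set(true_pairs)
    examples = [(u, d, 1) for u, d in true_pairs]  # label 1 = true pair
    for u, _ in true_pairs:
        sampled = 0
        while sampled < neg_ratio:
            cand = rng.choice(all_datasets)
            if (u, cand) not in positives:  # avoid sampling a true pair
                examples.append((u, cand, 0))  # label 0 = false pair
                sampled += 1
    return examples

def mrr_at_k(ranked, relevant, k):
    """MRR@k: reciprocal rank of the first relevant item in the top k."""
    for rank, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked, relevant, k):
    """Recall@k: fraction of relevant items retrieved in the top k."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0
```

In a full pipeline, the (user, dataset, label) triples would be scored by a fine-tuned BERT classifier; the sensitivity analysis then repeats training with different `neg_ratio` values and compares MRR@k and Recall@k across runs.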