Transformer-based similarity learning for re-identification of chickens
Continuous animal monitoring relies heavily on the ability to re-identify individuals over time, which is essential for both short-term tracking, such as video analysis, and long-term monitoring of animal conditions. Traditionally, livestock re-identification is approached using tags or sensors, which require additional handling effort and potentially impact animal welfare.
| Main Authors: | Christian Lamping, Gert Kootstra, Marjolein Derks |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-08-01 |
| Series: | Smart Agricultural Technology |
| Subjects: | Re-identification; Deep learning; Computer vision; Transformers |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2772375525001789 |
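The abstract reports that training with semi-hard negative samples yielded the best results. As an illustration of that mining rule (a sketch, not the authors' implementation; the function name, margin value, and fallback strategy are assumptions), a semi-hard negative is one that lies farther from the anchor than the positive does, but still inside the triplet margin:

```python
import numpy as np

def triplet_loss_semihard(anchor, positive, negatives, margin=0.2):
    """Select a semi-hard negative for one anchor and compute the
    triplet loss. Embeddings are plain vectors; distances are Euclidean.
    All names here are illustrative, not taken from the paper."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(negatives - anchor, axis=1)

    # Semi-hard band: d_ap < d(a, n) < d_ap + margin
    semi_hard = d_an[(d_an > d_ap) & (d_an < d_ap + margin)]
    # Fall back to the hardest (closest) negative if none is semi-hard
    d_neg = semi_hard.min() if semi_hard.size else d_an.min()

    # Standard triplet loss, clamped at zero
    return max(d_ap - d_neg + margin, 0.0)
```

Selecting negatives in this band keeps the loss non-zero during training while avoiding the instability that mining only the very hardest negatives can cause.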
| _version_ | 1850182929580294144 |
|---|---|
| author | Christian Lamping; Gert Kootstra; Marjolein Derks |
| author_facet | Christian Lamping; Gert Kootstra; Marjolein Derks |
| author_sort | Christian Lamping |
| collection | DOAJ |
| description | Continuous animal monitoring relies heavily on the ability to re-identify individuals over time, which is essential for both short-term tracking, such as video analysis, and long-term monitoring of animal conditions. Traditionally, livestock re-identification is approached using tags or sensors, which require additional handling effort and potentially impact animal welfare. In response to these limitations, non-invasive vision-based approaches have emerged recently, with existing research primarily focusing on the re-identification of pigs and cows. Re-identification of chickens, which exhibit high visual uniformity and are housed in larger groups, remains challenging and has received less research attention. This study addresses this gap by exploring the feasibility of re-identifying individual laying hens in uncontrolled farm environments using images of their heads. It proposes the first similarity-learning approach based on a Vision Transformer architecture to re-identify chickens without requiring training images for each individual bird. In our experiments, we compared the transformer-based approach to traditional CNN architectures while assessing the impact of different model sizes and triplet mining strategies during training. Moreover, we evaluated practical applicability by analyzing the effects of the number of images per chicken and overall population size on re-identification accuracy. Finally, we examined which visual features of the chicken head were most relevant for re-identification. Results show Top-1 accuracies exceeding 80 % for small groups, with accuracy remaining above 40 % for a population of 100 chickens. The transformer-based architecture outperformed CNN models, with the use of semi-hard negative samples during training yielding the best results. Furthermore, the evaluated models learned to prioritize features such as the comb, wattles, and ear lobes, often aligning with human perception. These results demonstrate promising potential for re-identifying chickens even when recorded in an uncontrolled farm environment, providing a foundation for future applications in animal tracking and monitoring. |
| format | Article |
| id | doaj-art-4885a6b989e342579e2908d108d9bb23 |
| institution | OA Journals |
| issn | 2772-3755 |
| language | English |
| publishDate | 2025-08-01 |
| publisher | Elsevier |
| record_format | Article |
| series | Smart Agricultural Technology |
| spelling | doaj-art-4885a6b989e342579e2908d108d9bb23 (indexed 2025-08-20T02:17:29Z); eng; Elsevier; Smart Agricultural Technology; ISSN 2772-3755; 2025-08-01; vol. 11, art. 100945; doi:10.1016/j.atech.2025.100945; Transformer-based similarity learning for re-identification of chickens; Christian Lamping (corresponding author), Gert Kootstra, Marjolein Derks; Farm Technology Group, Wageningen University & Research, 6700 AA, Wageningen, the Netherlands; http://www.sciencedirect.com/science/article/pii/S2772375525001789; Re-identification; Deep learning; Computer vision; Transformers |
| spellingShingle | Christian Lamping; Gert Kootstra; Marjolein Derks; Transformer-based similarity learning for re-identification of chickens; Smart Agricultural Technology; Re-identification; Deep learning; Computer vision; Transformers |
| title | Transformer-based similarity learning for re-identification of chickens |
| title_full | Transformer-based similarity learning for re-identification of chickens |
| title_fullStr | Transformer-based similarity learning for re-identification of chickens |
| title_full_unstemmed | Transformer-based similarity learning for re-identification of chickens |
| title_short | Transformer-based similarity learning for re-identification of chickens |
| title_sort | transformer based similarity learning for re identification of chickens |
| topic | Re-identification; Deep learning; Computer vision; Transformers |
| url | http://www.sciencedirect.com/science/article/pii/S2772375525001789 |
| work_keys_str_mv | AT christianlamping transformerbasedsimilaritylearningforreidentificationofchickens AT gertkootstra transformerbasedsimilaritylearningforreidentificationofchickens AT marjoleinderks transformerbasedsimilaritylearningforreidentificationofchickens |
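The record reports Top-1 re-identification accuracy over populations of different sizes. A minimal sketch of how such an embedding-based evaluation is typically computed (an assumption for illustration; the paper's exact protocol, gallery construction, and distance metric may differ):

```python
import numpy as np

def top1_accuracy(query_emb, query_ids, gallery_emb, gallery_ids):
    """Re-identify each query image by its nearest gallery embedding
    (cosine similarity on L2-normalized vectors) and report Top-1
    accuracy. A hypothetical evaluation sketch, not the paper's code."""
    # Normalize so the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)

    sims = q @ g.T                                   # (n_query, n_gallery)
    nearest = np.asarray(gallery_ids)[sims.argmax(axis=1)]
    return float(np.mean(nearest == np.asarray(query_ids)))
```

With one gallery embedding per known individual, Top-1 accuracy is simply the fraction of queries whose nearest neighbor carries the correct identity; as the gallery (population) grows, more confusable neighbors appear, which matches the reported drop from above 80 % for small groups to above 40 % at 100 individuals.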