A segment-based framework for explainability in animal affective computing

Abstract: Recent developments in animal motion tracking and pose recognition have revolutionized the study of animal behavior. More recent efforts extend beyond tracking towards affect recognition using facial and body language analysis, with far-reaching applications in animal welfare and health. Deep learning models are the most commonly used in this context. However, their “black box” nature poses a significant challenge to explainability, which is vital for building trust and encouraging adoption among researchers. Despite its importance, the field of explainability and its quantification remains under-explored. Saliency maps are among the most widely used explainability methods: each pixel is assigned a significance level indicating its relevance to the neural network’s decision. Although these maps are frequently used in research, they are predominantly applied qualitatively, with limited methods for quantitatively analyzing them or for identifying the most suitable method for a specific task. In this paper, we propose a framework aimed at enhancing explainability in animal affective computing. Assuming the availability of a classifier for a specific affective state and the ability to generate saliency maps, our approach evaluates and compares visual explanations by emphasizing meaningful semantic parts captured as segments, which are thought to be closely linked to behavioral indicators of affective states. Furthermore, our approach introduces a quantitative scoring mechanism to assess how well the saliency maps generated by a given classifier align with predefined semantic regions. This scoring system allows systematic, measurable comparisons of different pipelines in terms of their visual explanations within animal affective computing. Such a metric can serve as a quality indicator when developing classifiers for known biologically relevant segments, or help researchers assess whether a classifier is using expected meaningful regions when exploring new potential indicators. We evaluated the framework on three datasets covering cat pain, horse pain, and dog emotions. Across all datasets, the generated explanations consistently revealed the eye area as the most significant feature for the classifiers. These results highlight the potential of explainability frameworks such as the one proposed here to uncover new insights into how machines ‘see’ animal affective states.

Bibliographic Details
Main Authors: Tali Boneh-Shitrit (Information Systems Department, University of Haifa), Lauren Finka (Cats Protection, National Cat Centre), Daniel S. Mills (School of Life & Environmental Sciences, Joseph Bank Laboratories, University of Lincoln), Stelio P. Luna (School of Veterinary Medicine and Animal Science, São Paulo State University (Unesp)), Emanuella Dalla Costa (Department of Veterinary Medicine and Animal Sciences, University of Milan), Anna Zamansky (Information Systems Department, University of Haifa), Annika Bremhorst (Dogs and Science)
Format: Article
Language: English
Published: Nature Portfolio, 2025-04-01
Series: Scientific Reports
ISSN: 2045-2322
Online Access: https://doi.org/10.1038/s41598-025-96634-y
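
The abstract describes a quantitative score for how well a classifier’s saliency maps align with predefined semantic segments (e.g., the eye area). The paper’s exact formulation is not reproduced in this record; the sketch below is only a minimal illustration of the general idea, assuming a saliency map given as a non-negative 2D array and segments given as boolean masks. All names and the density-ratio scoring choice are illustrative assumptions, not the authors’ implementation.

import numpy as np

def segment_saliency_scores(saliency, segment_masks):
    # saliency: (H, W) array of non-negative pixel relevance values.
    # segment_masks: dict mapping segment name -> boolean (H, W) mask.
    # Returns: segment name -> density ratio, i.e. the share of total
    # saliency mass inside the segment divided by the share of image
    # area the segment covers.
    total = float(saliency.sum())
    if total == 0.0:
        return {name: 0.0 for name in segment_masks}
    scores = {}
    for name, mask in segment_masks.items():
        mass_share = saliency[mask].sum() / total   # fraction of relevance inside the segment
        area_share = mask.mean()                    # fraction of pixels the segment covers
        scores[name] = float(mass_share / area_share) if area_share > 0 else 0.0
    return scores

# Toy example: saliency concentrated in the top-left corner, where a
# hypothetical "eye" segment sits; the "ear" segment receives nothing.
sal = np.zeros((4, 4))
sal[:2, :2] = 1.0
masks = {
    "eye": np.zeros((4, 4), dtype=bool),
    "ear": np.zeros((4, 4), dtype=bool),
}
masks["eye"][:2, :2] = True
masks["ear"][2:, 2:] = True
print(segment_saliency_scores(sal, masks))  # {'eye': 4.0, 'ear': 0.0}

Under this scoring choice, a segment with a score above 1 attracts more saliency mass than its area alone would predict, which is one simple way to flag regions a classifier appears to rely on and to compare pipelines on the same footing.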