Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception Models

From the perspective of artificial intelligence evaluation, the need to discover and explain the potential shortcomings of the evaluated intelligent algorithms/systems and the need to evaluate the intelligence level of the systems under test are equally important. In this paper, we propose a possible so...

Bibliographic Details
Main Authors: Chi Zhang, Meng Yuan, Xiaoning Ma, Ping Wei, Yuanqi Su, Li Li, Yuehu Liu
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Open Journal of Intelligent Transportation Systems
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10570287/
_version_ 1832590307919659008
author Chi Zhang
Meng Yuan
Xiaoning Ma
Ping Wei
Yuanqi Su
Li Li
Yuehu Liu
author_facet Chi Zhang
Meng Yuan
Xiaoning Ma
Ping Wei
Yuanqi Su
Li Li
Yuehu Liu
author_sort Chi Zhang
collection DOAJ
description From the perspective of artificial intelligence evaluation, the need to discover and explain the potential shortcomings of the evaluated intelligent algorithms/systems and the need to evaluate the intelligence level of the systems under test are equally important. In this paper, we propose a possible solution to these challenges: Explainable Evaluation for visual intelligence. Specifically, we focus on the problem setting where the internal mechanisms of AI algorithms are sophisticated, heterogeneous, or inaccessible. In this case, a latent attribute dictionary learning method constrained by mapping consistency is proposed to explain the performance variation patterns of visual perception intelligence under different test samples. By jointly and iteratively solving for the latent concept representation of test samples and the regression from latent concepts to generalization performance, the mapping relationship between the deep representation, semantic attribute annotation, and generalization performance of test samples is established, predicting the degree to which semantic attributes influence visual perception generalization performance. The optimal solution of the proposed method can be reached via an alternating optimization process. Through quantitative experiments, we find that the global mapping consistency constraint makes the learned latent concept representation strictly consistent with the deep representation, thereby improving the accuracy of the computed correlation between semantic attributes and perception performance.
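The alternating optimization the abstract describes can be sketched as follows. This is an illustrative simplification, not the authors' exact formulation: it assumes a squared-error reconstruction of deep features from a latent attribute dictionary, a ridge-regularized linear regression from latent concepts to a scalar performance score, and a single coupling weight tying the two subproblems together; the function name and all parameters are hypothetical.

```python
import numpy as np

def alternating_dictionary_regression(X, y, k=8, lam=1.0, mu=0.1, iters=50, seed=0):
    """Toy sketch of joint dictionary learning and performance regression.

    X : (d, n) deep representations of n test samples
    y : (n,)   generalization-performance score per test sample
    Learns a dictionary D (d, k), latent concept codes Z (k, n), and a
    regressor w (k,) by alternating closed-form least-squares updates.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, k))   # latent attribute dictionary
    Z = rng.standard_normal((k, n))   # latent concept representation
    w = np.zeros(k)                   # concept -> performance regressor
    I = np.eye(k)
    for _ in range(iters):
        # Z-step: codes must both reconstruct the deep features and
        # predict performance (the coupling between the two views).
        G = D.T @ D + lam * np.outer(w, w) + mu * I
        R = D.T @ X + lam * np.outer(w, y)
        Z = np.linalg.solve(G, R)
        # D-step: ridge-regularized least-squares dictionary update.
        D = X @ Z.T @ np.linalg.inv(Z @ Z.T + mu * I)
        # w-step: ridge regression from latent codes to performance.
        w = np.linalg.solve(Z @ Z.T + mu * I, Z @ y)
    return D, Z, w
```

Each subproblem is solved in closed form, so every pass can only decrease the joint objective; this is the usual convergence argument for alternating optimization of this kind.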
format Article
id doaj-art-148ea933199a41a9970d4e1bb09f28f0
institution Kabale University
issn 2687-7813
language English
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Open Journal of Intelligent Transportation Systems
spelling doaj-art-148ea933199a41a9970d4e1bb09f28f02025-01-24T00:02:39ZengIEEEIEEE Open Journal of Intelligent Transportation Systems2687-78132024-01-01539340810.1109/OJITS.2024.341855210570287Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception ModelsChi Zhang0https://orcid.org/0000-0001-9604-2800Meng Yuan1https://orcid.org/0009-0004-7348-033XXiaoning Ma2https://orcid.org/0009-0000-1238-4206Ping Wei3https://orcid.org/0000-0002-8535-9527Yuanqi Su4Li Li5https://orcid.org/0000-0002-9428-1960Yuehu Liu6National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi’an Jiaotong University, Xi’an, ChinaNational Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi’an Jiaotong University, Xi’an, ChinaSchool of Computer Science and Technology, Xi’an Jiaotong University, Xi’an, ChinaNational Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi’an Jiaotong University, Xi’an, ChinaSchool of Computer Science and Technology, Xi’an Jiaotong University, Xi’an, ChinaDepartment of Automation, BNRist, Tsinghua University, Beijing, ChinaNational Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi’an Jiaotong University, Xi’an, ChinaFrom the perspective of artificial intelligence evaluation, the need to discover and explain the potential shortcomings of the evaluated intelligent algorithms/systems and the need to evaluate the intelligence level of the systems under test are equally important. In this paper, we propose a possible solution to these challenges: Explainable Evaluation for visual intelligence. Specifically, we focus on the problem setting where the internal mechanisms of AI algorithms are sophisticated, heterogeneous, or inaccessible. In this case, a latent attribute dictionary learning method constrained by mapping consistency is proposed to explain the performance variation patterns of visual perception intelligence under different test samples. 
By jointly and iteratively solving for the latent concept representation of test samples and the regression from latent concepts to generalization performance, the mapping relationship between the deep representation, semantic attribute annotation, and generalization performance of test samples is established, predicting the degree to which semantic attributes influence visual perception generalization performance. The optimal solution of the proposed method can be reached via an alternating optimization process. Through quantitative experiments, we find that the global mapping consistency constraint makes the learned latent concept representation strictly consistent with the deep representation, thereby improving the accuracy of the computed correlation between semantic attributes and perception performance.https://ieeexplore.ieee.org/document/10570287/Explainable AI evaluationdictionary learninglatent knowledge representation
spellingShingle Chi Zhang
Meng Yuan
Xiaoning Ma
Ping Wei
Yuanqi Su
Li Li
Yuehu Liu
Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception Models
IEEE Open Journal of Intelligent Transportation Systems
Explainable AI evaluation
dictionary learning
latent knowledge representation
title Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception Models
title_full Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception Models
title_fullStr Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception Models
title_full_unstemmed Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception Models
title_short Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception Models
title_sort global mapping consistency constrained visual semantic embedding for interpreting autonomous perception models
topic Explainable AI evaluation
dictionary learning
latent knowledge representation
url https://ieeexplore.ieee.org/document/10570287/
work_keys_str_mv AT chizhang globalmappingconsistencyconstrainedvisualsemanticembeddingforinterpretingautonomousperceptionmodels
AT mengyuan globalmappingconsistencyconstrainedvisualsemanticembeddingforinterpretingautonomousperceptionmodels
AT xiaoningma globalmappingconsistencyconstrainedvisualsemanticembeddingforinterpretingautonomousperceptionmodels
AT pingwei globalmappingconsistencyconstrainedvisualsemanticembeddingforinterpretingautonomousperceptionmodels
AT yuanqisu globalmappingconsistencyconstrainedvisualsemanticembeddingforinterpretingautonomousperceptionmodels
AT lili globalmappingconsistencyconstrainedvisualsemanticembeddingforinterpretingautonomousperceptionmodels
AT yuehuliu globalmappingconsistencyconstrainedvisualsemanticembeddingforinterpretingautonomousperceptionmodels