Nomological Deductive Reasoning for Trustworthy, Human-Readable, and Actionable AI Outputs
The lack of transparency in many AI systems continues to hinder their adoption in critical domains such as healthcare, finance, and autonomous systems. While recent explainable AI (XAI) methods—particularly those leveraging large language models—have enhanced output readability, they often lack traceable and verifiable reasoning that is aligned with domain-specific logic. This paper presents Nomological Deductive Reasoning (NDR), supported by Nomological Deductive Knowledge Representation (NDKR), as a framework aimed at improving the transparency and auditability of AI decisions through the integration of formal logic and structured domain knowledge. NDR enables the generation of causal, rule-based explanations by validating statistical predictions against symbolic domain constraints. The framework is evaluated on a credit-risk classification task using the Statlog (German Credit Data) dataset, demonstrating that NDR can produce coherent and interpretable explanations consistent with expert-defined logic. While primarily focused on technical integration and deductive validation, the approach lays a foundation for more transparent and norm-compliant AI systems. This work contributes to the growing formalization of XAI by aligning statistical inference with symbolic reasoning, offering a pathway toward more interpretable and verifiable AI decision-making processes.
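The abstract describes NDR as validating a statistical model's predictions against symbolic, expert-defined domain constraints before emitting a rule-based explanation. A minimal Python sketch of that idea, with purely hypothetical rules, features, and thresholds (none taken from the paper or the Statlog dataset), might look like:

```python
# Hypothetical sketch of the NDR idea from the abstract: a statistical model's
# prediction is checked against expert-defined symbolic rules, and a rule-based
# explanation is emitted only when the two agree. All rules, feature names, and
# thresholds below are illustrative, not the paper's actual implementation.

def statistical_predict(applicant):
    """Toy stand-in for a trained classifier: score > 0.5 means 'high_risk'."""
    score = 0.0
    if applicant["credit_history"] == "delayed":
        score += 0.4
    if applicant["amount"] > 10000:
        score += 0.3
    if applicant["employment_years"] < 1:
        score += 0.3
    return "high_risk" if score > 0.5 else "low_risk"

# Expert-defined symbolic constraints: (rule name, predicate, implied label).
DOMAIN_RULES = [
    ("delayed_history_and_short_employment",
     lambda a: a["credit_history"] == "delayed" and a["employment_years"] < 1,
     "high_risk"),
    ("clean_history_small_loan",
     lambda a: a["credit_history"] == "clean" and a["amount"] <= 5000,
     "low_risk"),
]

def deduce_and_explain(applicant):
    """Validate the statistical prediction deductively and explain the outcome."""
    prediction = statistical_predict(applicant)
    fired = [(name, label) for name, pred, label in DOMAIN_RULES if pred(applicant)]
    supporting = [name for name, label in fired if label == prediction]
    conflicting = [name for name, label in fired if label != prediction]
    if conflicting:
        return prediction, f"REJECTED: prediction contradicts rule(s) {conflicting}"
    if supporting:
        return prediction, f"VALIDATED: {prediction} follows from rule(s) {supporting}"
    return prediction, "UNVERIFIED: no domain rule covers this case"

applicant = {"credit_history": "delayed", "amount": 12000, "employment_years": 0.5}
label, explanation = deduce_and_explain(applicant)
print(label, "|", explanation)
```

The point of the sketch is the validation step: the symbolic rule base acts as a gate, so an explanation is only issued when the prediction is deducible from expert logic, which is what makes the output traceable and auditable.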
| Main Authors: | Gedeon Hakizimana, Agapito Ledezma Espino |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Algorithms |
| Subjects: | explainable Artificial Intelligence (XAI); interpretable machine learning; knowledge representation; deductive reasoning; symbolic reasoning; transparent AI systems |
| Online Access: | https://www.mdpi.com/1999-4893/18/6/306 |
| _version_ | 1849472652295536640 |
|---|---|
| author | Gedeon Hakizimana; Agapito Ledezma Espino |
| author_facet | Gedeon Hakizimana; Agapito Ledezma Espino |
| author_sort | Gedeon Hakizimana |
| collection | DOAJ |
| description | The lack of transparency in many AI systems continues to hinder their adoption in critical domains such as healthcare, finance, and autonomous systems. While recent explainable AI (XAI) methods—particularly those leveraging large language models—have enhanced output readability, they often lack traceable and verifiable reasoning that is aligned with domain-specific logic. This paper presents Nomological Deductive Reasoning (NDR), supported by Nomological Deductive Knowledge Representation (NDKR), as a framework aimed at improving the transparency and auditability of AI decisions through the integration of formal logic and structured domain knowledge. NDR enables the generation of causal, rule-based explanations by validating statistical predictions against symbolic domain constraints. The framework is evaluated on a credit-risk classification task using the Statlog (German Credit Data) dataset, demonstrating that NDR can produce coherent and interpretable explanations consistent with expert-defined logic. While primarily focused on technical integration and deductive validation, the approach lays a foundation for more transparent and norm-compliant AI systems. This work contributes to the growing formalization of XAI by aligning statistical inference with symbolic reasoning, offering a pathway toward more interpretable and verifiable AI decision-making processes. |
| format | Article |
| id | doaj-art-498c9a1eef234ea3991d79495e8bef8b |
| institution | Kabale University |
| issn | 1999-4893 |
| language | English |
| publishDate | 2025-05-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Algorithms |
| spelling | doaj-art-498c9a1eef234ea3991d79495e8bef8b (2025-08-20T03:24:29Z); eng; MDPI AG; Algorithms; 1999-4893; 2025-05-01; 18(6): 306; 10.3390/a18060306; Nomological Deductive Reasoning for Trustworthy, Human-Readable, and Actionable AI Outputs; Gedeon Hakizimana, Department of Computer Science & Engineering, Universidad Carlos III de Madrid, 28911 Leganes, Spain; Agapito Ledezma Espino, Department of Computer Science & Engineering, Universidad Carlos III de Madrid, 28911 Leganes, Spain; https://www.mdpi.com/1999-4893/18/6/306 |
| spellingShingle | Gedeon Hakizimana; Agapito Ledezma Espino; Nomological Deductive Reasoning for Trustworthy, Human-Readable, and Actionable AI Outputs; Algorithms; explainable Artificial Intelligence (XAI); interpretable machine learning; knowledge representation; deductive reasoning; symbolic reasoning; transparent AI systems |
| title | Nomological Deductive Reasoning for Trustworthy, Human-Readable, and Actionable AI Outputs |
| title_sort | nomological deductive reasoning for trustworthy human readable and actionable ai outputs |
| topic | explainable Artificial Intelligence (XAI); interpretable machine learning; knowledge representation; deductive reasoning; symbolic reasoning; transparent AI systems |
| url | https://www.mdpi.com/1999-4893/18/6/306 |
| work_keys_str_mv | AT gedeonhakizimana nomologicaldeductivereasoningfortrustworthyhumanreadableandactionableaioutputs AT agapitoledezmaespino nomologicaldeductivereasoningfortrustworthyhumanreadableandactionableaioutputs |