On the Readability of Kernel-based Deep Learning Models in Semantic Role Labeling Tasks over Multiple Languages

Bibliographic Details
Main Authors: Daniele Rossini, Danilo Croce, Roberto Basili
Format: Article
Language: English
Published: Accademia University Press 2019-06-01
Series: IJCoL
Online Access:https://journals.openedition.org/ijcol/453
author Daniele Rossini
Danilo Croce
Roberto Basili
collection DOAJ
description Sentence embeddings are effective input vectors for the neural learning of a number of inferences about content and meaning. Unfortunately, most such decision processes are epistemologically opaque, owing to the limited interpretability of the neural models acquired over these embeddings. In this paper, we concentrate on the readability of neural models, discussing an embedding technique (the Nyström methodology) that corresponds to the reconstruction of a sentence in a kernel space capturing grammatical and lexical semantic information. From this method, we build a Kernel-based Deep Architecture with inherently high interpretability, as the proposed embedding is derived from examples, i.e., landmarks, that are both human-readable and labeled. Its integration with an explanation methodology, Layer-wise Relevance Propagation, supports the automatic compilation of argumentations for the decisions of the Kernel-based Deep Architecture, expressed in the form of analogies with activated landmarks. Quantitative evaluation on the Semantic Role Labeling task, in both English and Italian, suggests that explanations based on semantic and syntagmatic structures are rich and constitute convincing arguments, as they effectively help the user in assessing whether or not to trust the machine's decisions.
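
As a rough, self-contained illustration of the Nyström step described in the abstract: inputs are embedded by their kernel similarities to a small set of labeled landmarks, so each dimension of the representation corresponds to a readable example. The kernel, the random data, and the function names below are assumptions made for this sketch only; the article itself works with structural kernels over sentences and a full Kernel-based Deep Architecture with Layer-wise Relevance Propagation on top.

# Minimal sketch of a Nystrom embedding: each input is projected onto a set
# of labeled "landmark" examples, so every embedding dimension is tied to a
# human-readable landmark. Illustrative only, not the authors' implementation.
import numpy as np

def rbf_kernel(a, b, gamma=0.1):
    # Example kernel; the paper uses structural kernels over sentences.
    return np.exp(-gamma * np.linalg.norm(a - b) ** 2)

def nystrom_projector(landmarks, kernel):
    # W[i, j] = k(l_i, l_j) over the chosen landmarks.
    W = np.array([[kernel(li, lj) for lj in landmarks] for li in landmarks])
    # W^{-1/2} via eigendecomposition (pseudo-inverse of tiny eigenvalues).
    vals, vecs = np.linalg.eigh(W)
    inv_sqrt = np.diag([1.0 / np.sqrt(v) if v > 1e-10 else 0.0 for v in vals])
    return vecs @ inv_sqrt @ vecs.T

def embed(x, landmarks, projector, kernel):
    # c[i] = k(x, l_i): similarity of x to each landmark, then project so
    # that dot products between embeddings approximate kernel values.
    c = np.array([kernel(x, li) for li in landmarks])
    return c @ projector

# Usage sketch: the embeddings feed a standard feed-forward classifier, and
# Layer-wise Relevance Propagation can redistribute a prediction's score back
# onto the landmark dimensions, i.e., onto readable example sentences.
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(5, 10))   # stand-ins for landmark sentences
projector = nystrom_projector(landmarks, rbf_kernel)
x = rng.normal(size=10)
print(embed(x, landmarks, projector, rbf_kernel))
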
format Article
id doaj-art-22dc315ecfec4dad9057bb01e6ef603d
institution OA Journals
issn 2499-4553
language English
publishDate 2019-06-01
publisher Accademia University Press
record_format Article
series IJCoL
doi 10.4000/ijcol.453
title On the Readability of Kernel-based Deep Learning Models in Semantic Role Labeling Tasks over Multiple Languages
url https://journals.openedition.org/ijcol/453