Enhancing Interpretability of Machine Learning Models over Knowledge Graphs
Conference Paper

abstract

  • Artificial Intelligence (AI) plays a critical role in data-driven decision-making frameworks. However, the lack of transparency in some machine learning (ML) models hampers their trustworthiness, especially in domains like healthcare. This demonstration aims to showcase the potential of Semantic Web technologies in enhancing the interpretability of AI. By incorporating an interpretability layer, ML models can become more reliable, providing decision-makers with deeper insights into the model's decision-making process. InterpretME documents the execution of an ML pipeline using factual statements within the InterpretME knowledge graph (KG). Consequently, crucial metadata such as hyperparameters, decision trees, and local ML interpretations are presented in both human- and machine-readable formats, facilitating symbolic reasoning on a model's outcomes. Following the Linked Data principles, InterpretME establishes connections between entities in the InterpretME KG and their counterparts in existing KGs, thus enhancing the contextual information of the InterpretME KG entities. A video demonstrating InterpretME is available online [1], and a Jupyter notebook [2] for a live demo is published on GitHub.
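
The sketch below illustrates the general idea described in the abstract: documenting one ML pipeline run as factual RDF statements and linking an entity to an existing KG, using rdflib in Python. It is a minimal, hypothetical example, not InterpretME's actual schema; the namespace and all class and property names (ime:PipelineRun, ime:hasHyperparameter, ime:hasInterpretation, the DBpedia link target) are illustrative assumptions.

```python
# Illustrative sketch only: the vocabulary below (ime:PipelineRun,
# ime:hasHyperparameter, ...) is assumed for demonstration and is not
# InterpretME's published schema.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import OWL, XSD

IME = Namespace("http://example.org/interpretme/")   # assumed namespace
DBR = Namespace("http://dbpedia.org/resource/")      # external KG (DBpedia, as an example target)

g = Graph()
g.bind("ime", IME)
g.bind("owl", OWL)

# A single execution of the ML pipeline, documented as a KG entity.
run = IME["run/42"]
g.add((run, RDF.type, IME.PipelineRun))

# Factual statement about a hyperparameter of the trained model.
hp = IME["run/42/hyperparameter/max_depth"]
g.add((run, IME.hasHyperparameter, hp))
g.add((hp, IME.name, Literal("max_depth")))
g.add((hp, IME.value, Literal(5, datatype=XSD.integer)))

# A local interpretation (e.g., a per-instance feature weight).
interp = IME["run/42/interpretation/instance_17"]
g.add((run, IME.hasInterpretation, interp))
g.add((interp, IME.feature, Literal("smoking_status")))
g.add((interp, IME.weight, Literal(0.31, datatype=XSD.double)))

# Linked Data: connect a feature entity to its counterpart in an existing KG.
g.add((IME["feature/smoking_status"], OWL.sameAs, DBR["Tobacco_smoking"]))

# Machine-readable output (Turtle), plus a SPARQL query over the same graph.
print(g.serialize(format="turtle"))
for name, value in g.query("""
    PREFIX ime: <http://example.org/interpretme/>
    SELECT ?name ?value WHERE {
        <http://example.org/interpretme/run/42> ime:hasHyperparameter ?hp .
        ?hp ime:name ?name ; ime:value ?value .
    }
"""):
    print(name, value)
```

Once pipeline metadata is in this triple form, it can be serialized for human inspection and queried with SPARQL alongside the interlinked external KGs, which is what makes the documented outcomes both human- and machine-readable.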

publication date

  • 2023