O3-B Ontology-based Interpretable Deep Learning for Consumer Complaint Explanation and Analysis

PI: NhatHai Phan

In this project, we will investigate ontology-based explainable deep learning (OBDL) algorithms for textual data, to identify the key factors, concepts, and hypotheses that significantly contribute to the decisions made by deep neural networks. We introduce a novel interpretation framework, called OnML, which learns an interpretable model via an ontology-based sampling technique to explain black-box prediction models in a model-agnostic fashion. Unlike existing approaches, our algorithm exploits the contextual correlations among words, as described in domain knowledge ontologies, to generate semantic explanations. To narrow the search space for explanations, a major problem with long and complicated text data, we design a learnable anchor algorithm that better extracts explanations locally. We further introduce a set of rules for combining the learned interpretable representations with anchors to produce comprehensible semantic explanations. Extensive experiments on real-world datasets show that our approach generates more precise and insightful explanations than existing approaches.
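To illustrate the core idea of ontology-based sampling, the minimal sketch below shows how adjacent words belonging to the same ontology concept can be perturbed together, so that sampled neighbours of a complaint remain semantically coherent. The toy ontology, token-level matching, and masking scheme here are illustrative assumptions for exposition only; they are not the actual OnML implementation.

```python
import random

# Toy domain ontology: concepts mapped to related terms.
# Hypothetical example, NOT the ontology used in the actual OnML project.
ONTOLOGY = {
    "billing": {"charge", "fee", "refund"},
    "service": {"agent", "support", "call"},
}

def ontology_phrases(tokens, ontology):
    """Group adjacent tokens that belong to the same ontology concept,
    so contextually correlated words are treated as one unit."""
    phrases, i = [], 0
    while i < len(tokens):
        concept = next(
            (c for c, terms in ontology.items() if tokens[i] in terms), None
        )
        if concept is None:
            phrases.append([tokens[i]])
            i += 1
            continue
        j = i
        while j < len(tokens) and tokens[j] in ontology[concept]:
            j += 1
        phrases.append(tokens[i:j])
        i = j
    return phrases

def sample_perturbations(tokens, ontology, n_samples=10, mask="<MASK>", seed=0):
    """Ontology-based sampling sketch: mask whole concept phrases (rather
    than independent single words) to generate semantically coherent
    neighbours of the input text for training a local interpretable model."""
    rng = random.Random(seed)
    phrases = ontology_phrases(tokens, ontology)
    samples = []
    for _ in range(n_samples):
        kept = [p if rng.random() < 0.5 else [mask] * len(p) for p in phrases]
        samples.append([tok for phrase in kept for tok in phrase])
    return samples
```

Because "charge" and "fee" map to the same concept, they are always masked or kept together, which is the kind of contextual correlation a word-independent sampler would miss.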