Document Type
Article
Publication Date
3-24-2021
Abstract
A significant recent development in neural-symbolic learning is the emergence of deep neural networks that can reason over symbolic knowledge graphs (KGs). A task of particular interest is KG entailment: inferring the set of all facts that are a logical consequence of the current and potential facts of a KG. Initial neural-symbolic systems that can deduce the entailment of a KG have been presented, but they are limited: current systems learn fact relations and entailment patterns specific to a particular KG and hence do not truly generalize; they must be retrained for each KG they are tasked with entailing. In this paper we propose a neural-symbolic system that addresses this limitation. It is designed as a differentiable, end-to-end deep memory network that learns over abstract, generic symbols to discover entailment patterns common to any reasoning task. A key component of the system is a simple but highly effective normalization process for continuous representation learning of KG entities within memory networks. Our results show how the model, trained over a set of KGs, can effectively entail facts from KGs excluded from training, even when the vocabulary or domain of the test KGs is completely different from that of the training KGs.
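To make the notion of KG entailment concrete, the following is a minimal illustrative sketch (not the paper's neural model): facts are (subject, relation, object) triples, and a naive forward-chaining loop derives entailed facts under a transitivity rule until a fixed point is reached. The example KG and the `subClassOf` relation are assumptions chosen for illustration.

```python
def entail(facts, transitive_relations):
    """Return the closure of `facts` (a set of (s, r, o) triples)
    under transitivity for the given relations.

    Illustrative only: real KG entailment regimes (e.g. RDFS, OWL)
    involve many more rule types than transitivity.
    """
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        # Derive (s, r, o2) whenever (s, r, o) and (o, r, o2) hold
        # for a transitive relation r.
        for (s, r, o) in closure:
            if r not in transitive_relations:
                continue
            for (s2, r2, o2) in closure:
                if r2 == r and s2 == o and (s, r, o2) not in closure:
                    new.add((s, r, o2))
        if new:
            closure |= new
            changed = True
    return closure

# Toy KG: "cat" is a mammal, "mammal" is an animal.
kg = {("cat", "subClassOf", "mammal"),
      ("mammal", "subClassOf", "animal")}
inferred = entail(kg, {"subClassOf"})
# ("cat", "subClassOf", "animal") is entailed but was never asserted.
```

The paper's contribution is to learn such entailment patterns over abstract symbols with a memory network rather than hard-coding rules, so that the learned reasoning transfers across KGs.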
Repository Citation
Ebrahimi, M., Sarker, M. K., Bianchi, F., Xie, N., Eberhart, A., Doran, D., Kim, H., & Hitzler, P. (2021). Neuro-Symbolic Deductive Reasoning for Cross-Knowledge Graph Entailment. AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering.
https://corescholar.libraries.wright.edu/cse/656
Comments
This work is licensed under CC BY 4.0