Document Type
Article
Publication Date
8-1-2008
Abstract
Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, as expressed, e.g., by means of first-order predicate logic, it is not at all obvious what neural-symbolic systems would look like that are truly connectionist, are able to learn, and at the same time allow for a declarative reading and logical reasoning. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feed-forward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to reason over the acquired knowledge. We also report on experimental evaluations that show the feasibility of our approach.
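The abstract alludes to the architecture underlying the core method: a feed-forward network computes (an approximation of) the immediate-consequence operator T_P of a logic program, and recurrent connections feed the output back into the input, so that repeated application converges to a fixed point, i.e., a model of the program. The sketch below is only a minimal propositional illustration of that idea, not the paper's first-order approach; the example program and all names are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the propositional core
# method: a feed-forward network computes T_P, and recurrent feedback iterates
# it to a fixed point. Example program P = { a. ; b <- a. ; c <- a, b. } is hypothetical.
import numpy as np

atoms = ["a", "b", "c"]
clauses = [([], "a"),          # fact:  a.
           (["a"], "b"),       # rule:  b <- a.
           (["a", "b"], "c")]  # rule:  c <- a, b.

n = len(atoms)
idx = {atom: i for i, atom in enumerate(atoms)}

# Hidden layer: one threshold unit per clause, active iff the clause body is satisfied.
W_in = np.zeros((len(clauses), n))
theta_h = np.zeros(len(clauses))
for h, (body, _) in enumerate(clauses):
    for b in body:
        W_in[h, idx[b]] = 1.0
    theta_h[h] = len(body) - 0.5   # fires when all body atoms are true

# Output layer: an atom becomes true iff some clause with that head fired.
W_out = np.zeros((n, len(clauses)))
for h, (_, head) in enumerate(clauses):
    W_out[idx[head], h] = 1.0

def core(v):
    """One pass through the feed-forward core: v |-> T_P(v)."""
    hidden = (W_in @ v > theta_h).astype(float)
    return (W_out @ hidden > 0.5).astype(float)

# Recurrent feedback: iterate the core until an interpretation is reached
# that the program does not change any more (a model of P).
v = np.zeros(n)  # start from the empty interpretation
while True:
    nxt = core(v)
    if np.array_equal(nxt, v):
        break
    v = nxt

print({atom: bool(v[idx[atom]]) for atom in atoms})  # {'a': True, 'b': True, 'c': True}
```

For first-order programs, as treated in the paper, interpretations are embedded into real numbers and the feed-forward core only approximates the embedded T_P operator, but the recurrent iteration scheme is the same.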
Repository Citation
Bader, S., Hitzler, P., & Hölldobler, S. (2008). Connectionist Model Generation: A First-Order Approach. Neurocomputing, 71, 2420-2432.
https://corescholar.libraries.wright.edu/cse/98
DOI
10.1016/j.neucom.2007.10.028
Included in
Bioinformatics Commons, Communication Technology and New Media Commons, Databases and Information Systems Commons, OS and Networks Commons, Science and Technology Studies Commons
Comments
Attached is the authors' version of this article. The final, publisher's version can be found at http://dx.doi.org/10.1016/j.neucom.2007.10.028.