Document Type
Article
Publication Date
8-1-2005
Abstract
Artificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simplicity is understood in a clearly defined and meaningful way.
Repository Citation
Lehmann, J., Bader, S., & Hitzler, P. (2005). Extracting Reduced Logic Programs from Artificial Neural Networks. Proceedings of the IJCAI-05 Workshop on Neural-Symbolic Learning and Reasoning.
https://corescholar.libraries.wright.edu/cse/110
Included in
Bioinformatics Commons, Communication Technology and New Media Commons, Databases and Information Systems Commons, OS and Networks Commons, Science and Technology Studies Commons
Comments
Presented at the International Joint Conference on Artificial Intelligence Workshop on Neural-Symbolic Learning and Reasoning, Edinburgh, Scotland, August 1, 2005.