OntoInsight - A Metric-Guided Tool for Ontology Quality Evaluation with LLM-Powered Recommendations
Document Type
Article
Publication Date
2025
Abstract
Ontologies are foundational to conceptual modeling and semantic systems across diverse domains, yet evaluating and improving their quality remains a complex challenge. Existing tools often focus on syntactic correctness or dense metric reports and lack actionable, interpretable feedback. We present OntoInsight, an ontology quality evaluation tool that serves users from beginners to experts through two tiers of tailored recommendations: basic (simple suggestions) and advanced (deep technical insights). It supports ontologies of varying size through both full-ontology evaluation and modular evaluation, the latter being useful for large and complex ontologies. The pipeline automates all stages of the tool, from metric computation (via frameworks such as OQuaRE) and seed-term-based modularization to controlled natural language (CNL) translation and targeted prompt generation for Large Language Models (LLMs). Users can configure their own LLM API key and choose the type of evaluation and suggestions that match their needs and expertise. The source code of OntoInsight is available under the Apache 2.0 license at https://github.com/kracr/onto-insight.
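The abstract describes a four-stage pipeline (metric computation, seed-term-based modularization, CNL translation, prompt generation). The sketch below illustrates how such a pipeline could be wired together; all function names, data shapes, and logic are hypothetical stand-ins, not OntoInsight's actual API.

```python
# Hypothetical sketch of the pipeline stages described in the abstract.
# Every name and implementation detail here is illustrative only.

def compute_metrics(ontology_classes):
    """Stage 1: compute simple structural metrics (stand-in for OQuaRE metrics)."""
    return {"class_count": len(ontology_classes)}

def extract_module(ontology_classes, seed_terms):
    """Stage 2: seed-term-based modularization -- keep classes matching the seeds."""
    return [c for c in ontology_classes
            if any(s.lower() in c.lower() for s in seed_terms)]

def translate_to_cnl(module):
    """Stage 3: render the module as controlled natural language statements."""
    return [f"{c} is a class in the ontology." for c in module]

def build_prompt(metrics, cnl_statements, level="basic"):
    """Stage 4: assemble a targeted LLM prompt at the chosen expertise level."""
    detail = "simple suggestions" if level == "basic" else "deep technical insights"
    return (f"Ontology metrics: {metrics}\n"
            + "\n".join(cnl_statements)
            + f"\nProvide {detail} to improve this ontology.")

classes = ["Pizza", "PizzaTopping", "Wine"]
module = extract_module(classes, seed_terms=["pizza"])
prompt = build_prompt(compute_metrics(classes), translate_to_cnl(module), level="basic")
print(prompt)
```

In the real tool, the assembled prompt would be sent to a user-configured LLM endpoint; the `level` parameter here mirrors the basic/advanced recommendation tiers the abstract mentions.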
Repository Citation
Sammi, D., Bhushan, L., Mutharaju, R., & Shimizu, C. (2025). OntoInsight - A Metric-Guided Tool for Ontology Quality Evaluation with LLM-Powered Recommendations. Conceptual Modeling, ER 2025, 393-411.
https://corescholar.libraries.wright.edu/cse/688
DOI
10.1007/978-3-032-08623-5_21
