Committee Members

Michelle Cheatham (Advisor), Derek Doran (Committee Member), Mateen M. Rizki (Committee Member)

Degree Name

Master of Science (MS)

Abstract

Ontology alignment systems establish the semantic links between ontologies that enable knowledge from various sources and domains to be used by automated applications in many different ways. Unfortunately, these systems are not perfect. Currently, the results of even the best-performing automated alignment systems must be manually verified before they can be fully trusted. Ontology alignment researchers have turned to crowdsourcing platforms such as Amazon Mechanical Turk to accomplish this. However, there has been little systematic analysis of the accuracy of crowdsourcing for alignment verification, and few best practices have been established. In this work, we analyze how the way in which the context of a potential match is presented, and the way in which the question itself is posed, affect the accuracy of crowdsourced alignment verification. Our overall recommendation is that users interested in high precision are likely to achieve the best results by presenting the definitions of the entity labels and asking workers to respond true or false to the question of whether an equivalence relationship exists. Conversely, if the alignment researcher is interested in high recall, they are better off presenting workers with a graphical depiction of the entity relationships and a set of options about the type of relation that exists, if any.

Department or Program

Department of Computer Science and Engineering