JUCS - Journal of Universal Computer Science 29(8): 866-891, doi: 10.3897/jucs.86745
The evaluation of a semi-automatic authoring tool for knowledge extraction in the AC&NL Tutor
Ani Grubišić‡, Slavomir Stankov§, Branko Žitko‡, Ines Šarić-Grgić‡, Angelina Gašpar‡, Emil Brajković|, Daniel Vasić|
‡ University of Split, Split, Croatia; § Unaffiliated, Split, Croatia; | University of Mostar, Mostar, Bosnia and Herzegovina
Open Access
Abstract
This paper describes and evaluates the performance of a semi-automatic authoring tool (SAAT) for knowledge extraction in the AC&NL Tutor, highlighting its strengths and weaknesses. We assessed the accuracy of the automatic annotation tasks (Part-of-Speech tagging, Named Entity Recognition, Dependency parsing, and Coreference Resolution) performed on a dataset of 160 sentences of unstructured Wikipedia text about computers. We compared the automatic annotations to the gold standard created after human post-editing and validation. The human-error analysis covered 3769 words, 582 subsentences, 1129 questions, 917 propositions, 1020 concepts, and 667 relations. It produced an error-type classification and a set of custom rules subsequently used for automatic error identification and correction. The results showed that, on average, 68.7% of the error corrections were attributable to CoreNLP performance and 31.3% to the SAAT extraction algorithms. Our main contributions include an integrated approach to comprehensive text pre-processing, knowledge extraction, and visualization; a consolidated evaluation of natural language processing tasks and knowledge extraction output (sentences, subsentences, questions, concept maps); and a newly developed reference dataset.
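The evaluation described above compares automatic annotations against a human-validated gold standard. A minimal sketch of that comparison, using hypothetical POS tags (the example tokens and labels are illustrative, not taken from the paper's dataset), might look like:

```python
# Sketch of gold-standard evaluation: automatic annotations are compared
# token-by-token against human post-edited labels, yielding per-task accuracy.

def annotation_accuracy(predicted, gold):
    """Fraction of tokens whose automatic label matches the gold label."""
    if len(predicted) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    matches = sum(p == g for p, g in zip(predicted, gold))
    return matches / len(gold)

# Hypothetical POS annotations for the sentence "A computer executes programs".
auto_pos = ["DT", "NN", "VBZ", "NN"]   # automatic output (one tagging error)
gold_pos = ["DT", "NN", "VBZ", "NNS"]  # after human post-editing

print(annotation_accuracy(auto_pos, gold_pos))  # 0.75
```

The same per-item comparison extends to subsentences, questions, propositions, concepts, and relations, which is how aggregate figures such as the 68.7% / 31.3% error-correction split can be derived.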
Keywords
Natural Language Processing, Knowledge Extraction, Automatic Question Generation, Human-Error Analysis, Gold Standard, AC&NL Tutor, Concept Maps