Experimental Evaluation of Subject Matter Expert-Oriented Knowledge Base Authoring Tools

Citation

Schrag, Robert & Pool, Mike & Chaudhri, Vinay & Kahlert, Robert & Powers, Joshua & Cohen, Paul & Fitzgerald, Julie & Mishra, Sunil. (2022). Experimental evaluation of subject matter expert-oriented knowledge base authoring tools.

Abstract

We describe a large-scale experiment in which subject matter experts (SMEs), with neither artificial intelligence background nor extensive training in the task, author knowledge bases (KBs) following a challenge problem specification with a strong question-answering component. As a reference for comparison, professional knowledge engineers (KEs) author KBs following the same specification. This paper concentrates on the design of the experiment and its results: the evaluation of SME- and KE-authored KBs and of the SME-oriented authoring tools. Evaluation uses quantitative subjective metrics (functional performance) and objective metrics (knowledge reuse) that we define and apply, as well as qualitative assessment drawing on several subjective sources. While each evaluation style is useful individually and they exhibit collective power, we find that subjective qualitative evaluation affords the insights of greatest leverage for future system and process design. One practical conclusion is that large-scale KB development may best be supported by "mixed-skills" teams of SMEs and KEs collaborating synergistically, rather than by SMEs forced to work alone.
