VERL: An ontology framework for representing and annotating video events

Citation

Francois, A.R.J., Nevatia, R., Hobbs, J., and Bolles, R.C. VERL: An Ontology Framework for Representing and Annotating Video Events. IEEE Multimedia Magazine, vol. 12, no. 4, pp. 76-86, October 2005.

Abstract

The notion of “events” is extremely important in characterizing the contents of video. An event is typically triggered by some kind of change of state captured in the video, such as when an object starts moving. The ability to reason with events is a critical step toward video understanding. This article describes the findings of a recent workshop series that has produced an ontology framework for representing video events, called Video Event Representation Language (VERL), and a companion annotation framework, called Video Event Markup Language (VEML). One of the key concepts in this work is the modeling of events as composable, whereby complex events are constructed from simpler events by operations such as sequencing, iteration, and alternation. The article presents an extensible event and object ontology expressed in VERL and discusses a detailed example of applying VERL and VEML to the description of a “tailgating” event in surveillance video.
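
The composability idea can be illustrated with a short sketch. The following is hypothetical Python, not VERL's actual syntax: the `Event` class, the `sequence`, `alternation`, and `iteration` operators, and the sub-event names in the tailgating example are all assumptions introduced here purely to show how complex events might be built from simpler ones.

```python
# Hypothetical illustration of composable events (not VERL syntax):
# complex events are built from simpler ones via sequencing,
# iteration, and alternation operators.

from dataclasses import dataclass
from typing import List


@dataclass
class Event:
    """A named event spanning a time interval (seconds)."""
    name: str
    start: float
    end: float


def sequence(*events: Event) -> Event:
    """Composite event whose sub-events occur one after another."""
    assert all(a.end <= b.start for a, b in zip(events, events[1:]))
    return Event("SEQ(" + ",".join(e.name for e in events) + ")",
                 events[0].start, events[-1].end)


def alternation(*events: Event) -> Event:
    """Composite event satisfied by whichever sub-event occurs first."""
    first = min(events, key=lambda e: e.start)
    return Event("ALT(" + ",".join(e.name for e in events) + ")",
                 first.start, first.end)


def iteration(events: List[Event]) -> Event:
    """Composite event formed by repeating the same sub-event."""
    return Event(f"ITER({events[0].name} x{len(events)})",
                 events[0].start, events[-1].end)


# Toy "tailgating" event composed from simpler sub-events
# (sub-event names are illustrative only).
approach = Event("vehicle-approaches-gate", 0.0, 4.0)
opens = Event("gate-opens-for-lead-vehicle", 4.0, 6.0)
follows = Event("second-vehicle-follows-through", 6.0, 9.0)

tailgating = sequence(approach, opens, follows)
print(tailgating)
```

In this sketch each operator returns a new composite event whose interval spans its sub-events, mirroring the way VERL builds complex events from simpler ones; the actual language defines these compositions declaratively rather than procedurally.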
