The Representation Space Paradigm Of Concurrent Evolving Object Descriptions
A representation paradigm for instantiating and refining multiple, concurrent descriptions of an object from a sequence of imagery is presented. This paradigm is designed to be used by the perception system of an autonomous robot that needs to describe many types of objects, initially detects objects at a distance and gradually acquires higher resolution data, and continuously collects sensory input. We argue that multiple, concurrent descriptions of an object are necessary because different perceptual tasks are best performed using different representations and because different types of descriptions require different quality data to support their computation. Since the data change significantly over time, the paradigm supports the evolution of descriptions, progressing from crude "blob" descriptions to complete semantic models, such as bush, rock, and tree. To control this accumulation of new descriptions, we introduce the idea of representation space. Representation space is a lattice of representations that specifies the order in which they should be considered for describing an object. Each of the representations in the lattice is associated with an object only after the object has been described multiple times in the representation and the parameters of the representation have been judged to be "stable". We define stability in a statistical sense, enhanced by a set of explanations describing valid reasons for deviations from expected measurements. These explanations may draw on many types of knowledge, including the physics of the sensor, the performance of the segmentation procedure, and the reliability of the matching technique. To illustrate the power of these ideas, we have implemented a system, which we call TraX, that constructs and refines models of outdoor objects detected in sequences of range data.
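The core mechanism the abstract describes, promoting an object to a richer representation only after repeated fits of the current one prove "stable", can be sketched in a few lines. The sketch below is illustrative only: the class and names (`ObjectTrack`, `LATTICE`, the coefficient-of-variation threshold) are assumptions, a simple chain stands in for the paper's lattice, and the paper's explanation-based handling of deviations is omitted.

```python
import statistics

# Hypothetical sketch, not the paper's implementation.
# One path through the representation lattice, crudest description first.
LATTICE = ["blob", "bounding-box", "surface-patch", "semantic-model"]

STABILITY_CV = 0.10   # coefficient of variation below which a parameter counts as stable
MIN_SAMPLES = 3       # a representation must be fit several times before promotion

class ObjectTrack:
    """Accumulates parameter estimates for one detected object and
    promotes it along the lattice once the current fit is stable."""

    def __init__(self):
        self.level = 0        # index into LATTICE
        self.samples = []     # scalar parameter estimates at the current level

    def add_measurement(self, param):
        """Record one fit of the current representation's parameter."""
        self.samples.append(param)
        if self._stable() and self.level + 1 < len(LATTICE):
            self.level += 1   # promote to the next representation
            self.samples = [] # start gathering evidence for the new level

    def _stable(self):
        # Statistical stability: enough samples and low relative spread.
        if len(self.samples) < MIN_SAMPLES:
            return False
        mean = statistics.mean(self.samples)
        if mean == 0:
            return False
        cv = statistics.stdev(self.samples) / abs(mean)
        return cv < STABILITY_CV

    @property
    def representation(self):
        return LATTICE[self.level]

track = ObjectTrack()
for size in [4.1, 4.0, 4.2]:   # three consistent "blob" size estimates
    track.add_measurement(size)
print(track.representation)     # promoted past "blob" once the estimates stabilize
```

In the paper's fuller scheme, a deviation that fails this statistical test could still be accepted if an explanation (sensor physics, segmentation behavior, matcher reliability) accounts for it; that reasoning layer is deliberately left out of this sketch.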