A technique for unifying spatial and temporal analysis of an image sequence taken by a camera moving in a straight line is presented. The technique is based on a “dense” sequence of images: images taken close enough together to form a solid block of data. Slices of this solid directly encode changes due to the motion of the camera. These slices, which have one spatial dimension and one temporal dimension, are more structured than conventional images, and this additional structure makes them easier to analyze. We present the theory behind this technique, describe an initial implementation, and discuss our preliminary results.
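The slicing operation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the frames are grayscale arrays of equal size and uses synthetic data in place of a real camera sequence. Stacking the frames along a time axis forms the solid block; fixing one image row and taking all frames yields a slice with one spatial and one temporal dimension.

```python
import numpy as np

# Hypothetical dense sequence: T frames, each an H x W grayscale image,
# stacked along the time axis to form a solid block of data.
T, H, W = 64, 48, 80
rng = np.random.default_rng(0)
frames = [rng.random((H, W)) for _ in range(T)]  # stand-in for real frames
block = np.stack(frames, axis=0)                 # shape (T, H, W)

# A spatiotemporal slice: fix one image row y and take every frame.
# The slice has one spatial dimension (W) and one temporal dimension (T).
y = H // 2
st_slice = block[:, y, :]
print(st_slice.shape)  # (64, 80)
```

For a camera translating in a straight line, scene features trace regular paths across such a slice, which is the structure the technique exploits.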