Simpson’s paradox exemplifies a class of problems that can arise when the logic used to reason about the semantics of propositional sentences does not adequately capture certain dependencies between the sentences of interest. This paradox has been known since as early as 1903 [YUL03], and has been discussed extensively in the statistical literature [SIM51, DAW79, BLY73, CHU42]. The phenomena that typically give rise to Simpson’s paradox can occur in cases such as destructive testing (e.g., determining the breaking strength of materials in orthogonal directions) and identifying the composition of complex alloys. It has also been reported to occur in “real life” several times since its discovery [KNA85, WAG82]. One such occurrence received wide attention in 1973 over the appearance of a sex bias in the graduate admission policy at the University of California, Berkeley [BIC75]. Given that automated systems will be expected to recognize and cope with the phenomena underlying this paradox, it is important to develop effective methods for dealing with them, particularly as they impact the choice of logics that systems must use to reason about real-world problems. Only recently, however, has there been any significant indication that Simpson’s paradox merits serious attention by the AI community [PEA88].
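The reversal at the heart of the paradox can be seen in a few lines of arithmetic. The following sketch uses hypothetical success/trial counts (not the Berkeley admissions figures) chosen so that treatment A outperforms treatment B within every subgroup, yet underperforms it once the subgroups are pooled:

```python
# Hypothetical (successes, trials) counts for two treatments, A and B,
# split across two subgroups. The subgroup sizes differ sharply between
# the treatments, which is what drives the reversal.
data = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within every subgroup, A has the higher success rate.
for subgroup in data.values():
    assert rate(*subgroup["A"]) > rate(*subgroup["B"])

# Pool the subgroups: each treatment's totals across both strata.
tot = {arm: (sum(data[g][arm][0] for g in data),
             sum(data[g][arm][1] for g in data))
       for arm in ("A", "B")}

# Yet in aggregate the ordering reverses: B now looks better,
# because A was applied mostly to the harder ("large") subgroup.
assert rate(*tot["A"]) < rate(*tot["B"])
```

The dependency that a purely propositional treatment misses is exactly the one the code makes explicit: the association between treatment and subgroup membership.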
Education & learning publications
A framework for understanding the effects of discovery learning on the transfer of problem-solving skill is presented. A distinction is drawn between applying a learned strategy to a transfer problem versus having to generate a novel strategy to solve a transfer problem. The main premise of the framework is that requiring discovery of a strategy during training encourages the activation or refinement of procedures that are useful for generating a novel strategy. In general, then, the primary benefit of discovery is that it should facilitate transfer to tasks requiring a novel strategy. Learning by discovery, however, may provide little benefit for tasks that can be completed only by applying the learned strategy. Two experiments provided support for, and further refinement of, this hypothesis. Experiment 1 used a transfer problem that could be solved with the general strategy learned in training but required new move sequences to instantiate the strategy. The results indicated that, when transfer required new move sequences to implement a general strategy learned previously, discovery did not enhance transfer of that strategy. In Experiment 2, some transfer problems required using a strategy other than the one learned in training. As predicted, in this transfer situation, having to discover a strategy during training produced better transfer than being provided with a strategy in training. Thus, discovering a strategy provided benefits when a new strategy had to be generated to solve a transfer problem but not when the learned strategy could be applied, albeit with new moves, to the transfer problem. Educational implications are discussed.
Hendrix’s Model for Simultaneous Actions and Continuous Processes: An Introduction and Implementation
This paper presents a self-contained introduction to, and implementation description of, a simulation system for modeling simultaneous actions and continuous processes (Hendrix, 1973). The essence of the system is described by a portion of its abstract: “A new methodology for the construction of world models is presented. The central feature of this methodology is a mechanism which makes possible the modeling of (1) simultaneous, interactive processes, (2) processes characterized by a continuum of gradual change, (3) involuntarily activated processes (such as the growing of grass) and (4) time as a continuous phenomenon.” and by a recent review, Gaines (1975): “This is a fascinating paper that will be of interest outside the “artificial intelligence” (AI) context in which it is written, from those concerned with simulating and controlling multi-element systems to those interested in operational definitions of concepts such as causality.” Three robot world models are incrementally developed, each introducing a new modeling concept. Further world models, including a robot world (with sample output), an electrical world, and a Turing world, are also presented. The interactive operating environment presented permits the user to inspect and alter the run-time structure. A detailed account of the implementation is presented. Keywords: Artificial Intelligence, Artificial Intelligence Center, AIC
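The two ideas the quoted abstract highlights — processes that run simultaneously and quantities that change continuously (including involuntary processes like grass growing) — can be sketched with a simple fixed-step clock that advances every active process over the same interval. This is only an illustrative sketch under assumed rate values, not Hendrix’s actual mechanism; the process names and rates are hypothetical:

```python
# Each process is a function of (state, dt) that applies its rate of
# change over a small time slice. Advancing the shared clock applies
# every active process to the same interval, so they evolve together.

def grass_growth(state, dt):
    # Involuntarily activated process: grass grows whether or not
    # any agent acts on it.
    state["grass"] += 0.5 * dt

def battery_drain(state, dt):
    # A second, simultaneous continuous process.
    state["battery"] -= 0.2 * dt

def advance(state, processes, duration, dt=0.1):
    # Step the world clock forward, letting all processes act in
    # parallel over each slice of continuous time.
    for _ in range(round(duration / dt)):
        for process in processes:
            process(state, dt)
    return state

world = {"grass": 0.0, "battery": 10.0}
advance(world, [grass_growth, battery_drain], duration=4.0)
# After 4 time units: grass has grown by about 2.0 units while the
# battery has drained by about 0.8 units, concurrently.
```

The fixed-step loop trades exactness for simplicity; Hendrix’s system models time as genuinely continuous rather than sliced.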