Lifted first-order probabilistic inference, which manipulates first-order representations directly, has been receiving increasing attention. To date, all lifted inference methods require a model to be shattered against itself and against the evidence (that is, splitting groups of random variables until each group contains only variables with exactly the same properties) before inference starts. In many situations this produces a new model that is close to fully propositionalized, canceling the benefits of lifted inference. We present an algorithm, Anytime Lifted Belief Propagation, that addresses this problem by performing shattering during belief propagation, on an as-needed basis, starting with the most relevant parts of the model. The trade-off is obtaining an exact bound (an interval) on the query's belief rather than an exact belief. Bounds are useful when approximate answers suffice and, in decision-making applications, can even be enough to determine the decision that the exact belief would yield. Moreover, the bounds converge to the exact solution as inference and shattering extend to the entire model. Interestingly, the algorithm mirrors theorem proving, helping to close the gap between probabilistic and logical inference.
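The bound-tightening behavior described above can be illustrated with a toy sketch (this is not the paper's lifted algorithm, only the anytime-bounds idea): on a chain of binary variables, belief propagation that has only incorporated the first `depth` pairwise factors does not yet know the message arriving from the unexplored tail, so it can bound the query's belief by evaluating that unknown message at both vertices of the probability simplex. The function name `belief_bounds` and the chain model are hypothetical choices for this sketch.

```python
def belief_bounds(potentials, prior, depth):
    """Bound P(X0=1) in a chain X0 - X1 - ... - Xn with pairwise
    potentials, using only the first `depth` factors nearest the query.

    The message from the unexplored tail is unknown; since the belief is
    a linear-fractional function of that message, its extremes over the
    simplex occur at the vertices, which yields a valid interval.
    """
    if depth >= len(potentials):
        tails = [[1.0, 1.0]]                  # whole model explored: exact leaf message
    else:
        tails = [[1.0, 0.0], [0.0, 1.0]]      # unknown tail: try both simplex vertices
    beliefs = []
    for m in tails:
        # pass the candidate message toward the query through explored factors
        for phi in reversed(potentials[:depth]):
            m = [sum(phi[a][b] * m[b] for b in (0, 1)) for a in (0, 1)]
        unnorm = [prior[a] * m[a] for a in (0, 1)]
        beliefs.append(unnorm[1] / sum(unnorm))
    return min(beliefs), max(beliefs)


# A symmetric chain of 5 attractive factors: the interval tightens as
# more of the model is explored and collapses to the exact marginal.
phi = [[2.0, 1.0], [1.0, 2.0]]
pots = [phi] * 5
for d in range(6):
    lo, hi = belief_bounds(pots, [1.0, 1.0], d)
    print(d, lo, hi)
```

As more factors are incorporated, the interval shrinks monotonically, mirroring how the paper's bounds converge to the exact belief as shattering and inference cover the entire model.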