Belief and Incompleteness

Abstract

Two artificially intelligent (AI) computer agents begin to play a game of chess, and the following conversation ensues:

  • S1: Do you know the rules of chess?
  • S2: Yes.
  • S1: Then you know whether White has a forced initial win or not.
  • S2: Upon reflection, I realize that I must.
  • S1: Then there is no reason to play.
  • S2: No.

Both agents are state-of-the-art constructions, incorporating the latest AI research in chess playing, natural-language understanding, planning, etc. But because of the overwhelming combinatorics of chess, neither they nor the fastest foreseeable computers would be able to search the entire game tree to find out whether White has a forced win. Why, then, do they come to such an odd conclusion about their own knowledge of the game? In this case, the agents' model of belief is incorrect. They assume that an agent actually knows all the consequences of his beliefs. S1 knows that chess is a finite game, and thus reasons that, in principle, knowing the rules of chess is all that is required to figure out whether White has a forced initial win. After learning that S2 does indeed know the rules of chess, he comes to the erroneous conclusion that S2 also knows this particular consequence of the rules. S2 himself, reflecting on his own knowledge in the same manner, arrives at the same conclusion, even though in actual fact he could never carry out the computations necessary to demonstrate it.
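
To make the "overwhelming combinatorics" claim concrete, the following back-of-the-envelope sketch estimates the size of the chess game tree. The branching factor (~35 legal moves per position) and game length (~80 plies) are common textbook figures, and the machine speed is a hypothetical assumption; none of these numbers come from the text above.

    # Rough estimate of why no agent could ever search the full chess game tree.
    BRANCHING_FACTOR = 35      # assumed average legal moves per position
    GAME_LENGTH_PLIES = 80     # assumed average number of half-moves per game

    NODES_PER_SECOND = 1e12    # hypothetical, optimistically fast machine
    SECONDS_PER_YEAR = 3.15e7

    game_tree_size = BRANCHING_FACTOR ** GAME_LENGTH_PLIES
    years_needed = game_tree_size / NODES_PER_SECOND / SECONDS_PER_YEAR

    print(f"approximate game-tree size: {game_tree_size:.2e} nodes")
    print(f"years to enumerate at {NODES_PER_SECOND:.0e} nodes/s: {years_needed:.2e}")

Under these assumptions the tree has on the order of 10^123 nodes, so even though the answer is determined in principle by the rules, neither agent could ever compute it in practice.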

