Monday, June 8, 2009

Magic and Truth Maintenance

The human brain has an amazing ability to construct mental models of physical reality based on the information given to it by the senses. By hacking the machinery that does this model construction, magicians are able to trick the brain into building mental models of things that are not actually possible, and the result, magic, can be very enjoyable. My favorite example of this is illustrated by Penn & Teller in this video:

You know at some level that Teller has not been cut into three pieces, but your brain can't figure out how else to account for the visual facts it is presented with. Even when you are allowed to see how the trick is done in the second part of the video, your brain has trouble revising its mental model and maintaining the truth of what it has constructed.

A recent article in Wired describes Teller's foray into neuroscience and is well worth a read and a watch. The cup and ball trick shown there in the video from "The View" is another illustration of your brain having trouble processing the additional facts of how the trick is actually done. It's much easier for the brain to cheat and suppose that balls can magically appear and disappear. Teller's work has some lessons to teach the semantic web as well, because building models and maintaining their truth in the face of new information is central to the functioning of semantic web reasoning engines, and is also really hard to do well. This is one of the things I learned in my discussion with Atanas Kiryakov of Ontotext.

Semantic software starts with a knowledge model, then adds information triples to build out its base of knowledge. More sophisticated software might extend the knowledge model itself depending on the new information to be added. For each triple, or fact, semantic software will use an inference engine to add more facts to its collection, thus building up its knowledge base. Maybe you want to add the fact that coconuts are brown. Depending on your knowledge model, your inference engine may already know that coconuts are fruits, and will want to add coconuts to the group of brown fruits. This layer of inference extends your ability to make statements about coconuts, but it adds complexity to the process of adding facts to your knowledge collection. This is why ingest speed is a relevant figure of merit for semantic databases.
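The coconut example can be sketched as a tiny forward-chaining store. Everything here, the class, the rule, and the predicate names, is a hypothetical illustration of the idea, not the interface of any real triple store:

```python
# A minimal sketch of forward-chaining inference over triples.
# The rule and the predicate names ("isA", "hasColor") are
# illustrative assumptions, not a real database's API.

class TripleStore:
    def __init__(self):
        self.asserted = set()   # facts added explicitly
        self.inferred = set()   # facts added by the inference rule

    def add(self, s, p, o):
        """Assert a triple, then run the inference rule on it."""
        self.asserted.add((s, p, o))
        self._infer(s)

    def _infer(self, s):
        # Example rule: if X is a fruit and X is brown,
        # then X belongs to the class "brown fruit".
        facts = self.asserted | self.inferred
        if (s, "isA", "fruit") in facts and (s, "hasColor", "brown") in facts:
            self.inferred.add((s, "isA", "brown fruit"))

store = TripleStore()
store.add("coconut", "isA", "fruit")
store.add("coconut", "hasColor", "brown")   # this triggers the inference
print(("coconut", "isA", "brown fruit") in store.inferred)  # True
```

Note that each `add` has to consult the existing facts before it can finish, which is exactly why inference makes ingestion slower than a plain insert.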

Things get complicated if information that has been ingested needs to be changed or deleted. If that happens, the inference engine has to re-derive all the facts that it inferred when the original fact was entered, so that it can change or delete all the inferred facts. For example, suppose you remove the fact that the coconut is a fruit because a new classification calls it a nut. Then you have to remove the brown fruit fact as you add a brown nut fact. This can be quite difficult and may involve many queries over your entire knowledge collection. For some knowledge models it may even be impossible to undo inferences, particularly if the database has not kept a record of the ingest process. Whereas the human brain is reasonably good at developing and revising models of reality, software inference engines know nothing of reality and can only maintain the consistency of their collections of knowledge. Maintaining consistency in the face of changing data can be computationally impractical. This is one of the main reasons that you don't want to use a semantic database for applications that have lots of transactions.
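One way to make retraction tractable is to record a justification for each inferred fact, essentially the "record of the ingest process" mentioned above. The sketch below assumes such a record exists; the names and the single hard-coded rule are hypothetical, and a real engine would face many rules and chains of inferences built on other inferences:

```python
# A sketch of retraction with justification tracking: each inferred
# fact remembers the premises it was derived from, so retracting a
# premise can also retract its consequences. All names are illustrative.

class TMSStore:
    def __init__(self):
        self.asserted = set()
        self.justifications = {}   # inferred fact -> set of premise facts

    def add(self, fact):
        self.asserted.add(fact)
        # Rule: X isA fruit + X hasColor brown => X isA "brown fruit"
        s = fact[0]
        premises = {(s, "isA", "fruit"), (s, "hasColor", "brown")}
        if premises <= self.asserted:
            self.justifications[(s, "isA", "brown fruit")] = premises

    def remove(self, fact):
        """Retract a fact and every inference that depended on it."""
        self.asserted.discard(fact)
        # Without the justification record, finding these dependents
        # would mean re-running inference over the whole collection.
        for inferred, premises in list(self.justifications.items()):
            if fact in premises:
                del self.justifications[inferred]

store = TMSStore()
store.add(("coconut", "isA", "fruit"))
store.add(("coconut", "hasColor", "brown"))
store.remove(("coconut", "isA", "fruit"))   # reclassified as a nut
print(("coconut", "isA", "brown fruit") in store.justifications)  # False
```

Even with this bookkeeping, every retraction has to scan the justification table, and inferences can depend on other inferences, so the cost compounds. That is the computational impracticality in miniature.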

If you are considering the use of semantic web technology for that new project of yours, you'll want to understand these points, because for all the deductive power you can gain by using a knowledge model, there's also a danger that your application may get bogged down by the burden of maintaining consistency. Machines don't know how to cheat the way your brain does. Magic is not an option.

