May 23, 2007

The plausibility corruption principle as an aporetic strategy

The last week of development has been consumed with implementing various aspects of K, the modal logic named for Saul Kripke. Mine is no simple implementation, however: it introduces a distinction in the logic core between the retraction of a declaration and the negation of a declaration. Suppose we declared the following two hypotheses into the engine core:

α: A --> B
β: B --> C

Both of these conditional statements are related in an easily determined way: the apodosis (consequent) of α is the protasis (antecedent) of β. If both are declared in the core, the core can automatically derive the implication:

γ: A --> C

This is clearly a reasonable implication. The goal is that our user should be able to declare:

δ: ~(A --> C)

And cause a contradiction to occur in the core. But suppose they did not declare δ. Suppose instead they retracted γ explicitly. By implication, either α or β would also need to be retracted by the engine core. How does the engine decide which one to retract?
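To make the dilemma concrete, here is a minimal sketch in Python of a core that records which declarations support each derived implication. Everything here is hypothetical illustration, not the actual engine API: the class, the labels, and the "derived:" naming scheme are invented for this example.

```python
class Core:
    def __init__(self):
        self.declared = {}   # label -> (antecedent, consequent)
        self.support = {}    # derived label -> set of supporting labels

    def declare(self, label, antecedent, consequent):
        self.declared[label] = (antecedent, consequent)
        self._chain()

    def _chain(self):
        # Derive A --> C from A --> B and B --> C (hypothetical syllogism).
        for l1, (a1, c1) in list(self.declared.items()):
            for l2, (a2, c2) in list(self.declared.items()):
                if c1 == a2 and (a1, c2) not in self.declared.values():
                    label = f"derived:{a1}->{c2}"
                    self.declared[label] = (a1, c2)
                    self.support[label] = {l1, l2}

    def retract(self, label):
        # Retracting a derived statement is ambiguous: the engine must
        # also retract at least one of its supports, but which one?
        if label in self.support:
            return sorted(self.support[label])  # candidate set, undecided
        self.declared.pop(label, None)
        return []

core = Core()
core.declare("alpha", "A", "B")
core.declare("beta", "B", "C")   # the core now derives "derived:A->C"
candidates = core.retract("derived:A->C")
print(candidates)  # ['alpha', 'beta']
```

The point of the sketch is the return value of retract: for a derived statement the engine can identify the candidate supports, but nothing in propositional logic tells it which one to give up.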

Because this does not involve negation, we cannot opt for a simple disjunctive solution. At this point we are no longer in the realm of mere propositional logic. We have officially entered the realm of aporetics, which focuses on apory (paradox) resolution strategies. The grey area here comes partly from our distinction between negation and retraction. A declaration of "A" is a positive assertion which can be altered in one of two ways. Either we can contradict the original declaration, say "~A", and cause a logical paradox in the system, or we can simply retract the original declaration. The retraction still allows for the possibility of "A", and constitutes a sort of clean slate.

Negation, by contrast, is a positive declaration in the following form:

η: A --> !

This reads "If A, then there exists a contradiction". The difference between a naive logical negation using boolean operators in Java and this concept of active contradiction is that the latter doesn't permit unacknowledged change in the logical tenets of the system. You cannot declare "A" and a minute later declare "not A", thus overwriting your initial assertion. This is because "not A" might be the unforeseen consequence of a complicated set of rules. While such overwriting is harmless in an imperative logical environment such as Java, the proper way to deal with negation in a declarative platform is to explicitly declare something to be the condition for a contradiction in the system, and then use standard exception handling to initiate one or more aporetic strategies at runtime.
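A sketch of what this might look like, again in illustrative Python rather than the engine's real API: negation is registered as a contradiction trigger, and declaring the negated fact later raises an exception that an aporetic strategy can catch.

```python
class Contradiction(Exception):
    """Raised when a declared fact matches a declared contradiction trigger."""

class NegationAwareCore:
    def __init__(self):
        self.facts = set()
        self.triggers = set()  # facts declared to imply contradiction (X --> !)

    def declare(self, fact):
        self.facts.add(fact)
        self._check()

    def declare_negation(self, fact):
        # eta: fact --> !  ("if fact, then there exists a contradiction")
        self.triggers.add(fact)
        self._check()

    def _check(self):
        clash = self.facts & self.triggers
        if clash:
            # Hand control to exception handling rather than silently
            # overwriting the earlier assertion.
            raise Contradiction(f"triggered by {sorted(clash)}")

core = NegationAwareCore()
core.declare_negation("A")      # declare: A --> !
try:
    core.declare("A")           # raises instead of overwriting
except Contradiction as err:
    print(err)                  # triggered by ['A']
```

The exception handler is where the aporetic strategies would plug in at runtime.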

It might be that the distinction between retraction and negation is lost in textbooks because so many texts are trying to communicate the absolutely basic relationships between different axioms and arguments. But in the real world, the world of practice, people and machines are busy declaring truths, contradicting themselves, catching their errors and, unless they're simple ideologues, retracting their earlier declarations. The aporetic process is so second nature to us that we've rarely afforded ourselves the opportunity to give it a name. And, while examples in text do not lend themselves well to retraction--for how do you retract a written statement?--today's logical workbench gives us every opportunity to do so.

So, what strategy will we use for resolving the above paradox? One strategy involves maximally consistent subsets; the other involves assigning plausibility values to all declarations. I will likely use a combination of the two. These plausibility values will be explicitly determined by the operator as the lattice of logical declarations is framed. The most immediate challenge to completing this feature revolves around determining the plausibility of implications, a topic not covered in any of the books on aporetics I have read so far.
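For reference, the first strategy can be sketched quickly: given a consistency predicate, the maximally consistent subsets are the consistent subsets not contained in any larger consistent subset. The predicate below is a toy stand-in, not the engine's actual consistency check.

```python
from itertools import combinations

def maximal_consistent_subsets(statements, consistent):
    maximal = []
    # Walk subset sizes from largest to smallest, so any superset of a
    # candidate has already been recorded before the candidate is seen.
    for size in range(len(statements), 0, -1):
        for combo in combinations(statements, size):
            candidate = set(combo)
            if consistent(candidate) and not any(candidate < m for m in maximal):
                maximal.append(candidate)
    return maximal

# Toy predicate: "gamma" and "not-gamma" may not coexist.
def consistent(subset):
    return not {"gamma", "not-gamma"} <= subset

result = maximal_consistent_subsets(
    ["alpha", "beta", "gamma", "not-gamma"], consistent
)
# Two maximal subsets, each giving up one of the two rivals.
```

Each maximal subset represents one way of resolving the apory by giving something up, which is exactly why the strategy needs plausibility values to choose between them.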

My instinct is to play upon a variant of the "truth corruption" principle that I currently use in logic. The truth corruption principle states that if any part of the logical statement is false, the truth of the statement is corrupted entirely. Within every lie may lie a truth / But truths may never be composed of lies. A variant of this, the "plausibility corruption" principle might state that the plausibility value of an implied declaration is never greater than that of its least plausible implicative. An implication's implicatives are those declarations whose contextual relationship directly led to its inclusion in the latticework of logical statements.
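The rule can be stated in a few lines of Python. The support map and the assigned values below are hypothetical; the point is only that plausibility propagates through the lattice by taking the minimum over a declaration's implicatives.

```python
def plausibility(label, assigned, support):
    """Plausibility under the proposed plausibility corruption rule.

    assigned: operator-assigned values for explicitly declared statements
    support:  derived label -> the implicatives it was derived from
    """
    if label in assigned:
        return assigned[label]
    # A derived declaration is never more plausible than its least
    # plausible implicative; recurse through the lattice.
    return min(plausibility(s, assigned, support) for s in support[label])

assigned = {"alpha": 0.9, "beta": 0.6}   # operator-assigned values
support = {"gamma": ["alpha", "beta"]}   # gamma derived from alpha and beta
value = plausibility("gamma", assigned, support)
print(value)  # 0.6
```

Because the function recurses, a declaration derived from other derived declarations inherits the minimum across the whole chain, which is the "corruption" behavior the principle calls for.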

As an aside, I wasn't kidding about creating that ontology--I don't even like the word 'ontology'--but my esteemed friend Mr. Bringman convinced me that it ranked among my major obstacles in creating this engine and the community around it. I promise that these terms will all be explained in a glossary soon. I'll even add that task to the JIRA.

There must be distinct types of implicative relationship, then. Some implications are bidirectional or tautological, others unidirectional, and some involve other implications as terms. It is this implicative latticework that lies at the heart of a real analytics framework. It informs analysis, but it also distinguishes a truly declarative system from the imperative bias of all modern logical computing languages.

More on this implementation as it unfolds.

April 29, 2007

Coming to an ontology

The problem with the computerized logical systems I have used prior to today is this: with almost perfect consistency, their underlying goal is to assemble a mechanism to control the flow of change in the system. These systems are imperative in nature.

Imperative systems direct the actions which a computer program performs. They are perhaps the most common and practical systems in use today. Less common, though ultimately no less practical, are logical systems that evaluate the truth of certain logical arguments. Telling a computer how to perform certain functions is useful, but dangerous if the reasoning behind our commands is flawed.

So, I'm interested in building systems that can provide a means to analyse our logics as well as to execute them. That's what I'm trying to build with Arete. However, the effort requires visible public progress as an Open Source project before I can get some of the needed infrastructure to continue more advanced development. This requires an iterative approach to building the platform.

The problem with this iterative approach is that there is scant and often contradictory literature to help me distinguish the two types of logic described above. Terms are often used interchangeably between the two to describe what are essentially different things: theorems, premises, antecedents, consequents, hypotheses, sentences, clauses, instructions, protasis, sumperasma, arguments. Logical literature today is a mixed bag of ambiguity and industry buzzwords.

So, I'm trying today to come to terms, to fashion an ontology I can use moving forward in development. It is fun but demanding work, and I've not yet eaten. I think I should see to that.