
Refining the Bravo Core

When I started working on the Bravo core this February, I had only the vaguest concept of how a modal logic platform would behave, and any aporetic concerns occupied a distant second place in my limited attention span. I was surprised, and still am, at how unified the concerns of building a modal logic platform and an aporetics platform have been.

The key concept that allowed me to fuse these concerns, and to make progress in either of them only through making progress in both, was that of the contradiction.

Neither the Java language nor any other programming language I have dealt with thus far encapsulates the concept of a logical contradiction as an element of the language. For most practical concerns that we are taught in programming classes, the languages needn't do so. In logic, however, the concept of a contradiction is of absolute importance, because of its innate relationship to truth and falsehood.

Just about any program you can write ultimately needs to make binary choices about how it proceeds, depending on the circumstances of its environment. The behavioral complexity of most programs is usually constrained by the variations in data, and the relationships between your conditional expressions are "taken for granted" by the program. In other words, the programming language and compiler usually trust that you know what you are doing.

This is reasonable, because when you write software code you do so in the context of your program's internals. The relationships are evident to the programmer, because they're held in that programmer's short-term memory (attention). OOP constructs and other techniques are used to create abstraction barriers that shield the programmer from the complexity of having to know how everything works, and the logic is free to become more complex as a result of these encapsulation barriers.

However, encapsulation barriers necessarily permit, and in some cases even encourage, the duplication of software code, and ultimately of decision-making logic. The result of this relationship between encapsulation and executive-logic duplication is that the executive logic becomes fixed and very difficult to change. Arguably, better software architecture and finer-grained class libraries could mitigate this problem. However, in languages like Java, the entire environment from which a conditional expression might draw its path of action must be passed around, because the expression itself cannot be passed using the lambda-like semantics common to Scheme and JavaScript. The architectural acrobatics necessary to place a conditional expression within a meaningful environmental context, so as to secure a dependable and sufficiently complex result, do not lend themselves well to most modern programming languages.
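To make the acrobatics concrete, here is a minimal Java sketch of the workaround: a single-method interface plus an anonymous class standing in for "the expression itself," with the whole environment handed over wholesale. The `Condition` interface, `decide` method, and the credit-score example are hypothetical names invented for illustration, not part of Arete.

```java
import java.util.Map;

public class ConditionDemo {
    // Hypothetical single-method interface standing in for a conditional expression.
    public interface Condition {
        boolean holds(Map<String, Object> environment);
    }

    // The caller cannot pass the bare expression; it must package the
    // environment the expression reads and hand it over wholesale.
    public static String decide(Condition c, Map<String, Object> environment) {
        return c.holds(environment) ? "approve" : "reject";
    }

    public static void main(String[] args) {
        Map<String, Object> env = Map.of("score", 720);
        // The anonymous-class ceremony required to express one boolean test.
        Condition goodCredit = new Condition() {
            public boolean holds(Map<String, Object> e) {
                return ((Integer) e.get("score")) >= 700;
            }
        };
        System.out.println(decide(goodCredit, env)); // prints "approve"
    }
}
```

The bulk of the code above is packaging, not logic, which is the acrobatics the paragraph describes.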

And so, conditional expressions are littered throughout a program's structure, duplicated in different classes, and difficult or impossible to locate or update in a timely manner. Worse, these factors combine to obscure any meaningful relationship that might exist between the conditional expressions, and the very notion of this level of relational analysis is usually left to runtime debugging and long-term software maintenance.

The Arete rule engine addresses all of the issues enumerated above in a variety of ways. First, it provides a robust way to decouple the structure of conditional expressions and the logical behavior of a program from the environment within which those expressions must operate. It does this by inverting the relationship between data and logic. In elementary programs, logical code runs in the context of a data-rich environment. In the Arete rule engine, the logical code is broken down into a type of data itself, and is recombined in response to the needs of the environment, as indicated by the original environmental facts and the rule developer's analysis of the relationship between those facts and the logical responses to them. Because complex conditional expressions can be reconstructed from more primitive elements in this way, Arete encourages an even greater degree of code reuse than can easily be achieved within a simple object-oriented mechanism.
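The inversion described above can be sketched in a few lines of Java. This is not Arete's actual API, only a hypothetical illustration: each rule is a plain data object pairing a condition over facts with a fact it asserts, and a loop recombines the rules against whatever the environment supplies.

```java
import java.util.*;

public class RuleSketch {
    // Hypothetical: a condition evaluated against a set of known facts.
    public interface Condition {
        boolean holds(Set<String> facts);
    }

    // A rule is data: a condition plus the fact it asserts when fired.
    public static class Rule {
        final Condition when;
        final String then;
        Rule(Condition when, String then) { this.when = when; this.then = then; }
    }

    // Forward chaining: keep firing rules until no new facts appear.
    public static Set<String> run(List<Rule> rules, Set<String> facts) {
        Set<String> working = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : rules) {
                if (r.when.holds(working) && working.add(r.then)) {
                    changed = true;
                }
            }
        }
        return working;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule(f -> f.contains("rain"), "wet-ground"),
            new Rule(f -> f.contains("wet-ground"), "slippery")
        );
        Set<String> out = run(rules, Set.of("rain"));
        System.out.println(out.contains("slippery")); // prints "true"
    }
}
```

Because each rule is a value, the same primitive conditions can be recombined into many composite behaviors without duplicating the conditional logic in class after class.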

A secondary benefit of this degree of code reuse is the ability to analyze the relationships between our pieces of logic. We can now construct complex implication trees with much less code, and even begin to help the developer pinpoint logical contradictions in the subtle relationships that exist between our conditional expressions. This in turn leads to metarelational analysis, which can identify maximally consistent subsets of rule assertions, and even automated aporetic resolution strategies.

But all of this aporetic functionality requires the simple concept of a "contradiction" taking primacy over the concepts of "truth" and "falsehood" within the context of the engine environment.
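One way to give contradiction that primacy is to make it a value of the truth type itself, rather than an error condition. The sketch below is a hypothetical illustration of this idea (the enum and method names are invented, not Arete's): when two rules assert conflicting values for the same proposition, the result collapses to `CONTRADICTION` instead of one assertion silently overwriting the other.

```java
public class ContradictionDemo {
    // A truth value in which contradiction is a first-class outcome.
    public enum Truth {
        UNKNOWN, TRUE, FALSE, CONTRADICTION;

        // Combine this value with a new assertion about the same proposition.
        public Truth assertAs(Truth claim) {
            if (claim == UNKNOWN) return this;       // asserting nothing changes nothing
            if (this == UNKNOWN) return claim;       // first assertion wins outright
            if (this == claim) return this;          // agreement is stable
            return CONTRADICTION;                    // conflicting assertions collapse here
        }
    }

    public static void main(String[] args) {
        Truth p = Truth.UNKNOWN;
        p = p.assertAs(Truth.TRUE);   // one rule asserts P
        p = p.assertAs(Truth.FALSE);  // another rule asserts not-P
        System.out.println(p);        // prints "CONTRADICTION"
    }
}
```

With contradiction representable as a value, the engine can report *where* a rule set becomes inconsistent instead of merely computing one of the conflicting answers.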

It is not difficult to write a simple rule engine that does nothing more than help you decompose and recombine conditional expressions, but the many benefits of this granularity cannot be realized without the ability to draw relations and implications from your rule declarations.

So, this is the present course of R&D on the Arete project. I am implementing a fully aporetics-capable modal logic platform, based on previous experiences with rule engineering and my early success with Arete's Alpha Core. The second engine core will not replace, but instead complement and refine the workings of the first engine core. Arete's architecture permits me to create multiple operation centers that each do specialized variants of the heavy lifting, depending on what the developer needs. The software in the alpha-06 release is really just a very limited proof-of-concept for a rapidly evolving platform.

I'm trying at each turn to make sure that my architecture is theoretically grounded and that I'm not writing myself into a corner. I'd like to be able to support intensional semantics and possible worlds syntax, which are really necessary for any useful Alethic or Tense logic implementation.