October 31, 2007

Refining the Bravo Core

When I started working on the Bravo core this February, I had only the vaguest concept of how a modal logic platform would behave, and any aporetic concerns occupied a distant second place in my limited attention span. I was surprised, and still am, at how unified the concerns of building a modal logic platform and an aporetics platform have been.

The key concept that allowed me to fuse these concerns, and to make progress in each only by making progress in both, was that of the contradiction.

The Java language, and every programming language I have dealt with thus far, does not encapsulate the concept of a logical contradiction as an element of the language. For most of the practical concerns we are taught in programming classes, the languages needn't do so. In logic, however, the concept of a contradiction is of absolute importance, because of its innate relationship to truth and falsehood.

Just about any program you can write ultimately needs to make binary choices about how it proceeds, depending on the circumstances of its environment. The behavioral complexity of most programs is usually constrained by the variations in data, and the relationships between your conditional expressions are "taken for granted" by the program. In other words, the programming language and compiler usually trust that you know what you are doing.

This is reasonable, because when you write software code you do so in the context of your program's internals. The relationships are evident to the programmer because they are kept in that programmer's short-term memory (attention). OOP constructs and other techniques create abstraction barriers that shield the programmer from having to know how everything works, and the logic is free to grow more complex behind these encapsulation barriers.

However, encapsulation barriers necessarily permit, and in some cases even encourage, duplication of software code, and ultimately of decision-making logic. The result of this relationship between encapsulation and executive-logic duplication is that the executive logic becomes fixed and very difficult to change. Arguably, better software architecture and finer-grained class libraries could mitigate this problem. However, in languages like Java, the entire environment from which a conditional expression might draw its path of action must be passed around, because the expression itself cannot be passed using a lambda-like semantic of the kind common to Scheme and JavaScript. The architectural acrobatics required to place a conditional expression within a meaningful environmental context, and so secure a dependable and sufficiently complex result, are not well supported by most modern programming languages.
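
To make the pain concrete, here's a minimal sketch of the workaround Java forces on us today: wrap the expression in a one-method interface and hand the environment around by hand. The Condition and Environment types are invented for this illustration; they aren't from any real library.

```java
import java.util.Date;

/** Hypothetical one-method interface standing in for a lambda. */
interface Condition {
    boolean holds(Environment env);
}

/** Hypothetical bag of environmental facts the expression needs. */
class Environment {
    Date today;
    double balance;
    Date dueDate;
}

class ConditionDemo {
    public static void main(String[] args) {
        Environment env = new Environment();
        env.today = new Date();
        env.balance = -25.0;
        env.dueDate = new Date(env.today.getTime() - 86400000L); // due yesterday

        // The conditional expression itself, wrapped in an anonymous
        // inner class so that it can be passed around like data.
        Condition overdue = new Condition() {
            public boolean holds(Environment e) {
                return e.balance < 0 && e.today.after(e.dueDate);
            }
        };

        System.out.println("overdue? " + overdue.holds(env));
    }
}
```

Every fact the expression touches has to be ferried through Environment by hand; in Scheme or JavaScript the closure would simply capture its surroundings.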

And so, conditional expressions are littered throughout a program's structure, duplicated in different classes, and difficult or impossible to locate or update in a timely manner. Worse, these factors combine to obscure any meaningful relationship that might exist between the conditional expressions, and the very notion of this level of relational analysis is usually left to runtime debugging and long-term software maintenance.

The Arete rule engine addresses all of the issues enumerated above in a variety of ways. First, it provides a robust way to decouple the structure of conditional expressions and the logical behavior of a program from the environment within which these expressions must operate. It does this by inverting the relationship between data and logic. In elementary programs, logical code runs in the context of a data-rich environment. In the Arete rule engine, the logical code is itself broken down into a kind of data, and is recombined in response to the needs of the environment, as indicated by the original environmental facts and the rule developer's analysis of the relationship between those facts and the logical responses to them. Because complex conditional expressions can be reconstructed from more primitive elements in this data-driven way, Arete encourages an even greater degree of code reuse than is easily achieved within a simple object-oriented mechanism.
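
As a rough illustration of what treating logic as data means, here's a sketch in which primitive conditions are first-class objects recombined at runtime. None of these names come from Arete's actual API; they're invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy illustration of "logic as data": primitive conditions are
 * reusable objects, and composite rules are assembled from them at
 * runtime rather than hard-coded into if-statements.
 */
class RuleSketch {
    interface Rule { boolean eval(Map<String, Object> facts); }

    /** A primitive condition: does the named fact have this value? */
    static Rule fact(final String key, final Object expected) {
        return new Rule() {
            public boolean eval(Map<String, Object> facts) {
                return expected.equals(facts.get(key));
            }
        };
    }

    /** Conjunction of two rules, itself just another rule. */
    static Rule and(final Rule a, final Rule b) {
        return new Rule() {
            public boolean eval(Map<String, Object> facts) {
                return a.eval(facts) && b.eval(facts);
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Object> facts = new HashMap<String, Object>();
        facts.put("customer.status", "gold");
        facts.put("order.rush", Boolean.TRUE);

        // The composite rule is recombined from reusable primitives.
        Rule expedite = and(fact("customer.status", "gold"),
                            fact("order.rush", Boolean.TRUE));
        System.out.println("expedite? " + expedite.eval(facts));
    }
}
```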

A secondary benefit of this degree of code reuse is the ability to analyze the relationships between our pieces of logic. We can now construct complex implication trees with much less code, and even begin to help the developer pinpoint logical contradictions in the subtle relationships between our conditional expressions. This in turn opens the door to metarelational analysis, which can identify maximally consistent subsets of rule assertions, and even to automated aporetic resolution strategies.
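
To give a flavor of what identifying maximally consistent subsets involves, here's a brute-force sketch over a tiny rule set, assuming contradictions are handed to us as incompatible pairs. It illustrates the concept only; it is not Arete's algorithm.

```java
import java.util.*;

/**
 * Brute-force search for maximally consistent subsets of a small rule
 * set, where a "contradiction" is given as a pair of incompatible rules.
 */
class MaxConsistent {
    public static void main(String[] args) {
        String[] rules = { "p", "not-p", "q" };
        int[][] clashes = { { 0, 1 } }; // rules 0 and 1 contradict

        // Collect every consistent subset (no clashing pair inside it).
        List<Set<Integer>> consistent = new ArrayList<Set<Integer>>();
        for (int mask = 0; mask < (1 << rules.length); mask++) {
            Set<Integer> subset = new TreeSet<Integer>();
            for (int i = 0; i < rules.length; i++)
                if ((mask & (1 << i)) != 0) subset.add(i);
            boolean ok = true;
            for (int[] clash : clashes)
                if (subset.contains(clash[0]) && subset.contains(clash[1]))
                    ok = false;
            if (ok) consistent.add(subset);
        }

        // Keep only subsets not strictly contained in a larger consistent one.
        for (Set<Integer> s : consistent) {
            boolean maximal = true;
            for (Set<Integer> t : consistent)
                if (t.size() > s.size() && t.containsAll(s)) maximal = false;
            if (maximal) {
                List<String> names = new ArrayList<String>();
                for (int i : s) names.add(rules[i]);
                System.out.println("maximally consistent: " + names);
            }
        }
    }
}
```

For the rules above it prints [p, q] and [not-p, q]: the two ways of keeping as much of the rule set as possible without keeping the contradiction.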

But all of this aporetic functionality requires the simple concept of a "contradiction" to take primacy over the concepts of "truth" and "falsehood" within the context of the engine environment.

It is not difficult to write a simple rule engine that does nothing more than help you decompose and recombine conditional expressions, but the many benefits of this granularity cannot be realized without the ability to draw relations and implications from your rule declarations.

So, this is the present course of R&D on the Arete project. I am implementing a fully aporetics-capable modal logic platform, based on previous experiences with rule engineering and my early success with Arete's Alpha Core. The second engine core will not replace, but instead complement and refine the workings of the first engine core. Arete's architecture permits me to create multiple operation centers that each do specialized variants of the heavy lifting, depending on what the developer needs. The software in the alpha-06 release is really just a very limited proof-of-concept for a rapidly evolving platform.

I'm trying at each turn to make sure that my architecture is theoretically grounded and that I'm not writing myself into a corner. I'd like to be able to support intensional semantics and possible worlds syntax, which are really necessary for any useful Alethic or Tense logic implementation.
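
As a reminder of what the possible-worlds machinery has to support, here's a toy evaluation of necessity and possibility over a hand-rolled Kripke frame. The representation is invented for the sketch and has no relation to the eventual Arete API.

```java
/**
 * Minimal possible-worlds sketch: three worlds, an accessibility
 * relation, and the evaluation of "necessarily p" and "possibly p"
 * at a world.
 */
class KripkeSketch {
    public static void main(String[] args) {
        // p holds at worlds 1 and 2, but not at world 0.
        boolean[] p = { false, true, true };
        // accessible[w] lists the worlds reachable from w.
        int[][] accessible = { { 1, 2 }, { 2 }, {} };

        int w = 0;
        boolean necessarilyP = true;  // []p: p holds at every accessible world
        boolean possiblyP = false;    // <>p: p holds at some accessible world
        for (int v : accessible[w]) {
            necessarilyP &= p[v];
            possiblyP |= p[v];
        }
        // Note: at world 2, which accesses nothing, []p would be
        // vacuously true and <>p false, just as the semantics demands.
        System.out.println("world " + w + ": []p = " + necessarilyP
                           + ", <>p = " + possiblyP);
    }
}
```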

October 27, 2007

Researching the "bravo core"

It's been a long, fine summer and autumn, and working on the Arete rule engine has been a lot of fun and a great challenge. Most nights it's all I can do to make dinner and get in a few hours of coding before sleep and preparations for the day job beckon. Weekends at the library, camped out somewhere with a WAN connection, a power outlet, and not too much noise, are usually the only chance I have to get any serious work done. Much less blogging.

So, the result is that I have made more advancements in the Arete and CORM projects than I have shared with the public.

The first, and most significant, advancement has been the purchase of $750 worth of server and network hardware, and the installation and hardening of the Solaris 10 OS on these machines. I had to teach myself Solaris administration, which was not trivial. Establishing the machines on the network in a secure manner was also not trivial. I registered a domain and set up port forwarding so that I could access these machines remotely, not without help and encouragement from Catharine.

Then, two weeks ago, I made the most important upgrade to the system to date. This was the purpose of the entire hardware and system setup.

I installed Subversion on my Solaris server.

Now Catharine and I can work on our three projects on truly collaborative terms.

As an aside, I have fought for a long time to get SVN at the University of Minnesota, but there is no funding there for Open Source research and development. They like to use the software, but don't have the organizational vision to foster a developer community here. At least not in the Office of Information Technology.

I hold out hope that the Institute of Technology could pick up the torch on this and move it forward. I know the students are interested, but I don't know whether the departments have any idea how to integrate Open Source methods with their classroom instruction. I'm taking two computer science classes this autumn (nothing glorious, but rather reclaiming a bit of dignity on my transcript from early years, when I was working to support myself as an undergrad and underperforming in college), and I can see how using standardized unit testing frameworks would really ease the lives of the TAs in the computer science departments. If there were a central directory where people could upload their packaged code, the TAs could effectively grade by running standard tests against students' homework assignment code, which would precisely mirror the method we use in Open Source: implement the test code. You're done when my tests run.
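
To make the grading idea concrete, here's the sort of test class a TA might publish, sketched with JUnit 4. Homework1 and its reverse() method stand in for a student's submission.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

/**
 * The TA publishes this test class; a submission is done when the
 * tests run green. Homework1 stands in for the student's code and is
 * included here only so the sketch compiles.
 */
public class Homework1Test {

    /** A sample student submission. */
    static class Homework1 {
        static String reverse(String s) {
            return new StringBuilder(s).reverse().toString();
        }
    }

    @Test
    public void reversesOrdinaryWords() {
        assertEquals("olleh", Homework1.reverse("hello"));
    }

    @Test
    public void reversesTheEmptyString() {
        assertEquals("", Homework1.reverse(""));
    }
}
```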

/end rant

The result of all this effort is a new sense of acceleration in R&D. In particular, CORM has been getting a lot of my attention over the last three weeks. I've been absolutely amazed by the discovery of the JScience.org project, and immediately moved to integrate it into the CORM project: it implements the entire Quantity pattern much better than Arlow recommended, and takes a decent first shot at the Money archetype pattern.

The most drastic change I've made to my plans for the CORM project has been to scrap the Rules archetype pattern entirely and to implement the rules in Arete instead. I think Arlow's model fails to respect certain inversion-of-control principles that help us test and write good code. As such, we've inverted the paradigm and decided that CORM will remain a purely passive data-storage model. All business logic will be implemented in a rule container, and the CORM objects will reside within that container and under its management.
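
Here's a rough sketch of that inversion: the model object is purely passive data, and the container owns it and applies the business rules. All class and method names are invented for illustration; they are not CORM's or Arete's actual API.

```java
import java.util.ArrayList;
import java.util.List;

class CormSketch {
    /** Passive data: fields only, no business logic. */
    static class Party {
        String name;
        String email;
    }

    /** All business logic lives here, not on the model objects. */
    static class RuleContainer {
        private final List<Party> managed = new ArrayList<Party>();

        void admit(Party p) {
            // A business rule held by the container rather than the model.
            if (p.email == null || p.email.indexOf('@') < 0) {
                throw new IllegalArgumentException(
                        "rule violated: bad email for " + p.name);
            }
            managed.add(p);
        }
    }

    public static void main(String[] args) {
        Party p = new Party();
        p.name = "Ada";
        p.email = "ada@example.org";
        new RuleContainer().admit(p); // the logic runs in the container
        System.out.println("admitted " + p.name);
    }
}
```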

But none of this is why I'm writing today. Today I'm writing to introduce the second official engine core of the Arete project. The first engine core was appropriately called "Alpha", and I'm calling the second logic core "Bravo". The Bravo Core implements the foundation for both a modal logic platform and a real aporetic analysis platform. The purpose of an aporetics platform is to analyze relationships between rules, find the subtle contradictions that arise in our short-sighted implementations, and then employ resolution strategies to isolate and resolve these aporias.

The basis of such a system is a simple implementation of "K", Kripke's logic. In "K" I made the concept of Contradiction primary over that of boolean truth within the context of the Bravo Core, and by doing so managed to disconnect the imperative structure of Java from that of my rule assertions. In other words, truth and falsehood are no longer merely boolean values in the system, but are conditions for contradiction. The system will allow you to declare something to be true, and a second later to declare it to be false, but it will throw a Contradiction to alert you to the discontinuity in your logic. This replaces the standard behavior of assignment operators in modern programming languages.
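
A minimal sketch of that behavior, using a made-up assertion store (none of these names come from the Bravo Core itself):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Asserting a truth value records it; asserting the opposite later
 * throws rather than silently reassigning.
 */
class ContradictionSketch {
    /** Unchecked, so it can interrupt ordinary control flow. */
    static class Contradiction extends RuntimeException {
        Contradiction(String msg) { super(msg); }
    }

    private final Map<String, Boolean> assertions =
            new HashMap<String, Boolean>();

    void declare(String proposition, boolean value) {
        Boolean prior = assertions.get(proposition);
        if (prior != null && prior.booleanValue() != value) {
            throw new Contradiction(proposition + " was declared " + prior
                                    + " and is now declared " + value);
        }
        assertions.put(proposition, value);
    }

    public static void main(String[] args) {
        ContradictionSketch s = new ContradictionSketch();
        s.declare("it-is-raining", true);
        s.declare("it-is-raining", true);  // fine: consistent re-assertion
        s.declare("it-is-raining", false); // throws Contradiction
    }
}
```

Making Contradiction unchecked keeps rule declarations free of throws clauses while still halting the program the moment the logic goes inconsistent.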

Anyway, I tagged a release today. Finally. It's at the Arete Download page.

Example code is limited to test classes right now, but the Bravo Core is fully tested and operational.


May 13, 2007

Work on the new modal logic core is underway

Now that the project structure overhaul is done, I can focus my attention on the modal logic core. My first steps in this arena are wobbly, as they involve a complete overhaul of the core algorithms that undergird our system, and even a reevaluation of the API terminology.

Before I can instantiate an alethic system, I must first determine the core architecture of an extensible propositional logic language, based on "K". I'm exploiting Java's built-in concept of exceptions to handle the behavior of contradictions in the logic core, and encapsulating contradiction as a foundational principle of the entire logical system. This idea comes directly from Garson's text on modal logic.

Right now I'm experimenting with the concept of a sentence, trying to build propositions, contradictions, and conditionals all as sentences. One challenge is that a conditional composes two sentences, and I wish to keep this API distinct, for the time being, from the imperative logic API I've been using thus far. So I might be spending some time with the Organon to find meaningful vocabulary. I'll also be returning to the logical lexicon I established earlier this month.
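
Here's roughly the shape of the experiment: everything is a sentence, and a conditional composes two of them. The names below are illustrative only, not the Bravo Core's actual API.

```java
/** Everything in the experimental API is a sentence. */
interface Sentence {
    String render();
}

/** An atomic proposition. */
class Proposition implements Sentence {
    private final String symbol;
    Proposition(String symbol) { this.symbol = symbol; }
    public String render() { return symbol; }
}

/** A conditional composes two sentences: an antecedent and a consequent. */
class Conditional implements Sentence {
    private final Sentence antecedent;
    private final Sentence consequent;
    Conditional(Sentence antecedent, Sentence consequent) {
        this.antecedent = antecedent;
        this.consequent = consequent;
    }
    public String render() {
        return "(" + antecedent.render() + " -> " + consequent.render() + ")";
    }
}
```

With these pieces, new Conditional(new Proposition("p"), new Proposition("q")).render() yields "(p -> q)".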
