Sunday, April 24, 2005

Nerve cells and Ideals and Ideal Programming

I've started writing a little about my concepts of Ideals and Ideal Programming for the conceptualization of software agents.  It will be quite some time before I flesh out these concepts sufficiently for them to sound coherent, but I expect to incrementally make a little progress now and then.
 
One comment I'd like to make now is that there may in fact be at least some parallel between the concept of an Ideal and a biological nerve cell.  A nerve cell has some number of dendrites that enable it to receive electrical impulses, and one axon, which it uses to send out an electrical impulse.  Nerve cells can be quite short (in the brain) or quite long in some parts of the body.  The basic idea here that relates to an Ideal is that, unlike a traditional software component or layer of abstraction, an Ideal can take inputs across many layers of software.  A difference from a nerve cell, which has only its single axon, is that an Ideal can also send out messages across many levels of software.
 
Although in practice messages between processes running on different computers must travel through many layers of software (e.g., a classic TCP/IP "stack"), there is no need for the higher-level applications to have any knowledge of those lower layers.  That layering is incidental to the structuring of a distributed application itself.  An Ideal would in fact transcend actual application logic layers.
 
In reality, Ideals would not actually be transcending layers of traditional software components because those layers would no longer actually exist.  The new "layering" would be more abstract than real since it would be a statistical artifact of the sum of all Ideals that happen to have dendrites or axons in the vicinity of various modular components.
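To make the analogy a little more concrete, here is a minimal sketch of how an Ideal's cross-layer connectivity might be modeled. This is purely illustrative of the dendrite/axon idea described above; every name here (Ideal, grow_dendrite, grow_axon, fire) is hypothetical, not an established API.

```python
# Hypothetical sketch: an "Ideal" that taps inputs (dendrites) and emits
# outputs (axons) at arbitrary software layers, rather than communicating
# only with adjacent layers. All names are illustrative.

class Ideal:
    def __init__(self, name):
        self.name = name
        self.dendrites = {}   # layer name -> handler for messages arriving there
        self.axons = {}       # layer name -> list of subscriber callbacks

    def grow_dendrite(self, layer, handler):
        """Listen for messages at any layer of the system."""
        self.dendrites[layer] = handler

    def grow_axon(self, layer, subscriber):
        """Register a recipient at any layer for this Ideal's output."""
        self.axons.setdefault(layer, []).append(subscriber)

    def receive(self, layer, message):
        """A message arrives at one layer; the Ideal may respond at others."""
        handler = self.dendrites.get(layer)
        if handler:
            handler(message)

    def fire(self, layer, message):
        """Send output to every subscriber registered at the given layer."""
        for subscriber in self.axons.get(layer, []):
            subscriber(message)

# An Ideal spanning a low "network" layer and a high "ui" layer:
log = []
ideal = Ideal("resource-monitor")
ideal.grow_dendrite("network", lambda m: ideal.fire("ui", "alert: " + m))
ideal.grow_axon("ui", log.append)
ideal.receive("network", "packet loss")
# log == ["alert: packet loss"]
```

The point of the sketch is only that input and output attachment points are chosen per layer, independently, so the "layering" is a property of where dendrites and axons happen to attach, not of a fixed component hierarchy.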
 

Saturday, April 23, 2005

Difficulty with Google Alerts for software agents

Three months ago I registered to receive a "Google Alert" for the term "software agent".  To date, I've received only two alert messages.  The first, more than a month ago, was for a paper dated 1995.  The second, received this morning, was for a web page that has no references to the term "software agent", but Google's cached page header says that "These terms only appear in links pointing to this page".  Furthermore, the main text on the page (and also the text in Google's results list) is "Sorry there has been an error. The article you were looking for could not be found".  I know that I myself have authored numerous web pages that Google is capable of seeing, and I've gotten no alerts on them.  So far, the Google Alert feature has not been very useful, at least for me.
 
As a test, I just now registered an alert for the terms "intelligent agent" (no quotes).
 
I'm thinking that I may need my own crawler and text data mining capability so that I can do a better job of tracking the evolution of the emerging software agent field.  I don't feel up to it, but I'm not seeing the kind of tools I really need.
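The core of such a tool, the part Google's alerts apparently got wrong, is deciding whether a page's body text actually contains the phrase, as opposed to the phrase merely appearing in links pointing at the page, or the page being an error stub. Here is a sketch of just that matching core; crawling and fetching are omitted, and the function name and data shapes are my own invention for illustration.

```python
# Sketch of the phrase-matching core of a do-it-yourself alert tool.
# `pages` maps URL -> already-fetched page text; fetching is omitted.

def find_phrase_hits(pages, phrase):
    """Return the URLs whose body text actually contains the phrase."""
    phrase = phrase.lower()
    hits = []
    for url, text in pages.items():
        if phrase in text.lower():
            hits.append(url)
    return sorted(hits)

pages = {
    "http://example.com/a": "A survey of software agent architectures.",
    "http://example.com/b": "Sorry there has been an error.",
}
print(find_phrase_hits(pages, "software agent"))
# -> ['http://example.com/a']
```

A real version would of course need stemming, deduplication, and freshness tracking, but even this trivial body-text check would have filtered out the bogus alert described above.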
 
In truth, this is a great application for software agent technology itself, but we presently have neither the tools nor the infrastructure in place to easily implement such an application.  The fact that even Google, with all its brain power and financial resources, has not mastered even simple alerts speaks volumes about the "state of the art".
 
The bottom line is that much research is needed in distributed computing, machine intelligence, and software agent technology before we can even begin to make a dent in some of these problems.
 

Monday, April 11, 2005

FIPA to become an IEEE Computer Society standards committee

FIPA, the Foundation for Intelligent Physical Agents, has opted to pull in its horns and become an IEEE Computer Society standards committee, namely the "FIPA Standards Committee."

It's very difficult to say whether this is a turn for the better or for the worse for the software agent community, but it's certainly a needed evolutionary step.

My main concern is that far too many sub-domains of the field of software agent technology are still in desperate need of much deeper research and that so many of the standardization efforts are simply premature.

-- Jack Krupansky

Sunday, April 10, 2005

The Nature of Identity

I've written a very rough draft white paper on issues related to identity, entitled "The Nature of Identity". The four key issues that relate to software agents are identifying the agent itself, identifying resources that the agent wishes to access, identifying entities that the agent wishes to interact with, and identifying the entity on whose behalf the agent is acting. Certainly there are issues of and as well.

A software agent needs to have an identity that is derivative of the identity of the entity on whose behalf the agent is acting. That's not to say that an entity interacting with the agent could necessarily gain access to the identity of the controlling entity.

-- Jack Krupansky

Wednesday, April 06, 2005

Goals versus Tasks

One of the key distinguishing characteristics between a traditional computer program and a software agent is that a program is focused on performing tasks whereas an agent works towards goals. That's a very important distinction, but it's also very difficult to deeply comprehend, let alone put into practice.

A task usually has the form "do X" or "do X using Y". A task is very prescriptive. A task-oriented computer program is essentially pre-programmed with all of the instructions needed to perform that task. A program is essentially a solution contrived by a developer who has analyzed a problem.

A goal usually has the form "satisfy X [and Y and Z...]". A goal is more descriptive than prescriptive. A goal is more about what to accomplish rather than how to go about it. A goal-oriented software agent is free to make unpredictable choices and follow novel paths, provided only that those paths finally accomplish the stated goal(s). A goal-oriented approach is advisable when the resources and paths are not known ahead of time with any degree of certainty. An agent is essentially the embodiment of a refinement of a problem statement, with solutions to be sought and evaluated on a dynamic basis.

A program is focused on pursuing a pre-programmed solution, whereas an agent focuses on dynamically refining the problem statement and seeking a solution that matches the refined problem statement and the current problem environment.
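The contrast can be sketched in a few lines of code. Both functions and their names are mine, purely for illustration: the task runner executes a fixed recipe, while the goal pursuer repeatedly tests a success condition and chooses among candidate actions until it is satisfied.

```python
def run_task(steps, state):
    """Task: do X, then Y, then Z, exactly as prescribed by the developer."""
    for step in steps:
        state = step(state)
    return state

def pursue_goal(goal_satisfied, actions, state, score, max_tries=100):
    """Goal: repeatedly apply whichever action most improves the score,
    stopping only when the goal test passes (or we give up)."""
    for _ in range(max_tries):
        if goal_satisfied(state):
            return state
        # The agent chooses dynamically; no fixed sequence is prescribed.
        state = max((action(state) for action in actions), key=score)
    return state

# A trivial numeric example: reach at least 10, starting from 0.
result = pursue_goal(lambda s: s >= 10,
                     [lambda s: s + 1, lambda s: s + 3, lambda s: s - 1],
                     0, score=lambda s: s)
# -> 12 (it greedily picks +3 each round; the exact path was never prescribed)
```

The task runner will fail if its recipe no longer fits the environment; the goal pursuer degrades more gracefully, because the success test, not the instruction sequence, defines what counts as done.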

There is in fact a vast gray area between these two concepts, and as yet we have very few tools, techniques, or guidelines for analyzing problems and solutions to determine which is which and which has more merit in a given situation.

Just as a parting example, an anti-lock braking system for a motor vehicle is closer to being goal-oriented than task-oriented. There is no fixed sequence of instructions to execute, and feedback and adaptation are critical requirements. A simple cruise-control system is also goal-oriented rather than strictly task-oriented: there is no fixed sequence that will achieve the result of a relatively stable speed. On the other hand, monitoring a news feed for a set of fixed keywords is more task-oriented, since essentially no feedback or adaptation is required.
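The cruise-control example can be captured in a tiny feedback loop. This is a toy proportional controller with made-up constants, not a real automotive design; the point is that the stable speed emerges from repeated feedback, not from any fixed instruction sequence.

```python
# Toy goal-oriented feedback loop: nudge throttle toward a target speed.
# gain and drag are arbitrary illustrative constants.

def cruise(target, speed, gain=0.5, drag=0.1, steps=50):
    for _ in range(steps):
        error = target - speed             # feedback: how far off are we?
        throttle = gain * error            # adapt effort to the error
        speed += throttle - drag * speed   # "plant": throttle minus drag losses
    return speed
```

Note that a proportional-only controller like this settles slightly below the target (drag leaves a steady-state offset: for a target of 60 it converges to exactly 50 with these constants); real controllers add an integral term to remove that offset. Even this flaw illustrates the point: the behavior is a property of the goal-and-feedback loop, not of a prescribed task.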

This is only the starting point of this discussion.

-- Jack Krupansky

Sunday, April 03, 2005

Evolution out of the Code Swamp

As I've ruminated about the challenges of developing autonomous software agents over the past seven years, the one key obstacle that I keep coming back to is code, specifically, hand-designed, hand-written, hand-tested code. I simply can't imagine widespread development and deployment of reliable and flexible autonomous software agents using hand-designed, hand-written, and hand-tested software. It just ain't going to happen. Yes, people will try to do it anyway. Yes, some elite developers can in fact achieve success in narrowly targeted niches, but developing software agents by hand is very clearly not the way to go. We must drag ourselves out of "The Code Swamp" if we want to be serious about designing, developing, deploying, and maintaining software agent technology. The idea that we are going to manually design software that can cope with truly dynamic environments is simply not credible.

There are any number of paths that we can take, but genetic or evolutionary programming is certainly one of the most promising. Constraint programming is another. My preliminary ideas on Ideal Programming are an effort to start moving more dramatically in the right direction.
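To give a flavor of the evolutionary path, here is a deliberately tiny sketch of mutation-plus-selection search: a bit string is evolved toward an all-ones target without anyone hand-writing the "solution." This is a toy, not a production genetic algorithm, and the parameters are arbitrary.

```python
# Toy evolutionary search: structure discovered by variation and selection
# rather than written by hand. All parameters are illustrative.
import random

def evolve(length=20, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    fitness = lambda genome: sum(genome)            # count of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # selection: keep the fittest half
        children = [[bit ^ (rng.random() < 0.05)   # point mutation, 5% per bit
                     for bit in parent]
                    for parent in parents]
        pop = parents + children                    # elitism: parents survive
        if fitness(pop[0]) == length:
            break
    return max(pop, key=fitness)

best = evolve()
```

The "all ones" fitness function stands in for whatever behavioral specification an agent must satisfy; the interesting research question is how to write such specifications for agents, not how to write the code that meets them.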

We're far from a critical mass today, but the only way we're going to get there is to make sure that we're on a path that leads out of and away from "The Code Swamp."

-- Jack Krupansky

DART from NASA: Demonstration for Autonomous Rendezvous Technology

From a post on Slashdot, I found a reference to an experimental program at NASA called "Demonstration for Autonomous Rendezvous Technology, or DART". This is a great example of autonomous decision-making by computer software. The NASA site notes:
The Demonstration for Autonomous Rendezvous Technology, or DART, is a flight demonstrator vehicle designed to test technologies required to locate and rendezvous with other spacecraft. The DART mission is unique in that all of the operations will be autonomous - there will be no astronaut onboard at the controls, only computers programmed to perform functions. Developed by Orbital Sciences Corporation of Dulles, Va., the DART vehicle will be launched on a Pegasus rocket to test rendezvous, close proximity operations and its control between the vehicle and a stationary satellite in orbit.

This is more of a robotics application than a software agent, but the basic concepts are still relevant. In truth, as difficult as orbital rendezvous is, it is a relatively well-defined problem, whereas much of what we hope to achieve in the realm of software agents is to cope with very dynamic environments.

-- Jack Krupansky