Enabling computer software to operate autonomously and to cope with highly dynamic environments can seem an almost insurmountable task. On top of the raw programming challenge, I would ask a simple question: How can the programming of software agents be made easy? And I would extend that question a little further:
How can we make the programming of software agents so easy that even children can do it? That would seem a very tall order, in fact a preposterous idea. But I do think it is reasonable, and the solution to the original problem may actually enable children to program agents as well.
Let me explain.
Years ago, the mere thought of children programming computers was too absurd to even consider, other than for a few odd geniuses. Then came MIT researcher Seymour Papert, who invented the Logo programming language and system, which actually enabled children to write simple programs that would do amazing things with a primitive robot called "the turtle". It was a real breakthrough.
Alan Kay was impressed by the Logo project and strove to make his Smalltalk system suitable for children as well, including its latest rendition, Squeak, which is billed as a "media authoring tool". Significant attention has been given to using Squeak as a learning tool for children.
What occurs to me is that children and adults (including professionals) need an agent programming tool as easy to use as Logo and Squeak. It's not so much that adult professionals can't master the intricacies of programming adaptive autonomous agent software, but that the process is so complex, tedious, and error-prone that a dramatically simpler metaphor is needed. In fact, some would argue that a human simply cannot directly program an agent that has to cope with a wide range of emergent phenomena.
Neither Logo nor Squeak provides the actual metaphor needed for programming software agents, but they do provide clues. The most important is that we need to factor the problem into two parts: the overall metaphor, which can be embodied in the agent system software, and the essential "controls" implemented by that software and made available to the user to effectively control or "program" the agent.
To use the example of a robot, all of the raw capabilities of the robot, such as how to use an effector to pick up and carry an object, would be pre-programmed into the robot's system software, but deciding what to pick up, when to pick it up, and what to do with the object afterwards would be in the domain of controlling or "programming" the robot.
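To make that split concrete, here is a minimal sketch in Python (the names `Robot`, `pick_up`, and `carry_to` are purely illustrative, not an actual robot API). Note that the user-level part here is still a step-by-step routine; the paragraphs that follow describe how that gets replaced by goal-level guidance.

```python
class Robot:
    """System software: knows HOW to perform each raw capability."""

    def pick_up(self, obj):
        # Low-level sensing, grip control, and motion planning would live
        # here, hidden from the person "programming" the robot.
        print(f"picking up {obj}")

    def carry_to(self, place):
        print(f"carrying to {place}")

    def put_down(self):
        print("putting down")


# The user-level "program" is concerned only with WHAT to pick up,
# WHEN to do it, and WHAT to do with the object afterwards.
def tidy_up(robot, toys):
    for toy in toys:
        robot.pick_up(toy)
        robot.carry_to("toy box")
        robot.put_down()


tidy_up(Robot(), ["ball", "block"])
```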
The essence of a programming system for software agents would be the ability to define goals and overall plans for achieving them, to define the sub-goals, "constraints", and priorities that must be met along the way, and then to feed all of that information ("the agent program") into an analyzer that structures it in a form that can be processed by an "evolutionary programming" algorithm.
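As a rough illustration of what such an "agent program" might look like, here is a toy sketch; the `Goal` structure, its field names, and the `analyze` step are assumptions invented for this example, not a defined format.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One goal plus the guidance the agent system needs to pursue it."""
    description: str
    subgoals: list = field(default_factory=list)      # smaller goals that contribute to this one
    constraints: list = field(default_factory=list)   # conditions that must not be violated
    priority: int = 1                                  # relative importance among competing goals

@dataclass
class CandidatePlan:
    """A potential solution, summarized by what it achieves and violates."""
    achieves: set = field(default_factory=set)
    violates: set = field(default_factory=set)

# The "agent program": guidance, not step-by-step instructions.
tidy_room = Goal(
    description="all toys are in the toy box",
    subgoals=["toys located", "toys in box"],
    constraints=["never step on a toy", "done before bedtime"],
    priority=2,
)

def analyze(program: Goal):
    """Restructure the agent program into a scoring function that a
    search algorithm can use to rank candidate solutions."""
    def score(plan: CandidatePlan) -> float:
        met = sum(1 for sg in program.subgoals if sg in plan.achieves)
        broken = sum(1 for c in program.constraints if c in plan.violates)
        return program.priority * met - 10 * broken
    return score

score = analyze(tidy_room)
print(score(CandidatePlan(achieves={"toys located", "toys in box"})))  # -> 4
```

The point is that the programmer states what matters (sub-goals, constraints, priorities), and the analyzer converts that into something a search algorithm can use to rank candidates.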
The essence of an evolutionary programming algorithm is that it is a "search" function that repeatedly tries to find a path from a starting point to a specified end state. The parameters fed into the algorithm effectively guide its search.
The bottom line is that instead of programming an agent the way the Logo turtle is programmed, with a sequence of steps ("do this, then do that"), the agent programmer focuses on defining the "guidance" that the evolutionary programming algorithm needs in order to sift through potential solutions and "find" an acceptable one.
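Here is a bare-bones sketch of the kind of search loop meant here, using a grid world as a stand-in for the agent's problem. Everything below, from the action set to the scoring weights, is an assumption made for illustration; real evolutionary programming systems are far more elaborate.

```python
import random

ACTIONS = ["up", "down", "left", "right"]
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def endpoint(plan, start=(0, 0)):
    # Where the agent ends up if it executes the plan's primitive moves.
    x, y = start
    for action in plan:
        dx, dy = MOVES[action]
        x, y = x + dx, y + dy
    return (x, y)

def guidance(plan, goal=(3, 4)):
    # The programmer supplies guidance (the goal state and a preference
    # for short plans), not the plan itself; the search does the rest.
    gx, gy = goal
    x, y = endpoint(plan)
    return -(abs(gx - x) + abs(gy - y)) - 0.1 * len(plan)

def mutate(plan):
    # Randomly add, drop, or change one step of a candidate plan.
    plan = list(plan)
    op = random.choice(["add", "drop", "change"])
    if op == "add" or not plan:
        plan.insert(random.randrange(len(plan) + 1), random.choice(ACTIONS))
    elif op == "drop":
        plan.pop(random.randrange(len(plan)))
    else:
        plan[random.randrange(len(plan))] = random.choice(ACTIONS)
    return plan

def evolve(generations=200, population=30):
    pop = [[random.choice(ACTIONS)] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=guidance, reverse=True)    # rank candidates by the guidance
        pop = pop[: population // 2]            # keep the better half
        pop += [mutate(random.choice(pop)) for _ in range(population - len(pop))]
    return max(pop, key=guidance)

best = evolve()
print(best, endpoint(best))  # typically a short plan ending at (3, 4)
```

Note that the programmer only wrote `guidance`; the plan itself is found by the search.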
Of course there will be a lot of "trial and error" as we slowly learn how to go about programming an agent. Simulators and training environments will allow us to experiment wildly without causing any harm.
Frankly, many children will do far better at discovering how best to control software agents than most "professional" adults, who carry around too many rigid biases and too much "intellectual baggage" that must be painfully discarded to get back into child-like discovery mode.
The underlying agent programming system will in fact have some rudimentary simulation capability built in, so that it can at least partially evaluate potential solutions before picking one to pursue. Better guidance from the programmer may not be required, but it may permit the agent system to perform better. The agent system can also give the programmer feedback so that the "program" can be updated to correct inefficiencies.
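A sketch of how that built-in simulation step might work, again with a made-up disturbance model purely for illustration: each candidate plan is run through a crude simulator before the agent commits to one, and the result is reported back to the programmer.

```python
import random

def simulate(plan, trials=20):
    """Crude built-in simulator: estimate how often a candidate plan would
    succeed under random disturbances, without acting in the real world.
    (The 10%-per-step failure model is invented for this sketch.)"""
    successes = 0
    for _ in range(trials):
        slips = sum(random.random() < 0.1 for _ in plan)
        if slips == 0:
            successes += 1
    return successes / trials

def choose(candidates):
    """Partially evaluate each candidate in simulation, pick the best,
    and hand feedback back to the programmer."""
    scored = [(simulate(plan), plan) for plan in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    best_score, best_plan = scored[0]
    feedback = f"best candidate succeeded in {best_score:.0%} of simulated runs"
    return best_plan, feedback

plan, note = choose([["right"] * 3, ["right", "up"] * 3, ["up"] * 10])
print(plan, "--", note)
```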
In addition, the agent system can also learn from its "experience" and feed guidance back into future executions.
In summary, although the goal is to make it feasible and easy for professionals to program software agents, the path to get there may well have the side effect of enabling children to program software agents as easily.
Postscript: The "dumb" Logo turtle could be turned into a full-fledged autonomous robot, and then a Logo-like language could be used to enable even children to "program" the turtle to "discover" things in its environment and then interact with what has been discovered. In particular, the neo-turtle could be taught to "play" with objects and even people. In fact, the children could "teach" their turtles how to interact with other turtles. This could be truly amazing stuff.