Sunday, February 27, 2005

Software Agent Manifesto

I've taken a first stab at enumerating all the capabilities that I think need to be put into place before we can even begin to see dramatic, widespread adoption of software agent technology. I call it my "Software Agent Manifesto".

Granted, it's a work in progress, but we do need to look at the big picture so that we can have at least a little confidence that we're moving in the right direction.

Some additional, more recent thoughts are included on my "Random Thoughts on Software Agents" page.

Please feel free to comment on any of this, either here in the blog or via email.

Agent programming for children

Given the immense challenges of enabling computer software to operate autonomously and to cope with highly dynamic environments, the task would seem almost insurmountable. On top of the raw programming challenge, we would ask a simple question: How can the programming of software agents be made easy? And I would extend that question a little further: How can we make the programming of software agents so easy that even children can do it? That would seem a very tall order, in fact a seemingly preposterous idea. But I do think it is reasonable, and the solution to the original problem may actually enable children to program agents as well.

Let me explain.

Years ago, the mere thought of children programming computers was too absurd to even consider, other than for a few odd geniuses. Then came MIT researcher Seymour Papert, who invented the Logo programming language and system, which actually enabled children to write simple programs that did amazing things with a primitive robot called "the turtle". It was a real breakthrough.

Alan Kay was impressed by the Logo project and strove to make his Smalltalk system suitable for children as well, including its latest rendition, called Squeak and billed as a "media authoring tool". Significant attention has been given to using Squeak as a learning tool for children.

What occurs to me is that children and adults (including professionals) need an agent programming tool as easy to use as Logo and Squeak. It's not so much that adult professionals can't master the intricacies of programming adaptive autonomous agent software, but that the process is so complex, tedious, and error-prone that a dramatically simpler metaphor is needed. In fact, some would argue that a human simply cannot directly program an agent that must cope with a wide range of emergent phenomena.

Neither Logo nor Squeak provides the actual metaphor needed for programming software agents, but they do provide clues. The most important is that we need to factor the problem into two parts: the overall metaphor that can be embodied in the agent system software, and the essential "controls" that would be implemented by that software and made available to the user to effectively control or "program" the agent.

To use the example of a robot, all of the raw capabilities of the robot, such as how to use an effector to pick up and carry an object, would be pre-programmed into the robot's system software, but the planning of what to pick up, when to do it, and what to do with the object once picked up would be in the domain of controlling or "programming" the robot.
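To make that split concrete, here is a minimal sketch in Python. The Robot class and its methods are purely hypothetical; the point is that the raw skills live in the system software while the user-level "program" merely decides what to do with them:

```python
# A minimal sketch of the capability/control split. The Robot class is
# hypothetical: its methods stand in for pre-programmed system software.

class Robot:
    """System software: raw capabilities baked in at the factory."""

    def pick_up(self, obj):
        # Low-level effector control would live here, hidden from the user.
        print(f"picking up {obj}")

    def carry_to(self, obj, place):
        print(f"carrying {obj} to {place}")

    def put_down(self, obj):
        print(f"putting down {obj}")


def tidy_room(robot, toys):
    """User level: the 'program' decides what to pick up, when, and why."""
    for toy in toys:
        robot.pick_up(toy)
        robot.carry_to(toy, "toy box")
        robot.put_down(toy)


tidy_room(Robot(), ["ball", "truck"])
```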

The essence of a programming system for software agents would be the ability to define goals, overall plans for how to achieve those goals, and the set of sub-goals, "constraints", and priorities that must be met, and then to feed all of that information ("the agent program") into an analyzer that structures it in a form that can be processed by an "evolutionary programming" algorithm.
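Here is a rough sketch of what such an "agent program" might look like as structured data rather than procedural code; the field names (priority, constraints, subgoals) are my own illustrative assumptions, not any real agent-programming standard. An analyzer would flatten a structure like this into the parameters that guide the search algorithm described next:

```python
# A hypothetical "agent program" expressed as goals, sub-goals, constraints,
# and priorities -- data for an analyzer, not a sequence of steps.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    priority: int = 1                                 # higher = more urgent
    constraints: list = field(default_factory=list)  # predicates on state
    subgoals: list = field(default_factory=list)     # Goals to meet first

deliver = Goal(
    name="deliver package",
    priority=5,
    constraints=[lambda state: state["battery"] > 0.2],  # never run flat
    subgoals=[Goal("locate package"), Goal("plan route", priority=2)],
)
```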

The essence of an evolutionary programming algorithm is that it is a "search" function that repeatedly tries to find a path from a starting point to a specified end state. The parameters that are fed into the algorithm effectively guide it.

The bottom line is that instead of programming an agent the way the Logo turtle is programmed, with a sequence of steps ("do this, then do that"), the agent programmer focuses on defining the "guidance" that the evolutionary programming algorithm needs to sift through potential solutions and "find" an acceptable one.
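To illustrate guidance rather than steps, here is a toy evolutionary search in Python. Notice that the fitness function encodes the goal and a mild constraint; nowhere does the "program" spell out the sequence of moves. All of the specific numbers are arbitrary choices for the demo:

```python
# A toy illustration of evolutionary programming as guided search: evolve a
# sequence of moves that takes an agent from a start state to a goal state.

import random

START, GOAL, LENGTH = 0, 7, 10
MOVES = [-1, 0, 1]                      # the agent's primitive actions

def run(plan):
    """Execute a candidate plan and return the resulting state."""
    state = START
    for move in plan:
        state += move
    return state

def fitness(plan):
    # Guidance, not instructions: reward ending near the goal, and mildly
    # penalize wasted motion (a stand-in for a "constraint").
    return -abs(GOAL - run(plan)) - 0.01 * sum(abs(m) for m in plan)

def mutate(plan):
    """Randomly alter one step of a plan."""
    i = random.randrange(LENGTH)
    return plan[:i] + [random.choice(MOVES)] + plan[i + 1:]

# Random initial population of candidate plans.
population = [[random.choice(MOVES) for _ in range(LENGTH)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    # Keep the 10 best, refill with mutated copies of survivors.
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(20)]

best = max(population, key=fitness)
print(best, "->", run(best))            # should land on (or near) GOAL
```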

Of course there will be a lot of "trial and error" as we slowly grasp how to go about programming an agent. Simulators and training environments will allow us to experiment wildly without causing any harm.

Frankly, many children will do far better at discovering how to best control software agents than most "professional" adults who carry around too many rigid biases and too much "intellectual baggage" that must painfully be discarded to go back into child-like discovery-mode.

The underlying agent programming system will in fact have some rudimentary simulation capability built in, so that it can at least partially evaluate potential solutions before picking the one to pursue. Better guidance from the programmer may not be required, but may permit the agent system to perform better. The agent system can also give the programmer feedback so that the "program" can be updated to correct inefficiencies.
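Here is a bare-bones sketch of that evaluate-then-pick loop, including a feedback channel back to the programmer. The simulate() scoring function and the threshold are placeholder assumptions:

```python
# A minimal sketch of plan selection with built-in simulation and programmer
# feedback. simulate() is assumed to return a score in [0, 1].

def choose_plan(candidates, simulate, threshold=0.8):
    """Partially evaluate candidate plans in simulation before committing."""
    scored = sorted(candidates, key=simulate, reverse=True)
    best = scored[0]
    # Feedback channel: plans the simulator rated poorly, so the programmer
    # (or a learning component) can refine the guidance for next time.
    weak = [plan for plan in scored if simulate(plan) < threshold]
    return best, weak

best, weak = choose_plan(
    candidates=[["left"], ["right"], ["wait"]],
    simulate=lambda plan: {"left": 0.9, "right": 0.4, "wait": 0.1}[plan[0]],
)
print("chosen:", best, "needs work:", weak)
```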

In addition, the agent system can also learn from its "experience" and feed guidance back into future executions.

In summary, although the goal is to make it feasible and easy for professionals to program software agents, the path to get there may well have the side effect of enabling children to program software agents as easily.

Postscript: The "dumb" Logo turtle could be turned into a full-fledged autonomous robot, and then a Logo-like language could be used to enable even children to "program" the turtle to "discover" things in its environment and then interact with what has been discovered. In particular, the neo-turtle could be taught to "play" with objects and even people. In fact, the children could "teach" their turtles how to interact with other turtles. This could be truly amazing stuff.

Saturday, February 26, 2005

Nature-Inspired Computing

A new book entitled "Handbook of Research on Nature Inspired Computing for Economy and Management" is in the process of being edited by Dr. Jean-Philippe Rennard, Senior Professor, Grenoble Graduate School of Business, Grenoble, France. From his blurb:
The advance of computer science and the remarkable growth of computing power over the last thirty years have made the computer a fantastic tool to cope with complexity. The emergence of nature-inspired computing is one of the most amazing achievements of this research. The universality of the computing techniques inherited from nature, i.e. their application in biology, physics, and engineering as well as in economy and management, clearly demonstrates their depth. These new tools are used in two different ways, both highly promising for economy and management: emergent (bottom-up) simulations and optimization. Emergent simulations lead to a better understanding of complex interactions and to original theoretical approaches. Nature-inspired algorithms for optimization lead to efficient, supple and adaptable tools.

Nature-inspired computing has some real potential for helping to fuel the shift to software agent technology. Whether the model is swarms, colonies, or other natural metaphors, software agents seem well-suited for attacking complexity and modeling extremely dynamic systems.

Friday, February 25, 2005

Regulation of Multi-Agent Systems

There will be an interesting workshop on the regulation of multi-agent systems entitled "Agents, Norms and Institutions for Regulated Multiagent Systems" at the 2005 Conference on Autonomous Agents and Multiagent Systems in July in Utrecht, The Netherlands. As the workshop blurb notes:

Multi-agent systems are often understood as complex entities where a multitude of agents interact, usually with some intended individual or collective purpose. Such a view usually assumes some form of structure, or set of norms or conventions that articulate or restrain interactions in order to make them more effective in attaining those goals, more certain for participants or more predictable. The engineering of effective regulatory mechanisms is a key problem for the design of open complex multi-agent systems, so that in recent years it has become a rich and challenging topic for research and development.

Of the many possible ways of looking at the problem of regulating multi-agent systems, this workshop focuses on a normative approach, based on the use of Norms in Artificial Institutions. Lately there has been an explosion of new approaches, both theoretical and practical, exploring the use of norms as a flexible way to constrain and/or impose behaviour and these are reflected in specifications of norm languages, agent-mediated electronic institutions, contracts, protocols and policies.

The workshop invites specialists from various fields to discuss conceptual, formal and technical aspects that bear upon the formalization, design, construction and deployment of regulated multi-agent systems. The workshop intends to bring together active researchers to present and debate recent developments.

See the workshop web page for further details.

Multi-Agent Systems and Complexity

There will be an interesting session on Multi-Agent Systems and Complexity at the Complexity, Science and Society conference coming up in September in Liverpool, UK. Modeling software agents and their interactions becomes an enormous issue as software agents proliferate and the interactions among them grow more intense; such a collection of interacting agents becomes known as a Complex Adaptive System (CAS). The conference session blurb notes that:
Agent models have long been applied in economics and the social sciences as models of complex phenomena. In recent years, the design and study of systems of software agents has arisen in Computer Science, where it promises to change the prevailing object-oriented paradigm in software engineering (see: Luck, McBurney and Preist 2003, Zambonelli and Parunak 2003). Such agent models treat the individual agents as intelligent, autonomous entities engaged in purposeful interaction with one another, and study both the decision-making processes of the individual agents and the mechanisms for interaction between them. This deeply theoretical and very applied work in Computer Science has created the possibility of significantly more sophisticated multi-agent computer models of real-world complex, adaptive systems.

Conversely, it is possible to conceive of complex computational systems, such as the Internet, as systems of interacting, intelligent agents. The design, management and control of these systems may therefore benefit from learnings in the social and physical sciences regarding complex, adaptive systems. Several major computer hardware vendors have recently announced initiatives in which these ideas figure prominently: HP's utility computing, IBM's on demand computing, and Sun's N1 systems.

This conference session aims to explore these ideas from both directions: multi-agent systems (MAS) as models of complex phenomena, and complex computational systems viewed as systems of interacting agents. Because both threads involve several theoretical and applied disciplines, the session hopes to generate multi-disciplinary conversations, debate and exchange.

Amazing agent papers to be presented at the AAMAS-05 conference

The list of accepted papers for the 2005 Conference on Autonomous Agents and Multiagent Systems is just out. It's quite an amazing and long list that shows the breadth and depth of interest in software agents, but it also illustrates the extent to which many research questions remain open and the overall industrial-scale technology really isn't there yet, let alone ready for widespread prime-time commercial deployment.

In addition to the formal papers to be presented at the scheduled sessions, quite a number of papers will also be presented informally as "posters".

The intensity of the research efforts is truly amazing and certainly thought-provoking and exciting, but I'm also disappointed that we still have quite a distance to go before a lot of our visions can be made into reality.

Thursday, February 24, 2005

Environments for multi-agent systems

A workshop entitled "Environments for Multiagent Systems (E4MAS'05)" will be held July 25-26, 2005 at the next Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) in Utrecht, The Netherlands. As the workshop blurb notes, with all the attention given to software agents themselves, not nearly enough attention is given to the environments in which these agents operate:
There is a general agreement in the multiagent systems research community that the environment in which agents are situated is an essential part of any multiagent system. Yet, most researchers and developers either fail to integrate the environment as a first-order abstraction in models and tools for multiagent systems, or minimize the environment's role within the overall system. Functionality associated with the environment is typically limited to message transport or broker infrastructure.

Researchers working in the domain of situated multiagent systems have demonstrated how agents can exploit the environment to share information and coordinate their behavior. For example, digital pheromones and gradient fields can guide agents in their local context and as such facilitate the coordination of a community of agents in a decentralized way. Several practical applications have shown that the environment can contribute to manage complex problems, such as supply chain systems, network management, manufacturing control or multiagent simulation. Clearly, if we limit the functionality of the environment to only message transport, we neglect a rich potential of possibilities for the paradigm of multiagent systems.

The goals of the E4MAS workshop series are to promote the environment as a first-order abstraction in multiagent systems and to further develop the discussion forum on environments.

More details can be found on the workshop web page, as well as the page for scope and main topic areas.

There is also a white paper entitled "Environments for Multiagent Systems: State-of-the-Art and Research Challenges".

Wednesday, February 23, 2005

Machine-understandable Web service descriptions

There is an intriguing workshop at the next World Wide Web Conference (WWW 2005) entitled "Web Service Semantics: Towards Dynamic Business Integration", which concerns how to achieve a much more robust level of integration of Semantic Web services (SWS). They describe the workshop as:
The description of Web services in a machine-understandable fashion is expected to have a great impact in the areas of e-Commerce and Enterprise Application Integration, as it can enable dynamic and scalable cooperation between independently developed systems and organisations.  These potential benefits have led to the establishment of an important class of research activities, both in industry and academia, aimed at the practical deployment of declarative, semantically rich service and process descriptions and their use across the Web service lifecycle.
This research, which draws on a variety of fields such as knowledge representation, automated software engineering, process modeling, workflow, and software agents, is happening under several headings, including Semantic Web services (SWS), Grid services and Semantic Grid services, and (some aspects of) Service-Oriented Computing.  For ease of reference, in this call we refer to this general area of work as Semantic Web services (SWS).  We note that here, "Semantic Web" does not denote any particular set of standards, although much work in this area does build on products of the Semantic Web activity at W3C.  In addition, many SWS efforts are aligned with rapidly developing commercial Web service standards such as WSDL and UDDI.
I find this particularly intriguing since one of my pet beefs is that there is too much hand-coded logic (code) floating around, which almost assures that software will be buggy, poorly integrated, inflexible, and destined to perform poorly. Machine-understandable descriptions are clearly an important paradigm for the design of future software system architectures.
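To illustrate the contrast with hand-coded integration logic, here is a toy sketch of service discovery over declarative, machine-understandable descriptions. The description schema is invented for the example; real SWS work builds on standards such as WSDL and UDDI plus semantic annotations:

```python
# Services declare what they consume and produce; a generic matcher finds
# them. No per-service integration code is required. The schema is invented.

services = [
    {"name": "FX-Quote", "inputs": {"currency_pair"}, "outputs": {"rate"},
     "category": "finance"},
    {"name": "Geocoder", "inputs": {"address"}, "outputs": {"lat", "lon"},
     "category": "mapping"},
]

def discover(have, want):
    """Return services whose declared inputs/outputs satisfy the request."""
    return [s for s in services
            if s["inputs"] <= have and want <= s["outputs"]]

print(discover(have={"currency_pair"}, want={"rate"}))   # -> FX-Quote
```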
 
For more details, see the workshop web site.
 

Monday, February 21, 2005

Is ontology overrated?

Clay Shirky has a session at the upcoming March 14-17, 2005 O'Reilly Emerging Technology Conference on the topic "Ontology is Overrated: Links, Tags, and Post-hoc Metadata". He argues that ontology was needed for physical books (the infamous card catalog) simply to keep track of the books, but that with modern technology such a catalog is difficult to maintain and adds little if any value. As he says:
As we have learned from the Web, when data is decoupled from physical presence, it is fluid enough to be grouped differently by different readers, and on different days. The Web's main virtue, in handling data, is to transmute organization from an a priori, content-based judgment to one that can be ad hoc, context-based, socially embedded, and constantly altered. The Web frees us from needing to argue about whether The Book of 5 Rings "is" a business book or a primer on war--it is plainly both, and not only are we freed from making that judgment firmly or in advance, we are freed from needing to make it explicit at all.
He does have some good points, but I still suspect that ontology has some relevance to the web of the future. It may simply be that ontologies really belong to the realm of active software agents and will be used in more of a dynamic matching mode than the kind of author-driven static tagging that Shirky argues against. In other words, it would be better to use data mining tools to dynamically classify content, especially since our classification strategies will evolve over time.
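Here is a crude sketch of what I mean by dynamic matching: classification happens at query time, against whatever categories matter today, rather than being fixed by the author in advance. Plain word overlap stands in for a real data mining technique:

```python
# Query-time classification: score a text against ad hoc categories.
# The category term sets are arbitrary examples.

def overlap(text, category_terms):
    """Fraction of a category's terms that appear in the text."""
    words = set(text.lower().split())
    return len(words & category_terms) / len(category_terms)

book = "a primer on strategy, conflict, and winning in war and business"
categories = {
    "business": {"business", "management", "strategy", "market"},
    "war":      {"war", "conflict", "strategy", "combat"},
}
scores = {name: overlap(book, terms) for name, terms in categories.items()}
print(scores)   # the book can legitimately score in both categories
```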

So, I think I would retitle the topic as "Ontology: Misused".

Sunday, February 20, 2005

How do we teach a software agent?

If software agents are to become truly "intelligent", the question arises of how to endow them with knowledge.

The immediate reply is that feeding them pre-digested knowledge is not the way to go; teaching them how to learn on their own seems much more promising.

So, the question can be restated as "How do we endow software agents with the ability to learn?".

Sure, we will in fact "feed" agents plenty of pre-digested knowledge, in much the same way we feed ourselves knowledge, using books, text, numbers, diagrams, images, media, etc.

The real essence of the point is that intelligence is much more than simply a large library of knowledge. The learning process is all about figuring out how to integrate and mesh all of that knowledge to form a cognitive structure that can support reasoning, logic, intuition, creativity, decision-making, planning, execution of plans, flexibility, risk assessment, risk-taking, and learning itself.

Saturday, February 19, 2005

Three-Level Agent Interaction Negotiation and Connection

In order to maximize the flexibility and robustness of agent-to-agent negotiation and binding, I propose a three-level scheme.

Level One is the level of the agents themselves.

Level Two is the level of intermediaries that are able to work with agents as their clients. Each agent would have some number of Level Two intermediary agents with whom it has established a level of trust and with whom it is willing to work.

Level Three is the level of intermediaries for the intermediaries. This is the level at which "first contact" occurs between two agents. Each Level Two intermediary agent has some number of intermediary-to-intermediary agents (I2I agents) with whom it has established a level of trust and with whom it is willing to work.

Level One agents offering services would "advertise" to their Level Two intermediary agents who in turn advertise to their Level Three intermediary-to-intermediary agents who keep track of those advertised services.

Level One agents seeking services would notify their trusted Level Two intermediary agents of their interest. Those trusted Level Two intermediary agents would in turn notify their trusted Level Three intermediary-to-intermediary agents of the services that their Level One client agent seeks. Each Level Three agent would query its catalog of advertised services and proceed to competitively negotiate a "connection" (interaction contract). One or more I2I agents would become "primary contractors" and others might become "backup contractors".
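Here is a bare-bones sketch of the three levels in Python. All of the class and method names are invented for illustration; the point is simply that Level One agents talk only to their trusted Level Two intermediaries, and only the Level Three brokers hold the catalog and make the match:

```python
# Invented classes for the three-level scheme. Level One agents are only
# implied; intermediaries (Level Two) advertise and seek on their behalf;
# I2I brokers (Level Three) hold the catalog and make the match.

class I2IBroker:                        # Level Three
    def __init__(self):
        self.catalog = {}               # service name -> providing intermediary

    def advertise(self, service, intermediary):
        self.catalog[service] = intermediary

    def match(self, service):
        return self.catalog.get(service)


class Connection:                       # the negotiated "interaction contract"
    def __init__(self, seeker, provider):
        self.seeker, self.provider = seeker, provider


class Intermediary:                     # Level Two
    def __init__(self, brokers):
        self.brokers = brokers          # trusted I2I agents

    def advertise(self, service):
        for broker in self.brokers:
            broker.advertise(service, self)

    def seek(self, service):
        for broker in self.brokers:
            provider = broker.match(service)
            if provider is not None:
                return Connection(self, provider)
        return None                     # no contractor found


broker = I2IBroker()
seller = Intermediary([broker])
seller.advertise("translate")
buyer = Intermediary([broker])
link = buyer.seek("translate")          # Level One agents never meet directly
print(link is not None)                 # True
```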

Two Level One agents would never interact purely in a direct manner, but rather through their respective Level Two intermediaries. If a connection is disrupted, the Level Two intermediaries would then seek to "fix" the disruption. The fix may in fact require negotiating a new agent-to-agent connection. The Level One agents would be notified of all disruptions via object-oriented event notifications and given the opportunity to continue with the fail-safe new connection or to abort if appropriate. The Level One agents could be configured to blindly accept all re-negotiated connections. In other words, the developer of a Level One agent would never need to "worry" about the robustness of any connection. In fact, the whole point of the three-level arrangement is to maximize the odds of a successful connection and to maximize the odds that a connection can be renegotiated if disrupted.
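And here is a sketch of that disruption-handling flow, reusing the seek() idea from the previous sketch. The event name and the auto_accept flag are assumptions for illustration:

```python
# Disruption handling: Level Two repairs the break, then notifies its
# Level One client through an event callback. Names are hypothetical.

class Level1Agent:
    def __init__(self, auto_accept=True):
        self.auto_accept = auto_accept

    def on_connection_event(self, event, connection):
        # By default the agent blindly accepts every renegotiated
        # connection; the developer never "worries" about robustness.
        if event == "disrupted-and-repaired":
            return "continue" if self.auto_accept else "abort"
        return "continue"


def handle_disruption(intermediary, client, service):
    """Renegotiate a broken connection, then let the client decide."""
    new_connection = intermediary.seek(service)   # as sketched above
    if new_connection is None:
        return None                               # no backup contractor found
    decision = client.on_connection_event("disrupted-and-repaired",
                                          new_connection)
    return new_connection if decision == "continue" else None
```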

Level Two intermediary agents may also seek to re-negotiate a connection based on performance. In fact, a host system might signal intermediary agents to downgrade or upgrade connectivity based on load measurements.

Software Agents and Grid Computing

Software agents are frequently mentioned in the context of Grid Computing. For example, there is an upcoming conference entitled "2nd International Conference on GRID SERVICES ENGINEERING AND MANAGEMENT (GSEM'05)" to be held in Erfurt, Germany on September 19-22, 2005.

The conference blurb notes that:

The Grid has emerged as a global platform to support on-demand virtual organizations for coordinated sharing of distributed data, applications and processes. Service orientation of the Grid also makes it a promising platform for seamless and dynamic development, integration and deployment of service-oriented applications. The application components can be discovered, composed and delivered within a Grid of services, which are loosely coupled to create dynamic business processes and agile applications spanning organizations and computing platforms. The technologies contributing to such Grids of services include Service-Oriented Computing, Semantic Web, Grid Computing, Software Engineering, Business Process Technology, and Agent Technology.
Let us know what you think about the potential of grid computing as a platform for software agent technology. Comment here on the blog, or drop us an email.

How intelligent does a software agent need to be to be considered an intelligent software agent?

The term Intelligent Agent (or Intelligent Software Agent) is thrown around as if it had some real meaning, but there is no standard definition for the term.  So, the question remains:  How "intelligent" does a software agent need to be to be considered an Intelligent Software Agent?
 
Let us know what you think, either by commenting here in the blog, or by dropping us an email.

Jack Krupansky

Jack Krupansky runs this blog, as well as its associated web site, www.Agtivity.com.

Jack is the principal of Base Technology, a sole proprietorship focused on the development of advanced software technology; he also offers software consulting services. He is the inventor of the Liana C/C++-like object-oriented programming language.

Click here to view his resume.

Agtivity - Advancing the Science of Software Agent Technology

Agtivity is dedicated to turning the construction and deployment of Software Agent Technology (Intelligent Agents, Intelligent Software Agents, Autonomous Agents, Autonomous Software Agents, Multi-Agent Systems) into a science rather than folklore and ad hoc art and craft.

We have been following the emerging field of software agents (autonomous agents and multi-agent systems) since 1996. The field has always looked very promising, but our visions and expectations have almost always outstripped reality. Our commitment is unwavering, but the field is simply not yet ready for prime-time.

Our main web site is located at www.Agtivity.com.

Please check out our very extensive agent links page for Software Agent Technology.

Please peruse our book list for Software Agent Technology.

Please feel free to comment here on the blog, or drop us an email.