Wednesday, April 25, 2012

Tested software agent server with Apache Solr trunk (4.x)

Last week I made some enhancements to the new Base Technology Software Agent Server and tested it against the recently released Apache Solr 3.6 enterprise search platform. My testing was quite simple: one test added a few documents to a Solr collection and the other performed a few queries against that collection, all via the HTTP protocol, using XML to send data and receive results.
 
Earlier this week I downloaded the latest "nightly trunk build" for the next generation of Solr, referred to simply as "Solr trunk" or "4.x". My tests from Solr 3.6 worked fine except for one test case that checked the raw XML text, where there was one nuance of difference: in 3.6 a zero-result query generates an XML empty-element tag for the "result" element, but in Solr 4.x a start tag and a separate end tag are generated. No big deal.
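 
Roughly, the difference looks like this (the name, numFound, and start attributes are standard Solr response attributes; the values here are illustrative):

    <!-- Solr 3.6: zero results as an empty-element tag -->
    <result name="response" numFound="0" start="0"/>

    <!-- Solr 4.x: zero results as separate start and end tags -->
    <result name="response" numFound="0" start="0"></result>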
 
As alluded to last week, I added the option to disable "writing" to the web (HTTP POST). This option defaults to disabled, which is safest. You need to set the "implicitly_deny_web_write_access" property to "false" in the agentserver.properties file in order to send documents to Solr from an agent running in the software agent server, but this is not needed if you are simply querying an already indexed document collection, which is most of what I was interested in anyway. Having the ability for an agent to actually add documents to Solr was simply an added benefit.
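 
For example, the one-line change in agentserver.properties to permit agents to write to the web:

    # Permit HTTP POST from agents; the default (true) denies web writes
    implicitly_deny_web_write_access=false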

Sunday, April 22, 2012

I'll be talking about the agent server at the next NYC Semantic Web Meetup

I'll be giving a 3-minute elevator pitch for the Base Technology Software Agent Server at the upcoming NYC Semantic Web Meetup on Thursday, April 26, 2012. That won't be enough time to go into any details, but hopefully it will pique a little interest.
 
In preparation, I have refined my short summary as well as a more detailed summary.

Thursday, April 19, 2012

Tested software agent server with Solr 3.6

I just ran a couple of simple tests to see how well the Base Technology software agent server could connect to Apache Solr 3.6 (the open source enterprise search platform), which was just released last week. I did have to make a few changes to the agent server code to add support for the HTTP POST verb and to permit HTTP GET to bypass the web page cache manager of the agent server.
 
Originally, I was going to access Solr via the SolrJ interface (Solr for Java), but I figured I would start with direct HTTP access to see how bad it would be. It wasn't so bad at all. I may still add support for SolrJ, but one downside is that it wouldn't be subject to the same administrative web access controls that normal HTTP access is. I'll have to think about it some more, but I could probably encapsulate the various SolrJ methods as if they were the comparable HTTP access verbs (GET for query, POST for adding documents, etc.) so that the administrative controls would work just as well with SolrJ. At least that's the theory.
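 
For what it's worth, here is a minimal sketch of that encapsulation idea. The SolrJ types and calls (SolrServer, SolrQuery, add, query, commit) are real SolrJ API; the WebAccessManager and its check methods are hypothetical stand-ins for the agent server's administrative web access controls:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrInputDocument;

    public class GuardedSolrClient {
        private final SolrServer solr;
        private final WebAccessManager access; // hypothetical agent server control point

        public GuardedSolrClient(SolrServer solr, WebAccessManager access) {
            this.solr = solr;
            this.access = access;
        }

        // A query is treated as if it were an HTTP GET (a web "read")
        public QueryResponse query(SolrQuery q) throws Exception {
            access.checkReadAccess();
            return solr.query(q);
        }

        // Adding a document is treated as if it were an HTTP POST (a web "write")
        public void add(SolrInputDocument doc) throws Exception {
            access.checkWriteAccess();
            solr.add(doc);
            solr.commit();
        }
    }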
 
For now, at least I verified that a software agent can easily add documents to and query a Solr server running Solr 3.6.
 
The code changes are already up on GitHub.
 
I do need to add a new option, "enable_writable_web", which permits agents to do more than just GET from the web. I had held off on implementing POST since it is one thing to permit agents to read from the web, but permitting them to write to the web is a big step that adds some risk for rogue and buggy agents. For example, with one POST command you can delete all documents from a Solr server. Powerful, yes, dangerous, also yes.
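 
For example, that one dangerous request is just a few lines of plain Java (assuming a Solr server at the default localhost:8983 address; the delete-by-query XML is standard Solr update syntax):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SolrDeleteAll {
        public static void main(String[] args) throws Exception {
            // One POST of a match-all delete-by-query removes every document
            URL url = new URL("http://localhost:8983/solr/update?commit=true");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            try (OutputStream out = conn.getOutputStream()) {
                out.write("<delete><query>*:*</query></delete>".getBytes("UTF-8"));
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }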
 
I also need to make "enable_writable_web" a per-user and even per-agent option so that an agent server administrator can allow only some users or agents to have write access to the web. There will probably be two global settings for the server, one for the default for all users, and one which controls whether any users can ever have write access to the web. The goal is to make the agent server as safe as possible by default, but to allow convenient access when needed and acceptable as well.
 
Unfortunately, after all of that, it turns out that Solr has a "stream.body" feature that allows documents to be added and deleted using the HTTP GET verb. Oh well, that's life. You can't cover all bases all of the time.
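 
For the record, the GET form of that same delete-all would look something like this (with the XML URL-encoded in an actual request):

    http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>&commit=true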

Tuesday, April 17, 2012

Scripting Language for the Base Technology Software Agent Server

Much of the processing for software agents in the Base Technology Software Agent Server occurs automatically down inside the server software itself, but occasionally a modest amount of procedural scripting is needed, as well as expressions for numerous parameters. The scripting language for the agent server is based on Java, with a number of simplifications and extensions.
 
Expressions and most statements follow Java fairly closely. Those who are familiar with expressions and statements in C, C++, and Java should feel right at home. The ++, +=, and ? : operators are supported, for example.
 
Since scripts are intended primarily as small snippets of code, there is no support for defining classes and other complex structures as in Java and C++. On the flip side, since lists and collections of named data are so common, the 'list' and 'map' types are built into the language, and the built-in 'web' type dramatically facilitates access to web resources. The expectation is that any complex structures should be constructed by developing collections of agents themselves. Also, simple map objects are very convenient for collecting related information without any of the tedium of defining and implementing full-blown classes. That said, I am sure that some future stage of the agent server will add support for classes, in some form.
 
The scripting language does not have a 'new' operator, but lists and maps can be trivially constructed without it using list and map literals, more reminiscent of JavaScript than Java. Also, typed fields are automatically initialized to their specified type, so there is no need for 'new' simply to initialize an empty list or map.
 
As for data types, the scripting language supports most of the Java primitive types, as well as built-in support for strings, lists, maps, and web objects. The various keywords for the integer types supported by Java are accepted, but are all mapped to one long integer type. Similarly, the float and double keywords are accepted, but both are mapped to double. The 'char' type is supported but mapped to string. String literals may be enclosed in quotes or apostrophes (single quotes).
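 
As a purely hypothetical illustration of the above (the exact syntax of the real scripting language may differ in detail):

    int count = 7;           // 'int' is accepted, but stored as a long
    float ratio = 2.5;       // 'float' is accepted, but stored as a double
    string name = 'Apollo';  // apostrophes or quotes may delimit strings
    list items = [1, 2, 3];                            // list literal, no 'new' needed
    map server = {"host": "localhost", "port": 8983};  // map literal, no 'new' needed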
 
The scripting language also includes support for dates and money, although that support is not yet fully implemented. Date/time strings can be mapped between ISO and RFC formats, as well as to and from integer values.
 
Currently, scripts are simply named functions with no parameters. Syntactically, they are a Java block but without the braces. Nesting of statements and blocks with local variables is supported, much as in Java.
 
Unlike Java, elements of lists and maps may be referenced as if they were simple arrays, rather than requiring the explicit 'get' and 'put' method calls (which are still supported for compatibility and familiarity). Substrings or sub-ranges of strings and lists can easily be extracted using a pair of subscripts to specify the range.
 
Characters and substrings of strings can also be accessed directly using square-bracket subscripts, rather than using the 'charAt' method (which is still available for compatibility).
 
Another simplification is to treat the 'length' and 'size' methods identically regardless of whether the object is a string, list, or map. Further, the useless, empty parentheses can be omitted.
 
As an additional simplification, the value associated with a string map key can be directly accessed with the traditional C/C++/Java dot notation as if it were a class field. This includes both reading and assignment.
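 
Hypothetically, these shorthands might look like this (the sub-range notation in particular is my own guess):

    list names = ["alpha", "beta", "gamma"];
    string first = names[0];         // instead of names.get(0)
    list pair = names[0, 2];         // sub-range of a list (guessed notation)
    string ch = first[1];            // instead of first.charAt(1)
    map user = {"city": "New York", "zip": "10001"};
    user.city = "NYC";               // dot notation reads or assigns a map entry
    int n = names.length;            // 'length' and 'size' are interchangeable; parentheses optional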
 
Quite a few of the Java string operations are supported, in addition to operations such as 'between', which finds two substrings and returns the text between them. The 'before', 'after', and 'between' operations also have regex forms to allow very powerful but concise string manipulation, which makes it very easy to extract data from web pages, text files, and XML documents even without using the built-in HTML and XML parsing features. It is also easy to extract word lists from HTML, XML, text, etc., as well as to generate strings from list and map structures.
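 
For example (a hypothetical snippet using the operation names mentioned above):

    string html = "<title>Agent Server</title>";
    string title = html.between("<title>", "</title>");  // "Agent Server"
    string head = html.before(">");                       // "<title"
    string tail = html.after("</");                       // "title>"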
 
One significant difference from the Java String class is that substrings can be modified in place. Individual characters can be replaced, deleted, or inserted, and substrings can be replaced with other substrings whose length may differ. Since the original string is modified, a copy of the string is made whenever it is stored in a variable.
 
Another improvement for strings is that the relational operators can be applied directly to string values, as opposed to resorting to the 'equals' method. The 'equals' method is still supported, as is the 'equalsIgnoreCase' method.
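 
A hypothetical snippet combining in-place modification with direct string comparison (again, the range notation is a guess):

    string s = "Hello world";
    s[0] = "J";              // in-place character replacement: "Jello world"
    s[6, 11] = "agents";     // in-place substring replacement: "Jello agents"
    boolean same = (s == "Jello agents");  // true; no need for s.equals(...)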
 
The Java library is a bit confusing as to when 'add', 'put', and 'set' are to be used in different classes, so we treat all three identically for list, map, and string.
 
All text is presumed to be UTF-8 encoded Unicode. Explicit character codes can be embedded within string literals as in Java.
 
The so-called bit-wise operators (&, |, ^, ~) are not supported at this time, but the logical boolean operators (&&, ||, !) are supported. The shifting and bit rotation operators are also not supported at this time, although they may resurface in a future release.
 
One minor nuance is that types are all lower case since they are all built-in primitive types. This includes int, long, string, list, map, and web. There are no "boxed" types as in Java, nor any need for them.
 
Also missing are Java's extensive class library and third-party libraries. But the built-in 'web' type greatly simplifies access to web resources, including HTML web pages, text files, XML files, RSS feeds, and REST API web services. A very rich set of functions and methods is already built into the scripting language and its runtime, especially with the flexibility of the built-in list and map types. Over time, additional types and functions will be added as the need for them becomes apparent.
 
User-defined functions are supported within an agent. Unlike normal scripts, which are Java-style blocks without the enclosing braces, a user-defined function has something very similar to the function header and block syntax of Java.
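 
A hypothetical example of such a function, using the Java-style header and block described above (and assuming Java-style '+' string concatenation):

    string greet(string name) {
        if (name.length == 0)
            name = 'world';
        return 'Hello, ' + name;
    }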
 
The developer of an agent may optionally decide that selected scripts and functions are to be "public", which means that they can be called directly using the REST API. This presumes that the user provides the proper user ID and password, so this feature should be reasonably secure. And, of course, the developer can opt to make none of the scripts or functions public for maximum security.

Sunday, April 01, 2012

Agent Server is a product without a name

My new agent server does not have a formal name. That is intentional. I don't actually consider it a true "product" or commercial "service" at this point in time. It has no real packaging (okay, it does have a downloadable zip file and a home on GitHub) and you can't "buy" or "subscribe" to it per se. From my perspective it is still "raw" technology. Yes, it is packaged to make it easy for people to access the technology, but it is certainly not "ready for prime time."
 
I vaguely considered whether to give it a name, but decided that all such "marketing" effort would be a distraction from focusing on getting the technology working and available for evaluation. Any kind of true marketing is still down the road.
 
For now I use one of the following descriptive names to refer to the technology:
  • Agent Server
  • Software Agent Server
  • Base Technology Agent Server
  • Base Technology Software Agent Server
  • Any of the above with a suffix of "Stage 0" (e.g., "Base Technology Agent Server - Stage 0")
Once I start getting some feedback on the technology and refine it a bit more, then and only then will I be ready to engage in some "real" marketing, starting with brand identity, naming, product positioning, messaging, and promotion.

Business model for Agent Server

Although subject to change and evolution, the currently expected business model will be primarily a traditional open source model:
  1. The software, including full source, is "free" and freely available (on GitHub) under the Apache License Version 2.0.
  2. Consulting and contract work to support, extend, and customize the software will be the primary source of revenue.
In addition, I have some preliminary thoughts for a longer-term plan to offer a subscription-based "agent grid" network of web-based servers that are tailored to offering commercial-grade agent server support for organizations that do not wish to host and support the agent server software on their own machines. But, this option is still way off in the hazy future.
 
For now, I seek customers/partners such as large data providers (or any organization that offers a web-based API to its services) who have a strong interest in software agents that facilitate consumption of their data in a way that is compatible with their own business model.
 
-- Jack Krupansky

REST API for interacting with Agent Server

The Base Technology Software Agent Server supports a full-featured REST API for all interactions with developers, users, and administrative control. This API uses the HTTP protocol with its POST, GET, PUT, and DELETE verbs and passes information via the URL path, URL query parameters, and JSON structures. In fact, the complete definition for a software agent can be expressed in a single JSON structure. All information about the server, users, and agents is expressed in JSON, but there is a "format" option to use XML, CSV, or text for various API calls.
 
Administrative controls include starting, stopping, pausing, and resuming the agent server, as well as throttle controls for operations such as web and email access, and the ability to disable and enable individual users and their agents.
 
Individual users can define their own agents, but it is expected that developers will provide the definitions of agents, including their internal scripts, and that users will then instantiate and control those definitions with specific parameter settings.
 
There is one other form of interaction: email notifications. Agents can notify users of conditions, events, options, and choices via email. A notification could be information-only, require confirmation or a yes/no choice, or even offer a selection from a list of options. The user may make a selection by clicking a link in the email message, which is actually a REST API call that signals the agent about the selected choice.
 
It is expected and hoped that others will create web and mobile device-based user interfaces. Or, that partners will contract for the development of customized user interfaces for specific applications of software agents.
 
The underlying concept here is that the agent server provides an easy-to-use REST API that enables the use of software agent technology for a very wide range of applications, rather than providing a complete packaged solution for a limited set of applications.

Agent Server is open source Java on GitHub under Apache License Version 2.0

Stage 0 of the Base Technology Agent Server is 100% Java, developed using Eclipse and Ant, and is licensed as open source under the Apache License Version 2.0. The full source code (and a downloadable zip file) is available on GitHub.
 
You may read, download, modify, and even redistribute the agent server and its source code according to the terms of the Apache License Version 2.0, without needing any payments or agreements to be signed.
 
That said, the agent server is not quite ready for prime time, so be prepared for bugs and other issues if you do take an advance preview. Everything is "as-is."

Components of a software agent

The model of a software agent supported by the Base Technology Agent Server is relatively simple, but it enables both a sophisticated level of processing and automatic processing by the underlying infrastructure of the agent server. The feature areas or components of an agent in this model are listed below, followed by a sketch of how an agent definition might look in JSON:
  1. Parameters - needed to parameterize the behavior of each agent
  2. Inputs - other agents upon whose outputs this agent depends
  3. Timers - to control periodic behavior of the agent (its "heartbeat", so to speak)
  4. Conditions - expressions which must be true for the agent to take action
  5. Scripts - procedural code to respond to specific events
  6. Memory - internal storage that persists for the life of the agent
  7. Scratchpad - temporary storage that is not guaranteed to persist, such as when the server is rebooted
  8. Outputs - a collection of data fields to be made available to the environment and other agents
  9. Notifications - conditions under which the user will be notified of events.
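 
Purely as a hypothetical illustration, not the agent server's actual schema, a skeletal agent definition combining these nine components might look something like:

    {
      "name": "news_watcher",
      "parameters": { "topic": "software agents" },
      "inputs": [ "news_feed_source" ],
      "timers": [ { "interval": "30 minutes" } ],
      "conditions": [ "inputs.news_feed_source.count > 0" ],
      "scripts": { "process_stories": "..." },
      "memory": { "seen_stories": [] },
      "scratchpad": { "work_list": [] },
      "outputs": { "new_stories": [] },
      "notifications": [ { "condition": "outputs.new_stories.size > 0", "via": "email" } ]
    }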

Saturday, March 31, 2012

Two kinds of agent: data source agents and normal agents

Stage 0 of my new Agent Server supports two distinct kinds of agent:
  • Data Source Agents – read web resources (HTML pages, XML files, text files, RSS feeds, etc.) on a periodic basis and extract useful information and output it in a form suitable for consumption by other agents. A single data source agent can in fact access multiple web resources on different time scales and aggregate the extracted information.
  • Normal Agents – consume the output of other agents, both data source agents and other normal agents, and in turn generate their own output. No timers are needed since activation of agents is automatic based on their dependency graphs.
Actually, there is no real distinction between these "types" of agent other than whether the agent happens to use any timers. In fact, you can have hybrid agents that consume some web resources on a periodic basis using timers and also consume the output of other agents as it becomes available.

The agent server, a place for agents to live

When Semantic Web pioneer Prof. James Hendler famously asked "Where are all the Intelligent Agents?" five years ago, the response was mixed and I would say rather muddled. Basically the answer was that we are making progress, but we are far from being "there" yet. Now, five years later, I have some running code as the beginnings of a better answer: agents need a special kind of operational environment in order to flourish; they need an agent server, which is what I am now working on. Five years ago I suggested that software agents needed a rich semantic infrastructure in order to flourish. My initial cut at an agent server is certainly not as semantically rich as I suggested, but I have made it a lot easier to develop and deploy relatively simple software agents, which is the first required step.
 
As rudimentary as it is, my Stage 0 Agent Server makes it dirt-simple to construct and deploy agents that periodically read data from HTML web pages, XML files, text files, JSON REST APIs, the outputs of other agents, etc. The agents are long-lived and their state is persistent, even if the server must be shut down and rebooted, all with zero effort on the part of the developer.
 
It is certainly not ready for prime time, but is definitely a candidate for giving agents a place to live.

Moving forward with developing a software agent server

Back in the middle of January I ruminated about the possibility that after 15 years of thought and research, maybe I was finally on the verge of being ready to actually make some forward progress with developing a software agent server. About a week later I started writing some serious code in Java, and two months later I now have a preliminary working version of an agent server. It is still far from finished and I would not want anybody to actually start trying to use it, but I do have open source code and a downloadable Java zip file up on GitHub. I call it "the Base Technology Agent Server – Stage 0." Call it pre-alpha quality. After I get some preliminary feedback from some people, fill in some gaps, and finish some documentation, I will officially make it public. For now, people can actually take a peek if they are adventurous enough:
 
 
I hope to get the introductory doc and tutorial in at least marginally usable shape within a week or so.

Saturday, January 14, 2012

Should I or shouldn't I?

I've been poking at the edges of technology and business opportunities for software agent technology for quite a number of years now, but have been very hesitant to pull the trigger and actually try to do something to realize the potential of all of the grand visions of agents. Initially it had all seemed so promising, and that remains true, but so much of it has been more of a research program with tangible results always tantalizingly out of reach. But now I'm finally at the point where I am seriously considering whether enough of the requisite technology components may be in place to at least start to move forward. I'm still not convinced or decided, but it at least feels a lot more encouraging than at any point in the past 15 years, other than the initial euphoria I had back in 1997.
 
When I step back and look at all of the pieces of technology that would be required to come together to implement true intelligent agents or at least functional, industrial-strength software agents, the view is breathtaking and quite daunting. There is still tons of hard-core research needed and a lot of the technology is simply not ready for prime time.
 
But lately I've been taking a slightly different perspective, trying to focus on identifying a realistic subset of the vision and technology on which I actually could make very real progress in the here and now. I've made some great progress and this approach looks promising, but there is still too much that is vague and foggy.
 
There are two critical questions that I face: 1) have I identified a small enough subset of the problem that I can actually implement it in fairly short order, and 2) will that subset have the critical mass needed to be successful from both a technology and a marketing perspective?
 
Unfortunately, this whole area is still highly speculative and even if I can and do build a product, it would be more of "a solution in search of a problem" than a clear market need that I can simply plug in to fill. That is probably the biggest concern holding me back on the business side of the equation.
 
Ultimately, I may simply decide that I could build something, but then decide not to.
 
Or, I may decide that I will learn enough from the experience and accumulate enough valuable technology buzz words to put on my resume that the technology effort may be more than worth the business risk.
 
Right now, my bias is towards starting to write some code next week.
 
I'm also currently looking around for some new consulting or contracting work and struggling with the question of whether I'd rather have the certainty of a decent income versus the risk of pursuing a new technology venture. I really would prefer the latter, but I simply do not yet have a solid fix on success in that direction that I would be willing to bet the farm on.

Monday, December 26, 2011

The agent learning loop

The great difficulty with software agents is the issue of how to program them to behave in an intelligent manner. Programming intelligence is difficult, tricky, and error-prone, when it is practical at all. Sometimes we are actually able to come up with simple, clever heuristics that seem to approximate intelligence (e.g., ELIZA), or maybe even heuristics for a very limited and constrained domain that come very close to approximating human-level intelligence (e.g., Deep Blue or Watson), but all too often our attempts at programming intelligence fall woefully short, are merely amusing, or are outright lame or horribly dysfunctional when situated in the real world. We will continue to pursue the vision of intelligent machines through programmed intelligence, but ultimately there is only one true path to true intelligence: the ability to learn.
 
Computer software that mimics intelligence focuses primarily on programming a library of encoded information and patterns that represent knowledge. That can enable a computer to answer questions or respond to environmental conditions, but only in a pre-programmed sense. The beauty of human-level intelligence is that the human mind has the ability to learn, to teach itself new facts, to recognize new patterns, to actually produce new knowledge.
 
We can also produce computers that embody quite a fair amount of the processing that occurs in the human mind, but we are still stymied by the vast ability of the mind to learn and produce knowledge and know-how itself.
 
Part of our lack of progress on the learning front is the simple fact that much of the demand for intelligent machines has been simply to replace humans for relatively mindless and rote activities. In other words, a focus on the economics of predictable production rather than creative and intuitive activities.
 
I would like to propose the overall sequence for a path forward towards intelligent machines. I call it the agent learning loop:
  1. We (humans) program an agent or collection of agents with some "basic" intelligence (knowledge and pattern recognition abilities.)
  2. We (humans) program those agents with the "basic" ability to "learn."
  3. We (humans) also program those agents with the ability to communicate their knowledge and learnings with other agents, as well as to simply be able to observe the activity of other agents and learn from their successes and failures.
  4. We (humans) program those agents with the ability to create new intelligent agents with comparable abilities of intelligence and abilities to learn. These agents are then able to learn and "know" incrementally more than us, their creators.
  5. These agents then create new agents, incrementally more intelligent than themselves, and the "loop" is repeated at step #1.
The theory is that each iteration of the loop incrementally increases the intelligence of the agents.
 
One key here is that multiple agents are needed at each step and that delegation, collaboration, and competition are critical factors in learning.
 
There is also a significant degree of Darwinian evolution in play here as well. True learning involves the taking of some degree of risk, such as with intuitive leaps, and sometimes even random selection when alternatives seem comparable in value, or even random selection on occasion when the choice might seem purely "rational." With a single agent risk is risky, but with multiple agents alternatives can be exploited in parallel. Agents that learn poorly or incorrectly will be at a disadvantage in the next generation and likely die off, although in some cases short-term "poor" behavior can sometimes flip over in future generations to have unexpected value as the environment itself evolves.
 
Communications between agents is critical, as is the ability to learn from failures. In fact, agents may "learn" to seek failure as a faster path to "successful" knowledge.

Friday, November 25, 2011

The trick of knowledge

Computational agents can be considered intelligent to the extent that they utilize human-level knowledge in their behavior. How to do that is the great difficulty. I submit that the trick of knowledge is going beyond mere possession of the facts of knowledge to the ability to know how to apply knowledge. So, if we want to encode knowledge in a form that is useful to computational agents, that encoding must also include an encoding of the knowledge of how to apply that knowledge. Sure, we can hard-wire that latter knowledge, but that may be difficult, error prone, and probably much less flexible or adaptable to evolution of the environment. And even if we are successful at that hard-wiring, that hard-wired knowledge must be properly parameterized to be used in a complex environment.
 
It is worth noting that even the knowledge of how to apply knowledge needs its own knowledge of how to apply that know-how, and so on, seemingly ad infinitum. Clearly at some level there must be hard-wired knowledge. Picking that level is a central challenge, but it does highlight the need for a rich knowledge-based infrastructure.
 
In any case, the trick of knowledge is not in what you know, but in your ability to apply that knowledge. Maybe that is the essence of intelligence itself.

Thursday, July 14, 2011

The big problem with storage

As I continue to ponder the question of how to make real progress with software agent technology and a knowledge web, the big problem I keep coming back to is what I will call "The Big Problem with Storage": how to achieve a degree of persistence in the digital networking domain comparable in robustness and reliability to storage in the physical world, and then to go a leap beyond that to achieve truly robust and reliable digital storage. Ultimately this includes communications reliability as well; we can tolerate a little connectivity flakiness, but storage flakiness is not so tolerable since lost data generally cannot be recovered. What is needed is a fully redundant and diversified network storage scheme that is 100% robust and reliable, so that people can have complete confidence that information and media stored on a digital network are even safer than the best storage in the real world.
 
I have written a proposal for a Vision for a Distributed Virtual Personal Data Storage, but it certainly doesn't appear as if even my limited proposal will happen any time soon.
 
I would add to this requirement that we are in desperate need of connectivity options that are far more reliable than the best offered today. Wired connectivity is probably the most reliable connectivity we have, but has diversity problems. Wireless has greater potential for diversity, but has coverage issues. The sad fact is that if you truly want "always-on" connectivity to your data you need to maintain a local copy on your local computer/network.
 

Sunday, June 26, 2011

Human Surrogate Travel

Surrogate travel is the concept of using a remotely controlled robot to simulate travel and sensory experiences in a remote location. The user can move the robot around and listen to and see what is around the robot. But this may be a significant logistical challenge given today's robotic and communications technology. So, why not use an actual human in place of the robot? The human robot would have one or more video cameras and microphones to provide sensory experiences to the user, as well as a headset and microphone for communications with the user, so the user could audibly direct the human robot to move in a semi-mechanical or intelligent manner and the human robot could give the human user feedback as well.
 
This intermediate form of surrogate travel would be much more technologically feasible at the present time and in some cases maybe even more economical as well as more flexible. It might also be more socially acceptable than a free-roving robot.

Where are all the intelligent agents?

So, where are all the intelligent agents? The question keeps popping up and the list of excuses remains long and the final answer is always some variant of "coming soon." My own personal answer is that intelligent agents are critically dependent on having a very rich intelligent semantic infrastructure. In other words, factor a lot of the intelligence out of individual agents and leverage the merged intelligence in a common, shared rich intelligent semantic infrastructure so that individual agents can be relatively dumb in their implementation but appear to be quite intelligent in operation.
 
In short, there are lots of tools and services and even data out there, but it is all too disjoint and nebulous and not coherent and cohesive and integrated enough to constitute the kind of deep integrated rich intelligent semantic infrastructure that is needed to make software agents grow like weeds. So, maybe, but not necessarily, we have all the pieces but they are not arranged in a critical mass where software agents can readily sprout.

-- Jack Krupansky

Richness of semantic infrastructure

Making intelligent software agents both powerful and easy to construct, manage, and maintain will require a very rich semantic infrastructure. Without such a rich semantic infrastructure, the bulk of the intelligence would have to be inside the individual agents, or very cleverly encoded by the designer, or even more cleverly encoded in an armada of relatively dumb distributed agents that offer collective intelligence, but all of those approaches would put intelligent software agents far beyond the reach of average users or even average software professionals or average computer scientists. The alternative is to leverage all of that intellect and invest it in producing an intelligent semantic infrastructure that relatively dumb software agents can then feed off of. Simple-minded agents will effectively gain intelligence by being able to stand on the shoulders of giants. How to design and construct such a rich semantic infrastructure is an open question.
 
Some of the levels of richness that can be used to characterize a semantic infrastructure:
  • Fully Automatic – intelligent actions occur within the infrastructure itself without any explicit action of agents
  • Goal-Oriented Processing – infrastructure processes events and conditions based on goals that agents register
  • Goal-Oriented Triggering – agents register very high-level goals and the infrastructure initiates agent activity as needed
  • Task-Oriented Triggering – agents register for events and conditions and are notified, much as with database triggers
  • Very High-Level Scripting – agents have explicit code to check for conditions, but little programming skill is needed
  • Traditional Scripting – agents are scripted using scripting languages familiar to today's developers
  • Hard-Coded Agents – agents are carefully hand-coded for accuracy and performance using programming languages such as Java or C++
  • Web Services – agents rely on API-level services provided by carefully selected and coded intelligent web servers
  • Proprietary Services – Only a limited set of services are available to the average agent on a cost/license basis
  • Custom Network – a powerful distributed computing approach, but expensive, not leveraged, difficult to plan, operate, and maintain
This is really only one dimension of richness, a measure of how information is processed. Another dimension would be the richness of the information itself, such as data, information, knowledge, wisdom, and various degrees within each of those categories. In other words, what units of information are being processed by agents and the infrastructure. The goal is to get to some reasonably high-level form of knowledge as the information unit. The Semantic Web uses URIs, triples, and graphs, which is as good a starting point as any, but I suspect that a much higher-level unit of knowledge is needed to achieve a semantic infrastructure rich enough to support truly intelligent software agents that can operate at the goal-oriented infrastructure level and be reasonably easy to conceptualize, design, develop, debug, deploy, manage, and maintain, and to do all of that with a significantly lower level of skill than even an average software professional. End-users should be able to build and use such intelligent agents.

Friday, April 09, 2010

Dumb question about intelligent agents

How dumb could a software agent be and still be considered an intelligent agent, presuming that it can communicate with and take advantage of the services of other, more intelligent software agents?

This still begs the question of how we define or measure the intelligence of a specific software agent. Do we mean the raw, native intelligence contained wholly within that agent, or the effective intelligence of that agent as seen from outside of that agent and with no knowledge as to how the agent accomplishes its acts of intelligence?

We can speak of the degree to which a specific agent leverages the intelligence of other agents. Whether we can truly measure and quantify this leverage is another matter entirely.

In humans we see the effect that each of us can take advantage of the knowledge (and hence to some degree the intelligence) of others. Still, we also speak of the intelligence of the individual.

Maybe a difference is that with software agents, they are much more likely to be highly interconnected at a very intimate level, compared to normal humans, so that agents would typically operate as part of a multi-mind at a deeper level rather than as individuals loosely operating in social groups as humans do. Or, maybe it is a spectrum and we might have reasons for choosing to design or constrain groups of agents to work with varying degrees of interconnectivity, dependence, and autonomy.

So, maybe the answer to the question is that each agent can be extremely dumb or at least simple-minded, provided that it is interconnected with other agents into a sufficiently interconnected multi-mind.

But even that answer begs the question, leading us to ponder what the minimal degree of interconnectivity is that can sustain intelligence.

-- Jack Krupansky


Saturday, March 13, 2010

Updated State of the Art for Software Agent Technology

I just updated my web page for State of the Art for Software Agent Technology. I originally wrote it in 2004 and the world has changed a bit since then. Alas, I do not have a lot of great progress to report. As I wrote in this year's update:

The technology sector has evolved significantly since I originally wrote this page in 2004, but software agent technology has stagnated somewhat, at least from a commercial perspective. Research continues, but the great hopes for software agent technology, including my own, have been deferred.

For example, the European Commission AgentLink initiative published its Agent Technology Roadmap in 2004 and an update in 2005, but there have not been any updates in the five years since then.

A lot of the effort in the software agents field was simply redirected to the Semantic Web, Web Services, and plug-ins for Web browsers and Web servers. Rather than seeing dramatic advances in intelligent agents, we have seen incremental improvements in relatively dumb but smart features embedded in non-autonomous Web software such as browsers and server software.

Again, there has been a lot of progress, but nowhere near enough to say "Wow! Look at this!"

My real bottom line is simply that a lot more research is needed:

I hate to say it, but for now the field of software agents remains primarily in the research labs and the heads of those envisioning its future. There have been many research projects and many of them have made great progress, but the number of successful commercial ventures is still quite limited (effectively nonexistent.) There are still many issues and unsolved problems for which additional research is needed.

Nonetheless, I do remain hopeful and quite confident that software agent technology will in fact be the wave of the future, at some point, just not yet or any time soon.

-- Jack Krupansky

Friday, February 26, 2010

What is the unit of agency?

A software agent is a piece of computer software that exhibits the quality of agency, but that begs two more fundamental questions:

  1. What is agency?
  2. What is the unit of agency?

An alternative formulation would be:

How can we distinguish qualities of software that constitute agency from qualities that would not constitute agency?

Ideally, we would like to identify sub-qualities of agency so that we ultimately can judge the quality of the agency qualities of a software agent.

I actually do currently have a definition of agency on my web site:

Agency is the capacity of an entity to continually sense its environment, make decisions based on that sensory input, and to act out those decisions in its environment without (in general) requiring control by or permission from entities with which the entity is associated.

The hallmarks of agency are reactivity (timely response to changes in the environment), goal-orientation (not simply responding to the environment according to a pre-determined script), autonomy (having its own agenda), interactivity (with its environment and other entities), flexibility, and adaptability.

An entity which has the qualities associated with agency is referred to as an agent.

An agent which operates within the realm of software systems is referred to as a software agent.  Agency, being an agent, or having the qualities of agency do not imply anything to do with software.

But, I am not entirely happy with that definition and I am thinking about how to refine it.

Another way of phrasing the headline question is to ask what the smallest and simplest agent would look like.

-- Jack Krupansky

Wednesday, December 30, 2009

Conference on Brain Informatics (BI)

I frequently receive conference announcements in my in-box and rarely do they inspire me much at all, but the announcement for a conference on "Brain Informatics" certainly caught my attention. The announcement for "2010 International Conference on Brain Informatics (BI 2010)" or "Brain Informatics 2010" tells us that:

Brain Informatics (BI) has recently emerged as an interdisciplinary research field that focuses on studying the mechanisms underlying the human information processing system (HIPS). It investigates the essential functions of the brain, ranging from perception to thinking, and encompassing such areas as multi-perception, attention, memory, language, computation, heuristic search, reasoning, planning, decision-making, problem-solving, learning, discovery, and creativity. The goal of BI is to develop and demonstrate a systematic approach to achieving an integrated understanding of both macroscopic and microscopic level working principles of the brain, by means of experimental, computational, and cognitive neuroscience studies, as well as utilizing advanced Web Intelligence (WI) centric information technologies.

It goes on to say that:

BI represents a potentially revolutionary shift in the way that research is undertaken. It attempts to capture new forms of collaborative and interdisciplinary work. In this vision, new kinds of BI methods and global research communities will emerge, through infrastructure on the wisdom Web and knowledge grids that enables high speed and distributed, large-scale analysis and computations, and radically new ways of sharing data/knowledge.

And:

Brain Informatics 2010 provides a leading international forum to bring together researchers and practitioners from diverse fields, such as computer science, information technology, artificial intelligence, Web intelligence, cognitive science, neuroscience, medical science, life science, economics, data mining, data and knowledge engineering, intelligent agent technology, human-computer interaction, complex systems, and system science, to explore the main research problems in BI, which lie in the interplay between the studies of the human brain and the research of informatics. On the one hand, one models and characterizes the functions of the human brain based on the notions of information processing systems. WI centric information technologies are applied to support brain science studies. For instance, the wisdom Web and knowledge grids enable high-speed, large-scale analysis, simulation, and computation as well as new ways of sharing research data and scientific discoveries. On the other hand, informatics-enabled brain studies, e.g., based on fMRI, EEG, and MEG, significantly broaden the spectrum of theories and models of brain sciences and offer new insights into the development of human-level intelligence on the wisdom Web and knowledge grids.

The announcement provides another summary for "Brain Informatics (BI)":

Brain Informatics (BI) is an emerging interdisciplinary and multi-disciplinary research field that focuses on studying the mechanisms underlying the human information  processing system (HIPS). BI investigates the essential functions of the brain, ranging from perception to thinking, and encompassing such areas as multi-perception, attention, memory, language, computation, heuristic search, reasoning, planning, decision-making, problem-solving, learning, discovery, and creativity.  One goal of BI research is to develop and demonstrate a systematic approach to an integrated understanding of macroscopic and microscopic level working principles of the brain, by means of experimental, computational, and cognitive neuroscience studies, as well as utilizing advanced Web Intelligence (WI) centric information technologies.  Another goal is to promote new forms of collaborative and interdisciplinary work.  New kinds of BI methods and global research communities will emerge, through infrastructure on the wisdom Web and knowledge grids that enables high speed and distributed, large-scale analysis and computations, and radically new ways of data/knowledge sharing.

Conference topics include:

  • Thinking and perception-centric investigations of HIPS:
    • Human reasoning mechanisms (e.g., principles of human deductive/inductive reasoning, common-sense reasoning, decision making, and problem solving)
    • Human learning mechanisms (e.g., stability, personalized user/student models)
    • Emotion, heuristic search, information granularity, and autonomy related issues in human reasoning and problem solving
    • Human higher cognitive functions and their relationships
    • Human multi-perception mechanisms and visual, auditory, and tactile information processing
    • Methodologies for systematic design of cognitive experiments
    • Investigating spatiotemporal characteristics and flow in HIPS and the related neural structures and neurobiological process
    • Cognitive architectures; their relations to fMRI/EEG/MEG
    • HIPS meets complex systems
    • Modeling brain information processing mechanisms (e.g., neuro-mechanism, mathematical, cognitive and computational models of HIPS).
  • Information technologies for the management and use of brain data:
    • Human brain data collection, pre-processing, management, and analysis
    • Databasing the brain and constructing data brain models
    • Data brain modeling and formal conceptual models of human brain data
    • Multi-media brain data mining and reasoning
    • Multi-aspect analysis in fMRI/EEG/MEG activations
    • Simulating spatiotemporal characteristics and flow in HIPS
    • Developing brain data grids and brain research support portals
    • Knowledge representation and discovery in neuroimaging
    • Multimodal information fusion for brain image interpretation
    • Statistical analysis and pattern recognition in neuroimaging
  • Applications
    • Neuro-economics and neuro-marketing
    • Brain-Computer-Interface (BCI)
    • Brain/Cognition inspired artificial systems
    • Wisdom Web systems based on new cognitive and computational models
    • MCI and AD diagnosis
    • e-Science and e-Medicine

-- Jack Krupansky

 

Sunday, November 01, 2009

Philosophy and Ethics of Social Reality

I just ran across an interesting conference announcement, SOCREAL 2010: Second International Workshop on Philosophy and Ethics of Social Reality. The conference summary is:

In the past two decades, a number of logics and game theoretical analyses have been proposed and combined to model various aspects of social interaction among agents including individual agents, organizations, and individuals representing organizations. The aim of SOCREAL Workshop is to bring together researchers working on diverse aspects of such interaction in logic, philosophy, ethics, computer science, cognitive science and related fields in order to share issues, ideas, techniques, and results.

Topics will include:

  • Language (or communication) as part of social reality
  • Speech acts (or communicative acts) as what shape social reality
  • Moral commitments (and conflicts) in social interaction
  • Logic and game theory as tools for studying social reality
  • (Organized) collective agency
  • Norms and normative systems
  • Social institutional facts and their dynamics

From my own perspective, presently, software agents operate at a rather primitive level, with little more than basic data transfer and simple control, but eventually software agents will evolve into intelligent agents whose activity is more along the lines of social behavior, including ethics and the behavior of groups and even organizations and institutions of software agents. And, of course, software agents act as agents for other entities, whether computational or human. There certainly is a lot of ground to be broken. It is at least heartening that people are beginning to scratch the surface of the potential for social reality of computational entities.

Eventually, somebody will realize that these social agents are communicating in a language and that language has semantics and that there is a potential for a great semantic abyss between the various communities of social agents, as well as a vast semantic abyss between these computational agents and their real world "masters".

Great challenges and great opportunities.

-- Jack Krupansky

Friday, October 16, 2009

Quantum artificial life

Just a note to myself to eventually look into the concept of quantum artificial life. Not sure what it really is. Doesn't even have a Wikipedia page yet. And a Google search yields little.

Assuming that it really does have some basis in quantum mechanics, two questions pop up:

  1. How can you model and "work" with a system at such a small scale that its characteristics are indeterminate?
  2. In theory, scaling up a quantum-scale system to a macro-scale system means that we flip from that absolute indeterminism to a relative statistical determinism.

Hmmm...

-- Jack Krupansky

Wednesday, September 09, 2009

Interesting conference workshop on Complexity, Evolution, and Emergent Intelligence

I was just reading the call-for-papers announcement for a workshop entitled "Workshop on Complexity, Evolution and Emergent Intelligence" at AI*IA 09, the Eleventh Conference of the Italian Association for Artificial Intelligence, scheduled for December 12, 2009 in Reggio Emilia, Italy. The workshop covers a variety of topics related to complex systems and "aims at bringing together scientists who work from different perspectives, from basic science to applications, on the common theme of systems composed by many components that interact non-linearly."

The focus is on complex systems which "very often exhibit interesting features, as self-organisation, robustness, surprising collective processes and occasionally intelligence."

A workshop goal is to achieve closer interactions between the communities of Complex Systems Science (CSS) and Artificial Intelligence (AI):

Recent developments -- for example in the context of agent-based modelling, distributed and/or evolutionary computation -- represent new opportunities for further exploring and strengthening these scientific interactions and connections.

The workshop will pay close attention to the combination of intelligence and complex interactions:

As already suggested, the contemporary presence of intelligence and complex interactions may not be casual but, instead, able to disclose deeper links between the two characteristics. Are there universal patterns of organization in complex systems, from pre-biotic replicators to evolved beings, to artificial objects? Do these structures allow effective computational processes to develop?

Key questions are how robust structures which develop in such systems are, how information is incorporated into these structures and how computation emerges. The study of complex systems is also interested in determining the contributions of selection, chance and self-organization to the functioning and evolution of complex structures.

Topics of interest for the workshop include:

  • Agent based models
  • Cellular automata
  • Evolutionary computation
  • Information processing
  • Network properties
  • Self-organisation, emergent behaviours
  • Tangled hierarchies, description levels, reciprocal causality
  • Adaptation/exaptation
  • Evolution and co-evolution
  • Robustness, criticality
  • Pattern formation, pattern recognition, collective intelligence
  • Non linear dynamics, edge of chaos
  • The emergence of mind
  • Bio-inspired methods

I am most intrigued with tangled hierarchies and the emergence of mind, but it is all quite fascinating stuff.

-- Jack Krupansky

Wednesday, August 26, 2009

Coordination paradigm for modeling ensembles of software agents

Just a mental bookmark to look into the so-called coordination paradigm for modeling the interaction of ensembles of software agents. I do not have a great definition yet, but it involves the modeling of concurrent and distributed computations and systems based on the concept of coordination, which enables the parts to act as a whole.

My hunch is that the trick is that we are not trying to model the agents per se but some "the whole is greater than the sum of the parts" functional capability that is emergent from the ensemble and not strictly present and observable in the individual agents of the ensemble.

I have another hunch that we need to differentiate between explicit coordination, where the agents know about the greater function of the ensemble and how they each fit into "the big picture", versus implicit coordination, where the greater function is truly an emergent phenomenon that none of the agents could have known about in advance and may not even know about as it is in progress.

-- Jack Krupansky

Thursday, June 25, 2009

Plant intelligence

Although it is tempting to posit that human-level intelligence might be the be-all and end-all for intelligent software agents, there is the possibility that more primitive levels of "intelligence" may have significant utility and other benefits, much as varying levels of intelligence are useful in human social organizations. Besides human-level intelligence, we could also consider non-human, animal-level intelligence, especially for more primitive operations. After all, is "searching" that much different from "hunting", and are humans really that much better at hunting than many animal species? Taking this progression to the next (lower) level, does the plant kingdom have anything to offer in terms of capabilities that might be useful in software agents? My hunch is that the answer is yes, or at least maybe.

I am not suggesting that plants could provide a model for matching or exceeding human-level intelligence, but there are plenty of lower-level operations and infrastructure needs that might in fact benefit from what we might learn from study of the plant kingdom. After all, plants root, grow, reproduce, disperse seeds, and co-operate with other plants in a fashion, suggesting forms of networking and distributed processing, at least at a primitive level. Besides, plants have mastered the process of harnessing the energy of the sun, a feat that we continue to struggle with.

Whether plants have a human-like or animal-like "mind" or "brain" is debatable, but maybe irrelevant. What is relevant is the forms of processing that plants can perform and how that processing is controlled.

The real potential may be not for the more "intelligent" of agent needs, but in the need for more robust, durable, and resilient "grunt" agent needs and needs within the infrastructure to support the intelligent agents.

The plant kingdom may be able to provide some interesting metaphors for information processing.
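As one concrete example of such a metaphor (my choice of example, not anything proposed in the literature I have cited), Lindenmayer systems were devised specifically to model plant growth as a rewriting computation:

    /** Minimal L-system sketch using Lindenmayer's original algae rules:
     *  A -> AB, B -> A. The axiom and generation count are illustrative. */
    public class LSystem {
        public static void main(String[] args) {
            String state = "A"; // axiom
            for (int gen = 0; gen < 7; gen++) {
                System.out.println(gen + ": " + state);
                StringBuilder next = new StringBuilder();
                for (char c : state.toCharArray()) {
                    // apply both rewriting rules in parallel
                    next.append(c == 'A' ? "AB" : "A");
                }
                state = next.toString();
            }
        }
    }

Simple parallel rewriting like this generates the branching structures of real plants, which is at least suggestive of how plant-like growth could serve as an information-processing model.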

The more interesting angle might be that we could construct hybrid metaphors that combine aspects of human, animal, and plant "intelligence" that might not be possible or practical in the "real" world.

Whether or not we are able to use plant-like capabilities in agents themselves, my hunch is that the infrastructure and environment in which agents operate could very well benefit from being more plant-like. Visualize agents as animals in a jungle.

I have not dug too deeply into this area, yet.

Here are a couple of references I have stumbled across:

-- Jack Krupansky

Friday, May 08, 2009

Second edition of An Introduction to MultiAgent Systems by Michael Wooldridge coming soon

Professor Michael Wooldridge of the University of Liverpool is about to come out with the second edition of An Introduction to MultiAgent Systems.

It is listed on Amazon, but as "This title has not yet been released. You may pre-order it now and we will deliver it to you when it arrives." with a suggested release date of July 7, 2009. It is also listed on Wiley's web site. I would love to leaf through the book, but I am not going to pay $60 for a book.

The description from Wiley:

The study of multi-agent systems (MAS) focuses on systems in which many intelligent agents interact with each other. These agents are considered to be autonomous entities such as software programs or robots. Their interactions can either be cooperative (for example as in an ant colony) or selfish (as in a free market economy). This book assumes only basic knowledge of algorithms and discrete maths, both of which are taught as standard in the first or second year of computer science degree programmes. A basic knowledge of artificial intelligence would be useful to help understand some of the issues, but is not essential.

The book's main aims are:

  • To introduce the student to the concept of agents and multi-agent systems, and the main applications for which they are appropriate
  • To introduce the main issues surrounding the design of intelligent agents
  • To introduce the main issues surrounding the design of a multi-agent society
  • To introduce a number of typical applications for agent technology

Michael has emailed out a blurb for the book (also available on his web page) that introduces it as follows:

Multiagent systems are an important paradigm for understanding and building distributed systems, where it is assumed that the computational components are autonomous: able to control their own behaviour in the furtherance of their own goals.  The first edition of An Introduction to Multiagent Systems was the first contemporary textbook in the area, and became the standard undergraduate reference work for the field. This second edition has been extended with substantial new material on recent developments in the field, and has been revised and updated throughout. It provides a comprehensive, coherent, and readable introduction to the theory and practice of multiagent systems, while presenting a wealth of discussion topics and pointers into more advanced issues for those wanting to dig deeper.

The blurb notes some key new features:

  • dedicated new chapters on recent research directions and results:
    • ontologies
    • computational social choice/voting
    • coalition formation
    • auctions
    • bargaining
    • argumentation;
  • "mind maps" for every chapter, to illustrate key concepts and ideas
    • an essential study and revision aid
  • 590 literature references, revised, updated, and extended to reflect the state of the art in agent research and development;
  • extensive glossary of terms.

I took a brief look at the table of contents and arrived at the following tentative conclusions:

  1. There has been a lot of progress in the past seven years.
  2. Software agent technology has still not matured to the stage where it is ready for prime-time general use. I continue to believe that many of the technologies need to be transparently embedded in the underlying infrastructure to simplify development of much more robust large-scale applications.
  3. Open multi-agent systems are still an unresolved challenge.
  4. Although Semantic Web technologies are covered to some extent in Chapter 6 ("Understanding Each Other") on ontologies (XML, RDF, OWL, etc.), the centrality of the world-wide Semantic Web and Linked Data to the longer-term future of software agent technology is not elaborated in any great detail and is still much further out in the future. The Semantic Web needs to evolve as well.

The main sections of the book are:

  • Part I Setting the Scene
    • Chapter 1 Introduction
  • Part II Intelligent Autonomous Agents
    • Chapter 2 Intelligent Agents
    • Chapter 3 Deductive Reasoning Agents
    • Chapter 4 Practical Reasoning Agents
    • Chapter 5 Reactive and Hybrid Agents
  • Part III Communication and Cooperation
    • Chapter 6 Understanding Each Other
    • Chapter 7 Communicating
    • Chapter 8 Working Together
    • Chapter 9 Methodologies
    • Chapter 10 Applications
  • Part IV Multiagent Decision Making
    • Chapter 11 Multiagent Interactions
    • Chapter 12 Making Group Decisions
    • Chapter 13 Forming Coalitions
    • Chapter 14 Allocating Scarce Resources
    • Chapter 15 Bargaining
    • Chapter 16 Arguing
    • Chapter 17 Logical Foundations
  • Coda
  • Appendix A -- A History Lesson
  • Appendix B -- Afterword 

The blurb tells us that the book is:

Designed and written specifically for computing undergraduates, the book comes with a rich repository of online teaching materials, including lecture slides.

Overall, the book is a great introduction to the current state of the art of software agent technology, both in theory and practice.

Need to go check out those lecture slides!

-- Jack Krupansky

Monday, April 20, 2009

Software agents for virtual browsing and virtual presence

With so many places to go and so many things to see and do on the Web, it is getting almost impossible to keep up with the proliferation of interesting information out there. We need some help. A hefty productivity boost is simply not good enough. We need a lot of help. Browser add-ons, better search engines, and filtering tools are simply not enough. Unfortunately, the next few years hold more of the same.

But longer term, we should finally start to see credible advances in software agent technology that help extend our own minds, so that we can engage in virtual browsing and maintain a virtual presence on the Web, effectively reaching and touching a far broader, deeper, and richer lode of information than we can with personal browsing and our personal presence.

Twitter asks us what we are doing right now, but our online activity and presence with the aid of software agents will be a thousand or ten thousand or even a million or ten million times greater than we can personally achieve today. What are each of us interested in? How about everything?! Why not?

The gradual evolution of the W3C conception of the Semantic Web will eventually reach a critical mass where even relatively dumb software agents can finally appear to behave in a relatively intelligent manner that begins to approximate our own personal activity and personal presence on the Web.

It may take another five to ten years, but the long march in that direction is well underway.

The biggest obstacle right now is not the intelligence of an individual software agent per se, but the need to encode a rich enough density of information in the Semantic Web so that we can realistically develop intelligent software agents that can work with that data. We will also need an infrastructure that mediates between the actual data and the agents.
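As a rough sketch of what an agent working with that data might look like -- assuming the Apache Jena library and the public DBpedia SPARQL endpoint, neither of which is prescribed by anything above -- a query could be as simple as:

    import org.apache.jena.query.*;

    /** Hedged sketch: an agent pulling facts from a public SPARQL endpoint.
     *  The endpoint, resource, and property are illustrative choices. */
    public class AgentSparqlQuery {
        public static void main(String[] args) {
            String sparql =
                "SELECT ?abstract WHERE { " +
                "  <http://dbpedia.org/resource/Software_agent> " +
                "  <http://dbpedia.org/ontology/abstract> ?abstract . " +
                "} LIMIT 5";
            Query query = QueryFactory.create(sparql);
            try (QueryExecution qe = QueryExecutionFactory.sparqlService(
                    "http://dbpedia.org/sparql", query)) {
                ResultSet results = qe.execSelect();
                while (results.hasNext()) {
                    System.out.println(results.next().get("abstract"));
                }
            }
        }
    }

The hard part is not the plumbing shown here but, as noted, having enough richly encoded data behind the endpoint for the results to be worth acting on.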

-- Jack Krupansky

Saturday, March 14, 2009

What are the biological requirements for intelligence?

For some time I have wondered about the differences between plants and animals, two distinct "kingdoms." Maybe someday I'll have enough spare time to look into the matter (so to speak.) A variation of that question popped into my mind today: What are the biological requirements for intelligence? Man evolved intelligence in the animal kingdom. What specifically enabled that evolution of intelligence in man? Not the "pop", superficial explanations, but what exactly is it that permits man to exhibit intelligence? Put another way, why were plants unable to evolve in a parallel manner into "intelligent" individuals? Are there in fact biological requirements for intelligence that only the animal kingdom has to offer? Or, could intelligence, in theory, occur in plants through some path of evolution within the plant kingdom? In any case, in short, what exactly are the biological requirements for intelligence? And I do mean intelligence in the sense of human-level intelligence. That does beg the question of other forms of "intelligence" that may be wholly incomparable to human intelligence.

Now, this also broaches the question of machine intelligence, computational intelligence, or artificial intelligence. If in fact there are biological requirements for intelligence, can those requirements be met by non-biological entities such as computers as we know them? Of course that does beg the question of whether we could simply develop a computer program which is a simulator for biological life. That then raises the question of whether plants could evolve a machine-like structure which in fact was such a simulator for animal life.

In any case, we are left with the question of what the requirements would be for human-level intelligence in machines, and whether there may be biological functions that cannot easily (or maybe even possibly) be simulated in machines.

By "machines", I mean computers as we know them today, a device which can execute what we call computer programs or computer software.

That raises two questions. First, are there radically different computer software architectures that might enable programming of human-level intelligence? Second, are there radically different device architectures which would permit software architectures that cannot easily (or maybe even possibly at all) be developed with computer devices as we know them?

To phrase the initial question another way, could we in theory genetically engineer plants to develop forms of intelligence?

More abstractly, could another "kingdom" develop which was neither plant nor animal, but capable of exhibiting human-level intelligence? Maybe in another solar system, another galaxy, or a parallel universe? Or, is there in fact some fundamentally basic requirement for intelligence which even in theory can only be satisfied within the animal kingdom?

One final question... What biological requirements would need to be met for artificial devices, presumably capable of reproduction by themselves, to in fact be considered "biological" and a new "kingdom" paralleling the animal and plant kingdoms?

-- Jack Krupansky

Sunday, March 01, 2009

Amazon Kindle - if a software agent reads a book aloud is that a performance or the creation of a derivative work?

The recent uproar over the read-aloud feature of the new Amazon Kindle book reading device has raised some fascinating questions related to the definition and interpretation of the concepts of a performance and a derivative work, as well as the concept of licensed use. I would add that this dispute also raises the issue of the role and status of software agents.

An article in Ars Technica by Julian Sanchez entitled "Kindles and 'creative machines' blur boundaries of copyright" does a decent job of covering both the pros and cons and the legal nuances of the "rights" for electronically reading a book aloud.

I have read a lot of the pro and con arguments, but I am not prepared to utter a definitive position at this time.

I would note that there is a "special" context for the entire debate: the ongoing "culture war" between the traditional world view of people, places, and things and the so-called "digital" world view, whether it be online with the Web or interactive within a computer system. Clearly there are parallels between the real and "virtual" worlds, but also there are differences. Rational people will recognize and respect the parallels even as they recognize and respect the differences. Alas, there is a point of view that insists that the virtual worlds (online and interactive) should not be constrained in any way by the real-world world view.

The simple truth is that the real and virtual worlds can in fact coexist separately, but the problem comes when we try to blend the two worlds and pass artifacts between them. Then, the separateness breaks down. The Kindle is a great example, with real-world books being "passed" into the digital world and then the act of electronically reading them aloud passing back from the digital world to the real world.

It is also interesting to note that many books are now actually created in the virtual world (word processing, storage, transmission, digital printing) even if not intended specifically as so-called e-books, so that physical books themselves in fact typically originated in a virtual world. Clearly the conception of the book occurs in the mind of the author and the editors, but the actual "assembly" of all of the fragments from the minds of authors and editors into the image of the book occurs in the virtual world.

In any case, my interest is in the role of software agents. A software agent is a computer program which possesses the quality of agency, or acting for another entity. The Kindle read-aloud feature is clearly a software agent. Now, the issue is whose agent it is. The consumer? Amazon? The book author? The publisher?

The superficially simple question is who "owns" the software agent.

We speak of "buying" books, even e-books, but although the consumer does in fact "buy" the physical manifestation, they are in fact only licensing the "use" of the intellectual property embodied in that physical representation. You do in fact "own" the ones and zeros of the e-book or the paper and ink of the meatspace book, but you do not own all uses except as covered by the license that you agreed to at the time of acquisition of the bits. Clearly not everyone likes or agrees with that model, but a license is a contract and there are laws related to contracts. Clearly there are also disputes about what the contract actually covers or what provisions are enforceable. That is why we have courts.

So, the consumer owns the bits of the read-aloud software agent, and the consumer may have some amount of control over the behavior of that software agent, but ownership and interaction are not the same thing.

I would suggest that the read-aloud software agent still belongs to Amazon since it remains a component of the Kindle product. A Kindle reading a book aloud is not the same as a parent reading a book to a child or a teacher reading to a class (or the reading in the movie The Reader), in particular because it is Amazon's agent that is doing the reading.

An interesting variation would be an open source or public domain version of Kindle as downloadable software for the PC, or software with features different from Kindle for that matter. Who "owns" any software agents embedded in that software? Whose agent is doing the performance? Whose agent is creating derivative works? To me, the immediate answer is who retains the intellectual property rights to the agent. In the Kindle case, Amazon is not attempting to transfer all rights. Even if they did, there is the same question as with file-sharing software, whether there is some lingering implied liability that goes along even when ownership is transferred.

Another open issue would be software agents which completely generate content from scratch dynamically, not from some input such as an e-book data stream. Who owns that content? I would suggest that the superficial answer is that the owner of the agent owns "created" (non-derivative) content, except as they may have licensed transfer of ownership of such content.

Another issue is whether a "stream" can be considered a representation. I would think so. One could also consider it a performance of an implied representation. Whether each increment of data in the stream is stored may not be particularly relevant. The stream has most of the "effect" of a full representation.

Another issue is trying to discover the intent or spirit of the law as opposed to the exact letter of the law. Sure, there are plenty of loopholes and gotchas that do in fact matter when in a courtroom, but ultimately I would think that it is the intentions that matter the most to society. Unless, that is, you are a proponent of a "free" digital world that is unencumbered by any constraints of the real world and seeks to exploit loopholes simply because "they are there."

In any case, my point is not to settle the matter, but to raise the issues of performances and creation of derivative works in the realm of software agents, both for developers of software agent technology and those who seek to deploy it. And we have this issue of what lingering liability tail connects software agents and their creators.

-- Jack Krupansky

Tuesday, February 10, 2009

Oops... definition of social agent

The good news is that somehow, I have managed to be result #1 in Google for the term social agent. The bad news is that my Web page that purports to define that term simply said "A social agent is ... TBD." How lame! DOH! That page has gotten a fair number of hits, probably mostly from academic researchers in software agent technology and their students. One finally sent me an email sarcastically complimenting me for saving him so much effort and saying that my mother should be proud of me. Well, I fixed the problem. I did some research and derived my own definition for the term social agent. Actually, there are two somewhat distinct uses:

  • (1) A social agent is a software agent which exhibits a significant degree of interdependence with other software agents which results in or from the formation of communities of software agents within the full population of software agents to which the social agents belong, where each community has rules for behavior within the community.

  • (2) A social agent is a software agent or robot which is capable of social communication with human beings.

See: http://www.agtivity.com/def/social_agent.htm

What is frustrating about this is that by failing to have a reasonable definition on that Web page I have been losing out on opportunities to be cited as a source for definition of that term. There is not even a Wikipedia article for it.

-- Jack Krupansky

Sunday, January 25, 2009

Computational social choice, social choice theory, and social choice mechanisms

Social choice mechanisms will no doubt be crucial to the operation of large and complex agent systems. Software agents will need to make choices and will need to affect outcomes in multi-agent interactions. Voting is one example. The emerging sub-field of computational social choice is an attempt to adapt the tools and techniques of social choice theory to the realm of computational entities.

I myself have not explored this area beyond the very superficial, but it does show promise.
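Still, to make the voting example concrete, here is a minimal sketch of one classic social choice mechanism, the Borda count, applied to agent preference rankings (the candidates and rankings are invented for illustration):

    import java.util.*;

    /** Sketch of the Borda count: each agent ranks the candidates, a
     *  candidate in position p of an n-candidate ranking earns n-1-p
     *  points, and the candidate with the highest total wins. */
    public class BordaCount {
        public static void main(String[] args) {
            // each row is one agent's ranking, most preferred first
            String[][] rankings = {
                {"planA", "planB", "planC"},
                {"planB", "planA", "planC"},
                {"planB", "planC", "planA"},
            };
            Map<String, Integer> score = new HashMap<>();
            for (String[] ranking : rankings) {
                for (int pos = 0; pos < ranking.length; pos++) {
                    score.merge(ranking[pos], ranking.length - 1 - pos, Integer::sum);
                }
            }
            String winner = Collections.max(score.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
            System.out.println(score + " -> winner: " + winner);
        }
    }

The computational questions in this field start where sketches like this end: what happens when there are millions of candidates, strategic voters, or combinatorial domains.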

Some of the specific topic areas are:

  • Algorithmic aspects of voting rules
  • Computational barriers to strategic behaviour
  • Collective decision-making in multi-agent systems
  • Preference elicitation and communication issues in voting
  • Fair division
  • Computational aspects of weighted voting games
  • Collective decision-making in combinatorial domains
  • Logic-based formalisms for social choice problems
  • Belief and judgement aggregation
  • Social software

The overall topic will be covered in a future special issue of Springer's Journal of Autonomous Agents and Multi-Agent Systems ("Special Issue on Computational Social Choice").

Keywords: computational social choice, social choice theory, social choice mechanisms, social choice problems, collective decision-making.

-- Jack Krupansky

Saturday, January 17, 2009

Exploring New Interaction Designs Made Possible by the Semantic Web

The Journal of Web Semantics has issued a call for papers for a special issue on the topic of "Exploring New Interaction Designs Made Possible by the Semantic Web." They tell us that they:

... seek papers that look at the challenges and innovate possible solutions for everyday computer users to be able to produce, publish, integrate, represent and share, on demand, information from and to heterogeneous data sources. Challenges touch on interface designs to support end-user programming for discovery and manipulation of such sources, visualization and navigation approaches for capturing, gathering and displaying and annotating data from multiple sources, and user-oriented tools to support both data publication and data exchange. The common thread among accepted papers will be their focus on such user interaction designs/solutions oriented linked web of data challenges. Papers are expected to be motivated by a user focus and methods evaluated in terms of usability to support approaches pursued.

Offering some background, they inform us that:

The current personal computing paradigm of single applications with their associated data silos may finally be on its last legs as increasing numbers move their computing off the desktop and onto the Web. In this transition, we have a significant opportunity – and requirement – to reconsider how we design interactions that take advantage of this highly linked data system.
 
Context of when, where, what, and whom, for instance, is increasingly available from mobile networked devices and is regularly if not automatically published to social information collectors like Facebook, LinkedIn, and Twitter. Intriguingly, little of the current rich sources of information are being harvested and integrated. The opportunities such information affords, however, as sources for compelling new applications would seem to be a goldmine of possibility.
 
Imagine applications that, by looking at one's calendar on the net, and with awareness of whom one is with and where they are, can either confirm that a scheduled meeting is taking place, or log the current meeting as a new entry for reference later. Likewise, documents shared by these participants could automatically be retrieved and available in the background for rapid access. Furthermore, on the social side, mapping current location and shared interests between participants may also recommend a new nearby location for coffee or an art exhibition that may otherwise have been missed. Larger social applications may enable not only the movement of seasonal ills like colds or flus to be tracked, but more serious outbreaks to be isolated.
 
The above examples may be considered opportunities for more proactive personal information management applications that, by awareness of context information, can better automatically support a person's goals. In an increasingly data rich environment, the tasks may themselves change. We have seen how mashups have made everything from house hunting to understanding correlations between location and government funding more rapidly accessible. If, rather than being dependent upon interested programmers to create these interactive representations, we simply had access to the semantic data from a variety of publishers, and the widgets to represent the data, then we could create our own on-demand mashups to explore heterogeneous data in any way we chose.
 
For each of these types of applications, interaction with information -- be it personal, social or public -- provides richer, faster, and potentially lighter-touch ways to build knowledge than our current interaction metaphors allow.

Finally, they pose their crucial question:

What is the bottleneck to achieving these enriched forms of interaction?

For which they propose the answer:

Fundamentally, we see the main bottleneck as a lack of tools for easy data capture, publication, representation and manipulation.

They provide a list of challenges to be addressed in the issue, including but not restricted to:

  • approaches to support integrating data that is readily published, such as RSS feeds that are only lightly structured.
  • approaches to apply behaviors to these data sources.
  • approaches to make it as easy for someone to create and to publish structured data as it is to publish a blog.
  • approaches to support easy selection of items within resources for export into structured semantic forms like RDF.
  • facilities to support the pulling in of multiple sources; for instance, a person may wish to pull together data from three organizations. Where will they gather this data? What tools will be available to explore the various sources, align them where necessary and enable multiple visualizations to be explored?
  • methods to support fluidity and acceleration for each of the above: lowering the interaction cost for gathering data sources, exploring them and presenting them; designing lightweight and rapid techniques.
  • novel input mechanisms: most structured data capture requires the use of forms. The cost of form input can inhibit that data from being captured or shared. How can we reduce the barrier to data capture?
  • evaluation methods: how do we evaluate the degree to which these new approaches are effective, useful or empowering for knowledge builders?
  • user analysis and design methods: how do we understand context and goals at every stage of the design process? What is different about designing for a highly personal, contextual, and linked environment?
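As a tiny illustration of the structured-data publication challenge in the list above -- a hedged sketch assuming the Apache Jena library, with an invented resource URI -- producing and serializing a few structured statements can be as simple as:

    import org.apache.jena.rdf.model.*;
    import org.apache.jena.vocabulary.DC;

    /** Sketch: building a couple of Dublin Core statements about a blog
     *  post and serializing them as Turtle for publication. */
    public class PublishRdf {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            model.createResource("http://example.com/blog/jws-cfp-post") // invented URI
                 .addProperty(DC.title, "Exploring New Interaction Designs"
                         + " Made Possible by the Semantic Web")
                 .addProperty(DC.creator, "Jack Krupansky");
            model.write(System.out, "TURTLE");
        }
    }

The library plumbing is easy; the challenge the call identifies is making this effortless for everyday users who will never see an API.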

In addition to traditional, full-length papers, they are also soliciting shorter papers as well as short (one to two page), forward-looking, more speculative papers addressing the challenges outlined above. I am tempted to submit one of the latter, possibly based on my proposal for The Consumer-Centric Knowledge Web - A Vision of Consumer Applications of Software Agent Technology - Enabling Consumer-Centric Knowledge-Based Computing. Or, maybe a stripped-down version of that vision that is more in line with the "reach" of the current, RDF-based vision of the Semantic Web.

-- Jack Krupansky