Advancing the Science of Software Agent Technology
How dumb could a software agent be and still be considered an intelligent agent, presuming that it can communicate with and take advantage of the services of other, more intelligent software agents?
This still begs the question of how we define or measure the intelligence of a specific software agent. Do we mean the raw, native intelligence contained wholly within that agent, or the effective intelligence of that agent as seen from outside of that agent and with no knowledge as to how the agent accomplishes its acts of intelligence?
We can speak of the degree to which a specific agent leverages the intelligence of other agents. Whether we can truly measure and quantify this leverage is another matter entirely.
In humans we see the effect that each of us can take advantage of the knowledge (and hence to some degree the intelligence) of others. Still, we also speak of the intelligence of the individual.
Maybe a difference is that with software agents, they are much more likely to be highly interconnected at a very intimate level, compared to normal humans, so that agents would typically operate as part of a multi-mind at a deeper level rather than as individuals loosely operating in social groups as humans do. Or, maybe it is a spectrum and we might have reasons for choosing to design or constrain groups of agents to work with varying degrees of interconnectivity, dependence, and autonomy.
So, maybe the answer to the question is that each agent can be extremely dumb or at least simple-minded, provided that it is interconnected with other agents into a sufficiently interconnected multi-mind.
But even that answer begs the question, leading us to ponder what the minimal degree of interconnectivity is that can sustain intelligence.
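The idea that very dumb agents plus interconnection can yield effective intelligence can be made concrete with a toy sketch. Here each agent "knows" exactly one fact and can do nothing but look it up or forward the question to its neighbors; the class, fact keys, and topology are all my own illustrative assumptions, not anyone's proposed architecture:

```python
# Toy sketch: each agent holds a single fact and can only forward
# questions to its neighbors, yet the connected ensemble can answer
# any question that some member can answer.

class TinyAgent:
    def __init__(self, fact_key, fact_value):
        self.facts = {fact_key: fact_value}   # the agent's entire "knowledge"
        self.neighbors = []                   # interconnections to other agents

    def ask(self, key, visited=None):
        visited = visited if visited is not None else set()
        if id(self) in visited:               # avoid cycles in the multi-mind
            return None
        visited.add(id(self))
        if key in self.facts:                 # native "intelligence": a lookup
            return self.facts[key]
        for peer in self.neighbors:           # leverage the other agents
            answer = peer.ask(key, visited)
            if answer is not None:
                return answer
        return None

# A chain of three very dumb agents:
a = TinyAgent("capital_of_france", "Paris")
b = TinyAgent("boiling_point_c", 100)
c = TinyAgent("speed_of_light_m_s", 299_792_458)
a.neighbors = [b]
b.neighbors = [a, c]

# Agent a knows nothing about this question, but the ensemble does:
print(a.ask("speed_of_light_m_s"))  # 299792458
```

The "effective intelligence as seen from outside" is exactly what the caller of `a.ask()` observes; the caller cannot tell whether the answer was native or leveraged.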
I just updated my web page for State of the Art for Software Agent Technology. I originally wrote it in 2004 and the world has changed a bit since then. Alas, I do not have a lot of great progress to report. As I wrote in this year's update:
The technology sector has evolved significantly since I originally wrote this page in 2004, but software agent technology has stagnated somewhat, at least from a commercial perspective. Research continues, but the great hopes for software agent technology, including my own, have been deferred.
For example, the European Commission AgentLink initiative published its Agent Technology Roadmap in 2004 and an update in 2005, but there have not been any updates in the five years since then.
A lot of the effort in the software agents field was simply redirected to the Semantic Web, Web Services, and plug-ins for Web browsers and Web servers. Rather than seeing dramatic advances in intelligent agents, we have seen incremental improvements in relatively dumb but smart features embedded in non-autonomous Web software such as browsers and server software.
Again, there has been a lot of progress, but nowhere near enough to say "Wow! Look at this!"
My real bottom line is simply that a lot more research is needed:
I hate to say it, but for now the field of software agents remains primarily in the research labs and the heads of those envisioning its future. There have been many research projects and many of them have made great progress, but the number of successful commercial ventures is still quite limited (effectively nonexistent.) There are still many issues and unsolved problems for which additional research is needed.
Nonetheless, I do remain hopeful and quite confident that software agent technology will in fact be the wave of the future, at some point, just not yet or any time soon.
A software agent is a piece of computer software that exhibits the quality of agency, but that begs two more fundamental questions:
An alternative formulation would be:
How can we distinguish qualities of software that constitute agency from qualities that would not constitute agency?
Ideally, we would like to identify sub-qualities of agency so that we ultimately can judge the quality of the agency qualities of a software agent.
I actually do currently have a definition of agency on my web site:
Agency is the capacity of an entity to continually sense its environment, make decisions based on that sensory input, and to act out those decisions in its environment without (in general) requiring control by or permission from entities with which the entity is associated.
The hallmarks of agency are reactivity (timely response to changes in the environment), goal-orientation (not simply responding to the environment according to a pre-determined script), autonomy (having its own agenda), interactivity (with its environment and other entities), flexibility, and adaptability.
An entity which has the qualities associated with agency is referred to as an agent.
An agent which operates within the realm of software systems is referred to as a software agent. Agency, being an agent, or having the qualities of agency do not imply anything to do with software.
But, I am not entirely happy with that definition and I am thinking about how to refine it.
Another way of phrasing the headline question is to ask what the smallest and simplest agent would look like.
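One way to visualize such a smallest and simplest agent is as the bare sense-decide-act cycle from the definition above. The sketch below is purely my own illustration; the thermostat goal, the environment dictionary, and the action names are all assumptions, not a proposed design:

```python
# A minimal sketch of the sense-decide-act cycle of agency:
# continually sense the environment, decide based on that input,
# and act without external control or permission.

class Agent:
    def __init__(self, goal_temp=20.0):
        self.goal_temp = goal_temp            # autonomy: the agent's own agenda

    def sense(self, environment):
        return environment["temperature"]     # reactivity: observe the environment

    def decide(self, temperature):
        # goal-orientation: decisions serve the goal, not a fixed script
        if temperature < self.goal_temp - 1:
            return "heat"
        if temperature > self.goal_temp + 1:
            return "cool"
        return "idle"

    def act(self, decision, environment):
        delta = {"heat": +0.5, "cool": -0.5, "idle": 0.0}[decision]
        environment["temperature"] += delta   # act directly on the environment

env = {"temperature": 15.0}
agent = Agent()
for _ in range(20):                           # the continual sense-decide-act loop
    agent.act(agent.decide(agent.sense(env)), env)
print(env["temperature"])  # 19.0
```

Even this trivial agent satisfies the letter of the definition, which is precisely why the definition still needs refinement: it does not distinguish a thermostat from anything we would intuitively call intelligent.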
I frequently receive conference announcements in my in-box and rarely do they inspire me much at all, but the announcement for a conference on "Brain Informatics" certainly caught my attention. The announcement for "2010 International Conference on Brain Informatics (BI 2010)" or "Brain Informatics 2010" tells us that:
Brain Informatics (BI) has recently emerged as an interdisciplinary research field that focuses on studying the mechanisms underlying the human information processing system (HIPS). It investigates the essential functions of the brain, ranging from perception to thinking, and encompassing such areas as multi-perception, attention, memory, language, computation, heuristic search, reasoning, planning, decision-making, problem-solving, learning, discovery, and creativity. The goal of BI is to develop and demonstrate a systematic approach to achieving an integrated understanding of both macroscopic and microscopic level working principles of the brain, by means of experimental, computational, and cognitive neuroscience studies, as well as utilizing advanced Web Intelligence (WI) centric information technologies.
It goes on to say that:
BI represents a potentially revolutionary shift in the way that research is undertaken. It attempts to capture new forms of collaborative and interdisciplinary work. In this vision, new kinds of BI methods and global research communities will emerge, through infrastructure on the wisdom Web and knowledge grids that enables high speed and distributed, large-scale analysis and computations, and radically new ways of sharing data/knowledge.
And:
Brain Informatics 2010 provides a leading international forum to bring together researchers and practitioners from diverse fields, such as computer science, information technology, artificial intelligence, Web intelligence, cognitive science, neuroscience, medical science, life science, economics, data mining, data and knowledge engineering, intelligent agent technology, human computer interaction, complex systems, and system science, to explore the main research problems in BI that lie in the interplay between the studies of the human brain and the research of informatics. On the one hand, one models and characterizes the functions of the human brain based on the notions of information processing systems. WI centric information technologies are applied to support brain science studies. For instance, the wisdom Web and knowledge grids enable high-speed, large-scale analysis, simulation, and computation as well as new ways of sharing research data and scientific discoveries. On the other hand, informatics-enabled brain studies, e.g., based on fMRI, EEG, and MEG, significantly broaden the spectrum of theories and models of brain sciences and offer new insights into the development of human-level intelligence on the wisdom Web and knowledge grids.
The announcement provides another summary for "Brain Informatics (BI)":
Brain Informatics (BI) is an emerging interdisciplinary and multi-disciplinary research field that focuses on studying the mechanisms underlying the human information processing system (HIPS). BI investigates the essential functions of the brain, ranging from perception to thinking, and encompassing such areas as multi-perception, attention, memory, language, computation, heuristic search, reasoning, planning, decision-making, problem-solving, learning, discovery, and creativity. One goal of BI research is to develop and demonstrate a systematic approach to an integrated understanding of macroscopic and microscopic level working principles of the brain, by means of experimental, computational, and cognitive neuroscience studies, as well as utilizing advanced Web Intelligence (WI) centric information technologies. Another goal is to promote new forms of collaborative and interdisciplinary work. New kinds of BI methods and global research communities will emerge, through infrastructure on the wisdom Web and knowledge grids that enables high speed and distributed, large-scale analysis and computations, and radically new ways of data/knowledge sharing.
Conference topics include:
-- Jack Krupansky
I just ran across an interesting conference announcement, SOCREAL 2010: Second International Workshop on Philosophy and Ethics of Social Reality. The conference summary is:
In the past two decades, a number of logics and game theoretical analyses have been proposed and combined to model various aspects of social interaction among agents including individual agents, organizations, and individuals representing organizations. The aim of SOCREAL Workshop is to bring together researchers working on diverse aspects of such interaction in logic, philosophy, ethics, computer science, cognitive science and related fields in order to share issues, ideas, techniques, and results.
Topics will include:
From my own perspective, presently, software agents operate at a rather primitive level with little more than basic data transfer and simple control, but eventually software agents will evolve into intelligent agents whose activity is more along the lines of social behavior, including ethics and the behavior of groups and even organizations and institutions of software agents. And, of course, software agents are acting as agents for other entities, whether computational or human. There certainly is a lot of ground to be broken. It is at least heartening that people are beginning to scratch the surface of the potential for social reality of computational entities.
Eventually, somebody will realize that these social agents are communicating in a language and that language has semantics and that there is a potential for a great semantic abyss between the various communities of social agents, as well as a vast semantic abyss between these computational agents and their real world "masters".
Great challenges and great opportunities.
Just a note to myself to eventually look into the concept of quantum artificial life. Not sure what it really is. Doesn't even have a Wikipedia page yet. And a Google search yields little.
Assuming that it really does have some basis in quantum mechanics, two questions pop up:
Hmmm...
I was just reading the call for papers for a workshop entitled "Workshop on Complexity, Evolution and Emergent Intelligence" at AI*IA 09, the Eleventh Conference of the Italian Association for Artificial Intelligence, scheduled for December 12, 2009 in Reggio Emilia, Italy, which covers a variety of topics related to complex systems and "aims at bringing together scientists who work from different perspectives, from basic science to applications, on the common theme of systems composed by many components that interact non-linearly."
The focus is on complex systems which "very often exhibit interesting features, as self-organisation, robustness, surprising collective processes and occasionally intelligence."
A workshop goal is to achieve closer interactions between the communities of Complex Systems Science (CSS) and Artificial Intelligence (AI):
Recent developments -- for example in the context of agent-based modelling, distributed and/or evolutionary computation -- represent new opportunities for further exploring and strengthening these scientific interactions and connections.
The workshop will pay close attention to the combination of intelligence and complex interactions:
As already suggested, the contemporary presence of intelligence and complex interactions may not be casual but, instead, able to disclose deeper links between the two characteristics. Are there universal patterns of organization in complex systems, from pre-biotic replicators to evolved beings, to artificial objects? Do these structures allow effective computational processes to develop?
Key questions are how robust structures which develop in such systems are, how information is incorporated into these structures and how computation emerges. The study of complex systems is also interested in determining the contributions of selection, chance and self-organization to the functioning and evolution of complex structures.
Topics of interest for the workshop include:
I am most intrigued with tangled hierarchies and the emergence of mind, but it is all quite fascinating stuff.
Just a mental bookmark to look into the so-called coordination paradigm for modeling the interaction of ensembles of software agents. I do not have a great definition yet, but it involves the modeling of concurrent and distributed computations and systems based on the concept of coordination, which enables the parts to act as a whole.
My hunch is that the trick is that we are not trying to model the agents per se but some "the whole is greater than the sum of the parts" functional capability that is emergent from the ensemble and not strictly present and observable in the individual agents of the ensemble.
I have another hunch that we need to differentiate between explicit coordination, where the agents know about the greater function of the ensemble and how they each fit into "the big picture," versus implicit coordination, where the greater function is truly an emergent phenomenon that none of the agents could have known about in advance and may not even know about as it is in progress.
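The classic concrete instance of the coordination paradigm is a Linda-style tuple space, where agents coordinate only through a shared data space rather than by naming or messaging each other directly. A minimal sketch, assuming nothing beyond the standard Linda idea (the method names follow Linda's `out`/`in` primitives, and the task tuples are made up for illustration):

```python
# A Linda-style tuple space: agents deposit and withdraw tuples from a
# shared space, coordinating without ever referencing one another.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Deposit a tuple into the shared space."""
        self.tuples.append(tup)

    def inp(self, pattern):
        """Withdraw the first tuple matching the pattern (None = wildcard)."""
        for tup in self.tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, tup)
            ):
                self.tuples.remove(tup)
                return tup
        return None

space = TupleSpace()
# A "producer" agent and a "consumer" agent never know about each other;
# the coupling lives entirely in the shared medium:
space.out(("task", "resize-image", 42))
task = space.inp(("task", None, None))
print(task)  # ('task', 'resize-image', 42)
```

Note how this maps onto the two hunches: the tuple space itself is neutral, so whether coordination is explicit or implicit depends on whether the agents know the conventions behind the tuples they exchange, not on the medium.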
Although it is tempting to posit that human-level intelligence might be the be-all and end-all for intelligent software agents, there is the possibility that more primitive levels of "intelligence" may have significant utility and other benefits, in much the manner that varying levels of intelligence are useful in human social organizations. Besides human-level intelligence, we could also consider non-human animal-level intelligence, especially for more primitive operations. After all, is "searching" that much different from "hunting", and are humans really that much better at hunting than many animal species? Taking this progression to the next (lower) level, does the plant kingdom have anything to offer in terms of capabilities that might be useful in software agents? My hunch is that the answer is yes, or at least maybe.
I am not suggesting that plants could provide a model for matching or exceeding human-level intelligence, but there are plenty of lower-level operations and infrastructure needs that might in fact benefit from what we might learn from study of the plant kingdom. After all, plants root, grow, reproduce, disperse seeds, and co-operate with other plants in a fashion, suggesting forms of networking and distributed processing, at least at a primitive level. Besides, plants have mastered the process of harnessing the energy of the sun, a feat that we continue to struggle with.
Whether plants have a human-like or animal-like "mind" or "brain" is debatable, but maybe irrelevant. What is relevant is the forms of processing that plants can perform and how that processing is controlled.
The real potential may be not for the more "intelligent" of agent needs, but in the need for more robust, durable, and resilient "grunt" agent needs and needs within the infrastructure to support the intelligent agents.
The plant kingdom may be able to provide some interesting metaphors for information processing.
The more interesting angle might be that we could construct hybrid metaphors that combine aspects of human, animal, and plant "intelligence" that might not be possible or practical in the "real" world.
Whether or not we are able to use plant-like capabilities in agents themselves, my hunch is that the infrastructure and environment in which agents operate could very well benefit from being more plant-like. Visualize the agents as animals in a jungle.
I have not dug too deeply into this area, yet.
Here are a couple of references I have stumbled across:
Professor Michael Wooldridge of the University of Liverpool is about to come out with the Second edition of An Introduction to MultiAgent Systems.
It is listed on Amazon, but as "This title has not yet been released. You may pre-order it now and we will deliver it to you when it arrives." with a suggested release date of July 7, 2009. It is also listed on Wiley's web site. I would love to leaf through the book, but I am not going to pay $60 for a book.
The description from Wiley:
The study of multi-agent systems (MAS) focuses on systems in which many intelligent agents interact with each other. These agents are considered to be autonomous entities such as software programs or robots. Their interactions can either be cooperative (for example as in an ant colony) or selfish (as in a free market economy). This book assumes only basic knowledge of algorithms and discrete maths, both of which are taught as standard in the first or second year of computer science degree programmes. A basic knowledge of artificial intelligence would be useful to help understand some of the issues, but is not essential.
The book's main aims are:
- To introduce the student to the concept of agents and multi-agent systems, and the main applications for which they are appropriate
- To introduce the main issues surrounding the design of intelligent agents
- To introduce the main issues surrounding the design of a multi-agent society
- To introduce a number of typical applications for agent technology
Michael has emailed out a blurb for the book (also available on his web page) that introduces it as follows:
Multiagent systems are an important paradigm for understanding and building distributed systems, where it is assumed that the computational components are autonomous: able to control their own behaviour in the furtherance of their own goals. The first edition of An Introduction to Multiagent Systems was the first contemporary textbook in the area, and became the standard undergraduate reference work for the field. This second edition has been extended with substantial new material on recent developments in the field, and has been revised and updated throughout. It provides a comprehensive, coherent, and readable introduction to the theory and practice of multiagent systems, while presenting a wealth of discussion topics and pointers into more advanced issues for those wanting to dig deeper.
The blurb notes some key new features:
I took a brief look at the table of contents and arrived at the following tentative conclusions:
The main sections of the book are:
The blurb tells us that the book is:
Designed and written specifically for computing undergraduates, the book comes with a rich repository of online teaching materials, including lecture slides.
Overall, the book is a great introduction to the current state of the art of software agent technology, both in theory and practice.
Need to go check out those lecture slides!
With so many places to go and so many things to see and do on the Web, it is getting almost impossible to keep up with the proliferation of interesting information out there. We need some help. A hefty productivity boost is simply not good enough. We need a lot of help. Browser add-ons, better search engines, and filtering tools are simply not enough. Unfortunately, the next few years hold more of the same.
But, longer term we should finally start to see credible advances in software agent technology which help extend our own minds, enabling virtual browsing and a virtual presence on the Web so that we can effectively reach and touch a far broader, deeper, and richer lode of information than we can with personal browsing and our personal presence.
Twitter asks us what we are doing right now, but our online activity and presence with the aid of software agents will be a thousand or ten thousand or even a million or ten million times greater than we can personally achieve today. What are each of us interested in? How about everything?! Why not?
The gradual evolution of the W3C conception of the Semantic Web will eventually reach a critical mass where even relatively dumb software agents can finally appear to behave in a relatively intelligent manner that begins to approximate our own personal activity and personal presence on the Web.
It may take another five to ten years, but the long march in that direction is well underway.
The biggest obstacle right now is not the intelligence of an individual software agent per se, but the need to encode a rich enough density of information in the Semantic Web so that we can realistically develop intelligent software agents that can work with that data. We will also need an infrastructure that mediates between the actual data and the agents.
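To make that obstacle concrete: the Semantic Web encodes information as subject-predicate-object triples that even a dumb agent can traverse mechanically. The sketch below uses plain tuples rather than a real RDF store, and the vocabulary is entirely made up for illustration:

```python
# A sketch of Semantic Web-style triples: once enough knowledge is
# encoded this way, a very simple agent can chain pattern-matching
# lookups to "browse" on our behalf.

triples = [
    ("JackKrupansky", "authorOf", "StateOfTheArtPage"),
    ("StateOfTheArtPage", "topic", "SoftwareAgents"),
    ("SoftwareAgents", "subfieldOf", "ArtificialIntelligence"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# Chained lookups need no intelligence at all, just enough data density:
page = query(subject="JackKrupansky", predicate="authorOf")[0][2]
print(query(subject=page, predicate="topic"))
# [('StateOfTheArtPage', 'topic', 'SoftwareAgents')]
```

The point of the sketch is that the agent logic is trivial; everything depends on whether the `triples` list is rich and dense enough, which is exactly the bottleneck described above.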
For some time I have wondered about the differences between plants and animals, two distinct "kingdoms." Maybe someday I'll have enough spare time to look into the matter (so to speak.) A variation of that question popped into my mind today: What are the biological requirements for intelligence? Man evolved intelligence in the animal kingdom. What specifically enabled that evolution of intelligence in man? Not the "pop", superficial explanations, but what exactly is it that permits man to exhibit intelligence? Put another way, why were plants unable to evolve in a parallel manner into "intelligent" individuals? Are there in fact biological requirements for intelligence that only the animal kingdom has to offer? Or, could intelligence, in theory, occur in plants through some path of evolution within the plant kingdom? In any case, in short, what exactly are the biological requirements for intelligence? And I do mean intelligence in the sense of human-level intelligence. That does beg the question of other forms of "intelligence" that may be wholly incomparable to human intelligence.
Now, this also broaches the question of machine intelligence, computational intelligence, or artificial intelligence. If in fact there are biological requirements for intelligence, can those requirements in fact be met by non-biological entities such as computers as we know them? Of course that does beg the question of whether we could simply develop a computer program which is a simulator for biological life. That then raises the question of whether plants could evolve a machine-like structure which in fact was such a simulator for animal life.
In any case, we are left with the question of what the requirements would be for human-level intelligence in machines, and whether there may be biological functions that cannot easily (or maybe even possibly) be simulated in machines.
By "machines", I mean computers as we know them today, a device which can execute what we call computer programs or computer software.
That begs two questions. First, are there radically different computer software architectures that might enable programming of human-level intelligence? Second, are there radically different device architectures which would permit software architectures that cannot easily (or maybe even possibly at all) be developed with computer devices as we know them?
To phrase the initial question another way, could we in theory genetically engineer plants to develop forms of intelligence?
More abstractly, could another "kingdom" develop which was neither plant nor animal, but capable of exhibiting human-level intelligence? Maybe in another solar system, another galaxy, or a parallel universe? Or, is there in fact some fundamentally basic requirement for intelligence which even in theory can only be satisfied within the animal kingdom?
One final question... What biological requirements would need to be met for artificial devices, presumably capable of reproduction by themselves, to in fact be considered "biological" and a new "kingdom" paralleling the animal and plant kingdoms?
The recent uproar over the read-aloud feature of the new Amazon Kindle book reading device has raised some fascinating questions related to the definition and interpretation of the concepts of a performance and a derivative work, as well as the concept of licensed use. I would add that this dispute also raises the issue of the role and status of software agents.
An article in Ars Technica by Julian Sanchez entitled "Kindles and "creative machines" blur boundaries of copyright" does a decent job of covering both the pros and cons and legal nuances of the "rights" for electronically reading a book aloud.
I have read a lot of the pro and con arguments, but I am not prepared to utter a definitive position at this time.
I would note that there is a "special" context for the entire debate: the ongoing "culture war" between the traditional world view of people, places, and things and the so-called "digital" world view, whether it be online with the Web or interactive within a computer system. Clearly there are parallels between the real and "virtual" worlds, but also there are differences. Rational people will recognize and respect the parallels even as they recognize and respect the differences. Alas, there is a point of view that insists that the virtual worlds (online and interactive) should not be constrained in any way by the real-world world view.
The simple truth is that the real and virtual worlds can in fact coexist separately, but the problem comes when we try to blend the two worlds and pass artifacts between them. Then, the separateness breaks down. The Kindle is a great example, with real-world books being "passed" into the digital world and then the act of electronically reading them aloud passing back from the digital world to the real world.
It is also interesting to note that many books are now actually created in the virtual world (word processing, storage, transmission, digital printing) even if not intended specifically as so-called e-books, so that physical books themselves in fact typically originated in a virtual world. Clearly the conception of the book occurs in the mind of the author and the editors, but the actual "assembly" of all of the fragments from the minds of authors and editors into the image of the book occurs in the virtual world.
In any case, my interest is in the role of software agents. A software agent is a computer program which possesses the quality of agency or acting for another entity. The Kindle read-aloud feature is clearly a software agent. Now, the issue is whose agent is it. The consumer? Amazon? The book author? The publisher?
The superficially simple question is who "owns" the software agent.
We speak of "buying" books, even e-books, but although the consumer does in fact "buy" the physical manifestation, they are in fact only licensing the "use" of the intellectual property embodied in that physical representation. You do in fact "own" the ones and zeros of the e-book or the paper and ink of the meatspace book, but you do not own all uses except as covered by the license that you agreed to at the time of acquisition of the bits. Clearly not everyone likes or agrees with that model, but a license is a contract and there are laws related to contracts. Clearly there are also disputes about what the contract actually covers or what provisions are enforceable. That is why we have courts.
So, the consumer owns the bits of the read-aloud software agent, and the consumer may have some amount of control over the behavior of that software agent, but ownership and interaction are not the same thing.
I would suggest that the read-aloud software agent still belongs to Amazon since it remains a component of the Kindle product. A Kindle reading a book aloud is not the same as a parent reading a book to a child or a teacher reading to a class (or the reading in the movie The Reader), in particular because it is Amazon's agent that is doing the reading.
An interesting variation would be an open source or public domain version of Kindle as downloadable software for the PC, or software with features different from Kindle for that matter. Who "owns" any software agents embedded in that software? Whose agent is doing the performance? Whose agent is creating derivative works? To me, the immediate answer is who retains the intellectual property rights to the agent. In the Kindle case, Amazon is not attempting to transfer all rights. Even if they did, there is the same question as with file-sharing software, whether there is some lingering implied liability that goes along even when ownership is transferred.
Another open issue would be software agents which completely generate content from scratch dynamically, not from some input such as an e-book data stream. Who owns that content? I would suggest that the superficial answer is that the owner of the agent owns "created" (non-derivative) content, except as they may have licensed transfer of ownership of such content.
Another issue is whether a "stream" can be considered a representation. I would think so. One could also consider it a performance of an implied representation. Whether each increment of data in the stream is stored may not be particularly relevant. The stream has most of the "effect" of a full representation.
Another issue is trying to discover the intent or spirit of the law as opposed to the exact letter of the law. Sure, there are plenty of loopholes and gotchas that do in fact matter when in a courtroom, but ultimately I would think that it is the intentions that matter the most to society. Unless, you are a proponent of a "free" digital world that is unencumbered by any constraints of the real world and seeks to exploit loopholes simply because "they are there."
In any case, my point is not to settle the matter, but to raise the issues of performances and creation of derivative works in the realm of software agents, both for developers of software agent technology and those who seek to deploy it. And there remains the issue of what lingering liability tail connects software agents to their creators.
The good news is that somehow, I have managed to be result #1 in Google for the term social agent. The bad news is that my Web page that purported to define that term simply said "A social agent is ... TBD." How lame! DOH! That page has gotten a fair number of hits, probably mostly from academic researchers in software agent technology and their students. One finally sent me an email sarcastically complimenting me for saving him so much effort and adding that my mother should be proud of me. Well, I fixed the problem. I did some research and derived my own definition for the term social agent. Actually, there are two somewhat distinct uses:
(1) A social agent is a software agent that exhibits a significant degree of interdependence with other software agents, resulting in or from the formation of communities of software agents within the full population of software agents to which the social agents belong, where each community has rules for behavior within the community.
(2) A social agent is a software agent or robot which is capable of social communication with human beings.
See: http://www.agtivity.com/def/social_agent.htm
What is frustrating about this is that by failing to have a reasonable definition on that Web page, I have been losing out on opportunities to be cited as a source for the definition of that term. There is not even a Wikipedia article for it.
Social choice mechanisms will no doubt be crucial to the operation of large and complex agent systems. Software agents will need to make choices and will need to affect outcomes in multi-agent interactions. Voting is one example. The emerging sub-field of computational social choice is an attempt to adapt the tools and techniques of social choice theory to the realm of computational entities.
I myself have not explored this area beyond the very superficial, but it does show promise.
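To make the idea of a social choice mechanism concrete, here is a minimal sketch of one classic voting rule that agents could use to aggregate ranked preferences: the Borda count. The agent names, task names, and rankings below are purely hypothetical illustrations, not drawn from any particular agent system.

```python
def borda_count(preference_profiles):
    """Aggregate ranked ballots into a single collective ranking.

    Each ballot is a list of candidates ordered from most to least
    preferred; a candidate in position i of an n-candidate ballot
    earns n - 1 - i points.
    """
    scores = {}
    for ballot in preference_profiles:
        n = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - position)
    # Highest total score first; ties broken alphabetically for determinism.
    return sorted(scores, key=lambda c: (-scores[c], c))

# Three agents vote on which task their community should schedule first.
ballots = [
    ["index", "crawl", "report"],   # agent A's ranking (hypothetical)
    ["crawl", "index", "report"],   # agent B's ranking
    ["index", "report", "crawl"],   # agent C's ranking
]
print(borda_count(ballots))  # prints ['index', 'crawl', 'report']
```

Borda is only one of many rules studied in computational social choice; the field also examines how hard such rules are to compute and to manipulate, which matters when the voters are software agents that can strategize at machine speed.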
Some of the specific topic areas are:
The overall topic will be covered in a future special issue of Springer's Journal of Autonomous Agents and Multi-Agent Systems ("Special Issue on Computational Social Choice").
Keywords: computational social choice, social choice theory, social choice mechanisms, social choice problems, collective decision-making.
The Journal of Web Semantics has issued a call for papers for a special issue on the topic of "Exploring New Interaction Designs Made Possible by the Semantic Web." They tell us that they:
... seek papers that look at the challenges and innovate possible solutions for everyday computer users to be able to produce, publish, integrate, represent and share, on demand, information from and to heterogeneous data sources. Challenges touch on interface designs to support end-user programming for discovery and manipulation of such sources, visualization and navigation approaches for capturing, gathering and displaying and annotating data from multiple sources, and user-oriented tools to support both data publication and data exchange. The common thread among accepted papers will be their focus on such user interaction designs/solutions oriented linked web of data challenges. Papers are expected to be motivated by a user focus and methods evaluated in terms of usability to support approaches pursued.
Offering some background, they inform us that:
The current personal computing paradigm of single applications with their associated data silos may finally be on its last legs as increasing numbers move their computing off the desktop and onto the Web. In this transition, we have a significant opportunity and requirement to reconsider how we design interactions that take advantage of this highly linked data system. Context of when, where, what, and whom, for instance, is increasingly available from mobile networked devices and is regularly if not automatically published to social information collectors like Facebook, LinkedIn, and Twitter. Intriguingly, little of the current rich sources of information are being harvested and integrated. The opportunities such information affords, however, as sources for compelling new applications would seem to be a goldmine of possibility. Imagine applications that, by looking at one's calendar on the net, and with awareness of whom one is with and where they are, can either confirm that a scheduled meeting is taking place, or log the current meeting as a new entry for reference later. Likewise, documents shared by these participants could automatically be retrieved and available in the background for rapid access. Furthermore, on the social side, mapping current location and shared interests between participants may also recommend a new nearby location for coffee or an art exhibition that may otherwise have been missed. Larger social applications may enable not only the movement of seasonal ills like colds or flus to be tracked, but more serious outbreaks to be isolated. The above examples may be considered opportunities for more proactive personal information management applications that, by awareness of context information, can better automatically support a person's goals. In an increasingly data rich environment, the tasks may themselves change. 
We have seen how mashups have made everything from house hunting to understanding correlations between location and government funding more rapidly accessible. If, rather than being dependent upon interested programmers to create these interactive representations, we simply had access to the semantic data from a variety of publishers, and the widgets to represent the data, then we could create our own on-demand mashups to explore heterogeneous data in any way we chose. For each of these types of applications, interaction with information -- be it personal, social or public -- provides richer, faster, and potentially lighter-touch ways to build knowledge than our current interaction metaphors allow.
Finally, they pose their crucial question:
What is the bottleneck to achieving these enriched forms of interaction?
For which they propose the answer:
Fundamentally, we see the main bottleneck as a lack of tools for easy data capture, publication, representation and manipulation.
They provide a list of challenges to be addressed in the issue, including but not restricted to:
In addition to traditional, full-length papers, they are also soliciting shorter papers, as well as short (one to two page), forward-looking, more speculative papers addressing the challenges outlined above. I am tempted to submit one of the latter, possibly based on my proposal for The Consumer-Centric Knowledge Web - A Vision of Consumer Applications of Software Agent Technology - Enabling Consumer-Centric Knowledge-Based Computing. Or, maybe a stripped-down version of that vision that is more in line with the "reach" of the current, RDF-based vision of the Semantic Web.