Friday, December 22, 2006

Trust No One?

I'm sitting here in New York City on vacation (yes, Manhattan *is* relaxing for me) and may go see De Niro's new movie about the CIA, The Good Shepherd. The motto in the movie is "Trust no one." That reminds me of one of the central, vital, and unsolved issues for software agent technology: trust. Sure, we have all manner of authentication and authorization identity systems, but they are extremely limited in scope and don't come close to addressing the concerns of open software agent systems, let alone the issues of integrating the online cyber world with the physical world and the human social world. True, there is ultimately no absolute trust, but in practice trust is a spectrum of degrees, with a multitude of orthogonal spectra for different purposes of trust. Ultimately, we want to analyze the risk of engaging in any act based on trust. Even if all parties are absolutely honest, mistakes can be made and perceptions can be wrong, so authenticating identity and the provenance of "facts" to the nth degree is insufficient to reach any absolute form of trust.
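To make that concrete, here is a minimal sketch in Python of trust as a spectrum of per-purpose degrees feeding a simple risk calculation. The purposes, class, and numbers are purely my own invention for illustration:

```python
# Minimal sketch: trust as a spectrum of degrees, one per purpose,
# rather than a single yes/no bit. Purposes and numbers are hypothetical.

PURPOSES = ("identity", "competence", "honesty", "timeliness")

class TrustProfile:
    """Degrees of trust in [0.0, 1.0] along orthogonal purposes."""

    def __init__(self, **degrees):
        # Unspecified purposes default to 0.5: no evidence either way.
        self.degrees = {p: degrees.get(p, 0.5) for p in PURPOSES}

    def risk_of(self, purpose, stakes):
        """Expected loss of relying on this party for one purpose:
        (chance the trust is misplaced) times (cost if it is)."""
        return (1.0 - self.degrees[purpose]) * stakes

# A party can deserve high trust for one purpose and low for another.
supplier = TrustProfile(identity=0.99, competence=0.6)
print(round(supplier.risk_of("identity", stakes=10000)))    # -> 100
print(round(supplier.risk_of("competence", stakes=10000)))  # -> 4000
```

The point is that a single party can merit a high degree of trust for one purpose and a low degree for another, and that any decision should weigh the relevant degree of trust against what is actually at stake.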

Much of the technological interest in trust to date has focused on trust regarding identity, but a great looming problem for social computing is trust related to the truth of alleged facts and alleged knowledge claims. We talk about establishing contracts between software entities, but this presupposes that we trust the parties to honor the terms of those contracts. Once again, it may not be so much a matter of judging sincerity of commitment as judging the risk that the terms might not be honored, whether through mistakes, misperceptions, misinterpretation or ambiguity of the terms, or a cascade of trust failures from other entities and other contracts, and also judging the risk posed by the consequences of each failure mode.
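As a rough illustration, judging a contract this way amounts to estimating an expected loss over its failure modes, something like the following sketch. The modes, probabilities, costs, and risk tolerance are all invented for the example:

```python
# Sketch: judge a contract by expected loss across its failure modes,
# not by the sincerity of the parties. Modes and numbers are invented.

def expected_loss(failure_modes):
    """Sum of probability * cost over the listed failure modes."""
    return sum(prob * cost for prob, cost in failure_modes)

contract_modes = [
    (0.02, 5000),    # honest mistake in execution
    (0.01, 8000),    # ambiguous terms, interpreted differently
    (0.005, 20000),  # cascade: an upstream contract fails first
]

loss = expected_loss(contract_modes)  # 100 + 80 + 100, about 280
acceptable = 500                      # assumed risk tolerance
print("proceed" if loss < acceptable else "renegotiate")  # -> proceed
```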

We humans in our real physical and social worlds have massive difficulties with trust, being either too trustful or not trustful enough, so one open question is whether our engineered social computing systems can do a much better job at trust, even theoretically.

We got ourselves into a global "war on terror" because we trusted the agenda of the Neoconservatives. Not to mention the fact that we trusted that previous administrations were "doing the right thing" on the counterterrorism front.

We got ourselves into the quagmire of Iraq because we once again trusted the judgment of the Neoconservatives and we trusted their vetting of intelligence about Iraqi possession of weapons of mass destruction. We also trusted that WMDs were the actual motivation and agenda for the Neoconservatives.

We are presently engaged in a stare-down contest and saber-rattling with Iran because we trust that the Neoconservatives are now right about Iran's commitment to turn a civilian nuclear energy program into a nuclear weapons development and deployment program. We also trust that the Neoconservatives are 100% correct in their assertion that Iran is the main source of funding and arms for Hezbollah in Lebanon. How likely is it that all of the "trust" at stake will be found justified a few years from now? And found so much more justified than the mistaken trust on the "war on terror" and Iraq?

If we humans are capable of such lousy performance on the trust front in the real physical and social worlds when so many lives and so much money are at stake, how exactly do we expect to transcend these massive failures and incompetencies as we build trust systems in the online cyber world?

This is a massive, open, unresolved issue.

One technique is the statistical approach, such as that used to combat computer viruses and malware, but this is of no help when software agents are struggling with knowledge too limited to be statistically measurable, such as a contract between a pair of agents involving a small number of knowledge claims. Not to mention that the statistical approach is never 100% reliable. High statistical reliability can assure that large-scale systems work overall, but it is no solace for the statistically insignificant individuals who are greatly harmed by the statistically insignificant "errors". We need approaches that simultaneously address the large-scale issues and the individual-level issues. Ultimately, we need systems that can work efficiently and reliably at the level of individual pairs of software agents and individual contracts.
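One existing technique that degrades gracefully with scant evidence is a beta reputation model in the style proposed by Jøsang and Ismail, which yields both a trust estimate and an explicit measure of how uncertain that estimate is. A minimal sketch, with a deliberately crude uncertainty measure:

```python
# Sketch of a beta-reputation-style estimate (after Josang & Ismail),
# which stays honest even with only a handful of observations
# between one pair of agents.

def beta_trust(successes, failures):
    """Trust estimate plus an explicit uncertainty from raw counts,
    via a Beta(successes + 1, failures + 1) posterior on reliability."""
    n = successes + failures
    expected = (successes + 1) / (n + 2)  # posterior mean
    uncertainty = 2 / (n + 2)             # shrinks as evidence grows
    return expected, uncertainty

print(beta_trust(0, 0))    # (0.5, 1.0): no evidence, maximal doubt
print(beta_trust(3, 1))    # (0.666..., 0.333...): thin evidence
print(beta_trust(90, 10))  # (0.892..., 0.0196...): statistical regime
```

With zero observations the model reports maximal uncertainty rather than a false verdict of trust or distrust, which is precisely the individual-level honesty that a purely large-scale statistical approach lacks.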

The Good Shepherd approach of trusting no one does in fact have some merit and applicability to social computing, but just as the protagonist in the movie ran into difficulties by taking an absolute negative stance on trust, we need to ensure that software agents in social computing systems are based on analysis of risk rather than on the assumption that truth and trust are binary, on-or-off qualities.

-- Jack Krupansky

1 Comment:

At 4:46 PM MST, Anonymous said...

Wow, you raised a very valid point on "TRUST", which I believe most tech-savvy people are not paying attention to. It makes me wonder about all the services people are expecting from these agents: how do they know that what they are getting is foolproof? Thank you for showing the other side of the coin.

SR

 
