

Conclusion

Given the above background and desiderata for what I call an agent, I find little justification for most of the commercial offerings that call themselves agents. Most of them excessively anthropomorphize the software, and then conclude that it must be an agent because of that very anthropomorphization, while simultaneously failing to provide any sort of discourse or `social contract' between the user and the agent. Most are barely autonomous, unless a regularly-scheduled batch job counts. Many do not degrade gracefully, and therefore do not inspire enough trust to justify more than trivial delegation and its concomitant risks.

Yet the appeal of a truly intelligent system, a sort of mental slave that one need not feel guilty about using as a slave, persists. As with artificial intelligence before it, the fairly loose specification of what `agent' means, and the use of a word with a nontechnical meaning in a technical way, have blurred the boundaries and made the concept available for appropriation.

It should be clear by now that I consider many uses of `agent' to be misnomers, whether accidental or deliberate attempts to cash in on a fad or on ancient dreams of truly intelligent assistants. Yet I also argue that even true `agent' systems, such as Julia, deserve careful scrutiny. Systems such as Julia provoke discussion of sociological and emotional interactions with computational tools, which implies that explicit attention to how users will perceive such systems is warranted; otherwise we may build systems that are less useful than they could be. Consider that even an agent as arguably useful as Julia went unappreciated by one potential user, Lara: Julia was good enough at Turing-competence that Lara thought she was human, but poor enough a conversationalist that Lara took her for a boring human fixated on hockey. As more human interaction moves into essentially cyberspace-like realms, and as the boundaries between human and machine behavior become blurrier, more and more programs will have to be held up to such scrutiny. There may come a time when one's programs are subjected to the same sort of behavioral analysis one might expect to be applied to a human: Is this program behaving appropriately in its social context? Is it causing emotional distress to those it interacts with? Is it being a `good citizen'?

I believe that Julia and her domain of muds are both important early examples of where human/computer interaction may be leading, and that they hint at both the problems and the opportunities waiting farther down the path. But getting there with the fewest false steps will take careful observation and analysis of how people interact with such programs, and with environments such as muds, which situate people and programs so closely that the boundaries between them are perceptibly blurred. It will also take due diligence to avoid polluting and diluting the concepts required to talk about such systems, lest we see the same hype, crash, and burn phenomenon that befell artificial intelligence happen once again.



Lenny Foner