The moment we accept the problem and grant that agents may not use the same terms to mean the same things, we need a way for an agent to discover what another agent means when it communicates. For this to happen, every agent must publicly declare exactly what terms it uses and what they mean. This specification is commonly referred to as the agent's ontology [Gruber 1993]. If it were written only for people to understand, this specification could be just a glossary. However, the meaning must also be accessible to other software agents, which requires that it be encoded in some kind of formal language. A given agent can then use automated reasoning to accurately determine the meaning of other agents' terms. For example, suppose Agent 1 sends a message to Agent 2, and in this message is a pointer to Agent 1's ontology. Agent 2 can then consult Agent 1's ontology to see what the terms mean; the message is successfully communicated, and the agent's task is successfully performed. At least, this is the theory. The holy grail is for this to happen consistently, reliably, and fully automatically; in practice there is a plethora of difficulties. Most of these arise from various sources of heterogeneity: there are many different ontology representation languages, divergent modeling styles, and inconsistent uses of terminology, to name a few.
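The exchange described above can be sketched in a few lines of Python. This is a minimal illustration only: the `Agent` class, its `ontology` dictionary, and the sample terms are hypothetical names invented for this sketch, not any standard agent framework or ontology API. A real ontology would be a formal artifact (e.g. in a description logic language), not a glossary-style dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy agent that publishes its vocabulary as a term->definition map."""
    name: str
    ontology: dict = field(default_factory=dict)

    def send(self, recipient, terms):
        # The message carries a pointer to the sender's ontology,
        # so the recipient can resolve unfamiliar terms.
        return recipient.receive({"terms": terms, "ontology": self.ontology})

    def receive(self, message):
        # Look up each term in the *sender's* ontology, not our own.
        return {
            term: message["ontology"].get(term, "<unknown term>")
            for term in message["terms"]
        }

a1 = Agent("Agent 1", ontology={"invoice": "a formal request for payment"})
a2 = Agent("Agent 2")
result = a1.send(a2, ["invoice", "widget"])
```

Here `result` maps `"invoice"` to Agent 1's definition and `"widget"` to `"<unknown term>"`, showing both the success case and the first crack where heterogeneity appears: a term the sender never declared.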

A quote saved on Feb. 26, 2013.

