Where Are The Semantics In The Semantic Web? http://dl.acm.org/citation.cfm?id=958674

Quotes from this source:

 Axiomatic semantics

The axiomatic semantics for a language helps to ascribe a real world semantics to expressions in that language, in that it limits the possible models or interpretations that the set of axioms may have.

#language  #semantics  #real-world-semantics  #world-semantics  #interpretation 
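The pruning effect described above can be made concrete with a brute-force sketch (a hypothetical toy vocabulary, not from the paper): enumerate every interpretation of two predicates over a two-element domain, then keep only those satisfying a single axiom.

```python
# Toy illustration: how a set of axioms limits the possible
# interpretations of a vocabulary. Domain of two individuals, two
# unary predicates (Dog, Animal); an interpretation assigns each
# predicate a subset of the domain. All names here are invented.
from itertools import product

domain = ["a", "b"]

def subsets(xs):
    out = [[]]
    for x in xs:
        out += [s + [x] for s in out]
    return [set(s) for s in out]

# All interpretations: every (Dog-extension, Animal-extension) pair.
interpretations = [
    {"Dog": d, "Animal": an}
    for d, an in product(subsets(domain), subsets(domain))
]

# Axiom: forall x. Dog(x) -> Animal(x)
def satisfies_axiom(interp):
    return interp["Dog"] <= interp["Animal"]

models = [i for i in interpretations if satisfies_axiom(i)]

print(len(interpretations))  # 16 candidate interpretations
print(len(models))           # only 9 survive the axiom
```

Adding further axioms would shrink the set of models again, which is exactly the sense in which axioms "limit the possible models or interpretations".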
 The general case of automatically...

The general case of automatically determining the meaning of Web content is somewhere between intractable and impossible. Thus, a human will always be hardwiring some of the semantics into Web applications. The question is what is hardwired and what is not? The shopping agent applications essentially hardwire the meaning of all the terms and procedures. The hardwiring enables the machine to “know” how to use the content. The hardwiring approach is not robust to changes in Web content. [...] The alternative to hardwiring is allowing the machine to process the semantics specifications directly. [...] In these cases, instead of hardwiring the semantics of the terms representing Web content, the semantics of the representation languages are made public and hardwired into the inference engines used by the applications.

#web-content  #semantics  #meaning  #machine  #applications  #web-applications 
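The hardwiring-versus-specification contrast can be sketched as follows (all page formats, field names, and vocabularies below are hypothetical). Approach 1 hardwires one site's term meanings into per-site code; approach 2 has each site publish a machine-readable mapping to a shared vocabulary, interpreted by one generic engine.

```python
# Two hypothetical shopping pages that name the same concept differently.
page_site_a = {"fare": "120 USD"}
page_site_b = {"ticket_price": "95 USD"}

# Approach 1: hardwired -- one function per site; breaks when a site changes.
def price_site_a(page):
    return page["fare"]

def price_site_b(page):
    return page["ticket_price"]

# Approach 2: each site publishes a spec mapping its terms to a shared
# vocabulary; a single generic extractor interprets the spec at runtime.
spec_a = {"fare": "price"}
spec_b = {"ticket_price": "price"}

def extract(page, spec, wanted="price"):
    for term, meaning in spec.items():
        if meaning == wanted and term in page:
            return page[term]
    return None

# Both approaches agree on these pages; only the second survives a
# site renaming its field, provided the site updates its spec.
assert extract(page_site_a, spec_a) == price_site_a(page_site_a)
assert extract(page_site_b, spec_b) == price_site_b(page_site_b)
```

In the second approach the thing hardwired is no longer the meaning of each site's terms but the semantics of the spec language itself, mirroring the quote's point about inference engines.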
 ...it is useful to collectively...

...it is useful to collectively regard shopping agents as a degenerate case of the Semantic Web. Shopping agents work in the complete absence of any explicit account of the semantics of Web content because the meaning of the Web content that the agents are expected to encounter can be determined by the human programmers who hardwire it into the Web application software. [...] Shopping agents can work even if there is no automatic processing of semantics; it can be done without any formal representation of semantics; it can even be done without any explicit representation of semantics at all. The key to enabling shopping agents to automatically use Web content is that the meaning of the Web content that the agents are expected to encounter can be determined by the human programmers who hardwire it into the Web application software. [...] We argued that today’s Web shopping agents satisfy the above definition for the Semantic Web. We also acknowledged that most people would say that these examples do not satisfy their vision of the Semantic Web. We resolved this conflict by regarding these applications collectively as a degenerate case of the Semantic Web, and we explored what it is about these applications that enables them to make use of today’s Web content.

#web-content  #Semantic-Web  #shopping-agents  #semantics 
 Semantics may be implicit, existing...

Semantics may be implicit, existing only in the minds of the humans who communicate and build Web applications. They may also be explicit and informal, or they may be formal. The further we move along the continuum, the less ambiguity there is and the more likely it is to have robust, correctly functioning Web applications. For implicit and informal semantics, there is no alternative to hardwiring the semantics into Web application software. In the case of formal semantics, hardwiring remains an option, in which case the formal semantics serve the important role of reducing ambiguity in specifying Web application behavior, compared to implicit or informal semantics. There is also the new possibility of using automated inference to process the semantics at runtime. This would allow for much more robust Web applications, in which agents automatically learn something about the meaning of terms at runtime.

#web-applications  #semantics  #formal-semantics 
 Informal semantics

At a further point along the continuum, the semantics are explicit and are expressed in an informal manner, e.g., a glossary or a text specification document. Given the complexities of natural language, machines have an extremely limited ability to make direct use of informally expressed semantics. This is mainly for humans. There are many examples of informal semantics, usually found in text specification documents:
• The meaning of tags in HTML such as <h2>, which means second level header;
• The meaning of expressions in modeling languages such as UML (Unified Modeling Language) [OMG 2000], and the original specification of RDF Schema [W3C 1999];
• The meaning of terms in the Dublin Core [Weible & Miller 2000].

Typically, the semantics expressed in informal documents are hardwired by humans in working software. Compiler writers use language definition specifications to write compilers. [...] The main disadvantage of informal semantics is that there is still much room for ambiguity. This decreases one’s confidence that two different implementations (say of RDF Schema) will be consistent and compatible. Implementations may differ in subtle ways.

#meaning  #semantics  #language  #documents 
 The moment we accept the...

The moment we accept the problem and grant that agents may not use the same terms to mean the same things, we need a way for an agent to discover what another agent means when it communicates. In order for this to happen, every agent will need to publicly declare exactly what terms it is using and what they mean. This specification is commonly referred to as the agent’s ontology [Gruber 1993]. If it were written only for people to understand, this specification could be just a glossary. However, meaning must be accessible to other software agents. This requires that the meaning be encoded in some kind of formal language. This will enable a given agent to use automated reasoning to accurately determine the meaning of other agents’ terms. For example, suppose Agent 1 sends a message to Agent 2 and in this message is a pointer to Agent 1’s ontology. Agent 2 can then look in Agent 1's ontology to see what the terms mean, the message is successfully communicated, and the agent’s task is successfully performed. At least this is the theory. In practice there is a plethora of difficulties. The holy grail is for this to happen consistently, reliably, and fully automatically. Most of these difficulties arise from various sources of heterogeneity. For example, there are many different ontology representation languages, different modeling styles and inconsistent use of terminology, to name a few.

#ontology  #messages  #task  #specification  #language  #meaning 
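The Agent 1 / Agent 2 exchange above can be sketched as a minimal protocol (the ontology URI, term, and definition below are all invented for illustration): the message carries a pointer to the sender's ontology, and the receiver resolves unknown terms by following that pointer.

```python
# Hypothetical published ontologies, keyed by URI.
ontologies = {
    "http://example.org/agent1-ontology": {
        "fuel-pump": "a device that moves fuel from tank to engine",
    }
}

# Agent 1's message includes a pointer to its ontology.
message = {
    "ontology": "http://example.org/agent1-ontology",
    "content": "order one fuel-pump",
    "term": "fuel-pump",
}

def interpret(msg, known_terms):
    term = msg["term"]
    if term in known_terms:
        return known_terms[term]
    # Term unknown: follow the ontology pointer carried in the message.
    return ontologies[msg["ontology"]].get(term)

agent2_vocabulary = {}  # Agent 2 has never encountered "fuel-pump"
meaning = interpret(message, agent2_vocabulary)
print(meaning)
```

This is "the theory" in the quote's sense; the heterogeneity problems it lists (different representation languages, modeling styles, terminology) are precisely what this sketch assumes away by giving both agents one shared lookup format.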
 The most widely accepted defining...

The most widely accepted defining feature of the Semantic Web is machine-usable content. By this definition, the Semantic Web is already manifest in shopping agents that automatically access and use Web content to find the lowest air fares, or book prices. But where are the semantics? Most people regard the Semantic Web as a vision, not a reality—so shopping agents should not “count”. To use Web content, machines need to know what to do when they encounter it. This in turn, requires the machine to “know” what the content means (i.e. its semantics). The challenge of developing the Semantic Web is how to put this knowledge into the machine. The manner in which this is done is at the heart of the confusion about the Semantic Web.

#Semantic-Web  #web-content  #Web  #machine  #shopping-agents  #semantics 
 4 questions for analyzing a sw system

Question 1: What is hardwired and what isn’t? Question 2: How much agreement is there among different Web sites in their use of terminology and in the similarity of the concepts being referred to? Question 3: To what extent are the semantics of the content clearly specified? Is it implicit, explicit and informal, or formal? Question 4: Are agreements and/or semantics publicly declared?

#semantics  #Web-sites  #concept  #terminology  #similarity 
 Formal semantics for human processing

Yet further along the continuum, we have explicit semantics expressed in a formal language. However, they are intended for human processing only. We can think of this as formal documentation, or as formal specifications of meaning. Some examples of this are: 1. Modal logic is used to define the semantics of ontological categories such as rigidity and identity [Guarino et al. 1994]. These are for the benefit of humans, to reduce or eliminate ambiguity in what is meant by these ideas. [...] Formal semantics for human processing can go a long way to eliminating ambiguity, but because there is still a human in the loop, there is ample scope for errors.

#semantics  #human-processing  #explicit-semantics  #formal-specification  #processing  #formal-semantics 
 Model-theoretic semantics

“A model-theoretic semantics for a language assumes that the language refers to a 'world', and describes the minimal conditions that a world must satisfy in order to assign an appropriate meaning for every expression in the language”. [W3C 2002a] It is used as a technical tool for determining when proposed operations on the language preserve meaning.

#language  #meaning  #world  #semantics 
 What is good is what...

What is good is what works. For many applications there is no need for machines to automatically determine the meaning of terms; the human can simply hardwire this meaning into the software. Web shopping agents “know” how to find the fare for a given trip, or the price of a book. Every browser knows what <h2> means: it is a second level header. There is no need to do inference; it is sufficient to hardwire the meaning of <h2> into the browser. We believe that in the short and possibly medium term, approaches that do not make use of machine processible semantics are likely to have the most impact on the development of the Semantic Web.

#browser  #Semantic-Web  #shopping-agents  #semantics  #header  #software 
 We anticipate that progress in...

We anticipate that progress in development of the Semantic Web will take place by: 1. Moving along the semantic continuum from less clearly specified (implicit) semantics to more clearly specified (formal) semantics. 2. Reducing the amount of hardwiring that is necessary, and/or changing which parts are hardwired and which are not. This will entail a corresponding increase in the amount of automated inference to infer the meaning of Web content, thus enabling agents on the Semantic Web to correctly perform their tasks. The importance of compelling use cases to drive the demand for this cannot be overestimated. 3. Increasing the amount of public standards and agreements, thus reducing the negative impact of today’s pervasive heterogeneities. 4. Developing technologies for semantic mapping and translation for the many cases where integration is necessary, but it is not possible to reach agreements.

#Semantic-Web  #semantics  #Web  #web-content  #use-cases 
 Machine usable content presumes that...

Machine usable content presumes that the machine knows what to do with information on the Web. One way for this to happen is for the machine to read and process a machine-sensible specification of the semantics of the information. This is a robust and very challenging approach, and largely beyond the current state of the art. A much simpler alternative is for the human Web application developers to hardwire the knowledge into the software so that when the machine runs the software, it does the correct thing with the information. In this second situation, machines already use information on the Web. [...] So, we still lack an adequate characterization of what distinguishes the future Semantic Web from what exists today.

#machine  #Web  #software  #information  #Semantic-Web 
 A more robust approach is...

A more robust approach is to formally represent the semantics and allow the machine to process it to dynamically discover what the content means and how to use it — we call this machine processible semantics. This may be an impossible goal to achieve in its full generality, so we will restrict this discussion to the following specific question: How can a machine (i.e., software agent) learn something about the meaning of a term that it has never before encountered?

#semantics  #machine  #generality  #goal 
 The most frequently quoted defining...

The most frequently quoted defining feature of the Semantic Web is machine usable Web content. Fundamentally, this requires that machines must “know” how to recognize the content they are looking for, and they must “know” what to do when they encounter it. This “knowledge” requires access to the meaning (i.e., semantics) of the content, one way or the other. But what does that mean? The manner in which the machine can access the semantics of Web content is at the heart of the confusion about the Semantic Web.

#Semantic-Web  #web-content  #machine  #Web  #meaning  #content 
 We ask three questions about...

We ask three questions about how semantics may be specified: 1. Are the semantics explicit or implicit? 2. Are the semantics expressed informally or formally? 3. Are the semantics intended for human processing, or machine processing? These give rise to four kinds of semantics: 1. Implicit; 2. Explicit and informal; 3. Explicit and formal for human processing; 4. Explicit and formal for machine processing.

#semantics  #human-processing  #processing  #kind 
 Agreement in the semantic web

The more agreement there is, the better. For example, there are emerging standards for XML DTDs in specific domains. [...] If there is not agreement (e.g., only some Web sites include taxes in the price information), then effort is required to make sure that you have the right concepts at a given Web site. This creates more work in programming, e.g., one may need to create separate Web application modules for each site. This is made easier if the semantics of the terms and concepts at a given Web site are clearly specified (possibly informally). When there is not agreement and if the semantics of the terms are not clearly specified, there will be a lot of guesswork, thus undermining the reliability of applications. [...] Inspired by this analysis we conjecture that the following is a law of the semantic web: “The more agreement there is, the less it is necessary to have machine processable semantics.”

#semantics  #Web-sites  #Semantic-Web 
 If it uses rdf then it's sw

Because RDF (Resource Description Framework) [W3C 1999] is hailed by the W3C as a Semantic Web language, some people seem to have the view that if an application uses RDF, then it is a Semantic Web application. This is reminiscent of the “If it is programmed in Lisp or Prolog, then it must be AI” sentiment that was sometimes evident in the early days of Artificial Intelligence. There is also confusion about what constitutes a legitimate Semantic Web application.

#web-applications  #RDF  #applications 
 In the simplest case, the...

In the simplest case, the semantics are implicit only. Meaning is conveyed based on a shared understanding derived from human consensus. A common example of this case is the typical use of XML tags, such as price, address, or delivery date. Nowhere in an XML document, or DTD or Schema, does it say what these tags mean [Cover 98]. However, if there is an implicit shared consensus about what the tags mean, then people can hardwire this implicit semantics into web application programs, using screen-scrapers and wrappers. [...] The disadvantage of implicit semantics is that they are rife with ambiguity. People often do disagree about the meaning of a term. For example, prices come in different currencies and they may or may not include various taxes or shipping costs. The removal of ambiguity is the major motivation for the use of specialized language used in legal contracts. The costs of identifying and removing ambiguity are very high.

#ambiguity  #meaning  #people  #semantics 
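The price ambiguity above can be sketched directly (sites and values invented): two sources both expose a `price` field, but one implicitly includes tax and one does not, and nothing machine-readable says which. A scraper hardwiring one assumption is silently wrong on the other site.

```python
# Two hypothetical sites using the same tag with different implicit meanings.
site_a = {"price": 100.0}   # implicitly tax-inclusive
site_b = {"price": 100.0}   # implicitly tax-exclusive (20% tax still due)

def total_cost_scraper(page):
    # Hardwired assumption: "price" always means the final, tax-inclusive cost.
    return page["price"]

# What a human reading each site's fine print would compute:
true_total_a = 100.0
true_total_b = 120.0

print(total_cost_scraper(site_a) == true_total_a)  # assumption happens to hold
print(total_cost_scraper(site_b) == true_total_b)  # silently wrong
```

Nothing in the data distinguishes the two cases; only the implicit shared consensus (or its absence) does, which is the quote's point about ambiguity.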
 Publicly declared concepts in the semantic web

The assumption that there will be terms whose meaning is publicly declared and thus sharable is critical to making the Semantic Web work. Although we brought up this issue in the context of machine processible semantics, it is equally important when the semantics are hardwired by the human. For example, consider the Dublin Core Metadata Element Set (DCMES) [Weible & Miller 2000], a set of 15 terms for describing resources. The elements include such things as title, subject and date and are designed to facilitate search across different subject areas. The meaning for these elements is defined in English, not a formal language. Nevertheless, if this meaning is hardwired into a Web application, that application can make use of Web content that is marked up and points to the Dublin Core elements.

#meaning  #semantics  #formal-language  #web-content 
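A minimal sketch of that hardwiring (the records are invented; `title`, `subject`, and `date` are three of the real DCMES elements): the English-defined meanings live in the programmer's head and in this code, not in anything the machine reads, yet the application still works across resources that share the elements.

```python
# Hypothetical resource descriptions marked up with Dublin Core elements.
records = [
    {"title": "Ontology 101", "subject": "ontologies", "date": "2001"},
    {"title": "RDF Primer", "subject": "metadata", "date": "2002"},
]

def search_by_subject(recs, wanted):
    # Hardwired: the programmer "knows" the DC element "subject" is the
    # topic field, because the DCMES document says so in English.
    return [r["title"] for r in recs if r["subject"] == wanted]

print(search_by_subject(records, "ontologies"))
```

The public declaration is what makes this portable: any application hardwiring the same English definitions can consume the same records.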
 The evolving web

Various Perspectives for Characterising the Web:
- Locating Resources: The way people find things on the Web is evolving from simple free text and keyword search to more sophisticated semantic techniques both for search and navigation.
- Users: Web resources are evolving from being primarily intended for human consumption to being intended for use both by humans and machines.
- Web Tasks and Services: The Web is evolving from being primarily a place to find things to being a place to do things as well [Smith 2001].

#Web  #things  #web-resources  #keyword-search 
 The ability of the agent...

The ability of the agent to infer something about the meaning of fuel-pump depends on the existence of a formal semantics for an ontology language such as DAML+OIL. The language semantics also allow the agent to infer the meaning of complex expressions built up using language primitives. The semantics of the language are not machine processible; they are written for humans only. People use them to write inference engines or other software to correctly interpret and manipulate expressions in the language. Note that today’s spectacularly impressive search engines by and large do not use formal semantics approaches. Overall it remains an unproven conjecture that such approaches will enhance search capabilities, or have significant impact anywhere else on the Web.

#language  #semantics  #meaning  #engine 
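The division of labor above can be sketched as follows (the fuel-pump axioms are invented; the subsumption rule is a simplification of what languages like DAML+OIL provide): humans hardwire the *language* semantics, here the reflexivity and transitivity of subclass statements, into the engine once; the *content* is then machine processible.

```python
# Hypothetical content axioms, published by some ontology author.
subclass_axioms = {
    ("fuel-pump", "pump"),
    ("pump", "device"),
}

def is_subclass(sub, sup, axioms):
    # Hardwired language semantics: subClassOf is reflexive and transitive.
    if sub == sup or (sub, sup) in axioms:
        return True
    return any(is_subclass(mid, sup, axioms)
               for (s, mid) in axioms if s == sub)

# An agent that has never before seen "fuel-pump" can still infer
# that it denotes a kind of device.
print(is_subclass("fuel-pump", "device", subclass_axioms))  # True
```

Swapping in different content axioms requires no change to the engine, which is exactly why publishing content in a language with fixed, human-specified semantics scales better than hardwiring each term.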
 Problem of kr languages heterogeneity

Different ontology languages are often based on different underlying paradigms (e.g., description logic, first-order logic, frame-based representation, taxonomy, semantic net, and thesaurus). Some ontology languages are very expressive and some are not. Some ontology languages have a formally defined semantics and some do not. Some ontology languages have inference support and some do not. If we are to allow all these different languages, then we are faced with the very challenging problem of translating between them.

#language  #Ontology-Language  #different-languages  #semantics 
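One small corner of the translation problem can be sketched (the frames below are invented): flattening a frame-based representation into triples is mechanical, but it is the easy direction; expressiveness mismatches such as defaults, cardinality constraints, and procedural attachments are where translation between paradigms actually breaks down.

```python
# Hypothetical frame-based representation: a frame with named slots.
frames = {
    "FuelPump": {
        "is-a": "Pump",
        "moves": "Fuel",
    }
}

def frames_to_triples(fs):
    # Each (frame, slot, filler) becomes one flat triple.
    triples = []
    for frame, slots in fs.items():
        for slot, filler in slots.items():
            triples.append((frame, slot, filler))
    return triples

print(frames_to_triples(frames))
```

A frame default ("pumps move liquid unless stated otherwise") has no direct triple counterpart, so any such translator must either lose information or invent encoding conventions that other tools will not share.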
 It will be layered, extensible,...

It will be layered, extensible, and composable. A major part of this will entail representing and reasoning with semantic metadata, and/or providing semantic markup in the information resources. Fundamental to the semantic infrastructure are ontologies, knowledge bases, and agents along with inference, proof, and sophisticated semantic querying capability. [...] In order to carry out their required tasks, intelligent agents must communicate and understand meaning. They must advertise their capabilities, and recognize the capabilities of other agents. They must locate meaningful information resources on the Web and combine them in meaningful ways to perform tasks. They need to recognize, interpret, and respond to communication acts from other agents.

#information-resources  #capability  #task  #semantic-metadata 
 Because XML is widely used,...

Because XML is widely used, DTDs and XML Schema are convenient ways to express standards. There are many such standards emerging in a variety of sectors. For example, the news and magazine publishing industries have developed NewsML [ITPC 2000] and PRISM (Publishing Requirements for Industry Standard Metadata) [PRISM 2001]. However, XML is merely convenient; it is not necessary. The DTDs and XML Schema say nothing about the semantics of the terms. Semantic agreements are implicit or informally captured in text documentation.

#XML-schema  #XML  #schema 
 Real world semantics

Real world semantics are concerned with the “mapping of objects in the model or computational world onto the real world ... [and] issues that involve human interpretation, or meaning and use of data or information.” [...] The real world semantics correspond to the concepts in the real world that the items or sets of items refer to. [...] We believe that the idea of real world semantics, as described above captures the essence of the main use of the term “semantics” in a Semantic Web context. However, it is only loosely defined. The ideas of axiomatic and model-theoretic semantics are being used to make the idea of real world semantics for the Semantic Web more concrete.

#real-world-semantics  #world-semantics  #semantics  #real-world 
 Problem of incompatible conceptualizations

Even with a uniform language, there may still be incompatible assumptions in the conceptualization. For example, in [Hayes 96] it is shown that two representations for time, one based on time intervals and another based on time points, are fundamentally incompatible. That is, an agent whose time ontology is based on time points can never incorporate the axioms of another agent whose ontology for time is based on time intervals. From a logic perspective, the two representations are like oil and water.

#representation  #language  #assumption  #conceptualization 
 Problem of term heterogeneity and different modeling styles

Even if we assume a shared language and compatible conceptualizations, it is still possible, indeed likely, that different people will build different ontologies for the same domain. Two different terms may have the same meaning and the same term may have two different meanings. The same concept may be modeled at different levels of detail. A given idea may be modeled using different primitives in the language. For example, is the idea of being red modeled by having the attribute color with value red, or is it modeled as a class called something like RedThings? Or is it both, where either (1) they are independent or (2) RedThings is a derived class defined in terms of the attribute color and the value red? Even if the exact same language is used, and if there is substantial similarity in the underlying conceptualizations and assumptions, the inference required to determine whether two terms actually mean the same thing is intractable at best, and may be impossible. [...] we spoke of the intended vs. actual models of a logical theory. Respectively, these correspond to what the author of the theory wanted to represent, vs. what they actually did represent. [...] For a computer to automatically determine the intended meaning of a given term in an ontology is an impossible task, in principle. This would require seeing into the mind of the author. Therefore, a computer cannot determine whether the intended meaning of two terms is the same. This is analogous to formal specifications for software. The specification is what the author actually said he or she wanted the program to do. It may be possible to verify that a computer program conforms to this specification, but it will never be possible to verify that a program does what the author actually wanted it to do.

#specification  #ontology  #computer  #language  #formal-specification 