Even if we assume a shared language and compatible conceptualizations, it is still possible, indeed likely, that different people will build different ontologies for the same domain. Two different terms may have the same meaning, and the same term may have two different meanings. The same concept may be modeled at different levels of detail. A given idea may be modeled using different primitives in the language. For example, is the idea of being red modeled by having the attribute color with value red, or is it modeled as a class called something like RedThings? Or is it both, where either (1) the two are independent or (2) RedThings is a derived class defined in terms of the attribute color and the value red?
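To make the alternatives concrete, here is a minimal sketch in plain Python; the names Thing, RedThing, and is_red_thing are hypothetical and not drawn from any particular ontology language, where the same three choices would surface as an attribute assertion, a primitive class, and a defined class.

```python
from dataclasses import dataclass
from typing import Optional

# Alternative 1: "red" is the value of a color attribute.
@dataclass
class Thing:
    color: Optional[str] = None

apple_a = Thing(color="red")

# Alternative 2: "red" is a class in its own right; membership is asserted
# directly and has no formal connection to any color attribute.
class RedThing(Thing):
    pass

apple_b = RedThing()

# Alternative 3: red things form a derived class, defined in terms of the
# attribute color and the value red; membership is computed, not asserted.
def is_red_thing(thing: Thing) -> bool:
    return thing.color == "red"
```

All three describe the same idea, yet a term-by-term comparison of the resulting vocabularies would not reveal that.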
Even if the exact same language is used, and there is substantial similarity in the underlying conceptualizations and assumptions, the inference required to determine whether two terms actually mean the same thing is intractable at best, and may be impossible.
[...]
we spoke of the intended vs. actual models of a logical theory. These correspond, respectively, to what the author of the theory wanted to represent and to what he or she actually did represent.
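As a small, hypothetical illustration of that gap, consider a theory whose only axiom says that every person has a parent who is a person:

$$\forall x\,\bigl(\mathit{Person}(x) \rightarrow \exists y\,(\mathit{Person}(y) \wedge \mathit{hasParent}(x, y))\bigr)$$

The author's intended models are ancestry structures in which each person's parent is a distinct individual, but the axiom is equally satisfied by an interpretation containing a single person who is his or her own parent. That unintended interpretation is nevertheless one of the theory's actual models, because nothing the author actually wrote rules it out.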
[...]
For a computer to automatically determine the intended meaning of a given term in an ontology is impossible in principle: it would require seeing into the mind of the author. Therefore, a computer cannot determine whether two terms have the same intended meaning. This is analogous to formal specifications for software. The specification is what the author actually said he or she wanted the program to do. It may be possible to verify that a computer program conforms to this specification, but it will never be possible to verify that the program does what the author actually wanted it to do.
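A minimal, hypothetical sketch of that gap in Python (the specification and function names are invented for illustration): the written specification only requires that the output be ordered, whereas the author actually wanted a sorted copy of the input.

```python
from typing import List

def spec_holds(result: List[int]) -> bool:
    """The specification as actually written: the output is in non-decreasing order."""
    return all(a <= b for a, b in zip(result, result[1:]))

def sort_as_intended(xs: List[int]) -> List[int]:
    """What the author wanted: an ordered copy of the input."""
    return sorted(xs)

def sort_to_the_letter(xs: List[int]) -> List[int]:
    """Also verifiably meets the written specification, simply by discarding the input."""
    return []

assert spec_holds(sort_as_intended([3, 1, 2]))    # conforms, and is what was meant
assert spec_holds(sort_to_the_letter([3, 1, 2]))  # conforms, but is not what was meant
```

Verification can confirm that either implementation conforms to spec_holds, but whether spec_holds itself captures what the author wanted lies outside the formal system, just as the intended meaning of an ontology term lies outside the ontology.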