Alas, I find the Web 3.0 arguments clear evidence that their proponents don’t understand Web 2.0 at all. Web 2.0 is not about front-end technologies. It’s precisely about the back end, about meaning and intelligence in the back end.
The real difference between Web 2.0 and the Semantic Web is that the Semantic Web seems to assume we need to add new kinds of markup to data in order to make it more meaningful to computers, while Web 2.0 seeks out areas where the meaning is already encoded, albeit in hidden ways. For example, Google found meaning in link structure (a natural RDF triple); Wesabe is finding it in spending patterns.
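To make the "natural RDF triple" point concrete, here's a minimal sketch (the sites and link graph are invented for illustration): an ordinary hyperlink already encodes a subject-predicate-object triple without any added markup, and simply counting those implicit triples, in the spirit of PageRank, surfaces meaning from structure that was there all along.

```python
from collections import Counter

# Toy link graph: (source, target) pairs -- the web's "hidden" structure.
# Domains are hypothetical examples, not real data.
links = [
    ("blog.example.com", "oreilly.com"),
    ("news.example.com", "oreilly.com"),
    ("spam.example.com", "spam2.example.com"),
    ("oreilly.com", "wesabe.com"),
]

# Each plain hyperlink read as an RDF-style triple: subject, predicate, object.
triples = [(src, "links-to", dst) for src, dst in links]

# A crude in-link count stands in for PageRank's insight: incoming links
# are votes of meaning, and no Semantic Web markup was ever required.
inlinks = Counter(dst for _, _, dst in triples)
print(inlinks.most_common(1))  # → [('oreilly.com', 2)]
```

The point of the sketch is that the triple was never written down by anyone; it falls out of structure people created for entirely different reasons.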
There are sites (geni.com comes to mind) that create narrow-purpose cases where people add structured meaning, and I think we’ll find lots more of these. But the big difference is in how much noise you accept in your meaningful data, and whether you think grammar evolves from data or is imposed upon it. Web 2.0 applications are fundamentally statistical in nature: collective intelligence derived from lots and lots of input at global scale.
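That "grammar evolves from data" claim can be sketched in a few lines (the tags and counts below are invented): many users label the same item with no schema imposed up front, the raw input is noisy, and yet a consensus vocabulary emerges purely from the statistics.

```python
from collections import Counter

# Simulated noisy tags from many users for one photo (hypothetical data):
# real signal, typos, and junk all mixed together.
user_tags = (
    ["sunset"] * 40 + ["beach"] * 35 + ["vacation"] * 20
    + ["sunet"] * 3 + ["xyz"] * 2  # noise we simply tolerate
)

counts = Counter(user_tags)

# No grammar is imposed; tags that clear a statistical threshold
# *become* the grammar. Noise below the threshold just washes out.
threshold = 0.05 * len(user_tags)
consensus = sorted(tag for tag, n in counts.items() if n >= threshold)
print(consensus)  # → ['beach', 'sunset', 'vacation']
```

Note that nothing filtered the typo "sunet" explicitly; at global scale, scale itself is the filter.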