Yikes: ‘Web 3.0’ Term Used w/o Irony

This John Markoff piece in Sunday’s NY Times (reg. req’d) resurrects the elusive goal of the “semantic Web” with a bit of a twist, unselfconsciously calling it “Web 3.0.” Nonetheless the article is interesting, suggesting how users (without knowing it) and a range of companies (e.g., Radar Networks) are actively working to make it easier to retrieve more relevant information online (local is the example Markoff uses):

[T]he Holy Grail for developers of the semantic Web is to build a system that can give a reasonable and complete response to a simple question like: “I’m looking for a warm place to vacation and I have a budget of $3,000. Oh, and I have an 11-year-old child.”

This is the Internet as a globally accessible database that regular people can query in an intuitive way. In a certain sense it all comes down to two things: helping machines get more information out and in front of people, and query disambiguation — understanding user intent.

But of course nothing ever goes exactly as planned, and this concept has been around for a long time. Clearly information retrieval will get better and more complete over time. When it comes to local, there’s still a tremendous amount of data that needs to be “uploaded” and organized before Markoff’s question can be fully answered — though an experienced human can answer it now.

One of the key challenges for the next generation of Internet services and companies — whatever it and they look like — is dealing with the paradox of choice. There’s already too much information online, yet also not enough (as with local). What I need is not 100 or 1,000 new choices, but 10–15 (at most) that are right for me or otherwise meet my criteria.

2 Responses to “Yikes: ‘Web 3.0’ Term Used w/o Irony”

  1. Vinny Lingham Says:

    Looks like Riya then will be the next big acquisition target!

  2. Richard Tripp Says:

    I recently read a post on how during the 80s and 90s computers made people smarter, and now people are making computers smarter.

    Sounds right to me, and I think there’s quite a bit of mileage to be gained yet out of user-contribution systems and the content they’re creating.

    Rather than build technologies that attempt to associate otherwise disassociated data to provide “right for me” info, why not build better user interfaces into the extraordinary social phenomenon of platforms like Wikipedia? I read yesterday that Wikipedia’s 1.4 million articles were largely generated by just 2,000 writers. I’ve found that I now search Google with the full expectation that I’ll end up at Wikipedia more than half the time.

    Wikipedia and other UGC platforms benefit from human algorithms that intelligently self-regulate and miraculously stabilize once they reach a point that satisfies all participants (both content creators and consumers). Computers won’t catch up anytime soon with the brainpower of a single human being, much less a group of 2,000.

    Why not focus instead on building better platforms for the UGC experience?

Comments are closed.
