Basics of Web 3.0?

Hannah Eaves, October 31, 2008


What does "jaguar" mean to you? If you’re a fan of Bush’s tax cuts, it’s probably a brand of car. If you’re a Mac user, it might be an operating system, and if you’re hearing strange noises on the Mexico-Guatemala border, it might be a very big cat. But if you’re interested in the future of online technology, it is the evergreen example used to explain what’s called The Semantic Web.

Semantic web proponents argue several things pretty loudly, and one of them is that computers should be smart enough to understand the deep meaning of words from their context, just as humans understand sentences. A computer should understand that when you ask "Are jaguars extinct in Central America?" you are looking for an answer, and you mean the animal, not the car. It should also understand this automatically, without a user having to manually tag the content where it lives (page or video) with "jaguar" and "mammal" and whatever hundred other keywords might make sense. Instead, it should be able to use context, combined with the vast store of information already on the web and with human input, to make its decision. This implies a level of artificial intelligence whereby computers are able to infer meaning from language itself: from the structure of a sentence, from the sentences on a page. Not "how frequently does this word appear?" but "I get what you’re saying."
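To make that distinction concrete, here is a toy sketch in the spirit of the classic Lesk algorithm: disambiguate "jaguar" by comparing the words around it against a short description of each sense. The glosses are invented for illustration, and real semantic engines use far richer models than simple word overlap.

```python
import re

# Made-up one-line glosses for each sense of "jaguar"; a real system
# would pull these from a knowledge base, not a hard-coded dict.
SENSES = {
    "animal": "large spotted wild cat native to central and south america",
    "car": "british brand of luxury car and its manufacturer",
    "operating system": "version of the mac os operating system from apple",
}

def disambiguate(sentence: str) -> str:
    """Pick the sense whose gloss shares the most words with the sentence."""
    context = set(re.findall(r"[a-z]+", sentence.lower()))
    return max(SENSES, key=lambda s: len(context & set(SENSES[s].split())))

print(disambiguate("Are jaguars extinct in Central America?"))  # -> animal
```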

Right now, when you type something into a search engine, you get a list of results based on the frequency of key words and an algorithm that judges the relative value of the pages it delivers. But little is done to understand the meaning of what you’re looking for: what class it falls into, what categories it lives in, what other things it might be related to. On top of this, if you’re Morgan Spurlock and Super Size Me is playing on a page, basic keyword matching might mean that McDonald’s ads appear around your film.
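For contrast, here is roughly what that keyword counting amounts to at its most naive; the pages and query are invented for illustration, and real engines layer far more on top.

```python
import re
from collections import Counter

# Two invented pages: one about the cat, one about the car.
PAGES = {
    "big-cat-page": "the jaguar is a big cat found from mexico down to argentina",
    "luxury-car-page": "the new jaguar is a luxury car with a powerful engine",
}

def rank(query: str) -> list:
    """Rank pages purely by how often the query's words appear in them."""
    terms = re.findall(r"[a-z]+", query.lower())
    scores = {}
    for name, text in PAGES.items():
        counts = Counter(text.split())
        scores[name] = sum(counts[t] for t in terms)
    return sorted(scores.items(), key=lambda item: -item[1])

# Both pages score 1 on "jaguar": frequency alone can't tell cat from car.
print(rank("jaguar central america"))
```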

Once the computer is able to understand meaning, it should also be able to do something useful with it, drawing together information and resources from a variety of different sites and tools, all completely relevant to the concept you’re looking at. It might be Creative Commons-licensed photos of jaguars, an interactive map with pinpoints for recent sightings, and a draggable cross-referencing system where you can compare the average lifespan of a jaguar in Mexico with one in Guatemala, then layer in, say, land use, and maybe add "over time" for good measure. Then of course you’d also see a selection of videos cued to relevant sections about jaguar extinction, and ways to donate to protect the jaguar. All active, all relevant, and all automatically generated on some level.

Watch this video for a nice example of the holes in our current search system. It uses the simple question "How tall is the Eiffel Tower?" to explain a little of the logic behind finding meaning in words. But also note that until users have helped to make this particular tool more robust (check out the end of the video), it can’t yet answer the basic question "Are jaguars extinct in Central America?"



The visionaries in the field of semantic web research argue that the Internet is one massive database that we’re not really using. Without understanding the language of videos and pages (and people’s own behavior, judgments, likes, and dislikes), we are neglecting a major resource. But there’s also a feeling that more data portability is essential, which is a different side of Web 3.0. For example, I should be able to drop my bank statement onto my calendar, along with my photos, and get a feel for the structure of my life. And on a much scarier level, I should be able to gear my whole browsing experience to my own likes and dislikes, which is where the threat of a new "echo chamber" reality comes in. I suppose IMDb could know that I only want to see gossip about George Clooney and Hugh Laurie, and ads for art house films, docs and movies made before 1945. But CNN might also know that I only want the pro-Obama news, and where would that leave me? Never knowing if there’s a great Indian restaurant around the corner, that’s where, just because I don’t like Indian food.

The issue raised earlier by Super Size Me is not a trivial one. A semantic tagging of the film, based on a transcript, might work out that the entity it’s talking about is McDonald’s and that the issues it raises are health, obesity, ethics, and so on. But there’s little way to grade the tone of the content. Yes, McDonald’s is an entity, but are we seeing a positive or negative portrayal? This is where the power of the human mind comes in, and semantic engineers have been experimenting with exactly that.

If there’s an entity missing, you might be able to add it. If the semantic engine has come up with an incorrect result, you can delete it. It might not take you seriously until a certain number of people do the same thing, or until a moderator double checks, but you can help out with your opinion. The next step is being able to answer prompt questions the machine might have about, say, entities. You might be prompted with a question like "We see McDonald’s is an entity in this film. Would you say this is a positive or negative portrayal of McDonald’s?" Of course, then we get to the idea of trusted users versus first-time users, flagging, and all the other complex questions that have come out of Web 2.0 experiments. We each have our own version of objective reality, but Wikipedia may have taught us that a compromised reality still holds value (now there’s a statement to live by!). This also ties into the question of ontologies (sets of categories, entities and the relationships between them). For instance, the ontology of the film industry is different from that of the Met, which is different from that of the medical industry. Already, semantic tagging tools are trying to bring sets of ontologies into the fold, and this sometimes pushes the boundaries of intellectual property, especially when it comes to Big Pharma.
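To picture how such a feedback loop might be wired up, here is a hedged sketch; the threshold, field names and flow are all invented for illustration, not how any real tagging service works.

```python
from dataclasses import dataclass, field

VOTE_THRESHOLD = 5  # assumed number of agreeing users before a vote "counts"

@dataclass
class EntityTag:
    """One auto-detected entity (e.g. "McDonald's") awaiting human review."""
    name: str
    sentiment_votes: dict = field(
        default_factory=lambda: {"positive": 0, "negative": 0}
    )
    delete_votes: int = 0
    moderator_confirmed_delete: bool = False

    def vote_sentiment(self, judgment: str) -> None:
        self.sentiment_votes[judgment] += 1

    def vote_delete(self) -> None:
        self.delete_votes += 1

    def should_remove(self) -> bool:
        # The engine doesn't take one user's word for it: removal needs
        # either crowd consensus or a moderator's double check.
        return self.moderator_confirmed_delete or self.delete_votes >= VOTE_THRESHOLD

tag = EntityTag("McDonald's")
tag.vote_sentiment("negative")
tag.vote_delete()
print(tag.should_remove())  # False: one vote isn't enough yet
```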

To see this in action, go here (http://semanticproxy.com/demo.html) and paste in this URL from SF360: http://www.sf360.org/features/christmas-on-mars-on-halloween. Check out the entities you get; they’re not just words but things and concepts: movies, people and albums. Calais has an open API, which means that under a certain number of queries, people can plug it into their own sites for free. Then they have to think about whether to incorporate it into the underlying language of their site (which I won’t go into here, but if you’re interested, look up RDF, OWL and microformats).
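If you’re curious what that "underlying language" looks like, here is a minimal sketch of RDF statements built with the Python rdflib library. The example.org vocabulary is a placeholder invented for illustration, not a published ontology.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Placeholder vocabulary; a real site would reuse a published ontology.
EX = Namespace("http://example.org/terms/")

g = Graph()
jaguar = URIRef("http://example.org/things/jaguar")

# RDF boils statements down to subject-predicate-object "triples",
# which is what lets a machine know the animal from the car.
g.add((jaguar, RDF.type, EX.Mammal))
g.add((jaguar, RDFS.label, Literal("jaguar")))
g.add((jaguar, EX.foundIn, Literal("Central America")))

print(g.serialize(format="turtle"))
```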

The great thing about this field is that it’s still full of tinkerers, too, working at a high level. Calais doesn’t go deep enough? Simply mash it up with Wikipedia or Freebase to give your entities legs and associations. This is still the bleeding edge.
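As a taste of what that mashing-up might look like, here is a rough sketch that takes an entity name from a tagger and asks Wikipedia’s public search API for a matching article. The endpoint and parameters reflect that API as I understand it, so verify before building on it.

```python
import json
import urllib.parse
import urllib.request

def enrich(entity: str) -> dict:
    """Attach a Wikipedia link to a bare entity name, if one exists."""
    params = urllib.parse.urlencode({
        "action": "opensearch",  # Wikipedia's simple search endpoint
        "search": entity,
        "limit": 1,
        "format": "json",
    })
    url = "https://en.wikipedia.org/w/api.php?" + params
    with urllib.request.urlopen(url) as resp:
        # opensearch returns [query, [titles], [descriptions], [urls]]
        _query, titles, _descriptions, links = json.load(resp)
    return {
        "entity": entity,
        "wikipedia_title": titles[0] if titles else None,
        "wikipedia_url": links[0] if links else None,
    }

print(enrich("jaguar"))
```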

Some of you are asking: Web 3.0? Already? I don’t even understand Web 2.0 yet. Aren’t these all arbitrary constructs? The idea of the semantic web has been around for a long time, but because the structure of human language is so difficult to parse, it has faced criticism for just as long over whether it’s actually possible. And now its proponents are dubbing themselves Web 3.0, probably hoping that it will become a self-fulfilling prophecy.

Semantic tagging and the tracking of user behavior for the future implementation of an "intelligent web" were the two big takeaways from this month’s Web 3.0 conference in Santa Clara. It’s worth noting that there were no content creators there, only technologists. And even the technologists seemed nervous about how accessible these tools are to ordinary web developers. But the companies showing off their tools now include eBay, Yahoo! and other big-name brands. Also in attendance were advertising agencies, who can almost taste the blood of targeted advertising clients.

But what does this mean for ordinary content creators (read: filmmakers), if anything? The fact that Thomson Reuters, one of the world’s largest news agencies, recently acquired ClearForest, the company behind the Calais semantic tagging tool, is a good indication of things to come. Slowly but surely this technology draws nearer to the real world. Not a week goes by without an announcement about someone working on speech-to-text capabilities, still a distant dream. Once auto-transcription is in place, semantic parsing of that text is the next step. But those with archived transcriptions won’t have to wait. The biggest market for this capability would be targeted advertising, in a scenario where ads refresh when the emphasis of, say, an extreme sports video shifts from skateboarding to mountain biking.
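Here is a speculative sketch of that ad-refresh scenario: tag each timecoded transcript segment with a topic by keyword overlap, and swap the ad whenever the topic changes. Every keyword, segment and ad name below is invented for illustration.

```python
# Invented topic keywords and ads for the extreme-sports example.
TOPIC_KEYWORDS = {
    "skateboarding": {"skateboard", "ollie", "halfpipe", "kickflip"},
    "mountain biking": {"bike", "trail", "downhill", "singletrack"},
}
ADS = {"skateboarding": "skate shop ad", "mountain biking": "bike shop ad"}

# A toy auto-transcription: (timecode in seconds, text).
transcript = [
    (12, "he lands the kickflip and rolls into the halfpipe"),
    (95, "now the riders hit the downhill trail at full speed"),
]

current_topic = None
for timecode, text in transcript:
    words = set(text.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords and topic != current_topic:
            current_topic = topic
            print(f"{timecode}s: topic is now {topic}; refresh to {ADS[topic]}")
```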

For filmmakers it may mean being able to build a larger set of dedicated fans who are organically exposed to their work whenever it enters the online world, without having to search for it or read their email. For those trying to make money through their films online, or through other means of new distribution, it might lead to more revenue from targeted ads around your content. You might also find your content living in multiple environments, where the level of discovery might be higher but might be driven by the relevancy of specific timecoded scenes. For social documentary filmmakers it might bring easier integration of "action items," deeper knowledge about your issue and meaningful contextualization. It might even make it easier to mobilize people who are all interested in a specific issue around the body of films covering that issue. It may also be that you’ll connect with one high-level influencer (like a blogger) whose sharing of your work reaches into the pages and feeds of people who never even visit that blogger’s site. But right now it might just cause confusion. It’s a massive topic, so here are some links to get started:

Intro to Semantic Web video: http://www.youtube.com/watch?v=OGg8A2zfWKg.

Calais (skip this one and look at the links below for Calais in action):
http://www.opencalais.com/.

Semantic Proxy—Calais in action. Enter a URL and it will deliver a set of results. For instance, enter http://www.sf360.org/features/christmas-on-mars-on-halloween, and see all the films, people, music, cities, albums, industry terms, etc, returned: http://semanticproxy.com/demo.html.

The Powerhouse Museum in Sydney is using Calais to tag its collection. Check out this entry for Calais in action. All the Calais info is in the right-hand column. Note how users can contribute: http://www.powerhousemuseum.com/collection/database/?irn=353198.

ReadWriteWeb, the absolute BEST source of information in this field on the web: http://www.readwriteweb.com/.

ReadWriteWeb highlights: The Structured Web: http://www.readwriteweb.com/archives/structured_web_primer.php.

Semantic Web Patterns: A Guide to Semantic Technologies: http://www.readwriteweb.com/archives/semantic_web_patterns.php.

Zemanta: A fascinating plug-in with an open API that allows the most basic user to "semantify" their blog posts by detecting semantic concepts in the post and providing a selection of Creative Commons-licensed images, news articles, blog article links, Wikipedia hookups, etc., to amp it up. Check out their demo: http://www.zemanta.com/.

Freebase (a hyped-up semantic, user-editable database that can be mashed up with tools like Calais): http://www.freebase.com/.

Powerset: http://vimeo.com/994819.

Twine: http://www.twine.com/.
