Posted on October 29, 2013
Since the term Web 2.0 was first coined, people have been waiting for Web 3.0. The lingering question has been what form it will take.
Some say it is the mobile computing revolution: that everyone in the world will eventually walk around with a way to tap into the Internet right in their pocket.
Others think it will be the advent of the geospatial web, where everyone's location, whether reported by a portable device, a car, or a laptop, will be used to shape what we see as we browse.
But the true “Third Phase” will probably take on a different form, one that might make fans of Terminator films a little nervous.
For my Master's thesis, I proposed the idea of a website interface that reshaped itself to the preferences of whoever was using it. As visitors clicked on links, the menu would reprioritize its options, pushing preferred links higher and adding others the site thought the user might want. It never quite excluded the remaining options, in case the user wanted something else, but it always offered up the links it judged the visitor was looking for.
The big benefit of such a site was that it needed no personal information about the user. A heuristic equation monitored activity and changed the menu, so a single tracking cookie was all that was necessary.
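The thesis code itself isn't reproduced here, but the core idea can be sketched in a few lines. This is a hypothetical reconstruction, not the original implementation: the class name, the click counter, and the tie-breaking rule are all assumptions made for illustration.

```python
from collections import Counter


class AdaptiveMenu:
    """Reorders menu links by observed click frequency.

    A hypothetical sketch of the thesis idea: preferred links float
    upward, and unclicked links keep their original relative order,
    so no option is ever excluded.
    """

    def __init__(self, links):
        self.links = list(links)   # initial menu order
        self.clicks = Counter()    # per-link click counts (one cookie's worth of state)

    def record_click(self, link):
        self.clicks[link] += 1

    def ordered(self):
        # Sort by descending click count, breaking ties with the
        # link's original position in the menu.
        return sorted(
            self.links,
            key=lambda link: (-self.clicks[link], self.links.index(link)),
        )
```

For example, a visitor who clicks "Blog" twice and "About" once would see `["Blog", "About", "Home", "Contact"]` on a menu that started as `["Home", "About", "Blog", "Contact"]`: the preferred links rise, and the rest stay put.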
The test site built to prove the thesis performed well, but in practice the design came up short: repeat visitors to a web page expect links to be in a certain place, a sort of habitual, de facto branding. Moving them around only served to confuse the user.
But the idea, that a website could act as online assistant and bring someone the information they wanted, not just asked for, was what I took away from the project.
And that was Web 3.0: the online assistant. We do the asking, and it does the work.
Right now, even with the targeted results that Google provides, digging through web searches is a chore. A single keyword can bring a flood of non-sequitur data. With the exponential growth of the information online, we need summaries of the summaries of the searches we run. Even skilled researchers are stymied by the cacophony of keywords that can make a targeted subject hunt useless. Inevitably, the searches need to be pared down again and again with new criteria, shaped to better ferret out our target.
But imagine if a simple heuristic program could do that very same thing, only going through the trial of changing the search parameters in microseconds, all the time knowing what we’re looking for, as well as how to get it.
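The loop such a program would run is simple to sketch. The following is a toy illustration under stated assumptions: `search_fn` stands in for a real search API and `judge_fn` for the heuristic that decides whether the results match what we're looking for and, if not, proposes a narrower query. Both are hypothetical names, not any real service's interface.

```python
def refine_search(search_fn, query, judge_fn, max_rounds=5):
    """Repeatedly narrow a query until the results satisfy a relevance judge.

    search_fn(query) -> list of results.
    judge_fn(query, results) -> None if the results are good enough,
    otherwise a new, narrower query string.
    Both callables are hypothetical stand-ins for illustration.
    """
    results = search_fn(query)
    for _ in range(max_rounds):
        narrower = judge_fn(query, results)
        if narrower is None:
            break  # the judge is satisfied; stop refining
        query = narrower
        results = search_fn(query)
    return query, results
```

A person runs this loop by hand today, rereading result pages between rounds; the point of the sketch is that a program could run the same refinement in microseconds.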
Progressively the program would become smarter, perhaps weaving some of our past searches together to better understand our long-term interests. The inevitable conclusion? While such a program might not be a true AI, it would almost certainly be as useful as one. It would guess, if not think. The Turing Test would remain intact.
Maybe we can save that for Web 4.0.