Search engine fraud, deception and downright tomfoolery may not yet be a water-cooler conversation topic around the start-up office, but it should be. A spate of pieces published in the last couple of weeks points to a looming problem about to bite the online world right smack in its SEO.
No less than the New York Times published a piece on the subject on February 13; Danny Sullivan published one of his own on February 12.
Add to this conversation the emphasis on valid searches by Blekko, the acclaimed start-up competitor to Google, and one gets the sense that we may finally, as an online community, get serious about link spam, doorway pages, AdSense containers, content harvesting and all the other annoying, if not dangerous, snake-oil purveyors lurking around the dark edges of the ether world.
After 15 years of creating websites, teaching all things Internet and spending a great deal of my time on online content for clients, for my own fun and business, and even more for nonprofit organizations like the Naples Free Net (Florida), I remain frustrated that Google’s brilliant minds haven’t found an algorithm to weed out the shysters.
The ultimate conclusion of the New York Times February 13th piece is that Google has an inflated index of websites.
I Search Google. So Does Bing
No one knows how many spam sites, link-farm directories and content farms are out there whose sole purpose is to provide containers for AdSense advertising (Google) and to inflate link rank. Businesses that do not employ these dubious but tolerated practices are driven to the paid Google Ads option instead. Something must give.
Then it hit me:
Google is trying to solve a problem with computers that can’t be solved by computers. No matter how hard we try to teach it, a computer doesn’t know a good site from a bad site; it doesn’t recognize a content-farm site, an unrelated link or a mere AdSense container. It has no judgment. If judging good sites from bad ones were a solvable computer problem, Google’s brilliant minds would have solved it years ago.
Google’s e-mail program deals brilliantly with spam. To do so, it made itself part of a bigger community of other e-mail providers, Internet providers and system administrators. Last but not least, thousands of users vigorously help flag spam.
For a search-index cleanup, Google can’t expect assistance from other providers. It’s too far ahead of them. Everyone else looks to Google for results. (As NPR’s Wait, Wait put it: I search Google. So does Bing.)
Google was born in a Web 1.0 world and has yet to master the Web 2.0 and social Internet world. Google might not be able to solve this without massive help from Internet users. If so, it becomes very obvious that Google will need to go open source with its database. (This would actually leave Google in a stronger position in the search engine market.)
Yeah, we know the Secret Google Sauce is its bread and butter – or sauce. But Google is about to slide off the SEO table on that sauce if it doesn’t find a way to end SEAs (Search Engine Annoyances).
Just as the mortgage market accumulated an unhealthy set of sub-prime loans, which nearly brought down the entire world economy, an SEO market filled with garbage might bring down the whole Google index, or at least its credibility.
The New York Times piece is the clearest sign yet that the Google search-index bubble is about to burst, and it could take down the only company with a proven track record of handling the task. The Times brought the issue into the mainstream, and it could become Google’s biggest PR nightmare, one it isn’t even aware of.
Google should start crowd-sourcing searchers and equip site owners with easy-to-use tools to help them separate the wheat from the chaff. Display a prominent “Spam” button with each search result so users can easily identify the culprits. Make sure the clicks come from humans (apply so-called Turing tests) and employ the tried-and-true click-fraud detection software already in place for the AdWords engine. Google knows how to do that.
An army of volunteers with trusted online identities could form a review cohort to settle disputes or conflicting data. The human factor seems critical for a trusted search source; Google cannot handle this alone. “GoogleWiki Whacking Spammers” could be the slogan.
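The mechanics described above, human-verified spam reports plus a weighted voice for trusted reviewers, can be sketched in a few lines. This is a minimal illustration only; the class name, weights and threshold are hypothetical, not anything Google has published.

```python
# Minimal sketch of crowd-sourced spam flagging, as proposed above.
# All names, weights and thresholds are hypothetical illustrations.
from collections import defaultdict

class SpamVoteTally:
    """Aggregates human spam reports per URL, giving vetted reviewers extra weight."""

    def __init__(self, flag_threshold=5.0, trusted_weight=3.0):
        self.flag_threshold = flag_threshold   # weighted score needed to flag a URL
        self.trusted_weight = trusted_weight   # multiplier for trusted reviewers
        self.scores = defaultdict(float)

    def report(self, url, passed_turing_test, is_trusted_reviewer=False):
        """Count a spam report only if the reporter was verified as human."""
        if not passed_turing_test:
            return False  # discard likely-automated clicks, as click-fraud filters would
        weight = self.trusted_weight if is_trusted_reviewer else 1.0
        self.scores[url] += weight
        return True

    def flagged(self):
        """Return URLs whose weighted report score crossed the threshold."""
        return [url for url, score in self.scores.items()
                if score >= self.flag_threshold]
```

The point of the weighting is the review cohort: a handful of trusted identities can settle a dispute that raw vote counts cannot, while the Turing-test gate keeps bot-driven report floods out of the tally entirely.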
This would, of course, be a risky proposition for Google. Advertising revenue might drop once the playing field is leveled again and machine-created content, link farms and link exchanges are eliminated. It would also mean that efforts to build a competitor to Google’s search index could be stifled.
Someone harvesting Google’s search-engine results would be an obvious transgressor, and search results would look entirely different with a People’s Directory in place. The technology, the infrastructure and the brilliant minds behind the algorithm that kept Google ahead of the search-engine curve for so long would still be in place. And Matt Cutts’s work running the anti-spam department with its few hundred employees would get much easier.
Google has a choice: wait until the inflated page-database bubble bursts and follow the fate of the sub-prime mortgage banks, or lead the way into the second generation of the one and only online directory. Trust your people, Google!