It’s the #Algorithm, Stupid!

We can’t know for sure; Google won’t reveal its algorithm. The almighty algorithm is the cloud giant’s most important business secret, rivalled perhaps only by its advert auction algorithms. The irony is double: first, that a company that spends millions lobbying against intellectual property relies so heavily on its own (yes, business secrets are also a form of intellectual property), and second, that an organization that cares so much about transparency would set up a completely asymmetrical market for adverts. Of course these are not the only ironies surrounding Google, and granted, the transparency is there to some extent (I found both links above on Google Search). No, this is about the Algorithm. So while we may not know the details of how the PageRank algorithm works, it is understood that it uses the number of incoming links to a webpage to assess that page’s relevance. The more other websites link to a page, the more important it is deemed.
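To make the idea concrete, here is a minimal sketch of the classic published PageRank recurrence, not Google’s actual (secret) implementation: each page’s score is fed by the scores of the pages linking to it, so more and better-ranked incoming links mean a higher rank. The tiny link graph and the damping factor of 0.85 are illustrative assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to.

    Returns a dict of page -> rank score; scores sum to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start with equal ranks
    for _ in range(iterations):
        # Every page keeps a small baseline rank (the "random surfer").
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # A page passes its rank, split equally, to its targets.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical four-page web: every other page links to "a",
# so "a" ends up with the highest rank.
web = {"a": ["b", "c"], "b": ["a"], "c": ["a"], "d": ["a"]}
ranks = pagerank(web)
```

Running this, `"a"` comes out on top simply because it has the most incoming links; that is the quantitative half of the academic recipe the rest of this piece is about.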

According to Andreas Ekström, a Swedish writer who has studied Google (and who is familiar to Netopia readers from this talk), this principle is taken from the academic world. Or, more precisely, from the world of academic papers: the more an academic paper is quoted by other academics, the higher it is ranked in academic indexes. It’s a quantitative approach to research findings. It makes sense that this concept would be familiar to the Google founders; both Page and Brin were post-grad students at Stanford University when they started the company, after all.

So applying the proven recipe of academic knowledge to “organize the world’s information and make it universally accessible and useful” sounds like a good idea, right? Sure, except the academic system has one more part: peer review. The quantitative system of quotes is balanced by a qualitative system of peer review, with other researchers vetting the results before they are published. It is the balance of the two that makes the academic knowledge eco-system work. Nothing gets published before it is confirmed by independent experts, and therefore nothing gets quoted that hasn’t been confirmed. (At least in theory; of course this system has its flaws and shortcomings, like all systems.)

So, when Google set up its search service, which came to dominate the web and become an eco-system unto itself, it actually took only half the proven concept, sacrificing quality for quantity. Quantity over quality is certainly one of the guiding principles of the digital era, and not only Google’s doing, but this reinforced the tendency, and in search, Google’s dominance set a standard for all competing services to follow. Too bad the validation system got lost along the way. Of course, search has been corrupted by many other factors since: paid results, biased search returns, link farms…

How would the internet have developed had Google search considered both aspects? Would it have escaped the pitfall that all facts are created equal? Would the filter bubbles of vaccine sceptics, trolls, racists, chemtrail conspiracy theorists and other tin foil hats have gotten less traction? Would we have better public opinion formation online? We will never know. Maybe someone else would have invented a competing search engine that works just like the Google we know and won out. But does the thought that someone else would have done it anyway ever excuse responsibility? I have no illusion that something like peer review can be added in retrospect. But I think it’s worth asking what responsibilities the global digital intermediaries have, what responsibilities they should have, and how well they live up to them. As it stands, there is room for improvement.
