The False Truths of Google Auto-Complete

The UN campaign on women’s rights is provocative and deserves all the recognition it gets. It features a photo in which a Google search box covers a woman’s mouth; the search phrase is “women should not…”, with auto-complete suggestions like “vote” and “work”. The impression is that many people search for these phrases and that women’s rights are far from universally accepted. While Netopia agrees with the cause, the logic seems a bit flawed. As the BBC’s Tom Chatfield points out, these auto-complete suggestions are not the same for all users: location, previous search patterns, popular topics and many other factors go into the invisible algorithms that produce them. Users are well advised not to take them literally.

Chatfield also points to the problem that auto-complete suggestions can limit our searches, rather than broaden them. In my experience, there is yet another challenge: they may suggest information – true or false – that we’d rather not have. I once googled the name of a writer and auto-complete suggested he had been in prison for domestic violence (ironic, considering the topic of the UN campaign). I would much rather not have known about his potential wrongdoing, but now that is all I can think about when I read his stories. What has been seen cannot be unseen. Sure, you don’t always find what you’re looking for when you surf the net, but you always find something interesting. Except sometimes you would much rather not have found out at all.

This example speaks both to the information-centrism of digital technology and to the dethronement of truth. The predominant ideology in technology design is that more information is always better (and that enough information could save the world). Personal DNA tests are a good example: information-centrism says it is better to know than not to know, but reality turns out to be much more complex – for instance, “false positives” are easily misunderstood by us laymen.
The dethronement of truth, of course, lies in the fact that I will never know whether this writer actually hit his wife. Finding out for real would require far more effort than I am prepared to invest (such as obtaining the public court records). Conclusion: too much information is more than enough, not least when it is unreliable or hard to understand. As always, Netopia’s suggestion would be to make Google’s algorithms transparent, so that we could make informed decisions about their reliability. Until then, I guess we’ll just have to have faith in the machine.