Author Archive

Facebook’s arbitrary freedom of speech definition

Sunday, November 3rd, 2013

Rob den Bleyker is a cartoonist who was locked out of Facebook for twelve hours after uploading an innocent cartoon that someone found offensive. It is ironic that the bar for acceptable speech in social media is set by the least tolerant, at the same time as these services are celebrated for their contributions to freedom of speech – sometimes even supposedly bringing democracy to dictatorships. It is doubly ironic that a cartoon that would be perfectly normal in any offline publication (or rather, any publication with editorial independence and responsibility) is banned from Facebook, whereas traditional media’s efforts to stop unauthorized distribution of their content are often considered a threat to freedom of speech.

One can only conclude that freedom of speech is a rubber paragraph, just like the user license agreements that govern our relationships with online services. These agreements are of course non-negotiable, so it only makes sense that nobody ever reads them. But for the online space to be a real contribution to humanity, we must do better. It is about time the lessons of traditional media were brought to bear on social media. No more mass distribution without editorial responsibility. No more private censorship. No more arbitrary, obfuscated rulings on acceptable content and random bans. Transparency and independent scrutiny, please.

Facebook’s biometric database

Wednesday, October 30th, 2013

As reported, I found Mikko Hypponen’s contribution the most valuable part of TEDx Brussels. His insight, his one-liners, his unrelenting deconstruction of the pro-surveillance counter-arguments – that is the sort of razor-sharp thinking that cuts through the myths and obfuscation permeating so much of the tech debate. (Will post a link as soon as it’s available.) I can only daydream about how Hypponen would have taken apart NSA boss Keith Alexander’s defense before the House intelligence committee on Tuesday. Hypponen’s conclusion that open-source software is the solution, however, looks like a shortfall compared to the insight of his analysis. To me, that is a nostalgic perspective on the internet, where everyone has the best intentions and the wisdom of the crowds can and will replace traditional modes of production and knowledge. Not that I have anything against open-source software per se – in fact, this web magazine runs on WordPress, which is open source. I only think expectations of it are often out of proportion.

Now, the revelations of how internet services release private information to US security agencies bring up another topic from TEDx Brussels: biometric data. So we share our opinions and movements, and they get picked up by the government. But what about our bodies and all the data about them? Genome scanning on a large scale is still a few years in the future, but fingerprints are everyday stuff, as anyone who has passed immigration at the US border or owns a laptop with a fingerprint scanner will know. And what about our faces? Automatic facial recognition has been a research field since the 1950s, but it is still nowhere near perfection, and the data in systems like crime databases is far from enough to make a useful system. However, there is one huge database of faces connected to names, and we all contributed voluntarily: Facebook. In fact, these days staying off Facebook for privacy reasons is so suspicious it will soon be exclusive to actual lunatics. So Facebook is on its way to a complete index of faces and names, at least in our part of the world. That position demands great care; so far we can only trust Facebook to use it responsibly, as there is no regulation or similar in place to deal with it. Read more on facial recognition in this National Public Radio story. And ask yourself what Mikko Hypponen would say about it.

The Problem with TED

Tuesday, October 29th, 2013

I have a problem with TED. It’s like: you’re so smart, I’m so smart. That’s really West coast! I want to argue about ideas!

The words of my US East Coast intellectual friend echoed in my mind as I joined TEDx Brussels yesterday. The format does not invite argument! There is no opportunity to disagree, or at least to let anyone know you do. Sure, you can voice your disagreements in social media (this post being one example), but that is nowhere near actually interacting on-site. The basis of democracy is different ideas competing, and that invites debate. The opposite is consensus or dogmatism, which is typical of intellectual monocultures like religious sects or dictatorships.

I like TED; I really do. It has inspired my work and thinking over the years, and I have learned a lot. Yesterday, I learned you’re not supposed to listen to music that you like if you want to focus on work! But at the same time, it feels intellectually limiting for the aforementioned reasons. So, dear TED organisers, step up your ambition! Invite disagreement. Invite debate. Ideas are great, and they get even better when pitched to a live audience, but what really makes ideas grow is when they are challenged and must be defended. There is a reason for the academic tradition of having opponents to papers and theses: it improves the thinking. So next time, after the fifteen minutes of TED online video fame, let the camera pan out and let the audience zoom in.

Zeno’s Turtle Beats the Singularity

Monday, October 28th, 2013

Zeno was an ancient Greek philosopher who argued that our senses deceive us and that we must trust our mind alone in trying to understand the world. A true rationalist. Zeno sought to convince the world of this point with a number of paradoxes, the most famous being Achilles and the turtle. Achilles was a legendary warrior famous for his speed (killed when an arrow hit his heel, the only place his magic protection did not cover). In Zeno’s example, Achilles runs a race against a turtle and, for fairness, gives the turtle a head start. Now our eyes tell us that Achilles catches up with the turtle, overtakes it and beats it to the finish line. But our mind, Zeno argues, tells us that is not possible, because in order to catch up with the turtle, Achilles must first cover half the distance between himself and the turtle, then half of the remaining half, then half of that, and half of that, ad infinitum. So logic tells us the turtle wins and reality is an illusion, quod erat demonstrandum.
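(For the record, the standard modern resolution of the paradox is that Zeno’s infinitely many steps form a geometric series with a finite sum: writing d for the catch-up distance, the halves add up to d itself, covered in finite time, which is why Achilles does in fact pass the turtle.)

```latex
% Zeno splits the catch-up distance d into infinitely many halves,
% but the geometric series still sums to the finite distance d:
\frac{d}{2} + \frac{d}{4} + \frac{d}{8} + \cdots
  \;=\; \sum_{n=1}^{\infty} \frac{d}{2^{n}} \;=\; d
```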

The singularity will come in 2045: that inevitable point where technological progress becomes so fast that everything happens at once. Consider the evolution of the telephone. The first telephones appeared in the late 19th century as a development of existing telegraph technology; it took sixty or seventy years before they were in every home, office and street corner. The mobile phone was first developed in the 1950s and ’60s as an extension of the telephone combined with two-way radio technology; it reached the consumer market as car phones in the 1980s and as pocket mobile phones in the 1990s. The smartphone was developed around the millennium and took only a few years after its commercial release to sell in the billions. The uptake of the iPad was faster still. Judging from this, every cycle is shorter; every new technology is picked up more quickly and spreads wider and faster. Extend that curve and there will be a point when everything happens at once. Innovation will be immediate. All information will be present at once. Computing power will be infinite. Everything will be digital, and it will be abundant. We will live forever, through genetic engineering, cell-repairing nanobots in our bloodstream, cybernetic upgrades and/or hard-drive backups of our brains (or will that be cloud-based?). By the best-informed estimates, this is going to happen in the year 2045.

Sure, plenty of non-believers will tell you that not all innovations follow this trajectory, that there is no telling what the future will be like just by extrapolating current trends (hello, Black Swan!), that the blessings of the Singularity will be exclusive to a privileged elite at the expense of the masses, and that this whole thing is nothing but a neo-Christian variation on Biblical themes like ascension, the Holy Ghost and Heaven. But don’t be surprised if it turns out the best critic of the Singularity is a Greek philosopher from 400 BC who reminds us that no matter how fast technology moves, it will never beat that turtle.

Guns as a freedom of speech issue

Friday, October 25th, 2013

This week EU Commissioner Cecilia Malmström expressed concern that disruptive technologies like 3D printers could challenge the gun control regime. It took only a few days for her worries to be confirmed. This morning, UK media reported that the police have seized a “3D gun printing factory”: a 3D printer in a private home in Manchester had been used to make a trigger and clip for what is thought to be a gun. It was easy to see coming, however: in May this year the first shots were fired with a 3D-printed gun called the Liberator (nice touch), developed by the Texas-based not-for-profit organisation Defence Distributed. The US authorities issued a takedown notice, but the blueprints spread quickly across file-sharing networks.

Defence Distributed states its mission as follows:

To defend the civil liberty of popular access to arms as guaranteed by the United States Constitution and affirmed by the United States Supreme Court through facilitating global access to, and the collaborative production of, information and knowledge related to the 3D printing of arms; and to publish and distribute, at no cost to the public, such information and knowledge in promotion of the public interest

…the more guns, the better, in other words. The Defence Distributed manifesto links to a John Milton speech on the liberty of unlicensed printing. So there you have it: guns are now a freedom of speech issue. If you need a case for regulating the digital space, look no further.

Next month, Netopia will publish a report on 3D-printing technology and its potentially disruptive influence on markets, law, and society. Watch this space!

UPDATE: Since this blog entry was posted, the UK police have revisited their conclusions and are no longer sure the items found are actually gun parts.

Merkel’s Needle in Obama’s Haystack

Thursday, October 24th, 2013

PEN International’s Larry Siems makes a strong case describing the problems with ubiquitous surveillance on the so-called Dissident Blog. Talking about the proverbial needle in the haystack, Siems shows that with complete surveillance, every straw may be a needle at one point or another.

This can be read in context with the unfolding controversy over the NSA’s alleged eavesdropping on German chancellor Angela Merkel’s mobile phone calls, which led her to call Obama directly, demanding an explanation (Netopia’s sources did not reveal whether the NSA also listened in on that conversation or not).

This demonstrates that technological imperatives – what can be done must be done – lead to very problematic consequences, be it government monitoring, private data mining or peer-to-peer file sharing. The solution is the same in each case: it must be the norms and conventions of society that decide how technology is applied, not the other way around.

Off with their ’eads, Facebook!

Wednesday, October 23rd, 2013

Growing up in the Eighties, snuff movies were an urban legend in the school yard. Like WWF wrestling, some argued they weren’t real, some lied about having seen them, and some might actually have come across a copy of the banned Faces of Death video tape, which featured alligator accidents and fatal falls from tall buildings. For sure, the myth was greater than the actual content. Today’s school kids don’t have to debate the existence of snuff movies: videos of real killings are readily available both on file-sharing networks and on more legitimate platforms like YouTube. Anyone who likes can watch an actual decapitation at any time. On-demand, real-life depictions of lethal violence.

Facebook has changed its content policies and now joins the ranks of services that accept extremely violent videos uploaded by users. There is only one caveat: the video has to be uploaded with the purpose of criticising the action. For sure, it is the Facebook user who does the actual uploading and who supposedly must convince Facebook that the objective is not schoolyard mythology of the kind I remember from my childhood, but important social critique (presumably of the horrible practice of decapitation).

Looking beyond the ambiguity of verifying the purpose, the philosophical question of whether the end justifies the means (is the video still okay if other users link to it with less morally appropriate comments?) and the practical matter of monitoring these shades of gray, this puts the spotlight on a very challenging topic: curation, editorial responsibility and intermediary privilege. Most internet companies will claim that the technology is neutral and that users are responsible for how they use the service. When asked to forward information to those users – whether it be Google, your broadband carrier or The Pirate Bay – the answer is always that such a thing would be a violation of privacy. This is a vulgar interpretation of the safe harbour principle, or intermediary privilege if you will. But when these services go beyond providing the means and begin to interfere with the content – be it banning certain types of content or shaping data traffic to increase revenue – they enter the domain of media companies: selecting, filtering, curating, taking it upon themselves to decide what is appropriate for distribution. Media, however, is closely regulated across the world in terms of editorial responsibility (and in many other ways, not least regarding adverts), whether in law or through self-regulation (in most cases a combination of the two). The assumption is that you cannot have mass publication without responsible editors.
This is a good principle that has contributed greatly to an informed public and to the political debate that is fundamental to democracy, at least in most Western democracies. There is no reason to believe the internet makes this any different – quite the opposite. The examples of problematic mass publications on social and online services are numerous.

Internet services have a choice: they can be responsible editors, in which case they must abide by rules similar to those that govern media companies. Or they can be infrastructure providers, in which case they must allow for contact with their users (or at least relay communications) and must not interfere with the traffic except for network integrity reasons (viruses and the like). The details of both avenues can be discussed, but what such companies cannot do is be media sometimes and infrastructure at other times, as it suits them. It is – to borrow a word from digital technology – a binary choice. In both options, the rules must be transparent and subject to outside scrutiny; the practice of treating user agreements as a reasonable replacement for democracy is passé, if it was ever plausible. If Facebook prefers to be a media company, that is fine, but in that case it cannot decide the rules itself: that has to be done by a third party – a self-regulation body or government – and it has to be completely transparent and open to discussion. No serious media company would ban pictures of nursing mothers but allow decapitation videos.

If anyone ever really bought the line that technology is neutral, this case should convince the last believers of the opposite. The views and convictions of those who develop and control technology are – articulated or not – the basis for its design choices.

Encrypted e-mail not secret

Tuesday, October 22nd, 2013

Anonymity is often regarded as the best way to secure one’s privacy online, except that it clashes on a fundamental level with the way the network is set up: our every online action leaves traces in the form of digital footprints, cookies, IP numbers stored in logs, and much else. In practice, anonymity is an illusion. Most users take some reassurance in the fact that there is so much data traffic that the risk of being spied on is very small, but big data analytics changes that, as Edward Snowden’s whistleblowing on Prism clearly demonstrated. Anonymity is a dead end for privacy online.

For those who need further evidence to be convinced, consider this comment to The Guardian by data-encryption specialist Phil Zimmermann. Even encrypted e-mail is not safe, Zimmermann says, as e-mail headers always travel without encryption to comply with e-mail protocols. So while you may encrypt the actual text, the information on whom you wrote to, when, and about what is out there for anyone to see. Technological solutions to protect anonymity are another dead end.
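A minimal sketch of the point (the addresses and subject line are hypothetical): even when the body of a message is PGP ciphertext, the headers that mail servers need for routing travel in cleartext, readable by every relay along the way.

```python
from email.message import EmailMessage

# Even with an encrypted body, the From/To/Subject headers
# stay in cleartext so that mail servers can route the message.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "Meet at the usual place"  # visible to every relay
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "...ciphertext only here...\n"
    "-----END PGP MESSAGE-----\n"
)

wire = msg.as_string()  # what actually travels over the network
# The metadata is readable without any decryption:
assert "Subject: Meet at the usual place" in wire
assert "bob@example.org" in wire
```

Only the content between the PGP markers is protected; everything above it is the metadata Zimmermann is talking about.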

We must find a different way to think about privacy online. We need democratic institutions to make rules on how our information can be used and to keep a short leash on those who use it. The good news: there is progress on data protection legislation in the EU. The bad news: it seems to turn a blind eye to the role of technology in all this.

Is digital jurisdiction Reding’s blind spot?

Friday, October 18th, 2013

This week I attended Commissioner Viviane Reding’s press conference and “citizen dialogue,” as reported in a separate story. Reding has an ambitious policy proposal for data protection that allows national agencies to fine offending companies up to 2% of their worldwide turnover. Reding says this is to give the agencies teeth. But in order to bite, authorities must first find the culprits. Sure, this helps with legitimate businesses properly registered in a European member state. But last time I looked, the internet was global, and many services that cater to European users operate from overseas. In fact, the shadier the business, the farther away, and the more smokescreens in the form of relay servers and so-called darknets. I asked this question at the press conference, but Reding’s response only repeated the description of the system. This raises the question: does the Commissioner not want to discuss what lies outside the domain of the authorities? Why not? Does she not share the view that it is problematic? Does she not see the whole picture? Or is it simply a touchy subject she would rather not talk about?

In an earlier life, I worked for the games industry and once met with Viviane Reding (in her earlier life as Commissioner for Infrastructure and Digital Society!) to discuss age recommendations for games. This was a favourite topic of Reding’s, and to her credit she has done much to advance the so-called PEGI system. One of the challenges for the games industry in protecting minors is that many play illegal copies downloaded from pirate sites, and these obviously don’t have the age recommendations prominently printed on the box cover like the legitimate product. When we industry representatives asked the Commissioner her view on this problem, she only responded, “Let’s focus on what we can control,” completely dismissing what may very well be the real problem in the protection of minors in relation to video games.

It seems to me that this attitude lives on in Commissioner Reding’s policy-making: “Let’s give teeth to those who regulate the responsible actors, but let’s not worry about the difficult dark side of the internet.” Except this is where the real problems may lie. Is digital jurisdiction Reding’s blind spot?

Stallman on surveillance

Thursday, October 17th, 2013

Technology theory veteran Richard Stallman has written an op-ed in Wired magazine which is well worth some attention. Stallman argues convincingly that digital surveillance has gone much too far and that governments now have access to far more information about citizens than they should, as Edward Snowden has shown. Stallman suggests some technological solutions to limit surveillance, such as restricting the ability of security cameras to connect to data centers and store video. His answer to the question of digital surveillance is technological. That view has a lot of merit, not least because its basic assumption is that technology must change to meet the priorities of society.

But Stallman’s other basic assumption I find troubling: that anonymity is the best route to privacy. The idea that government is the main threat to privacy is debatable. Instead, we should put in place a system where human rights are at the forefront and privacy is guaranteed by keeping tabs on government institutions and technology suppliers alike. The basic question is “who watches the watchmen?” (a question much older than the internet, of course). If the answer is to hide better, we are heading the wrong way – we should be free to act openly, assured that it is the task of the government to protect our privacy. That requires putting in place some manner of oversight function that handles privacy in relation to both government agencies and private actors. It also requires government secret services to dismiss the technological imperative which suggests that everything that can be done in terms of data collection also should be done. Privacy requires many trade-offs in terms of efficiency, not least in the age of big data. This is certainly true for secret services, which must be kept under close democratic control.
For sure, the nature of their operations makes full transparency impossible (they are secret services, after all), but democratic oversight can certainly be introduced in a way that does not compromise their work. So while Netopia is happy to sign up to most of Stallman’s shopping list of privacy suggestions, the main solution lies elsewhere. Democracy must be part of the solution, not the problem.