Author Archive

The Father of the Internet – a Belgian?

Friday, May 2nd, 2014

The internet has many roots and several men aspire to the title “The Father of the Internet”. One name often suggested is Vint Cerf (now at Google), who was a program director at the US Defense research agency DARPA in the 1970s and did important research on data packet traffic. Another candidate would be Vannevar Bush, whose concept of hyperlinks in the 1940s was the template for the world wide web four decades later.

One equally qualified but less well-known thinker is a Belgian: Paul Otlet. Both in writing and in experiment, Otlet developed such concepts as search engines, hyperlinks and social networks (but with other names). As a librarian he developed a system to index the world’s knowledge (not so different from the mission statement of the company that dominates the internet today).

Otlet’s work is available to the public at the Mundaneum in Mons, Belgium.

Liberté, Egalité, Fraternité en Ligne

Wednesday, April 23rd, 2014

Yesterday, the French Senate heard Netopia following the report Can We Make the Digital World Ethical?. It was certainly encouraging for Netopia to be recognised by such a prestigious institution. The hearing was part of the Senate’s information mission on global internet governance.

Video will follow, in the meantime – here is my speech:

Mr President of the Joint Mission,
Honourable Senators,
Ladies and Gentlemen,

Thank you for your invitation; it is an honour to be here among you. My name is Per Strömbäck. I am the founder and editor of Netopia, the forum for a digital society, whose purpose is to reflect on new perspectives on technology, its impact and its driving forces. Above all, Netopia is concerned with human rights, democracy and the rule of law online. Netopia takes the form of a regularly updated digital magazine – netopia.mslgcp.com – along with events and published reports, including the one that brings us together today, Can We Make the Digital World Ethical? It was written by Peter Warren, who will speak shortly, Jane Whyatt, who is here beside me, and Michael Streeter. It consists of a series of interviews with representatives of the academic world, among them Professor Murray Shanahan of Imperial College London, who will speak this afternoon.

As you can certainly hear, my French is not excellent. I have made a great effort to learn your beautiful language, but I have not yet fully succeeded. I therefore hope you will forgive me for switching now to English.

As the Editor of Netopia, I commissioned this report – Can We Make the Digital World Ethical? – in order to ask the fundamental and philosophical question of Free Will. This question is as old as philosophy itself, but in recent times a new aspect has been added to the mix: the freedom of technology. According to a report from the American computer network technology manufacturer Cisco, before the year 2020 more traffic on the Internet will be generated by machines than by humans. This is a profound change. We are used to thinking about communication as something that happens between people. With digital technology, to some extent communication takes place between a human and her machine. And of course machines have communicated with machines for a long time on a mundane level: a light sensor sending instructions to a lamp post to switch on the lights as darkness falls. The automated brakes on a train actuating as it passes an activated signal on the track. But at the magnitude the Cisco report talks about, machine communication takes on a new form. It becomes machines making decisions on behalf of humans, communicating like humans, acting like humans, even tricking us into believing that they are humans.

Some present-day examples: the scoreboards in the sports and finance pages of the daily newspapers are not put together by editors but by computer programs, yet we humans read them just as we did when they were man-made. The trades on the stock market are to a large extent made by robots, but they are no different in practice from man-made trades – the shares change owners. If you play poker or chess online, there is a great chance that the person you’re playing against is really a software robot. If you find it difficult to submit the best bid in an online auction, it may be because you’re bidding against software robots that can place bids milliseconds before the auction closes. All of these are commonplace today. Tomorrow, similar technologies are expected to be omnipresent in many parts of life and society: health care, security systems, law enforcement, the military, traffic control, power distribution, insurance and many more.

This development raises questions of freedom and responsibility. Who should be responsible for an accident caused by a self-driving car? The owner? The manufacturer? The software developer? The software itself? Who should be held accountable if a patient is hurt during treatment by medical robots? What about the software robots that create editorial content: do they have protected freedom of speech? Are they responsible for such things as defamation or copyright infringement? Are works made by robots copyright-protected, and if so, who owns that copyright? Should machines have human rights? These are the types of questions that inspired this report.

So let me talk about “Free”:

Jean-Paul Sartre said: “Man is condemned to be free. Condemned, because he did not create himself, and yet free, because once thrown into the world, he is responsible for everything he does.”

“Free” is a particularly confusing word in English. Many languages have two words for free: one for free of charge, the other for free as in liberty. In French there is “gratuit” and “libre”. In my native Swedish there is “gratis” and “fri”. The word free is often used in relation to the internet and digital technologies, but the distinction is not always clear. We are often told that we must “keep the internet free”. But what does “free” mean? When it is spoken by representatives of Silicon Valley companies and so-called internet freedom activists, it most often means that the internet should be left unregulated by government. On closer inspection, that is a curious definition of freedom. Would we accept something like that in normal society? Would we have greater freedom if there were no government? I would argue the opposite: public institutions are put in place to secure our individual rights, our freedom. We have the right to a fair trial, but only thanks to the law and institutions like public defence lawyers. We have the right to property, but only because the legal system guarantees it and public institutions settle ownership disputes. Without law and institutions to uphold it, we exist in what Thomas Hobbes called “a State of Nature” – bellum omnium contra omnes – the war of all against all. Anarchy.

But the internet of today, with little or no government regulation under the “keep the internet free” maxim, is not in a state of Anarchy for most of the users most of the time. Rather the current ideology hides a different regulator. Because it is not the governments of the nations of the world who make the rules online, it is those private companies that run the services and technologies that operate the internet who are the real regulators. By accepting the idea that the internet should not be regulated through democratic institutions, we also accept that it is Google who decides whose products and services are available to the consumer. We accept that Facebook decides what pictures and links are appropriate to publish. We accept that server hosting providers decide what content is appropriate and what should be taken down. It is regulation by Silicon Valley and it is very far from the values we normally connect with civilization. By coincidence, most of this regulation is done by software machines.

There is another type of “free” that is also often confused in relation to the internet. This is the idea of free as in free of charge, all too often mixed up with free as in free will or free of oppression. “Information wants to be free” is the first cousin of “keep the internet free”. It was a famous slogan in the early days of the world wide web two decades ago. The phrase was coined by this man: Stewart Brand. In the Seventies and Eighties he published a mail-order catalogue called the Whole Earth Catalog, which some claim was an early version of the web. Brand was one of the pioneers behind Silicon Valley’s odd combination of hippie ethics and hardcore capitalism. But his insight was much more profound than the “information wants to be free” motto suggests. Here is Brand’s complete quote:

On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.

So the insight was there from the beginning: free of charge is not the same as free as in liberty. There is even an old joke from the geeks of Silicon Valley from around the same time as Brand’s quote: “Free as in Speech or Free as in Beer?”

This insight was lost somewhere along the way, and these days keeping the internet free not only means free of government regulation but also keeping its content free of charge. As you may have guessed, I disagree with both these notions.

Where does this line of thought take us? My conclusion is that democracy is freedom and the way to achieve it is for democratic governments to take an active role online, regulating the technology, rather than letting technology regulate societies and citizens. This becomes more pressing as new technologies like decision-making machines and autonomous algorithms become common and gain more influence.

There are two main ways that governments can exercise such influence: first, through research grants for new technologies, which should always include considerations of societal impact; second, through regulation of the companies that operate the services and networks that underpin the digital society. Our report looks closer at some of these aspects and provides additional detail; I hope you will find it inspiring.

Jean-Paul Sartre again: “existence precedes essence” (l’existence précède l’essence). While that is true of humans, the opposite is true of machines: their essence precedes their existence. We design them for a purpose, then we build them. It is humans that rule the machines, never the other way around.

 

UPDATE

The videos from Netopia’s appearance in the French Senate are now online.

My intervention: http://videos.senat.fr/video/videos/2014/video22552.html

Prof Shanahan: http://videos.senat.fr/video/videos/2014/video22553.html

Peter Warren: http://videos.senat.fr/video/videos/2014/video22554.html

Q&A: http://videos.senat.fr/video/videos/2014/video22555.html

(All voice-over in French)

 

Horizon 2020 – A Chance to Include Privacy, Data Protection, and Human Rights in Technology

Wednesday, April 16th, 2014

How can the government have a tangible influence over technology? This issue is at the core of establishing the rule of law online. In many cases, government intervention in the digital sphere is messy: it may be inefficient, blunt, or have unwanted side effects such as privacy concerns. Needless to say, it is often a source of friction with the tech industry. Many times, regulation is added as an afterthought once the technology is in place. For natural reasons, it is sometimes only when a practice has been established that the consequences become clear. This is often described as the government being behind the curve on new technology.

However, at the same time, the government is ahead of the curve on many technologies. The internet itself started as a research project in the Sixties at the US Department of Defense’s ARPA (Advanced Research Projects Agency). The World Wide Web started at CERN, the European particle physics research centre. Even a project branded as anti-establishment, like the TOR anonymity network, is government-funded (first by the US Navy, later by the US and Swedish governments).

Netopia has organised two events and report launches on new technologies and government in the past six months, one on 3D printing and one on the Internet of Things. Both of these are areas where various governments invest a lot of tax money into research and development. Both areas have potentially disruptive consequences for society: personal 3D printing brings up issues of consumer protection, sales tax collection, gun control, intellectual property, and more. The Internet of Things challenges the way we think of the network: rather than people communicating with people, machine-to-machine traffic will be predominant before 2020, which of course raises questions of freedom of speech, surveillance, and responsibility. All these developments are clearly foreseeable, have been discussed for a long time, and Netopia’s reports only add to the existing knowledge. Yet, on both occasions when I asked representatives of the European Commission what sort of policies they had to deal with this, the answer was “None. Yet.” So there is one part of government funding the development of new technologies and another that is clueless about the implications.

There is an opportunity here: if considerations like privacy, data protection, fair competition, rule of law, and consumer protection are included in the design phase when new technology is developed, many of the shortcomings of today’s tech regulation (as spelt out above) can be avoided. Some talk about “privacy by design”. That is a good example. Another is stock trading software platforms. I have a friend who develops such software, and I once asked him how they deal with differences in finance law (money laundering, insider trading, that sort of stuff) in different jurisdictions. The answer was, “Easy; we just include it from the start.” So the answer is right there in the open: join the parts of government dealing with research grants with the parts dealing with legislation. Make the societal implications part of the research grant requirements.
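My friend’s “include it from the start” approach can be sketched in a few lines of code. Everything here – the jurisdictions, the rule values, the insider list – is invented for illustration; real trading platforms encode far richer rulebooks, but the principle is the same: the compliance check runs at the moment the order is placed, not as an afterthought.

```python
# Per-jurisdiction rules, applied before any order reaches the market.
# All names and thresholds below are hypothetical illustrations.
RULES = {
    "EU": {"max_order_value": 1_000_000, "insider_list_check": True},
    "US": {"max_order_value": 5_000_000, "insider_list_check": True},
}

# Hypothetical (ticker, trader) pairs flagged for insider-trading risk.
INSIDER_LIST = {("ACME", "trader_42")}

def validate_order(jurisdiction: str, ticker: str, trader: str, value: float) -> bool:
    """Return True only if the order passes the rules built into the platform."""
    rules = RULES.get(jurisdiction)
    if rules is None:
        return False  # unknown jurisdiction: reject by default
    if value > rules["max_order_value"]:
        return False
    if rules["insider_list_check"] and (ticker, trader) in INSIDER_LIST:
        return False
    return True

print(validate_order("EU", "ACME", "trader_7", 50_000))   # True: passes all checks
print(validate_order("EU", "ACME", "trader_42", 50_000))  # False: insider-list hit
```

The design choice worth noticing is that the rulebook is data, not scattered special cases, so a new jurisdiction or a new legal requirement is one entry, not a rewrite.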

Europe has the unique opportunity to do just that with the recently announced Horizon 2020 research framework. Now is the chance to ensure new technologies are not only groundbreaking but also in line with human rights and democratic values.

I touched on some of these ideas in a Euronews appearance on 3D printing earlier this week. U Talk is the name of the show; check it out: http://www.euronews.com/2014/04/11/consumer-concerns-around-3d-printing/ [Updated: Correct link]

Data Retention’s Blind Spot

Saturday, April 12th, 2014

This week’s most important digital policy news was, of course, the European Court of Justice’s ruling that declares the Data Retention Directive invalid. Many pirates and so-called internet activists have celebrated this as a victory, and certainly the court had good reasons for its ruling – the directive was too broad, not clearly defined in purpose, combined different types of crime, and raised concerns about privacy. The last problem has only become more serious following the Snowden leaks last year, which confirm widespread government surveillance, not as a potential threat but as a reality.

However, make no mistake – just because data retention is no longer regulated in European law does not mean that no states have such rules. On the contrary: it is just not harmonised across Europe. Whether that is good or bad probably depends on your view of the European Union as a whole, but in no way does it help the highly anticipated Digital Single Market. Also, even if there is no law demanding that operators keep traffic data, they of course still can (and do), for whatever purposes. You are still being monitored; only the rules for that have not been decided by your elected officials.

Making laws for the digital space is not easy. So the directive was too wide? Well, how can it be narrow if technology, business models, user behaviour, other rules, and much more change all the time? Any legislation concerning such a moving target needs to be flexible to be future-proof.

So what is the way forward? Netopia does not claim to have the final answer, but one clue can be taken from Oxford professor of internet governance Viktor Mayer-Schönberger. In his 2013 book Big Data (with Kenneth Cukier), he suggests a shift from regulating the collection of data to regulating the use of it. In that way, responsibility would be demanded from the actual use rather than the potential risks of keeping the information. Perhaps that is the way out for DRD 2.0?
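Mayer-Schönberger and Cukier’s shift from regulating collection to regulating use can be illustrated with a toy sketch. The purposes and the policy below are my own invented stand-ins, not anything proposed in the book; the point is only that under use-based regulation, retention and access become separate, auditable events.

```python
# Hypothetical whitelist of permitted uses; "marketing", say, is not on it.
ALLOWED_PURPOSES = {"billing", "network_maintenance"}

class TrafficDataStore:
    """Data may be retained, but every access must declare a purpose,
    is checked against the whitelist, and lands in an audit log that
    a regulator could inspect."""

    def __init__(self) -> None:
        self._records = {}   # subscriber -> retained traffic data
        self.audit_log = []  # every access attempt, allowed or not

    def retain(self, subscriber: str, record: str) -> None:
        self._records.setdefault(subscriber, []).append(record)

    def access(self, subscriber: str, purpose: str):
        allowed = purpose in ALLOWED_PURPOSES
        self.audit_log.append((subscriber, purpose, allowed))
        if not allowed:
            raise PermissionError(f"use '{purpose}' is not permitted")
        return self._records.get(subscriber, [])

store = TrafficDataStore()
store.retain("alice", "2014-04-01 call, 3 min")
print(store.access("alice", "billing"))  # permitted use succeeds
try:
    store.access("alice", "marketing")   # impermissible use is refused...
except PermissionError:
    pass
print(len(store.audit_log))              # ...but both attempts are on record: 2
```

Responsibility attaches to the access, not the storage, which is exactly the shift the book argues for.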

Net Neutrality or Not Net Neutrality? Is that really the question?

Wednesday, April 2nd, 2014

The European Parliament is as exciting as a cup final at the moment, at least if you are at all interested in digital issues (and if you read Netopia, I will assume you are). Thursday will see a close vote on network neutrality, and the suspense is real: no one can say which side will win. The net neutrality proponents say it is the key to an “open” internet; the opponents (and the telecoms) say it is a bad idea and that services must pay for capacity just like subscribers do. But what if it is so difficult to decide because both options are bad?

In theory, network neutrality means that no data packets should be discriminated against. Most proponents also say prioritization is wrong. Harmful code, like viruses, must of course be stopped. And spam. And some forms of illegal traffic (such as child pornography). So already there are a lot of exceptions to this concept. But the main problem with the idea is something else: the independent intermediary exists only in theory. In real life, intermediaries have all sorts of vested interests, such as telecoms operating content services or internet platforms setting up their own networks. And even if network neutrality were a realistic policy, it leaves no room for internet governance – at least not via democratic means. Instead, it is the dominant online players who make the rules, as the regulator has given away its keys. That track record is not great: when the rights of users collide with shareholders’ interests, most companies will be loyal to their owners. (Evgeny Morozov wrote a great book about this phenomenon.) Last year, I wrote an opinion piece in the European Voice on the “mirage of net neutrality”.
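The exceptions listed above can be made concrete with a toy sketch of a “neutral” forwarder. The categories and the classification rule here are invented; the point is that even the minimal agreed exceptions force the network to classify traffic, and every classifier embodies a policy decision made by someone.

```python
# The agreed exceptions: even a "neutral" network drops these.
BLOCKED_CATEGORIES = {"malware", "spam"}

def classify(payload: str) -> str:
    """Stand-in for packet inspection; real classifiers are far subtler,
    and that subtlety is exactly where policy hides."""
    if "EICAR" in payload:            # pretend virus signature
        return "malware"
    if payload.startswith("BUY NOW"):  # pretend spam heuristic
        return "spam"
    return "ordinary"

def forward(packets: list) -> list:
    """A 'neutral' forwarder that nevertheless filters by category."""
    return [p for p in packets if classify(p) not in BLOCKED_CATEGORIES]

traffic = ["hello", "BUY NOW!!!", "EICAR-test", "cat video bytes"]
print(forward(traffic))  # ['hello', 'cat video bytes']
```

Whoever writes `classify` – today, mostly the intermediaries themselves – is the de facto regulator.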

Now, the telecoms’ position is equally problematic. Of course, any business would like to charge for the same product several times, but none do it as effectively as the ISPs. They charge subscribers for broadband access, with a premium for better bandwidth. They also charge extra for non-discrimination of services like voice-over-IP that compete with their own telephone services. They charge the back-end services for access, and they want to charge a premium for particularly popular (capacity-intensive) services. On top of that, they develop first-party content services to compete with the very third parties they want to charge extra (and in the case of entertainment services, bear in mind that “BitTorrent is the killer app for broadband”). In the big-business logic of the telecom operators, a start-up service can get by without paying for priority in its early days, with the opportunity to buy quality once it is successful. Not really convincing for a start-up – and where does such a policy lead down the line? Are even more powerful telecoms really what the internet needs? At least it is safe to say that this position has killed the idea of mere conduit once and for all.

In the former case, pro network neutrality, independent oversight of the exceptions is necessary. In the latter case, contra network neutrality, a strict online competition authority is essential. So both sides point to the same conclusion: democratic governance is the solution. Network neutrality is a negatively defined freedom: the absence of regulation is supposed to guarantee a level playing field. We would never accept such thinking in any other part of society; instead we develop institutions to look after fundamental rights. There is no reason the internet should be any different. Which MEP will be the first to suggest a third position, where fundamental rights and fair competition are guaranteed by an independent authority under democratic control – tasked to regulate the telecoms not only in terms of competition between access providers, but also to guarantee a level playing field for services and fair business practices toward subscribers, to set rules on what sort of content should be blocked or filtered, to make sure surveillance does not happen, and to ensure transparency of the networks? More democracy, not less, is the way forward.

§8.3 Fixing the Internet – Not Breaking It

Sunday, March 30th, 2014

Technology is defined not so much by innovation and breakthrough research as by government investment, policy decisions, competing standards, market changes, opposing business interests and, not least, legal developments. Napster may have brought the music industry to its knees in the late Nineties, but it could not have happened had it not been for a very specific piece of legislation. The Digital Millennium Copyright Act of 1998 was the Clinton Administration’s attempt to strike a balance between the opposing interests of the entertainment and telecom businesses. The latter were granted immunity from any infringements on the part of their users; the former got the chance to send cease-and-desist letters and bring cases. It may have looked like a great solution at the time, but as is obvious to the present-day observer, the legal process was much slower than the pirate distribution. Worse, the DMCA created a business climate where the intermediaries have little incentive to take action against any sort of abuse, but can tinker with the traffic as they see fit – discriminating against services that threaten their revenue (such as voice-over-IP telephone services) and selling extra capacity to those who can pay (as in the recent Netflix–Comcast deal). If there is one root cause of the problems online today, it boils down to this intermediary privilege. Money laundering, Silk Road drug sales, child pornography, TOR gun/bomb shopping lists, online bullying, phishing… you name it: J’accuse the DMCA.

The European version of the DMCA is the Infosoc Directive of 2001, complete with immunity for intermediaries and a legal mechanism to counter infringement: §8.3. In its European implementation, these rules allow courts to order intermediaries to block their users from accessing services and websites which contain infringing content. And it works: according to a fresh report from the music industry, the ten EU countries where courts have ordered ISPs to block infringing services have 11% lower use of BitTorrent (the main standard for file-sharing piracy).

This week, the Court of Justice of the European Union ruled in the so-called Kino.to case, which concerned the Infosoc §8.3 rules. The CJEU found that website blocking is a balanced and effective way to counter copyright infringement. Some say that this ruling “breaks the internet”. If you think the internet is about anarchy and freely taking (and distributing!) what is not yours, then sure: that may be broken. But if you agree that the internet must be part of society, where the same rules apply and where the rule of law, democracy and human rights prevail, then the CJEU ruling fixes the internet rather than breaks it. Dare we hope it is a first step to getting rid of the intermediary privilege altogether?

Cost Disease or Fixed Eyeballs

Tuesday, March 18th, 2014

Netopia contributor Ralf Grötker writes about William Baumol’s cost disease theory this week. I find this thought fascinating, but have some objections. First of all, it seems to be a cost-oriented approach to pricing, but you could argue that the right price of a good or service is what the customer wants to pay, not what it costs to produce. That is the logic of the classic supply-and-demand price graphs. In Grötker’s piece, this is best illustrated by the ice-cream vendors who managed to make more money relative to the productivity increase thanks to new flavours and fancy names (maybe it tastes better too). By the same thinking, the answer would be to make the good or service more exclusive or less accessible if you want to keep the price tag up. However, that is hardly a good plan for things like healthcare and universities – labour-intensive public functions.

At a recent seminar in Brussels, I asked Tony Clayton – chief of the UK Intellectual Property Office – about cost disease. Clayton dismissed Baumol’s string-quartet example, arguing that the quartet too can achieve productivity increases by recording its music and selling it, or by broadcasting its shows to a larger audience. That is similar to how the media has developed, with newspapers’ web TV for example. More output, smaller staff. But also more competition for a relatively fixed amount of eyeballs, so the irony is that not only does cost disease make journalism relatively more expensive, but increased supply also drives the price of content down. That is a double negative for the news media, and by no means a problem limited to that industry.

The Problem with Cybersecurity

Tuesday, March 11th, 2014

It may seem that all digital problems have a technological solution, often in the shape of cybersecurity. Worried your private images could spread across social media? Adjust the access settings on Facebook (or Google Plus, but who uses that?). Phishing attempts keep you awake at night? Upgrade your anti-virus software for only $7.99 per year. Annoyed that cookies in your browser send personal data to advertisers? Just clear the cache and cookies after every website you visit (or use private browsing mode). The pattern is familiar. Speaking at Austin’s SXSW digital conference, NSA whistleblower Edward Snowden added to this techno-centric solutionism, suggesting that encryption is the answer to global surveillance: “End-to-end encryption makes bulk surveillance impossible. There is more oversight, and they won’t be able to pitch exploits at every computer in the world without getting caught.”

Netopia has the greatest respect for Mr Snowden, but there are many objections to this view. Encryption may well prove a false comfort, as there are many ways around even the best protection; and while it may slow the NSA down, they may just as well have an answer that we will never know about (at least not until the next whistleblower), just as a majority of us were surprised to learn of the extent of surveillance revealed by Snowden himself only last year. But the main problem with encryption by default is that it reinforces the view that it is the individual user who must protect herself against outsiders and society.
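For readers unfamiliar with the term, the end-to-end principle Snowden refers to can be illustrated with a toy one-time pad: only the two endpoints hold the key, so an intermediary doing bulk collection carries nothing but ciphertext. This is a deliberately simplistic sketch, not a real protocol; actual end-to-end systems rely on authenticated public-key cryptography.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time pad: XOR each plaintext byte with a key byte.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # held only by the two endpoints

ciphertext = encrypt(key, message)  # this is all the network ever carries
print(decrypt(key, ciphertext))     # b'meet at dawn'
```

The network in the middle sees only the ciphertext, which without the key is indistinguishable from random bytes; that is the whole claim behind “bulk surveillance becomes impossible”, and also why the key, not the wire, becomes the target.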

Cybersecurity, as described in the examples above, puts the onus of protection on the user. We would never accept that view in any other part of society. Quite the opposite, in fact: we have rights and institutions that protect the individual; the idea that each person must fend for themselves is anathema in any civilised society. Sure, you can have an alarm, central locking and a steering-wheel lock for your car. You can take the front off your car stereo and engrave the windows with your plate number (at least if you’re still living in the Eighties). Lots of security companies sell services like those. But that is not to say we should not also have laws against stealing or breaking into cars, insurance companies compensating those whose cars are stolen, and police, prosecutors, courts and prisons to deal with the thieves (plus defence attorneys to help the thief’s case!). And help programs for ex-convicts to stay out of trouble once the sentence is over. And social welfare services to try to keep troubled kids off the streets and help challenged communities. And norms that say it is wrong to break into cars, and that those who do will make their friends and families upset. The car alarm and the steering-wheel lock exist in a context. But if your car were online, it would be your own stupid fault if it was stolen or broken into. If you click on a bad link and reveal your bank log-in, sure, you could get the money back from the bank, but no one would really think of you as a victim. And for sure, no one would expect the criminals to be brought to justice.

Cybersecurity rests on the assumption that everyone is an expert on cybersecurity, or at least interested in the issue. That is not the case in real life. Lots of people use digital services with only a basic understanding of how the technology works. That must be so; otherwise, many groups would be excluded from the digital revolution. We must be able to rely on the system and, if need be, put new government functions in place to protect the users. What then about NSA surveillance? It needs to be kept on a shorter leash – something even POTUS agrees with.

More democracy—not more technology—is the answer.

Netopia on the Airwaves and in the Media

Thursday, March 6th, 2014

Netopia’s report Can We Make the Digital World Ethical? has created quite a stir. I gave an interview to Brussels online news and opinion frontrunner EurActiv, which dubbed me “internet guru” (that’s a first!). I discuss the opportunities for democracy online and challenge the idea that government is always behind the curve when it comes to technology.

This evening, Thursday, March 6th, UK’s Resonance FM will broadcast a one-hour program based on the report and made by the authors. It features many voices from the academic community on the ethics of the internet of things, self-programming software, decision-making algorithms, and other near-future technologies. Some voices are familiar to Netopia readers, such as Oxford University’s Viktor Mayer-Schönberger, who comments on the possibility of human-imitating machines. For Netopia readers in London, the frequency is 104.4 FM. Others are invited to listen online. 20.00 GMT.

 

The Independence Delusion

Monday, February 24th, 2014

Last week I visited a telecoms tech conference. This time it was for the companies that supply telecom operators with hardware: fibre, routers, switches, those sorts of things. It is always interesting to visit other industries’ shows, because they say a lot about an industry’s self-image. This time was no exception; the fibre crew spelled it out to me: “Our job is to provide as much capacity as possible, regardless of what travels through the pipes. We are infrastructure. We are independent.” Sound familiar? I had heard it before too, but this time it got me thinking: can you really ever be independent like that? It sounds convenient, but is it true?

First of all – is any infrastructure ever independent? Does it ever exist in a power vacuum? Take small communities in the countryside: the topic of conversation is almost exclusively infrastructure. What is the quality of the road? Why doesn’t it get repaired more often? Who should pay for making it wider? Or what about heating, power, public transport – all of them hot topics if you live away from cities. Or take any infrastructure construction project: airports, bridges, motorways – always controversial, always with petitions and protests. I challenge you to find an infrastructure construction project that was not the focus of controversy of some kind. Or take energy: the Nord Stream gas pipeline in the Baltic Sea was debated for years by the neighbouring countries. Ukraine is the transit country for Russian gas pipelines to Europe, which some analysts have pointed to as an added complication to the current situation. Or the oil pipelines in Afghanistan. No, infrastructure is never neutral. It is more like a constant power struggle.

Second – is telecom infrastructure different? Well, many will tell you that the internet can almost on its own overthrow dictators, or at least be the catalyst of revolution. If that is the case, it is nowhere near neutral. Even if you don’t believe that (I don’t), the telecoms infrastructure has owners with very different agendas. Governments are big owners of telecom cables – also in countries with de-regulated telecom markets – and government is often pointed to as a threat to individual liberty in discussions of digital communications. Not an independent owner by any measure. Others that own pipes are the carriers. They build both mobile and landline networks and are the biggest clients of the fibre suppliers. And they are in no way independent; in fact, they are so far from independent that governments all over the world put very specific legislation in place to maintain some degree of network independence (with varying degrees of success), such as must-carry and net neutrality regulations. Still, telcos use traffic-shaping methods to optimise the network load and maximise their profits: quite the opposite of independence.

Third – network technology is not created by nature or some divine force; it is designed, made and sold by people. People with different agendas, priorities, inspirations, bad hair days. Sometimes mistakes are made. Sometimes it does not work as planned. New features are added – think about how Ericsson technology was used by the Syrian regime to identify protesters’ mobile phones (and seek them out). This feature was designed. It could be used for different purposes, but that is not the same as neutral. Some infrastructure technologies are used for identifying content. Others block certain types of code (like viruses). These are not neutral purposes. They are the result of conscious decisions. Nothing independent about them.

Independence is nice and comfortable. Too bad it’s a delusion. Now would be a good time for the tech infrastructure companies to accept that they don’t exist in a vacuum and that their actions have consequences. Just like everybody else’s.