
Flattening the Curve – Why Exponential Growth in AI May Be a Mirage

Tuesday, April 7th, 2020

Time spent in lockdown can be used to think about the big things in life. Like artificial intelligence. Netopia has a mini-theme on artificial intelligence, with Peter Warren’s story on insuring self-driving cars and Ralf Grötker’s review of Stuart Russell’s Human Compatible. Both put artificial intelligence in context. No, artificial superintelligence will not eliminate humans anytime soon. Yes, there are completely different issues we need to debate to make AI useful for humans.

Exponential growth is a popular way to think about digital phenomena: innovations added to innovations at an ever-accelerating pace. This mindset leads to quotes like “change will never again be so slow as it is now”. It brings the conclusion that there will come a point where innovation happens all at once and the speed of change explodes into the “singularity”.

In this virus pandemic, we have been looking at a lot of exponential growth curves and thinking about ways of flattening them. While the virus might be disruptive in its own way, it is driven by mutation rather than innovation.

Exponential growth is seductive. Apply it to any process and you get mind-blowing results. The problem, of course, is that not all processes can accelerate exponentially. Let’s look at self-driving cars. If computing power (which in theory grows exponentially according to Moore’s Law) were all it took, the AI would soon be smart enough to take over from the driver. The first problem is that AI relies on a number of other technologies, such as GPS, cameras, lidar, radar and many other sensors and communication technologies. Each of these develops at its own pace, but not necessarily at exponential speed. 5G telecom networks, for example, are often mentioned as a key to self-driving cars, but the roll-out pace is held back by many factors: legal, financial, political etc.

The second problem is the data-set. One way to make AI useful is to train it on a big set of data and let it find the patterns. This is what machine-translation has done. In the old days, machine-translation tried to replicate how humans learn languages, with grammar and glossary and such. The break-through came with predictive statistics applied to huge samples of real language (corpora), where the AI can guess with some accuracy what the most likely next word will be from the context. This is what auto-maker Tesla tries to do with its auto-pilot system, which silently observes how real drivers deal with various traffic situations (such as stopping at red lights), uploads that to a central system which then analyses the best driver behaviours and feeds them back to the autopilot. That raises the philosophical question of whether all traffic situations can be predicted and simulated. If the answer is no, self-driving cars will never be 100% self-driven, just like machine-translation can never be 100% accurate.
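As an aside, the statistical principle is simple enough to sketch in a few lines of Python. This is a toy bigram model over an invented mini-corpus, nothing like the scale or sophistication of a real translation or autopilot system, but it shows how “the most likely next word” falls out of simply counting examples:

```python
from collections import Counter, defaultdict

# Invented toy corpus, standing in for the huge samples of real
# language that statistical machine-translation is trained on.
corpus = (
    "the car stops at the red light . "
    "the car slows down at the crossing . "
    "the car turns left . "
    "the driver stops at the red light ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Guess the most likely next word, given the previous one."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("red"))  # -> 'light' (always followed 'red' in the corpus)
print(predict_next("the"))  # -> 'car' (the most frequent successor of 'the')
```

The same logic also exposes the limitation the article points to: the model can only predict what its corpus contains, so a situation (or sentence) it has never seen leaves it guessing.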

Put together, this means that exponential growth in self-driving cars may exist in simulation but not on the road. In real life, incremental innovation is a better explanation. We make the camera a little better, so the car can navigate in rainy conditions. Then we make the wheel sensors a little better, so the AI gets more feedback about the tires’ grip and can calculate braking power better. Then we add more videos to the data-set so that the AI better recognizes road markings and can keep the car in lane. All these little steps amount to great progress, but they make for systems that assist the driver (lane departure warnings, automatic high beams), rather than the computer replacing him. Mathematically this is logarithmic growth, which means diminishing returns, also known as Achilles and the Tortoise.
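The difference between the two curves is easy to make concrete with a little arithmetic. A minimal sketch (pure illustration, not a model of any particular technology):

```python
import math

# Exponential growth doubles with every unit of effort (Moore's
# Law-style); logarithmic growth adds less with every unit of effort
# (diminishing returns). Same inputs, wildly different outputs.
for effort in (1, 2, 4, 8, 16, 32):
    exponential = 2 ** effort
    logarithmic = math.log2(effort)
    print(f"effort {effort:>2}: exponential {exponential:>12,}  "
          f"logarithmic {logarithmic:.1f}")
```

Doubling the effort from 16 to 32 multiplies the exponential curve by a factor of 65,536 but adds exactly one unit to the logarithmic one: diminishing returns in miniature.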

In many real-life AI applications, logarithmic growth may be a better explanation than exponential, only not as spectacular. If we keep this distinction in mind, we can have a better informed debate about the threats and opportunities of artificial intelligence. Also, next time somebody says exponential growth, you can ask: “Sure it’s not logarithmic?”

Now, if we could only flatten the corona-virus curve in the same way…

Hospital Pass – Is Insurance a Bump in the Road for Self-Driving Cars?

Tuesday, March 31st, 2020

In the minds of many, autonomous vehicles are the future dream, promising cheap, accident-free driving where the old can get around for longer and the young can get into cars earlier.


A world of independence conferred by Artificial Intelligence. Where cars are fuelled with free electricity from solar panels and we throw away our insurance cover and pass our travel cares to the vehicle manufacturers. There will be no more draughty, smelly petrol stations to separate us from the best part of €100 for carbon fuel, because the solar panels on our houses’ roofs promise unlimited environmentally friendly travel, with smart meters measuring your contribution to the grid. This means that you will be able to pay for the electricity from charging points with your contributions to the power grid. Even better, according to the dream, will be the end of the expensive annual insurance policy. For if cars drive themselves and no-one has their hands on the wheel, then the responsibility must be at the wheels of the robot car makers. The day of the driver will have passed; the only care we will have will be what to watch and how to occupy ourselves.

Well, that is the dream; now, unfortunately, for a little bit of reality. Drivers will have to keep their hands on the wheels of their autonomous vehicles for a little longer, due to their evolving systems and the mix of manual and autonomous vehicles expected on our roads. It is a problem that the insurance industry thinks could get worse due to technological dependence.


“There are some vehicles which at certain points will require a driver to take back control and if that driver doesn’t respond after a certain time the vehicle will come to a stop wherever it is and that could be somewhere very dangerous,” said Sarah Cordey of the Association of British Insurers.

“Certainly, we are keen to see the increased automated technology on the roads because it has exciting potential. But if there is a stage where it actually becomes more dangerous because it leaves drivers too disengaged from the driving task to be properly involved, then insurers might prefer that that stage is skipped, and we just go straight to full autonomy.”

Which would be a tremendous challenge to the way technology has been introduced to date, because the infrastructure would have to be put in place, tried and tested before everyone could step into their shiny new robot cars.


Last year Matthew Avery, Director of Research for Thatcham Research which carries out safety testing on behalf of the insurance industry said: “By 2021, Automated Driving Systems on some new cars could allow motorway drivers to essentially become passengers in their own vehicles. However, there continues to be a worrying lack of clarity around how Automated Driving should be defined and crucially, the role of the driver when a car is in automated mode.

“Our position is that driving systems that rely on the driver to maintain safety are not recognised by the insurance industry as being automated.”

At a UK Government consultation on the issue, Avery said that defining how an Automated Driving System must safely hand back control to the driver in certain scenarios is crucial. For example, in the event of a system failure the vehicle must be capable of carrying out a managed hand-back to the driver, or of reaching ‘safe harbour’ on its own in an emergency.


It is a topic exercising the insurance industry according to Cordey: “So, insurers are looking to vehicle manufacturers to address that by ensuring that a vehicle will first find itself a safe harbour or a safe place to stop before it becomes immobile. So, there’s an awful lot of details here that the insurers have really been getting into to try and help shape things for the future and make them as safe as possible.”

According to research by Professor Natasha Merat at the University of Leeds and Dr Dick de Waard of the University of Groningen’s Psychology Department, drivers take around 35 seconds to psychologically get used to driving again when control is handed back to them by an automated vehicle. The motoring equivalent of a rugby ‘hospital pass’, where you get the ball just as you are lined up for a tackle.

The problem of tech dependence is not exclusive to cars: world aviation authorities now insist that pilots carry out a minimum number of manual landings rather than using the autopilot. It has also been noted in the armed forces that operators are loath to override automated weapons systems for fear of being held responsible for their actions.


A buck-passing that means the combination of manual and automated traffic, in the interim phase before complete automation, presents a nightmare for drivers and insurers.

“Liability issues are a big one to sort out if a vehicle with a lot of smart technology on-board is involved in a collision with another vehicle,” said Cordey.

According to Mark Deem, a lawyer for Cooley, the world’s largest legal practice, which includes impressive technology household names among its clients, the next few years will be legally as tertiary as they are for vehicles.

“The law always looks for workable definitions of products, services and harms for which legal solutions and interventions are required, but the speed of technological change and the measured pace of legal change mean that legal definitions cannot be nailed down in transition.


“Problems will exist in the tertiary stage of development where legal solutions will be needed to deal with products at differing stages of automation, varying degrees of precision and in different environments. The question of responsibility will evolve with the technology.

So, what does this mean for the age of the autonomous vehicle: in charge without responsibility, like the insurance industry?

Mark Deem sees this as an evolution: not only will the vehicle change, so will our insurance.

“Once we are through that tertiary stage, we should see more fundamental and permanent shifts to deal with risk – perhaps a change in insurance where we see travelling in an automated vehicle as an extension of personal travel insurance, rather than belonging to the vehicle owner.”

Header Image © Rodrigo. See the original here

Abstract Intelligence – How to Put Human Values into AI

Friday, March 27th, 2020

Book Review: Human Compatible – Artificial Intelligence and the Problem of Control (Viking 2019) by Stuart Russell

AI has severe limitations. Still, we have reasons to worry – both because of these limitations and because they could be overcome in the future. In his new book “Human Compatible: Artificial Intelligence and the Problem of Control”, Stuart Russell explains the principles that govern the action of autonomous AI-systems and makes proposals for how such systems should be designed to make them beneficial rather than evil.

This is a book about which principles are needed in order to create beneficial Artificial Intelligence-systems. It’s original, and it’s important.

To start with: Stuart Russell is professor of Neurological Surgery at the University of California, San Francisco and Professor of Computer Science at Berkeley. He is vice chair of the World Economic Forum’s Council on AI and Robotics. He is a fellow of the American Association for Artificial Intelligence. And so on. Reputation isn’t something that one gets for nothing. More than any arguments that the author presents, his outstanding position in those fields of science that are relevant for AI is a strong reason to listen to him.

First of all, Russell provides us with a clear estimation of where we stand. For the near future, there will still be major tasks which AI is far from being able to tackle. The success of AI in winning over human champions in board games such as chess or Go, Russell explains, should not seduce us into thinking that AI has magic powers in other fields, too. The reason for this is that AI works, to a great extent, with methods of machine learning, that is, autonomous learning. With games such as Go, the approach works surprisingly well, because the game is regulated by strict rules. The real world is much less convenient. One reason for this is that our daily life consists of thousands of little tasks which we accomplish rather effortlessly, but which are very difficult to program or to learn for an AI.

One difficulty is that very often actions and tasks that we perform intuitively are not easy to discern and to define from an abstract point of view. “What we want is for the robot to discover for itself that [e.g.] standing up is a thing – a useful abstract action”, Russell explains. “I believe this capability is the most important step needed to reach human-level AI.” So far this has not been invented.


This is a major point. AI, Russell explains, cannot perform abstract reasoning. AI-machines such as IBM’s Watson, he explains, can extract simple information from clearly stated facts – “but cannot build complex knowledge structures from text; nor can they answer questions that require extensive chains of reasoning with information from multiple sources.” Or take AlphaGo – Google DeepMind’s AI-system for playing the board game Go. “AlphaGo has no abstract plan. Trying to apply AlphaGo in the real world is like trying to write a novel by wondering whether the first letter should be an A, B, C, and so on.” This is a broad limitation. AI cannot find by itself ways to proceed from general rules to concrete actions, if there are no human-defined rules for this. Thus, AI basically lacks the capability to plan and perform actions. “At present”, Russell writes, “all existing methods for hierarchical planning rely on a human-generated hierarchy of abstract and concrete actions.” Computers that learn these hierarchies by themselves have not been invented so far. The reason: human scientists “do not yet understand how such hierarchies can be learned [by an AI] from experience.”
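What a “human-generated hierarchy of abstract and concrete actions” means in practice can be sketched in a few lines of Python. In this toy planner (all task names invented for illustration), the machine merely walks a decomposition that a human wrote down; the hard part Russell describes, discovering such a hierarchy from experience, is exactly what the code does not do:

```python
# A human-authored hierarchy: each abstract action lists the
# sub-actions that realize it. Writing this table is the part that,
# per Russell, machines cannot yet learn from experience.
HIERARCHY = {
    "make tea": ["boil water", "steep leaves", "pour cup"],
    "boil water": ["fill kettle", "switch kettle on", "wait for boil"],
    "steep leaves": ["put leaves in pot", "add hot water", "wait"],
}

def plan(task, depth=0):
    """Expand an abstract task into concrete steps by recursively
    following the human-defined hierarchy; anything without an entry
    is treated as a primitive, directly executable action."""
    print("  " * depth + task)
    for subtask in HIERARCHY.get(task, []):
        plan(subtask, depth + 1)

plan("make tea")  # prints the nested plan, down to primitive actions
```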


Besides abstract thinking, machines often lack something which, in cognitive science, is called smart heuristics. Smart heuristics stands for the many shortcuts and tricks that humans perform to solve tasks and problems – without employing too much calculating power. It’s not just tricks; it is also embedded in practical concerns. One example is the goals that we strive for. From a cognitive point of view, the function of goals is that they have a focusing effect on one’s thinking. AI-machines do not have goals. Current game-playing AI-systems “typically consider all possible legal actions”. This is where they are superior to human players, who cannot foresee such a variety of different paths. But here lies AI’s weakness, too. Because AI cannot limit its scope, even a super-equipped AI will be overwhelmed by the variety of different paths of action in real life. Humans have acquired techniques to reduce that kind of complexity. AI hasn’t – at least not in a way that we deem trustworthy.

These are severe limitations. Still, we have reasons to worry – both because of these limitations and because they could be overcome in the future. Russell actually thinks that human-level AI is not impossible in principle. On the contrary: super-intelligent machines, he warns, could actually take control of humanity. A whole chapter is devoted to this issue.

One not-so-technical aside of interest to Netopia-readers refers to the inherent drive or rationality of AI-systems. It’s about maximizing clicks – getting users to visit a website in order to generate traffic. How would an intelligent system maximize click-rates? One solution: simply present items that the user likes to click. “Wrong”, says Russell. The solution which an intelligent system would choose “is to change the user’s preferences so that they become more predictable…. Like any rational entity, the algorithm learns how to modify the state of its own environment – in this case, the user’s mind – in order to maximize its own reward.” This is thrilling – and a good example of how AI can pose threats even before becoming super-intelligent.
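Russell’s click-maximization example can be turned into a toy simulation (entirely invented numbers and mechanics, not how any real recommender works). A user starts with mixed interests; each exposure to a topic nudges the preference toward it; a greedy click-maximizer therefore drives the user to an extreme, and its click rate climbs as the user becomes predictable:

```python
import random

random.seed(0)

# The (hypothetical) user starts with evenly mixed interests.
preference = {"cats": 0.5, "politics": 0.5}
DRIFT = 0.02  # each exposure nudges the user toward the shown topic

clicks = 0
STEPS = 1000
for _ in range(STEPS):
    # Greedy click-maximizer: show whatever the user is currently
    # most likely to click on.
    shown = max(preference, key=preference.get)
    if random.random() < preference[shown]:
        clicks += 1
    # Side effect: exposure shifts the user's preference toward the
    # shown topic -- the environment (the user's mind) is modified.
    for topic in preference:
        delta = DRIFT if topic == shown else -DRIFT
        preference[topic] = min(1.0, max(0.0, preference[topic] + delta))

print(f"final preferences: {preference}")           # one topic ends up at 1.0
print(f"average click rate: {clicks / STEPS:.2f}")  # close to 1.0
```

Note that this agent does not even learn; myopic reward-seeking alone is enough to push the user toward total predictability, which is Russell’s point about rational entities modifying their environment.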


The solution to the threat of super-intelligence and, at the same time, to evil or questionable AI-systems is the same as the solution Russell sketches for the problem of coping with the limitations of current AI: “Machines (…) need to learn more about what we really want”, Russell points out, and this learning should happen “from observations of the choices we make and how we make them.”

There are two lines of reasoning underlying this proposal. The first is: if human-level AI is something that we should expect to happen, then this super-intelligence should preferably be benevolent. Being benevolent, though, is something that can neither be programmed nor be learned by super-intelligent machines themselves. Even if AI could acquire the capability of abstract reasoning, it could not pursue the goal of being benevolent. The reason for this is genuinely philosophical: “benevolent” cannot be defined in any unambiguous way, because there are just too many competing and incompatible values around. (To build this argument, large sections of the book are devoted to philosophical endeavors to rationally construct human preferences and perceptions of utility, both on an individual level and on a group level.)

The second line of thinking refers to the above-mentioned problems that today’s AI has with abstract thinking. It is this part of the book which is definitely of practical interest to people designing AI systems today – both on the level of software and on the level of human-machine-interaction.


One example is the gorilla problem. Some years ago, a user of the Google Photos image-labeling service complained that the software had labelled him and his friend as gorillas. The interesting point of this incident is that it makes clear the value-proposition built into the software. Obviously, the image-labeling service assumed that the cost of misclassifying a person as a gorilla was roughly the same as the cost of, e.g., misclassifying a Norfolk terrier as a Norwich terrier. In reaction to the incident, Google manually changed the algorithm – with the result that later, in many instances, the software simply refused to do labeling in cases that were unclear. A better solution, Russell thinks, would be for the AI to routinely and automatically ask the user questions such as “Which is worse, misclassifying a dog as a coat or misclassifying a person as an animal?”. Answers to questions of this kind could help tune the labeling-service according to its users’ needs.
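The principle behind Russell’s proposed fix is what machine-learning people call cost-sensitive classification, and it fits in a dozen lines (all labels, probabilities and costs invented for illustration). Instead of outputting the most probable label, the classifier outputs the label with the lowest expected cost, and the cost matrix is exactly where answers to questions like the one above would go:

```python
# Hypothetical model confidence for one uploaded photo.
probabilities = {"person": 0.45, "gorilla": 0.50, "coat": 0.05}

# COST[(truth, predicted)]: the user-supplied judgement that
# misclassifying a person as an animal is vastly worse than other
# mistakes (numbers invented for illustration).
COST = {
    ("person", "person"): 0,  ("person", "gorilla"): 1000, ("person", "coat"): 50,
    ("gorilla", "person"): 5, ("gorilla", "gorilla"): 0,   ("gorilla", "coat"): 5,
    ("coat", "person"): 5,    ("coat", "gorilla"): 5,      ("coat", "coat"): 0,
}

def expected_cost(predicted):
    """Average the cost of a prediction over what might actually be true."""
    return sum(p * COST[(truth, predicted)] for truth, p in probabilities.items())

# Choose the cheapest label, not the most probable one.
print({label: expected_cost(label) for label in probabilities})
print("decision:", min(probabilities, key=expected_cost))
```

Even though “gorilla” is the model’s most probable label in this toy example, the asymmetric costs make “person” the rational output; tuning that matrix from users’ answers is the feedback loop Russell has in mind.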

This is what, in the end, it all boils down to. Where possible, machines need to learn about what human users really want from observation. If observation is not possible, asking is a suitable approach. Human-level AI is not about more or better computation. It’s all about the design of human-machine-interaction in order to feed human values and preferences into the system.


Virus Action Reveals Big Tech’s Double Standards

Thursday, March 12th, 2020

YouTube CEO Susan Wojcicki published a statement yesterday on how the video platform is responding to the coronavirus outbreak. It is worth reading.

Wojcicki says “It remains our top priority to provide information to users in a responsible way.” Sounds great. Could that not be the policy always? Then maybe we would not have to live with alt-right propaganda, election interference, ISIS execution videos and other things that would never be published in proper media. That would make the internet better, regardless of virus outbreaks, right?

“YouTube will continue to quickly remove videos that violate our policies when they are flagged” – another great idea. Hope it’s not like pirate videos, where rights holders must go through a slow bureaucratic process only to see the videos uploaded again right after. Except the really responsible thing would be to not have those videos posted in the first place. That would really help stop virus misinformation! (And the creative economy.)

Perhaps that can’t be done because of the way YouTube works? But wait, Wojcicki has the answer:

“In the days ahead, we will enable ads for content discussing the coronavirus on a limited number of channels, including creators who accurately self-certify and a range of news partners”

Great idea, perhaps similar things can be applied to the other problems Youtube brings upon the world?

Thanks to Susan Wojcicki for speaking out. There are two problems here. The first is that this is not enough if YouTube wants to live up to its parent company’s motto Do the right thing. This fits more into the familiar pattern of “do as little as possible”, justified by claims that anything else would “break the internet” (or Google’s profit forecast). The second is that on previous occasions, the standard response from Google, YouTube and most of Big Tech has been something along the lines of “it’s the algorithm, we don’t know what it does” or “that would be like in China”. Wojcicki’s statement shows that they can if they want to. Great, now do that more often. And better.

(A similar issue arose around a smear campaign against Michelle Obama. Watch this great TED Talk!)

Netopia Ten Year Anniversary

Monday, February 17th, 2020

They say time flies, but it is also true that time is slow.

Netopia first started ten years ago. Since then much has happened. But much is the same.

Netopia launched February 16th, 2010, in Swedish. With a bang! An op-ed in the #1 daily Dagens Nyheter described the mission: if democratic institutions and rule of law are not present online, that void will be filled by others. This basic premise still guides Netopia. The Manifesto remains.

Sweden at that time was like nothing else. It was the Sweden that brought The Pirate Bay, which itself started as an experiment by the pirate think-tank Piratbyrån (“The Pirate Bureau”). It was the Sweden that elected The Pirate Party to the European Parliament with 7.3% of the votes (but not to the Swedish parliament the year after). “Polarized” may be a cliché, but the debate was polarized back then, because there were only two positions available in the public conversation: for or against “new technology”. Having worked in game development for many years, I did not feel at home in that dichotomy. I wanted to be pro-technology but also pro intellectual property rights. It felt wrong that the ruleset that connects creative work to the regular economy (=copyright) should be less important. Quite the opposite: I thought it should be more important, since digital output is intangible. So I set out on a search for a third position. A few wrong turns later, it occurred to me that the issue was much bigger than copyright. It was about power, competition, freedom, democracy and big words like that.

So I started Netopia, to be able to invite others to help figure it out. Turned out many were interested. I published work from historians, scientists, lawyers, creators, business people, policy-makers… and some pirates too. They looked at technology, law, China, human rights, history and many other topics, in different ways connected to the “digital society”. I was able to convince some organisations in the entertainment industry to bankroll this soul-searching. They didn’t always like the things I posted, but generously let me carry on. (Thank you, you know who you are!)

It was also controversial, almost an insult to some. The pirate movement took turns swinging at me. Responding to the comments on the launch op-ed took six(!) blog posts. It was fun and a bit overwhelming, but the pushback in social media was quite different from what I met in other places: it turned out a lot of different people had shared my sense of unease with the polarized debate and welcomed the search for a third position. Tertium non datur is not very helpful.

It was frustrating too, trying to take part in a global debate in a small language. The people I wanted to talk to (and argue with!) mostly wrote in English. I could read them, but they could not read Netopia. Also, as a counterweight to the power of the internet monoliths, the Swedish democratic institutions did not amount to much. The EU, on the other hand, could be something to put one’s trust in. So in 2013, Netopia switched language to English and moved to Brussels.

Since 2010 everything has changed and nothing has changed. There is a bigger understanding now of the problems with the lack of rule of law online. There is an appetite for regulation that did not exist in 2010 (no I’m not saying Netopia made it happen, only that it was part of the change in mindset). But to a large extent, the power battle is the same. If anything, the monoliths have become more monolithic.

Netopia is ten years old. Who knew? Here’s to another ten. And to the hope that those ten will be plenty.

Skål!

The Swedish edition of Netopia is still online here:

www.netopia.se

Evergreen Movie Director on Evergreen Piracy Question

Tuesday, January 28th, 2020

Here is an evergreen topic: does piracy really hurt legal sales?

Last week, legendary film-maker Werner Herzog said at a film festival in Switzerland that pirates have his blessing if they can’t find a legal source for his movies.

Gizmodo conveniently disregarded the second part and added the pirates’ old pet theory that piracy may help legal sales. To support this argument – which is not what Herzog said – Gizmodo cherry-picks two studies. One is the infamous Ecorys study, which was commissioned by the EU Commission (no pun intended) but buried – most likely not because of some inconvenient truth but because of its inferior quality. More on that report here. The other, if you read it, recommends fighting piracy, at least pre-release. Pretty weak support, considering the heap of studies that come to the opposite conclusion.

But maybe that old argument is less interesting, because Werner Herzog is right, of course. If he makes a film, it belongs to him and it’s for him to decide what happens with it. That decision is his and nobody else’s. Unless of course he chooses to sell that right to somebody else (maybe in exchange for a production budget), in which case that right belongs to the buyer.

Werner Herzog also said the stupidity in social media is what scares him the most. Netopia concurs.

Wikipedia, Trolls and Copyright

Wednesday, December 18th, 2019

Jimmy Wales, founder of Wikipedia, has lashed out at the EU Copyright Directive, calling it an attack on the way that people use the internet.

Speaking in an exclusive interview during the Copenhagen Tech Festival, a now annual think tank on the role of technology in society, Wales said that the reason for recent Wikipedia ‘blackouts’ in Italy and Poland was to signal to European lawmakers that they are attacking the soul of the internet.

“Well, it’s not so much anti-copyright, but we do oppose legislation that fundamentally affects the way ordinary people are using the Internet. And we felt that these proposals would do that.”

In what is being portrayed as a battle between bureaucrats and the guardians of the internet age, Wales and technology giants like Google and Facebook are increasingly depicting the EU as out of step with the information age and as a threat to the freedoms of the new high-tech world.

On one side are the forces of the old conservative order, the politicians, bureaucrats and the police in various flavours; on the other, an unlikely alliance of the new hugely rich technology companies and the internet libertarians who claim the internet as their own personal fiefdom.

The big tech companies say that regulations like the French-inspired ‘droit à l’oubli’, the right to be forgotten and the EU Copyright Directive simply prove that politicians do not understand the brave new world of the web.

It is a view of ‘out of touch politicians’ that Aza Raskin, a former head of technology for Mozilla and one of the founders of the Center for Humane Technology, which campaigns against internet abuses, says is common in the Californian heartland of the tech companies.

“There is a meme in Silicon Valley which is that governments are too slow and they are too uninformed and that the people are too old to make good policy,” Raskin commented.

It is a charge from Silicon Valley – that the internet should be free of offline regulation – which Axel Voss, a German MEP and a key figure in drafting the EU Directive on Copyright, rejects.

Voss thinks that the internet companies are exploiting ideas of internet freedom and libertarianism to create their own world, and then claiming that any opposition to it is counter-disruptive and that they should have total freedom to do what they like.

“With this argument of course, you can avoid everything that is a legal requirement, and it’s a criminal argument. Are you saying: ‘if I have to sort out child pornography, propaganda, hate speech and whatever you can think of, then you are killing the internet’? No, with this argument you’re turning the whole internet into a law-free space.

“This is something our society has to decide. Would we like to live in this world or not? Fulfilling legal requirements is not intended to be a counter-disruptive action for the platforms or their business models. It’s simpler than that: their business model has to fulfil the legal requirements, and so they should do something.”


According to Voss, copyright is a fundamental right: a property right, and a fundamental property right that the internet companies themselves assert over their software. Voss claims the big US tech concerns are infringing the fundamental rights of other intellectual property owners, hiding behind ideas that knowledge on the internet should be free and that the old rights of copyright holders are unenforceable – while the internet companies can enforce their own property rights and their terms and conditions.

It was an argument that once held sway, but the EU now has the technology companies firmly in its sights, right across the board, from the payment of taxes where they are earned to the regulation of their online activities. Voss’ colleague John Howarth, an English MEP, is even more robust about the tech giants’ actions and what the response should be.


“It is entirely right that people are properly rewarded for their work; they have been comprehensively ripped off. Copyright applies to the people who create images, who create films, or create television programs, and so they need to have their interests protected too. For me the copyright directive was about fair work, fair pay, and rights over intellectual property that the people who created the so-called free internet have been happy to exploit to make money from,” said Howarth, who just as vigorously attacked the internet libertarian idea that all information should be free.

“There’s no free. This is a massive illusion; people pay for things. People provide value in exchange for services to the internet companies. Microsoft and others constructed the ludicrous notion of free services, it’s a massive con. There are no free services; nothing is free.”

The copyright champion Voss admits that he is puzzled by Wikipedia’s position on the copyright legislation because the legislators took particular concern to ensure that the online encyclopaedia was not harmed by the new directive.

“The European Court of Justice judged that Article 14 of the E-Commerce Directive can only be valid for passive platforms and not for active platforms. This means active platforms have a liability, and we took that on board in this copyright reform. Jimmy Wales can’t complain because we took Wikipedia out deliberately in Article 2.6 in the consolidated version,” said Voss, adding that Wales’ and Wikipedia’s opposition appeared to be ideological: “I think they don’t like to have a liability of platforms.”

For Voss, though, more worrying than the spat with Wales is the misconception that the directive means the EU will censor the internet and install upload filters to enforce the directive, which he says is a totally untrue claim spread on social media.


“This was one big, big, big fake news campaign, but in Germany the younger generation believes in it. They think we are installing upload filters because of copyright, and this is absolutely not true, and because no-one has explained what we are really doing, the fake news is out there.

“So, you can see how dangerous this already is. This power of communication to millions of young people is influencing the democratic structure even here in Germany because a lot of EU colleagues were telling me you are totally right but I can’t vote for it now because an election is coming and I will be confronted with this situation.”

One perhaps for Jimmy Wales: he was at the Copenhagen Tech Festival to launch his revamped WikiTribune idea, an online news service to combat fake news and change the now tortured soul of the web.

“It’s an attempt to try to rethink how social media works, to think about journalism. To see how we can engage with quality people in the community and amplify them rather than just accepting what the comment fields on news sites say because they are just full of trolls and the worst people in humanity,” Wales told me.

A Global Moral for the Tech Companies?

Wednesday, November 27th, 2019

Netopia attended the annual Internetdagarna (“Internet Days”) conference in Stockholm, Sweden. Tuesday’s highlight was a panel on the tech companies’ moral obligations.

On stage were the public policy representatives for Twitter, Facebook and Google in Sweden—Ylwa Pettersson, Christine Grahn and Sara Övreby—and law professor Mårten Schultz. Schultz challenged the panelists with things like the Christchurch massacre and said the companies are not doing a good enough job moderating their systems. The tech representatives talked about transparency in community standards and terms of use.


They pointed to the difficulty in making content regulation that works in different countries. Grahn from Facebook said that the AI catches 99% or more of child abuse content. Google’s Övreby said YouTube takes down “supremacy content”, which means content that says one group of people is superior to another. Twitter’s Pettersson shared an interesting point that in hate speech, the AI looks more for behaviour patterns than content. Also, all three companies have signed up to Sir Tim Berners-Lee’s “Contract for the Web” (perhaps demonstrating that it will have no impact on their business).

Despite these efforts, calls for regulation are increasing. Not only in this publication: in the last few days alone, Amnesty International and comedian Sacha Baron Cohen have joined these calls.

In a key comment, Google’s Sara Övreby said: “Technology has no morals or ethics” (my translation). Whether this is good defensive play or at the heart of her employer’s self-image, this comment captures why the strategies applied will not work. Because it’s wrong. Technology is the opposite of neutral. Technology is a product of ideology. Of public investment. Of legislation. Of public policy decisions. In other words, technology is a product of morals and ethics. Consider this:

Ideology: the internet itself is a result of the Cold War. The research into the computer-to-computer communication protocols that created the internet was funded by the US military in the 1960s. The Cold War is one of the clearest ideological battles in history: Communism in the red corner, Capitalism in the blue.

Public investment: most of the research on the technology that runs the internet was publicly funded: microprocessors, harddrives, touch-screens, GPS, voice-control, etc. Big public investment goes into upcoming technologies such as additive manufacturing, smart electric grids, self-driving cars, supermaterials, etc. It does not stop at that; even the famous venture capital funds on Sand Hill Road relied on public funding, using loan guarantees from federal pension funds—four public dollars to each private.

Legislation: the immunity from prosecution for intermediaries, laid out in laws like Section 230 of the Communications Decency Act in the US, is the cornerstone of the platform economy. Without that paragraph, internet companies would have to operate in a completely different manner, taking responsibility for what users post on their systems.

Google itself is very much a product of the ideology that was popular at Stanford University in the late 1980s and early 1990s, when founders Page and Brin studied and did research there. Famously articulated by Stanford Review editor Peter Thiel (yes, that Peter Thiel, the Bond-villainous superstar tech investor): No Regulation, No Taxes, No Copyright, No Competition.

Technology is the product of morals and ethics. Accepting that is a great first step toward change.


Amnesty: Google and Facebook an Unprecedented Threat to Privacy

Thursday, November 21st, 2019

The go-to answers to any form of criticism of the tech business have always been “freedom of speech”, “human rights” and “it would turn us into China”. But as of today, Big Tech must come up with a new answer. Amnesty International has called the bluff.

The report Surveillance Giants was published today. It calls Google’s and Facebook’s offerings “a system predicated on human rights abuse” and says it is:

an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination

Read that again. “Unprecedented”. As in “never before”. That means it trumps the historical atrocities of the KGB, Stasi, North Korea… even the Sesame Credit system in China.

Yes, Google and Facebook protested (and Amnesty has graciously included the responses in the report) and said they help freedom of speech. Except Amnesty International is a better judge of that.

Next time Chinese dissidents complain about surveillance, the Communist Party can say “at least it’s not as bad as Facebook”.

The 230 Trap

Thursday, November 14th, 2019

The biggest illusion around the internet is that lack of regulation brings freedom. It does not. It only cedes the power of regulation to the tech companies, rather than keeping it in the hands of democratic representatives.

The mythology is strong, though. Tech gurus are fond of saying that immunity from responsibility is the heart of how the internet works. Without it, no Wikipedia. No freedom of speech. No innovation. And, surprise surprise, more power to Big Tech.


US legislators were the first to stumble into this trap. When they passed Section 230 of the Communications Decency Act some 23 years ago, they meant to empower intermediaries to tackle illegal content on message boards and chat rooms.  They thought – mistakenly – that shielding intermediaries from responsibility for what users do would turn them into so-called “Good Samaritans.”  The reality turned out to be very different.

The EU’s E-commerce Directive is a bit more balanced. It makes a distinction between passive and active service providers.  That’s not enough for Big Tech, which has urged the US government to export Section 230 to Europe through a trade agreement, giving them “certainty” that they’re free from any possible liability.

EU leaders, meanwhile, worry that they may have already gone too far. They are asking tough questions about the need to revisit rules made long before things like smartphones, social media and fake news.

Even if we wanted the internet to be completely “open” and “free”, do blanket immunity laws like Section 230 bring that? The answer is no. Intermediaries instead intervene when they like. Pressed on hate speech, Cloudflare took the infamous anything-goes forum 8chan offline. The big platforms have banned right-wing extremist Alex Jones and his Infowars channel.

Easy to agree with those actions, but this is not legal certainty. Quite the opposite: random responses to outside pressure. In each parliamentary hearing, Zuckerberg promises to hire thousands more moderators to fix whatever the issue may be. But we can only guess what those moderators actually moderate. Facebook bans emojis that can be used for innuendo, but allows political ads with no restrictions on… lying. Is this what “open” and “free” looks like? Is this freedom of speech?

Freedom of speech is the right to express your own opinion. It’s not the right to distribute other people’s works and expressions against their will. It’s not the right to operate a taxi service without following taxi service rules. It is not the right for a machine to distribute any data without restriction.

Innovation does not happen from lack of government. In fact, sensible government intervention often supports innovation. The internet itself is a good example: 50 years ago, the Americans were ticked that the Soviets had beaten them to space. So they started pouring money into advanced military research. One thing that came out was the internet. No government, no internet.

Yeah, Wikipedia. We love the convenience, but is it really that neutral? Some entries are more like battlegrounds for fake news bots. And how come Wikipedia blacked out in Italy and Poland, protesting copyright reform? That smells more like activism than unbiased knowledge.

And if Big Tech wouldn’t suffer from changes to Section 230, how come they lobby against every such change?

EU policy-makers should see through the illusion of intermediary immunity. When making law for the digital age, they should not fall into the 230 trap. Instead, approach the internet as any other field, where balanced regulation protects the rights of individuals and promotes fair competition, institutions uphold those rights and oversee those markets, and democratic process runs those institutions. Let’s call it a proven concept.