Avoiding Catastrophe – Netopia Spotlight: Prof. Stuart Russell

Wednesday, December 14th, 2022

Artificial intelligence is no longer the sci-fi future that we have so often used as a screen for projecting our fears and dreams onto. Today it is in every person’s hands (or device), as the soaring popularity of text-generating AI systems such as ChatGPT, or text-to-image systems such as DALL-E and Midjourney, has demonstrated. What does this mean for the impact of AI?

Netopia spoke to a true veteran and thought-leader in the field, Professor Stuart Russell (read Netopia’s review of his 2019 book Human Compatible: Abstract Intelligence – How to Put Human Values into AI).

Professor Russell came to Stockholm earlier this month for the Nobel Week Dialogue and discussed these topics further on two panels on the program; watch them here (starting at 4:30:00): Nobel Week Dialogue – NobelPrize.org

WATCH THE FULL INTERVIEW

Professor Stuart Russell, Welcome to Netopia’s Video Spotlight interview.

It’s nice to be with you. Thank you.

We are in Stockholm today for the Nobel Week Dialogue, and you have a very impressive resumé: you are a professor of neurological surgery?

I was a professor of neurological surgery for just three years, while I was working on a research project.

And computer science has also been a focus?

Yes, computer science has been my day job, and I’ve been at UC Berkeley for 36 years now.

That’s an interesting combination, neurological surgery and computer science. The mind leaps to neurological interfaces that connect to your brain.

You might think so, but actually it was a coincidence: some of the basic mathematical ideas I had, I thought might be useful for some of the problems that come up in simply keeping people alive in the intensive care unit. When you’ve had a head injury, your brain is often unable to regulate your body, so the intensive care unit does it instead: it keeps your temperature in the right range, and it manages your heart rate, your blood pressure, your oxygen levels. To do all that, it collects a lot of information. A patient in the intensive care unit is plugged full of sensor devices measuring all these things, so that you know when something needs fixing. But it is very hard for human beings to keep track of all that data. So we thought we would use AI systems to watch the sensor values, determine as soon as possible if something was going wrong, and then intervene earlier and more effectively. It turned out that, yes, we could do that a little bit, but the human body is a very, very complicated thing. I think we just scratched the surface of that problem.

That’s really interesting. It’s the subconscious operations of the body rather than mimicking the mind, because that is what we often think about when we talk about artificial intelligence.

That’s right. So the connection is really a coincidence. I wasn’t trying to understand how the brain works; I was just trying to stop people from dying. But I now know much more about the plumbing of the body, basically.

You are here in Stockholm now for the Nobel Week Dialogue, and you will be speaking this afternoon. What’s your topic today?

So there are two panels. One is on living with technology, so I’ll do an introduction about artificial intelligence: what’s happening now, what are the trends, what’s going to be the big thing in the future? The second panel is on how to avoid catastrophe, which happens to be what I have been working on for the last seven or eight years. I’ve been thinking about really one main question, which is: if we build machines that are more powerful than human beings, how do we keep power over them forever?

So that’s the question I’ve been asking, and it has led me in some very interesting directions, including a realization that we actually got the field wrong right from the beginning.

How so?

So, the way we thought about AI: we started doing AI roughly in the 1940s, and obviously it’s about making machines intelligent. The question is: what does that mean? Does it mean that they write beautiful poetry? Some people thought it means they have to behave just like human beings. But that’s really a question of psychology, and humans behave in ways that are largely accidental results of the evolution of the structures of our brains and bodies; you can’t really build a mathematical discipline out of that. So the definition that won out was one we borrowed from economics and philosophy: the notion of rational behaviour, the notion that our actions can be expected to achieve our objectives. Obviously, if you take an action that you don’t expect to achieve your objectives, it’s not rational; it’s not intelligent to act in ways that are contrary to your own interests. So that’s the model we borrowed. For humans, that makes sense, because we come with our objectives, for whatever reason: there are things we want our future to be like and things we don’t want our future to be like. But machines don’t come with objectives.

So the model that we developed was: you build objective-achieving machinery, or as we call it, optimising machinery, and then you plug in the objective. In the early days of the field, those objectives were logically defined goals, like “I want to be at the airport before 2 p.m.” More recently, we understand that there is uncertainty and that we have to deal with trade-offs, so we have a richer notion of what we mean by objective, but the principle is the same: our actions should be expected to achieve our objectives, and the same for machines. The problem with that model, which for some reason we just didn’t notice until recently, is that if you put in the wrong objective, then you have a problem: now you’ve got a machine pursuing an objective that is actually in conflict with what you, the human, want the future to be like. You’re really setting up a war between humans and machines, and that’s exactly what we want to avoid.

So one answer might be: okay, we just have to make sure that the objective we put in is exactly right, that it’s complete, that it correctly covers all conceivable human interests, no matter how the future actually evolves. And that’s completely impossible, because there are things that are going to happen in the future that we don’t yet know whether we will like or not. So the answer seems to be to get rid of that model altogether, to get rid of the model where we build objective-achieving machines and put objectives into them. What we do instead is build machines that know that they don’t know what the real objective is. They are actually uncertain about what it is that humans want, even though their goal is to help humans get what they want. That’s a new kind of program; we didn’t have those kinds of programs before, and it actually leads to all kinds of desirable behaviours. If the machine knows that it doesn’t know what the true objective is, then, for example, it has an incentive to ask permission before doing something that might violate some of our objectives, our preferences. In the old way of doing things, there is never a reason to ask permission, because the machine has the objective; that is the thing to pursue, and it pursues it, so it never asks for permission. It’s early days and there is a huge amount of work to do, but I’m reasonably optimistic that this way of thinking about AI will actually turn out to be better, and maybe we’ll solve this long-run problem of how we maintain power over machines.

So it’s an “artificial doubting machine”, a “humble machine”.
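The incentive to ask permission falls out of a simple expected-utility calculation. The sketch below is a toy illustration of that idea, not Professor Russell’s actual formalism; the `expected_value` helper and all the payoff numbers are invented for the example.

```python
# Toy model: a robot is unsure which of two objectives the human holds.
# Acting now is great under one objective and harmful under the other;
# asking first costs a little time but avoids the harmful outcome.

def expected_value(p_obj_a, act_payoffs, ask_cost):
    """Expected utility of acting now vs. asking permission first.

    p_obj_a     : robot's belief that the human holds objective A
    act_payoffs : (payoff if A is true, payoff if B is true)
    ask_cost    : small cost of interrupting the human
    """
    pay_a, pay_b = act_payoffs
    act = p_obj_a * pay_a + (1 - p_obj_a) * pay_b
    # If it asks, the human reveals the true objective, so the robot
    # acts only when the payoff is positive (otherwise it does nothing).
    ask = p_obj_a * max(pay_a, 0) + (1 - p_obj_a) * max(pay_b, 0) - ask_cost
    return act, ask

# Acting yields +10 if the human holds objective A, but -100 under B.
act, ask = expected_value(p_obj_a=0.9, act_payoffs=(10, -100), ask_cost=1)
print(act, ask)  # acting is net negative in expectation; asking is not
```

Even though the robot is 90% sure of the human’s objective, the small chance of a large violation makes asking the better policy; a machine certain of its objective would never reach that conclusion.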

Since you wrote the book that we reviewed a few years ago, there has been a big change: AI is now in everyone’s hands, with Midjourney and DALL-E and these image-creating artificial intelligences, and also ChatGPT, very popular as we speak and all over social media. Did you expect this to happen, this democratization of artificial intelligence tools, and what’s the impact?

So it’s interesting that you bring up these two examples. The second one, ChatGPT, is very much along the lines of what people have always written about in science fiction. Think about Star Trek: you could talk to the computer, ask it questions, and it would give you very knowledgeable answers. Some of the early real AI systems, even in the late 60s, were question-answering systems. You could ask questions in English and it would answer you in English. And, interestingly,

ChatGPT is not able to do some of the things that those systems were able to do in the late 1960s. For example, in the most famous of those systems, one called SHRDLU, by Terry Winograd, the conversation was about a simulated world where you were moving things around on a table. You could say to it, “Okay, put the red block behind the green pyramid,” and then you could ask questions like, “What’s in front of the green pyramid?” and it could tell you. Whereas ChatGPT very quickly gets confused and can’t answer those kinds of questions.

An abstract understanding of the outside world, is that right?

That’s right. So it can’t build, maintain and correctly update a model of what is happening in the world. It does some other things really very impressively, but those kinds of sequential tasks, not so much. I think that’s probably just a matter of time, though. The other kind of system…

…this idea that you could put in some text and it will produce a picture for you. I was giving a speech in the House of Lords a few weeks ago, so I had to have some fun and put in “Members of the House of Lords wrestling in the mud”. This was, on stage, on one of the Stable Diffusion systems, and it produced a really quite impressive picture of, you know, elderly gentlemen wearing long robes, covered in mud. It was quite funny, but that was never a goal of AI. It just wasn’t something that people worked on; it turned out by serendipity. People realized that if you train with both text and images, you can get generative models. It came about because people had found ways of generating images: if you train on lots and lots of faces, with a certain kind of technology called a Generative Adversarial Network, or GAN, then you can ask that model to generate new faces, and it’s very good at that. Then they realized that if you train in parallel with textual descriptions and images, you can ask for text and it produces images. So, a completely new functionality, one that was never really seriously pursued in AI until very recently. It has been a fascinating period, and the kinds of things going on in AI just don’t resemble anything that we did historically in terms of methodology. The early question-answering systems that I described from the late 1960s had, underlying them, a logical reasoning system with a database: we would take natural language, find the structure of the sentence, convert it into an internal formal representation, interface that with the reasoning system, and so on. Now we just make basically a big pot of circuitry, billions and billions of circuit elements that are just tunable, and we train it on trillions of words of text. We hope for the best. We have absolutely no idea how these systems do what they do. We can’t predict when they’re going to work and when they’re not going to work. Sometimes they answer questions correctly; sometimes they just output complete nonsense. One of my friends has been sending me examples; he was trying ChatGPT and asked, okay: which is not bigger than the other, an elephant or a cat? And GPT confidently says: “Neither an elephant nor a cat is not bigger than the other.”

You speak so fondly of artificial intelligence, almost like we talk about our children or our pets, and at the same time some people think of it as the end of humanity…

Well, I think you can simultaneously enjoy both pictures. I mean, the things we have now are in many ways amusing toys, and in some sense they are like animals.

They are the result of a sort of process of natural selection: the process is a stochastic gradient descent algorithm, which is sort of what natural selection does, with some other things added to it. But that process is almost like a certain kind of chemical reaction: you just throw lots of stuff in, let it boil for a while, and see. Maybe it turns into a cheesecake, or maybe something else. It just turns into this thing, and you don’t know how it works. So you play with it and you learn what it can and can’t do, like with a cat. We learn that cats don’t come when you call them; open a can of cat food, then they come. Some things cats can do, some things dogs can do. It’s almost like a new species: we are just learning what they can and can’t do, and how to use them. We use dogs for hunting, we use horses for pulling carriages around, and we’ll find ways to use these chat systems, or the new generation of systems, something actually much more capable than the old chatbots.

As humans, we tend to project things like emotion and intention onto living things and objects, and of course also onto AI. As I was preparing for this interview, I thought about the old robot dog, the Sony AIBO: it was like a small puppy, and it acted like a puppy, floppy ears and all. Do you think there will be a point where we get a perfect puppy? Or is there something intangible, something like life or a soul?

In principle, we could do that. Whether it would make sense, I’m not sure, and it seems quite likely that the natural direction of technology will take us in different directions.

Given that machines are so much faster than biological brains, and that as they scale up they have bigger memories, they have much more communication bandwidth with each other; they can exchange information far faster than humans can exchange information with each other. So they are just going to look very different from biological systems, I think. And I would say the jury is still out on which technological approach will end up working. I know there’s a lot of excitement around deep networks and large language models, of which ChatGPT is an example, but there are reasons to think that those approaches will fail in the end. We are already seeing ways in which they don’t work as well as you would like, in the sense that they seem to need far more data than humans do. ChatGPT has already read possibly millions of times more text than any human has ever read, and yet it still gets very simple, basic questions wrong. The image recognition systems need to see thousands or millions of examples of a giraffe. But if you get a picture book and read it to your child, you don’t buy a picture book with a million pages of giraffes!

There is one giraffe, and it’s really a simple, you know, yellow and brown cartoon giraffe, and that’s enough for the child to recognize giraffes in any context, anywhere in the world, for the rest of their lives. From one example. Human learning is much more capable, and I think that illustrates that there are basic principles we haven’t yet succeeded in capturing in our approaches to machine learning.

I think we have tended to think of AI as something foreign, something that comes around the corner someday, and it has been a topic of science fiction. So, back to the democratization: does this change the relationship between us humans and artificial intelligence, and the expectations we might have of it, now that we can more easily interact with it?

I think it does. And, to the point made earlier, we probably overestimate its intelligence, and whether it is actually reasoning, or even remembering. It’s very hard to remember that something able to generate grammatically correct and coherent text could be doing that using completely unintelligent principles. But that can certainly be done, and there are many examples. One of my favourites on the web is called “the Chomsky bot”. The Chomsky bot is a very, very simple statistical text generator that was trained on a lot of pages of the writings of Noam Chomsky, and it produces paragraphs that are very coherent and very characteristically Chomskyan: very complicated sentence structures and complicated logical relationships among things. If you just ask it to write a few paragraphs, you think, “Oh my goodness, this is amazing, this program is so brilliant.” But actually, if you keep doing it, it starts to get repetitive, and then you start to see how it works: it’s really a party trick. The large language models, ChatGPT and others, are really more sophisticated versions of that. The response they give is, in some sense, a statistical average of the kinds of responses that humans have given to those kinds of inputs in all the text the system has ever read. A simple example: if you ask it, “How are you today?”, what is the most common answer in history to that question? “I’m fine, thanks. How are you?” So, unless it has been specially trained to avoid it, it will probably say, “I’m fine, thanks. How are you?” But that doesn’t mean that it is fine; it’s actually just parroting what humans say. It doesn’t have any sense that it exists, or that it could be fine or not fine, or even that the word “fine” doesn’t apply because it’s a machine. It’s just parroting. If you keep that example in mind, that it’s not answering your question, it helps to dispel the illusion. On the other hand, it makes you wonder: is that what human beings are doing most of the time? We’re not really doing all the reasoning and thinking and remembering; a lot of the time our speech is generated by pulling together patterns we’ve seen in the past, or even things we’ve said in the past. And I can tell you, that’s what I’m doing right now.
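The “party trick” behind simple statistical generators like the Chomsky bot can be sketched as a toy Markov chain: each word is picked purely from the statistics of which words followed it in the training text. This is a generic illustration of the technique, not the Chomsky bot’s actual code, and the training sentences are made up.

```python
import random
from collections import defaultdict

# Tiny training corpus (invented). A real system would use far more text.
training_text = (
    "how are you today "
    "i am fine thanks how are you "
    "i am fine thanks how is your day"
)

# Build a table: word -> list of words observed to follow it.
words = training_text.split()
follows = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    """Emit up to `length` words, each sampled from what followed the last."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:  # dead end: no word ever followed this one
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("how", 8))
```

The output is locally fluent because every adjacent word pair occurred in the training data, yet no understanding is involved anywhere; scaled up to trillions of words and learned weights instead of a lookup table, the same basic picture applies.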

And sometimes we lie; maybe we are not fine, we just don’t want to talk about it. Okay, last question. It also appears that input data for AI is a field of power struggle on many different levels: we have big companies investing in artificial intelligence systems and trying to get access to as much data as possible to train them.

We have superpower states doing similar things. How do you see this playing out? Will the benevolent forces stand tall in the end, or is this part of the dystopia?

Well, I actually think, or at least I hope, that this data race is coming to an end. You know, this idea that data is the new oil, that the more data you have, the more powerful your systems are going to be, and whoever has the most data wins: I think that’s an incorrect narrative, because getting more data doesn’t necessarily result in a more intelligent system. I think there are basic research advances that are going to determine who creates the first real general-purpose AI systems. And one thing that is obvious from looking at humans is that you can be very intelligent without a lot of data. For example, the amount of text that even GPT-3, which is not the latest generation (we’re about to see GPT-4), was trained on is roughly the same size as every book ever written. So you have already pretty much consumed most of the text in the world. What else is there? There is a bunch of text on the web, but a lot of that was generated by computer programs spitting out instructions and news items and things like that, machine-generated text, so it’s not clear that adds a lot in terms of creating more intelligence. So I think we are coming to the end, if we haven’t already come to the end, of the idea that we can create more capabilities simply by having bigger circuits trained with more data. This is an opinion, I should say, not a theorem; I can’t prove what I’m saying. It’s a gut feeling, and other people have a different gut feeling. They feel that if you just get ten times more data and a ten times bigger circuit, something qualitatively new is going to happen. But that feels like wishful thinking to me, because there is no scientific basis for it; we don’t even know what would produce a qualitative change in behaviour.

Thank you, Stuart Russell. Thank you so much for coming to Netopia and giving this interview, and good luck with your talk this afternoon.

It’s a pleasure. Nice to speak to you.

 

Footnote:
To the reader: it is with no irony that this article was created via speech-to-text AI recognition software. Almost all video upload services today have some form of audio-to-text extraction, perhaps for user output, perhaps for advertising input, or, more positively, for hard-of-hearing and deaf users who can enjoy the content with subtitles.

In all, we’d rate it 9/10 for accuracy from input sound to output text, though in all fairness it helps when the input language is English and the speaker is a professor using clear, ordered language and sentences.

Enough Abuse Online – Football Stepping Up their Game

Wednesday, December 1st, 2021

Hate speech and racial abuse online have haunted football, and maybe peaked with England’s penalty shoot-out against Italy in the European Championship final this summer. What can football do about it, and who else should take action? The English Premier League’s head of equality, diversity and inclusivity, Iffy Onuora, joined Netopia’s video spotlight interview series to discuss his work, the role of the legal system and tech companies, as well as equal opportunity in football, social media boycotts and his life in football.

Transcript

Welcome to the Netopia Video Spotlight interview and for this episode

I have a special guest. It’s none other than Iffy Onuora, he is with the Premier League and is the Head of Equality, Diversity and Inclusivity – welcome Iffy. It’s great to have you here on the show. You’re the head of equality, diversity and inclusivity. What does that job description mean?

It can be anything from representation to trying to make clear pathways for underrepresented groups. What I mean by that is: if you look at the Premier League, it is fantastically diverse on the pitch, with lots of representation, a lot of nationalities, a lot of black players, lots of European players.

The Premier League is a fantastic visual in that respect, but the thing we found in this country, and certainly in Europe, was that this wasn’t being represented off the pitch, so there was a paucity of black executives, and that’s not just a black-and-white issue. We don’t make the transition from the pitch to the boardroom very welcoming in this country; I think other countries do that slightly better than us. So it’s a topic and an action point for us to address. Then obviously there’s the coaching side: we think 20-30% of players are black or identify as black, but very few managers are, and that’s not just in the Premier League but beyond, so we are always looking at ways we can help turn those kinds of statistics around. That was part of my work previously, alongside the coaching work with the PFA, and when I came to the Premier League I knew this would be my sole focus. It was a new position for the Premier League. I think there had been a lot of very good work done well before I came, but making this job specific to a new incumbent was a way for the Premier League to say, right, we’re now going to bring this in-house. They had gone outside to other organisations, such as Kick It Out, for some of the anti-discrimination work, but bringing that role into the building, making a Premier League-specific role, was a way of saying, “Right, we’re going to make this role really distinct and bespoke to the Premier League.”

[Per] So you speak about the lack of diversity, or the difficulty of the transition from the pitch to the boardroom. But another big topic has been the racial abuse towards players, not specific to the Premier League in any way, but rather a bigger conversation in football. How do you see that discussion?
What has been happening in the last 18 months, in the lockdown?

We had the killing of George Floyd, which was a pretty seminal event around the world. You saw the protests; you only have to go back a year to see the worldwide protests, and I think it was a red line in some respects. People, and people of all races, I should say, decided that enough is enough. There is this problem of race that permeates all through society, not just football, and it was a sign for society to take it seriously. We had spoken about it before, and it had come in passing phases; sometimes it was mentioned, then it seemed to fade out of fashion. I think the difference now is that people have really laid a marker down and said, “We’re going to do something about this now.” So, in terms of the Premier League, we set out an action plan.

“No Room for Racism” (I’ve got the badge here), and within that you’ve got action plans around some of the things I mentioned, like coaching pathways, and one of them concerns the abuse that was outlined. You saw in England that the players decided to take the knee before the start of every game; that was an action the Premier League was very much behind, supporting the players. We didn’t want to dictate to the players, but once the players decided they wanted to do this, we were most supportive of them.

So we’ve got lots of actions around that, and actions around online abuse, which we can talk about in more detail. I think the important thing to say is that my role covers so much of that.

It covers some of those pathways; it covers embedding anti-racism within the football stakeholders, the leagues, the clubs; it covers combatting racism on the social media platforms, working with the social media companies and with government. It covers quite a lot; it’s wide-ranging, it’s extensive.

It’s meant to be. If it were an easy task, it would have been solved just like that. It’s meant to be a task with certain specific work attached to it, and it’s long-term, longitudinal work that can’t be done in a minute, in a few weeks, or a year. It is trying to reverse a lot of things that happened over many years, and that takes time.

[Per] To follow up on a specific detail: you said that the Premier League supported the players who wanted to take the knee before games. I know this has been controversial in some other sports leagues. Was it a difficult decision to get support for that particular action?

Going back to the restart… remember the League closed down due to the pandemic, and when it restarted, this was very much in the storm of the Black Lives Matter movement, and the players were very vocal and wanted their voice heard around this issue.
So, when they restarted, they had “Black Lives Matter” on the shirts, and those shirts were auctioned off.

I think one thing was just how united they were; all the players, black and white, wanted to show their support for this bigger objective, and that was the thing that drove it forward. All the captains representing their clubs wanted to do it. So that was the restart of the previous season, and then we got into summer and the Euros, and the England players wanted to carry on doing it.

That’s where some of the push-back, as we call it, occurred, and there was a lot of discussion about whether we needed to keep doing it. Some people resisted, and I think it was roughly 50/50 which way it was going to go. Then you may be aware of what happened when the team lost in the final. By the way, this is a young team, wonderfully diverse, led by manager Gareth Southgate, who has led the team and squad fantastically, and so the public had really got behind the team. They identified with this young, developing team; a lot of players, like Marcus Rashford and Raheem Sterling, chose to use their platforms positively.

So the public were really behind them, even though there was maybe this discussion about whether they were going to continue taking the knee. But what actually happened was that the players missed the penalties, three black players, and they got abused on social media, and from then on that awareness came into the mainstream. Everyone kind of thought: “Right, this is why they take the knee.” If you weren’t sure before, this is why; we understand now what they object to, what they’re using the platform for.

So I think that made it easier in some ways. At the start of this season, when we started discussing whether to keep taking the knee, there was very little argument against it. Out of that distress emerged something a lot more unified and a lot more accepted. Now you go to the stadium week in, week out, and players take the knee. It is applauded throughout the stadium; people understand it, they applaud it, and I’m really proud that the Premier League was supportive of that, not just the players.

We commissioned a short film about it as well, so the players didn’t feel it was just them doing it without our support. It was very much: if you want to do it, we will support you, we will do it this way if you’re happy. And I think it has been a really good thing.

[Per] Hate speech online, and specifically racial abuse, is a much bigger issue than football, and the Premier League might be one of the most famous parts of football, but it is also part of a big world. So, for all the success and all the influence that you might have, how much can you accomplish?

As a cog in this big machine, is it possible for you to have real influence on hate speech online?

Yeah, it’s a really good question. I think we have to accept that we are a cog, but what we also recognise is that football, as you could see in the summer when the England team were playing and doing really well, captures the imagination.

When the England team is doing well and the big clubs are doing well, the public are so engaged. Football has the power to connect like no other sport, so it’s about the Premier League being aware of that and tapping into it. We’re not saying this is just a football problem, it’s a society problem, but what can we do as a football industry to drive this forward and support it, knowing that our influence can be extensive?

So that’s what we try to do: recognise our influence and use it to effect change. As for social media, you’ll see the abuse we have in our society; high-profile people are abused, and not just footballers. But football is the number one game, the one that’s talked about, and it reached a crescendo in the summer with the players and the penalties. So we’re almost setting an example.

Can we showcase and be an example in football for how we address it. We don’t want run away from it,  how do we address it, how do we bring influence on the social media companies some of the big players that have millions of followers on social media, so that can be really influential. How can we use that influence of the players and ourselves as the industry and don’t forget the brand of the Premier League, it’s so extensive. You know we reached out all through the world with our broadcasting partners with the most watched league in the world so within that comes that influence comes that reach and I think that’s what we’re trying to use effectively so we know we’re not government.


We are not legislators, but can we influence legislators, can we influence governments? We do have a policy team that speaks directly with members of the government around these issues, and with the social media companies as well. Since October 2019 we have had a designated team who work with the social media companies proactively to take down abuse as it comes, and we also work reactively with the authorities to track down abusive content and abusers online. How much we do, and how much of that responsibility we take on, is not generally known outside the organisation.

Maybe that is at the expense of, or instead of, some of the social media companies, who we always feel could do more. But we certainly don’t shy away from it, and we have a team who do that.

[Per] That’s really interesting, that you have people actually working to fight racial abuse on social media. Can you tell us a little more about that? How do you do that work, and what kind of results have you seen so far?

Yeah, since 2019 they have monitored the social media accounts of all the players, their families and managers. They monitor all the accounts centrally and can track the abuse that comes in. Obviously it’s a vast amount of work and some abuse continues to get through, we’ve seen that. But we’ve been really successful in the liaising: we work very closely with Twitter, Facebook and Instagram, and we have been really successful at stopping some of it before it gets onto the platform, or liaising very closely with them and highlighting abuse when it comes.

And there have been some successes with the social media companies. Some of them have moved; some of them have filtering systems around some of the hate speech.

Instagram, for example, can filter some of it out. We do think there’s an issue around verification, which I think is their red line.


The red line for the social media companies is that they don’t want full ID verification, which is something we’ve pushed for. So there’s always going to be that gap, and we’ll wait and see whether the coming legislation fills it, if at all.

We have worked with them and they have moved significantly from where they once were. In terms of real-life examples, you’ll know of someone like Neal Maupay, the Brighton and Hove Albion striker. He worked closely with us and our team at the Premier League to track down an abuser all the way in Singapore, and that was really significant for us, because it wasn’t just abusers in this country.

We could go further afield and work with the Singaporean authorities, which we did, and we needed Neal to stay engaged to make sure there was no let-off for someone sending abuse. We saw that through to a conclusion with a court case: the abuser was named and shamed in Singapore, his name plastered everywhere, and we think that sends a strong message. We will go that extra mile, literally outside our borders, to track down abusers as and when it happens. We’ve had more recent success domestically with Romaine Sawyers at West Brom. West Brom are now in the Championship, but unfortunately this happened to Romaine in the Premier League when he was with us last season. A man who had sent abuse to him was tracked down, and he received a custodial sentence of eight weeks. We think that is significant as well.


The judge set a strong precedent: if you send hateful abuse you will not only suffer reputational damage (his name was plastered everywhere in this country), you will also receive a custodial sentence if the abuse reaches that high benchmark. So we’re pleased about that.

We’re pleased that it’s starting to show, that it’s starting to bite.

A lot of the work the team at the Premier League are doing is starting to bear fruit, because we want it to be a cultural thing, where people no longer even think they can do this.

Maybe there was a culture before where you could go to the football match, or type away at a computer, and that was where you let off steam: you could say whatever you liked because you were frustrated or angry, and in the stadium you could just shout abuse. Maybe some of that is down to the demographics of fans, and we want the fanbase to be inclusive: women, families, diversity. The more that kind of abuse isn’t tolerated, the easier it is for young families, children and people of colour to come to the stadiums and feel welcome. So that’s something… it’s a work in progress.

[Per] You mentioned ID verification as being on the wish list for more tools. If that were put in place, what more would you be able to accomplish that you cannot do at the moment?

Yeah, we’re starting to go into an area beyond my specialist knowledge, but what I will say is that we have pushed for it in the past because, at the moment, you can set up an account, post abuse online, and then close the account down, leaving no digital footprint. I’m sure you’re aware of this far better than me, but the idea is not to allow abusers to do that: some kind of verification would be in place that leaves a digital footprint, so you could track an abuser down. Now, there is a divide between what we’ve pushed for and what the social media companies regard as a little more sacrosanct, and there is a piece of legislation coming through Parliament at the moment which will show exactly where that line falls and who wins that particular argument.

I think that as long as there is space for people to create an account and then disappear, you’ll always have this problem with persistent abusers. What you want is to effect some cultural change, so people don’t feel they can or should do it, and then some real change, so that not only should they not do it, they can’t do it. That, for me, is the magic solution, and at the moment we’re working on both: the cultural side, so people don’t feel they should do it, and movement on whether they are able to do it.

[Per] Some players have been quite critical of the social media platforms, even boycotting social media in protest at the lack of action by the platform companies. What would you like the social media companies to do in terms of stepping up?

I mean beyond ID verification…

Yeah, it goes back to what we were discussing. Some of those boycotts were born out of frustration at the perceived lack of action from the social media companies. I’d say progress has been slow, but there has been some movement.

Well, we’re just trying to keep pushing further along that path. And in terms of that boycott, we were very much part of it. Eighteen months ago we had a 48-hour blackout of the social media companies, meant as a strong statement. That blackout weekend really got some traction, and all the football bodies came together around it: the Premier League and all the governing bodies, the FA, the PFA. And that does have an effect. People say it was only 48 hours, but in those 48 hours we saw Facebook and Instagram executives, who previously had perhaps not shown a public face from a media point of view, coming on the airwaves to explain what they were trying to do. It felt very much as if they were on the defensive, having to explain what they were doing.

What they were doing and why, and why abuse takes so long to be taken down even when it’s there for everyone to see. And I think that’s a good thing.


They have to be held to account. We all have accountability in our work: if I’m not doing my job, or you’re not doing yours, someone gets to ask why it isn’t being done. It’s exactly the same for Facebook.

If they’ve been asked to take down abuse within a specified amount of time, which they have, and they go beyond that time and it’s still there, then it’s not for me to explain on their behalf; they would be better served explaining themselves what happened. It’s too easy sometimes to be faceless. They are massive businesses, but it’s not my intention to badmouth social media; it’s the way we all interact now.

I’m at pains to say social media is actually a wonderful thing; it’s how we connect wonderfully throughout the world. And it’s in its infancy compared to newspapers in this country, which are 200 years old, or radio and television at 100 years old. Social media is 20 years old at most.

You know, it’s still very much in its infancy, and we’re all wrestling with how it should be regulated, a bit like television, newspapers and radio are regulated. So that’s what we’re doing.

We’re not trying to demonize social media, but to regulate it and understand the powerful reach it now has, so that it’s used effectively and with good intentions.

[Per] Besides what you are doing yourselves, and beyond what you would like the social media companies to do, do you have any other wish? Is there somebody else who should take action to help and support the fight against online hate speech?

There is a gathering movement in this country; we do feel that we have to offer something different.

Not just footballers: celebrities and members of parliament have suffered abuse too, so I think there is now a sense that we all have to do something. It’s not up to football alone. We can certainly play a part, and we should be proud to play a significant part, but it’s up to all progressive people who can see the dangers of letting things drift unchallenged. That affects not just me and you, but generations of younger kids and people coming up.

We’re always working on how best to use social media.

I think it’s important that all of us who have that power or platform challenge and cajole and make things better for those who come after. That’s legacy, and it’s for all of us to play our part.

Whether you have a big role or a small part, it all adds up to, I think, a gathering sense that we have to try to get this right, because the generations to come will certainly benefit from a better and more respectful landscape.

Thank you for closing on an optimistic note. I almost feel like one of the players you coached. That’s great. Thank you so much, Iffy Onuora, for coming on the show, the Netopia Video Spotlight interview, and best of luck with your work in your role. And thanks to everybody who watches our little program; we will be back with more interviews.

Transparency note: The Premier League is a Netopia sympathizer. All of Netopia’s editorial decisions are independent.

Have You Met a Whistleblower?

Sunday, November 7th, 2021

Today, November 8, Frances Haugen will give testimony in the European Parliament’s IMCO Committee. Ms Haugen is the latest Facebook whistleblower to call foul on the conduct of the platform.

Will this be the moment where Facebook steps up to their responsibilities as intermediary? Or will there be yet another round of symbolic action? The European policy-makers hold the keys to holding platforms to account.

This is not the first time Facebook has been in the spotlight; it’s become something of a habit.

In 2018, Cambridge Analytica was revealed to have extracted user data for political influence, using Facebook precisely as it was designed: to micro-target users.

Facebook then failed to intervene when its services were used to broadcast hate propaganda that contributed to the genocide of the Rohingya population in Myanmar.

Last year, former employee Sophie Zhang posted information showing that Facebook knows its services are used for abuse and propaganda, and that there are internal deliberations about whether to intervene in certain cases. Deliberations far removed from “it’s the algorithm, we don’t know what it does”. These are but a few examples of scandal, harm and negligence. It would be remiss not to mention the Capitol Hill storming, the murder of British MP David Amess, or the Christchurch massacre when discussing why there is so much to blow the whistle about.

“It may not be your fault.
But it’s your problem”
Steven Levy

In their 2021 book An Ugly Truth (Harper Collins), journalists Cecilia Kang and Sheera Frenkel cover the controversies around Facebook over the last half-decade. The back cover is pure genius: rather than the traditional blurbs bragging about how great the book is, it has a list of quotes from Facebook founder and CEO Mark Zuckerberg and chief operating officer Sheryl Sandberg. Like these two:

“We never meant to upset you” – Sheryl Sandberg, July 2014

“I ask forgiveness and I will work to do it better” – Mark Zuckerberg, September 2017

Besides such vague statements, Facebook’s actions have included hiring a few thousand more moderators and creating the “oversight board”, which is employed by Facebook and has the power to review interventions where content is removed or users restricted, but not cases of non-intervention.

Of course, this is not real self-regulation: proper self-regulation has transparency, independence and teeth. This is well understood in many industries: news, advertising, games etc. There is no reason it could not work in social media.

Facebook’s reluctance to take real action has little to do with some mysterious algorithm that works beyond the control of any human mind, but rather a question of ideology on Facebook’s part. Consider what Facebook Vice President Andrew “Boz” Bosworth said in an internal memo appropriately titled “The Ugly” in 2016:

So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

As the company develops the new, three-dimensional internet – the “metaverse” – these questions will only become more prominent.

Wired Magazine’s Steven Levy said it best: “It may not be your fault. But it’s your problem.”

Selbstzweck, Nein Danke – the recipe for human-centric technology

Tuesday, June 1st, 2021

Technology must be human-centric, says MEP Sabine Verheyen in this exclusive Netopia video spotlight. Technology should not be “Selbstzweck”, serving only itself. This principle can guide both research and legislation, according to the EPP députée.

In this interview, Madam Verheyen elaborates on how offline regulation can be brought online, the limitations of the policy-makers’ reach, and accountability in AI – what if the AI makes a new Rembrandt?

Episode 1 featured MEP Alex Agius Saliba; this is the second interview in our Netopia video spotlight series. Enjoy!


Netopia Spotlight interview with MEP Sabine Verheyen

Per Strömbäck
You’re a member of the European Parliament, with the European People’s Party group, and you’re also the chair of the culture committee. Thank you so much for coming to Netopia.

MEP Sabine Verheyen
Thank you for inviting me.

Per Strömbäck
It’s a pleasure. Now, my first question is: what is it that brings you to digital policy topics? What is your political drive in this area?

MEP Sabine Verheyen
When I entered the Parliament nearly 12 years ago, I was expected to deal with media questions, as the media landscape was turning more and more digital.

The way people consume media and use new technologies, the devices, and the way content is distributed have changed tremendously, because there are new ways of distribution and also new ways of creating content.

So, it was natural that I had to deal with digital questions if I wanted to deal with media, with creativity, with new ideas, with innovation. That was the reason I started to work on this. When I studied architecture, I just skipped the digital era: I worked with my hands and drew all my plans myself, and all these new CAD systems came up after I finished. So I’m not a digital native, but I have to deal with all these things, and I think that is true for everyone in our society older than forty or fifty. We did not grow up with this huge number of digital possibilities, so we have to learn. Perhaps that is also why I have understanding for both sides.

My children grew up with digital tools, but there is also a generation that has to adapt, and we have to keep both in mind. We have to see the chances but also the challenges of the digital side, and we have to face the problems that come with a digitised world.

Per Strömbäck
What are some of those challenges?

MEP Sabine Verheyen
We normally think that digitisation and the internet bring more diversity to people, but if you really take a look, in the end it’s not always that diverse.


Meanwhile, we are caught in bubbles, because the algorithms serve us the content that already interested us before. The wide range of diversity is not always presented unless you actively search for it.

And that is also something we have to deal with in the Digital Services Act and the Digital Markets Act, but also with other regulatory tools: keeping the digital landscape open and broad, and keeping the diversity we want to have. Especially in media policy, that is elementary for media freedom and diversity, and crucial for functioning democracies.

Per Strömbäck

So, what lessons can you bring from traditional media policy to this digital media landscape? Is there something useful that can be replicated, or is it completely different?

MEP Sabine Verheyen

We dealt with the Audiovisual Media Services Directive in the last legislative period. Together with Petra Kammerevert from the S&D, I had a co-rapporteurship on the AVMSD, and we had already included some parts.


When it comes to audiovisual services online, the aim was to balance out a level playing field between classical media and digital media, especially when it comes to transparency in advertisements: a clear separation between advertisement and content, which classical media already had.

And the question is always how to transpose that to the digital world, into digital platforms and tools, so that people really can distinguish between different kinds of content.

We also had media literacy tools in the past: how to deal with information, how to identify sources. That was easier in the past, because you did not have this wide range of sources. So we can think about what worked well in the past and can be transposed to the new digital times, but also about what has to be done differently.
What has to be done in another way than we know from classical media regulation, and what can be transposed? How can we use best practice?

Best practice, for example, is the work of the media authorities, the regulatory authorities, and ERGA, the European Regulators Group for Audiovisual Media Services.

I think we can learn from their experience, especially when it comes to these grey zones of what’s legal and what’s illegal, where you have zones in between: content that is formally legal but harmful, harmful for democracies, like fake news, foreign interference and propaganda. There, I think, we can also learn from the experience we have made with classical media players. It cannot be adopted one-to-one, but perhaps the fundamental ideas behind it can be transposed. That’s already very interesting.

Per Strömbäck
Now you talk mainly about the audience and how media policy can help the audience, what about the other end, what about the creators and their business partners?

MEP Sabine Verheyen
That is why the Digital Markets Act sets out to create a better level playing field with the platforms that control the market.


The market power of the big platforms, like Google or Facebook and others, is quite high.

And so, to fight for your own rights as a creator, a publisher or a distributor of content is quite difficult sometimes, because their market power puts you in a weaker position when it comes to negotiations and conditions.

That was the reason I thought it was very important that we made the Copyright Directive, which included responsibility for platforms: they should not control the content, but they have to take responsibility when they know about infringements, for example copyright infringements.

The second step that comes now is the Digital Services Act, which also plays a very important role in the distribution and the relation between content providers, online service providers and the platforms, and in placing the right level of responsibility on the big tech players.

Per Strömbäck
I understand the ambition, but sometimes we hear that there is a limit to the reach of European policymakers.
Do you think these policies can achieve everything you hope they will? What is the reach of European policy in digital matters?

MEP Sabine Verheyen

You could see it already with just the draft, just announcing the legislation: the draft alone led to a change in the way the platforms worked, because they see that they cannot go without responsibility in the future.

For example, look at what happened in Australia: Facebook and Google were forced to share the advertising revenue generated in connection with the presentation of publishers’ journalistic content.

That builds on what we did at the European level, so European policy has an impact, because all the other regions are acting on it. A short while ago I had a chat with a politician from the Canadian Parliament.

She was very interested in what we did in the AVMSD with the platforms, the video-sharing platforms, but also with video-on-demand. Video-on-demand was something we had previously regulated; it was especially the video-sharing part she was interested in, how it works and what we did.

I think we can make a change when it comes to responsibility, and also secure democratic structures on the platforms. The internet is not a law-free environment; it should be carried and driven by our democratic understandings, by our societal agreements and by our values. That means freedom of expression on the one hand, but also that freedom of expression is limited when it becomes harmful to others.

And I think levelling this out in the right way can have an impact on how the platforms work and how they take their responsibilities in the future.

Per Strömbäck
Speaking of the DSM, the Digital Single Market Directive: it has been many years in the making and there was a Trilogue, but there seems to be a delay in the implementation in the Member States. And now the Commission says it will issue guidelines. What do you think of the Commission issuing guidelines after the Trilogue? Is that a way for the Commission to change the outcome of the Trilogue?

MEP Sabine Verheyen
Normally not.
These guidelines should reflect what was discussed during the Trilogues. That is clear, as it was for the AVMSD and also for the Copyright Directive.

We could not finalize every detail in the legislation. You want a minimum level of harmonisation at the European level, because you cannot address digital issues just at national level: digital services are very often cross-border offers. So it is good not to have a split, divergent structure for digital players in the market, but a common minimum level of regulation. That is what we wanted to do with the Copyright Directive and the AVMS Directive: to give direction for how it should work. There are still differences between the Member States, but the general line should be similar.

Per Strömbäck
What do you see as the role of the policy maker in Artificial Intelligence, and in particular for the culture and creative sector?

MEP Sabine Verheyen
First, what is important is that artificial intelligence technologies are human-centric; the technologies should not replace human beings entirely.

The final decisions must normally be taken by human beings. Also in the creative sector, I think it is important that the framework for how artificial intelligence tools work should be set by human beings: by the programmers, by those who use and build the applications.

I see good chances in artificial intelligence for creation: to make things easier, or to handle very complicated things that need huge amounts of data. But questions about the creative process, and about the data behind the picture, have to be answered. For example, when I make a new picture in a Rembrandt style, is it Rembrandt, in the end, who provided the basis for the new painting with all his pictures, all his work? Without the data of the Rembrandt paintings, the new Rembrandt would never exist in that way. So the question is: who is the rights owner, who is the developer of this? Is it the one with the idea to make a new Rembrandt? For Rembrandt himself it is not important, because he has been dead for longer than 70 years, but with contemporary artists it becomes a question that has to be cleared up.

There are legal questions that have to be discussed, but also ethical questions, for example when it is implemented in education or used with vulnerable people.

What can you do with the data? Artificial intelligence is always based on mass data, and that is the reason we have to be careful and balance things out: promote the chances that come with these new technologies, but on the other hand also see the risks. The same goes for the distribution of cultural content and works. When it is micro-targeted to a special group, can I guarantee access? What other criteria are in the algorithms the artificial intelligence tools are working with? And what is going on in other areas of the creative sector, like gaming?

It’s quite interesting to see how software and artificial intelligence tools can interact. You can have computers or robots working with artificial intelligence. There are so many opportunities, also for art and the creative sector. But we always have to see where the risks are, and whether we can keep the human being at the centre of the work, at the centre of creativity, so that we are not overruled by machines one day.

Per Strömbäck
Yes, I think that fear has been with us since the days of Frankenstein, going back all the way to the Golem: we always fear that our creations will bring us down at some point. But that’s very interesting, and we will get the chance to talk more about artificial intelligence.

It strikes me that when we talk about digital development and digital technology, including artificial intelligence, the point is often made that the development is fast. Can the policy maker keep up, or is technology always one step ahead?

MEP Sabine Verheyen
That is the point we are always discussing: are we fast enough with our legislative processes and political discussions? Are we always chasing behind, or are we at the forefront?

We sometimes see that developments go faster than we can react, and that is the reason we have to think differently. We should not just react; we should set the guidelines for where to go.

Technology is a tool. Technology is not a value per se, and we have to set a frame to steer its development in the right direction.

We don’t want to limit creativity, and we don’t want to limit innovation.

But we have to set fundamental rules that are technology-neutral, and that is something we have been trying to implement in legislation in recent years: moving away from legislation targeted at one single technology, and towards technology-neutral principles in a Regulation or a Directive.

Per Strömbäck
We’re almost out of time, but I cannot resist asking a question on this topic of the policy maker keeping up with technology. Isn’t it also the case that public funding pays for a lot of the research that goes into new technology, for example the big European research programs? Do you see a connection there, since that part of policy should be ahead of the technology?

MEP Sabine Verheyen
That is what we discuss: politics cannot decide what’s good and what’s not good before we know where we are going. What we want is to enable new developments and research towards new technologies and new ideas, but we also have to keep up with the development when it comes to fundamental structures and values. Research, too, does not exist outside a value framework.

You cannot do everything that is theoretically possible just because it is possible; that is what I mean when I say it must be human-centric.
We want technology to be developed to serve human beings, to serve our nature, to serve our environment.

Not just to serve itself. For me, technology has to serve the future development of human beings and our planet, and if you keep that in mind, you can support new developments, including fundamental research on principles. That is important to lay a basis for technological innovation and new possibilities, and sometimes you have to let something develop, then take a look and ask: is it a good development or not?

We want technology to be developed to serve human beings, to serve our nature, to serve our environment.

But you also need the power to say when something is going in the wrong direction, as we see now with some platforms acting as spreaders of information that has a negative impact on democratic structures.

Then we have to set guidelines; we have to draw the line where something is not acceptable for us as a society. Because if a society loses its orientation and its fundamental and basic values, it becomes difficult to accept something just because it is technologically possible, and I think that is always the balance.

In politics we also have to strike a balance between what is in the interest of technological development and innovation, what is technologically possible, and supporting new developments on the one hand, and on the other hand giving guidance on the role technology will play in our society.

Per Strömbäck
Thank you very much for coming on Netopia Spotlight, and we wish you the best of luck with this work.

MEP Sabine Verheyen
Thank you.

Netopia Will Be Televised

Friday, March 26th, 2021

Netopia goes television – that’s right, in the spirit of democratized media, Netopia goes to video. In a series of interviews, your humble editor will meet key people in digital policy and discuss the hottest topics. First out is MEP Alex Agius Saliba (S&D) who has made a name for himself in European Parliament as rapporteur for the Digital Services Act. I asked him about freedom of speech, innovation, competition, the reach of EU policy on a global network and many other things. Some of the answers may surprise you!

Check out the video here. For your reading pleasure and convenience, we have provided a transcript of the interview as well. Enjoy!

EU must protect its fundamental values online

Thursday, March 25th, 2021

In our first spotlight video interview, Netopia editor Per Strömbäck meets Alex Agius Saliba, member of European parliament representing Malta in the S&D-group. MEP Saliba is one of the most prominent persons in European digital policy, not least as rapporteur on the Digital Services Act for the IMCO committee in 2020.

Netopia had the chance to ask MEP Saliba’s views on the reach of European policy on the global internet, how to promote innovation, freedom of speech online and other hot topics.

This is the first Netopia spotlight interview, more to come – watch this space!

[transcript]
What brings you to digital policy?

I think that digital platforms, and especially social media platforms, have become so important and so prevalent in our lives that they have become the new public utilities, and if you look at the legislation on the digital framework and strengthening our digital single market, we have fallen behind for a number of years.

So I have always had a lot of interest in information technology law, especially issues dealing with eCommerce and also competition law and how competition interacts with the digital industries.

Therefore, this was my natural choice to focus on this in the European Parliament, especially in the internal market committee on digital issues (IMCO).

It’s a field whereby a lot of work is needed at the EU level and I believe that this is the right time to do so.

What is the domain and reach of European policy? We’re talking about the internet: it’s a global network, a global market, often dominated by actors far outside of Europe. What is the reach of European policy?

It has a lot of reach. First of all, I think what we are doing today with the big discussions on the DSA and the DMA will have a reach not only on the European continent, but also an impact on other continents.

We have reach as European legislators for one reason: ultimately, when you are talking about digital, you are not talking about a vacuum.

For one thing, you’re talking about fundamental rights: the fundamental right to privacy, fundamental rights which are so important.

Consumer rights and other user rights: digital rights per se are interlinked with core European values, and we must not let big tech companies alone regulate the digital infrastructure and the digital regulatory framework, basically setting the rules by which they have to work and function. Ultimately we as European regulators have to be ambitious and courageous enough to take bold steps, so that we take back control of the digital ecosystem. It’s not about punishing.

It’s not about small versus big. If you want to target our users, our consumers, our market directly, whether you are a big tech company or a small or medium-sized one, you have to play by our rules.

So this is a fundamental point, and I believe that is why we should act, and act fast. We have already lost a lot of time.

In this sphere, it’s a total shame for me as a European regulator that we are in a situation where it takes creative legislators and a creative European court to interpret 20-year-old legislation.

The e-Commerce Directive was enacted way before a number of these big tech giants and platforms existed, platforms which have become unavoidable trading partners for European industries and unavoidable for European citizens. This is the reality.

And this ecosystem is being governed by a 20-year-old piece of legislation which has some elements that are still relevant, but there are a lot of legal loopholes we have to fill, and filling them is so important to take back control of the digital ecosystem.

What are some of the policies that you would like to put in place to deal with the problems you point to?

First of all, some of the most pertinent issues I have focused on are very consumer-oriented, and I always take a consumer-oriented approach because I believe in consumer rights. And I don’t believe that there should be first- and second-class consumer rights.

First-class rights for the offline world and second-class consumer rights for the online one.

That is not fair. Today our consumers have become digital consumers, and it’s really important for us to act accordingly and not treat digital consumers as second-class consumers. And we also want to help our industry, our innovators.

We have, for example, a very flourishing app developer scene in the European Union. The biggest issue these innovators and start-ups face is scaling up, and if we want to become the leaders, we have to take back control by moving forward a number of fundamental principles, such as those I totally endorse under the DMA: basically defining who the gatekeepers are, who has a systemic role in the ecosystem, and imposing a list of obligations to achieve a better equilibrium in the ecosystem. We also take back control by moving forward the “Know Your Business Customer” provisions. I have nothing against internet anonymity; internet anonymity is of fundamental importance and a fundamental right for users. But anonymity in business transactions is something that has to be solved, and it can be solved by the verification element we want to insert under the traditional Article 5 of the eCommerce Directive, strengthening the information that is ultimately provided. And also by moving forward other principles, such as those which are so important when it comes to the business model that these big tech companies and digital platforms use: the business model of advertising.

There are a lot of issues when it comes to targeted advertising. We as an assembly group have moved a very interesting campaign, the “Enzo Campaign”, aimed at raising more awareness and ultimately doing away with targeted advertising; this is also a priority for my political family. There are other issues too, for example giving an extra-territorial element to the eCommerce/DSA initiative, ultimately enforcing the principle, which the Commission also took from our report, that what is illegal offline should also be considered illegal online. And ultimately those sellers who are targeting our users, our consumers, our market directly, through online marketplaces but not only, have to play by our own rules.

And we do away with the competitive disadvantage between third-country sellers and European producers and distributors, who have to abide by stringent EU rules, especially when it comes to health and safety requirements, while third-country sellers escape all these requirements and gain a competitive edge over our industries and our SMEs in the EU, who have to abide by all the rules to enter the Single Market. Another important issue for me is the “Notice and Action” system, which again is of fundamental importance. Maybe users were not so aware of the importance of having a harmonised “Notice and Action” system, but they became aware of this big legal loophole when they saw the recent Capitol Hill incidents, when they saw Parler being taken off by a number of platforms, and when they were basically unable to challenge decisions taken unilaterally by big tech companies. That is why I believe it is the right time to do such reforms: there is the political push and political visibility, and ultimately citizens and users want to know what is happening, want more transparency, and want these companies to function within an ecosystem and a regulatory framework and not play only by their own rules. These are the main issues I want to see tackled through the DSA and DMA initiatives.

It’s the Digital Services Act (DSA) and the Digital Markets Act (DMA), and as we speak they have been proposed by the European Commission and are being negotiated.
First, do you see these two as parts of the same thing, or can they be handled separately? Can you have one without the other?

No. I think both of them are important single market instruments and one is dependent on the other.

It’s impossible to move forward what we are proposing under the DSA, creating this equilibrium and sorting out the big imbalance we have, if we don’t have the DMA instrument, whereby we fill in the blanks that traditional, ex-post competition law cannot fill by itself. So one works in tandem, hand in hand, with the other, and we cannot discuss the DSA without in parallel also having a strong DMA. I think they are both very ambitious texts. If you ask me whether they are ambitious enough, I will tell you I wish the Commission had been more ambitious in a number of areas, especially when it comes to online marketplaces, recommender systems and transparency in those systems, and not focusing only on big tech when it comes to advertising and recommender systems.

I think we have to take a more holistic approach, but if you look at the proposals, the point of departure is that we cannot view them as a silver bullet to solve all the issues we have in the ecosystem.

If we take that approach and try to do a lot of patchwork, even internally, although we are trying to do away with patchwork initiatives in different member states; if we try to overload these two pieces of legislation with a lot of initiatives to solve every issue, we will definitely fail. So I think it’s a good start in trying to take back control. Ultimately, they are good proposals and something we can work on over the coming months, and they will definitely be tough negotiations between the Parliament, Council and Commission.

Maybe if we look at one particular piece, the so-called Good Samaritan clause, which provides immunity to intermediaries when they take action. Some say this will put the terms and conditions and community standards of the services and platforms above national law; what do you say to this argument?

I’m going to be totally blunt about this; I was always blunt when it comes to Good Samaritan issues. I am not in favour of replicating the US Good Samaritan system, even with a soft-touch approach implemented at European level. This was very hotly debated under the compromise proposal which I was leading in the IMCO committee, and a considerable number of MEPs want to go in that direction, especially in some political groups such as the EPP and the Renew group. But at the same time, we as an assembly group, and myself as someone who worked directly on the DSA, will continue to work on a number of amendments that I wish to move forward in the months and weeks ahead in the negotiations we will have in the internal market committee. I am not one of those in favour of copying a failed system, a system which is already heavily criticised in the US, and replicating it under EU law.

It’s not my first pick, nor my first preference, to have a Good Samaritan system that is softer than the US one, or that hides under any other name such as due diligence. I believe these issues cannot be solved simply by self-regulation.
They have to be sorted by a clear set of rules.

The other half of the immunity for intermediaries is the Safe Harbour provisions, which have been part of the eCommerce Directive in Europe for many years. How do you see these rules developing; do you think there should be some kind of vetting process to achieve immunity?

I think the Commission in its proposal was very clear when it comes to Articles 12 to 15 of the eCommerce Directive, on touching up the Safe Harbour provisions.

I was very sceptical even about this exercise, and I have asked different members of the Commission many times, including the Executive Vice President, because when you have a replication exercise, a copy-pasting exercise, you always open up Pandora’s box for a number of amendments and tweaks to the original principles, principles which are of fundamental importance to the flourishing and functioning of our Digital Single Market and of digital services online. So we have to be very, very cautious if we touch those fundamental principles; changes there can destroy the whole ecosystem. In that respect, I think the eCommerce Directive was very future-proof.

So I firmly believe that we can still work within the Safe Harbour provisions we have, but ultimately there needs to be more clarity, and the courts have provided a lot of it: when it comes to “active” or “passive” status, and to a number of other principles that define when and how exactly an action will make you lose your liability shield. As legislators we have to translate that into clear terms, so that we do away with the uncertainty felt by a number of players who tell you that, although they are comfortable with the liability shields, it is not always very clear. The guidance is there from the European Court of Justice, and we as regulators have to frame our regulatory provisions around when one is “active” and when one is not. But ultimately I don’t believe that we can, or should, alter the liability shield provision we have under the eCommerce Directive; I think this is also the message of the Commission, and I agree with it.

Hopefully, in the replicating and copy-and-paste exercise from the eCommerce Directive to the new legislation, there won’t be attempts to alter these principles.

Do you see that there is a threat to freedom of speech with increased regulation?

Do you think this is the domain of the State, or should the governments of the world, including the European Union, stay away from the internet for freedom-of-speech reasons?

I think the biggest threat would be if we leave everything as is. If we leave everything as is, the threat is prevalent as we speak, without clear EU regulation, and I think the biggest threat would be to leave private companies to regulate speech by themselves and decide what fits and what doesn’t. I don’t agree with 99% of what Trump says, but ultimately I don’t think that big tech companies should be the ones regulating online speech.

So we need a system which brings more clarity, which basically sets out a framework within which these big tech giants act and take decisions on fundamental rights and fundamental freedoms.

That is why I opened my remarks by saying that we have to take back control of the ecosystem.

The EU should and must protect its fundamental values, including fundamental rights which are so important, by having a system that sets a number of regulatory standards for big tech companies to work within, rather than letting them work by their own rules.

I think that is the biggest threat to our democracies.

What about innovation?
Do you see a risk that regulation might stifle innovation and put European companies at a disadvantage in global competition, compared to other territories which might have less regulation?

I’m going to focus on the DMA here. The biggest issue we have, when it comes to the stability of our market, is gatekeepers.

I am always a big supporter of innovators and start-ups in this field, who could do so much more if we gave them a fair, level playing field to scale up in the digital ecosystem. I think that regulation in this regard will not stifle innovation; ultimately it will help smaller players to innovate more and be more competitive, because with the system we have today, big tech companies control the ecosystem.

They are pushing smaller players aside, and we have seen numerous cases where traditional competition law by itself couldn’t handle complex digital issues. The new system will help bring more innovation, because with more competition comes more innovation and more consumer choice. Ultimately, more control over big tech companies will not stifle innovation but make it flourish, because there will be more opportunities for small and medium-sized companies to compete in the ecosystem.

[transcript ends]

Digital Myth: Copyright Stands in the Way of the Digital Revolution

Wednesday, July 8th, 2020

The Internet was built on copyright content; it’s the foundation of the digital economy.

There are enough myths about copyright and the internet to fill a book on its own, but let me bring up a few of them. The basic myth is that copyright does in some way or other limit the progress of technology. I’ve seen and heard many variations of this idea, but let me tell you the one that took the prize. The critics of copyright can be accused of many sins, but never of a lack of imagination. At a conference in Geneva, organised by none other than the World Intellectual Property Organisation – an arm of the United Nations – one panelist suggested that copyright might stand in the way of the colonization of Mars. Not even meant as a joke! He explained that if we make it to Mars and somebody were to spray synthetic DNA liquid on the ground to claim a piece of the planet as their own, that could create problems for further exploration. How this has anything to do with copyright escapes me. If you, dear reader, find it hard to believe, rest assured that the video from the panel can be found on the WIPO website.

Technology can of course be developed in different ways and there is nothing to say that internet technology could not support copyright just as well

No, copyright does not stand in the way of progress; instead it makes possible much of the content that users desire, creating a demand for things like broadband subscriptions, hardware and streaming services. To the extent that content stands in the way of ‘innovation’, it is because the innovators would prefer not to pay for it, but perhaps it should be part of the package with incubators providing office space and hardware and staff working for stock options rather than salaries. Except in that case, the content owners should get a stake in the business too! Technology can of course be developed in different ways and there is nothing to say that internet technology could not support copyright just as well, only there’s no incentive on the part of those who run the internet platforms and broadband cables. Why should they? They get all the benefits of consumer demand without having to share any revenue. Some say that music or film or other content is now a commodity that has no real value. But the truth is that the value is still there in the form of consumer demand, only the revenue ends up in the pockets of those who did not pay for the content to begin with. This creates an odd catch-22, where the revenue from the old analogue distribution methods (say, cinema or print books) subsidize the digital services that fail to create enough revenue to make the pivot to true digital possible for most types of content. That’s right, it is the failure of the digital markets that stands in the way of the digital leap, not copyright.

Another claim is that the creators don’t really benefit from copyright, only the intermediaries. That is at best a smokescreen; creators are better off with as many options as possible and that of course includes copyright. If a creator does not want to use the copyright system, they are perfectly entitled to give away their work for free or to monetize it, whatever they like. Perhaps in the old days when analogue distribution was in the hands of mighty gatekeepers there were more grounds for such a view, but today anyone can access a potentially global audience for a dime (getting their attention is a different job, though). In the digital age, the film studios, book publishers and music labels are more like investors and marketing partners to the creators. The gatekeepers are still there though, but more often in the shape of internet platforms, app stores and social media algorithms. Of course an author can still suffer from an unfair publishing contract, but in most cases the real tug of war is between those who invest in content, sharing risk, and those who distribute it and, at best, share some of their actual revenue with the creators and investors. In a way, it’s the same old conflict between the creators and the distributors as in the analogue age, but with a different cast in the role of the distributors this time around. In newspeak: disintermediation is really re-intermediation.
When Apple launched the App Store, game developers, fed up with game publishers as they were, rejoiced; a friend of mine called the iPhone ‘the Jesus phone’, only to find a few years later that the disintermediation of the publishers brought a new and stronger gatekeeper into the App Store: as more developers jumped on the opportunity, the platform became crowded with almost 100,000 new games per year, making attention, not content, the new scarcity so the owner of the platform could pick who got heard in the noise, like a bouncer at a hot nightclub spotting the cool kids in the crowd. Re-intermediation was a fact.

Another sub-myth is that the entertainment industry is ‘conservative’ and does not really want to change. That may have been true in the 1990s but today it’s underestimating the basic powers of the market economy. If the entertainment industry could make more money with different business models, it would change. It’s not a matter of emotion. This accusation of conservatism is often used as a segue to proposals on copyright reform, which is code for having to pay less for content. But it may be rather the infrastructure of copyright which needs an update: license databases, micro-licensing, automated licensing, new contract models and other innovations can facilitate licensing, generate revenue and make content more accessible to various services. Except if free content is what you really want, that doesn’t help.

Sometimes rich superstars are raised as a case in point against copyright, but it’s difficult to see how that is a problem. It’s not like anyone else makes less because most successful artists make a lot, and certainly those who make the least would not make more if the right to the financial yield of their creative output was taken away or somehow diminished. In fact, in some cases, successful artists subsidize the work of less successful ones. Take Bruce Springsteen, who has released his albums on Columbia Records since the 1970s. He could easily change label or set up his own, so we can safely assume he’s happy with Columbia. I haven’t seen Springsteen’s royalty statements, but it’s easy to guess that he gets a better cut than the average artist. But even if he gets 50%, the other half of the money stays with Columbia, which will use it to cover various costs, staff, overheads, interest, dividends to shareholders and so on, but not an insignificant part of that money also pays for the studio time and marketing of new artists. Labels are often painted as greedy capitalists and successful artists as a problematic effect of copyright. But you can also think of it as a form of Robin Hood principle: take from the rich, give to the poor.

A long time ago, I was a partner at a game developer studio. We had a contract with a publisher to make a game (Rock Manager, PAN Vision, 2001). It took lots of long hours over three years and much frustration over how the publisher would interfere with our design decisions. We wanted to include drug abuse, but our publisher was afraid to alienate the family audience so our pixel rock stars were limited to alcohol as they were burning out not fading away. In the darkest moments, we bought lottery tickets hoping to win big so we could pay back the advance and walk away. In the end, the game was released in 14 languages. However frustrating the experience of having to deal with the games publisher might have been, the magic word here is advance. They paid us enough money up front to pay the bills for three years. We did not receive the money as the publisher made it, but years before! If the game didn’t sell the publisher would take the hit, not us. So we creators may have invested our best ideas and our blood, sweat and tears, but copyright made the financial investment possible. Keep that in mind when someone ventures, for example, that digital intermediaries sometimes share the ad revenue. Sure, but do they take financial risk? Do they invest?

Some will say the copyright period is too long, but the longer the term and the better the protection (and larger the territory in most cases), the bigger the value of that particular property. That means bigger investment is possible and more money goes to the creators. The biggest problem in the world is not really that those who create art make too much money. In fact, the opposite is true: anyone bold or ill-advised enough to try and make a living from their art will face great challenges. Most will drop out. Others sacrifice their financial security, sometimes family or friends, even their health to create music, literature, television, games or something else. Only a few make it big. We should be grateful that enough people think it’s worth the price or the world would be poorer for it. We can’t all work in insurance or shipping or government, some have to give us art. We should make their lives easier and copyright is part of the answer.

Some will raise developing countries as a case against copyright (or intellectual property on the whole). Because their economies are smaller, copyright is sometimes said to stand in the way of development, even portrayed as a tool for neocolonialism. But copyright can be licensed by territory and the same content can have very different price points in different places depending on the local demand and purchase power. With intellectual property, developing economies can also grow and attract investment that would have been unavailable otherwise.

Finally, let me say something about pirates. By no means is piracy the only threat to creative content online, but if it were ever a grass-roots movement, it is now clearly a field for organised crime. When police raided Sweden’s no. 1 illegal streaming service Swefilmer, it uncovered close to €1.5 million generated from ad revenue and donations. In a way, this has always been for profit. In 2009, The Pirate Bay was sold for more than €5 million, except the deal fell through when the buyer couldn’t raise the money. Today, VPN subscriptions are a popular way to monetize piracy. Sure, there are legitimate uses for VPN, but if your service is called Ipredator you can be pretty sure subscribers pay so they don’t have to pay for content. There is no reason to see pirates as some kind of internet rebels; it’s a for-profit, illegal business that pays better than a lot of other crime, poses only a small risk of getting caught and doesn’t attract long prison sentences either.

Pirate apologists love to say that pirate distribution can be a great way to attract attention to your work, and that is certainly true in some cases, but that decision is for the creator to make, not the consumer. If it weren’t for copyright, that movie, song, game or book you want to download wouldn’t have existed in the first place.

It’s probably true that it’s impossible to stop piracy altogether. But that’s no reason not to try and reduce it.

Copyright does not stand in the way of any technology, but technology could do a better job supporting copyright.

Digital Myth: It’s About the User Experience

Wednesday, July 8th, 2020

It’s not all about the user experience, not even for those who say it is.

You know there is something missing when you read the line ‘We collect personal data to give you the best possible user experience’. A more truthful account would be something like ‘We collect personal data generated not only from your use of our service but pretty much everything we can harvest from your device, to some extent to improve your experience but really mainly because the more data we have about you the more we can monetize, primarily by selling adverts that tend to chase you from one website to the next for days’. It’s like in hotel rooms – who actually believes the management cares about the environment and doesn’t just want to save on laundry?

But what if it were all about the user experience? What if the user experience is the only thing that matters? Would you prefer a single company controlling all the information in the world, giving you the perfect, just-in-time, personalised user experience every time? Or would you rather take less-perfected services from various companies, none of which have all your information? Let’s just say there are other values in life and online than the user experience.

A close relative of the user experience myth is the algorithm myth, as in ‘we don’t have any responsibility for the result; it’s all in the algorithm’. Yes, except you wrote it, you fed it with data and trained it, you tweaked it and keep updating it to deliver … ahem … ‘a better user experience’. It’s like in the comedy show Little Britain where a hospital receptionist takes the most obnoxious stances possible with patients – like signing up a five-year-old for double hip replacement surgery – because the ‘computer says no’. If anyone blames the algorithm, they’re playing dumb in the hope that you won’t call their bluff. Don’t buy that!

A variation of this myth is ‘The Almighty Algorithm’, as in ‘we can’t be responsible for the output of the algorithm’. Except you can. While it may be complicated, an algorithm is a set of instructions for how a computer should handle particular situations. My kids have a Lego robot called Bullen. It has a simple graphical programming interface. It’s easy to tell Bullen to, for example, move forward 30 centimetres, stop, then turn 180 degrees when I press the Start button. Every time my eight-year-old presses the Start button, Bullen carries out these exact instructions. It’s predictable and we know what’s going to happen, because we told Bullen what to do – or wrote the algorithm. Algorithms for news ranking, search, dating or financial services are obviously much more complicated than Bullen’s, but basically the same. The owners of those algorithms constantly tweak and alter them for various reasons – improved profits, better function, better security, sometimes even a better ‘user experience’. If you don’t like the output of the algorithm, you change it and try again. Then repeat, until it produces the results you want. When someone says the algorithm is too complicated, they may want you to think something like ‘the Lord moves in mysterious ways’, but really they’re just saying they don’t want to, or can’t be bothered to, do what you ask.
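The point that an algorithm is just a list of instructions, written and changeable by its owner, can be sketched in a few lines of code. This is a hypothetical illustration only – the real Lego interface is graphical, and the function and instruction names here are invented for the sake of the example:

```python
def run_program(instructions):
    """Execute each instruction in order and return a log of what the robot does."""
    log = []
    for action, value in instructions:
        if action == "forward":
            log.append(f"move forward {value} cm")
        elif action == "turn":
            log.append(f"turn {value} degrees")
        elif action == "stop":
            log.append("stop")
    return log

# The 'algorithm': the same list of instructions every time Start is pressed.
bullen_program = [("forward", 30), ("stop", None), ("turn", 180)]

print(run_program(bullen_program))
# Same input, same output: predictable, because we wrote the instructions.
```

Changing the list changes the behaviour, which is exactly what platform owners do when they ‘tweak the algorithm’ – the output is never beyond their control.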

One great example of how algorithms can be biased was provided by journalist and author Andreas Ekström in a TED talk called ‘The Myth of Unbiased Search Results’. He takes two examples of internet hacktivism in Google image searches: first, one in which racists tagged photos of monkeys with Michelle Obama in order to make the search engine return monkey pictures along with real images of the first lady. To its credit, Google intervened and tweaked the algorithm and image tags so Michelle Obama picture searches would be accurate. But similarly, in the second example, hacktivists tagged images of dog droppings with the name of Norwegian mass murderer Anders Behring Breivik (who, in 2011, killed 77 people, many of them teenagers, in a terrorist attack on a government building and a left-wing youth summer camp). In this case, Google did not intervene. It is easy to agree with Google’s judgement, but don’t tell us the search results are unbiased or that you don’t know what the algorithm does.

Good Samaritan Paradox Paradox

Friday, June 12th, 2020

The classic example of the Good Samaritan paradox: if you intervene at a traffic accident by moving a victim away from traffic, in order to keep them from being run over by other cars, you should not be prosecuted for damage caused by the act of moving them. It makes sense, right? The law should not stop people from doing the right thing. Now this principle has entered digital policy. A recent example is the European Parliament Internal Market Committee report The functioning of the Internal Market for Digital Services: responsibilities and duties of care of providers of Digital Services:

6.2. Good Samaritan Paradox

One point of criticism to exclude active role hosting providers from the liability privilege of Article 14 E-Commerce Directive is the so-called “Good Samaritan Paradox”. The “Good Samaritan Paradox” is meant to describe the following: Article 14 E-Commerce Directive with its model provider being neutral and passive may disincentivise the hosting provider from taking precautions against infringements due to its fear of losing safe harbor protection.

Does this sound familiar? If yes, it is because we’ve heard it before. Big Tech loves to say it has no legal mandate to intervene against illegal content. In the next breath, they offer the solution: if only the EU had rules similar to the US, with a good Samaritan clause as in Section 230 of the Communications Decency Act, nothing could stop them from keeping their services clean.

CDA230 works both as “a sword and a shield”, explains Canadian foreign trade expert Hugh Stephens. The “shield” is the intermediary liability privilege often called “safe harbour”: service providers are not liable for what users do (which has created an ocean of nosebleeds, but let’s save that for now). The “sword” is the good Samaritan clause: service providers are protected from consequences if they take action against user content. Such as fact-check labels on tweets. Or takedowns of Nazi videos. You get the picture.

According to Big Tech, EU law only has the shield, not the sword. If only they could also have the sword…

Except, even in the US, where that sword is available, there’s not a lot of sword-wielding going on. Twitter’s fact-check labels are the exception (and very likely protected as freedom of speech anyway, never mind the good Samaritan clause). Rather, CDA230 is used as an excuse not to take action, for example against sex traffickers (sorry not sorry for using the same example as in the last post). Or as Hugh Stephens puts it:

Unfortunately it is the shield aspect of the legislation that has been most often invoked by internet platforms, allowing them to ignore all sorts of abusive material on their sites on the basis that they are merely passive bulletin boards, and not responsible for content posted by others. Thus hate speech, content promoting terrorism and violence, revenge porn, sex trafficking, and so on has been allowed to proliferate on the internet with no legal recourse against the platforms providing access to the material. In some cases, platforms have had no incentive to remove access to objectionable material because they have been able to monetize it by attracting consumer eyeballs and thus advertisers. 

That’s the Good Samaritan Paradox Paradox – the US example shows that even if EU law would embrace the Good Samaritan, the internet would be no better off. It’s not about the law, it’s about the platforms’ lack of will.

Back to the IMCO report: it turns out the platforms already have the mandate to take action against illegal content without risk of liability anyway:

/../ it is not the “active role” to identify infringements which leads to the hosting provider losing the liability privilege of Article 14 E-Commerce Directive. Rather, it is the active role to promote, present or organise the content. With such an understanding of “active role” no “Good Samaritan Paradox” will emerge from the Article

Good Samaritan Paradox Paradox Paradox, anyone?

EDRi’s Paper and the Two Faces of the Internet Debate

Monday, April 27th, 2020

In the conversation about the internet, two completely different understandings of technology and society clash.

On the one hand there is the now nostalgic idea that the internet is a decentralised technology that, if left alone, will deliver great things: freedom to the oppressed, economic growth, jobs and a voice for each of us. Democratisation is a key word. The threat is all the bad people who want to control the internet for their own purposes: dictators, lawyers, platforms, the entertainment industry and Luddites, to name a few. This type of thinking is sometimes labelled “internet exceptionalism”.

On the other hand, we have the view that the internet is part of society and works in concert with the rest of the world, adding to it and taking away from it, amplifying some things, playing down others, disrupting some economies, reinforcing existing hierarchies in others. In this thinking, democracy comes from institutions rather than the absence of them. We can call this idea “internet secularism”.

“We are just a technology company”

Now, the confusion is that these two tend to get mixed up. When Silicon Valley companies are confronted with criticism for anything from abuse of a dominant position to providing the tools for genocide propaganda, they will often say something like “first of all, we are a technology company”. As in “we only make the tool, and that’s great, and we can’t really control what it’s used for”. That fits the internet exceptionalism concept. Except Big Tech is not making hammers or anything like that; instead they provide sophisticated services with complex algorithms controlling user behavior and maximizing profits. Which smells much more like internet secularism (stock price first, freedom to the oppressed later).

This can also go the other way, for example when governments apply internet secularism to established companies, demanding they comply with consumer law or age ratings or something else, thereby conveniently ignoring the creativity of less serious actors who simply move illegal operations to some less strict jurisdiction.

EDRi’s Position Paper on Digital Services Act

One place where these two views are butting heads at the moment is in EU legislation (to be fair, the head-butting has been going on for a decade or so with no sign of slowing down). The most recent addition is the internet activist group EDRi’s position paper on the upcoming Digital Services Act.

EDRi is an organization that wants to “defend rights and freedoms in the digital environment”. It plants itself firmly in the exceptionalism camp (surprise!). The paper is not without merit, though: it fiercely criticizes the dominant platforms for their lack of transparency, privacy shortcomings and “broken” business models. It makes suggestions on how transparency and legal certainty can be improved, with concrete proposals for how these can be executed. All from a departure point that looks more nostalgic than anything, like here:

The internet was originally built as a decentralised network with maximum resilience against shutdown and censorship. /…/ The social and economic benefits of this architecture were considerable: low costs of innovation, flexible ways to adapt and reuse existing technology for new services, and the freedom of choice for every participant in the network.

That paragraph reads like a definition of internet exceptionalism. Everything was great until the bad guys started messing with it. The problem with this departure point is that it leads to the conclusion that regulation is a threat and the way to deal with problems is to empower users. This is what EDRi’s paper says about opening up social media platforms:

This would allow users to freely choose which social media community they would like to be part of – for example depending on their content moderation preferences and privacy needs – while still being able to connect with and talk to all of their social media friends and contacts.

Sounds great, but not all users have profound knowledge of these things. Maybe that was different back in, say, the early 90s, when all users were “power users”. With billions of internet users (if that is even a relevant term anymore), the level of knowledge will inevitably vary. Another blind spot in that thinking: are “content moderation preferences and privacy needs” really a question only for the users of a service? What about those outside the service whose data or content may be distributed on it? What if the individual user accepts a trade-off of personal data for free content, but the combined effect has a negative impact on society (health data could be one example)? What if users want a space to spread racist hate speech, building momentum for genocide? Is that really only up to those users?

Internet exceptionalism is the common thread in EDRi’s paper, and this limited perspective regrettably makes it less useful. Its merit is that it points to the problems of the centralized platform economy and the dangers of expanding its intermediary privileges. To find the answers, we must look elsewhere.