Author Archive

Why Europeans (and Floorball) Benefit from Geo-Blocking

Sunday, December 10th, 2023

Some ideas are bad and some bad ideas die hard. The debate about a ban on so-called “geo-blocking” is approaching a decade. Time has not helped make the idea of a ban any better.

“Geo-blocking” means that some types of online content are not accessible to some consumers. Limiting access is of course completely normal – lots of online content requires a subscription or login. The problem here is when the limit is based on geography rather than some other factor. That is a red flag to some European policy-makers.

However, geo-blocking is just the flipside of something more important: tailored consumer offerings. This is one of the strongest features of the digital economy – adapting the content and delivery to the individual. Rather than providing the same feed for everybody, your search page or social media looks different from mine. Ideally, suppliers will tailor the content in a useful and relevant way. This includes the price point. Purchasing power varies throughout Europe. The price of a glass of beer is much higher in Stockholm (where I live) than in Spain (where I go for Christmas break this year), for example. Demand varies across territories.

As I write this, I get news of Sweden’s dramatic win in the floorball (“innebandy”) world cup final. Floorball? Yes, it is a very popular sport in Sweden and Finland and… nowhere else. Swedes and Finns are excited, nobody else cares. Floorball demand peaks in the Nordics. We are happy to pay for floorball. It is hard to see why anyone else would. Why should the price for watching floorball be the same outside the Nordics? There is no single European demand for floorball. In fact, there is no single European demand for most things.

The list of examples can be made very long, but the point is that this reflects the cultural diversity across Europe. Tailored consumer offerings, in the form of territorial licensing, allow commercial investment in places where demand is high. They also allow free access in places where demand is limited. In both cases, audiences benefit. If there is some case where demand is Europe-wide, there is nothing in today’s market or rules that stands in the way of a pan-European license. No market failure, nothing broken, nothing that policy needs to fix.

If the idea of the European single market is inspired by the giant domestic market in the US, how does content licensing work there? Is there a ban on geo-blocking in the US? No. Licensing is completely dynamic. If you want to license your content to one state: fine. Three states: also fine. Only one city? That’s fine too. You can have any territorial terms in your contract. In fact, I’m told American authorities think this is great for competition. Which system do you think works best for innovation, growth, diversity and delivering for the consumer? One where the government decides which contract terms can apply? Or one where the market actors figure it out?

A ban on geo-blocking would make content more expensive in many cases. It would limit investment in content and services. It would hurt jobs and the European economy. It would limit choice for audiences. It would make translation and adaptation more expensive, thus more difficult and therefore more scarce. It goes against the concept of cultural diversity. How could anyone think that is a good idea?

On December 13, the European Parliament will vote on a ban on geo-blocking. I see a risk that this becomes the new daylight saving time: one misguided attempt to make EU policy popular with consumers that ends up in a big mess. And in the end, nobody cares about daylight saving time anymore because our phones adjust the time without us noticing.

Did I say this debate is age-old? If you don’t take my word for it, read this opinion by a number of MEPs. It was posted December 7, 2017. Can we bury this idea now, once and for all, please?

Where Did Technology Neutral Go?

Tuesday, November 28th, 2023

Remember when legislation was supposed to be “technology neutral”? We used to hear this a lot in the 2010s, when policies like the digital single market and copyright were discussed. Technology neutral was more important than anything else. Never mind the result, as long as the legislation is technology neutral.

Fast forward to 2023 and the EU has the Metaverse Regulation and the AI Act. In fact, the EU brags about making the first AI legislation. Those are not technology neutral. They are the opposite: technology specific. So… are we meant to have specific legislation for each new technology now? The list of technologies is long and growing, but at least there will be a great job market for eurocrats and lobbyists.

I’m sure there are reasons, but when did they sunset the technology-neutral principle? I didn’t get the memo.

I don’t want to look for conspiracies, but I can’t shake the feeling that maybe technology neutrality was not the real reason back then. But if not that, then what was the real principle? And why was that not brought forward as the reason for those policies? And is that real principle the same real principle now that we’re talking technology-specific? Or did it change, and why? I would really, really like to think that European policy-makers know what they’re doing and don’t just make things up as they go. Do these questions have answers?

I always thought technology-neutral was something of a pipe dream. Every technology is different, and by pretending it is possible to make legislation neutral, policy-makers shift power to other stakeholders. The idea has some merits, though: by looking at principles rather than specific applications, policy-makers can focus on the long game and say things like “illegal offline is illegal online”. That’s not possible with technology-specific legislation. Also, technology changes, and it is often said that legislation struggles to keep up.

When did the EU policy-makers let go of technology-neutral?

Move Fast and Ban Things

Tuesday, September 12th, 2023

Move Fast and Break Things was the title of Jonathan Taplin’s 2017 book (if you haven’t read it, stop reading this blog and pick it up!). “Move fast and break things” was Facebook’s battle cry in the early days. Now it looks more like move fast and ban things, like news. Or maybe move fast and burn things?

Canada’s PM Justin Trudeau criticized Facebook for its news ban during the wildfires. Access to proper information about fires saves lives, Trudeau argued. But Facebook played the monopolist’s card in its effort to avoid paying news organisations for content – the same standoff between news media, tech companies and policy-makers as seen in Australia and Europe.

Coming up with better things to ban (or burn) than news would be easier than shooting fish in a barrel. Genocide propaganda, fake news, fake ads, phishing, identity theft… add your favourite nuisance. But never mind that; here is a more interesting idea: what if Facebook were held to account by the same standards as other media outlets? What if it had to publish corrections? What if there were a proper procedure in place for wrongful posts? An appeals function for publishing names or personal information (no, not the Oversight Board “deflection”)? Transparent procedure and proper follow-up? An independent body looking after the rules (no, not the Oversight Board!)? You know, media ethics stuff. The systems that have been developed in all democratic countries to protect freedom of speech and the processes of public opinion formation. These problems have been solved. Same problem, different technology.

I know, I know: it’s just one blogger’s opinion and there is no reason why tech platforms should agree. But perhaps some prime minister with good hair could look in that direction? Probably more useful than complaining about it in traditional media, which is banned by Facebook anyway.

Move fast and fix things.

Footnote: Jonathan Taplin has a new book out. Review coming at Netopia next month. Watch this space.

Free as in Science – 3Qs to Catherine Blache

Wednesday, August 2nd, 2023

Catherine Blache is senior counsel for international policy at SNE, the French Publishers Association. Netopia met her at Wexfo in Lillehammer in May, where she spoke on how the system for scientific publishing is being challenged and what is at stake. Netopia took the chance to ask three questions:

Why are you concerned for the scientific publishers?

Scientific publishers are faced with policies promoting immediate free access to scientific publications, with no prior dialogue and no prior impact assessment.

As a consequence, some publishers may be driven out of the market, especially SMEs publishing in local languages.

More worryingly, policies seem to aim at doing without scientific publishers.

Moreover, the “open access” movement has been attacking the copyright system. In particular, French and European researchers are being advised against granting their copyright to publishers. Instead, they are encouraged to put their publications under a CC-BY Creative Commons licence: this is the most liberal licence, as it allows all kinds of re-use, even commercial re-use. In effect, it is like inviting authors to give up both their economic and moral rights.

Another recent trend consists of favoring a model called “Diamond” for immediate open access. In this case the costs of publishing are borne neither downstream by libraries buying a subscription, nor upstream by the institutions of the researcher. They would be left to sponsors, public or private.

If the state were to replace scientific publishers, this could only be to the detriment of quality, diversity, economic efficiency and innovation, and therefore of freedom of expression and democracy.

Science is funded largely by public money – should not the results be free?

Over recent years, publishers have endeavored to answer researchers’ needs for easier access to scientific publications. They facilitate access to archives of their publications after a certain embargo period, or they offer immediate free access to the articles against payment by the institution of the researcher.

In fact, the publication of articles resulting from publicly funded research involves high digital publication costs borne by the publisher – whether state-owned or private. An author – a researcher – can decide to grant his exclusive rights to a publisher.

This way, the publisher can expect to get a return on his investments to ensure the quality and provide digital services.

By favoring a single model allowing immediate open access thanks to subsidies, one could have a “de facto nationalization of knowledge” as some French MPs called it (Report of March 9th, 2022 of the Office parlementaire d’évaluation des choix scientifiques et technologiques (OPECST) Pour une science ouverte réaliste, équilibrée et respectueuse de la liberté académique) [pdf].

This could only benefit Big Tech, who could harvest this content without having made the initial investment or ensured the quality of the original content. They could offer their own products and therefore make a profit on the basis of the value created by researchers and publishers: using the publications to train their AI, integrating them into their own databases and thereby increasing their advertising revenues…

What needs to be done to fix the problem?

In order to preserve the balance of the scientific publication system, one should first have a stakeholder dialogue, as well as an economic and legal assessment of the options considered, in particular regarding the rights of researchers and the normal exploitation of publications. Following the pandemic, the challenge of proliferating fake news and the rise of extremism, political decision-makers should keep the quality of science and academic freedom at the center of their policies.

 

Footnote: Mme Blache wishes to share this link to an opinion (in French) published by Le Monde earlier this year, written by SNE President Vincent Montagne: Sciences : « Enrichir Microsoft, Meta ou Google au détriment des éditeurs privés serait une erreur » (lemonde.fr)

Two Ways To Be Harmed – 3Qs to Leandro Demori

Monday, June 26th, 2023

Leandro Demori is the editor of the newsletter “The Great War” – A Grande Guerra – and hosts video shows on YouTube focusing on politics. He was executive editor of The Intercept for five years. Netopia met him at the World Expression Forum in Lillehammer, Norway, and asked three questions:

What is the situation for free speech in Brazil today?

It’s not an easy question, because we have a huge country and there are differences between regions. If you live in a small town inside the Amazon forest, and you talk about dangers to the environment or new hydro-electric industry, you have a great chance of being shot dead.

Talk about dangers to the environment or new hydro-electric industry, and you have a great chance of being shot dead.

At the same time, for journalists living in big cities and the main capitals, it is not common to get shot at. Power uses another tactic against you, basically using the judicial system: manipulation of prosecution, trying to hurt your reputation, bringing tons of legal cases against you in ways you can barely defend against, trying to grab all your money and all your time with these judicial wars. So there are two ways to be harmed as a journalist – or activist – in Brazil.

Is social media more a tool for activists or power?

Power. They are the power, in fact. I started using the internet when it started commercially in Brazil in 1996–97. It’s not only a topic I know very well – I love it!

I love the internet; that is why I am speaking here at Wexfo. But we are not talking about the internet, we are talking about platforms – it’s not the same thing. The platforms want us to believe that they are the internet, but they are not. They are huge, big companies. Probably the most successful companies in the history of capitalism. Take, for example, the great Portuguese or Spanish expeditions, or the industrial revolution, or the automotive industry, the petroleum industry – all of them needed tons of workers and capital, but the platforms don’t. You can run a platform and grab billions of dollars in the market with 10, 12 or 15 persons in a room.

They are very profitable, and they spend tons of money lobbying in all the parliaments – in the United States, in Europe, in Brazil. In Brazil we saw it two weeks ago, when we tried to approve a law, basically called the “project of fake news”. It just says to the platforms: we want platforms to be responsible for content that is posted.

We want platforms to be responsible for content that is posted

We tried to approve that project and the platforms created tons of misinformation and fake news, and lobbied all the congress, and we were not even allowed a vote.

They are so powerful that they created an association in Brazil – Meta, Alphabet, you name it, they are all members of this association –

The association officially created fake news saying that the “project of law” would forbid you to post texts of the Bible on the internet

and they created and distributed fake news to congressmen who are linked to protestant religion, the new Pentecostals (which are very strong in terms of votes). The association officially created fake news saying that the “project of law” would forbid you to post texts of the Bible on the internet. That is totally fake! They created that and distributed it to these congressmen. The press discovered the fake news, and they admitted it. It was just fake news distributed like a WhatsApp message, without a date, without a name. Pure fake news. They are very powerful and in general don’t care about activists, journalists and local politics.

Are you an optimist or pessimist for the future of free speech?

I need to be an optimist, because if not, I don’t see myself doing what I’m doing for the next ten or fifteen years. But at the same time, it’s like I’m watching a live show of the power of the money of the big platforms.

It’s like I’m watching a live show of the power of the money of the big platforms.

I know that it’s very difficult to confront them. They can not only manipulate with money, they can use the algorithm to manipulate the discourse. It’s a new Leviathan. We don’t know how to fight it, that’s the problem. In Brazil, people don’t know or understand why it’s important to regulate social media or platforms. People think it’s the third or fourth thing we need to discuss in Brazil, but it’s not. It’s about the life of everyone in the country. I’m an optimist in the fight, but I think we’re losing, and we will continue to lose many battles in the future.

 

Out of the Box – Re-inventing Media Literacy

Tuesday, June 13th, 2023

3Qs to Marlon Julian Nombrado

Of all the ideas for how to make the digital world better, educating the users may be the most popular. Except it is often more a convenient excuse than a real strategy. At the World Expression Forum in Lillehammer this May, Netopia spoke with Marlon Julian Nombrado from the Philippines. For him, educating the users is the real deal.

You started Out of the Box – why is media literacy so important?

We recognize that there must be a shift in emphasis from educating future media professionals to educating media consumers

When my friends and I started our media literacy project eight years ago, the media landscape in our country looked quite different from what it is today. As fresh journalism graduates, we simply wanted to share what we learned in school with a wider audience outside our university. We strongly believed that many of the things we were taught in j-school – media bias, the political economy of news, gender-sensitive reporting, etc. – were meant to be taught not only to us young aspiring journos but to a wider public. As we witness the slow but steady crumbling of the gatekeeping power of legacy news media, we recognize that there must be a shift in emphasis from educating future media professionals to educating the broad masses of media consumers. This is the mission of media literacy — to place priority on upskilling and empowering the media consumer, now touted as the media ‘prosumer’ (a combined producer and consumer of media products and experiences).

In the age of information disorder, data mining, affective polarization and the looming threats posed by artificial intelligence technologies, it is obvious to say that we cannot afford to keep on applying the same kind of education that we’ve administered to our youth for many decades. We need dynamic approaches to education that heed the immense influence of media, information, and communication technologies in our lives and in today’s societies. Only with a media literate public can we effectively push for and implement policies related to the regulation of modern and future digital technologies. Only with a media literate public can we thoroughly push out harmful, abusive, and hegemonic narratives that are seeded in our media streams in the guise of free expression. When we mainstream critical and civic media literacy education in our societies, we give democracy and sustainable development a chance to prevail and prosper in the future.

Is media literacy enough? Who should do what? 

Multisectoral networks led by civil society are in a good position to build alliances with governments, private institutions, and international bodies to ensure cohesion of our interdependent efforts

We strongly campaign for the mainstreaming of media literacy, but without the illusion that it is some magic pill for the many intersecting media anxieties that we are facing today. With its advantages, media literacy also comes with obvious limitations. For one, investing in education is a long-term solution; its impact might not be immediately apparent compared to that of journalistic interventions or policy and technological responses aimed at digital platforms. Others are doubtful of the impact of educational interventions because they presume the absolute power of manipulative media technologies over their users. But we should not be cancelling each other out. Instead, advocates of media literacy education should move in a similar direction as other stakeholders in the digital rights and internet freedom agenda. Meaning, we should learn to complement and support each other’s contributions. Multisectoral networks led by civil society are in a good position to build alliances with governments, private institutions, and international bodies to ensure cohesion of our interdependent efforts. Media literacy is a potent tool for all concerned actors not only to critique and create media, but to mobilize the public through and with media to act towards the common good.

 

Avoiding Catastrophe – Netopia Spotlight: Prof. Stuart Russell

Wednesday, December 14th, 2022

Artificial intelligence is no longer the sci-fi future that we have so often used as a screen for projecting our fears and dreams onto. Today it is in every person’s hands (or device), as the runaway popularity of text-generating AI systems such as ChatGPT, or text-to-image systems such as DALL-E and Midjourney, has demonstrated. What does this mean for the impact of AI?

Netopia spoke to a true veteran and thought-leader in the field, Professor Stuart Russell (read Netopia’s review of his 2019 book Human Compatible: “Abstract Intelligence – How to Put Human Values into AI”).

Professor Russell came to Stockholm earlier this month for the Nobel Week Dialogue and discussed these topics further on two panels on the program; watch them here (starting at 4:30:00): Nobel Week Dialogue – NobelPrize.org

WATCH THE FULL INTERVIEW

Professor Stuart Russell, Welcome to Netopia’s Video Spotlight interview.

It’s nice to be with you. Thank you.

We are in Stockholm today for the Nobel Week Dialogue, and you have a very impressive résumé – you have been a professor of neurological surgery?

I was a professor of neurological surgery for just three years, while I was working on a research project.

And computer science has also been a focus?

Yes, computer science has been my day job, and I’ve been at UC Berkeley for 36 years now.

That’s an interesting combination – neurological surgery and computer science. The mind leaps to neural interfaces that connect to your brain.

You might think so, but actually it was a coincidence. I thought some of the basic mathematical ideas that I had might be useful for some of the problems that come up in keeping people alive in the Intensive Care Unit. When you’ve had a head injury, often your brain is unable to regulate your body, so the Intensive Care Unit is there to do it instead of the brain: to keep your temperature in the right range, to keep your heart rate, your blood pressure, your oxygen levels – it has to manage everything. It collects a lot of information; a patient in the Intensive Care Unit is plugged full of sensor devices measuring all these things, so you know when you need to fix something. But it is very hard for human beings to keep track of all that data. So we thought we would be able to use AI systems to watch the sensor values, determine as soon as possible if something was going wrong, and then intervene earlier and more effectively. It turned out that we could do that a little bit, but the human body is a very, very complicated thing. I think we just scratched the surface of that problem.

That’s really interesting – it’s the subconscious operations of the body rather than mimicking the mind, which is what we often think about when we talk about artificial intelligence.

That’s right, so the connection is really a coincidence. I wasn’t trying to understand how the brain works, just trying to stop people from dying. But I know much more about the plumbing of the body, basically.

You are here in Stockholm now for the Nobel Week Dialogue, and you will be speaking this afternoon. What’s your topic today?

So there are two panels. One is on living with technology, so I’ll do an introduction about artificial intelligence: What’s happening now? What are the trends? What’s going to be the big thing in the future? The second panel is on how to avoid catastrophe, which happens to be what I’ve been working on for the last seven or eight years. I’ve been thinking about really one main question, which is: if we build machines that are more powerful than human beings, how do we have power over them forever?

If we build machines that are more powerful than human beings, how do we have power over them forever?

So that’s the question I’ve been asking. And it has led me in some very interesting directions, including the realization that we actually got the field wrong right from the beginning.

How so?

So, the way we thought about AI… We started doing AI roughly in the 1940s, and obviously it’s about making machines intelligent. The question is: what does that mean? Does it mean just that they write beautiful poetry? Some people thought it means they have to behave just like human beings. But that’s really a question of psychology, and humans behave in ways that are largely accidental results of the evolution of the structures of our brains and bodies; you can’t really build a mathematical discipline out of that. So the definition that won out was one we borrowed from economics and philosophy: the notion of rational behavior, the notion that our actions can be expected to achieve our objectives. Obviously, if you take an action that you don’t expect to achieve your objectives, it’s not rational; it’s not intelligent to act in ways that are contrary to your own interests. So that’s the model we borrowed. For humans, that makes sense, because we come with our objectives, for whatever reason: there are things we want our future to be like and things we don’t want our future to be like. But machines don’t come with objectives.

So the model we developed was that you make objective-achieving machinery – or, as we call it, optimising machinery – and then you have to plug in the objective. In the early days of the field, those objectives were logically defined goals, like “I want to be at the airport before 2 p.m.” More recently, we understand that there is uncertainty and we have to deal with trade-offs, so we have a richer notion of what we mean by objective, but the principle is the same: our actions should be expected to achieve our objectives, and the same for machines. The problem with that model, which for some reason we just didn’t notice until recently, is that if you put in the wrong objective, then you have a problem: now you’ve got a machine that’s pursuing an objective that’s actually in conflict with what you, the human, want the future to be like. You’re really setting up a war between humans and machines, and that’s exactly what we want to avoid.

Now you’ve got a machine that’s pursuing an objective that’s actually in conflict with what you, the human, want the future to be like. You’re really setting up a war between humans and machines, and that’s exactly what we want to avoid.

So one answer might be: okay, we just have to make sure that the objective we put in is exactly right – that it’s complete and correct, and covers all conceivable human interests no matter how the future actually evolves. And that’s completely impossible, because there are things that are going to happen in the future that we don’t yet know whether we are going to like or not. So the answer seems to be: get rid of that model altogether. Get rid of the model where we build objective-achieving machines and put objectives into them. What we do instead is build machines that know that they don’t know what the real objective is. They are actually uncertain about what it is that humans want, even though their goal is to help humans get what they want. That’s a new kind of program – we didn’t have those kinds of programs before – and it leads to all kinds of desirable behaviours. If the machine knows that it doesn’t know what the true objective is, then, for example, it has an incentive to ask permission before doing something that might violate some of our objectives, our preferences. In the old way of doing things, there is never a reason to ask permission: the machine has the objective, that is the right thing to pursue, so it never asks. It’s early days and there’s a huge amount of work to do, but I’m reasonably optimistic that this way of thinking about AI will actually turn out to be better, and maybe we’ll solve this long-run problem of how we maintain power over machines.

So it’s an “artificial doubting machine”… a “humble machine”.
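Professor Russell’s point about uncertainty creating an incentive to ask permission can be sketched in a few lines of code. This is an editorial illustration, not anything from the interview: the probabilities, utilities and asking cost below are made-up numbers, chosen only to show that when a machine is unsure which outcome the human prefers, asking first can have higher expected utility than acting on its best guess.

```python
# Toy "humble machine": an agent that is uncertain about the human's
# true objective. All numbers are hypothetical, for illustration only.

def expected_utility_act(p_prefers_a: float) -> float:
    """Act immediately on the best guess.

    The human prefers action A with probability p_prefers_a, else B.
    The right action is worth +1, the wrong one -1. The agent picks
    whichever action has higher expected utility under its belief."""
    eu_a = p_prefers_a * 1 + (1 - p_prefers_a) * -1
    eu_b = p_prefers_a * -1 + (1 - p_prefers_a) * 1
    return max(eu_a, eu_b)

def expected_utility_ask(ask_cost: float = 0.1) -> float:
    """Ask the human first: always learn the true preference, then
    take the right action (+1), minus a small cost for asking."""
    return 1.0 - ask_cost

for p in (0.5, 0.7, 0.99):
    decision = "ask" if expected_utility_ask() > expected_utility_act(p) else "act"
    print(f"belief {p:.2f}: {decision}")
```

With these numbers, the agent asks whenever its belief is anywhere near 50/50 and only acts directly once it is almost certain what the human wants: the “humble machine” behaviour, in miniature.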

Since you wrote the book that we reviewed a few years ago, there has been a big change: AI has become something in every person’s hands, with Midjourney and DALL-E and image-creating artificial intelligences, and also ChatGPT, very popular as we speak and all over social media. Did you expect this to happen – this democratization of artificial intelligence tools – and what’s the impact?

So it’s interesting that you bring up these two examples. The second one, ChatGPT, is very much along the lines that people have always written about in science fiction. Think of Star Trek: you could talk to the computer, ask it questions, and it would give you very knowledgeable answers. Some of the early real AI systems, even in the late 60s, were question-answering systems: you could ask questions in English and it would answer in English. And interestingly,

ChatGPT is not able to do some of the things that those systems were able to do in the late 1960s

ChatGPT is not able to do some of the things that those systems were able to do in the late 1960s. For example, in the most famous of those systems, called SHRDLU, by Terry Winograd, the conversation was about a simulated world where you were moving things around on a table. You could say to it, “put the red block behind the green pyramid”, and then you could ask questions like “what’s in front of the green pyramid?” and it could tell you. Whereas ChatGPT very quickly gets confused and can’t answer those kinds of questions.

An abstract understanding of the outside world – is that right?

That’s right. It can’t correctly build, maintain and update a model of what’s happening in the world. It does some other things really very impressively, but those kinds of sequential tasks not so much – though I think that’s probably just a matter of time. The other kind of system…

We hope for the best. We have absolutely no idea how the systems do what they do. We can’t predict when they’re going to work and when they’re not going to work; sometimes they answer questions correctly.

…is this idea that you can put in some text and it will produce a picture for you. I was giving a speech in the House of Lords a few weeks ago, so I just had to have some fun and put in “Members of the House of Lords wrestling in the mud”. This was on one of the Stable Diffusion systems, and it produced a really quite impressive picture of, you know, elderly gentlemen wearing long robes covered in mud. It was quite funny. But that was never a goal of AI; it just wasn’t something that people worked on. It just turned out by serendipity: people realized that if you train with both text and images, you can get generative models. It came about because people had found ways of generating images – if you train on lots and lots of faces with a certain kind of technology called a Generative Adversarial Network, or GAN, you can ask that model to generate new faces, and it’s very good at that. Then they realized that if you train in parallel with textual descriptions and images, you can ask for text and it will produce images. So it’s a completely new functionality that was never really seriously pursued in AI until very recently. It’s been a very fascinating period, and the kinds of things that are going on in AI just don’t resemble anything that we did historically in terms of methodology. Underlying the early question-answering systems that I described from the late 1960s was a logical reasoning system with a database: we would take natural language, find the structure of the sentence, convert it into an internal formal representation, interface that with the reasoning system, and so on. Now we basically just make a big pot of circuitry – billions and billions of circuit elements that are just tunable – and we train it on trillions of words of text. We hope for the best. We have absolutely no idea how the systems do what they do. We can’t predict when they’re going to work and when they’re not going to work. Sometimes they answer questions correctly; sometimes they just output complete nonsense. One of my friends was sending me examples – he was trying ChatGPT, asking: which thing is not bigger than the other, an elephant or a cat? And GPT confidently says: “Neither an elephant nor a cat is not bigger than the other.”

You speak so fondly of artificial intelligence, almost like we talk about our children or our pets, and at the same time some people think of it as the end of humanity….

Well, I think you can simultaneously enjoy both pictures. I mean, the things we have now are in many ways amusing toys, and in some sense they are like animals.

We use dogs for hunting, we use horses for pulling carriages around, and we’ll find ways to use these chat systems.

They are the result of a sort of process of natural selection. The process, you know, is a stochastic gradient descent algorithm, which is sort of what natural selection does, and it has some other things to it too. But that process is almost like a certain chemical reaction: you just throw lots of stuff in, let it boil for a while, and see. Maybe it’ll turn into a cheesecake, or maybe something else, right? It just turns into this thing and you don’t know how it works. So you play with it and you learn what it can and can’t do, like with a cat. We learn that cats don’t come when you call them… a can of cat food, then they come. Some things cats can do, some things dogs can do. We’re just learning. This is almost like a new species: we’re just learning what they can and can’t do and how to use them. You know, we use dogs for hunting, we use horses for pulling carriages around, and we’ll find ways to use these chat systems. Well, the new generation of systems, something that’s actually much more capable than the old chatbots were.

As humans, we tend to project things like emotion and intention onto living things and objects… of course, also onto AI. As I was preparing for this interview, I thought about the old robot dog, the Sony AIBO. It was like a small puppy, and it acted like a puppy: floppy ears. Do you think there will be a point where we get a perfect puppy? Or is there something intangible, something like life or a soul…?

In principle we could do that. Whether it would make sense, I’m not sure. And it seems quite likely that the natural direction of technology would take us in different directions, right?

So, probably, given that machines are so much faster than biological brains, and as they scale up they have bigger memories, they have much more communication bandwidth with each other, right, they can exchange information far faster than humans can exchange information with each other. So they’re just going to look very different from biological systems, I think. And I would say the jury is still out on which technological approach will end up working. I know there’s a lot of excitement in and around deep networks and large language models, of which ChatGPT is an example, but there are reasons to think those approaches will fail in the end. We’re already seeing ways that they don’t work as well as you would like, in the sense that they seem to need far more data than humans do. ChatGPT has already read possibly millions of times more text than any human has ever read, and yet it still gets very simple, basic questions wrong. The image recognition systems need to see thousands or millions of examples of a giraffe, right? But if you get a picture book read to your child… you can’t buy a picture book with a million pages of giraffes!

The image recognition systems need to see thousands or millions of examples of a giraffe, right? But if you get a picture book read to your child… you can’t buy a picture book with a million pages of giraffes!

There is one giraffe, and it’s a really simple, you know, yellow and brown cartoon giraffe, and that’s enough for that child to recognize giraffes in any context, anywhere in the world, for the rest of their lives. From one example. Human learning is much more capable, and I think that illustrates that there are basic principles we haven’t yet succeeded in capturing in our approaches to machine learning.

I think that we have tended to think of AI as something foreign, something that comes around the corner someday; it has been a topic of science fiction. So, back to the democratization: does this change the relationship between us humans and artificial intelligence, and the expectations we might have of it, now that we can more easily interact with it?

I think it does. And, to the point made earlier, we probably overestimate its intelligence and whether it’s actually reasoning or even remembering. It’s very hard to remember that something able to generate grammatically correct and coherent text could be doing that using completely unintelligent principles. But that can certainly be done, right? There are many examples; one of my favourites on the web is called “the Chomsky bot”. The Chomsky bot is a very, very simple statistical text generator that was trained on a lot of pages of the writings of Noam Chomsky, and it produces paragraphs that are very coherent and very characteristically Chomsky: very complicated sentence structures and complicated logical relationships among things. If you just ask it to write a few paragraphs, you think “Oh my goodness, this is amazing, this program is so brilliant.” But actually, if you keep doing it, it starts to get repetitive, and then you start to see how it’s doing it: it’s really a party trick. The large language models, ChatGPT and others, are really more sophisticated versions of that. The response they give is in some sense a statistical average of the kinds of responses that humans have given to those kinds of inputs, across all the text the system has ever read. A simple example: if you ask it “How are you today?”, what’s the most common answer in history to that question? “I’m fine, thanks. How are you?” So, unless it’s been specially trained to avoid it, it will probably say “I’m fine, thanks. How are you?” But that doesn’t mean it’s fine. It’s actually just parroting what humans say. It doesn’t have any sense that it exists, or that it could be fine or not fine.

Or even that the word “fine” doesn’t apply because it’s a machine. It’s just parroting. So if you keep that example in mind, that it’s not really answering your question, that helps to dispel the illusion. On the other hand, it makes you wonder: is that what human beings are doing most of the time? We’re not really doing all that reasoning and thinking and remembering; a lot of the time our speech is generated by pulling together patterns we’ve seen in the past, or even things we’ve said in the past. And I can tell you, that’s what I’m doing right now.
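[Editor’s note] The “statistical average of past responses” idea described here can be illustrated with a toy bigram (Markov-chain) generator. This is purely an illustrative sketch, not the Chomsky bot’s actual code, and real language models are vastly more sophisticated, but the principle of predicting the next word from patterns observed before is the same:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=10, seed=0):
    """Produce text by repeatedly sampling a recorded successor word."""
    rng = random.Random(seed)
    output = [start_word]
    for _ in range(length - 1):
        successors = model.get(output[-1])
        if not successors:
            break  # dead end: no word ever followed this one in training
        output.append(rng.choice(successors))
    return " ".join(output)

# Tiny illustrative "training corpus"
corpus = "how are you today I am fine thanks how are you"
model = train_bigram_model(corpus)
print(generate(model, "how"))
```

Trained on enough pages of one author, a richer version of this replays characteristic word patterns convincingly at first, then turns repetitive, exactly the “party trick” described above: the coherence comes entirely from echoing observed sequences, not from any understanding.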

And sometimes we lie; maybe we are not fine, we just don’t want to talk about it… Okay, last question. It also appears that input data to AI is a field of power struggle on many different levels: we have big companies investing in artificial intelligence systems and trying to get access to as much data as possible to train them.

We have superpower states doing similar things. How do you see this playing out? Will the benevolent forces stand tall in the end… or is this part of the dystopia?

You can be very intelligent without a lot of data; humans show that. For example, the amount of text that even GPT-3, which is not the latest generation (we’re about to see GPT-4), was trained on is roughly the same size as every book ever written.

Well, I actually think, or at least I hope, that this data race is coming to an end. You know, this idea that data is the new oil, that the more data you have the more powerful your systems will be, and whoever has the most data wins. I think that’s an incorrect narrative, because getting more data doesn’t necessarily result in a more intelligent system. I think there are basic research advances that will be determinative of who creates the first real general-purpose AI systems. And one thing that is obvious from looking at humans is that you can be very intelligent without a lot of data. For example, the amount of text that even GPT-3, which is not the latest generation (we’re about to see GPT-4), was trained on is roughly the same size as every book ever written. So you’ve already pretty much consumed most of the text in the world. What else is there? There’s a bunch on the web, but a lot of that text was generated by computer programs spitting out instructions and news items and things like that; it’s machine-generated, so it’s not clear it adds a lot in terms of creating more intelligence. So I think we are coming to the end, if we haven’t already come to the end, of the idea that we can create more capabilities simply by having bigger circuits trained with more data. And this is an opinion, I should say, not a theorem; I can’t prove what I’m saying. It’s a gut feeling, and other people have a different gut feeling. They feel that if you just get ten times more data and a ten times bigger circuit, something qualitatively new is going to happen. But that feels like wishful thinking to me, because there’s no scientific basis for it; we don’t even know what it takes to produce a qualitative change in behavior.

Thank you, Stuart Russell. Thank you so much for coming to Netopia, giving this interview, and good luck with your talk this afternoon.

It’s a pleasure; nice to speak to you.

 

Footnote:
To the reader: it is with no irony that this article was created via speech-to-text AI recognition software. Almost all video upload services today have some form of audio-to-text extraction. Perhaps this is for user output, perhaps for advertising input, or, on the positive side, for deaf and hard-of-hearing users who can access the content with subtitles.

In all, we’d rate it 9/10 for accuracy from input sound to output text, though in all fairness it helps when the input language is English and the speaker is a professor using clear, well-ordered language and sentences.

Enough Abuse Online – Football Stepping Up their Game

Wednesday, December 1st, 2021

Hate speech and racial abuse online have haunted football, and may have peaked with England’s penalty shoot-out against Italy in the European Championship final this summer. What can football do about it, and who else should take action? The English Premier League‘s head of equality, diversity and inclusivity, Iffy Onuora, joined Netopia’s video spotlight interview series to discuss his work, the role of the legal system and tech companies, as well as equal opportunity in football, social media boycotts and his life in football.

Transcript

Welcome to the Netopia Video Spotlight interview and for this episode

I have a special guest. It’s none other than Iffy Onuora; he is with the Premier League as the Head of Equality, Diversity and Inclusivity. Welcome, Iffy, it’s great to have you here on the show. You’re the head of equality, diversity and inclusivity: what does that job description mean?

It can be anything. It could be representing, and trying to create clear pathways for, underrepresented groups. What I mean by that is, if you look at the Premier League, it’s fantastically diverse on the pitch: lots of representation, a lot of nationalities, a lot of black players, lots of European players.

The Premier League is a fantastic visual in that respect, but the thing we found in this country, and certainly in Europe, was that this wasn’t being represented off the pitch, so there was a paucity of black executives, and that’s not just a black and white issue. We don’t make the transition from the pitch to the boardroom very welcoming in this country; I think other countries do that slightly better than us. So it’s a topic and an action point for us to address. Then there’s obviously the coaching environment: we think 20-30% of players are black or identify as black, but there are very few black managers, and that’s not just in the Premier League but beyond, so we are always looking at ways we can help turn those kinds of statistics around. That was part of my work previously, alongside the coaching work with the PFA, and when I came to the Premier League I knew this would be my sole focus; it was a new position for the Premier League. A lot of very good work had been done well before I came, but making this job specific to a new incumbent was a way of the Premier League saying, right, we’re now going to bring this in-house. They’d previously gone outside to other organisations, such as Kick It Out, for some of the anti-discrimination work, but bringing that title, my role, into the building, making a Premier League-specific role, was a way of saying: “Right, we’re now going to make this role really distinct and bespoke to the Premier League.”

[Per] So you speak about the lack of diversity, or the difficulty of the transition from the pitch to the boardroom, but another big topic has been racial abuse towards players. Not specific to the Premier League in any way, but rather a bigger conversation in football. How do you see that discussion?
What’s been happening in the last 18 months, during the lockdown?

We had the killing of George Floyd, which was a pretty seminal event around the world. You only have to go back a year to see the worldwide protests, and I think it was a red line in some respects. People, and people of all races I should say, decided that enough is enough. There is this problem of race that permeates all through society, not just football, and it was a sign that society was now taking it seriously. We’d spoken about it before, and it had come in passing phases: sometimes it was mentioned, but it never seemed to stay in focus. I think the difference now is that people have really laid a marker down and said, we’re going to do something about this. So, in terms of the Premier League, we set out an action plan.

“No Room for Racism”, I’ve got the badge here, and within that you’ve got action plans around some of the things I mentioned, like coaching pathways, and one of them is around some of the abuse that was outlined. You saw in England that the players decided to take the knee before the start of every game; that was an action the Premier League was very much behind, supporting the players. We didn’t want to dictate to the players, but once the players decided they wanted to do this, we were most supportive of them.

So we’ve got lots of actions around that, and actions around online abuse, which we can talk about in more detail. The important thing to say is that my role covers so much of that.

It covers some of those pathways, it covers embedding anti-racism within the football stakeholders, the leagues, the clubs… it covers combatting racism on the social media platforms, working with the social media companies, working with government. It covers quite a lot; it’s wide-ranging, it’s extensive.

It’s meant to be. If it were an easy task, it would have been solved just like that. It’s meant to be a task with certain specific pieces of work attached to it, and it’s long-term, longitudinal work that can’t be done in a minute, in a few weeks, or in a year. It is trying to reverse things that happened over many years, and that takes time.

[PER] So to follow up on a specific detail: you said that the Premier League supported the players who wanted to make the take-a-knee gesture before games. I know this has been controversial in some other sports leagues. Was it a difficult decision to get support for that particular action?

Going back to the restart… remember the League closed down due to the pandemic, and when it restarted, this was very much in the storm of the Black Lives Matter movement, and the players were very vocal and wanted their voices heard on this issue.
So, when they restarted, they had “Black Lives Matter” on the shirts, and those shirts were auctioned off.

I think one thing was just how united they were: all the players, black and white, wanted to show their support for this bigger objective, and that was the thing that drove it forward. All the captains representing their clubs wanted to do it. So that was the restart of the previous season, and then we got into the summer and the Euros, and the England players wanted to carry on doing it.

That’s where some of the push-back, as we’ll call it, occurred, and there was a lot of discussion about whether we needed to keep doing it. Some people resisted, and I think it was roughly 50/50 which way it was going to go. Then you may be aware of what happened when the team lost in the final. By the way, this is a young team, wonderfully diverse, led by manager Gareth Southgate, who has led the team and squad fantastically, and the public had really got behind them. They identified with this young, developing team, and a lot of players, like Marcus Rashford and Raheem Sterling, chose to use their platforms positively.

So the public were really behind them, even though there was maybe this discussion about whether to continue taking the knee. But then what happened was the players missed the penalties, three black players, and they got abused on social media, and from then on the awareness came into the mainstream. Everyone thought: “Right, this is why they take the knee.” If you weren’t sure before, this is why; we understand now what they object to and what they’re using the platform for.

So I think that made it easier in some ways. At the start of this season, when we discussed whether to keep taking the knee, there was very little argument against it. Out of that distress emerged something a lot more unified and a lot more accepted. Now you go to the stadium week in, week out, and when players take the knee it’s applauded throughout the stadium. Fans understand it, they applaud it, and I’m really proud that the Premier League was supportive of that, not just the players.

We commissioned a short film about it as well, so the players didn’t feel it was just them doing it without our support. The message was very much: if you want to do it, we will support you; we will do it this way if you’re happy. And I think it’s been a really good thing.

[Per] Hate speech online, and specifically racial abuse, is a much bigger issue than football, and the Premier League might be one of the most famous parts of football, but it’s also part of a big world. So for all the success and all the influence that you have, how much can you accomplish?

As a cog in this big machine, is it possible for you to have real influence on hate speech online?

Yeah, it’s a really good question. I think we have to accept that we are a cog, but what we also recognise is the power of football. You could see it in the summer, when the England team was playing and doing really well: it captures the imagination.

When the England team is doing well, or the big clubs are doing well, the public are so engaged. Football has that power, the power to connect like no other sport, so it’s about the Premier League being aware of that and tapping into it. We’re not saying this is only a football problem; it’s a society problem. But what can we do as a football industry to drive that forward and support it, knowing that our influence can be extensive?

So that’s what we try to do: recognise our influence and use it to effect change. So yes, all the things about social media, you see them throughout our society. High-profile people are abused, and not just footballers, but obviously football is the number one game, the one that’s talked about, and it reached a crescendo in the summer with the players and the penalties. So we’re almost setting an example.

Can we showcase, and be an example in football of, how to address it? We don’t want to run away from it. How do we address it, how do we bring influence to bear on the social media companies? Some of the big players have millions of followers on social media, so that can be really influential. How can we use that influence, of the players and of ourselves as the industry? And don’t forget the brand of the Premier League; it’s so extensive. We reach out all through the world with our broadcasting partners, as the most watched league in the world, and with that comes influence and reach. I think that’s what we’re trying to use effectively, because we know we’re not government.

we are reactively working with authorities to track down abusive content and abusers online

We are not legislators, but can we influence legislators, can we influence governments? We have a policy team that speaks directly with members of the government around these issues, and with the social media companies as well. And since October 2019 we have had a designated team who work with social media companies proactively to take down abuse as it comes, and who reactively work with authorities to track down abusive content and abusers online. Some of that is not generally known outside our organisation: how much we do, how much of that responsibility we take on.

Maybe at the expense of, or instead of, some of the social media companies, who we always feel could do more. But we certainly don’t shy away from it, and we have a team who do that.

[Per] That’s really interesting, that you have people actually working to fight racial abuse on social media. Can you tell us a little more about it? How do you do that work, and what kind of results have you seen so far?

Yeah. Since 2019 they have monitored the social media accounts of all the players, their families and the managers. They monitor all the accounts centrally and they can track the abuse as it comes in. Obviously it’s such a vast amount of work that some abuse continues to get through; we’ve seen that. But we’ve been really successful in the liaising. We work very closely with Twitter, Facebook and Instagram, and we have been really successful at stopping some of this before it gets on the platform, or liaising very closely with them and highlighting abuse when it comes.

And there have been some successes with the social media companies. Some of them have moved; some of them have filtering systems around some of the hate speech.

Instagram, for example, can filter some of it out. We do think there’s an issue around verification, which I think is their red line.

The red line for the social media companies is that they don’t want full ID verification, which is something we’ve pushed for. So there’s always going to be that gap, and we’ll wait and see whether the coming legislation fills it, if at all.

The companies we have worked with have moved significantly from where they once were. In terms of real-life examples, you’ll know of someone like Neal Maupay, the Brighton and Hove Albion striker. He worked closely with us and our team at the Premier League to track down an abuser all the way in Singapore, and that’s really significant for us, because it wasn’t just about abusers in this country.

We could go further afield and work with the Singaporean authorities, which we did, and we needed Neal to stay engaged to make sure there was no let-off for someone sending abuse. We saw that through to a conclusion with a court case: the abuser was named and shamed in Singapore, his name plastered all over it, and we think that’s a strong message. We will go the extra mile, literally outside our borders, to track down abusers as and when it happens. And we’ve had more recent success domestically with a player called Romaine Sawyers at West Brom. West Brom are now in the Championship, but unfortunately this happened to Romaine in the Premier League, when he was with us last season. A man who’d sent abuse to him was tracked down, and he actually received a custodial sentence: he was sentenced to eight weeks. We think that is significant as well.

a man who’d sent abuse to him was tracked down. He actually received a custodial sentence; he was sentenced to eight weeks.

The judge set a strong precedent: if you send hateful abuse, you will not only suffer reputational damage (his name was plastered everywhere in this country), you will also receive a custodial sentence if it reaches that high benchmark of abuse. So we’re pleased about that.

We’re pleased that it’s starting to show, that it’s starting to bare its teeth.

A lot of the work that the team at the Premier League are doing is starting to bear fruit, because we want it to become a cultural thing, where people don’t even think they can do this.

Maybe there was a culture before where you could go to the football match, or type away at a computer, and that’s where you let off steam: you can say whatever you like, you’re frustrated, you’re angry, in the stadium you can just shout abuse. Maybe some of that is about the demographics of fans, and we want the fanbase to be inclusive: women, families, diversity. The more that kind of abuse isn’t tolerated, the easier it is for young families, children and people of colour to come to the stadiums and feel welcome. So that’s something… it’s a work in progress.

[Per] You mentioned ID verification as being on the wish list for more tools. If that were put in place, what more would you be able to accomplish that you cannot do at the moment?

Yeah, we’re starting to go into an area beyond my specialist knowledge, but what I will say is that we have pushed for it in the past, around the fact that at the moment you can actually…

set up an account, post abuse online, and then close down the account and leave no digital footprint

Set up an account, post abuse online, and then close down the account and leave no digital footprint. I’m sure you’re aware of this far better than me. The idea is not to allow abusers to do that: some kind of verification would be in place that would leave a digital footprint, so you could track down an abuser. Now, this is the divide between what we’ve pushed for and what the social media companies regard as a little more sacrosanct, and we have a piece of legislation coming through Parliament at the moment which will show exactly where that line is and who wins that particular argument.

I think as long as there is space for people to create an account and then disappear, you’ll always have this problem of persistent abusers. What you want is to effect some cultural change, so people don’t feel they can or should do it, and then some real change, so that not only should they not do it, they can’t do it either. That, for me, feels like the solution, and at the moment we’re working on both: we’re still working on the cultural side, so that people don’t feel they should be able to do it, but we also need some movement on whether they are allowed to do it.

[Per] Some players have been quite critical of the social media platforms, even boycotting social media in protest at the lack of action by the platform companies. What would you like the social media companies to do in terms of stepping up?

I mean beyond ID verification…

Yeah, it goes back to what we were discussing. You know, some of those boycotts were born out of frustration, because of the perceived lack of action from the social media companies. Like I say, it’s been slow, but there has been some movement.

Well, we’re just trying to keep pushing further along that path. And in terms of that boycott, we were very much part of it. We had a boycott 18 months ago, a 48-hour blackout of social media, and that was meant to be a strong statement. That blackout weekend really got some traction, and all the football bodies came together: the Premier League and certainly all the governing bodies, the FA, the PFA, around the social media boycott. And that does have an effect. People say, well, it’s only 48 hours, that’s it. But in those 48 hours we did see Facebook and Instagram executives, who previously maybe hadn’t shown that public face from a media point of view, coming on the airwaves to explain what they were trying to do. It felt very much like they were on the defensive and having to explain: what are you doing?

What were they doing, and why couldn’t they do it? Why was abuse taking so long to be taken down, even when it was there for everyone to see? And I think that’s a good thing.

They have to be held to account. We all have accountability in our work

They have to be held to account. We all have accountability in our work: if I’m not doing my job, or you’re not doing your job, someone gets an explanation, or someone gets to ask why it isn’t being done. It’s exactly the same for Facebook.

If they’ve been asked to take down abuse within a specified amount of time, which they are, and they go beyond that time and it’s still there, then it’s not for me to explain on their behalf; they would be better served by explaining themselves what happened. It’s too easy sometimes to be faceless. They are massive businesses, and it’s not my intention to badmouth social media; it’s the way we all interact now.

I’m at pains to say social media is actually a wonderful thing; it’s the way we connect wonderfully throughout the world. And it’s in its infancy compared to newspapers in this country, which are 200 years old, or radio and television, 100 years old. Social media is 20 years old at most.

You know, it’s still very much in its infancy, and we’re all wrestling with how it should be regulated, a bit like television, newspapers and radio are regulated. So that’s what we’re doing.

We’re not necessarily demonizing social media, but trying to regulate it and understand the powerful reach it now has, so that it’s used effectively and, you know, with good intentions.

[Per] Besides what you are doing yourselves, and beyond what you would like the social media companies to do, do you have any other wish? Is there somebody else who should take action to help and support the fight against online hate speech?

There is a gathering movement in this country; we do feel that we’ve got to offer something different.

Not just footballers; celebrities and members of parliament have suffered abuse too, so I think there is now a sense that we have to do something. It’s not up to football alone. We can certainly play a part, and we should be proud to play a significant part, but it’s not up to football to do this. I think it’s up to all progressive people who can see the dangers of letting things just drift unchallenged, and that’s an effect not just on myself or yourself, but on generations: younger kids and people coming up.

We're always working on how to best use social media.

I think it's important that all of us who have that power or platform use it to challenge and cajole and make things better for those who come after. That's legacy, and it's for all of us to play our part.

Whether you have a big role or a small part, it all adds up to, I think, a gathering sense that we have to try to get this right, because there are generations to come who will certainly benefit from a better and more respectful landscape.

Thank you for closing on an optimistic note. I almost feel like I'm one of the players that you coached. That's great. Thank you so much, Iffy Onuora, for coming on the show, the Netopia Video Spotlight interview, and best of luck with your work in your role. And thanks to everybody who watches our little program; we will be back with more interviews.

Transparency note: Premier League is a Netopia-sympathizer. All Netopia’s editorial decisions are independent.

Have You Met a Whistleblower?

Sunday, November 7th, 2021

Today, November 8, Frances Haugen will give testimony in the European Parliament's IMCO Committee. Ms Haugen is the latest Facebook whistleblower to call foul on the conduct of the platform.

Will this be the moment where Facebook steps up to their responsibilities as intermediary? Or will there be yet another round of symbolic action? The European policy-makers hold the keys to holding platforms to account.

This is not the first time that Facebook is in the spotlight; it's become something of a habit.

In 2018, Cambridge Analytica was revealed to have extracted user data for political influence, using Facebook precisely as it was designed – micro-targeting users.

Facebook then failed to intervene when its services were used to broadcast hate propaganda that contributed to the genocide of the Rohingya population in Myanmar.

Last year, former employee Sophie Zhang posted information showing how Facebook knows their services are used for abuse and propaganda, and that there are internal deliberations about whether to intervene in certain cases. Deliberations that stretch far from "it's the algorithm, we don't know what it does". These are but a few examples of scandal, harm and negligence. It would be remiss not to mention the storming of Capitol Hill, the murder of British MP David Amess or the Christchurch massacre when discussing why there is so much to blow the whistle about.

“It may not be your fault.
But it’s your problem”
Steven Levy

In their 2021 book An Ugly Truth (Harper Collins), journalists Cecilia Kang and Sheera Frenkel cover the controversies around Facebook over the last half-decade. The back cover is pure genius: rather than the traditional blurbs bragging about how great the book is, it has a list of quotes from Facebook founder and CEO Mark Zuckerberg and chief operating officer Sheryl Sandberg! Like these two:

“We never meant to upset you” – Sheryl Sandberg, July 2014

“I ask forgiveness and I will work to do it better” – Mark Zuckerberg, September 2017

Besides such vague statements, Facebook's actions have included hiring a few thousand more moderators and the "oversight board", which is employed by Facebook and has the power to criticize interventions where content is removed or users are restricted, but not cases of non-intervention.

Of course, this is not real self-regulation: proper self-regulation has transparency, independence and teeth. This is well understood in many industries: news, advertising, games etc. There is no reason it could not work in social media.

Facebook’s reluctance to take real action has little to do with some mysterious algorithm that works beyond the control of any human mind, but rather a question of ideology on Facebook’s part. Consider what Facebook Vice President Andrew “Boz” Bosworth said in an internal memo appropriately titled “The Ugly” in 2016:

So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

As the company develops the new, three-dimensional internet – the “metaverse” – these questions will only become more prominent.

Wired Magazine’s Steven Levy said it best: “It may not be your fault. But it’s your problem.”

Selbstzweck, Nein Danke – the recipe for human-centric technology

Tuesday, June 1st, 2021

Technology must be human-centric, says MEP Sabine Verheyen in this exclusive Netopia video spotlight. Technology should not be "Selbstzweck" – an end in itself. This principle can guide both research and legislation, according to the EPP députée.

In this interview, madam Verheyen elaborates on how offline regulation can be brought online, the limitations of the policy-makers’ reach and accountability in AI – what if the AI makes a new Rembrandt?

Episode 1 featured MEP Alex Agius Saliba; this is the second interview in our Netopia video spotlight series. Enjoy!

 

Netopia Spotlight interview with MEP Sabine Verheyen

Per Strömbäck
You're a member of the European Parliament, you're with the European People's Party group, and you're also the chair of the culture committee. So, thank you so much for coming to Netopia.

MEP Sabine Verheyen
Thank you for inviting me.

Per Strömbäck
It’s a pleasure. Now, my first question is: What is it that brings you to the digital policy topics? What is your political drive in this aspect?

MEP Sabine Verheyen
When I entered the Parliament nearly 12 years ago, I was assigned to deal with media questions, and the media landscape is turning more and more digital.

The way people consume and use new technologies, the devices, and the way content is distributed have changed tremendously, because there are new ways of distribution and also new ways of creating content.

So, it was natural that I had to deal with digital questions if I wanted to deal with media, with creativity, with new ideas, with innovation. That was the reason why I started to work on this. When I studied architecture, I just skipped the digital era: I was working with my hands, I made all my plans that way, and all these new CAD systems came up after I finished. So I'm not a digital native, but I have to deal with all these things. And I think it is so for everyone in our society who is older than forty or fifty years of age: we did not grow up with this huge number of digital possibilities, so we have to learn. Perhaps that is also why I have understanding for both sides.

My children grew up with digital tools, but there is also a generation that has to adapt, and we have to have both in mind. We have to see the chances but also the challenges on the digital side, and we have to face the problems that come with the digitised world.

Per Strömbäck
What are some of those challenges?

MEP Sabine Verheyen
We normally think that the digitisation of the internet brings more diversity to people, but in the end if you really take a look, it's not always that diverse.

We are meanwhile caught in bubbles, because the algorithms feed us content similar to what already interested us before. The wide range of diversity is not presented unless you actively search for it.

And that is also something we have to deal with, when it comes to the Digital Services Act and the Digital Markets Act, but also with other regulatory tools: that we keep the digital landscape open and broad, and that we keep the diversity we want to have. Especially when it comes to media policy, this is elementary for media freedom and diversity, and also crucial for functioning democracies.

Per Strömbäck

So, what learnings can you bring then from traditional media policy to this digital media landscape? Is there something useful that can be replicated or is it completely different?

MEP Sabine Verheyen

In the last legislative period we were dealing with the Audiovisual Media Services Directive. Together with Petra Kammerevert from the S&D, I had a co-rapporteurship on the AVMSD, and we had already included some parts of this.

When it comes to audiovisual services online, the aim is to balance out a level playing field between classical media and digital media, especially when it comes to transparency in advertisements: to have a clear separation between advertisement and content, as classical media already had.

And the question is always how to transpose that to the digital world, to digital platforms and tools, so that people really can distinguish between different kinds of content.

There is also the question of media literacy: tools we also had in the past to deal with information and to detect sources. That was easier in the past, because you did not have such a wide range of sources. We can think about what worked well in the past and can be transposed to the new digital times, but also about what has to be done differently from what we know from classical media regulation. What can be transposed? How can we use best practice?

Best practice, for example, is the work of the media authorities, the national regulatory authorities and ERGA, the European Regulators Group for Audiovisual Media Services.

I think we can learn from their experience, from what they did in the past, especially when it comes to these grey zones between what's legal and what's illegal: content that is formally legal but harmful, harmful for democracies, like fake news, foreign interference, disinformation and propaganda. There, I think, we can also learn from the experience we made with classical media players. It cannot be adopted one-to-one, but perhaps the fundamental ideas behind it can be transposed. That's already very interesting.

Per Strömbäck
Now you talk mainly about the audience and how media policy can help the audience, what about the other end, what about the creators and their business partners?

MEP Sabine Verheyen
That is why the Digital Markets Act sets out to create a better level playing field with the platforms that are controlling the market.

The market power of the big platforms, like Google or Facebook and others, is quite high.

And so, to fight for your own rights as a creator, as a publisher, as a distributor of content is quite difficult sometimes, because the market power puts you in a less good position when it comes to negotiations and conditions.

That was the reason why I thought it was very important that we made the Copyright Directive, which included a responsibility for platforms: they should not control the content, but they have to take responsibility when they know about infringements, for example copyright infringements.

The second step that comes now is the Digital Services Act, which also plays a very important role in the distribution and in the relation between content providers, online service providers and the platforms, and in securing the right level of responsibility for the big tech players.

Per Strömbäck
I understand the ambition but sometimes we hear that there is a limit to the reach of the European policymakers.
Do you think these policies can achieve all the things that you hope they will? What's the reach of European policy in the digital sphere?

MEP Sabine Verheyen

You see it already with just the draft, just announcing legislation. Just the draft led to a change in the way platforms work, because they see that they cannot go without responsibility in the future.

For example, take a look at what's going on in Australia: Facebook and Google were forced to share the advertising revenue generated in connection with the presentation of publishers' journalistic content.

That is on the basis of what we did at the European level. So European policy has an impact on this, because all the other regions [are acting]. A short while ago I had a chat with a politician from the Canadian Parliament.

She was very interested in what we did in the AVMSD with the platforms, with the video-sharing platforms but also with video-on-demand. Video-on-demand was something we'd previously regulated, but especially for the video-sharing platforms she was interested in looking at how it works, at what we did.

I think we can make a change when it comes to responsibility, and also secure democratic structures on the platforms. The internet is not a law-free environment; it should be carried and driven by our democratic understandings, by the societal agreements we have, by our values. That means Freedom of Expression on the one hand, but also that our Freedom of Expression is limited when it becomes harmful for others.

And I think levelling this out in the right way can have an impact on how the platforms work and how they take their responsibilities, also in the future.

Per Strömbäck
Speaking of the DSM, the Digital Single Market Directive: it's been many years in the making, there has been a Trilogue, but there seems to be a delay in the implementation in the Member States. And now the Commission says it will issue guidelines. What do you think of the Commission issuing guidelines after the Trilogue? Is that a way for the Commission to change the outcome of the Trilogue?

MEP Sabine Verheyen
Normally not.
These guidelines should reflect what was discussed during the Trilogues; that is clear, as was the case with the AVMSD and also the Copyright Directive.

We could not finalize every detail in the legislation. You want a minimum level of harmonisation at the European level, because you cannot address digital issues only at national level; digital services are very often cross-border offers. So it is good not to have a split, diverse structure for digital players in the market, but a common minimum level of regulation. That is what we wanted to do with the Copyright Directive and the AVMS Directive: to give direction for how we should work. There are still differences in the Member States, but the general line should be similar.

Per Strömbäck
What do you see as the role of the policy maker in Artificial Intelligence, and in particular for the culture and creative sector?

MEP Sabine Verheyen
First, what is important is that Artificial Intelligence technologies are human-centric; the technologies should not replace human beings entirely.

The final decisions must normally be taken by human beings, also in the creative sector. I think it is important that the guidance, the framework for how artificial intelligence tools work, should be set by human beings: by the programmers, by those who use them and build applications.

I see in artificial intelligence good chances for creation, to make things easier, or for very complicated things that need huge amounts of data to work. But in the creative process, questions about the data behind the picture have to be answered. For example, when I make a new picture in a Rembrandt style: is it Rembrandt, in the end, who gave the basis for the new painting with all his pictures, all his work? Because without the data of the Rembrandt paintings, the new Rembrandt never would exist, not in that way. So the question is: who is the rights owner, who is the developer of this? Is it the one with the idea to make a new Rembrandt? For Rembrandt it is not important, because he has been dead for longer than 70 years, but with contemporary artists it becomes a question that has to be cleared up.

There are legal questions that have to be discussed, but also ethical questions, for example when it comes to implement it in education or for vulnerable people.

What can you do with the data? Because artificial intelligence is always based on mass data, and that is the reason why we have to be careful and balance things out: to promote the chances that come with these new technologies, but on the other hand also see the risks. Also when it comes to the distribution of cultural content and works: when it is micro-targeted to a special group, can I guarantee access? What other criteria are in the algorithms that artificial intelligence tools work with? And what is going on in other areas of the creative sector, like the gaming sector?

Technology is a tool. Technology is not a value per se, and we have to set a frame to steer its development in the right direction.
We don't want to limit creativity, which would also limit innovation.

It's quite interesting to see how software and artificial intelligence tools can interact. You can have computers or robots that work with artificial intelligence. There are so many opportunities, also for art and the creative sector. But we always have to see where the risks are, and whether we can keep the human being in the centre of the work, in the centre of creativity, so that we are not overruled by machines one day.

Per Strömbäck
Yes, I think that has been a fear since the days of Frankenstein, going back to the Golem: we always fear that man's creation will bring us down at some point. But that's very interesting, and we will get the chance to talk more about artificial intelligence.

It strikes me that… when we talk about digital development, digital technology including artificial intelligence, the point is often made that development is fast. So can the policy maker keep up, or is technology always one step ahead?

MEP Sabine Verheyen
That is the point we are always discussing: are we fast enough with our legislative processes and political discussions? Are we always running behind, or are we at the forefront?

We sometimes see that developments go faster than we can react, and that's the reason why you have to think differently. We should not react; we should set the guidelines for where to go.

But we have to set fundamental rules that are technologically neutral, and that is something we have been trying to implement in legislation in recent years: moving away from legislation targeted at just one single technology, and making the principles we put into a Regulation or a Directive technologically neutral.

Per Strömbäck
We're almost out of time, but I cannot resist asking you a question on this topic of the policy maker keeping up with technology. Isn't it also the case that public funding pays for a lot of the research that goes into new technology? For example, the big European research programs. Do you see a connection there, because that part of policy should be ahead of the technology?

MEP Sabine Verheyen
That is what we discuss: politics cannot decide what's good and what's not good before we know where we are going. What we want is to enable new developments and research towards new technologies and new ideas, but we also have to keep up with the development when it comes to fundamental structures and values. Research, too, is not outside a value framework.

You cannot do everything that is theoretically possible just because it's possible. That is what I mean when I say it must be human-centric.
We want technology to be developed to serve human beings, to serve our nature, to serve our environment.

Not just to serve itself. For me, technology has to serve the future development of human beings and our planet, and if you keep that in mind, you can support new developments, also fundamental research on principles. That's important to lay a basis for technological innovation and new possibilities. And sometimes you have to let it develop and then take a look and see: is it a good development or not?

But you also need the power to say when something is going in the wrong direction, like we see now with some platforms as spreaders of information that has a negative impact on democratic structures.

Then we have to set guidelines, to say where it is not acceptable for us as a society. Because if a society loses its orientation and its fundamental and basic values just because something is technologically possible, it becomes difficult. And I think that is always the balance.

We also have to find a balance in politics between what is in the interest of technological development and innovation, what is technologically possible, supporting new developments, but on the other hand also giving guidance on the role technology will play in our society.

Per Strömbäck
Thank you very much for coming on to Netopia Spotlight, and we wish you the best of luck with your work.

MEP Sabine Verheyen
Thank you.