
Big Tech Fines v Big Tech Values

Wednesday, October 30th, 2019

These fines are like being charged 10 cents for a speeding violation – which, translated into the real world, would set a toothless precedent for the rules of the road.

Set against these market values, it is clear the companies do not fear fines. They fear competition. We do not regulate them. They regulate us.



The fines levied pale in comparison with the market values of the main tech companies that have fallen foul of national regulators.

Can Big Fines Bring Big Tech Back in Line?

Tuesday, October 29th, 2019

So Big Tech has too much power. What can we do? Is there no-one with more power than the tech businesses? Here are some candidates: consumers, employees, investors, advertisers and governments. Consumers have influence in theory, but in the surveillance economy we all know they are the “product being sold”. Employees have protested at times against things like sexual harassment, but Google ranks as the most popular employer among business and engineering students around the world, so don’t expect the employees to make a huge difference any time soon.

Investors could, in theory, put a lot of pressure on companies to act sustainably and responsibly; there are ethical investment funds, but they tend to look at things like tobacco, fossil fuels, animal testing and guns. It is a bit of a stretch from there to things like freedom and truth. No luck there. Also, investors like stocks that beat the market, which tech has done for decades. (Thanks to “disruptive innovation,” which is a different way of saying making money from somebody else's investment.)

Advertisers may be the best bet; they provide most of the revenue after all. Sometimes advertisers have had enough and pull their ads, as when $140 million US worth of toothpaste ads was cut due to unwanted visibility next to terrorist videos. But the general trend is that more advertising bucks go to Big Tech, not fewer.

Lastly, governments: do they have the power to rein Silicon Valley in? It depends. China doesn't even need to make threats to keep the internet in line on things like displaying Taiwan as part of China, or to punish a sports club whose manager voices support for the protestors in Hong Kong. Not that Netopia approves; two wrongs don't make a right. The toolset available to Western governments is more limited: competition policy, privacy regulation and one or two more things. We have seen some really big fines over the years. GDPR fines can be as much as 2% of global turnover. Except even if enforced, such a fine only scratches the paint on the company's market value: Google parent Alphabet's share price is around 25 times earnings, which means 2% of earnings equals 0.08% of market value. Netopia has an infographic that shows the proportions.
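To make the proportions concrete, here is the back-of-the-envelope arithmetic as a minimal sketch, using the post's own rough figures (the 2% fine and the roughly 25x price-to-earnings multiple are approximations, not exact financial data):

```python
# Back-of-the-envelope: how a 2% fine compares to market value when the
# market values the company at ~25 times annual earnings. Both numbers
# are the post's own rough figures, not exact financial data.
pe_ratio = 25                    # market value ~ 25 x annual earnings
fine_share_of_earnings = 0.02    # a fine worth 2% of earnings

fine_share_of_market_value = fine_share_of_earnings / pe_ratio
print(f"{fine_share_of_market_value:.2%}")   # -> 0.08%
```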

Is there no hope? Is resistance futile? No! There is always hope! Resistance is never futile! The things mentioned may be weak but not pointless. In combination, they may have an effect. Policymakers are beginning to step up to the challenge. A new policy is on the way. There will be a better tomorrow. (Unless, of course, Google’s quantum computer becomes self-aware and decides to kill all humans.)

Sharing Is Cari… for Profit

Wednesday, October 23rd, 2019

File-sharing is not what it used to be. At least not if you think it used to be an altruistic movement with no profit motives. Today, running torrent trackers is a business, not some ideological crusade to bring down evil copyright empires. Don't take it from me, take it from Torrent Freak:

While people have always made money from bootleg videos and music, the very early days of file-sharing mostly embodied the “sharing is caring” ethos. Have a tune, give one away. Have a game, pass it around. However, over the past 15 years – the last 10 in particular – there has been a noticeable shift. Does anyone share or provide platforms altruistically anymore, or is money behind pretty much everything?

Yeah. Except 10-15 years ago it was already big business. If file-sharing was ever altruistic, that stopped long ago. Don't get me wrong, I'm sure a lot of file-sharers really believed they were doing something good (though for most it was perhaps a convenient excuse for not paying for stuff). But for all its peer-distributed hive-mind get-up, there was always a central element, and that element was always commercial. Maybe because it had to be – it costs a lot of money to run servers. Maybe because they could make a buck. Maybe a bit of both.

Case in point: the world-famous torrent tracker The Pirate Bay was sold in 2009 for 60 million Swedish kronor (or 7.8 million US dollars, as converted by none other than Torrent Freak itself at the time). Yes, the deal never happened in the end – the buyer turned out to be bankrupt. But if millions of dollars is not commercial, I don't know what is. (Wait, 2009? That's ten years ago. Where's the party?)

Except The Pirate Bay was commercial long before 2009. When the Swedish police raided its server hall in 2006, the prosecutor collected evidence of three million US dollars in ad revenue. A number which, for the record, was challenged by the accused. Again, I'm referring to Torrent Freak as a source. (Thanks for keeping such a good archive, TF!)

Was TPB the exception? Were all the other pirate services altruistic? Hmm, I don't think so: Kim Dotcom had “millions of dollars” seized by the US authorities (along with a list of expensive watches, jetskis and 108-inch TVs worthy of a Bond villain). At least some of that must have come from his MegaUpload business. While not technically a torrent tracker, cyberlockers like that provide a form of file-sharing.

How about the trailblazer – Napster, back in the 1990s? It attracted an offer of 94 million US dollars in 2002. That was after a US court shut it down, folks. True, that deal also fell through, but if you know somebody who pays 94 million dollars for a random internet service… give them my number.

File-sharing was always commercial. It was also altruistic on some level. Maybe. Now, this is the point where pirates say “what about Google, it also provides links to illegally shared files”. Correct. It does.

Simple Answers to Hard Questions: What #BigTech Should Do About the Digital Services Act

Friday, October 18th, 2019

If you follow the digital policy debate in Europe, you may be curious about how Big Tech will respond to the concept of the Digital Services Act, floated by the incoming von der Leyen Commission. The Commission appears to take a broad approach to the problems of the digital society today, without resorting to a one-size-fits-all solution. There are many aspects worth looking at, but for the moment let's see how Big Tech has responded.

Case in point: a LinkedIn post by eBay's top EU lobbyist Samuel Larinkari. I have met Larinkari; we were on the same panel during the Estonian presidency a couple of years ago, and the topics were similar – platform liability and copyright in that case. On that panel, Larinkari argued that rather than things like the copyright directive's mincemeat approach, the Commission might as well open the E-commerce Directive. I'm not sure, but I took that as a bold challenge – sort of like when, growing up with the icy winters in Sweden, my friends and I would say “you don't dare put your tongue on that lamp post”*. I remember thinking that it was a smart move by Larinkari. (I am also not sure what my own argument was on that panel, but I did bring candy for the audience, so at least I got the cheap tricks right.)

The E-commerce Directive basically says that internet companies should not be responsible for what their users do on their services. This is where the post office analogy comes from: “telecoms are like the post office, we don't ask the post to read the letters”. Those who say this is important to protect privacy and confidentiality of information may have a point (though there are some objections too). These days, however, that exemption from responsibility is the basis of things like the world's biggest taxi company, the world's biggest hotel business, the world's biggest video service and so on. All of them build on the same idea of making money off somebody else's content or offer, sometimes providing value back and most of the time with a take-it-or-leave-it crybaby attitude. (Yes, looking at you, YouTube, Uber and Hotels.com, but the list can be made much longer.) Now, this is why there are different rules for “active” and “passive” services. A service that actively interacts with the content – making recommendations, playlists, rankings, what have you – is an active service (duh!). A passive service is what is sometimes called a dumb pipe: it doesn't really do anything with the data, only distributes it. The criticism here is that almost no service is passive anymore, but that is perhaps not the directive's fault?

On the E-commerce Directive, Mr Larinkari says:

The liability regime of the e-Commerce Directive is often criticized for being outdated (dating back to 2001), having been drafted for a set of very different types of hosting service providers. As a result, its fundamental principles are increasingly being disregarded or derogated from in policy and case law, leading to increased fragmentation and legal uncertainty.

Old? Yes. Outdated? Maybe, but the principle of exemption from liability is hailed by tech companies as the most important principle of the “free and open” internet. Different types of hosting services? True, probably no one thought this would be used for something like the “gig economy”. But does that mean the principles are wrong? Does it not make sense that somebody who actively changes the content thereby assumes some degree of responsibility? Hard to argue against, and words like “outdated” do not help. My guess is that bigger tech wants bigger exemptions.

So what does Mr eBay suggest the Commission do? Here's the list, with my comments:

Make sure exemption from liability stays in place so there is legal certainty for platforms

Great, but also keep in mind the legal certainty for everybody else. Like users, content-owners, third-parties, whoever. There is no greater threat online to their legal certainty than these liability exemptions.

It makes sense that platforms do not need to manually monitor user activity for infringements

Not convinced: if YouTube manages to keep pornographic content off its service, why can't it use the same method for everything else?

Targeted solutions should not be too broad

This is the opposite of the dare Mr Larinkari made at our panel back then. I understand him – who wants them to be too broad? But also, don't make them too narrow, because can we really make a special law for every problem online? Perhaps some general principles may work after all. Let's say we make the targeted solutions “balanced”, okay? That is the most popular word in Brussels anyway.

Good Samaritan principle

Oh, watch out with this one. It sounds great, but this Samaritan is completely different from the guy in the Bible. Normally, good Samaritan laws protect you from lawsuits if you, for example, give first aid after a violent robbery and the victim doesn't make it. In this case, the platforms are not trying to help the victims but rather provide the tools for the crime. I have written more on this topic here.

I think I like the shopping list of problems the Commission wants to cover with this legislation. In addition to the points made by Samuel Larinkari and my comments above, here are some ideas that may be useful on the way:

Transparency – make sure the rules platforms write for themselves are transparent also to the outside world. (This includes how the algorithms work!)

Accountability – rather than arguing that platforms should have as little responsibility as possible for what users do, how about facing up to reality and starting to work toward fixing the problems they have created? Yes, internet platforms do a lot of great things, no argument there. They also hold the keys to things like… you know… the survival of liberal democracy. Be part of the answer.

Third-party oversight – being part of the answer is really hard, but there is help out there! Don't say things like “the algorithm is so complex”; instead say “could you help us with this?”. Don't say “we're hiring 10,000 new moderators”; say “let's have independent third-party oversight”. That's how classic media sorted it out. It's easy if you ask for help, impossible if you don't. (For the record, Mr Larinkari has said none of these things. Others have.)

That’s it. 1, 2, 3. Now you can break for the weekend.

*) In case you haven’t experienced putting your tongue or lips to sub-zero exposed metal, the point is that you get stuck immediately and it’s an incredibly painful affair to pull yourself loose. (Hint: use hot water rather than muscle power!)

Google’s Gingras Gives the Monopolist’s Ultimatum

Tuesday, October 1st, 2019

After fake news and the erosion of classic media, the time was ripe for policy action. Lawmakers looked at the causes and found that the ad money had gone to Google and that truth had suffered from the all-information-is-equal ideology that the same company champions. Sure, the media industry had made its own mistakes along the way, but the playing field was far from level: same audience, same content, different rules.

So what to do about it? Of the many candidates (share data, demand licenses, break up the monopoly, enforce liability), policy-makers picked one: news snippets. By showing parts of a story in search results – the idea goes – Google can sell ads against that content, and some users will never go to the news organisation's site (which created and paid for that content in the first place). News media becomes a double loser: paying for the content Google monetizes without getting a slice of the pie. Yes, you can object that news organisations may get more traffic with snippets, and might otherwise have had to buy ads to get that traffic, but that does not seem consistent with the way the media market has evolved – and anyway, it was not what the EU policy-makers decided. The pattern is familiar: Big Tech using its special legal exceptions to profit from other people's content. So EU policy-makers decided that news snippets should be protected and that those who publish them (again, Google) should pay the news organisations that made the content. Enter the “publisher's right” or “link tax”.

Spain and Germany had already tried something like this, introducing laws that demand that Google pay news media when it uses their content. The response? Google stopped showing those search results. This is of course a move that only a monopolist can make, and those who cheer for Google should perhaps consider what it means that a single company has that much power. Normal competition appears to be set aside. (See also Metcalfe's law, for example here.) Without Google's display, traffic decreased and the news media had to waive their new right. Why would this pattern not repeat itself on the European level? The question was put to then Commissioner for Digital Society Günther Oettinger at a seminar three years ago:

No one can survive globally without being active in the European market, Oettinger said. Spain was not big enough. Even Germany is not big enough. All services are welcome but must follow European rules.

As France implements the new copyright directive, let's see if Commissioner Oettinger was right. Will Google accept that it too has a responsibility for the health of news, or will it use its dominant position to ignore the attempts of policy-makers? A blogpost by Google's Richard Gingras, Vice President of News, points to the latter. Let's take a look at what he has to say:

When the French law comes into force, we will not show preview content in France for a European news publication unless the publisher has taken steps to tell us that’s what they want. This applies to search results across Google services.

So just like in Spain and Germany before, here is the monopolist's ultimatum. Gingras wants to dress it up as a choice, but another option would of course be for Google to make deals with news media and share revenue. In fact, this would be in line with how everything else in the world works: I make something, you want it, let's make a deal.

Publishers have always been able to decide whether their content is available to be found in Google Search or Google News. And we recently introduced more granular webmaster settings that publishers can use to indicate how much preview information they want to include in search results.

Really? Because I'm told Google Search is used to find pirate content, so Google's respect for other people's content at least doesn't outweigh any of its other priorities. When it comes to the news organisations' own websites, that sounds great, but the concept of negotiation is still lacking. “Take it or leave it, because we're so nice we give you that choice.”

The Internet has created more choice and diversity in news than ever before. With so many options, it can be hard for consumers to find the news they are interested in. And for all types of publishers – whether they are big or small, a traditional news site, a new digital player, a local new…

Sounds great. In this choice and diversity, we also get the blessings of fake news and propaganda, presented on equal terms with real news. What's not to like?

In the world of print, publishers pay newsstands to display their newspapers and magazines so readers can discover them. Google provides these benefits at no cost to publishers.

Haha, yes, but those newsstands also sell the papers! They are a revenue source for the news media, helping to pay for things like… journalism. Seriously, Mr Gingras, you know this of course. So let's think of this part as a funny joke. Haha.

We constantly look for new ways to prioritize high quality content in our products and are also investing $300 million over three years in the Google News Initiative, which helps publishers develop new revenue streams and explore innovative ways of presenting news. This includes hundreds of projects from developing new fact-checking efforts, to boosting media literacy, to delivering almost 300,000 trainings to journalists in Europe.

Sounds amazing. By the way, when you say “prioritize high quality content”, are you talking about Russian propaganda, ISIS videos or alt-right conspiracy theories?

By working together we can continue to make progress.

Yes, you can start by sharing revenue and data, and by respecting the rights of content owners.

It looks like the pattern will repeat itself, and perhaps Commissioner Oettinger was right after all: no single country is strong enough to push back. The EU-wide effort may not work either, as member states implement the rules separately and not at the same time. Google wins again.

One last point: will Google's smaller competitors suffer? The thinking is that if a dominant player can ignore the rules but smaller competitors cannot, competition will be held back. But that assumes competition works as normal in online markets, which is not the case (Metcalfe again) – and the way to a better digital society should not be a race to the bottom anyway.
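For readers who have not met Metcalfe's law: it holds that a network's value grows roughly with the square of its user count, so a dominant network's lead in value is far bigger than its lead in users. A minimal sketch with purely illustrative numbers:

```python
# Metcalfe's law sketch: a network's value grows roughly with the square
# of its user count. Numbers are illustrative only: a network with 10x
# the users is worth ~100x as much, which is why challengers struggle.
def network_value(users: int) -> int:
    return users ** 2            # value proportional to n^2, constant omitted

incumbent, challenger = 1_000_000, 100_000
print(network_value(incumbent) / network_value(challenger))  # -> 100.0
```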

Catching the Flame – Television Everywhere or Tailored Content?

Sunday, September 29th, 2019

Europe's public service media risk becoming irrelevant if they cannot attract younger audiences to justify their licence fees. So innovative ideas for grabbing the attention of “digital natives” packed the conference halls and occupied the delegates at the CIRCOM regional broadcasters' gathering in Novi Sad, Serbia. Poland, Hungary and Austria already have credibility problems in their public service media because of political and economic pressure from their own governments.


Local TV in Serbia's autonomous region of Vojvodina, host of the 2019 CIRCOM conference, has still not recovered from the NATO bombing of its headquarters 20 years ago and has yet to move into its new state-of-the-art studios.

Ericsson's ConsumerLab report on TV and Media finds that by 2020, only 1 in 10 consumers will be watching TV on a traditional screen – 50% fewer than in 2010. Young people (16-19 year olds) spend more than half of their viewing time watching on-demand, whilst 60-79 year olds still spend around 80% of their viewing time watching scheduled linear TV.

In order to be accessible to both, the Dutch media professionals Rutger Verhoeven and Erik van Heeswijk suggest: “Forget ‘online first’! Omnichannel is the way.”


According to them, a shift from “online first” to “story first on all channels” is necessary. They regard the content alone as semi-finished material – the experience is the product. If the product is only consumed by a section of the potential consumers, it is not complete. This completeness can only be achieved if all the generations can experience it at the same level. Their product SmartOcto aims to present content in an understandable way. It uses artificial intelligence to sort and filter big data in real time as the TV shows are streaming or airing. In this way the Octo gives insights into when the audience is most engaged, when they interact with the content and when they switch off. The developers believe that great, relevant regional content is not enough. In order to improve the connection with the audiences and to find out who is attracted to which story, they splinter the information at the level of each story and enable the producers to find out exactly what attracts the audience.

Another interesting concept involves young people in producing digital video content. It comes from a small state with a large amount of technological expertise: Switzerland. Luciano Lavagetti, Head of New Digital Projects at the Swiss public service broadcaster RSI, has created WeTube, a contagious and creative space where the classical media meet young digital creators. It offers young people from the Italian-speaking region of Switzerland a production space as well as workshops where they can acquire the competences of professional video makers and discover new trends and talent.

But the Prix CIRCOM for best video journalist went to Sam Everett for her BBC South documentary “County Lines”. With camera and editing skills, music and infographics, she showed shocking pictures of teenagers in rural villages being recruited as mules and dealers by London drug gangs.

Sam and her colleague Emily Ford produce content that aims to reach and represent young audiences, women and people from ethnic minorities. “They usually don't get a voice on the BBC”, Sam says. How do they do it?

“Storytelling for younger audiences has changed so much. We cut out the reporter – there is no reporter in our story. It’s all about the people and their voice and their story. The stories we pick very much focus on individuals and their unique stories. Younger audiences don’t necessarily relate to a guy in a suit, so it’s important that people see someone on the screen they can easily relate to.“


When the BBC South team started the online video section years ago, they would just take television broadcast content and put it onto the social media channels. The outcome was disastrous. That was when they realised: online and social media content has to be different from TV. They started creating their own social media content and posted more people-related stuff. That’s when the figures went up again.

So the experience of British and Dutch broadcasters seems to point in two different directions: the Dutch believe everything should go out over all channels at the same time, yet the BBC experience shows that social media is different and requires bespoke content in order to grab the younger generations’ attention. Ericsson Consumer Lab’s report hints that the BBC is right: “Social media has far from peaked, and one in five respondents believe they will get more of their news from social media in the next five years.” We shall learn who was right in 2024!

Blame the Data

Wednesday, September 18th, 2019

Book review: Race After Technology by Ruha Benjamin

This is a timely book. It hit the market just as three United States cities – San Francisco, Oakland and Somerville – voted to ban facial recognition cameras. Race After Technology covers much more than just that software's tendency to classify Black faces as criminal. But this topic exemplifies Ruha Benjamin's arguments.

As a Black Associate Professor at Princeton, she has access to a wealth of research. Notes and references fill almost one-third of the book’s pages. So this is not a rant against racist robots. It’s a reasoned exploration of what technology means for our concept of race.

On facial recognition, it is easy to chant the geeks' mantra “garbage in, garbage out” – to blame the data used to train the artificial intelligence for any racist outcomes. If the biggest available datasets of Black faces are the custody mugshots of suspects and prisoners, this will “teach” the AI to associate Black with crime. Weighting the algorithm to correct the balance could fix the problem, say some commentators – Harvard's Dr James Zhou, for example.
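To see what “weighting the algorithm” means in practice, here is a minimal sketch using scikit-learn's class_weight parameter (the data is a toy placeholder; this illustrates the general mechanism, not any vendor's actual fix):

```python
# A minimal sketch of the re-weighting idea commentators propose: ask the
# classifier to compensate for an imbalanced training set instead of
# learning its skew. Features and labels below are purely illustrative.
from sklearn.linear_model import LogisticRegression

X = [[0.2, 0.1], [0.9, 0.8], [0.85, 0.9], [0.8, 0.7]]   # toy features
y = [0, 1, 1, 1]                                         # imbalanced labels

# class_weight="balanced" scales each class inversely to its frequency,
# so the rare class counts as much as the common one during training.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
print(clf.predict([[0.3, 0.2]]))
```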

Others, such as Joy Buolamwini of the Massachusetts Institute of Technology, insist it is wrong to rely on facial recognition at all, since it frequently makes mistakes – especially with Black people. She has founded a civil rights movement, the Algorithmic Justice League, to advocate for public scrutiny of AI use cases. Buolamwini insists facial recognition should never be used where lethal force is involved, because of this danger of false-positive misidentification.

Not cited in the book, but significant for European readers, is a new report from the University of Essex, which notes that in its observation of London's Metropolitan Police live trial of facial recognition, the technology frequently made mistakes. Professor Peter Fussey's report cites research showing “All classifiers performed best for lighter skinned individuals and males overall. The classifiers performed worst for darker females.”

Again, the data on which the AI was trained is blamed for this poor performance.

Benjamin rejects this technical approach. Automated decision-making is already selecting job applicants, university students, benefit claimants and prisoners for parole. Human oversight is required, she insists:

“Even when public agencies are employing such systems, private companies are the ones developing them, thereby acting like political entities but with none of the checks and balances … which means that the people whose lives are being shaped in ever more consequential ways by automated decisions have very little say in how they are governed.”

Here the clever play on words in the book's title becomes clear. Humans are racing after technology, trying to catch up with its data-hungry evolution. Benjamin, Buolamwini and others want a mandatory PAUSE button, so that before a new AI product or service is rolled out, there is an informed public debate – an Algorithmic Impact Study.

The title's other meaning asks what our notion of race will be after technology has moulded it. Diving deeper into semantics (well, she is a sociologist after all!), the author contends that “data portability, like other forms of movement is already delimited by race as a technology (her emphasis) that constricts one's ability to move freely.” Is race a technological construct? That seems like political correctness taken to a wild extreme. Yet if we regard racial classifications the way Jaron Lanier describes the categories in online forms (You Are Not a Gadget, 2010), it makes sense. Personal data must be made to fit categories, so that it can be cleaned and scraped and fed to the algorithms in a logical way. Machines make us fit their parameters.

And the current controversy about whether the US 2020 census should include a new question about citizenship shows that the official collection of data – far from being neutral – is a political act.

Equally thought-provoking is Race After Technology’s historical perspective. Kodak film would not capture Black faces because of its chemical composition. Polaroid’s instant camera became reviled amongst Black South Africans during apartheid because it enabled the (White) police to take instant mugshots of suspects. These examples show the author’s wide sweep of archive material.

The book’s stellar array of quotes from Black and female academics makes it worth buying for commentators and companies seeking excellent, well-qualified BAME women to diversify their workforce or extend their range of experts.

And for all who wonder how AI will develop in the near future, Ruha Benjamin quotes a young woman who discovered her social worker using Electronic Benefit Cards to automatically track her spending. “You should pay attention to what happens to us” the woman said, “You’re next.”

Benjamin concludes: “We need to consider that the technology that might be working just fine for some of us now could harm or exclude others….a visionary ethos requires looking down the road to where things might be headed. We’re next.”

Race After Technology by Ruha Benjamin is published by Polity Press, July 2019.

Artificial Inequalities

Wednesday, September 18th, 2019

Three questions to Ruha Benjamin, author of “Race After Technology: Abolitionist Tools for the New Jim Code”

What do you think are the worst effects of bias in Artificial intelligence?

The obvious harms are the way that technologies are used to specifically reinforce certain kinds of citizen scoring systems and ways in which our existing inequalities get reinforced and amplified through technology. That could be seen as the worst of the worst.


But for me the worst of the worst is when we unwittingly reinforce various forms of oppression in the context of trying to do good. I describe this in the book as “techno-benevolence”. It's an acknowledgement that humans are biased. We discriminate. We have all kinds of institutionalised forms of inequality that we take for granted, and here you have technologists who say, “We have a fix for that. If we just employ this software program or just download this app or just include this system in your institution or your company, we can go around that bias.”

 

Another disturbing element in your book is that certain technologies have been created to judge how a person looks, and they are technically not geared up to recognise and perceive differences between people who have dark skin.

Facial recognition software is being adopted not just by police but also by stores that are using it to identify people who look like criminals. So now you have an entire technical apparatus facilitating this, and for some people the first layer was just a question of “Does this technology actually work?”


A number of researchers have shown that in fact it’s very bad at identifying people who are darker-skinned, black women in particular. My colleague Joy Buolamwini at Massachusetts Institute of Technology has demonstrated this really effectively and so just at the level of effectiveness, it’s worse at identifying non-Whites, non-males. Then the added issue is that even if it was perfectly effective, and it could identify everyone perfectly, would we still want it? We have places like San Francisco that have banned it from use among their law enforcement. Other cities are considering legislation and more and more people who are otherwise supporters of more and more types of automated systems – when it comes to facial recognition, they understand the nefarious ways that it could be used. For example, it can be used to dampen social protest. If you can deploy these tools in big crowds and look for people who are exercising their right to protest, that will make people more reluctant to get involved in the democratic process, in the process of holding politicians accountable, if there are these technologies that are surveilling them at a distance. These are the next-layer questions that we have to wrestle with.

 

But it’s not fair to blame the technology, is it? Surely the datasets on which these Artificial Intelligence were trained provide the information on which they base all their decisions. If you could fix the input to the AI then the output would also follow in a more racially balanced way.

It’s a great question and I think you’re right that blame is not necessarily the right framework. But I do think that there’s plenty of responsibility to go around. At many different points there are places where we could be making better decisions. Yes, the existing data is one point. Yes, the input data is biased because of widespread racial profiling practices, so that if you’re black or Latinx you’re more likely to have a criminal record. Therefore, you are training the software to associate criminality with these racial groups.

But another point of responsibility or place we have to look is at the level of code, where you’re weighting certain types of factors more than others and so for example if you live in a community in which there’s a high unemployment rate, many algorithms in the criminal justice system take that as meaning you’re more at risk for being a repeat offender. So the unemployment rate of your community is then associated with you being a higher risk. Now that’s a problem! You have no control over the high unemployment rate. What if we trained our algorithms to look at how hospitals produce risk rather than saying “This individual is high risk” and that’s a different way of orienting the use of technology and also thinking about where the danger in society lies. It’s not about individual level danger. It’s about how our social policies and our social order produce danger for different communities at very different rates also.

Ruha Benjamin is Associate Professor of African American Studies at Princeton University.

Race After Technology: Abolitionist Tools for the New Jim Code is available now, and published by Polity.

GDPR – Springtime for Spammers

Thursday, July 11th, 2019

The much-lauded GDPR has failed to achieve its hyped expectations.

The General Data Protection Regulation (GDPR) has led to loopholes and interpretations moulded to suit business practice.

Greece, Slovenia, and Portugal have not fully implemented the regulation, and Bulgaria, the Czech Republic, Estonia, Finland, Slovenia, and Spain were almost a year late in transposing it into national law.

In the data-industrial complex, the GDPR promise is flawed mostly because privacy compliance is just an inconvenience to business. Most do the bare minimum to comply. Why would they do more?

Despite this, there is a vast ecosystem of corporate compliance tools. There are even GDPR experts claiming to be certified and official GDPR consultants, when in fact there is no official standard.

Since GDPR came into force, Ireland’s Data Protection Commission (the body that monitors Big Tech in the EU) says it has launched 19 statutory investigations, 11 of which focus on Facebook, WhatsApp, and Instagram.

The International Association of Privacy Professionals counts 94,000 individual complaints and 64,000 data breach notifications in the first year of GDPR, yet only €56 million in fines was issued in that year.

If GDPR achieved one thing in its first year, it was global, if fairly toothless, hype.

Helen Dixon, the Irish data protection commissioner, says: “The intention was to modernize the law and harmonize it across Europe. It’s clear we’re moving away from that.”

GDPR is also ailing because there are ‘soft opt-in’ rules that allow ‘related products’ to be marketed to users: a company can legally pitch a secondary product or service to its list or data group. There is also ‘legitimate interest’, which allows for direct marketing. Again, many of the rules are debatable and poorly enforced. For instance, what is ‘strictly necessary’?

To the public, GDPR was about email. It’s no wonder that hoaxing and phishing are commonplace during ‘public service announcement’ periods, like the run-up to GDPR, when people lower their guard.

HOW SPAMMERS AVOID GDPR
One way is to use the Facebook, Google, Twitter and LinkedIn platforms, or to send emails with unsubscribe options (which is legal in the USA under the CAN-SPAM Act of 2003, so long as there is an opt-out). If the email is a ‘service update,’ GDPR can be circumvented. For instance, a cell phone company email might include a sales pitch. The email might read: “Our pricing is changing. To view the new tariffs, go here; to view our new phones and upgrade to a package deal including broadband and TV, go here”. Or ask users to reconfirm their details for security while also slipping in a sales message. These are simple examples, but they show how the platforms offer cover for spammers.

CUSTOMERS SUFFER WITH FEES
Pre-GDPR, UK airline FlyBMI sent 3.3 million emails to an opt-out list and received a £70,000 fine for doing so. From a spammer's angle, the fines are relatively small. Post-GDPR, British Airways is appealing a £183 million fine from the UK's ICO. Ironically, the fine is for leaking customers' details – and those same customers may now see fares rise and so contribute to paying the fine should they book with the airline in the future. The same goes for Marriott Hotels. Breaches are not new. Fines are not new. The levels may have changed; the data practices, not so much.

In the UK, ‘cold pitches’ to corporations are permissible and therefore a spammer might just buy lists of corporate email addresses. If you can’t spam people privately, spam companies or politicians!

So long as the rewards are achievable, any fine is a price worth paying. There are gambling websites paying upwards of €300 per new customer referred to them. AirBnB pays €360 per new host referred. Almost every subscription service on the net has an incentivised affiliate programme (payday loans, get-rich-quick schemes, digital downloads, gambling, software/wares, etc.).

Making money is simple arbitrage between cheap traffic, an email list and non-compliant platforms.
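The arbitrage is plain arithmetic. A minimal sketch using the €300 referral figure quoted above (the click price and conversion rate are illustrative guesses, not sourced numbers):

```python
# Back-of-the-envelope spam arbitrage. The ~300 euro referral payout is
# the figure quoted above; click price and conversion rate are guesses
# purely for illustration.
referral_payout = 300.0     # euros per referred gambling customer (from text)
cost_per_click = 0.50       # euros for cheap platform traffic (assumption)
conversion_rate = 0.005     # 1 signup per 200 clicks (assumption)

clicks = 10_000
cost = clicks * cost_per_click                          # 5,000 euros
revenue = clicks * conversion_rate * referral_payout    # 15,000 euros
print(f"profit: {revenue - cost:,.0f} euros")           # -> profit: 10,000 euros
```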

WORLD WITHOUT MIND vs. DIGITAL WELLBEING
Mobile notifications are an intrusion of privacy: a like, a poke, a retweet, a friend request, and alerts about what other contacts in your network are doing, not what you have done. Each intrusion comes as a notification that entices a response. It is what Foer called a “World Without Mind”, where algorithms dictate a reaction from us.

It is leading to health issues, and companies have introduced dashboards that show the total time spent in an app. Google terms this ‘Digital Wellbeing’ (Apple's version is ‘Screen Time’).

Let's not kid ourselves; the wellbeing is contradicted by a ‘take it or leave it’ ultimatum to consent to the platform's Ts & Cs. The platforms may have limited third-party data sharing and removed many third-party data targeting options, but they left one glaring hole wide open: email uploads and targeted advertising.

THE MONEY IS IN THE LIST
GDPR can’t compete with a list of emails or a remarketing list in Google Ads or Facebook.

Privacy regulations can’t keep up with side-loaded apps. They can’t keep up with data warehousing and transfers.

Privacy regulations can’t keep up with opaque algorithms.

Yet Facebook still allows advertisers to upload email lists that can target users at an individual level. It matches users to the uploaded emails and then creates custom lists. Custom lists mean it is possible to spam news feeds.

Then, in somewhat Cambridge Analytica fashion, an advertiser (that's anyone with a bank account) can expand the targeting to ‘look-alike’ audiences. Those lookalikes are the Facebook users classified as a cohort of the original email contacts. If you ‘like’, ‘check in’ or post from a particular location and exhibit an interest in something, Facebook can expand the dataset and match those actions or demographics to similar people. An email list of one hundred entries might therefore end up targeting ten thousand ‘look-alike’ users. Just as Cambridge Analytica ran a personality quiz and then expanded the dataset on those who responded to it, so too can email uploads deploy a similar payload.
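Mechanically, the upload step is trivial. Here is a minimal sketch of how an email list is prepared for such an upload (the normalise-and-hash step is what the big platforms document for customer matching; the actual upload API differs per platform and is omitted):

```python
# Minimal sketch of preparing an email list for a custom-audience upload.
# The platforms document that emails must be normalised and SHA-256
# hashed before upload; the upload call itself is platform-specific and
# omitted here.
import hashlib

def normalise_and_hash(email: str) -> str:
    cleaned = email.strip().lower()   # normalisation the platforms require
    return hashlib.sha256(cleaned.encode("utf-8")).hexdigest()

email_list = ["Alice@example.com ", "bob@example.com"]   # illustrative list
hashed = [normalise_and_hash(e) for e in email_list]
# 'hashed' is what gets uploaded; the platform matches the hashes against
# its own users and can then expand the audience to "look-alikes".
print(hashed[0][:16], "...")
```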

Even if Facebook anonymizes the data, it still offers cheap traffic and arbitrage for bad actors: work-from-home scams, US-facing gambling companies and penny stocks – and until recently, bitcoin exchanges were rampant on the platform.

Take an email list and create an event on Facebook with a link to a product or website (include a re-marketing pixel), or create a group and upload the emails to LinkedIn. People respond, they click, and they are added to a list. Use the Twitter API to automate everything, record the user IDs of those who respond, then target them with ads. None of this is without work (or cost), but once on a platform, GDPR is largely irrelevant.

Google Ads is a similar story. AdWords can be used to target advertising at a certain list of emails. Google permits any list of emails to be uploaded. No checks. Just upload 10,000 emails, choose your targeting method, and sit back. No compliance, no GDPR.

SCORCHED EARTH IN ADLAND
Like Facebook, Google has been affected by GDPR, but not in a bad way! Google has attempted to foist its rules onto publishers, telling them to gain consent as if Google were a mere data processor, when Google is in fact a data controller – it, not the publisher, holds the user data and sells it. The data controller should be the one gaining consent.

The other quagmire to come out of GDPR concerns third-party exchanges, like AppNexus, which hitherto sold traffic to Google, which then sold it on to its advertisers. The problem is that some of the ad exchanges are questionable, and Google could not confirm that each exchange had gained user consent. So Google Display Network and DoubleClick for Publishers stopped serving traffic from many third-party ad exchanges. Reports note a 25-40 percent drop in programmatic ad sales. For better or worse, the losers have been the third-party ad exchanges. In turn, Google's market dominance increased.

Google and Facebook command 84 per cent of global spending on digital advertising, and GDPR has consolidated that dominant position. With fewer programmatic ads available, the price of those that are delivered has risen. The market is rapidly moving to a duopoly, leaving publishers cap-in-hand to either Facebook or Google. Both platforms are real winners from GDPR in terms of advertising.

Publishers must accept Facebook and Google’s GDPR terms or remain outside of the advertising eco-system. GDPR was meant to protect the user, not the platforms (or spammers!).

Facebook says: here are our terms; this is how we harvest and profile you. Don't agree? OK, goodbye. To the user, it is a Faustian pact to stay – and most do. The #DeleteFacebook campaign never came close to harming the company. Facebook reported mouth-watering 2018 financial results: despite the scandal and techlash, fourth-quarter results beat projections for earnings and revenue as profit hit $6.88bn, up from $4.27bn a year before.

But it is completely irrelevant whether you are registered on the site, because even if you have never signed up, ‘Zuck’ is still tracking you. The data is gathered from websites you visit that contain a ‘like’ button or Facebook pixel.

Facebook cookies are placed on your device while you surf the open web; data also comes from contact lists uploaded by friends or family. If you were in those contacts, Facebook has a file on you.

Google is no better. One example of non-compliance: when a user turns off all tracking on their phone but checks a Google map, their position is recorded and they are monitored.

GDPR'S ACHILLES' HEEL
The weak spots are not hard to locate. Adhere to GDPR by never holding the data. Use others to do that for you: LinkedIn, Twitter, Facebook, Google… even eBay and PayPal.

The way a spammer avoids GDPR compliance is along the lines of the way Google has acted towards publishers. Google wants publishers to gain user consent, while Google makes the money. The spammer wants the platform to gain consent, while the spammer makes money.

GDPR tried to change everything, and if everything changes, everything stays the same. So not a lot has actually changed – save the level of fines. Subject Access Requests were seen as the control mechanism, but firms can choose what to include in them, or in some cases simply ignore them.

The walled garden of platforms follows terms and conditions, not laws and regulations, and they can afford to pay or fight the fines.

Fighting Fake News with Bots and Buns

Monday, July 8th, 2019

Bots have a bad name in online news. They breed false stories, distort elections and spread hate speech. Avaaz reports that the 2019 European Parliament elections produced three million examples of this. Yet bots such as Voitoo and TextRobot are being hailed as the saviours of local news, and as digital tools for democracy.

The theory: regional public TV and radio’s elderly audience is dying out. To engage young people in democratic processes, they must innovate – but without alienating the grannies.


So it is no coincidence that Europe's public service broadcasters chose Novi Sad for their 2019 CIRCOM conference. Serbia's second city, home of the EXIT music festival, is the current European Youth Capital. And its public broadcaster RTV Vojvodina produces news in 16 languages and is about to move into new high-tech premises, replacing the buildings that NATO bombed in 1999.

Fake Nausea

Showcasing solutions at CIRCOM, Jarno Koponen of YLE Finland presented a grisly new game called Troll Factory. “People say ‘I'm nauseated. I didn't know this was happening.’ It's like a vaccine to make them aware,” he told me. YLE have been proactive against hate speech since their reporter Jessikka Aro took on her trolls and won.

They now also have a digital ‘co-worker’ to ease their news workflow. Meet Voitoo: “The name means Victory – it’s not masculine or feminine. We’ve had it produced as a cuddly toy so that the journalists feel it’s their friend and helper,” Koponen told me. He admitted Voitoo has limitations, since it cannot yet cover the full range of topics that pop up in daily regional news.


“A robot can't replace an ambitious, talented reporter,” agreed Robin Govik, chief digital officer of MittMedia. “It can only make that person's job easier.”

Govik insists that automating data-gathering “frees up” journalists to concentrate on tasks that bots cannot do. MittMedia's most successful news machine is the Homeowner bot. It scours the Land Registry for data points: location, size, price, etc. It spots outliers – a high price or a famous owner, for example. If it were a reporter, you would say it had “found a news angle”. But it's not. The result is a short machine-written article bylined “by MittMedia TextRobot”.
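As a sketch of that pattern (the field names, outlier rule and text template are invented for illustration; MittMedia's actual pipeline is not public):

```python
# Illustrative sketch of a Homeowner-style news bot: scan property-sale
# records, flag an outlier, fill a text template. Field names and the
# outlier rule are invented; MittMedia's real pipeline is not public.
from statistics import median

sales = [
    {"place": "Sundsvall", "price": 2_150_000, "size_m2": 105},
    {"place": "Sundsvall", "price": 2_400_000, "size_m2": 120},
    {"place": "Sundsvall", "price": 7_900_000, "size_m2": 210},
]

typical = median(s["price"] for s in sales)
for s in sales:
    if s["price"] > 2 * typical:         # crude "news angle" detector
        print(f"A {s['size_m2']} square metre house in {s['place']} sold "
              f"for {s['price']:,} kronor, well above the local norm. "
              f"(by TextRobot)")
```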

Similar bots write localised weather reports. Automated sport reporting means not only football and ice hockey but also minority sports all get a match report every time they are played.

‘Garbage In, Garbage Out’

If anything goes wrong, Govik blames a human for providing bad source data. Here the old saying “garbage in, garbage out” applies: bad data can result in a catastrophic loss of trust in the news provider.


Europeans apparently do not care that some of the news is human-free. According to the European Commission's Eurobarometer, 75% of respondents are positive towards new technologies. It is the direct opposite of US users' views: the Pew Research Center's 2017 survey found that 72% of respondents expressed worry about automation.

Deepfake videos, multi-lingual lip syncing and face-swapping are tools of the trade for purveyors of disinformation. But like all software, they can also be used for legitimate purposes. Jacob Markham from the BBC Blue Room explained to CIRCOM how these apps can support national and regional identity – for example by dubbing the news into a different language or dialect.

Yet bots are only part of the answer. Trustworthy news demands a personal connection. When Bavarian Broadcasting's young presenters ditched TV and moved into a “flatshare” on Instagram, that worked. The reporters chat in their “kitchen” about Venezuela, and about who ate the last avocado in the fridge. Insta followers have now reached 41,400. BBC Brexitcast and YouTuber Rezo's “Destruction of the CDU” (14 million views) also point to new forms of youth engagement. But in the European Youth Capital Novi Sad, the real teens working with RTV's Ivana Miloradov feel patronised by blue-haired presenters and dizzying special effects. They want their views to be heard in an old-school audio podcast!

Giving a voice to older viewers, Anne Lagercrantz's team at Sweden's public service television SVT toured the country with coffee and cinnamon buns to chat about the news viewers need.

German public broadcaster ZDF's experience was more hardcore. Their crew lived in a flat in a Soviet-style block in Cottbus for six weeks, interviewing locals. Cottbus witnessed 2018's most vociferous anti-immigrant protests and attacks on camera crews. ZDF's experiment revealed what motivates people to act in this way.

Joining viewers up with the political process takes even more guts and patience. RTV Oost in the Netherlands ran a listening exercise that produced a CIRCOM prize for innovation – and a book of grievances. They delivered it to the newly elected mayor, all captured on camera and broadcast as-live. The mayor promised to act on them.

If he does, this might qualify as “constructive journalism”. The concept made many conference delegates squirm and roll their eyes.

“But it's not about happy stories,” insisted Cynara Vetch of the Constructive Institute. She cites research showing that almost half of respondents (48%) believe there is too much negativity and 37% don't believe it. Political polarisation comes from broadcasters pitting political opponents against each other, says Vetch. She argues instead for a “calm, curious space” such as a round-table debate and a real-life follow-up. Bots can't deliver that. Yet.