Author Archive

Google’s Gingras Gives the Monopolist’s Ultimatum

Tuesday, October 1st, 2019

After fake news and the erosion of classic media, the time was ripe for policy action. Lawmakers looked at the causes and found that the ad money had gone to Google, and that truth had suffered from the all-information-is-equal ideology that the same company champions. Sure, the media industry had made its own mistakes along the way, but the playing field was far from level: same audience, same content, different rules.

So what to do about it? Of the many candidates (share data, demand licences, break up the monopoly, enforce liability), policy-makers picked one: news snippets. By showing parts of a story in search results – the idea goes – Google can sell ads against that content, and some users will never visit the news organisation’s site (which created and paid for that content in the first place). News media becomes a double loser: paying for the content Google monetizes without getting a slice of the pie. Yes, you can object that news organisations perhaps get more traffic with snippets, and might otherwise have had to buy ads to get that traffic, but that doesn’t seem consistent with the way the media market has evolved – and anyway, this is not what EU policy-makers decided. The pattern is familiar: Big Tech using its special legal exceptions to profit from other people’s content. So EU policy-makers decided that news snippets should be protected and those who publish them (again, Google) should pay the news organisations that made the content. Enter the “publisher’s right” or “link tax”.

Spain and Germany had already tried something like this – introducing laws that required Google to pay news media when it uses their content. The response? Google stopped showing those search results. This is of course a move that only a monopolist can make, and those who cheer for Google should perhaps consider what it means that a single company has that much power. Normal competition appears to be set aside. (See also Metcalfe’s law, for example here.) Without Google’s display, traffic decreased and the news media had to waive their new right. Why would this pattern not repeat itself at the European level if applied by the EU? The question was put to then Commissioner for Digital Society Günther Öttinger at a seminar three years ago:

No one can survive globally without being active in the European market, Öttinger said. Spain was not big enough. Even Germany is not big enough. All services are welcome but must follow European rules.

As France implements the new copyright directive, let’s see if Commissioner Öttinger was right. Will Google accept that it too has a responsibility for the health of news or will it use its dominant position to ignore the attempts of policy-makers? A blogpost by Google’s Richard Gingras, Vice President of News, points to the latter. Let’s take a look at what he has to say:

When the French law comes into force, we will not show preview content in France for a European news publication unless the publisher has taken steps to tell us that’s what they want. This applies to search results across Google services.

So just like in Spain and Germany before, here’s the monopolist’s ultimatum. Gingras wants to dress it up like a choice, but another option would of course be for Google to make deals with news media and share revenue. In fact, this would be in line with how everything else in the world works: I make something, you want it, let’s make a deal.

Publishers have always been able to decide whether their content is available to be found in Google Search or Google News. And we recently introduced more granular webmaster settings that publishers can use to indicate how much preview information they want to include in search results.

Really? Because I’m told Google Search is used to find pirate content, so Google’s respect for other people’s content at least doesn’t outweigh any of its other priorities. When it comes to the news organisations’ own websites, that sounds great, but the concept of negotiation is still lacking. “Take it or leave it, because we’re so nice we give you that choice.”
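For context, the “granular webmaster settings” Gingras refers to are presumably the robots meta directives Google documents (max-snippet, max-image-preview, nosnippet), which cap how much preview content search results may show. A minimal sketch of how a publisher’s site might emit such a tag – the helper function’s name and defaults are mine, not Google’s:

```python
def robots_preview_meta(max_snippet=-1, max_image_preview="standard"):
    """Build a robots meta tag limiting how much preview content
    search engines may show. -1 means no snippet length limit;
    0 means no text snippet at all."""
    return (f'<meta name="robots" content="'
            f'max-snippet:{max_snippet}, '
            f'max-image-preview:{max_image_preview}">')

# A publisher opting out of text snippets and image previews entirely:
tag = robots_preview_meta(max_snippet=0, max_image_preview="none")
```

The point of the French law, of course, is that opting out this way costs the publisher its visibility – which is exactly the ultimatum discussed above.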

The Internet has created more choice and diversity in news than ever before. With so many options, it can be hard for consumers to find the news they are interested in. And for all types of publishers – whether they are big or small, a traditional news site, a new digital player, a local new…

Sounds great. In this choice and diversity, we also get the blessings of fake news and propaganda, presented on equal terms as real news. What’s not to like?

In the world of print, publishers pay newsstands to display their newspapers and magazines so readers can discover them. Google provides these benefits at no cost to publishers.

Haha, yes, but those newsstands also sell the papers! They are a revenue source for the news media, helping to pay for things like… journalism. Seriously, Mr Gingras, you know this of course. So let’s think of this part as a funny joke. Haha.

We constantly look for new ways to prioritize high quality content in our products and are also investing $300 million over three years in the Google News Initiative, which helps publishers develop new revenue streams and explore innovative ways of presenting news. This includes hundreds of projects from developing new fact-checking efforts, to boosting media literacy, to delivering almost 300,000 trainings to journalists in Europe.

Sounds amazing. By the way, when you say “prioritize high quality content”, are you talking about Russian propaganda, ISIS videos or alt-right conspiracy theories?

By working together we can continue to make progress.

Yes, and you can start by sharing revenue and data, and by respecting the rights of content owners.

It looks like the pattern will repeat itself, and perhaps Commissioner Öttinger was right after all: no single country is strong enough to push back. An EU-wide effort may not work either, since member states implement the rules separately and at different times. Google wins again.

One last point: will Google’s smaller competitors suffer? The thinking is that if a dominant player can ignore the rules but smaller competitors cannot, competition will be held back. But that assumes competition works as normal in online markets, which is not the case (Metcalfe again) – and the way to a better digital society should not be a race to the bottom anyway.

Catching the Flame – Television Everywhere or Tailored Content?

Sunday, September 29th, 2019

Europe’s public service media risk becoming irrelevant if they cannot attract younger audiences to justify their licence fees. So innovative ideas for grabbing the attention of “digital natives” packed the conference halls and focused the attention of delegates at the CIRCOM regional broadcasters’ gathering in Novi Sad, Serbia. In Poland, Hungary and Austria, PSMs already have credibility problems because of political and economic pressure from their own governments.

By 2020, only 1 in 10 customers will be watching TV on a traditional screen – 50% less than in 2010.

Local TV in Serbia’s autonomous region of Vojvodina, host of the 2019 CIRCOM conference, has still not recovered from the NATO bombing of its headquarters 20 years ago and has yet to move into its new state-of-the-art studios.

Ericsson’s ConsumerLab report on TV and Media finds that by 2020, only 1 in 10 customers will be watching TV on a traditional screen – 50% less than in 2010. Young people (16-19 year olds) spend more than half of their viewing time watching on-demand, whilst 60-79 year olds still spend around 80% of their viewing time watching scheduled linear TV.

In order to be accessible to both, the Dutch media professionals Rutger Verhoeven and Erik van Heeswijk suggest: “Forget ‘online first’! Omnichannel is the way”.

A shift from online first to story first on all channels is necessary. They regard the content alone as semi-finished material – the experience is the product.

According to them, a shift from online first to story first on all channels is necessary. They regard the content alone as semi-finished material – the experience is the product. If the product is only consumed by a section of the potential consumers, it is not complete. This completeness can only be achieved if all the generations can experience it at the same level. Their product SmartOcto aims to present content in an understandable way. It uses Artificial Intelligence to sort and filter big data in real time as the TV shows are streaming or airing. In this way the Octo gives insights about when the audience are most engaged, when they interact with the content and when they switch off. The developers believe that great relevant regional content is not enough. In order to improve the connection with the audiences and to find out who is attracted to which story, they splinter the information at the level of each story and enable the producers to find out exactly what attracts the audience.

Another interesting concept involves young people in producing digital video content. It comes from a small state with a large amount of technological expertise: Switzerland. Luciano Lavagetti, Head of New Digital Projects at the Swiss public service broadcaster RSI, has created WeTube, a contagious and creative space where the classical media meet young digital creators. It offers young people from the Italian-speaking region of Switzerland a production space as well as workshops where they can acquire the competences of a professional video maker and discover new trends and talent.

But the Prix CIRCOM for best video journalist went to Sam Everett for her BBC South documentary “County Lines”. With camera and editing skills, music and infographics, she showed shocking pictures of teenagers in rural villages being recruited as mules and dealers by London drug gangs.

Sam and her colleague Emily Ford produce content that aims to reach and represent young audiences, women and people from ethnic minorities. “They usually don’t get a voice on the BBC,” Sam says. How do they do it?

“Storytelling for younger audiences has changed so much. We cut out the reporter – there is no reporter in our story. It’s all about the people and their voice and their story. The stories we pick very much focus on individuals and their unique stories. Younger audiences don’t necessarily relate to a guy in a suit, so it’s important that people see someone on the screen they can easily relate to.“

online and social media content has to be different from TV

When the BBC South team started the online video section years ago, they would just take television broadcast content and put it onto the social media channels. The outcome was disastrous. That was when they realised: online and social media content has to be different from TV. They started creating their own social media content and posted more people-related stuff. That’s when the figures went up again.

So the experience of British and Dutch broadcasters seems to point in two different directions: the Dutch believe everything should go out over all channels at the same time, yet the BBC experience shows that social media is different and requires bespoke content in order to grab the younger generations’ attention. Ericsson Consumer Lab’s report hints that the BBC is right: “Social media has far from peaked, and one in five respondents believe they will get more of their news from social media in the next five years.” We shall learn who was right in 2024!

Blame the Data

Wednesday, September 18th, 2019

Book review: Race After Technology by Ruha Benjamin

This is a timely book. It hit the market just as three United States cities – San Francisco, Oakland and Somerville – voted to ban facial recognition cameras. Race After Technology covers much more than just that software’s tendency to classify Black faces as criminals. But this topic exemplifies Ruha Benjamin’s arguments.

As a Black Associate Professor at Princeton, she has access to a wealth of research. Notes and references fill almost one-third of the book’s pages. So this is not a rant against racist robots. It’s a reasoned exploration of what technology means for our concept of race.

On facial recognition, it is easy to chant the geeks’ mantra: “garbage in, garbage out” – to blame the data that are used to train the Artificial Intelligence for any racist outcomes. If the biggest available datasets of Black faces are the custody mugshots of suspects and prisoners, this will “teach” the AI to associate Black with crime. Weighting the algorithm to correct the balance could fix the problem, say some commentators – Harvard’s Dr James Zhou, for example.
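The reweighting idea can be sketched in a few lines: give each (group, label) cell of the training data equal total weight, so that an over-represented pairing – say, one group dominating the “criminal” label – no longer dominates training. This is only a toy illustration of the general approach, not any particular researcher’s method:

```python
from collections import Counter

def balancing_weights(groups, labels):
    """Per-sample weights inversely proportional to the size of each
    (group, label) cell, so every cell contributes equal total weight
    to training instead of the biggest cell dominating."""
    counts = Counter(zip(groups, labels))
    n_cells = len(counts)
    n = len(groups)
    # Each cell gets total weight n / n_cells, split among its members.
    return [n / (n_cells * counts[(g, y)]) for g, y in zip(groups, labels)]

# Toy data: group A appears only with the positive (criminal) label.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 1, 1, 0, 0]
weights = balancing_weights(groups, labels)
```

After reweighting, group A’s three positive examples together carry no more weight than group B’s single one, which is the balance-correcting effect the commentators describe.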

Others, such as Joy Buolamwini of Massachusetts Institute of Technology, insist it is wrong to rely on facial recognition, since it frequently makes mistakes – especially with Black people. She has founded a civil rights movement, the Algorithmic Justice League, to advocate for public scrutiny of AI use cases. Buolamwini insists facial recognition should never be used in contexts where lethal force may be deployed, because of this danger of false positive misidentification.

Not cited in the book, but significant for European readers: a new report from the University of Essex notes that in its observation of London’s Metropolitan Police’s live trial of facial recognition, the system frequently made mistakes. Professor Peter Fussey’s report cites research showing “All classifiers performed best for lighter skinned individuals and males overall. The classifiers performed worst for darker females.”

Again, the data on which the AI trained is blamed for this poor performance.

Benjamin rejects this technical approach. Automated decision-making is already selecting job applicants, university students, benefit claimants and prisoners for parole. Human oversight is required, she insists:

“Even when public agencies are employing such systems, private companies are the ones developing them, thereby acting like political entities but with none of the checks and balances … which means that the people whose lives are being shaped in ever more consequential ways by automated decisions have very little say in how they are governed.”

Here the clever play on words in the book’s title becomes clear. Humans are racing after technology, trying to catch up with its data-hungry evolution. Benjamin, Buolamwini and others want a mandatory PAUSE button, so that before a new AI product or service is rolled out, there is an informed public debate – an Algorithmic Impact Study.

The title’s other meaning asks what our notion of race will be after technology has moulded it. Diving deeper into semantics (well, she is a sociologist after all!), the author contends that “data portability, like other forms of movement is already delimited by race as a technology (her emphasis) that constricts one’s ability to move freely.” Is race a technological construct? That seems like political correctness taken to a wild extreme. Yet if we regard racial classifications the way that Jaron Lanier describes the categories in online forms (You Are Not a Gadget, 2010), it makes sense. Personal data must be made to fit categories, so that it can be cleaned and scraped and fed to the algorithms in a logical way. Machines make us fit their parameters.

And the current controversy about whether the US 2020 census should include a new question about citizenship shows that the official collection of data – far from being neutral – is a political act.

Equally thought-provoking is Race After Technology’s historical perspective. Kodak film would not capture Black faces because of its chemical composition. Polaroid’s instant camera became reviled amongst Black South Africans during apartheid because it enabled the (White) police to take instant mugshots of suspects. These examples show the author’s wide sweep of archive material.

The book’s stellar array of quotes from Black and female academics makes it worth buying for commentators and companies seeking excellent, well-qualified BAME women to diversify their workforce or extend their range of experts.

And for all who wonder how AI will develop in the near future, Ruha Benjamin quotes a young woman who discovered her social worker using Electronic Benefit Cards to automatically track her spending. “You should pay attention to what happens to us” the woman said, “You’re next.”

Benjamin concludes: “We need to consider that the technology that might be working just fine for some of us now could harm or exclude others….a visionary ethos requires looking down the road to where things might be headed. We’re next.”

 Race After Technology by Ruha Benjamin is published by Polity Press, July 2019.

Artificial Inequalities

Wednesday, September 18th, 2019

Three questions to Ruha Benjamin, author of “Race After Technology: Abolitionist Tools for the New Jim Code”

What do you think are the worst effects of bias in Artificial intelligence?

The obvious harms are the way that technologies are used to specifically reinforce certain kinds of citizen scoring systems and ways in which our existing inequalities get reinforced and amplified through technology. That could be seen as the worst of the worst.

The worst of the worst is when we unwittingly reinforce various forms of oppression in the context of trying to do good

But for me the worst of the worst is when we unwittingly reinforce various forms of oppression in the context of trying to do good. I describe this in the book as “techno-benevolence”. It’s an acknowledgement that humans are biased. We discriminate. We have all kinds of institutionalised forms of inequalities that we take for granted, and here you have technologists who say, ”We have a fix for that. If we just employ this software program or just download this app or just include this system in your institution or your company, we can go around that bias”.

 

Another disturbing element in your book is that certain technologies have been created to judge how a person looks, yet they are technically not geared up to recognise and perceive differences between people who have dark skin.

Facial recognition software is being adopted not just by police but also by stores that are using it to identify people who look like criminals. And so now you have an entire technical apparatus that is facilitating this, and so for some people the first layer was just a question of “Does this technology actually work?”

If you can deploy these tools in big crowds and look for people who are exercising their right to protest, that will make people more reluctant to get involved in the democratic process

A number of researchers have shown that in fact it’s very bad at identifying people who are darker-skinned, black women in particular. My colleague Joy Buolamwini at Massachusetts Institute of Technology has demonstrated this really effectively and so just at the level of effectiveness, it’s worse at identifying non-Whites, non-males. Then the added issue is that even if it was perfectly effective, and it could identify everyone perfectly, would we still want it? We have places like San Francisco that have banned it from use among their law enforcement. Other cities are considering legislation and more and more people who are otherwise supporters of more and more types of automated systems – when it comes to facial recognition, they understand the nefarious ways that it could be used. For example, it can be used to dampen social protest. If you can deploy these tools in big crowds and look for people who are exercising their right to protest, that will make people more reluctant to get involved in the democratic process, in the process of holding politicians accountable, if there are these technologies that are surveilling them at a distance. These are the next-layer questions that we have to wrestle with.

 

But it’s not fair to blame the technology, is it? Surely the datasets on which these Artificial Intelligence were trained provide the information on which they base all their decisions. If you could fix the input to the AI then the output would also follow in a more racially balanced way.

It’s a great question and I think you’re right that blame is not necessarily the right framework. But I do think that there’s plenty of responsibility to go around. At many different points there are places where we could be making better decisions. Yes, the existing data is one point. Yes, the input data is biased because of widespread racial profiling practices, so that if you’re black or Latinx you’re more likely to have a criminal record. Therefore, you are training the software to associate criminality with these racial groups.

But another point of responsibility, or place we have to look, is at the level of code, where you’re weighting certain types of factors more than others. For example, if you live in a community in which there’s a high unemployment rate, many algorithms in the criminal justice system take that as meaning you’re more at risk of being a repeat offender. So the unemployment rate of your community is then associated with you being a higher risk. Now that’s a problem! You have no control over the high unemployment rate. What if we trained our algorithms to look at how hospitals produce risk, rather than saying “This individual is high risk”? That’s a different way of orienting the use of technology, and also of thinking about where the danger in society lies. It’s not about individual-level danger. It’s about how our social policies and our social order produce danger for different communities at very different rates.

Ruha Benjamin is Associate Professor of African American Studies at Princeton University.

Race After Technology: Abolitionist Tools for the New Jim Code is available now, and published by Polity.

GDPR – Springtime for Spammers

Thursday, July 11th, 2019

The much-lauded GDPR has failed to achieve its hyped expectations.

The General Data Protection Regulation (GDPR) has led to loopholes and interpretations moulded to suit business practice.

Greece, Slovenia, and Portugal have not fully implemented the regulation, and Bulgaria, the Czech Republic, Estonia, Finland, Slovenia, and Spain were almost a year late in transposing it into national law.

In the data-industrial complex, the GDPR promise is flawed mostly because privacy compliance is just an inconvenience to business. Most do the bare minimum to comply. Why would they do more?

Despite this, there is a vast ecosystem of corporate compliance tools. There are even GDPR experts claiming to be certified and official GDPR consultants, when in fact there is no official standard.

Since GDPR came into force, Ireland’s Data Protection Commission (the body that monitors Big Tech in the EU) says it has launched 19 statutory investigations, 11 of which focus on Facebook, WhatsApp, and Instagram.

The International Association of Privacy Professionals counts 94,000 individual complaints and 64,000 data breach notifications in GDPR’s first year, yet only €56 million in fines were issued.

If GDPR achieved one thing in its first year, it was global, if fairly toothless, hype.

Helen Dixon, the Irish data protection commissioner, says: “The intention was to modernize the law and harmonize it across Europe. It’s clear we’re moving away from that.”

GDPR is also ailing because there are ‘soft-opt-in’ rules that allow ‘related products’ to be marketed to users. A company can legally pitch a secondary product or service to its list or data group. There’s also ‘legitimate interest’, which allows for direct marketing. Again, many of the rules are debatable and poorly enforced. For instance, what is ‘strictly necessary’?

To the public, GDPR was about email. It’s no wonder that hoaxing and phishing are commonplace during ‘public service announcement’ periods, like the run-up to GDPR, when people lower their guard.

HOW SPAMMERS AVOID GDPR
One way is to use the Facebook, Google, Twitter or LinkedIn platforms, or to send emails with unsubscribe options (legal under the USA’s CAN-SPAM Act of 2003, so long as there’s an opt-out). If the email is a ‘service update’, GDPR can be circumvented. For instance, a cell phone company email might include a sales pitch. The email might read: “Our pricing is changing. To view the new tariffs, go here; to view our new phones and upgrade to a package deal including broadband and TV, go here”. Or ask users to reconfirm their details for security while also slipping in a sales message. These are simple examples, but they show how the platforms offer a cover for spammers.

CUSTOMERS SUFFER THE FINES
Pre-GDPR, UK airline Flybe sent 3.3 million emails to an opt-out list and received a £70,000 fine for doing so. From a spammer’s angle, the fines are relatively small. Post-GDPR, British Airways is appealing a £183 million fine from the UK ICO. Ironically, the fine is for leaking customers’ details, yet those same customers may now see fares rise and so help pay the fine should they book with the airline again; the same goes for Marriott Hotels. Breaches are not new. Fines are not new. The levels may have changed; the data practices, not so much.

In the UK, ‘cold pitches’ to corporations are permissible and therefore a spammer might just buy lists of corporate email addresses. If you can’t spam people privately, spam companies or politicians!

So long as the rewards are achievable, any fines are worthwhile. There are gambling websites paying upwards of €300 per new customer referred to them. AirBnB pays €360 per new host referred. Almost every subscription service on the net has an incentivised affiliate programme (payday loans, get-rich-quick schemes, digital downloads, gambling, software/wares, etc).

Making money is simple arbitrage between cheap traffic, an email list and non-compliant platforms.

WORLD WITHOUT MIND vs. DIGITAL WELLBEING
Mobile notifications are an intrusion of privacy: a like, a poke, a retweet, a friend request, alerts about what other contacts in your network are doing – not what you have done. Each intrusion comes as a notification that entices a response. As Franklin Foer called it, a “World Without Mind”, where algorithms dictate a reaction from us.

It’s leading to health issues, and companies have introduced dashboards that show the total time spent in an app. Google termed this ‘digital wellbeing’; Apple’s equivalent is ‘Screen Time’.

Let’s not kid ourselves; the well-being is contradicted by a ‘take it or leave it’ ultimatum to consent to the platform’s Ts & Cs. The platforms may have limited third-party data sharing, and they have removed many third-party data targeting options. But they left one glaring hole wide open: email uploads and targeted advertising.

THE MONEY IS IN THE LIST
GDPR can’t compete with a list of emails or a remarketing list in Google Ads or Facebook.

Privacy regulations can’t keep up with side-loaded apps. They can’t keep up with data warehousing and transfers.

Privacy regulations can’t keep up with opaque algorithms.

Yet, Facebook still allows advertisers to upload email lists that can target at an individual level. They match users to emails uploaded and then create custom lists. Custom lists mean it’s possible to spam news feeds.

Then, in a somewhat Cambridge Analytica fashion, an advertiser (that’s anyone with a bank account) can expand the targeting to ‘look-alike’ audiences. Those lookalikes are the Facebook users classified as a cohort of the original email contact. If you ‘like’, ‘check-in’, or post from a particular location and exhibit an interest in something, Facebook can expand the data set and match those actions or demographics to similar people. Therefore, an email list of one hundred entries might end up targeting ten thousand ‘look-alike’ users. Just as Cambridge Analytica ran a personality quiz and then expanded the data set on those who responded to it, so too can email uploads deploy a similar payload.
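The matching itself runs on hashed identifiers: custom-audience uploads are expected to be trimmed, lowercased and SHA-256-hashed, and the platform compares those hashes against hashes of its own users’ emails. A minimal sketch of that normalization step (the function name is mine, for illustration):

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Custom-audience matching expects identifiers trimmed and
    # lowercased before SHA-256 hashing, so that formatting
    # differences don't break the match against platform records.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Differently formatted copies of one address hash identically:
a = normalize_and_hash(" Alice@Example.com ")
b = normalize_and_hash("alice@example.com")
```

Hashing sounds privacy-friendly, but note that it changes nothing about the targeting described above: the platform still resolves each hash back to a real account.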

Even if Facebook anonymizes the data, it still offers cheap traffic and arbitrage for bad actors: work-from-home scams, US-facing gambling companies, penny stocks and, until recently, bitcoin exchanges were rampant on the platform.

Take an email list and create an event on Facebook with a link to a product or website (include a re-marketing pixel), or create a group and upload the emails to LinkedIn. People respond, they click, and they are added to a list. Use the Twitter API to automate everything, record the user IDs of those who respond, then target them with ads. None of this is without work (or cost), but once on a platform, GDPR is largely irrelevant.

Google Ads is a similar story. AdWords can be used to target advertising at a certain list of emails. Google permits any list of emails to be uploaded. No checks. Just upload 10,000 emails, choose your targeting method, and sit back. No compliance, no GDPR.

SCORCHED EARTH IN ADLAND
Like Facebook, Google has been affected by GDPR, but not in a bad way! Google has attempted to foist its rules on to publishers, telling publishers to gain consent as a data processor when Google are in fact a data controller because they hold the user data and sell it (not the publisher). The data controller should gain consent.

The other quagmire to come out of GDPR concerns third-party exchanges, like AppNexus, which hitherto sold traffic to Google, which then sold it to its advertisers. The problem is that some of the ad exchanges are questionable, and Google could not confirm that each exchange had gained user consent. So Google Display Network and DoubleClick For Publishers stopped serving traffic from many third-party ad exchanges. Reports note a 25–40 per cent drop in programmatic ad sales. For better or worse, the losers have been third-party ad exchanges. In turn, Google’s market dominance increased.

Google and Facebook command 84 per cent of global spending on digital advertising. GDPR has consolidated the dominant position. That fewer programmatic ads were available has lifted the price of those that are delivered. The market is rapidly moving to duopoly, leaving publishers cap-in-hand to either Facebook or Google. Both platforms are real winners from GDPR in terms of advertising.

Publishers must accept Facebook and Google’s GDPR terms or remain outside of the advertising eco-system. GDPR was meant to protect the user, not the platforms (or spammers!).

Facebook says: here are our terms; this is how we harvest and profile you. Don’t agree? OK, goodbye. To the user, it’s a Faustian pact to stay – and most do. The #DeleteFacebook campaign never came close to harming the company. Facebook reported mouth-watering 2018 financial results. Despite the scandal and techlash, fourth-quarter results beat projections for earnings and revenue as profit hit $6.88bn, up from $4.27bn a year before.

But it’s completely irrelevant whether you are registered on the site, because even if you have never signed-up, ‘Zuck’ is still tracking you. The data is gathered from websites you visit that contain a ‘like’ button or Facebook pixel.

Facebook cookies are placed on your device while you surf the open web or from contact lists uploaded by friends or family. If you were in those contacts, Facebook has a file on you.

Google is no better. One example of non-compliance: when a user turns off all tracking on their phone but checks a Google map, their position is recorded and they are monitored.

GDPR ACHILLES HEEL
The weak spots are not hard to locate. Adhere to GDPR by never holding the data. Use others to do that for you: LinkedIn, Twitter, Facebook, Google… even eBay and PayPal.

The way a spammer avoids GDPR compliance is along the lines of the way Google has acted towards publishers. Google wants publishers to gain user consent, while Google makes the money. The spammer wants the platform to gain consent, while the spammer makes money.

GDPR tried to change everything, and if everything changes, everything stays the same. So not a lot has actually changed—save the level of fines. Subject Access Requests were seen as the control mechanism, but firms can choose what to include in them, or in some cases simply ignore them.

The walled garden of platforms follows terms and conditions, not laws and regulations, and they can afford to pay or fight the fines.

Fighting Fake News with Bots and Buns

Monday, July 8th, 2019

Bots have a bad name in online news. They breed false stories, distort elections and spread hate speech. Avaaz reports that the 2019 European Parliament elections produced three million examples of this. Yet bots such as Voitoo (pictured) and TextRobot are being hailed as the saviours of local news, and as digital tools for democracy.

The theory: regional public TV and radio’s elderly audience is dying out. To engage young people in democratic processes, they must innovate – but without alienating the grannies.

So it’s no coincidence that Europe’s public service broadcasters chose Novi Sad for their 2019 CIRCOM conference. Serbia’s second city, home of the EXIT music festival, is the current European Youth Capital. And its public broadcaster RTV Vojvodina produces news in 16 languages and is about to move into new high-tech premises, replacing the buildings that NATO bombed in 1999.

Fake Nausea

Showcasing solutions at CIRCOM, Jarno Koponen of YLE Finland presented a grisly new game called Troll Factory. “People say ‘I’m nauseated. I didn’t know this was happening.’ It’s like a vaccine to make them aware,” he told me. YLE have been pro-active against hate speech since their reporter Jessikka Aro took on her trolls and won.

They now also have a digital ‘co-worker’ to ease their news workflow. Meet Voitoo: “The name means Victory – it’s not masculine or feminine. We’ve had it produced as a cuddly toy so that the journalists feel it’s their friend and helper,” Koponen told me. He admitted Voitoo has limitations, since it cannot yet cover the full range of topics that pop up in daily regional news.

“A robot can’t replace an ambitious, talented reporter,” agreed Robin Govik, chief digital officer of MittMedia. “It can only make that person’s job easier.”

Govik insists that automating data-gathering “frees up” journalists to concentrate on tasks that bots cannot do. MittMedia’s most successful news machine is the Homeowner bot. It scours the Land Registry for data points: location, size, price, etc. It spots outliers – a high price or a famous owner, for example. If it were a reporter, you would say it had “found a news angle”. But it’s not. The result is a short machine-written article bylined “by MittMedia TextRobot”.

Similar bots write localised weather reports. Automated sport reporting means not only football and ice hockey but also minority sports all get a match report every time they are played.
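The pipeline these bots follow – structured data in, outlier detection, templated prose out – can be sketched in a few lines. This is a hedged illustration with invented data and an arbitrary threshold, not MittMedia’s actual TextRobot:

```python
from statistics import mean, stdev

def find_angle(sale, recent_prices):
    """Flag a 'news angle': a sale price far above the recent local norm."""
    avg, sd = mean(recent_prices), stdev(recent_prices)
    if sale["price"] > avg + 2 * sd:   # illustrative threshold only
        return "record price"
    return None

def write_article(sale, angle):
    # Slot the data points into a fixed template, as template-based
    # news bots generally do.
    return (
        f"A {sale['size_m2']} m2 house in {sale['location']} has sold for "
        f"{sale['price']:,} kronor - a {angle} for the area. "
        "(by TextRobot sketch)"
    )

recent = [2_100_000, 2_300_000, 2_050_000, 2_200_000, 2_150_000]
sale = {"location": "Sundsvall", "size_m2": 140, "price": 4_500_000}

angle = find_angle(sale, recent)       # flags this sale as an outlier
if angle:
    print(write_article(sale, angle))
```

An ordinary sale would return no angle and produce no article, which is why such bots can cover every land-registry entry, weather forecast or minor-league fixture without flooding the site.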

‘Garbage In, Garbage Out’

If anything goes wrong, Govik blames a human for providing bad source data. Here the old saying “garbage in, garbage out” can result in a catastrophic loss of trust in the news provider.

Europeans apparently don’t care that some of the news is human-free. According to the European Commission’s Eurobarometer, 75% of respondents are positive towards new technologies. That is the direct opposite of US users’ views: the Pew Research Center’s 2017 survey found that 72% of respondents expressed worry about automation.

Deepfake videos, multi-lingual lip syncing and face-swapping are tools of the trade for purveyors of disinformation. But like all software, they can also be used for legitimate purposes. Jacob Markham from the BBC Blue Room explained to CIRCOM how these apps can support national and regional identity – for example by dubbing the news into a different language or dialect.

Yet bots are only part of the answer. Trustworthy news demands a personal connection. When Bavarian Broadcasting’s young presenters ditched TV and moved into a “flatshare” on Instagram, it worked. The reporters chat in their “kitchen” about Venezuela, and about who ate the last avocado in the fridge. Their Instagram followers now number 41,400. BBC Brexitcast and YouTuber Rezo’s “Destruction of the CDU” (14 million views) also point to new forms of youth engagement. But in the European Youth Capital Novi Sad, the real teens working with RTV’s Ivana Miloradov feel patronised by blue-haired presenters and dizzying special effects. They want their views to be heard in an old-school audio podcast!

Giving a voice to older viewers, Anne Lagercrantz’s team at Sweden’s public service television SVT toured the country with coffee and cinnamon buns to chat with viewers about the news they need.

German public broadcaster ZDF’s experience was more hardcore. Their crew lived in a flat in a Soviet-style block in Cottbus for six weeks, interviewing locals. Cottbus witnessed 2018’s most vociferous anti-immigrant protests and attacks on camera crews. ZDF’s experiment revealed what motivates people to act in this way.

Joining up viewers with the political process takes even more guts and patience. RTV Oost Netherlands’s listening exercise produced a CIRCOM prize for innovation – and a book of grievances. They delivered it to the newly-elected mayor – all captured on camera, and broadcast as live. The mayor promised to act on them.

If he does, this might qualify as “constructive journalism”. The concept made many conference delegates squirm and roll their eyes.

“But it’s not about happy stories,” insisted Cynara Vetch of the Constructive Institute. She cites research showing that almost half of respondents (48%) believe there is too much negativity in the news, and 37% don’t believe it. Political polarisation comes from broadcasters pitting political opponents against each other, says Vetch. She argues instead for a “calm, curious space” such as a round-table debate and a real-life follow-up. Bots can’t deliver that. Yet.

Fake News Kills – 29 Deaths from Measles in Europe Last Year

Tuesday, July 2nd, 2019

Measles is back in Europe. More than 8,000 cases were reported in 2018. Twenty-nine people died after catching the disease, which until recently was all but eradicated through childhood immunisation. Social media companies are blamed for amplifying scare stories by anti-vaccination campaigners – the so-called “anti-vaxxers”.

Italy has banned non-vaccinated children from attending school and threatens non-compliant parents with fines.

The current measles outbreak coincides with a drop in the rate of vaccination. According to the European Centre for Disease Prevention and Control (ECDC), based in Stockholm, most countries fail to meet the required standard of 95% full vaccination, the level that provides “herd immunity”. Herd immunity makes it most unlikely that anyone would catch or transmit the disease, and it protects vulnerable groups, such as very young babies who cannot be vaccinated. Only Hungary, Slovakia, Portugal and Sweden have achieved this level.

ECDC spokesperson Niklas Bergstrand comments: “It is unacceptable that children and adults in EU countries die from complications of vaccine-preventable diseases… If the goal of eliminating measles in Europe is to be reached, vaccination coverage needs to increase in a number of countries.”

The ECDC has not tried to curb false news. Instead, it provides Train the Trainers courses to address “vaccine hesitancy”. US public health officials are more proactive. The CEO of the American Medical Association, James L. Madara, has written to all the internet platforms, urging them to change the algorithms that amplify anti-vaxxer content.

Crowdfunding platform GoFundMe responded positively to a similar request. Dublin-based owner Rob Solomon has removed all appeals from anti-vaccinationists. Now, if you type “anti-vax” into the search box, GoFundMe offers the opposite: people raising money to vaccinate teenagers whose parents refused when they were babies, and a pro-vaccine man who is denouncing anti-vaxxers on tour across the United States.

In an editorial in Science journal urging a task force to tackle vaccine hesitancy, Heidi J. Larson and William S. Schulz call for more emotion and fewer dry statistics in pro-vaccine messages.

A rare example of this is the open letter in which children’s author Roald Dahl describes the death of his seven-year-old daughter Olivia from measles.

“One morning, when she was well on the road to recovery, I was sitting on her bed showing her how to fashion little animals out of coloured pipe cleaners, and when it came to her turn to make one herself, I noticed that her fingers and her mind were not working together and she couldn’t do anything.

“Are you feeling all right?” I asked her.

“I feel all sleepy,” she said.

In an hour, she was unconscious. In twelve hours she was dead.”

Dahl’s appeal to parents to immunise their children still circulates on social media. But only people who are already convinced are likely to see it because of the filter bubble effect: the networks automatically provide more of what we like and approve.
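The filter-bubble mechanism at work here can be reduced to a toy feedback loop: rank the feed by past engagement, then let each click feed the next ranking. This deliberately naive sketch (invented items and topics, not any real network’s algorithm) shows how quickly one topic comes to dominate:

```python
from collections import Counter

def rank_feed(items, engagement):
    # Score each item by how often this user engaged with its topic;
    # sorted() is stable, so ties keep their original order.
    return sorted(items, key=lambda it: engagement[it["topic"]], reverse=True)

items = [
    {"title": "Dahl's letter on measles", "topic": "pro-vaccine"},
    {"title": "Celebrity gossip",         "topic": "entertainment"},
    {"title": "Vaccine scare story",      "topic": "anti-vaccine"},
]

engagement = Counter()
for _ in range(5):                       # five feed refreshes
    top = rank_feed(items, engagement)[0]
    engagement[top["topic"]] += 1        # the user clicks the top item

# After a few rounds, one topic monopolises the top of the feed:
# the user only ever sees more of what they already liked.
print(rank_feed(items, engagement)[0]["topic"])
```

The loop converges on whichever topic the user happened to click first, which is exactly why a pro-vaccine message rarely escapes its own bubble.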

Fake news and “hesitancy” are not the only causes of the current measles epidemic. Experts at the London School of Hygiene and Tropical Medicine’s Vaccine Confidence Project reckon it may have its roots in the war in Ukraine. Faulty Russian vaccines and the civil conflict that interrupted vaccination programmes played a part, and power outages meant the refrigeration that keeps the vaccine fresh may not have maintained the correct temperature. Proximity to Ukraine might also explain the high incidence of measles in neighbouring Poland and Romania.

Israel also has a measles epidemic, and Orthodox Jewish visitors to New York City have been blamed for spreading it. So fake news spawned an evil twin: anti-Semitic hate speech.

Europe’s own anti-vaxxers are less strident. The European Forum for Vaccine Vigilance says it is pro-choice and warns that newly-vaccinated people pose a risk of infection since they carry small doses of a live virus. EFVV’s Sally Fallon Morell advocates, “Health officials should require a two-week quarantine of all children and adults who receive vaccinations to prevent transmission of infectious diseases.”

Official ECDC statistics do not include Germany, but the Robert Koch Institute says the country is lagging behind the rest of Europe. Cinemas in some German cities recently screened the 90-minute film Vaxxed, made by disgraced British doctor Andrew Wakefield. His 1998 press conference, claiming to have established a link between the Measles, Mumps and Rubella (MMR) vaccine and a new form of autism, is believed to have damaged the public perception of the triple vaccine and of vaccines in general. A Sunday Times investigation by journalist Brian Deer revealed that Wakefield and his wife had planned to set up a business supplying single vaccines, which he maintained were “safer” than the triple doses favoured by the medical establishment.

BBC investigative reporter Janet Trewin scrutinised the evidence before immunising her own two children. She says: “The controversy was immense and, even for a working journalist, confusing. We were both frightened at the prospect of not vaccinating at all. Our research suggested that, if there was any problem at all to a baby, it was most likely to come from the child’s system having to handle all three at once. Consequently, we gave all the injections but with many weeks in between to allow the body to overcome the onslaught.”

Meanwhile, Andrew Wakefield was struck off the General Medical Council register and banned from practising medicine in the UK. He relocated to Texas. A number of parents of autistic children worked with him on Vaxxed. The fact-checking service Snopes has investigated the film’s claims and found them false.

Andrew Wakefield has allies in politics and Hollywood. Donald Trump met him on the 2016 election campaign trail, and since he became president, Trump has repeatedly tweeted his support for separate vaccines and sympathy for parents of children with autism.

Medical studies in Denmark, Ireland, and the UK have all failed to establish a causal link between MMR and autism.

However, it is an undisputed fact that some people are badly affected by vaccination, either because of a faulty vaccine, wrong dose, allergy, or other misadventure.

In Europe, this was established in the European Court of Justice on 21 June 2017 by the family and lawyers of Mr. J.W., an adult who contracted multiple sclerosis after a Hepatitis B injection and died five years later. The case had been dismissed by all the lower courts in France, but the ECJ ruled that the vaccine manufacturer Sanofi Pasteur was liable in this case. The judgement makes it clear that, whatever the general medical research says, cause and effect in this case are demonstrated by the short time it took for MS to develop after the vaccination, Mr. J.W.’s previous good health, and his lack of family history of MS. The ruling applies not only in France but throughout Europe. Yet similar cases are rare: in response to a Freedom of Information request in 2017, the UK Department of Health revealed that 759 claims had been made from 2008 to 2017, of which 11 were successful.

Footage of vaccine-damaged children evokes enormous sympathy, whatever science says. Yet emotive disinformation needs only a small dose of fact to puncture its appeal. Europe’s award-winning anti-fake news campaign, Lie Detectors, describes the effect of its school visits: in the words of founder Juliane von Reppert-Bismarck, “It’s like an immunisation. Just a short, sharp wake-up that sensitises young people and makes them question what they see.” As European Immunisation Week gets under way, the authorities are hoping for a similar effect in parents, too.

The Surveillance Capitalist’s Secret Sauce

Tuesday, June 11th, 2019

A how-to explanation in four minutes, including a review of Harvard sociologist Shoshana Zuboff’s latest book

Shoshana Zuboff’s book “The Age of Surveillance Capitalism” received an enormous amount of praise from prominent voices even before it was published. “From the very first page I was consumed with an overwhelming imperative: everyone needs to read this book”, claims author Naomi Klein, herself well-known for her criticism of corporate globalization. Probably not many people will follow that advice: Zuboff’s book runs to almost 700 pages. (If you are asked to write a review, “from the very first page” is certainly a convenient approach, demanding not too much reading.) The book’s size is a problem. The message is buried under too many pages, too many topics and asides. If there really is such a thing as “surveillance capitalism”, distinct from the form of capitalism we have got used to over recent decades and centuries, it deserves a more concise explanation.

So here’s the recipe for surveillance capitalism, in brief. As always, Google leads the way. From its very start in 1997, Google collected data on users’ search behavior as a byproduct of the query activity. Initially, these data were treated as mere waste. Then the company realized that data related to search activities could be used to improve the search engine. A surveillance capitalist would say: Google re-invested all the data-revenues from search to improve its service. But the more interesting step followed later, when the company discovered that it could market the behavioral data in areas other than search. This was when Google started to build its advertising business, which relies heavily on the data from search. That’s the trick: offer a free internet search engine (or whatever) to create a surplus of behavioral data, then make your money with that data surplus!

The first takeaway from this is: surveillance capitalists have a hidden agenda. They notoriously obscure their real ambitions from their customers and from society. On this point, Zuboff is very clear. It’s misleading to say that people “buy” search and pay for it by giving away their data. Nothing is sold or bought. There are no customers. We, the users, “are the source of free raw material that feeds a new kind of manufacturing process”, says Zuboff.

What’s in it, economically?
That’s evil. But from the viewpoint of a surveillance capitalist, your question should be: “What’s in it, economically?” Zuboff is eager to point out that the new business model is not only about advertising, but about predicting and, finally, steering people’s behavior. Surveillance capitalists convert the “behavioral surplus” (the data created by, but not used for, the original service) into “prediction products”, which are sold on “markets for future behavior”. Surely you want to know: where are these markets?

Data have become the new gold or the new oil, some people say. Economists calculate that in the US, the market for data amounts to 130 billion euros a year. That sounds like a lot. But then, Google alone generates 80 billion dollars a year in revenues just from digital advertising. Is the “new oil” basically just advertising – the annoying things that pop up on our screens? If you read into the details of the “new oil” story, you will find that much more data-related value is realized within companies. That’s fine – but far away from a marketplace for future behavior where you could sell your prediction products.

Here’s an answer: the market which is the surveillance capitalist’s playground is the infrastructure of everything. Cars and roads, for instance. “Google and Amazon are already locked in competition for the dashboard of your car, where their systems will control all communication and applications. (…) Google already offers applications developers a cloud-based ‘scaleable geolocation telemetry system’ using Google Maps”, Zuboff explains. In the near future, Google will not only be able to run our autonomous vehicles and deliver smart traffic management. It will be in a position to run our cities – much better than city governments, who do not possess the relevant data (and who are forced by open-data policy guidelines to give away the data they do produce or possess to Google & Co). Insurers, to mention another field, see Google “as a potential rival and threat because of its strong brand and ability to manage customer data”, says Zuboff. Insurance, too, has potential to innovate: just think of an insurance contract which includes not only close observation of your driving behavior, but actual interventions in the driving process, such as an automatic speed limit activated as part of the policy you opted into. Medicine would be another field: the use of big data improves diagnoses and treatment. Learning analytics. And many other fields. (It’s a pity that Zuboff isn’t interested in working out what the “markets for future behavior” she mentions again and again could really look like and how they would function economically.)

There really are fantastic options out there. But they are not up to you and me. There is no “market” for prediction products. Again: the value of data is mostly realized within companies. Big companies. “Surveillance capitalism” will be about Google, Amazon (and maybe a few others) running our cities, our homes, our hospitals – everything. That is the real secret sauce. They will do this following their own agenda and interests. Parts of that agenda will overlap with the concerns of customers and citizens. Other parts will follow the surveillance capitalists’ urge to gather more and more data – in the expectation that new products and services (which can then be sold for money) will be invented sooner or later.

A further attribute of surveillance capitalism, which is not part of the economic mechanism but worth mentioning in this context, is its disentanglement from society. Surveillance capitalism basically works without the kind of social relations – with customers or employees – that are still part of a great deal of the old-fashioned capitalist economy. In the book, there is a whole chapter on this aspect. “Mass production”, Zuboff points out, “was interdependent with its populations who were its consumers and employees. In contrast, surveillance capitalism preys on dependent populations who are neither its consumers nor its employees and are largely ignorant of its procedures.”

Not a great outlook. Maybe Shoshana Zuboff will discuss suitable approaches for intervention in her next book. In “The Age of Surveillance Capitalism”, she doesn’t.

Who Ate the Copyright Reform?

Wednesday, April 10th, 2019

Bland Cricket: UK and EU Initiatives on Fake News Fall Short

Tuesday, December 11th, 2018

Clear evidence has been presented to the UK government that Facebook and Twitter have been used to spread fake news created by fake users. The effects are toxic—sometimes deadly.

Meanwhile, MEPs have been in Silicon Valley receiving reassurances and support for digital media literacy initiatives to immunise internet users against fake news.

Evidence presented to the UK Media Select Committee shows the massacre of hundreds of Rohingya Muslims in Burma (Myanmar) is directly linked to Facebook’s policy of giving “Free Basics” to Burmese people to enable them to get online for the first time. They get basic internet with access to a limited number of websites and apps, including Facebook. MPs on the Media Select Committee heard testimony explaining how the social media platform spread untrue statements about the Rohingya and whipped up religious hatred in the comments under fake posts. The Committee’s Interim Report urges the UK government to publicly condemn Facebook for allowing these interventions. It points out that Britain’s aid programme to the country, which has just emerged from years of isolation under a military dictatorship, has been undermined by the killings of Rohingya. However, the government response, published on October 22, 2018, with replies to all 38 of the Committee’s recommendations, is non-committal.

Twitter’s role in spreading anti-Muslim hate and threats has emerged from the think tank Demos, which analysed a dataset of tweets from October 2017 to October 2018. It tracks the emergence of fake accounts, or bots, believed to have been created by Russian state actors at the Internet Research Agency, a Kremlin-backed “troll factory” in St Petersburg.

Bots Under the Radar Playbook

At first, the bots operate ‘under the radar’, attracting almost no attention in terms of re-tweets or replies. Then they start to tweet about fitness and diets—popular themes that are not directly connected with politics or religion. Next, the fake account holders start to include a large number of names and Twitter handles in their tweets to make it look as though they have many online friends and followers. Then, at times of heightened tension between religious communities, for example, after the terrorist outrages at London Bridge, Manchester Arena and Westminster, the fake accounts become very active and achieve thousands of re-tweets.

Demos concludes from its study that the fake accounts originate in Russia. It considers the possibility that their real target is the US audience, since the number of tweets referring to specifically British events, such as the Brexit referendum, is relatively small. Here again, as in the aftermath of the Brexit vote, the fake users’ comments are anti-immigrant and anti-Islamic in tone and content.

Even more threatening for the democratic discourse in Great Britain is the role played by Facebook and Cambridge Analytica in collecting personal data and micro-targeting voters in the 2016 United States presidential election and the Brexit referendum. MPs on the Select Committee note that in the United States, the Robert Mueller inquiry has been pro-active in examining and punishing these distortions to the democratic process. It calls on the UK government to engage with Facebook, restrict microtargeting, demand full transparency of political advertising online and enforce the election spending rules.

In the bland manner of the English Civil Service, the responses all deflect the MPs’ demands—like a cricketer who whacks each successive ball into a different part of the field or over the fence into the long grass.

The government replies that it must not pre-empt the deliberations or duplicate the activities of:

  • The Election Commission,
  • Committee on Standards in Public Life,
  • National Crime Agency,
  • Helsinki Centre for Hybrid Threats,
  • White Paper on Online Harms,
  • Cairncross Review of the sustainability of quality journalism,
  • Work of the enhanced Office of the Information Commissioner (ICO),
  • Defence, Science and Technology Lab,
  • Digital Charter,
  • Consultation on Protecting the Debate: Intimidation, Influence and Information
  • The planned UK Centre for Data Ethics and Innovation,
  • Freedom Online Coalition of 30 national governments,
  • “Any future Industry Code of Ethics”,
  • Bringing media literacy education into the curriculum for state schools

In addition, MPs are assured that the UK government “regularly engages with Facebook”.

No New Legal Category for Platforms

The government does not accept the MPs’ recommendation that a new category of “tech companies” should be created in relation to regulations on tax, competition, freedom of expression and data protection. This proposal aims to get around the vexed question of whether Google, Facebook, Amazon et al. are platforms or publishers. Such a new classification would oblige them to act ethically by taking responsibility for the role they play, irrespective of whether or not the laws governing media outlets should apply to tech companies.

In response, the UK government warns that any new legal position must not damage the UK’s own ‘vibrant’ tech industry. And it hides behind the current EU definitions as set out in the eCommerce Directive: “mere conduit”, “cache” and “host”. It notes that most tech companies are treated as “hosts,” which means they are not liable for abuses committed on their online platforms if they “act expeditiously” to remove harmful content when they become aware of it.

At the EU Commission, months of fractious meetings by the High Level Expert Group (HLEG) on Disinformation (they reject the term ‘fake news’) have resulted in recommendations. Cricket is not played much in Brussels, but the tenor of EU findings is the same as in Britain: bland and uncontroversial, emphasising quality journalism and media information literacy.

But not all members of the HLEG were happy with that. Reporters Without Borders (RSF), a respected civil society group that has advocated for press freedom for over 30 years, insisted that its dissenting statement be included in the final report. It expresses concern that the EU Commission is too soft on the tech companies, distorts priorities and threatens regulation (or “self-regulation”) that could damage media freedom.

“The main leverage to limit the distribution and monetisation of falsehoods, propaganda and disinformation rests with platforms in their exclusive role as information intermediaries,” says the RSF minority report.

However, those intermediaries—Facebook, Twitter, Google and Mozilla—were also represented on the High Level Expert Group, vastly outnumbering RSF’s lone voice. So it is hardly surprising that they did not press for tougher regulations and fines on themselves or a new definition of their status.

Side-stepping the Issue

Instead, they emphasise the large financial contributions they are making to numerous initiatives across Europe. Google’s Digital News Initiative, the Mozilla Information Trust Initiative, Facebook’s partnership with the International Fact-Checking Network, Twitter’s #fact-checking hashtag, Facebook’s “about this website” pop-up information service… there are numerous ways in which the tech companies are trying to prove their corporate social responsibility and convince policy makers that new laws are not required.

Meanwhile, Facebook is collaborating with an NGO – Correctiv – in order to better police content that may be culturally specific and have hidden meanings.

The Hoaxmap

Some Correctiv journalists also work on a separate project, the Hoaxmap. The map plots the geo-location of false news stories, mainly reports claiming that immigrants and asylum seekers are committing crimes. The Hoaxmap displays the true facts behind each rumoured crime, including the small minority of rumours that turn out to be true.

Fake news circulating in social media often produces violent reactions in Germany. Racist attacks, arson at asylum reception centres, mass ‘funeral marches’ by neo-Nazis and right-populists and aggression against journalists have become commonplace. They are usually opposed by even greater counter-demonstrations and tightly controlled by hundreds of police in riot gear, mounted on horses, driving water cannons or armed with tear gas. The European Centre for Press and Media Freedom (ECPMF) has its own map of truth. It plots and verifies attacks against journalists arising from the right-populist attitude that the media are “liars” who spread “fake news”.

ECPMF Founder and Editor of Special Projects at Germany’s Stern magazine, Hans-Ulrich Jörges, commented at a public meeting on 29 October that these days Russia does not need to send in the tanks – the online attack is achieved with just a click. He cited the case of “Lisa”, a teenage girl in Berlin who was falsely reported as a rape victim, as an example of Russian interference in Germany’s social media. Jörges insisted that mainstream journalism is capable of re-building trust without any new laws. “We just have to do our jobs and remain independent,” he said. And he admitted that he himself does not take part in social networks: “I don’t want to waste my time with this wave of hatred”.

The Power of Independence

Remaining independent in this atmosphere presents challenges. One of Europe’s new media literacy programmes, the Lie Detectors, has turned down offers of funding from Google and other tech companies.

Founder Juliane von Reppert-Bismarck explains: “We get our funding from a wonderful organisation called the Woods Foundation in the US, which normally deals with land conservation”.

She insists she would not compromise the project’s independence by accepting grants from Big Tech. The project operates by training journalists to present fact-checking sessions in their local primary and secondary schools in Germany and Belgium. Reppert-Bismarck admits that a Lie Detectors lesson is only a tiny pin-prick in the great scheme of information overload. Yet she hopes it will provide a sort of “immunisation”.

“We can’t beat disinformation. There are too many commercial and political gains to be made from circulating wild conspiracy theories, blatant lies and cleverly splicing truth and fiction. We need to take the thorn out of it. There were always wild stories – alien landings in Utah and God knows what! We need to be able to just acknowledge that stuff as entertainment and then leave it there”.

She emphasises that “pre-bunking” is more effective in combating false assertions than de-bunking because falsehoods presented as facts are often received as facts—especially by adults who may lack the curiosity of the younger generation.

Indeed, new research from the US Pew Research Center indicates that young Americans aged 16-49 are significantly better at telling facts from opinion and recognising truth from fiction than the over-50s. And psychologists at the University of Western Australia have found that people over the age of 65 are far more likely to continue to believe a myth – even after they have been told that it is untrue. Both groups of researchers find that repetition reinforces trust and belief.

So the viral messages that appear online feed the human tendency to believe what we want to believe and to belong to a group of fellow believers—as well as driving the business model for tech companies that rely on millions of clicks to support their advertising revenue.

What will this mean as the next round of elections gets underway, with the European Parliament election in the spring of 2019? Studies in both the US and France indicate that people will still vote for their original choice – even when fact-checking proves that their statements are not true.

Even without troll factories and hostile state actors, it’s a vicious circle.

However, the furore caused by the Mueller inquiry’s revelations about fake news from Russia did at least provoke a bigger turnout than usual in this year’s mid-terms.

If even millennial popstar Taylor Swift has noticed that an election is going on, there may still be hope for democracy. Swift has declared her support for the Democrats and urged her millions of Twitter followers to go and vote in the midterms with the words “Don’t sit this one out”. Perhaps pop music will save democracy? After all, the lyrics don’t have to be true…

Footnote: On Dec 6th, Lie Detectors—mentioned in this story—was awarded the 2018 EU Digital Skills Award