Author Archive

What Could Make Tech Platforms Do the Right Thing?

Thursday, July 9th, 2020

There have been many candidates, including pressure from policy-makers and regulators, but thus far no number of congressional hearings or EU fines has done more than dent the stock price. Calls from organisations like Amnesty International and the United Nations for Big Tech to stop enabling genocide and mass surveillance have failed to convince Silicon Valley's superstars to clean up their services. Staff protests in the form of "walk-outs" have not changed the direction either. The consumer power that is "only one click away" is an illusion. What is left?

This writer can think of two last resorts. One is the advertisers, who are the real customers in the world of freenomics. The other is the investors, though it is doubtful we should expect Wall Street to put things like democracy, peace, life, health and freedom before profits. But now, at last, the ad-boycott method has been attempted. Let's take a look!

Ad-pocalypse Redux

Many big-name advertisers have joined the #StopHateforProfit campaign, receiving attention and applause. In fact, so many that CNN has made a special page listing the companies. Well-known consumer brands. Will this have an impact on the platforms? A small impact on the stock price has been recorded, but in the longer term it may have more of a symbolic than a financial effect. This Wall Street Journal infographic gives a hint: the top eight boycotters' yearly ad spend (all channels) amounts to only 1% of Facebook's total annual advertising revenue.

I have written about Metcalfe's law before, and it might help explain this. The law describes the value of a network, say of fax machines. If two people have fax machines, they can fax each other over a single connection. If four people have them, there are not two but six possible connections. The value of the network grows quadratically, not linearly: 100 million users are not worth ten times more than 10 million users, but a hundred times more. This is the logic that drives the platforms to dominate their respective niches and makes competition irrelevant. (Why would anyone start a competing auction service? Or search? Or video?)
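The arithmetic behind the law can be sketched in a few lines of Python (an illustration of the n-squared rule only, not a valuation method):

```python
def connections(n: int) -> int:
    """Number of distinct pairwise links between n fax machines: n*(n-1)/2."""
    return n * (n - 1) // 2

# Two machines share one link; four machines share six.
print(connections(2))  # 1
print(connections(4))  # 6

# Metcalfe's law values the network in proportion to n squared,
# so 100 million users are worth about 100x as much as 10 million.
ratio = (100_000_000 ** 2) / (10_000_000 ** 2)
print(ratio)  # 100.0
```

The quadratic curve is exactly why a second-place network is worth so much less than the leader.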

But Metcalfe's law not only explains the user side of the market; it also applies to the advertiser side. Sure, big-name brands can afford to move to alternative channels like television or billboards or something completely different. But the millions of small businesses have few or no options. Facebook advertising is how they reach their customers. If they want to reach millennials rather than boomers, advertisers can go via Instagram, but that is still Facebook. YouTube is the gatekeeper for video. In practice, there is nowhere else to go. The platforms control both sides of the market. You can't argue with Metcalfe.

#StopHateforProfit is a great initiative and of course it's better to do something than nothing. But it will not be what turns the tide on the tech platforms, who seem to put their efforts into pushing for more exceptions from liability rather than facing up to their responsibility as dominant players.

This is Netopia’s newsletter July 9 2020

What Is Section 230 and Why Did President Trump Attack It?

Friday, June 5th, 2020

As has been widely reported, last week Twitter marked two of President Trump’s tweets with a fact-checking label, effectively saying that the US President did not speak the truth. (Perhaps no news; “alternative facts” was a term that arrived early in the Trump presidency.)

Good for Twitter. By contrast, Facebook predictably refused to act, once again badly misjudging the historical moment. They are now facing growing criticism and anger in the US for their refusal to be accountable for how their users abuse their platform. Indeed, current employees staged a virtual protest, and more than thirty of Facebook’s earliest former employees posted an open letter lambasting the company for its moral decay.

President Trump responded to Twitter’s actions by issuing an executive order targeting the (in)famous Section 230 of the Communications Decency Act of 1996. Why this particular law?

According to Big Tech, the internet wouldn't exist without CDA 230's liability protections for content posted by users – no matter how vile. And given the ever-increasing pressure in the US to reform CDA 230, Silicon Valley is desperate to export it to other places so as to complicate efforts to amend it domestically. For instance, they successfully included it in the US-Mexico-Canada free-trade agreement and are eager to see it incorporated into European policy.

The law's appeal is clear: CDA 230, on the one hand, gives internet intermediaries an exemption from liability for what users do:

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

… and on the other hand, gives them freedom from liability when they do take action (“good Samaritan”):

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A)

any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B)

any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1). [1]

Blessed if you do, blessed if you don’t. Untouchable. Keep doing whatever you’re doing and don’t mind anyone else. What’s not to like?

Of course, President Trump's Executive Order appears to be illegal on its face. Twitter did not infringe President Trump's freedom of speech by adding the fact-check labels; rather, Twitter was exercising its own freedom of speech in doing so. That right is guaranteed by the US Constitution's First Amendment, not CDA 230. And that's great: if there is one thing Netopia would like to see more of, it is intermediary action (the topic of almost every post on this blog!). Twitter is a big part of the problem in terms of hate speech, propaganda, threats and more, but it deserves a pat on the back for doing something.

If anything, Trump's EO is a distraction from real and legitimate debates about how CDA 230's overbroad liability shield is a root cause of many of the internet's most intractable problems; it is the same law Google argued against amending despite child-trafficking website Backpage.com's reliance on its safe harbour. And the growing CDA 230 reform movement in the US is understandably concerned that Trump's executive order will sidetrack its cause.

Presidential executive orders are not the answer to fixing the internet. CDA 230 is also not the answer. Until Big Tech voluntarily faces up to “do the right thing,” we must keep looking. Suggestions welcome.

The Mother of all Oversight Boards

Tuesday, May 19th, 2020

Pressure on Facebook shows no sign of easing. The first line of defense – "we're just a tech company" – did not hold. The second line – "we will hire thousands of moderators" – also failed. The latest attempt – the "oversight board" – is a step in the right direction but will not be enough. Here's why:

THE BENEFIT OF THE DOUBT

Facebook accepting some degree of responsibility for the content is welcome. A generous interpretation of the oversight board could be a statement like “we think this is difficult and need some help”. Such an approach deserves some sympathy or at least the benefit of the doubt. Thorny issues of freedom of opinion clashing with hate speech. The climate movement and #metoo have benefitted from Facebook and other platforms, but so have white supremacists and Russian election intervention. To humbly ask for help is a good idea, but only if the request is honest.

THE PROBLEM

This balancing act is nothing new; Facebook is not the first to face it. In fact, every new media form has gone through similar steps. The then-new medium of radio is sometimes said to have helped the Nazi party to power in 1930s Germany. Now regarded as pillars of democracy, newspapers were once called the "gutter press", and the tabloid looking for scandals rather than the truth is still an archetype. Video games have often been criticized for violent content. And so on. In that view, Facebook is in good company and, if history is any guide, it will be held in better regard in the future. Except that does not happen on its own. One could get the impression that it is the world that adjusts to the new medium and learns to accept it. But the reality is that the new medium needs to make the necessary changes to earn trust.

THE NEIGHBOURS

The way those examples have gained trust is by putting in place systems that help the audience and check the content. Broadcast media is regulated through law and licensing; many countries have public service organisations and oversight boards or authorities. Newspapers follow similar patterns: the journalist profession requires years of education, there are organisations for professional development, forums for discussing press ethics, special media covering media. The ethos is that difficult publishing decisions are well served by being discussed by others. Academics do research on press and media ethics. In all, a very advanced and professional system and, importantly, a learning system that evolves as the world changes. The same goes for video games: the industry keeps in close touch with researchers, organisations and authorities. Europe has PEGI, a co-regulation system funded by the industry, run by an independent organisation and governed by an advisory board appointed by the member states (experts in teaching, child psychology, media, medicine, sociology and many more). Similar systems exist in other parts of the world. The point is that these media forms did not say "we're just a tech company"; they faced the criticism and built a system to do better.

THE ANSWER

Self-regulation or co-regulation systems can be the way forward for the internet platforms too. There are some criteria that need to be met:

  • Independence – self/co-regulation should be independent of individual companies
  • Transparency – the rules and rulings should be available to anyone
  • Legal certainty – both parties can appeal a ruling to some form of appeals board
  • Rest on actual regulation – the self/co-regulation system falls back on law
  • Avoid conflict of interest – the delegates in the system should not be appointed directly by the companies they regulate

Done right, self/co-regulation tries more cases, moves faster and achieves better results than courts and authorities can on their own. (Co-regulation is when the state is represented in some way; self-regulation is independent of the state.)

Now, this is your homework: check the Facebook Oversight Board against the bullets above. If it ticks all boxes, good news for Facebook and digital citizens anywhere. If it ticks only a few boxes, Facebook needs to do more work to cope with fair criticism and avoid government intervention. (Send your answers to mark.zuckerberg@facebook.com)

THE COINCIDENCE

Fun fact: besides the three liberal law professors in the oversight board core, the surprise guest is Danish former prime minister Helle Thorning-Schmidt. When she ran Denmark, her Minister of Economy and Interior was Margrethe Vestager. Yes, the same Vestager, European Commission Vice President and digital watchdog. What a fortunate coincidence for Facebook!

Netopia will follow the oversight board's work with great anticipation. Will it stop or reduce hate speech and troll factories? Fake news? Election meddling? Genocide propaganda? Privacy breaches? Terrorist content? Abuse of dominant position? Restrictions to free speech? Content theft? Bring back "sexy" emojis? That would be terrific, thanks!

RIP Henrik Pontén

Monday, May 18th, 2020

He was known to the public as the lawyer who put The Pirate Bay operators behind bars. Henrik Pontén was Sweden's most famous anti-pirate and, in those years, the target of many pirate pranks. I know, I was there.

Some of the pranks were cute. Like when the Pirate Party youth section gave Henrik Pontén gingerbread cookies for Christmas. In Swedish folklore, gingerbread cookies make one nicer.

Some of the pranks were funny but obnoxious. Like when somebody changed Henrik's name in the Swedish tax registry, so that his legal name became Pirate Pontén. It took him some bouts with bureaucracy to change it back.

Some of the pranks were aggressive. Like the boxes in the mail nobody dared open. The shattered glass window in the office door (they brought in a security guard to patrol the office, unheard of in Sweden). The flowers to his board members with the note "the first part of your funeral bouquet". The list goes on. The pressure took a toll on Henrik's health, but he did not fold. He kept winning his cases.

In 2009, a couple of months after the first trial against The Pirate Bay and only weeks after the Pirate Party was elected to the European Parliament with 7.1% of the Swedish vote, Henrik Pontén and I were invited to do an onstage Q&A at a digital festival called Dreamhack. Dreamhack has more than 10,000 participants and is a beautiful example of digital grass-roots culture. Back in those days, it was also a popular gathering for software pirates. We came by car and were advised to park it just outside the stage entrance, radiator facing out, so we could make a speedy escape if necessary. The door opened and we were met by flashing news cameras, like in a movie scene. On stage, we were meant to give proper answers to questions asked online by the audience. Some questions were pranks; some were serious and interesting. But it was hard to make any sense in the noise. Somebody threw a tomato (and missed). Henrik thought that was hilarious. There are no tomatoes at Dreamhack. They had gone to the trouble of exiting the festival, finding a supermarket and buying tomatoes only so they could throw them at us. That episode captures Henrik Pontén's personality. He always found humour in the most absurd situations.

Henrik Pontén was a national-team fencer, goat shepherd, horse-rider, motorcyclist, husband and father of three. He passed away this weekend, 54 years old. The gap he leaves is immense.

Getting Rid of Outdated Regulation

Monday, May 4th, 2020

Some call the E-Commerce Directive from 2000 outdated. Many of them want to replace it with something like the US Communications Decency Act Section 230. Which dates back to 1996.


As Editor Per Strömbäck writes: “The mythology is strong, though. Tech gurus are fond of saying that immunity from responsibility is the heart of how the internet works. Without it, no Wikipedia. No freedom of speech. No innovation.” To read the full post, go here

Analog Virus, Digital Disease

Tuesday, April 28th, 2020

The global pandemic has pushed rights and regulatory protections to extremes. Amid the lockdowns we've seen privacy under sharp attack, copyright abused, and questionable overreach in civil society. The activities fall into three brackets: some use Covid-19 as a cover; some are well-meaning but need scrutiny; some have never happened before and would otherwise have taken years of planning.


Governments have taken drastic steps to control populations; streaming sites have lowered stream quality to save bandwidth (and energy). The BBC pulled news broadcasts, and Amazon, the "Everything Store" rumoured to generate £10,000 a SECOND, reneged on the famed next-day Prime delivery subscription (aka a consumer tax) and faced tough questions about the safety of its warehouse staff. No convention remained sacred. Even Instagram set an "out of office" on content moderation. A flagrant cop-out.

Unsurprisingly, platforms trade on disaster in one form or another, each perfectly placed to profit from increased online activity. While raking in billions, they ask for donations. Spotify, acting like a food bank in a supermarket, now has a “tip jar” for artists.


Internet Archive founder, the multimillionaire Brewster Kahle, was so convinced rules didn’t matter that he suspended waitlists and permitted 1.4 million copyright-protected books to be distributed online. The so-called “Emergency Library” is tantamount to piracy, as the same book can be shared 1000s of times simultaneously, resembling no library known in the history of humankind.

On cue, The Pirate Party's chief dog-whistler, Julia Reda, has gone as far as to blame intellectual property for holding up a Covid-19 vaccine! And for creatives? Ms Reda reckons self-isolating creative workers should turn to Patreon and sell via their web shops. Sell T-shirts, did we hear you say? Consider the numbers: 6,549 Spotify streams, 2,554 Apple Music streams or 27,027 YouTube views to generate the same income as one T-shirt, so 14 T-shirts per day times five band members equals roughly 1,500 per month (equal to 180,650 streams). No mean feat.
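Working backwards from those figures, one can sketch the per-stream payout they imply. A minimal sketch in Python, assuming a net income of €20 per T-shirt (that figure is an assumption for illustration, not from the original):

```python
# Streams reported above as equivalent to the income from one T-shirt
streams_per_tshirt = {
    "Spotify": 6_549,
    "Apple Music": 2_554,
    "YouTube": 27_027,
}

TSHIRT_NET_EUR = 20.0  # assumed net income per shirt (hypothetical)

for platform, streams in streams_per_tshirt.items():
    per_stream = TSHIRT_NET_EUR / streams
    print(f"{platform}: ~€{per_stream:.4f} per stream")
```

Under that assumption, a Spotify stream comes out at roughly a third of a euro cent, which is why merchandise beats streaming for small acts.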

The scale of the corona “cop-outs” is deep, and it is a tremendously long way back to platform responsibility.


If the techlash was a dream, one need only remember Cambridge Analytica to see that some of the current ideas have far-reaching consequences for societies the world over.

In the current climate, disaster capitalism has given way to overreach, and below is an inexhaustive list of just some of the solutionist responses noted in the past weeks.

No one is discounting the seriousness of the global pandemic (unless, that is, when it comes to rights or regulations), and by the looks of things, governments have just handed the keys to society over to the platforms.

CONTACT TRACING

Google and Apple agree to work on a Bluetooth contact-tracing app, and there is also a pan-European effort. The apps would require Bluetooth to be enabled, but both companies considered "platform updates" to force the app onto handsets, from where users could self-report as experiencing symptoms. The upshot is that all devices within range over the preceding period are alerted to isolate because they have been in contact with a potentially infected person. The dangers to privacy are obvious, and there is the question of reliability: how long before a prankster colleague self-reports symptoms and sends the entire workplace into isolation? The race to develop the app is a "rising balloon" in business terms.
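The matching logic these apps describe can be sketched as a toy model in Python. This is an assumption-laden simplification, not Google's or Apple's actual protocol (the real system uses rotating cryptographic identifiers and on-device matching):

```python
import secrets

def new_rolling_id() -> str:
    # Each phone broadcasts short-lived random identifiers over Bluetooth
    return secrets.token_hex(8)

class Phone:
    def __init__(self):
        self.my_ids = [new_rolling_id()]   # ids this phone has broadcast
        self.heard = set()                 # ids heard from nearby phones

    def encounter(self, other: "Phone"):
        # Two phones within Bluetooth range exchange current identifiers
        self.heard.add(other.my_ids[-1])
        other.heard.add(self.my_ids[-1])

def exposed(phone: Phone, reported_ids: set) -> bool:
    # A self-reported case publishes its recent ids; others check locally
    return bool(phone.heard & reported_ids)

a, b, c = Phone(), Phone(), Phone()
a.encounter(b)            # a and b met; c met no one
reported = set(b.my_ids)  # b self-reports symptoms
print(exposed(a, reported))  # True
print(exposed(c, reported))  # False
```

Note that nothing in the model verifies the self-report, which is exactly the prankster problem described above.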

COUGH DETECTION

A Siri-type voice assistant microphone snooping analysis tool. “Cough detection algorithms may be able to identify your cough and count the number of times you’re coughing in an hour or a day, which might be able to tell your doctors how well you’re recovering.”

CCTV FEVER DETECTION

A CCTV “Coronavirus Detection System” via Facial Recognition. “Thermometers on buses to detect coronavirus symptoms, which scan passengers’ faces at the entrance of the bus and alert the driver if an anomaly has been detected.” Similar technology is available from numerous companies and is sold directly to the public in the US.

SOCIAL DISTANCING WATCHES

In theory a good innovation: Samsung-manufactured watches or armbands report if an employee at a Ford factory in Plymouth, Michigan, comes within two metres of a co-worker. But then: "supervisors also receive alerts and reports that can be used to monitor social distancing and clustering in the workplace".

FITBIT ASYMPTOMATIC VIRUS CHECK

Arguably the most positive innovation comes from Fitbit. The volunteer-led, opt-in type fitness trackers look at elevated heart rates and temperature as an early warning sign and instruct self-isolation for the user. Not only does this deal with the issue, but it does so in a collaborative way. “Smartwatches and other wearables make many, many measurements per day — at least 250,000, which is what makes them such powerful monitoring devices,” say Stanford Medicine researchers who are working with Fitbit.

ROBOT HEALTH ADVISORS

The Promobot informed the public around Times Square about the symptoms of coronavirus and how to prevent it from spreading, with plans for the machines to take the temperatures of passers-by. Lacking a permit, the first robot was ejected from a New York park. Like delivery robots, they are often monitored by offshore workers paid even less than the gig workers on site, so neither an innovation nor a robot, you could say. Work to be done there!

NATIONAL CURFEWS BY APP

The most prevalent government snooping overreach comes via smartphone location tracking and, in some cases, backdoor access to user devices. “One of the most alarming measures being implemented is in Argentina, where those who are caught breaking quarantine are being forced to download an app that tracks their location.”

IMMUNITY PASSPORT

Residents in Wuhan are already subject to "health certificates", and "immunity passports" are mooted for travel or return to work. Numerous countries are likely to enforce strict border-entry requirements; many will go further than asking you to list your socials. Emirates' Covid-19 test takes pre-boarding temperatures and carries out blood analysis, with a 10-minute result window, for passengers flying with the airline.

GDPR OUT THE WINDOW

Suffice to say, every company that ever had your email address has sent its "how to survive lockdown" tips, with a letter from the CEO suggesting some products that may help during the crisis. Add the sharing of data from health agencies to governments, between governments, and from phone providers to governments, and it's safe to say GDPR is in name only for some.

BEYOND HEALTH

The ways solutionists have come up with to "limit" the spread of coronavirus while ignoring the real rights of citizens and workers are plentiful. In prisons, mention of Covid can lead to the isolation block. Liechtenstein residents have been offered Covid bracelets; workplaces are snooping ever deeper. The use of thermal cameras takes facial recognition to new realms. Post-lockdown, will DUI be joined by DWH (Driving While Hot)?

This is the status a few months into the pandemic. Who knows what’s next?

Flattening the Curve – Why Exponential Growth in AI May Be a Mirage

Tuesday, April 7th, 2020

Time spent in lockdown can be used to think about the big things in life. Like artificial intelligence. Netopia has a mini-theme on artificial intelligence, with Peter Warren’s story on insuring self-driving cars and Ralf Grötker’s review of Stuart Russell’s Human Compatible. Both put artificial intelligence in context. No, artificial superintelligence will not eliminate humans anytime soon. Yes, there are completely different issues we need to debate to make AI useful for humans.

Exponential growth is a popular way to think about digital phenomena: innovations added to innovations at an ever-accelerating pace. This mindset leads to quotes like "change will never again be so slow as it is now". It brings the conclusion that there will come a point where innovation happens all at once and the speed of change explodes into the "singularity".

In this virus pandemic, we have been looking at a lot of exponential growth curves and thinking about ways of flattening them. While the virus might be disruptive in its own way, it is driven by mutation rather than innovation.

Exponential growth is seductive. Apply it to any process and you get mind-blowing results. The problem, of course, is that not all processes can accelerate exponentially. Let's look at self-driving cars. If computing power (which in theory grows exponentially according to Moore's Law) were all that mattered, AI would soon be smart enough to take over from the driver. The first problem is that the AI relies on a number of other technologies, such as GPS, cameras, lidar, radar and many other sensors and communication technologies. Each of these develops at its own pace, but not necessarily exponentially. 5G telecom networks, for example, are often mentioned as a key to self-driving cars, but their roll-out is held back by many factors: legal, financial, political and so on.

The second problem is the data set. One way to make AI useful is to train it on a big set of data and let it find the patterns. This is what machine translation has done. In the old days, machine translation tried to replicate how humans learn languages, with grammar and glossaries and such. The breakthrough came with predictive statistics applied to huge samples of real language (corpora), where the AI can guess with some accuracy what the most likely next word will be from the context. This is what auto-maker Tesla tries to do with its autopilot system, which silently observes how real drivers deal with various traffic situations (such as stopping at red lights), uploads that to a central system which then analyses the best driver behaviours and feeds them back to the autopilot. That raises the philosophical question of whether all traffic situations can be predicted and simulated. If the answer is no, self-driving cars will never be 100% self-driving, just as machine translation can never be 100% accurate.

Put together, this means that exponential growth in self-driving cars may exist in simulation but not on the road. In real life, incremental innovation is a better explanation. We make the camera a little better, so the car can navigate in rainy conditions. Then we make the wheel sensors a little better, so the AI gets more feedback about the tyres' grip and can calculate braking power better. Then we add more videos to the data set so that the AI better recognises road markings and can keep the car in lane. All these little steps amount to great progress, but they make for systems that assist the driver (lane-departure warnings, automatic high beams) rather than a computer replacing the driver. Mathematically this is logarithmic growth, which means diminishing returns, also known as Achilles and the Tortoise.

In many real-life AI applications, logarithmic growth may be a better explanation than exponential growth, only not as spectacular. If we keep this distinction in mind, we can have a better-informed debate about the threats and opportunities of artificial intelligence. Also, next time somebody says exponential growth, you can ask: "are you sure it's not logarithmic?"
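The contrast between the two growth modes is easy to see numerically. A toy comparison in Python (the curves are illustrative, not a model of any real AI system):

```python
import math

# Exponential growth doubles with every step; logarithmic growth
# adds ever-smaller increments (diminishing returns).
for step in (1, 5, 10, 20):
    exponential = 2 ** step
    logarithmic = math.log2(step + 1)
    print(f"step {step:>2}: exponential = {exponential:>9,}, logarithmic = {logarithmic:.2f}")
```

Each extra step multiplies the exponential curve but barely moves the logarithmic one; twenty steps take the first past a million while the second has not yet reached five.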

Now, if we could only flatten the corona-virus curve in the same way…

Hospital Pass – Is Insurance a Bump in the Road for Self-Driving Cars?

Tuesday, March 31st, 2020

In the minds of many, autonomous vehicles are the future dream, promising cheap, accident-free driving where the old can get around for longer and the young can get into cars earlier.


A world of independence conferred by artificial intelligence, where cars are fuelled with free electricity from solar panels and we throw away our insurance cover and pass our travel cares to the vehicle manufacturers. There will be no more draughty, smelly petrol stations to separate us from the best part of €100 for carbon fuel, because the solar panels on our houses' roofs promise unlimited environmentally friendly travel: smart meters measure your contribution to the grid, so you will be able to pay for the electricity from charging points with the power you feed in. Even better, according to the dream, will be the end of the expensive annual insurance policy. For if cars drive themselves and no one has their hands on the wheel, then the responsibility must rest at the wheels of the robot car makers. The day of the driver will have passed; the only cares we will have will be what to watch and how to occupy ourselves.

Well, that is the dream; now, unfortunately, for a little bit of reality. Drivers will have to keep their hands on the wheels of their autonomous vehicles a little longer, because the systems are still evolving and because of the mix of manual and autonomous vehicles expected on our roads. It is a problem that the insurance industry thinks could get worse due to technological dependence.


“There are some vehicles which at certain points will require a driver to take back control and if that driver doesn’t respond after a certain time the vehicle will come to a stop wherever it is and that could be somewhere very dangerous,” said Sarah Cordey of the Association of British Insurers.

“Certainly, we are keen to see the increased automated technology on the roads because it has exciting potential. But if there is a stage where it actually becomes more dangerous because it leaves drivers too disengaged from the driving task to be properly involved, then insurers might prefer that that stage is skipped, and we just go straight to full autonomy.”

Which would be a tremendous challenge to the way technology has been introduced to date, because the infrastructure would have to be put in place, tried and tested, before everyone could step into their shiny new robot cars.


Last year Matthew Avery, Director of Research for Thatcham Research, which carries out safety testing on behalf of the insurance industry, said: “By 2021, Automated Driving Systems on some new cars could allow motorway drivers to essentially become passengers in their own vehicles. However, there continues to be a worrying lack of clarity around how Automated Driving should be defined and crucially, the role of the driver when a car is in automated mode.

“Our position is that driving systems that rely on the driver to maintain safety are not recognised by the insurance industry as being automated.”

At a UK Government consultation on the issue, Avery said that discovering how an Automated Driving System must safely hand back control to the driver in certain scenarios is crucial. For example, in the event of a system failure the vehicle must be capable of carrying out a managed hand back to the driver or reach ‘safe harbour’ on its own in the event of an emergency.


It is a topic exercising the insurance industry according to Cordey: “So, insurers are looking to vehicle manufacturers to address that by ensuring that a vehicle will first find itself a safe harbour or a safe place to stop before it becomes immobile. So, there’s an awful lot of details here that the insurers have really been getting into to try and help shape things for the future and make them as safe as possible.”

According to research by Professor Natasha Merat at the University of Leeds and Dr Dick de Waard of the University of Groningen's Psychology Department, drivers take around 35 seconds to psychologically adjust to driving again when control is handed back to them by an automated vehicle. The motoring equivalent of a rugby ‘hospital pass’, where you get the ball just as you are lined up for a tackle.

A problem of tech dependence not exclusive to cars: world aviation authorities now insist that pilots carry out a minimum number of manual landings rather than using the autopilot. It has also been noted in the armed forces that operators are loath to override automated weapons systems for fear of being held responsible for their actions.

This buck-passing means that the combination of manual and automated traffic, in the interim phase before complete automation, presents a nightmare for drivers and insurers alike.

“Liability issues are a big one to sort out if a vehicle with a lot of smart technology on-board is involved in a collision with another vehicle,” said Cordey.

According to Mark Deem, a lawyer at Cooley, one of the world’s largest legal practices, whose clients include some impressive household names in technology, the next few years will be as much of a transitional – ‘tertiary’ – stage for the law as they are for vehicles.

“The law always looks for workable definitions of products, services and harms for which legal solutions and interventions are required, but the speed of technological change and the measured pace of legal change mean that legal definitions cannot be nailed down in transition.

Speed of technological change and the measured pace of legal change means that legal definitions cannot be nailed down in transition

“Problems will exist in the tertiary stage of development where legal solutions will be needed to deal with products at differing stages of automation, varying degrees of precision and in different environments. The question of responsibility will evolve with the technology.

So, what does this mean for the age of the autonomous vehicle: in charge, but without responsibility? Like the insurance industry?

Mark Deem sees this as an evolution: not only will the vehicle change, so will our insurance.

“Once we are through that tertiary stage, we should see more fundamental and permanent shifts to deal with risk – perhaps a change in insurance where we see travelling in an automated vehicle as an extension of personal travel insurance, rather than something belonging to the vehicle owner.”

Header Image © Rodrigo. See the original here

Abstract Intelligence – How to Put Human Values into AI

Friday, March 27th, 2020

Book Review: Human-Compatible – Artificial Intelligence and the Problem of Control (Viking 2019) by Stuart Russell

AI has severe limitations. Still, we have reasons to worry – both because of these limitations and because they could be overcome in the future. In his new book “Human-Compatible: Artificial Intelligence and the Problem of Control”, Stuart Russell explains the principles that govern the actions of autonomous AI systems and makes proposals for how such systems should be designed to make them beneficial rather than evil.

This is a book about the principles needed to create beneficial Artificial Intelligence systems. It’s original, and it’s important.

To start with: Stuart Russell is professor of Neurological Surgery at the University of California, San Francisco and Professor of Computer Science at Berkeley. He is vice chair of the World Economic Forum’s Council on AI and Robotics. He is a fellow of the American Association for Artificial Intelligence. And so on. Reputation isn’t something that one gets for nothing. More than any arguments that the author presents, his outstanding position in those fields of science that are relevant for AI is a strong reason to listen to him.

First of all, Russell provides us with a clear estimation of where we stand. For the near future, there will still be major tasks which AI is far from being able to tackle. The success of AI in beating human champions at board games such as chess or Go, Russell explains, should not seduce us into thinking that AI has magic powers in other fields, too. The reason is that AI works, to a great extent, with methods of machine learning, that is, autonomous learning. With games such as Go, the approach works surprisingly well, because the game is regulated by strict rules. The real world is much less convenient. One reason for this is that our daily lives consist of thousands of little tasks which we accomplish rather effortlessly, but which are very difficult to program, or for an AI to learn.

One difficulty is that actions and tasks we perform intuitively are often not easy to discern and to define from an abstract point of view. “What we want is for the robot to discover for itself that [e.g.] standing up is a thing – a useful abstract action”, Russell explains. “I believe this capability is the most important step needed to reach human-level AI.” So far, this capability has not been invented.

AI cannot find by itself ways to proceed from general rules to concrete actions, if there are no human-defined rules for this. Thus, AI basically lacks the capability to plan and perform actions.

This is a major point. AI, Russell explains, cannot perform abstract reasoning. AI machines such as IBM’s Watson can extract simple information from clearly stated facts – “but cannot build complex knowledge structures from text; nor can they answer questions that require extensive chains of reasoning with information from multiple sources.” Or take AlphaGo, Google DeepMind’s AI system for playing the board game Go: “AlphaGo has no abstract plan. Trying to apply AlphaGo in the real world is like trying to write a novel by wondering whether the first letter should be an A, B, C, and so on.” This is a broad limitation. AI cannot by itself find ways to proceed from general rules to concrete actions if there are no human-defined rules for this. Thus, AI basically lacks the capability to plan and perform actions. “At present”, Russell writes, “all existing methods for hierarchical planning rely on a human-generated hierarchy of abstract and concrete actions.” Computers that learn these hierarchies by themselves have not been invented so far. The reason: human scientists “do not yet understand how such hierarchies can be learned [by an AI] from experience.”
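To make this concrete, here is a minimal sketch of what a “human-generated hierarchy” means in practice. The task names and the decomposition below are hypothetical, invented purely for illustration; the point is that the hierarchy is authored by a person, while the planner only expands it and learns nothing:

```python
# Human-written hierarchy: abstract action -> list of sub-actions.
# (Hypothetical task names, purely illustrative.)
HIERARCHY = {
    "make_coffee": ["boil_water", "grind_beans", "brew"],
    "boil_water": ["fill_kettle", "switch_on_kettle"],
}

def expand(action):
    """Recursively expand an abstract action into concrete steps.

    Any action not listed in the human-written HIERARCHY is treated
    as primitive and returned as-is. The machine contributes nothing
    to the hierarchy itself -- exactly Russell's point.
    """
    if action not in HIERARCHY:
        return [action]
    steps = []
    for sub in HIERARCHY[action]:
        steps.extend(expand(sub))
    return steps

print(expand("make_coffee"))
# -> ['fill_kettle', 'switch_on_kettle', 'grind_beans', 'brew']
```

Discovering that “boil_water” is a useful abstract action in the first place – rather than being handed it by a human – is the part no current method can do.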

From a cognitive point of view, the function of goals is that they have a focusing effect on one’s thinking. AI-machines do not have goals.

Besides abstract thinking, machines often lack something which, in cognitive science, is called smart heuristics. Smart heuristics stands for the many shortcuts and tricks that humans use to solve tasks and problems – without employing too much calculating power. These are not just tricks, but are also embedded in practical concerns. One example is the goals that we strive for. From a cognitive point of view, the function of goals is that they have a focusing effect on one’s thinking. AI machines do not have goals. Current game-playing AI systems “typically consider all possible legal actions”. This is where they are superior to human players, who cannot foresee such a variety of different paths. But here lies AI’s weakness, too. Because AI cannot limit its scope, even a super-equipped AI will be overwhelmed by the variety of different paths of action in real life. Humans have acquired techniques to reduce that kind of complexity. AI hasn’t – at least not in a way that we deem trustworthy.

These are severe limitations. Still, we have reasons to worry – both because of these limitations and because they could be overcome in the future. Russell actually thinks that human-level AI is not impossible in principle. On the contrary: super-intelligent machines, he warns, could actually take control of humanity. A whole chapter is devoted to this issue.

One not-so-technical aside of interest to Netopia readers concerns the inherent drive or rationality of AI systems. It’s about maximizing clicks – getting users to visit a website in order to generate traffic. How would an intelligent system maximize click-rates? One solution: simply present items that the user likes to click. “Wrong”, says Russell. The solution which an intelligent system would choose “is to change the user’s preferences so that they become more predictable…. Like any rational entity, the algorithm learns how to modify the state of its own environment – in this case, the user’s mind – in order to maximize its own reward.” This is chilling – and a good example of how AI can pose threats even before becoming super-intelligent.
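Russell’s point can be illustrated with a deliberately crude simulation. Everything here – the preference model, the numbers, the two policies – is invented for illustration and is not taken from the book. A user’s taste sits on a 0–1 content spectrum and wobbles most in the middle of it; a policy that nudges the user toward an extreme makes them predictable, and over time collects more engagement than a policy that only serves what the user likes right now:

```python
import math

def simulate(policy, steps=5000):
    """Toy model of a recommender whose only objective is engagement.

    p is the user's true preference on a 0..1 content spectrum (hidden
    from the system); q is the system's estimate of it, refreshed with
    one step of lag from observed behaviour. Engagement per step falls
    off with the distance between the served item and the preference;
    consuming an item pulls the preference toward it; and day-to-day
    preference wobble is largest mid-spectrum and vanishes at the extremes.
    """
    p, q = 0.5, 0.5
    total = 0.0
    for t in range(steps):
        item = policy(q)
        engagement = max(0.0, 1.0 - 2.0 * abs(item - p))
        total += engagement
        p += engagement * 0.1 * (item - p)   # consumption shifts the preference
        q = p                                # system re-estimates from behaviour
        p += 0.25 * p * (1.0 - p) * math.sin(0.7 * t)  # deterministic wobble
        p = min(1.0, max(0.0, p))
    return total, p

serve_current = lambda q: q                    # maximise the immediate click
nudge_extreme = lambda q: min(1.0, q + 0.05)   # make the user predictable first
```

With these invented numbers, `nudge_extreme` sacrifices engagement early on but ends up serving a user it can predict almost perfectly, and so accumulates more total engagement than `serve_current` – the algorithm has maximized its reward by changing the state of the user’s mind.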

Machines need to learn more about what we really want – from observations of the choices we make and how we make them

The solution to the threat of super-intelligence and, at the same time, of evil or questionable AI systems is the same as the solution Russell sketches for coping with the limitations of current AI: “Machines (…) need to learn more about what we really want”, Russell points out, and this learning should happen “from observations of the choices we make and how we make them.”

There are two lines of reasoning underlying this proposal. The first: if human-level AI is something that we should expect to happen, then this super-intelligence should preferably be benevolent. Being benevolent, though, is something that can neither be programmed nor learned by super-intelligent machines themselves. Even if an AI could acquire the capability of abstract reasoning, it could not pursue the goal of being benevolent. The reason for this is genuinely philosophical: “benevolent” cannot be defined in any unambiguous way, because there are just too many competing and incompatible values around. (To build this argument, large sections of the book are devoted to philosophical endeavors to rationally construct human preferences and perceptions of utility, both on an individual level and on a group level.)

The second line of thinking refers to the above-mentioned problems that today’s AI has with abstract thinking. It is this part of the book which is of definite practical interest to people designing AI systems today – both on the level of software and on the level of human-machine interaction.

A better solution, Russell thinks, would be for the AI to ask the user routinely and automatically questions

One example is the gorilla problem. Some years ago, a user of the Google Photos image-labeling service complained that the software had labelled him and his friend as gorillas. The interesting point of this incident is that it makes clear the value proposition built into the software. Obviously, the image-labeling service assumed that the cost of misclassifying a person as a gorilla was roughly the same as the cost of, e.g., misclassifying a Norfolk terrier as a Norwich terrier. As a reaction to the incident, Google manually changed the algorithm – with the result that later, in many instances, the software simply refused to do any labeling in cases that were unclear. A better solution, Russell thinks, would be for the AI to routinely and automatically ask the user questions such as “Which is worse, misclassifying a dog as a coat or misclassifying a person as an animal?”. Answers to questions of this kind could help to tune the labeling service according to its users’ needs.
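Russell’s proposed fix can be sketched as a cost-sensitive decision rule. The labels, cost figures and threshold below are invented for illustration – they are neither Google’s actual values nor numbers from the book. Instead of announcing the most probable label, the system picks the label with the lowest expected cost under user-supplied costs, and asks the user a clarifying question when even the best option is too risky:

```python
# Hypothetical labels and cost figures, invented for illustration --
# neither Google's actual values nor numbers from the book.
LABELS = ["person", "dog", "coat", "gorilla"]

# COST[true][predicted]: how bad it is to show the label `predicted`
# when the image really shows `true`. Misclassifying a person as an
# animal is treated as vastly worse than mixing up two objects.
COST = {
    "person":  {"person": 0, "dog": 100, "coat": 5, "gorilla": 1000},
    "dog":     {"person": 2, "dog": 0,   "coat": 1, "gorilla": 2},
    "coat":    {"person": 2, "dog": 1,   "coat": 0, "gorilla": 2},
    "gorilla": {"person": 2, "dog": 2,   "coat": 2, "gorilla": 0},
}

def expected_cost(posterior, predicted):
    """Average cost of announcing `predicted`, weighted by the
    classifier's probability for each possible true label."""
    return sum(prob * COST[true][predicted] for true, prob in posterior.items())

def decide(posterior, ask_threshold=1.0):
    """Pick the label with the lowest expected cost; if even the best
    label is too risky, abstain and ask the user a clarifying question."""
    best = min(LABELS, key=lambda label: expected_cost(posterior, label))
    if expected_cost(posterior, best) > ask_threshold:
        return "ask_user"
    return best

# The raw argmax would say "gorilla" here, but the cost-aware rule
# refuses to guess and asks the user instead:
print(decide({"person": 0.3, "dog": 0.0, "coat": 0.0, "gorilla": 0.7}))  # -> ask_user
```

The answers to those clarifying questions are exactly what would let the system tune the cost table to its users’ needs, rather than having a single trade-off frozen in by the developers.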

This is what, in the end, it all boils down to. Where possible, machines need to learn what human users really want from observation. Where observation is not possible, asking is a suitable approach. Human-level AI is not about more or better computation. It is all about designing human-machine interaction so as to feed human values and preferences into the system.


Virus Action Reveals Big Tech’s Double Standards

Thursday, March 12th, 2020

YouTube CEO Susan Wojcicki published a statement yesterday on how the video platform is responding to the coronavirus outbreak. It is worth reading.

Wojcicki says: “It remains our top priority to provide information to users in a responsible way.” Sounds great. Could that not be the policy always? Then maybe we would not have to live with alt-right propaganda, election interference, ISIS execution videos and other things that would never be published in proper media. That would make the internet better, regardless of virus outbreaks, right?

“YouTube will continue to quickly remove videos that violate our policies when they are flagged” – another great idea. Hopefully it’s not like pirated videos, where rights holders must go through a slow bureaucratic process only to have the videos uploaded again right after. Except the really responsible thing would be not to have those videos posted in the first place. That would really help stop virus misinformation! (And the creative economy.)

Perhaps that can’t be done because of the way Youtube works? But wait, Wojcicki has the answer:

“In the days ahead, we will enable ads for content discussing the coronavirus on a limited number of channels, including creators who accurately self-certify and a range of news partners”

Great idea, perhaps similar things can be applied to the other problems Youtube brings upon the world?

Thanks to Susan Wojcicki for speaking out. There are two problems here. The first is that this is not enough if YouTube wants to live up to its parent company’s motto, “Do the right thing”. This fits more into the familiar pattern of doing as little as possible, justified by the claim that anything else would “break the internet” (or Google’s profit forecast). The second is that on previous occasions, the standard response from Google, YouTube and most of Big Tech has been something along the lines of “it’s the algorithm, we don’t know what it does” or “that would be like in China”. Wojcicki’s statement shows that they can if they want to. Great, now do that more often. And better.

(A similar issue arose in relation to a smear campaign against Michelle Obama. Watch this great TED Talk!)