Putting Ethics into the Machine (Part 2)

ETHICAL DIALOGUE WITH THE MACHINES

‘It’s not a matter of there being a set of ethics for machines and another for human beings; we argue that there is just one thing called ethics. We want to make sure that machines have this ethics built into them,’ says Professor Susan Anderson, who stresses that this needs to be an exhaustive process.

‘In order to try to capture the ethical principles needed we need to have a dialogue with the machine that is centred just around whatever the domain is that the machine will be functioning in, and try to discover the ethically-relevant features that the machine will have to encounter or deal with, the prima facie duties that the machine should be aware of and the decision principles that in the last analysis should govern its behaviour.’

Professor Anderson says that in the course of an ‘interactive dialogue’ between the machine and one or more ethicists, the machine would be able to ‘tease out’ ethical elements that are relevant to its domain. ‘Like, could someone be harmed? That is something that ethicists feel is ethically relevant and should be taken into account,’ she says. ‘Also, in the area of biomedicine, respect for the autonomy of the patient is another example of an ethically-relevant feature and then from that prima facie duties are discovered by figuring out what the ethicist says that the correct action is, and whether that involves maximising or minimising the features in question.’

Supporters of the idea of ethical code argue that one way to roll it out may be on a country-by-country basis, with individual states or regions coming to be perceived as politically mature and democratic because of their willingness to deploy ethical code.

This process would involve setting out the key ethical requirements for the machine. ‘So harm is something that you would want to minimise, respect for autonomy is something that you would want to maximise, causing benefit is something that you would want to maximise,’ says Professor Anderson.

The problem comes, she says, when these ‘prima facie’ duties come into conflict with one another, as we saw from the healthcare example described earlier.

‘So for example you might have a situation where a machine is trying to remind a patient that they have to take their medication and the patient says that they don’t want to take it now,’ she says. ‘You have a conflict between whatever the purpose was of taking that medication to prevent harm, or cause a benefit, with respect for the autonomy of the patient. It will then depend on input from the doctor to help the machine to figure out what should be dominant.’

Professor Anderson adds: ‘This will allow the machine to be able to work out at what point it will hit the time when the patient will be harmed, and the medication reminder system needs to inform the doctor and say “you’d better intervene, there’s a real problem here”.’
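To make the idea concrete, the sketch below shows how such a decision principle might be expressed in code. Everything in it – the duty scores, the urgency formula and the escalation threshold – is an illustrative assumption rather than the Andersons’ published system; the point is simply that ‘minimise harm, maximise benefit, respect autonomy’ can be turned into comparable quantities, with a doctor-supplied point at which autonomy is overridden and the doctor is notified.

```python
# Hypothetical prima facie duty sketch for a medication-reminder system.
# Duty names, weights and thresholds are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Assessment:
    harm_if_skipped: int    # 0 (none) to 3 (severe), supplied by the doctor
    benefit_if_taken: int   # 0 (none) to 3 (large), supplied by the doctor
    hours_overdue: float    # how long the patient has been refusing the dose


def decide(a: Assessment, autonomy_weight: float = 2.0) -> str:
    """Return 'accept_refusal', 'remind_again' or 'notify_doctor'."""
    # The duties to prevent harm and provide benefit grow in urgency
    # the longer the dose is overdue.
    urgency = (a.harm_if_skipped + a.benefit_if_taken) * (1 + a.hours_overdue / 12)
    if urgency < autonomy_weight:
        return "accept_refusal"   # respect for the patient's autonomy dominates
    if urgency < 2 * autonomy_weight:
        return "remind_again"     # duties roughly balanced: a gentle nudge
    return "notify_doctor"        # risk of harm now overrides autonomy


if __name__ == "__main__":
    # A low-stakes supplement can simply be declined ...
    print(decide(Assessment(harm_if_skipped=0, benefit_if_taken=1, hours_overdue=0.0)))
    # ... whereas a long-overdue, high-harm medication triggers the doctor alert.
    print(decide(Assessment(harm_if_skipped=3, benefit_if_taken=2, hours_overdue=10.0)))
```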

DATA PROTECTION AND PRIVACY

Key issues that will certainly loom large with the advent of the IoT will be data protection and privacy. Almost all of the experts we spoke to agreed that there was an ethical need to protect particular machines, certain data and the programs that would manipulate that data.

Above all there is the question of how to approach the data – which of course will include OUR data – that is being generated.

In Professor Adrian Cheok’s view, the pervasive nature of the new internet of things will mean that privacy becomes impossible, and that the only option left open to us will be to be as transparent as possible. ‘I think that what is going to happen is that the majority of us will, by default, just become totally public because of the amount of data that is online about us, because for the average person it is just a lot easier. People use credit cards now because it is more convenient, data use will be the same. We will use our data to make a transaction and to say who we are. Most of us will go transparent,’ he says.


However, this solution only works if we are guaranteed that the IoT and related artificial intelligence systems are also utterly transparent and therefore allow us to see what is being done and how it relates to individuals.

This approach will also require adding a new concept into the new internet world, says Professor Mayer-Schönberger: that of ‘relevance’.

He argues that much of the data that is stored about us is no longer relevant and therefore gives an inexact picture of who we are now. Moreover, that inaccuracy will be imported into the big data collected by the IoT – and distort its usefulness. Past data, he argues, may simply no longer reflect who we are.

‘I may have once had a girlfriend who was keen on gardening so we did this as a mutual activity but we have now split up and I am no longer interested in gardening but Amazon and Google still try to direct me to gardening books. In so doing they might be upsetting me rather than pleasing me,’ says Mayer-Schönberger.

‘Digital tools prioritise the preservation of data over deletion, we have built that by default into the system but it does not reflect us. We start to forget things almost immediately and that has an impact on our decision-making and our ability to abstract. Too much information gets in our way,’ he argues.
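A minimal sketch of how that notion of ‘relevance’ might be applied to stored preference data is shown below. The half-life and the cut-off below which a signal is treated as forgotten are assumptions made purely for illustration – the idea is only that an old interest, such as the gardening example, eventually stops influencing recommendations.

```python
# Illustrative decay of stored preference signals; half-life and cut-off are assumed values.
import math
import time
from typing import Optional

HALF_LIFE_DAYS = 180.0   # assumed: a recorded interest loses half its weight every six months
CUTOFF = 0.1             # assumed: below this weight the signal is treated as forgotten


def relevance(weight: float, recorded_at: float, now: Optional[float] = None) -> float:
    """Exponentially decay an interest signal with age; return 0.0 once it drops below CUTOFF."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - recorded_at) / 86400.0)
    decayed = weight * math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
    return decayed if decayed >= CUTOFF else 0.0


if __name__ == "__main__":
    two_years_ago = time.time() - 730 * 86400
    # A gardening interest recorded two years ago decays to roughly 0.06 and is dropped.
    print(relevance(1.0, two_years_ago))
```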

Professors Cate and Cheok, meanwhile, agree over the need for the transparency of data. Professor Cate proposes that our data should have binding conditions attached to it, governing how it is used. The point is reinforced by Professor Mayer-Schönberger. ‘The biggest issue relating to data is, how data will be re-used,’ he says. ‘How it is collected will be of less importance – how it is used is the important issue.’

In other words, the Andersons’ concern with the ethics of the machines themselves should also be extended to data, its use and its deletion.
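The idea of data carrying binding conditions on its use – sometimes described as a ‘sticky policy’ – can be sketched very simply, as below. The field names, the permitted purposes and the expiry rule are hypothetical; the point is that every attempt to re-use the data is checked against the conditions attached to it, and a deletion deadline is enforced as well.

```python
# Hypothetical 'sticky policy' sketch: data travels with the conditions on its re-use.
from dataclasses import dataclass


@dataclass
class BoundData:
    value: dict                                   # the personal data itself
    permitted_purposes: frozenset = frozenset()   # re-uses the data subject has agreed to
    expires_at: float = float("inf")              # Unix time after which the data must be deleted


def use(record: BoundData, purpose: str, now: float) -> dict:
    """Release the data only if the requested re-use matches its attached conditions."""
    if now > record.expires_at:
        raise PermissionError("data has expired and must be deleted")
    if purpose not in record.permitted_purposes:
        raise PermissionError(f"re-use for '{purpose}' is not permitted by the data's conditions")
    return record.value


if __name__ == "__main__":
    record = BoundData({"name": "A. Person"}, frozenset({"billing"}), expires_at=2_000_000_000)
    print(use(record, "billing", now=1_700_000_000))     # allowed
    # use(record, "advertising", now=1_700_000_000)      # would raise PermissionError
```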

DATA INTEGRITY AND QUALITY

Given that the essence of the internet of things is the generation of data, and that crucial policy, commercial, military and consumer decisions already are and increasingly will be made on the basis of that data, the data’s integrity has to be viewed as sacrosanct.

In the future, the backbone servers of the internet will have to be zealously guarded against attacks by hackers, because of the potential impact upon humanity. According to Melissa Hathaway, organisations have to acknowledge that they owe a duty to the people whose data they collect.

‘I think that governments or the private sector have to realise that information is their greatest asset,’ she says. ‘Putting more and more data into data centres and not really thinking of putting in place the appropriate safeguards for those assets is unacceptable. We are seeing more and more breaches and people are beginning to realise that their data is vulnerable.’

LIMITATIONS OF THE LAW

Meanwhile lawyers across Europe admit they face profound challenges keeping up with the pace of technological change. According to Michael Drury, former Director of Legal Affairs for GCHQ, the UK government’s communications centre, developments in areas such as social media alone have quickly made legislation obsolete.

As a result, Drury says, we are currently dependent on technology companies imposing ethical constraints upon what they are doing with data, because the legislation does not exist to guide their actions. For example, according to Drury, when the UK’s Regulation of Investigatory Powers Act (RIPA) was drafted it did not envisage the development of social networks or ‘the cloud’.

Another good example was the EU e-commerce law, which did not make allowance for the rapid uptake of ADSL broadband connectivity even though the technology was known about at the time the legislation was drafted. And according to Sir Bryan Carsberg, the first director of the UK’s telecoms regulator Oftel, the organisation only ever expected mobile phone use to reach around 500,000; now there are three phones for every person, a figure more or less replicated across Europe.

‘How do you define and safeguard for the future? It is a very difficult thing to do, given that no one knows what developments will occur next and no-one really knows what the future development of social media sites will be, to take one example,’ says Drury.

‘I think that there is a case that, due to technological change, we may be on the edge of what can be legislated for under the law. Any statute may be potentially unwieldy and there may be a case to look at a set of principles, defined by a code and regulated by a standing committee.’

Larry Lessig, the noted American legal academic and technological thinker, has argued that the law should give way to computer code. He says that if we want to control what is possible, code is much more efficient than law, a conclusion that backs the views of Professor and Dr Anderson on the necessity of introducing ethics into the computer code itself.

Thus the development of an unprecedented system for the collection of data from humanity has coincided with a time of great weakness in the protection of the interests of those who are the object of that information collection – namely us. This is because of the pace of technological change, a lack of understanding of technology among legislators, a regrettable lack of political attention and, most importantly of all, a lack of understanding of a system that humanity has become frighteningly dependent upon.

THE CASE FOR MACHINE RIGHTS — TO PROTECT HUMANS

As we have already seen, one question that has been raised is whether there should be some form of ‘rights’ for the machines that will be helping to run our digital world. It should be made clear again that what we are concerned with here is not a ‘robot charter’ for super-intelligent androids, an issue beloved of science fiction writers. Instead it means asking whether the key role of machines in helping us run our lives should be reflected in the conferral of some level of rights on them – essentially to give humans greater protection.

Warwick University’s Dr Cave argues, for example, that there is a case for the creation of an ethical framework for the protection of the smartphone avatars discussed earlier, which are currently under development.

‘I am not saying that machines should have rights in and of themselves, but I do think that two things are true,’ he says. ‘Firstly, that if they do not have something that looks like a right – the power to take decisions and act on them, for example, or to learn from experience and behave in ways that they were not originally programmed to do, to act as autonomous systems – I don’t think my interests, our interests, would be served by our networked interactions.

‘Moreover, the internet as it exists would not exist because it depends on these autonomous systems operating. The question as to whether they should have human rights, though, depends on whether, in acting on the internet, we are acting as human beings.

‘Because if I am being nudged around by all of this information so that I am responding to it but it is impossible for me to know or verify that information and I simply react to it, then I have acted – but I cannot be said to have “decided” or to have made a choice.’

HUMANS BECOMING MORE LIKE MACHINES…

There is an irony here. Traditionally, the Turing Test is used to determine whether a machine is acting intelligently, akin to a human being. But Dr Cave wonders whether, confronted by the endless mass of data around us in the digital world, it is we humans who are at risk of behaving more like machines.

‘So it could be that the Turing Test gets failed in the other way,’ says Dr Cave. ‘It’s not so much that machines can masquerade as human beings, but that human beings, in a sufficiently immersive and interactive world, begin to behave like machines because they know that the decisions that they are making are too hard for them to understand, or they don’t have enough time to make them properly, or the consequences are so awful that if they thought about them they would not actually choose at all.’

CONSUMER RIGHTS AND MACHINE INSURANCE

The issue of machine rights may seem theoretical and remote from the consumer. But this is not so. For given the increasing role of machines in our lives – and their semi-autonomous nature – the question will arise when something goes wrong: who do I sue, the machine or humans?

In their 2011 book ‘A Legal Theory for Autonomous Artificial Agents’, the philosopher Samir Chopra and the lawyer Laurence F. White make a powerful legal as well as philosophical case for giving ‘autonomous artificial agents’ a form of legal status. This status would be analogous to the ‘agency’ status that already exists in law; in other words, ‘people’ with the legal authority to act on our behalf.

Chopra and White further argue that such artificial autonomous agents should be given legal ‘personhood’, taking their place alongside humans and corporations as legal entities that can, in theory, be sued. ‘There is no reason in principle that artificial agents could not attain such a status, given their current capacities and the arc of their continued development in the direction of increased sophistication,’ they write. In terms of ‘punishment’, the authors say that artificial agents that control money ‘would be susceptible to financial sanctions, for they would be able to pay damages…and civil penalties or fines’. Chopra and White also note that such agents could be restrained in other ways, including by being ‘disabled’ – in other words, turned off.

One risk of making software machines liable is that it opens the way for yet more time-consuming and expensive litigation. This is why Chopra and White, and others, have floated the idea of insuring machines against damages they cause. ‘One move … would be the establishment of a registry that would stand behind registered autonomous artificial agents and insure them when things go wrong, so as to provide some financial backing to the idea of artificial agent liability’.


In a conference paper written in 2012, Dr David Levy went even further. Admittedly he was talking specifically about robots for household use or entertainment purposes, but the principle holds good for any ‘intelligent’ software-based entity. He suggested a compulsory no-fault strict liability insurance scheme that would pay out when something goes wrong, whoever is to blame.

One reason why Levy is so keen to see a no-fault insurance system – a level playing field for all – is that he fears the impact that widespread litigation would have on software and robot development. ‘One of the negative effects of all this litigation is that the growth of robotics as a research field and as a branch of commerce will be stunted because commercial robot development, manufacture and marketing will become such risky businesses,’ he suggests.

The same problem could affect the developers of the internet-based software programs that form the internet of things. It is a problem Chopra and White also address in their book, noting that while software providers have up to now largely been given legal protection that would be thought ‘unacceptable’ for dangerous tangible goods, that situation looks set to change as more and more software is embedded in machines and objects. ‘Suppliers of defective artificial agents may face increasing liability under professional liability theory, particularly if the judiciary comes to recognise software engineering as a profession with applicable codes and standards,’ they write.

Fears that insurance might reduce accountability – as developers would fall back on the fact that they were insured – may be outweighed by the fact that litigation would lead to increased premiums for those involved, a factor that exerts considerable pressure on professions such as architects who have to carry liability insurance for the buildings that they create.

In his 2012 paper, Levy highlights the obstacles to progress that the threat of litigation can cause. He cites the example of a 1970s computer program called MYCIN, developed at Stanford University in the US to identify the bacteria causing severe infections such as meningitis and to recommend suitable antibiotic treatment. A comparison between the program and five human experts at Stanford Medical School showed MYCIN’s ‘acceptability’ performance was 65%, significantly better than that of the human experts, whose ratings were between 42.5% and 62.5%.

But despite this superiority, says Levy, the MYCIN software was never used in clinical practice. ‘One reason was the legal objections raised against the use of computers in medicine, asking who should be held responsible if the program were to proffer a wrong diagnosis or to recommend the wrong combination or dosage of drugs,’ he wrote.


This article is part of a series of articles published from the Netopia report Can We Make the Digital World Ethical? Exploring the Dark Side of the Internet of Things and Big Data, by Peter Warren, Michael Streeter and Jane Whyatt.