Putting Ethics into the Machine (Part 1)

We have seen how the internet of things and the growing phenomenon of ‘big data’ will throw up major problems for consumers and citizens, problems that have as yet barely been grasped by most policy-makers. In this world of growing complexity, the potential for unintended consequences grows ever greater as machines perform actions that were never anticipated. There are key issues, too, about our reliance on data at a time of massive data generation, storage and preservation, a reliance that has the potential both to obscure results and to generate injustices.

Perhaps the greatest issue that we now face is caused by our blind faith in machines. We have invested them with certainty and – as we have pointed out – we trust them. Part of the reason for this is an odd confusion that has conflated the machines of the industrial age with the machines of the information age.

As a result, we trust almost implicitly that machines will do what they are meant to do.

We assume that our cars will start, that our washing machines will wash and that our electric drills will bore holes. When their mechanical controls are replaced by software controls we still assume the same thing, mainly because most of us are unaware that the change has happened.

However, as we have seen with the Snowden affair, the extent to which software systems have penetrated our world is not widely known by the population at large, and the ramifications of that are only now being appreciated.

While it could be said that this has generated risks to privacy and freedom, at the back of this sits a question of ethics. This is not simply the ethics of covert surveillance of populations; it is the wider issue of the ethics of creating a world so complex it is incomprehensible to humanity and beyond its control.

This world is becoming so complex that those involved in its creation warn it has the capacity not only to do things we are unaware of, such as start thinking, but also to set off chains of events of which we would be the victims, whether as a series of decisions or as the result of machine error leading to a catastrophe.

So how can we start to improve the system? One way is to ensure that only the safest and best code is used in this complex system. Until now we have had a poor understanding of this issue; computer games consoles are currently more secure than the medical computers that control patients’ lives. And in the recent experiments at CERN’s Large Hadron Collider, used to discover the Higgs boson, the so-called ‘God particle’, the scientists had to employ a number of code specialists to comb through the programs being used, to ensure the soundness of the code and thus avoid a false conclusion being drawn from the experiment.

As Professor Shanahan makes clear, perhaps the greatest potential risk is the lack of human restraint in the system and the potential for ‘it’ – the system – to make decisions that have an impact on us without our knowing.

‘I think if we get it right this symbiotic relationship is beneficial and that largely this technology is pretty good stuff, but we can get things wrong and because of that, that can mean bigger implications for us today than it did in the past.

‘A minor programming error in the past might just be confined to your desktop whereas now something can be released into the ‘wild’ and cause all kinds of problems, and I think the potential impact of small engineering mistakes – let alone malicious mistakes – is going to increase as time goes on,’ he says.

As Professor Shanahan and Dr Cave pointed out, with more Artificial Intelligence technology embedded in the system the risk is that we lose control of technology, with the potential process of machine ‘evolution’ increasing this risk to the possible detriment of humanity.

‘We all know about computer viruses and computer viruses that can become increasingly intelligent, that can be made to be increasingly intelligent – they can also be made so that they can improve themselves so that they can “evolve” in which case they can change in unpredictable ways,’ says Professor Shanahan.

‘So there you would have little packages of intelligence that were moving around, as it were, but you will also have AI that is in static systems that is doing all kinds of things like deciding whether we should be given a mortgage or insurance or surveillance systems making decisions about us,’ he says, admitting that given the potential for catastrophe, there is now a need to implement a much more rigorous system to check computer code before it is released.

‘Certainly one thing that [we can do is] try to build our code so that we are better able to verify in a formal mathematical way whether it is working properly – and whether its security has been violated in some way.’
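
Professor Shanahan’s point about formal verification can be illustrated with a brief, purely hypothetical sketch. The Python example below is not drawn from any system described in this report; it simply uses plain assertions to state the contract of a toy credit-decision routine, echoing the mortgage and insurance decisions mentioned above, and then samples a property (asking for less credit should never turn an approval into a rejection) that a formal verification tool would attempt to prove for every possible input rather than merely spot-check.

# Hypothetical sketch only: a toy credit-decision routine whose contract is
# stated explicitly as assertions, so violations are detectable rather than
# silently ignored. None of these names come from a real system.

def approve_credit(income: float, existing_debt: float, requested: float) -> bool:
    """Approve only if the combined debt burden stays below 40% of income."""
    # Preconditions: the caller must supply sensible inputs.
    assert income > 0, "income must be positive"
    assert existing_debt >= 0 and requested > 0, "debt must be non-negative, request positive"

    burden = (existing_debt + requested) / income
    return burden <= 0.4


if __name__ == "__main__":
    # Spot-check a property over a small grid of inputs: requesting LESS credit
    # must never turn an approval into a rejection. A formal verification tool
    # would try to prove this for all inputs; this loop merely samples it.
    for income in (20_000.0, 40_000.0, 80_000.0):
        for debt in (0.0, 5_000.0, 20_000.0):
            small_ok = approve_credit(income, debt, 1_000.0)
            large_ok = approve_credit(income, debt, 10_000.0)
            assert large_ok <= small_ok, "monotonicity property violated"
    print("all sampled properties hold")

Even such a simple contract makes the decision rule inspectable and testable; the gap Shanahan describes lies between sampling properties like this and proving them mathematically for every input, which is what formal verification aims to do.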

THE PERILS OF UNTESTED SOFTWARE

Yet currently we still allow the computer industry to road-test unfinished software on the internet in beta, or trial, form. Gary McGraw, of the computer security software company Cigital, says: ‘In some cases the beta software is doing things such as controlling nuclear power stations.’ McGraw notes that many politicians are unaware of technology issues and suggests that in the field of computer security Europe is 18 months behind Washington – which is itself off the technological pace.

‘Washington lags very much behind the cutting edge of technology and computer security is very much at the cutting edge of technology. It’s a little like when buildings were going up faster than the legislation in places such as San Francisco and Chicago [editor’s note, in the 19th and early 20th centuries] and there were no fire codes and it took burning a couple of cities down to the ground for us to say: “Maybe there’s a better way to do this”.’

As Professor Shanahan points out, culpability for problems caused by software in the system currently still lies with the computer manufacturers. This means there is massive potential exposure to a disaster, one that the computer industry would rather not consider.

‘Putting ethics into it is a difficult thing to do, because it is very much like passing the buck by the engineers to the computer and saying that “the computer says no” and “the computer says kill” and that’s a very back-to-front story – because the responsibility is down to the programmer to make sure that the thing works correctly. We are not envisaging yet some kind of future where the AI is genuinely autonomous like we are, and having consciousness,’ says Shanahan.

Professor Susan Anderson and her husband Dr Michael Anderson are adamant that computer systems should not be deployed in situations where the ethical consequences are unclear. ‘If the ethics isn’t clear for a machine functioning in a particular domain, we are opposed to putting machines in that domain, and we say that repeatedly,’ says Professor Anderson.

Many in the technology industry would reject this as idealistic and unworkable. After all, much of the modern world is already run on software and machines, and restricting their use for security or ethical reasons could have economic consequences. Howard Schmidt, President Obama’s former cyber security czar, who sat on two White House Committees, one for cyber security and one for economics, admits that balancing the interests of commerce and security is not an easy task.

‘We would be in a situation on the cyber security committee where we would say “no, that’s it – we are going to pull the plug and stop this right now”,’ says Schmidt. ‘And then I would go into the economics committee and they would say “no, you just can’t do that”.’

Already there are calls for a radical overhaul of the base code of the internet and computing on which we rely, to make it more secure and to build security in from the beginning. Before Bill Gates ceded control of Microsoft he committed the company to adhering to the Trustworthy Computing Initiative to improve the company’s software.

According to many observers such moves, while welcome, are not enough. The amount of poor code already developed at huge speed due to commercial pressures in past decades has left us dependent on an internet system that is as unsafe as the car industry was in the 1930s. And it is onto this unsafe, some would say rickety, infrastructure that we are now planning to launch the internet of things. This is a process for which no one person or organisation has overall responsibility, and those releasing software do so with no concern for any over-arching architecture or infrastructure.

In other words, there is no guidance to state whether the ‘vehicle’ you have released is safe or unsafe. As a result the internet and computing have to a large extent become an ‘ethics-free zone’. As we have seen from the actions of those companies and organisations seeking to harvest our data, there is little concern for the rights of individuals, because computer code and the internet of things turn them into data and strip them of their humanity. The same is true of computer software more broadly, as we have seen from the row over the NSA’s use of data culled from the mobile phone app ‘Angry Birds’, a game mainly played by children. The NSA’s ‘exfiltration’ of that data mirrors the actions of the tens of thousands of commercial companies that have built apps for exactly the same purpose.

It should therefore come as no surprise that where there is such a wilful disregard for individual privacy rights, the same disregard holds true in the development of other software systems.

Indeed there is widespread ignorance of the fragility of the system, or of our dependence on it. As a number of experts have pointed out, the power grids in both the US and Europe are particularly vulnerable because of this uncontrolled evolution.

While that may pose a practical risk to humanity, there are also ethical considerations about the fact that this situation has been allowed to develop and that the problem is now predicted to accelerate with the emergence of the internet of things.

It would be better, say some observers, to introduce the equivalent of a Food and Drug Administration for software, to prevent the roll-out of untested systems, to ensure that safeguards are built in, and to establish a system of control that allows human beings to assert their rights effectively.

A European Software Certification Agency would, inevitably, be criticised early on for being unwieldy, for slowing the pace of commercial competition and for hampering the development of software in Europe. But demands for light-touch regulation will only be tolerated until it is deemed that legislation is essential because light-touch administration has failed.

Post-event legislation frequently follows rapid technological change, as with the automotive legislation of the 1930s mentioned earlier, and the close control exerted over the avionics industry by bodies such as the European Aviation Safety Agency following concerns over the safety of air travel.

Industry-specific legislation has also been drawn up following particular crises: the collapses of large companies such as Enron and WorldCom led to the Sarbanes-Oxley and Basel II legislation; new regulations were ushered in in the wake of the credit crisis in the US and Europe; and the US hotel industry underwent industry-wide reform following the rape of the singer Connie Francis and her subsequent $2.6m lawsuit against the Howard Johnson Motel group.

It is a post-event legislative culture that can be avoided, according to Professor Susan Anderson and her husband Michael, by the introduction of a new form of computer code that places ethics at the heart of the new communication systems.

Go to Part 2

This article is part of a series of articles published from the Netopia report Can We Make the Digital World Ethical? Exploring the Dark Side of the Internet of Things and Big Data, by Peter Warren, Michael Streeter and Jane Whyatt.