Internet of Things – Blueprint for Action

The internet of things and the era of big data will bring great benefits. But many of those benefits risk being overshadowed if the real potential problems posed by this new technological revolution are not addressed.

As we have seen, we are now allowing an unprecedented and unregulated explosion of data, data gathering and data analysis, one that leading lawyers say the law cannot keep pace with because the technology is moving so fast.

Also, at a time when virtually every other field – from medicine and transport to communications and energy – has a regulator, computer software remains unregulated. Yet it is this very software that will control the internet of things and, with it, the fabric of the world we live in.

Too much focus, we believe, has been placed on the technological advantages that the internet of things and big data can bring, and not enough on the impact on humans of living in a world in which we increasingly hand over control of everyday functions to machines and to the new gold standard of the modern world, big data. The benefits of the IoT have been stressed while the dark side of these changes has been largely ignored.

There is now, therefore, an urgent need for policy makers to consider key practical questions about how to ensure the IoT works for the people, and not independently of the people.

There is also an urgent need to ensure that the system itself is safe and protected. At the moment it is worryingly vulnerable.

To address some of the issues raised in this report we recommend the following:

1 – Consideration should be given to how to bring ethics into computer programs and software to ensure that the rights and privacy of human consumers are protected.

Citizens’ privacy needs to be much better protected from the world of big data, whether by restricting access to that data in the first place or, as many of the experts we have spoken to suggest, by placing controls on how that data is used once it has been gathered.

The rights of citizens and consumers in relation to the internet of things and internet software need to be codified in a short and simple form. This could include giving machines some form of legal status to ensure that we humans are given extra protection.

2 – We call for an end to the current practice of road-testing software on the population at large. New software destined for use in the public arena must be properly regulated and checked for safety and compatibility before it is released. This would require the setting up of a new European technology regulation body, part of whose role would be to act as the software equivalent of the Food and Drug Administration in the United States.

Funding of this new software regulation body: we are aware that the IT industry does not consist only of large organisations such as Microsoft and Google, and that it is a vibrant and developing industry – so the costs of proving software should not be shouldered entirely by smaller companies, which should be helped to ‘prove’ their work. The patent system is already so costly and unwieldy that it is a significant disincentive for companies to work within it, which has led many to look for ways around it. At the same time, we believe it would be unfair for the taxpayer to fund software regulation. So, to protect the interests of both consumers and small-scale developers, we suggest the IT industry should provide a sliding-scale fund for the proving of technology, based on company size.

Another key function of this new Europe-based technology regulation organisation should be to inform governments and politicians of the significance of technologies. Already much good work has been done by the EU in bringing companies such as Microsoft to account. This has meant that the EU is now seen as taking a lead in this area. This new organisation would set the benchmark for the rest of the world and ensure that Europe is seen as a centre of probity.

The new technology body would also have the key role of informing the public. There is an urgent need to increase the awareness of the population at large about the significance of the Internet of Things and what it means for them. This is something the IT industry is not currently doing: it has a vested interest in promoting the benefits of technology rather than its drawbacks.

A final role for this new technology regulator should be that of an infrastructure planning agency, tasked with understanding exactly how much of the internet’s infrastructure is European and what we control. Its remit would include drawing up contingency plans to bring limited parts of that infrastructure back under European control in the event of a widespread attack upon it.

3 – We call for the development of technology that can make data ‘anonymous’ while still producing valuable data that is of benefit to society as a whole. We contend that the only way this may be possible is through the development of an ethical computer system that stipulates how the Internet of Things can use information.
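
One existing technique that points in this direction is differential privacy, which publishes useful aggregate statistics while adding enough statistical noise to mask any individual record. The sketch below is purely illustrative and is not part of the report’s proposals; it assumes a simple counting query, and the function name dp_count and the smart-meter example are hypothetical.

```python
import random

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count of items matching `predicate`.

    A counting query changes by at most 1 when one record is added or
    removed, so Laplace noise with scale 1/epsilon hides any single
    individual's contribution while the aggregate stays useful.
    """
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon) noise.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Example: publish roughly how many smart-meter readings exceeded 5 kWh
# without revealing whether any particular household's reading did.
readings = [3.2, 6.1, 4.8, 7.4, 5.9, 2.1]
print(dp_count(readings, lambda r: r > 5.0))
```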

4 – We suggest there is a need to reinforce what we call ‘device sanctity’. As smartphones, devices and the software they use become increasingly personalised, it is important that these devices are loyal to the individual who owns them. Devices considered to have ‘a human interest’ need to be properly protected against incursions from both the state and cyber criminals, a protection enshrined in law.

5 – Primacy of interest. It is now possible for a number of different groups to have an interest in a device such as a smartphone – the person who bought it, the telecommunications company that runs it on our behalf, companies such as Facebook, Google or LinkedIn to whom we have granted an interest in our whereabouts, the government and the police. It is essential that the order of primacy in this interest is made clear and asserted.

Individuals should have to actively opt in to the Internet of Things if the use of their device is being solicited by another party, and the implications of opting in should be made clear.

In return for services offered by the IoT there should be a ‘cooling-off period’ before those wishing to use a service can participate. Data must not be used without an explicit ‘buy-in’ from the person concerned.

We suggest that consideration should be given to imposing compulsory insurance for computers and devices and for those who are producing software for those devices, for the internet and for the IoT.

We believe the issues are major ones; nothing less than the future safety of the internet and the acceptance by citizens of this new technological world are at stake.

For while most consumers seem to have embarked on a deep love affair with their smartphones, devices which, as we have seen, will be most people’s main contact with the internet of things, this technological love-in cannot be taken for granted.

If, over the coming years, more and more people feel alienated, lost and no longer in control of the world they live in, there could be a significant backlash against the machines, software and all things technological.

Up to this point in history, humans have been able to touch, see and intuitively understand how the world around them works. This reassuring handle on the world will start to disappear with the advent of the internet of things, which is increasingly likely to be seen as vast, complex, hidden and mysterious.

Already we have seen, in the recent – and ongoing – financial crisis, how the complex world of finance lost the trust and confidence of many people when they were confronted with the real-world impact of vast transactions and operations that they did not understand but saw as damaging to society’s interests.

How much greater will the risk of alienation become if people feel they are suffering as a result of the complexity of the everyday world itself, one that is perceived to be run by machines and not always in the interest of us, the consumers?

This is why the technological optimism of the new digital world must be accompanied by pragmatic policies, rules and workable legislation to reassure people that they are still the masters of the world in which they live.

Visible, concrete, practical and robust measures need to be adopted to show citizens that the technological world is both safe and here to serve people – and not the other way around.

That way the new age of machines can do what it was surely always intended to do – make life a little easier and more efficient for us humans.

This article is part of a series of articles published from the Netopia report Can We Make the Digital World Ethical? Exploring the Dark Side of the Internet of Things and Big Data, by Peter Warren, Michael Streeter and Jane Whyatt.