Is the IoT acting in the right interest?

Consumer rights

A major concern for our rights as consumers is the way that machines direct us according to their interests and not ours. Experts such as Dr Jonathan Cave warn about the growing influence of software machines on our lives. Cave says that software machines will use what they know about us to present information that may not be to our advantage. Because the search engines we use know a certain amount about us and our previous buying decisions, they are keen to exploit that knowledge to turn us into buyers, through what are known as ‘filter bubbles’ – feedback loops in which recommendations only reinforce our existing patterns.
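
To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch; the catalogue, topics and scoring rule are our own invention, not any real recommender’s code:

```python
# Illustrative sketch of a filter-bubble feedback loop (hypothetical,
# not any real recommender's code). Items are tagged by topic; the
# recommender scores each item by how often its topic already appears
# in the user's click history, so every click narrows future picks.
from collections import Counter

CATALOGUE = [
    ("lawnmower review", "gardening"),
    ("hedge trimmer deals", "gardening"),
    ("election analysis", "politics"),
    ("new phone launch", "technology"),
]

def recommend(history):
    """Rank catalogue items by affinity to topics already clicked."""
    topic_counts = Counter(topic for _, topic in history)
    return max(CATALOGUE, key=lambda item: topic_counts[item[1]])

history = [("lawnmower review", "gardening")]   # one initial click
for _ in range(3):
    choice = recommend(history)   # top pick is always more of the same
    history.append(choice)        # the loop reinforces itself

print([title for title, _ in history])
# every subsequent recommendation stays inside the 'gardening' bubble
```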

As Dr Rupp states, ‘if you are not paying then you are not the customer’. Thus if you are not paying for an internet service such as Google or Facebook, it is not acting in your interests, but rather in the interests of the customers who are paying to present information to you. ‘As technology changes, our concept of what our rights should be may change: it could be that privacy empowers me to use my data, or it could be that it becomes a market opportunity for someone to collect my data,’ Cave says.

To take a small but familiar example, when Google picks up that we are interested in, say, lawnmowers, it fields a number of adverts down the side of our search results that relate to our query, and then fine-tunes that list according to the data it holds on us. Research has shown that people rarely go beyond the first page of results, so companies pay for search engine optimisation (SEO): they buy in experts to make sure they appear in the top three results for a search, and they keep on refining that position. Thus the search is not in our interests – it is in fact a series of adverts competing for our attention.
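
A toy sketch may make this concrete (the adverts, bids and scoring formula below are invented for illustration; real ad auctions are far more complex):

```python
# Hypothetical sketch of profile-driven ad ranking: each advertiser
# bids on a search term, and the engine orders adverts by bid weighted
# by how well the advert matches what it knows about the user.
ADVERTS = [
    {"title": "Budget mowers",  "bid": 0.40, "tags": {"garden"}},
    {"title": "Ride-on mowers", "bid": 0.90, "tags": {"garden", "premium"}},
    {"title": "Garden gloves",  "bid": 0.20, "tags": {"garden"}},
]

def rank_adverts(adverts, user_profile):
    """Order adverts by bid x profile relevance (a made-up formula)."""
    def score(ad):
        relevance = len(ad["tags"] & user_profile) or 1
        return ad["bid"] * relevance
    return sorted(adverts, key=score, reverse=True)

profile = {"garden", "premium"}          # inferred from past searches
for ad in rank_adverts(ADVERTS, profile)[:3]:
    print(ad["title"])                   # the 'search result' is really
                                         # a ranked list of paid adverts
```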

Moreover, other research has shown that if a web page does not load within eight to 11 seconds, we will go to another site. Thus the system is already directing us, and to this extent we are at the mercy of the machines. Other information about us is stored, based upon our profile and on data that Google or other systems have culled about us – and it is later used to serve up offers they think may be relevant to us.

We know of one mid-level executive who was highly embarrassed when, while doing a web search in front of colleagues, he saw adverts being served up for Caribbean cruises for gay people. These had evidently resulted from previous searches he had made. Of course, the machine was trying to be helpful and did not know that at certain moments this information could be embarrassing. The individual had not realised the implications of being logged into his personal Google profile.

There is clearly a conflict of interest here: we use a search engine to find information, while at the same time that search engine is selling our personal interests to other companies seeking to establish a commercial advantage.

The situation will become even more worrying with the emergence of avatars, which as we have seen are predicted to become our ‘personal representatives’ in the world of the internet of things. Software developers will home in on the code of our avatars and seek to develop their own code that makes decisions for the avatar. Companies and their software engineers will also try to extract data that we have instructed our avatar to withhold. Thus there will be a need to protect the avatar and the ways it interacts with other software.
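
One way to picture such a safeguard is a simple disclosure policy that the avatar enforces before answering any external query. The sketch below is our own, under assumed names (the Avatar class and its fields are hypothetical):

```python
# Hypothetical sketch of an avatar that enforces an owner-defined
# disclosure policy: fields not on the allow-list are never released,
# however the requesting service phrases its query.
class Avatar:
    def __init__(self, data, allowed_fields):
        self._data = data                    # everything the avatar knows
        self._allowed = set(allowed_fields)  # what the owner permits

    def answer(self, requested_fields):
        """Return only permitted fields; log anything withheld."""
        disclosed, withheld = {}, []
        for field in requested_fields:
            if field in self._allowed:
                disclosed[field] = self._data.get(field)
            else:
                withheld.append(field)
        if withheld:
            print(f"withheld: {withheld}")   # audit trail for the owner
        return disclosed

me = Avatar(
    data={"name": "Alice", "age": 41, "income": 52000},
    allowed_fields=["name"],
)
print(me.answer(["name", "income"]))  # -> {'name': 'Alice'}; income withheld
```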

Yet another concern is the emergence of ‘behavioural science’ – the study of how we actually make decisions and behave. The internet of things will have a big impact on this, as it will yield huge amounts of data on how we actually behave, where we are, and what we are doing.

It is important to note that this information gathered about us will not all come from the online world, but from our offline behaviour too. Supermarkets, for example, will be able to tell which objects we stood in front of in a shopping aisle and which one made it into our trolley at what time. This can then be cross-referenced with other decisions we have made.

‘Losing control of our technology’

Dr Michael Anderson, associate professor of Computer Science at the University of Hartford and an expert on machine ethics who runs the website www.machineethics.org, says this debate raises an important question: who are the machines working for?

By ‘machines’ in this context we do not mean ‘dumb’ robots, but software entities on the internet that engage with a human search or action. It is these programs that cause the greatest concern to Professor Murray Shanahan, an artificial intelligence specialist with a particular interest in the risks associated with AI.

‘I think before we should be worrying about humanoid robots taking over, there is some concern about artificial intelligence that is not embodied in quite that way but which is in the devices that we carry around with us, and in the internet, and in the cloud and so on,’ he says.

‘We are going to see a lot more AI technology embedded in our surroundings and in the internet and in systems that are connected to the internet – and that’s where I think that we need to worry. Not so much about being taken over in some sort of science fiction scenario – but of losing control of our technology.’

One reason we may lose control is simply complexity: the machines, and the systems they run, have become so complex that no one can understand the mechanism any longer, and it could already be developing its own momentum.

As Eric Schmidt, the former Google CEO and advisor to US President Obama, has pointed out: ‘The Internet is the first thing that humanity has built that humanity doesn’t understand – the largest experiment in anarchy that we have ever had.’

Ethical machines?

This question of control is a key area that is often overlooked in discussions of the internet of things. Exactly who will these systems serve, what will or should drive their decisions, and how will humans ultimately retain full control of what is going on? Will the driving force be large corporations, governments – or citizens and consumers?

These are questions echoed by Dr Anderson, who is particularly concerned about the ethical dimension of the machine age. ‘Should the robots be trying to tell you something – for example, should we have whistleblowing robots?’ he asks. ‘Should we have ethical machines in the stock exchange systems, that are making the decisions based on the buyer in pursuit of profit or decisions that are in the interests of the employees of a particular company which could be put out of business due to a buying decision?’

‘If there is a sales robot, should it be trying to sell you something because it wants to make as much money as possible for the person who has paid for the development of the robot – or for the buyer who wants to make the best possible choice for themselves?’

This is a particularly problematic area, owing to legislators’ limited understanding of the way the internet works and to the relatively poor representation of consumers in this new ‘global’ marketplace. While stock market regulators have quickly evolved a whole host of mechanisms to ensure – in theory at least – probity in the markets, including sophisticated software analysis of trading patterns, similar systems of control have not yet been developed for the internet at large. This has given companies the freedom to evolve very sophisticated systems for ‘market rigging’.

Among the most obvious of these is the use of ‘search engine optimisation’ to promote a company, as touched on above. For some years, and despite calls from European countries to prevent it, companies have spent considerable sums of money to influence web searches. This can involve buying adverts against a particular search term, or writing a web page in a way that promotes it through the search engine’s ranking. Thus pages are deliberately written – arguably ‘programmed’ – to appeal to the search robots themselves, while programmers abuse the scoring systems used by the search engines to the same end.
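
A deliberately naive ranker shows why such scoring is gameable (a toy model of our own; real search engines use far more robust signals):

```python
# Illustrative sketch of why naive keyword scoring is gameable: a page
# 'written for the robots' out-ranks a genuinely useful page simply by
# repeating the query term. (A toy model, not any real engine's code.)
def naive_score(page_text, query):
    """Score a page by raw occurrences of the query term."""
    return page_text.lower().count(query.lower())

honest_page  = "A detailed, hands-on comparison of petrol lawnmowers."
stuffed_page = "lawnmowers lawnmowers lawnmowers buy lawnmowers now " * 5

pages = {"honest": honest_page, "stuffed": stuffed_page}
ranking = sorted(pages, key=lambda p: naive_score(pages[p], "lawnmowers"),
                 reverse=True)
print(ranking)  # -> ['stuffed', 'honest']: keyword stuffing wins
```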

Many leading internet thinkers, for example Jaron Lanier, now argue that there is a need to reverse humanity out of the machine that has developed and to start rebuilding a system that meets humanity’s needs. Since the start of industrialisation, humanity has built machines that it has then imposed upon itself, and computers are a very good example of this. One way this has changed our behaviour can be seen in the way that, in the United States and elsewhere, people now make themselves ‘fit’ forms and profiles in order to build credit histories. They have, in effect, begun to make themselves more like machines in a bid to succeed in a world where machines increasingly take autonomous decisions.

Cybercrime and the internet of things

Each of the experts interviewed for this report expressed profound concern at the way this world is developing, the speed at which it is happening and, above all, the lack of debate surrounding it – particularly given the abuses of personal information that have occurred through the algorithms used by the NSA and GCHQ. Legislators, in particular, have arguably focused too much on issues such as encouraging big data and controlling, for example, the 3D printing of firearms, and not enough on the collection of personal information and individual rights.

According to Melissa Hathaway, US President Barack Obama’s first advisor on cyber security strategy, the debate, particularly in the US, has been virtually paralysed by political concerns.

‘The next few years are going to be crucial for the internet and the US is not in the best possible place to respond to that – at the moment it is facing a number of financial issues and the government itself is not particularly cohesive.’

‘This is coming at the same time that a number of important things are happening; in October 2014 there is a significant meeting to discuss the future of the internet and that is the UN World Summit on Information Society and it will be important for the US Government to think about what is the positive narrative for the internet and how will the US work with allies and other countries to promote the economic health and well-being of the internet,’ she says.

This is an important point, for much of the key infrastructure of the internet is still in the US, and thus the US has an important part to play in that debate. After the revelations about Prism many countries may not like that, but it is still something that has to be acknowledged.

It is also important to note that political attention has been diverted from the development of the internet and the internet of things by what have seemed more pressing issues, such as the war on terror, concerns over global warming, and the economic repercussions of globalisation.

All of this has meant that technology issues such as cybercrime, the changes wrought by social networks and the ramifications of the rapid and wholesale penetration of information technology into our lives have not received the full consideration they need, particularly in the area of law and regulation.

This is a point Hathaway underlines. Every household is now equipped with internet-capable devices – not just mobile phones but laptops, tablets, smart televisions, e-book readers and PCs – and to these we will rapidly add fridges, smart meters and cars. All of these devices will be connected to the internet; they can also be reached from inside the house via Bluetooth or WiFi, and they will be accessible remotely by us, our family and anyone else to whom we give access to manage our homes and devices.

All of which is great and makes life more efficient; but it also makes us more vulnerable to attack from the unscrupulous.

Self-programming software?

Any potential problems with the internet of things, and the way it will increasingly dominate our lives, will only grow when even more sophisticated software enters the scene. The next generation of software robots may involve a form of self-programming or decision-making based on the situations they encounter via the internet. This is a risky step for any technology, given the possibility of computer viruses running amok. Twenty-five years ago, for example, the first internet worm – the Morris Worm – jammed the fledgling internet after developing in a manner unforeseen by its creator. Stuxnet, which was designed as a stealth virus targeted only at Iran’s nuclear industry, still managed to attack power plants in Asia and caused damage to the US oil company Chevron. It has already been reported that a virus has attacked the internet of things.

According to Professor Barrett, any move to self-programming systems would be extremely worrying. Barrett suggests that the initial moves towards artificial intelligence will involve locking down parts of the code. ‘The way that such software works is through “adaptive” programs. These have a fixed core of functionality, augmented by a set of varied additional functions, pre-programmed by the author but switched in or out by the program as required. Something very like this is used to make mutating computer viruses, for example.’
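
Barrett’s description can be pictured with a small sketch: a fixed core that switches pre-programmed optional behaviours in or out at run time. This is our illustrative rendering of the idea, not his code, and all names are invented:

```python
# Sketch of an 'adaptive' program in Barrett's sense: a fixed core of
# functionality plus pre-programmed optional behaviours that the
# program itself switches in or out depending on what it encounters.
# (An illustrative rendering of the idea; all names are invented.)

def compress(payload):   return f"compressed({payload})"
def encrypt(payload):    return f"encrypted({payload})"
def plain(payload):      return payload

# The author pre-programs the full set of variants up front...
VARIANTS = {"compress": compress, "encrypt": encrypt, "plain": plain}

def core(payload, environment):
    """Fixed core: picks which pre-programmed variant to activate."""
    if environment.get("hostile"):      # e.g. detection risk observed
        behaviour = VARIANTS["encrypt"]
    elif environment.get("slow_link"):
        behaviour = VARIANTS["compress"]
    else:
        behaviour = VARIANTS["plain"]
    return behaviour(payload)

# ...and the running program, not the author, decides which is used:
print(core("data", {"hostile": True}))    # -> encrypted(data)
print(core("data", {"slow_link": True}))  # -> compressed(data)
```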

This creates a self-programming autonomous program, and it is a step that an organisation should take only if it is completely confident that it is in control of what that system can do – a difficult guarantee for anyone involved in technology to give. This is why software machines are considered by many of the experts we interviewed to be the greatest potential threat to humans from the new world of machines.

Old and new data

A more prosaic concern for policy-makers is what to do with the large amounts of ‘old’ data currently stored on ‘old’ computer systems run by governments and some large companies around the world, data that is currently separate from the ‘new’ world of ‘big data’.

The chief issue with this old data is how to migrate it from the ‘legacy systems’ where it is stored, and whether its value can ever be fully realised. There are also concerns about data protection laws, which mean that much of the old stored data cannot currently be mixed or ‘consolidated’ with other data because, as Mayer-Schönberger points out, when the data was collected it was not intended to be re-purposed for the uses people may wish to put it to in the future.

This has led to various initiatives aimed at finding cloud solutions that will allow data to move from government data pools onto the web – to what has been dubbed the G-Cloud, or Government Cloud. For government and big businesses such as financial institutions, this represents the final move from the secure computing environments of the phone age – data held on restricted-access computers in secure locations – onto the internet and into the cloud.

The move to the cloud is, though, a step fraught with risk. Currently much of the data is held on legacy systems and in differing coding architectures. For both governments and big business this represents a huge problem: the data is being stored against the risk of loss while the organisations holding it are unable to derive any benefit from it. The efficiencies promised by the internet of things will therefore not be fully realised until this ‘old data’ is harnessed to the data being generated from new sources.

Banks and financial institutions are therefore at a disadvantage against less restricted cloud-based competitors such as mobile phone companies, which are able to develop the single customer profile that the banks have not been able to deliver. Governments are also in a difficult situation, but for different reasons: if they try to pool all their data they risk being accused of building a ‘big brother’ computer system to monitor their citizens’ behaviour. The banks, at least, have less of a problem than governments with information loss: their track record in protecting data is far better, though arguably this is due to commercial pressures arising from the possibility of reputational damage and litigation, and from the banks’ view of data as an asset.

On the other hand, a failure to develop a government variant of the ‘single customer profile’ so beloved of business marketeers will lead to accusations of technological backwardness, incompetence and inefficiency that will, particularly in the light of the expected explosion in health data, be politically difficult for any government.

Systemic failures of centralised government computer systems, particularly in countries such as the UK, will become even more difficult to justify at a time when budget cuts demand efficiencies that can only be delivered using ‘smart’ technologies such as the smart grid and smart cities.

This article is part of a series of articles published from the Netopia report Can We Make the Digital World Ethical? Exploring the Dark Side of the Internet of Things and Big Data, by Peter Warren, Michael Streeter and Jane Whyatt.