Flattening the Curve – Why Exponential Growth in AI May Be a Mirage

Time spent in lockdown can be used to think about the big things in life. Like artificial intelligence. Netopia has a mini-theme on artificial intelligence, with Peter Warren’s story on insuring self-driving cars and Ralf Grötker’s review of Stuart Russell’s Human Compatible. Both put artificial intelligence in context. No, artificial superintelligence will not eliminate humans anytime soon. Yes, there are completely different issues we need to debate to make AI useful for humans.

Exponential growth is a popular way to think about digital phenomena: innovations added to innovations at an ever-accelerating pace. This mindset leads to quotes like “change will never again be so slow as it is now”. It brings the conclusion that there will come a point where innovation happens all at once and the speed of change explodes into the “singularity”.

In this virus pandemic, we have been looking at a lot of exponential growth curves and thinking about ways of flattening them. While the virus might be disruptive in its own way, it is driven by mutation rather than innovation.

Exponential growth is seductive. Apply it to any process and you get mind-blowing results. The problem, of course, is that not all processes can accelerate exponentially. Let’s look at self-driving cars. If computing power were the only limiting factor (and computing power, in theory, grows exponentially according to Moore’s Law), AI would become smart enough to take over from the driver any day now. The first problem is that AI relies on a number of other technologies, such as GPS, cameras, lidar, radar and many other sensors and communication technologies. Each of these technologies develops at its own pace, but not necessarily at exponential speed. 5G telecom networks, for example, are often mentioned as a key to self-driving cars, but the roll-out pace is held back by many factors: legal, financial, political and so on.

The second problem is the data set. One way to make AI useful is to train it on a big set of data and let it find the patterns. This is what machine translation has done. In the old days, machine translation tried to replicate how humans learn languages, with grammar and glossaries and such. The breakthrough came with predictive statistics applied to huge samples of real language (corpora), where the AI can guess with some accuracy what the most likely next word will be from the context. This is what auto-maker Tesla tries to do with its autopilot system, which silently observes how real drivers deal with various traffic situations (such as stopping at red lights) and uploads that to a central system, which then analyses the best driver behaviours and feeds them back to the autopilot. That raises the philosophical question of whether all traffic situations can be predicted and simulated. If the answer is no, self-driving cars will never be 100% self-driven, just like machine translation can never be 100% accurate.
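The “guess the most likely next word from context” idea can be sketched in a few lines. This is only an illustration, not how any real translation system or Tesla’s autopilot is actually built: the toy corpus and the bigram-counting approach below are assumptions chosen to keep the example small.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the huge samples of real language
# (corpora) that actual systems train on.
corpus = (
    "the car stops at the red light . "
    "the driver stops the car at the light . "
    "the car stops at the crossing ."
).split()

# Count, for each word, which word follows it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def guess_next(word):
    """Guess the statistically most likely word to follow `word`."""
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # → "car": the most frequent follower of "the" here
```

The point the article makes carries over directly: the guess is only as good as the corpus, so a context the system has never observed cannot be predicted — just as a traffic situation absent from the training data cannot be handled.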

Put together, this means that exponential growth in self-driving cars may exist in simulation but not on the road. In real life, incremental innovation is a better explanation. We make the camera a little better, so the car can navigate in rainy conditions. Then we make the wheel sensors a little better, so the AI gets more feedback about the tires’ grip and can calculate braking power better. Then we add more videos to the data set so that the AI better recognizes road markings and can keep the car in lane. All these little steps amount to great progress, but they make for systems that assist the driver (lane departure warnings, automatic high beams) rather than replace the driver. Mathematically this is logarithmic growth, which means diminishing returns – think of Achilles and the tortoise.
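The difference between the two growth stories shows up clearly in the size of each successive step. A minimal sketch, with doubling standing in for the exponential (Moore’s-Law-style) story and a base-2 logarithm standing in for the incremental one:

```python
import math

def exponential(step):
    # Exponential story: capability doubles every innovation cycle.
    return 2.0 ** step

def logarithmic(step):
    # Logarithmic story: each cycle adds less than the one before.
    return math.log2(1 + step)

# The gain delivered by one more innovation cycle under each story.
exp_gains = [exponential(s + 1) - exponential(s) for s in range(1, 6)]
log_gains = [logarithmic(s + 1) - logarithmic(s) for s in range(1, 6)]

print(exp_gains)  # each step is larger than the last
print(log_gains)  # each step is smaller than the last: diminishing returns
```

Under exponential growth every step dwarfs the previous one; under logarithmic growth the steps shrink, which is exactly the “little steps amounting to great progress, but more slowly each time” pattern described above.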

In many real-life AI applications, logarithmic growth may be a better explanation than exponential growth, just not as spectacular. If we keep this distinction in mind, we can have a better-informed debate about the threats and opportunities of artificial intelligence. Also, next time somebody says exponential growth, you can ask: “Sure it’s not logarithmic?”

Now, if we could only flatten the corona-virus curve in the same way…