Artificial intelligence (AI) seems to be on everyone’s lips. If 2017 was a formative year for technological advancement in this sphere, 2018 has been billed by some observers as the beginning of full-scale implementation of AI in our lives.
The recent Consumer Electronics Show (CES 2018, held in Las Vegas), the world’s largest tech event, communicated the clear message that if a device doesn’t feature AI, it’s probably not worth having.
As such, AI will now be part of the conversation when buying a TV. And, at CES, there was even a lavatory, Kohler's Numi, that flushes on voice command. Hurrah!
But what is AI? The term was coined in 1956 by John McCarthy, an American computer scientist. In its purest form, it describes a system that does not yet exist: one that, harking back to the test devised by mathematician Alan Turing in 1950, exhibits intelligent behaviour equivalent to, or indistinguishable from, that of a human. This would require a computer to have high-level human cognitive abilities, such as possessing a sense of self and understanding the motivations of others.
While machines currently lack, and may always lack, our own sentient capacity, giant strides have been made in a dynamic subset of AI, namely machine learning (ML). This is defined as the science of teaching computers to learn for themselves. It appears that breakthroughs in ML are both driving the evolution of AI and redefining what, in the current environment, we mean by it.
Much of the technical detail of ML passes me by but, in short, the development of neural networks has been a game changer in advancing the humanlike performance of computers while retaining their superior speed and accuracy. These networks, which work on probabilities rather than fixed rules, enable computers to classify information in much the way we do.
As a result, based on the data it is fed, a machine can make decisions with some degree of certainty. Reliability is further enhanced by a feedback loop (was the decision right or wrong?), which drives the learning and informs a modified approach next time around.
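To make that idea concrete, here is a minimal, purely illustrative sketch in Python (my own toy example, not drawn from any product mentioned here; names such as TinyClassifier are invented). It shows a single-neuron classifier that turns an input into a probability and then nudges its weights according to feedback on whether its decision turned out to be right or wrong.

```python
import math
import random

def sigmoid(x):
    # Squash a raw score into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

class TinyClassifier:
    """A toy, single-neuron classifier: illustrative only."""

    def __init__(self, n_features, learning_rate=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.learning_rate = learning_rate

    def predict_proba(self, features):
        # A decision "with some degree of certainty": a probability, not a hard yes/no.
        score = self.bias + sum(w * f for w, f in zip(self.weights, features))
        return sigmoid(score)

    def learn(self, features, correct_label):
        # The feedback loop: compare the decision with the known outcome
        # (right or wrong?) and adjust the weights accordingly.
        error = correct_label - self.predict_proba(features)
        self.weights = [w + self.learning_rate * error * f
                        for w, f in zip(self.weights, features)]
        self.bias += self.learning_rate * error

# Invented example data: label a point 1 if its two features sum to more than 1.
random.seed(0)
points = [[random.random(), random.random()] for _ in range(200)]
data = [(x, 1 if sum(x) > 1 else 0) for x in points]

model = TinyClassifier(n_features=2)
for _ in range(20):                  # repeated passes: "next time around"
    for features, label in data:
        model.learn(features, label)

print(round(model.predict_proba([0.9, 0.8]), 2))  # should be high (close to 1)
print(round(model.predict_proba([0.1, 0.2]), 2))  # should be low (close to 0)
```

The point is only the shape of the loop: a probabilistic decision, feedback on whether it was right, and a modified approach next time around; real neural networks do the same thing at vastly greater scale.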
On the positive side, there are many applications of this torrent of innovation. We are already well aware of speech-based assistants, smart homes and autonomous vehicles. More broadly, AI will transform healthcare, enabling better and faster diagnoses; robotic process automation will remove highly repetitive tasks from the workplace; and facial recognition will redefine security procedures.
More improvements are to come, especially in personalisation, as neural networks grow in scale through advances in so-called deep learning. That should bring jumps in the accuracy of classification and, particularly, of prediction, where ML has to date performed less successfully.
But there are also risks in this race to develop a machine that can operate like us. I don't mean the 'Terminator scenario', which will hopefully remain within the realms of science fiction, despite reports last year of two Facebook chatbots (Bob and Alice!) creating their own language and eventually talking to each other in a way their human minders could not understand. My concern lies in what I believe to be the gap between the theoretical possibility of AI technology and the level of penetration that is manageable and beneficial to the public at large.
Arguably, we have been asleep at the wheel with smart devices and social media. As author and motivational speaker Simon Sinek has pointed out, we happily impose age restrictions on smoking, drinking and gambling and yet permit “unfettered access to these dopamine-inducing devices and media”. We do know that Apple’s co-founder Steve Jobs limited his own children’s use of the company’s products. His successor, Tim Cook, in last Saturday’s issue of The Guardian, expressed a similar sense of responsibility. But, given the global reality, is this just so much iWash?
And that's before we consider some of the more specific issues around AI. For instance, how do we establish an appropriate co-existence between automation and employment? Will job losses in some areas be offset by job creation in others? Can we rely on assertions like that of Deloitte's Shilpa Shah on Woman's Hour (16.01.18), namely "we believe that 65% of jobs haven't yet been invented in industries that are yet to be born"?
Also, it is acknowledged that AI systems contain bias, partly because most algorithms are built by predominantly white, male teams (hence the high-tech loo, perhaps?!) and, more widely, because the data sets on which AI relies, whether by design or not, fail to reflect the subtleties of our society.
Finally, the tech industry is laying the groundwork for our smart devices to be AI-enabled and for AI to be added to our suite of apps. As a result, the addictive power of our new best friend increases.
The key to making the right decisions about AI is a sensible view not only of what it can and cannot do but also of what it should and should not do. And this view cannot simply be that of the world's institutions and technology companies.
We must all play our part in the AI revolution (it’s here to stay), committing to lifelong learning, resilience and flexibility. But, equally, we must all be active in the debate on how artificial intelligence contributes to our lives. It’s about striking the right balance for humanity.
Our own RI (real intelligence) must come first, supported by AI, or else…