Artificial intelligence (AI) and new approaches to data processing are beginning to challenge insurance processes: in personalised pricing, quoting, automated claims, and new types of risk diagnostics such as voice analytics or driver signatures. Across wider society, AI is having an impact on many aspects of daily life, with machines that can autonomously read documents, assess images, learn, speak, walk, fly and even (potentially) fire weapons.

Big data, AI and machine learning are becoming part of business as usual for many organisations. In the US, the National Science and Technology Council has held hearings on preparing for AI. In the UK, the Science and Technology Select Committee has posed a host of technical, legal and ethical questions in its recent report, ‘Robotics and Artificial Intelligence’.

In its recent report on the subject, the Information Commissioner’s Office (ICO), the UK’s data regulator, put forward its view on how dynamic data and new uses of data are increasingly challenging the regulatory landscape.

Public policies and a ‘bigger debate’ needed for data and AI

It is an important debate: how to design public policies and controls for AI without stifling innovation. Both sides make compelling arguments.

The changes are all being driven by the continued growth and availability of data, including data from new sources such as the Internet of Things (IoT), the development of tools to manage and analyse it, and growing awareness of the opportunities it creates for business benefits and insights.

Some of the distinctive aspects of big data analytics are:

  • the use of algorithms
  • the opacity of the processing
  • the speed of processing
  • the tendency to collect ‘all the data’ (which, under data protection law, requires a lawful basis such as the customer’s consent)
  • the repurposing of data, and
  • the use of new types of data.

Although the use of big data analytics is becoming common, it still represents a step change in how data is used, with particular characteristics that distinguish it from more traditional processing. Identifying what is different about big data analytics helps to focus attention on the processing steps and other features that have implications for data protection and privacy, and in particular for implementation of the GDPR. In a recent blog we commented on preparing for the GDPR and how it is expected to impact the insurance industry in Europe and globally.

Up to now, the thrust of the regulation has been to ensure that consumers are in control of their data, and that data handlers have a grip on their data audit, security and data management procedures. Beyond that, there is a growing understanding in society that intelligent machines must be built to “do the right thing” – and, because machine learning has become the most practical way to programme such machines, that behaviour will increasingly be learned rather than explicitly coded.

Deep learning, machine learning and AI

In a previous blog we commented on how deep learning is already moving into the insurance world, particularly in designing computational models from complex layers of information.

Deep learning is a more complex sub-field of machine learning, which is itself a sub-field of artificial intelligence. Progress in computing power, new data sources and the potential of AI has been remarkable – so much so that in insurance, as in the rest of the business world, it is almost impossible to predict what will come next.

This year Google DeepMind’s AlphaGo programme defeated Ke Jie, the world’s best player of the strategy game Go. The New York Times had anticipated it would take a computer 100 years to learn centuries of human game play; in fact, it took just ten months to win.

This step for computing comes at a time when insurance companies – with their partners – are exploring the potential for AI to do everything from detecting deception in the human voice and pricing vehicle repairs from crash images to keeping driverless cars safe and drafting legal documents. Many traditional pillars of the insurance business are changing.

For example, Volvo has unveiled its vision for the “uncrashable car”. Pedestrian airbags, cars in continuous communication with each other and with traffic lights, bus stops and street signs, automated braking and lane assist are just some of the intelligent safety features already available, or soon to be available, in the market.

Entering a ‘golden age’ of productivity with AI and data automation

In his keynote address at the recent IP Expo and Machina Summit, Professor Stuart Russell of the University of California, Berkeley, painted an optimistic scenario for human-compatible artificial intelligence in the future.

However, he said there are dangers: for example, replicating pre-existing racial, gender or other biases in the data into AI systems, or subverting democratic processes. The media, meanwhile, tend to focus only on the downside of AI, such as recent deaths involving driverless cars or the theoretical potential of autonomous weapons systems.

What is coming soon from AI? What potential is still missing, and what will be the wider impacts on insurance and the economy?

Professor Russell, who heads the Computer Science faculty at UC Berkeley and is also an Adjunct Professor of Neurological Surgery, predicts that new intelligent tools for scientific research and economics will lead to a “golden age” of productivity: a step change in which AI will eventually make better decisions than humans in a whole range of areas, destroying some jobs but creating others.

“Never bet against the ingenuity of humans,” commented Professor Russell at the Machina Summit.

“Coming soon we can expect machines that can read everything that has ever been written and integrate that information… Search engines will be able to understand everything that is out there, not just key words. There will be intelligent assistants for health, life, education, finance, and by that I don’t mean chatbots, which are not really intelligent. They just have some speech recognition at the front.”
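Russell’s distinction between keyword matching and real understanding can be made concrete. Below is a minimal, purely illustrative Python sketch: the three-dimensional “embeddings” are hand-made stand-ins for the high-dimensional vectors a real language model would learn, and all of the names and numbers are ours, not any search engine’s.

```python
# A purely illustrative sketch of keyword search versus semantic search.
# The toy three-dimensional "embeddings" below are hand-made stand-ins
# for the vectors a real language model would learn from data.

import math

def keyword_match(query: str, document: str) -> bool:
    """Naive keyword search: do query and document share any word?"""
    return bool(set(query.lower().split()) & set(document.lower().split()))

# Toy vectors: near-synonyms point in similar directions.
EMBEDDINGS = {
    "car":       [0.9, 0.1, 0.0],
    "vehicle":   [0.8, 0.2, 0.0],
    "insurance": [0.0, 0.9, 0.1],
    "cover":     [0.2, 0.7, 0.1],
    "banana":    [0.0, 0.0, 1.0],
}

def embed(text: str) -> list:
    """Represent a phrase as the average of its known word vectors."""
    vectors = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def cosine(a: list, b: list) -> float:
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query, document = "vehicle cover", "car insurance"
print(keyword_match(query, document))                    # False: no shared words
print(round(cosine(embed(query), embed(document)), 2))   # ~0.99: same meaning
```

The same contrast applies to the insurance use cases above: matching a claim description to a policy clause, or a customer question to an answer, is a problem of meaning rather than of shared keywords.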

Bias and other dangers for human-compatible AI

In the category of what is currently missing in AI but coming soon, Professor Russell commented that deep understanding of language (especially jokes, irony and other aspects of language in context) remains a difficult goal for computing.

He said the integration of learning with knowledge in computing has already begun – essentially replicating the deep learning patterns of the human brain – as has the cumulative discovery of concepts and theories, built layer upon layer from patterns and behaviours.

“As a species, we should feel greatly humbled by the prospect of advanced AI,” said Professor Russell, “even if we will still want to have the power to turn off AI at certain strategic moments… The monetary value of human-level AI [in the economy] will exceed total current global GDP.”

In terms of business ethics, risk and ensuring that machines continue to work altruistically in the interests of humans, Professor Russell said that society needs to be sure we put the right purpose into the machine up front – and that it really is the purpose we desire.

There are some basic principles of computing and ethics that are worth revisiting, to make sure we are building the right algorithms into these systems. Is more data always better? What is so scary about better, more powerful AI? How are business attitudes to advanced AI evolving?

For many years there has been a view that simply shovelling more data into systems would produce better answers. In fact, deriving business insights from data relies on much deeper questions: to apply intelligence and insight to the business world, a human first needs to specify the decision inputs. From driverless cars to medical diagnoses, everyone is racing to break into the market, when what is really required is for society to calmly reflect on where we are racing to.

“We cannot just work on the premise that we can always switch off AI when we feel like it,” commented Professor Russell, recalling the moment of horror in the film ‘2001: A Space Odyssey’ when the computer HAL decides to override his human master.

“We want machines that are beneficial to us,” said Professor Russell. “But give a machine an objective, such as ‘go and make coffee’ and it will do everything it can to meet the objective, including over-riding an off switch [in the absence of other governing principles]. So we need to change the definition, to figure out and define the failure mode, the basic principles.”

“Humility is what will save us… The robot has only one objective, which is to maximise the realisation of human values. But at the same time we have to make the machine unsure about the full human objectives, so it will be happy to be turned off, to the extent that it is uncertain about the objective. Such a robot is provably beneficial.”
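The “provably beneficial” claim comes from work by Russell’s group on what they call the off-switch game (Hadfield-Menell et al., 2017). A minimal sketch of the underlying arithmetic, with made-up numbers: the robot proposes an action whose true utility U only the human knows, while the robot holds only a belief (here, a normal distribution) over U. Deferring to a human who will veto bad actions is worth E[max(U, 0)], which is never less than acting unilaterally (E[U]) or shutting down (0).

```python
# A toy version of the "off-switch game" (Hadfield-Menell et al., 2017).
# The robot is unsure of the true utility U of its proposed action and
# holds only a belief over U; the human is assumed to know U exactly and
# to veto the action whenever U < 0. All numbers here are made up.

import random

random.seed(0)
belief = [random.gauss(0.2, 1.0) for _ in range(100_000)]  # robot's belief over U

act_anyway = sum(belief) / len(belief)                       # E[U]: act without asking
shut_down  = 0.0                                             # forgo the action entirely
defer      = sum(max(u, 0.0) for u in belief) / len(belief)  # E[max(U, 0)]: human may veto

print(f"act without asking: {act_anyway:+.3f}")  # ~ +0.20
print(f"switch itself off : {shut_down:+.3f}")   #    0.00
print(f"defer to the human: {defer:+.3f}")       # ~ +0.51, the best of the three

# With no uncertainty, E[max(U, 0)] collapses to max(E[U], 0): deference
# gains nothing, and the off switch is just an obstacle to the objective.
```

In other words, it is the machine’s uncertainty about our objectives, not an extra rule bolted on top, that makes the off switch something it willingly accepts.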

For those of us attempting to apply AI to insurance and other business processes, and to fulfil its great promise for the future, the difficulty lies in our own limitations in expressing our preferences. Machines have to deal with the inconsistent preferences and emotions of humans, with people who commit crimes, financial fraud and other dishonesty, and with the internal conflicts that come from different parts of the brain working together.

But in the financial sector there is already a huge volume of information on human choices that can be harvested. There are huge business and social incentives for getting AI right, and a need to develop the theory and practice of provably beneficial AI – for regulators and for all of us involved.

Follow the link to the LexisNexis Risk Solutions website to find out more about how we support insurance providers.