Written by: Trevor Lloyd-Jones, Content Manager, LexisNexis Risk Solutions

There’s currently a lot of hype around artificial intelligence (AI) and automated data processing for a number of reasons.

But why is it that there are relatively few real-world examples of data science teams in insurance applying machine learning? How can AI and machine learning improve certain areas of the insurance process? How do you make money from it and actually create real value for the customer? What are the specific real-world scenarios where automation can be used?

These were some of the questions addressed at the recent Big Data World and AI Tech World conference series in London. Alongside leaders from the world of data science and consumer technology, a group of leading insurers gave their views during the sessions on how they are starting to bring the data foundation together, involving the millions of fast-moving records that now flow around the insurance ecosystem every day.

Certainly machine learning has begun to take a role in insurance where there are many small opportunities to extract more value from data, or where very fast-moving data would otherwise have been subject to human checks, leaving the slower, more contextual problems to be solved by people.

With very high transaction volumes, and systems today that can execute upwards of 5,000 data allocations per second, processes start to reach volumes that are uneconomic with legacy technology. So for insurance the machine learning opportunity, working together with human data modellers, comes down to finding small opportunities to create value with data in places where there is currently no value and where data gaps exist. Having a machine learning model by itself doesn’t mean anything unless you can operationalise it.

Speaking at the Big Data World Conference, Pardeep Bassi, Head of Data Science at LV=™, presented the data approach and strategies of the 125-year-old insurer, describing how the company is changing “from the inside out”.

“The goal for us is to influence any key decision in the business, in other words to enhance human decision-making, bringing the business team and the data science team close together,” commented Pardeep Bassi.

“By that I mean building the team, understanding the situations, the settings to apply the data to, before formalising the platform. There’s a need to map your path to success, not just the cultural factors, but also in cooperation with the IT, business analysts and other support functions.”

He added: “Engaging as a central [data science] function is important. We found it is important to create the right culture and to bring in the right abilities, not just the academic abilities. We hired people with a consultant skillset, not just an academic or technical skillset. You need to hire people who can learn a clear matrix structure for their development path.”

There was agreement at the conference that machine learning and automation in insurance need support from the top level, and at every management level. The CEO can set machine learning goals for all of their direct reports. But below that level, the senior management team can in many ways be the hardest stakeholder to convince, with loss of control their greatest fear.

There’s a need for data scientists to build these layers of support, including the operational, customer-facing staff who are the ones pressing the buttons and delivering solutions in the real-world setting.

Many incremental benefits for data-driven insurance

First, look at the existing data infrastructure and how it allows for future integration, either in batch production, regular scoring or real-time deployment. All the speakers agreed that the optimum approach for a data science function should not be to come up with a whole new system, but to identify quick wins with an eye to how they can be put into production. Pricing, rating and integrating with the current rating engine – working with the current IT and analytics functions – are absolutely fundamental to how AI is going to bring improvements in general insurance.
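As a loose illustration of what that kind of quick win can look like in production, the sketch below (in Python, with a stand-in model, made-up feature names and an assumed rating-engine hand-off, not any insurer’s actual stack) wraps a trained model in a single scoring function that the same code path can serve in a nightly batch job or per quote in real time.

```python
# Minimal sketch: one model, two deployment modes (batch scoring vs. a single
# real-time quote). The model, feature names and rating-engine hand-off are
# illustrative assumptions, not a description of any insurer's actual system.

from typing import Dict, List

import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["driver_age", "vehicle_age", "annual_mileage_k"]

# Stand-in training data; in practice this would come from the insurer's data platform.
X_train = np.array([[25, 1, 12], [40, 5, 8], [58, 10, 5], [33, 3, 15]])
y_train = np.array([1, 0, 0, 1])  # 1 = claim within the policy year
model = LogisticRegression().fit(X_train, y_train)


def score(record: Dict[str, float]) -> float:
    """Single entry point used by both the batch job and the real-time quote path."""
    row = np.array([[record[f] for f in FEATURES]])
    return float(model.predict_proba(row)[0, 1])


def score_batch(records: List[Dict[str, float]]) -> List[float]:
    """Nightly batch scoring over a book of policies."""
    return [score(r) for r in records]


# Real-time use: the existing rating engine calls score() per quote and treats
# the probability as one more input alongside its current rating factors.
quote = {"driver_age": 29, "vehicle_age": 2, "annual_mileage_k": 10}
print(f"Predicted claim probability: {score(quote):.2f}")
```

The point of the single score() entry point is that the same model can be promoted from batch scoring to real-time use without being rebuilt, which is what makes the quick win operational rather than a one-off analysis.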

Another challenge is the co-existence of many different core systems, the need to manage and retain key skills, and the mix of internal and external teams responsible for different parts of a typical insurer’s infrastructure. This extends to misconceptions and misunderstandings of what machine learning actually does, and to how roadshow events, secondments and staff placements can be used to spread the right skills to the right places.

“Our approach at LV=™ is that where [the functionality] is repeatable, it should be brought in as functionality in-house,” commented Pardeep Bassi. “Identify key roles and needs and manage, move the business forward. Take a long-term approach.”

Amongst the biggest issues in implementing AI in insurance are technological challenges, along with the regulatory and human issues that frequently surface in the form of a shortage (and high turnover) of good-quality skills.

Whenever we think about using automation and real-time data better, it comes down to helping insurance providers underwrite better. This in turn will stimulate demand for on-demand insurance products, satisfying what consumers are looking for in this area.

Questions of fog computing, the IoT and decentralised infrastructure

Parul Kaul-Green, Head of AXA Next Labs Europe, said the advantages for data-driven insurance are clear, in terms of revenue growth, deepened customer loyalty, reduced risk and new operational efficiencies.

“Identifying emerging risks and risk management strategies to reduce large pay outs are the holy grail for any insurer,” she said. “Identifying efficiencies through advanced analytics, working with clearer data sets, can lead to new [risk] mitigation methods… We see more of this area of prescriptive analytics going into the hands of policyholders.”

Peter Kelly, Director of Actuarial, Data Science and Claims Analytics at Hastings Direct, commented that the speed of data flow, already blindingly fast, is getting even faster, and that the warning signs are already there in terms of losing sight of where automation can bring value.

“If you’re a predictive modeller, the signs of caution are there. But you need to read them,” he said.

There is an interesting point of reflection right now in the insurance industry around how to quantify and extract value from big data and automation. How does it sit between the front-end user experience of your website and the application layer? How do you control and manage the data layer? These are all growing complexities.

More data processing is moving to the edge, to the consumer’s phone, their car or other device. Then above the level of the device, and below the cloud, there now sits fog computing.

The main difference between fog computing and cloud computing is that the cloud is a centralised system, while fog is a distributed, decentralised infrastructure. Fog computing acts as the mediator between hardware and remote servers, handling the parts of the exchange that receive requests and send responses: the APIs. It regulates which information should be sent to the server and which can be processed locally.
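A minimal sketch of that split, assuming a hypothetical telematics use case (the device fields, thresholds and function names below are illustrative, not any vendor’s API): the fog node summarises raw readings locally and only forwards a summary upstream when it looks likely to matter for the insurer.

```python
# Minimal sketch of a fog-node decision: process telematics readings locally and
# only forward aggregates that look relevant to the central cloud service.
# All names, fields and thresholds here are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class TelemetryReading:
    device_id: str
    speed_kmh: float
    harsh_braking: bool


def process_locally(readings: List[TelemetryReading]) -> dict:
    """Summarise raw readings on the fog node instead of streaming them all upstream."""
    return {
        "device_id": readings[0].device_id,
        "avg_speed_kmh": sum(r.speed_kmh for r in readings) / len(readings),
        "harsh_braking_events": sum(r.harsh_braking for r in readings),
    }


def should_forward_to_cloud(summary: dict, braking_threshold: int = 3) -> bool:
    """Only send data to the central servers when it is likely to matter."""
    return summary["harsh_braking_events"] >= braking_threshold


readings = [
    TelemetryReading("veh-001", 52.0, False),
    TelemetryReading("veh-001", 87.5, True),
    TelemetryReading("veh-001", 34.2, True),
    TelemetryReading("veh-001", 61.8, True),
]

summary = process_locally(readings)
if should_forward_to_cloud(summary):
    # In a real deployment this would be an API call to the central platform.
    print("Forwarding summary to cloud:", summary)
else:
    print("Handled locally, nothing forwarded.")
```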

There are lessons to be learned from the early implementations of machine learning in insurance. Looking across at the worlds of retail and the Internet of Things (IoT), the use of micro-segments of data to serve the consumer better is becoming more and more important. Doing nothing in the field of advanced analytics is not an option.

There has been a GDPR ‘understanding curve’ too, which creates challenges when considering legitimate interest as a basis for data processing in complex, less highly-integrated data environments, and which has introduced the concept of dynamic consumer consent.

For example the use of social media for understanding customer behaviour is now much more restricted. Data lineage—knowing the lifecycle and origins of data, working with trusted partners, trusted platforms—is a vital component of any effective data architecture. Insurance core systems that used to be slow and heavy, now need to be lighter and open, with built-in adaptability for an uncertain future.
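As a rough sketch of what tracking lineage can mean in code (the field names, steps and source systems below are illustrative assumptions only), each transformation appends a record of where the data came from and what was done to it:

```python
# Minimal sketch of data lineage tracking: every processing step appends a record
# of the upstream source and a timestamp. All names here are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class LineageEvent:
    step: str        # e.g. "ingest", "deduplicate", "enrich"
    source: str      # upstream system or trusted partner that supplied the data
    timestamp: str


@dataclass
class Dataset:
    name: str
    rows: List[dict]
    lineage: List[LineageEvent] = field(default_factory=list)

    def record(self, step: str, source: str) -> None:
        """Append a lineage event each time the dataset is touched."""
        self.lineage.append(
            LineageEvent(step, source, datetime.now(timezone.utc).isoformat())
        )


quotes = Dataset("motor_quotes", rows=[{"quote_id": 1}])
quotes.record("ingest", source="aggregator_feed")
quotes.record("enrich", source="claims_history_platform")

for event in quotes.lineage:
    print(event.step, "<-", event.source, "@", event.timestamp)
```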

Insurance prices at an individual level can only come from data that is held within an individual insurer or platform. There have been some examples in the popular media of how AI can be manipulated, for example in certain markets around the world where the nature of regulation and privacy control is vastly different, but insurance is on the right road.

In many ways AI is now at the point where web design was 15 years ago, and nobody talks about web design any more. It is just a solution, embedded into everything we do in the online world.

Follow these links for information on the Global Telematics Exchange or the LexisNexis Risk Solutions website for US insurance or UK insurance to find out more about how we support insurers.
