Written by: Trevor Lloyd-Jones, Content Manager, LexisNexis Risk Solutions

In terms of where automation can add value – and reduce risk – it’s helpful to consider that around 80% of a typical data scientist’s time is spent on data assembly, or what we could loosely call data preparation: collecting data, cleaning it and preparing it for modelling, including building training sets and working with other stakeholders. Another 9% or so is spent mining data for patterns. Only a relatively small amount of time, around 4%, is spent refining algorithms, which is what most people would think of as the actual value creation.

This analysis comes from the annual survey by Figure Eight, and it serves to show that on the road to transformation, embedded AI (artificial intelligence) and better algorithms for insurance, a great deal will depend on having access to the right data, and on filtering and modifying it so that it makes sense for the model it is being applied to.

The main point is that if you want to get the most out of your data, it should be clean, filtered, normalized and tested across different sources or technologies, to ensure it relates to real risks out there in the real world. In the context of data science and machine learning, data cleaning means filtering and modifying data such that it is easier to explore, understand, and model.
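As an illustration of what cleaning, filtering and normalising can mean in practice, here is a minimal Python sketch. The field names (`policy_id`, `claim_amount`) are hypothetical, chosen only to make the example concrete:

```python
# Minimal illustration of data cleaning before modelling:
# filter out incomplete records, de-duplicate, and normalise a numeric field.
# Field names are hypothetical examples, not from any real insurer's schema.

def clean_records(records):
    """Drop incomplete rows, de-duplicate on policy_id, min-max normalise."""
    # Keep only records with the required fields present
    complete = [r for r in records
                if r.get("policy_id") and r.get("claim_amount") is not None]

    # De-duplicate on policy_id, keeping the first occurrence
    seen, unique = set(), []
    for r in complete:
        if r["policy_id"] not in seen:
            seen.add(r["policy_id"])
            unique.append(r)

    # Min-max normalise claim_amount so models see a consistent [0, 1] scale
    amounts = [r["claim_amount"] for r in unique]
    lo, hi = min(amounts), max(amounts)
    span = (hi - lo) or 1.0  # avoid division by zero on constant data
    for r in unique:
        r["claim_norm"] = (r["claim_amount"] - lo) / span
    return unique
```

In a production pipeline these steps would typically run in a data-frame or ETL tool rather than plain Python, but the logic – filter, de-duplicate, normalise – is the same.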

Machine learning can help where there are many small opportunities in a fast-moving environment; anything larger or slower could, by definition, be solved by people. This area of many small, incremental benefits in quoting, underwriting or claims processes is currently where many insurers are looking to add value: effectively creating value for the customer where currently there is none, by identifying something that can be optimised but where the individual value or payback is small.

Challenges of alignment, skills and putting models into production

Success with machine learning and R&D for insurance is in bringing it all together, the millions of fast-moving records, into real-world scenarios. Having a model doesn’t mean anything unless you can operationalise it into a benefit for the consumer, such as better pricing and risk selection, faster claims settlement, or knowing when and how that customer wants to engage.

Speaking at the Big Data World Conference, Pardeep Bassi, Head of Data Science for LV=, described how data science for the insurer operates as a central team, invited into different areas of the business, as a link between different functions. One important example could be machine learning for driving speed and efficiency in claims settlement.

“First, there’s a need to assess how the existing [technology] structure allows for integration,” commented Pardeep Bassi at the conference.

“It could be using batch production and regular scoring….real-time deployments are slightly more complicated. The initial approach is not to come up with a whole new system. Identify quick wins with an eye towards putting them into production. Having a pricing and rating engine in general insurance is absolutely fundamental,” he added.

“One of the key issues we have as a new team is interfacing with the existing analytics team. Even though as data scientists we don’t replace what they do, there are challenges when integrating the modelling into the workflow….Some barriers can come from misunderstandings of what machine learning can actually do, or fears over loss of control.”

Insurance currently in the ‘enhance’ phase

Another common challenge with machine learning discussed at the Big Data World conference relates to the required cultural changes, and the many different systems and teams (and viewpoints) that are now involved for a typical insurance company if we think of IT, underwriting, technology, marketing, customer engagement, analytics and claims teams.

Insurers are working to solve these challenges of alignment through change management, involving initiatives such as cross-function working groups and dial-in monthly roadshows where groups of stakeholders can discuss automation and machine learning projects.

Considering the three stages of digital transformation with machine learning (jump start, enhance and transform), UK insurance providers are firmly in the enhance phase. That’s to say, they’re able to deliver some real enhancements to processes, though they’re not yet able to completely redefine or automate a process end-to-end.

In one example, one of the top five UK motor insurers showed at Big Data World that by using data to model the process of determining liability in a motor claim, and streaming cases into fault, no-fault or partial-fault categories, it had been able to reduce the average time of claims settlement by a day. This also had the effect of raising customer satisfaction as measured by the NPS rating, or Net Promoter Score.
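NPS itself has a simple, standard definition: the percentage of promoters (survey scores of 9 to 10) minus the percentage of detractors (scores of 0 to 6). A minimal sketch:

```python
def net_promoter_score(scores):
    """NPS from 0-10 survey scores: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)
```

Because NPS is a difference of percentages, it ranges from -100 (all detractors) to +100 (all promoters), which is why even a one-day improvement in settlement time can move the score visibly.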

But key questions remain, such as how to measure the value of a particular use case under consideration.

Model with real-world data: it is often enough to take 10% or 15% of the whole dataset, such as claims, then use A/B testing before deploying a potentially larger implementation. Cloud platforms have now become an essential ingredient of this way of working: build the model on a segment of real data history, but keep part of the data separate from the data used to train the model. Test and build in a small environment, learn quickly and move on.
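The sampling-and-holdout workflow described above can be sketched as follows; the function name and fraction parameters are illustrative, not from any insurer’s stack:

```python
import random

def sample_and_split(records, sample_frac=0.15, holdout_frac=0.2, seed=42):
    """Take a small sample of the full dataset (e.g. 10-15% of claims),
    then split it into a modelling set and a holdout kept apart from training."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    sample = rng.sample(records, int(len(records) * sample_frac))
    cut = int(len(sample) * (1 - holdout_frac))
    return sample[:cut], sample[cut:]  # (training set, holdout set)
```

The holdout set never touches training, so scoring against it gives an honest estimate of how the model will behave on unseen cases before any wider A/B test or rollout.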

There’s a need too to challenge the assumptions underpinning the model and to check for implicit bias. Consider random shuffling: shuffle a small section of the data and observe what happens. And beware of abandoning caution at the data assembly stage.
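One simple way to run the shuffling check described above is to randomly permute a single feature and re-score the model; the helper below is an illustrative sketch, not a standard library routine:

```python
import random

def shuffled_feature_check(rows, feature, score_fn, seed=0):
    """Randomly shuffle one feature column and re-score the model.
    A large drop in score suggests the feature carries genuine signal;
    little change suggests the model relies on something else (or on noise)."""
    baseline = score_fn(rows)
    rng = random.Random(seed)
    values = [r[feature] for r in rows]
    rng.shuffle(values)
    # Rebuild rows with the shuffled values in place of the original feature
    shuffled = [dict(r, **{feature: v}) for r, v in zip(rows, values)]
    return baseline, score_fn(shuffled)
```

This is essentially the idea behind permutation importance: comparing the baseline score against the shuffled score tells you how much the model actually depends on that feature.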

Data built for past priorities and past technologies

Janthana Kaenprakhamroy, CEO and founder of Tapoly, the on-demand insurance provider for SMEs, commented in Big Data World that the biggest obstacles to implementing AI in insurance are the human issues.

She said the history of insurance has evolved to the current point, where there is a shortage of adequate, good-quality data, the data having been built for past priorities and past technologies. But better, real-time data is becoming available to help insurers underwrite more accurately. This in turn will stimulate demand for flexible and on-demand insurance models, which are what consumers in this area are really looking for.

“There are lessons to be learned from the early uses of AI,” commented Janthana Kaenprakhamroy. “The regulation also creates challenges,” she added. “For example, if we think about social media for understanding customer behaviour, which is now more restricted….or considering that pricing on an individual level can only come from data that is held within an individual insurer or platform.”

Katie King, CEO, AI in Marketing, and member of the UK’s All Party Parliamentary Group on AI (APPG AI), commented that we are still at least two years away from creating customer centricity at scale using AI. There are many different, conflicting popular views of artificial intelligence, she said, and the media is usually not helpful, producing scare-mongering stories that show little knowledge of the issues.

“From my perspective, we are still in the early stages of looking at the customer journey [through the prism of AI] in all of its many ways,” commented Katie King.

“We are currently in the hype cycle,” commented Jeffrey Fung, Senior Data Scientist, Domestic and General, the UK’s leading specialist warranty provider.

“I see us being still decades away from implementing something that can learn by itself from unstructured data….The way you can use AI is in making the small changes, the incremental steps like [optimising] conversion rates and figuring out when the customer is ready to engage and communicate.”

The future is going to be about more than the hype. It’s going to build towards a new consumption model, with a shift towards much greater self-selection and a faster, more seamless way of buying insurance cover. We can expect to see more and more innovative use cases, but at the same time automation increases the risk that bad data is used to perpetuate bad decisions.

So we need to be clear that automation actually increases the need for procedural rigour, business acumen and technical expertise: a forensic control mechanism of sorts. Automation is changing the face of machine learning and speeding up model building, but speed does not always deliver simplicity.

Follow the link to the LexisNexis Risk Solutions website to find out more about how we support UK insurance.
