LexisNexis Risk Solutions

September 8, 2020

It’s a common question: how are artificial intelligence (AI) and machine learning (ML) being applied in the insurance industry to create differentiation in the market? But as these technologies become more powerful and pervasive across a whole range of processes, perhaps the better question is: where isn’t AI being applied, or where shouldn’t it be?

For the pricing and underwriting side of the insurance business, AI is a way of pricing better: knowing more about a policyholder and using an algorithm to learn and extract the most relevant information.

The basic concept is quite simple: build, test and validate a model that is predictive of the future. AI-based modelling enables the development of high-quality, valid risk and pricing models through repeated cycles of machine learning and testing.

In the post-pandemic period, a lot of attention has fallen on insurance pricing and on how to use data to understand new trends in consumer sentiment, loyalty, attitudes to business insurance protection, and trading down or up on certain types of product. Micro-policies, usage-based insurance and motor telematics products have also seen renewed popularity.

Overall, there has been a strong shift towards greater automation and digital fulfilment of insurance to the end customer. This places demands on data, machine learning cycles and customer insight, stretching the traditional structure and culture of the insurance organisation.

Some insurance providers have been lowering future premiums, while others have been returning a portion of current premiums to customers. It is a challenging period, with insurers rolling out new sales and pricing strategies that harness elements of AI.

AI and data infrastructure are key for the future

A recent study by McKinsey & Co* found that the global leaders in pricing innovation (defined as the top 20% of insurers measured by profitable growth) tend to invest in data infrastructure to better harness internal data and, perhaps more importantly, data from external sources. This group of ‘sophisticated’ or highly digitised insurance providers on average evaluates more than 30 new external data sources each year, then selects two to four of them to develop new features to embed in their pricing and rating models.
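The evaluate-many, select-few step can be pictured as a simple ranking exercise. The candidate source names and their scores below are invented for illustration; in practice each score would come from testing the source's features against observed losses.

```python
# Hypothetical predictive-value scores for candidate external data
# sources (e.g. correlation of the source's features with loss outcomes).
# All names and numbers are made up for this sketch.
candidate_sources = {
    "credit_attributes": 0.41,
    "vehicle_history": 0.35,
    "weather_events": 0.12,
    "public_records": 0.28,
    "footfall_data": 0.05,
}

def select_sources(scores, k=3):
    """Keep the k highest-scoring candidate sources for feature development."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

print(select_sources(candidate_sources))
# ['credit_attributes', 'vehicle_history', 'public_records']
```

The point is the funnel shape: many sources go in, a handful with demonstrable predictive value come out and feed the rating models.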

With a strong data infrastructure in place, those insurance leaders then invest in pricing technology and self-learning algorithms. These tools help to deliver pricing insight, generating sharper quotes based on greater granularity of risk segments, whilst still maintaining profitability and avoiding adverse risk selection. Every quote generates an additional data point. The details from each transaction are looped back into the algorithms.
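The feedback loop above, where every transaction becomes a new data point for the algorithm, can be sketched as an online update. The incremental-mean update and the premium loading formula here are assumptions for illustration, not a description of any specific insurer's self-learning system.

```python
class SegmentFrequencyModel:
    """Running claim-frequency estimate per risk segment,
    updated one observation at a time (the 'loop back' step)."""

    def __init__(self):
        self.counts = {}
        self.freq = {}

    def observe(self, segment, had_claim):
        # Incremental mean: new = old + (x - old) / n,
        # so each transaction refines the estimate without refitting.
        n = self.counts.get(segment, 0) + 1
        old = self.freq.get(segment, 0.0)
        self.counts[segment] = n
        self.freq[segment] = old + (had_claim - old) / n

    def quote(self, segment, base_premium=100.0, loading=2.0):
        # Illustrative pricing rule: premium grows with estimated frequency.
        return base_premium * (1.0 + loading * self.freq.get(segment, 0.0))

model = SegmentFrequencyModel()
for seg, claim in [("fleet", 0), ("fleet", 1), ("fleet", 0), ("fleet", 1)]:
    model.observe(seg, claim)

print(model.freq["fleet"])   # estimated claim frequency, about 0.5 here
print(model.quote("fleet"))  # sharper quote reflecting the latest data
```

Real self-learning pricing engines are vastly more sophisticated, but the principle is the same: quotes and outcomes flow back in, and risk segmentation gets progressively more granular.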

A machine learning algorithm can be deployed in any aspect of the insurance continuum – from compliance, claims and customer experience, automating processes and driving down costs, through HR resourcing and investment decisions, to capital reserves management and risk management. An algorithm can also show where a company should invest in its IT infrastructure, or model risk scenarios for a particular segment of customers an underwriter wants to query.

But it’s in the area of pricing and underwriting that the data exploration process really goes to the heartbeat of the insurance business.

Watch this video, in which our Director of Statistical Modelling for UK&I and International Markets, Alan O’Loughlin, talks about breaking down the closed ‘black box’ stereotype of an AI or machine learning algorithm – the notion of a small actuarial team working in isolation – into something that can be applied and understood across the whole insurance business.

*McKinsey & Co report: ‘Climbing the Power Curve, How to Win in Insurance’

Follow the link to the LexisNexis Risk Solutions website to find out more about how we support insurance providers.