Speaking at the Insurance Analytics and AI Europe conference in London recently, and listening to insurers and data scientists discuss artificial intelligence and advanced analytics, I noticed some key issues: the skill sets that are missing, how we distinguish automation from true AI, and whether to take a wide vision of the technology or focus on solving micro-problems and micro-steps in the insurance process.

There’s a sense that all of the discussion around artificial intelligence adds up to greater digitisation, that AI and digital transformation in insurance are inter-linked, and that both increase the demand for making data openly available at different points in the ecosystem. There’s still a major piece of work to be done in the industry on data quality, data standardisation and inter-connectivity, and on deciding precisely what gives data its value. We need to get on top of the whole data quality issue.

We need ways to define the governance and philosophy of data use and how we structure the data, with data protection and privacy in mind. What do we really mean by meaningful data or meaningful insight?

There’s a need to consider what new business value we can draw from the data, because in terms of individual behaviour and individual consumer needs, we are just scratching the surface of the available data. With the data we collect, what are we giving back to the consumer? And then of course we need to think about how we replace legacy technology, knowing that past systems may not always provide data in the structures the future requires.

In my presentation at the Insurance Analytics and AI Europe meeting I covered how actuaries can use data science and new modelling techniques to process vast amounts of new data sources, in order to re-imagine risk profiles.

I showed how adaptation is key: actuaries need to adapt quickly to new innovations and data sources to increase the profitability of the insurance organisation. Increasingly, as analytics is applied across the whole spectrum of insurance, there’s a need to think about how data is stored and accessed to make it add value.

At LexisNexis Risk Solutions we have six petabytes of data across the globe. In the UK and Ireland alone we serve data into 200 million insurance quotes a day, across Motor, Home and Commercial lines of business. Our LexID and linking technology is able to reconcile all our data to an individual, not just through business rules or fuzzy matching but also through probabilistic and specificity matching, meaning we can ensure a quote is returned accurately for that person. Our data helps to identify the quote behaviour of a customer, through to their policy behaviour, and into claims. If necessary, we have the tools and capabilities to segment different risk profiles, and to visualise and track individual policy risk and the accumulation of risk across different geographical zones.
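To illustrate the idea (though this is not our actual LexID implementation), here is a minimal sketch of probabilistic record matching: each field contributes positive or negative evidence that two records describe the same person, with fuzzy comparison absorbing typos in names. The fields, weights and thresholds are invented for the example.

```python
# Simplified, Fellegi-Sunter-style probabilistic record matching.
# Illustrative only: field names, weights and thresholds are
# assumptions for this sketch, not actual LexID linking logic.
from difflib import SequenceMatcher

# Evidence weights per field: rarer identifiers (date of birth)
# carry more weight in agreement, and cost more in disagreement.
WEIGHTS = {"name": 4.0, "dob": 8.0, "postcode": 3.0}
DISAGREE = {"name": -2.0, "dob": -6.0, "postcode": -1.5}

def field_agrees(field, a, b):
    if field == "name":
        # Fuzzy agreement: tolerate typos and spelling variants.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() > 0.8
    return a == b  # exact match for structured fields

def match_score(rec_a, rec_b):
    """Sum per-field evidence; higher means more likely the same person."""
    return sum(
        WEIGHTS[f] if field_agrees(f, rec_a[f], rec_b[f]) else DISAGREE[f]
        for f in WEIGHTS
    )

quote = {"name": "Jon Smyth", "dob": "1980-02-14", "postcode": "SW1A 1AA"}
known = {"name": "John Smith", "dob": "1980-02-14", "postcode": "SW1A 1AA"}

score = match_score(quote, known)
print(f"score {score:.1f} ->", "same person" if score > 8.0 else "no link")
```

The appeal of weight-based matching over simple business rules is that strong evidence in one field, such as a matching date of birth, can outweigh a misspelt name.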

Increasingly, the challenge of advanced analytics in insurance is ensuring that the right data resources and analytics are applied, and applied correctly.

Modelling and discovering new risk attributes is one part of this, and a very satisfying one. Differentiating risk and modelling risk selection is becoming more advanced. It is like picking a shade of green to paint a room: the human eye can distinguish millions of hue variations, each a little closer to the perfect shade you have in mind. With the greater data choices, modelling techniques and tools now available, you get closer and closer to pricing a book of risk accurately.

Being competitive in the risk business is all about identifying, explaining and segmenting pools of risk that are currently unknown. In practice that means:

  • Examining an insurer’s current total burn/loss cost
  • Ranking their current model predictions
  • Adding further attributes or rating factors, which shows smoother degradation and segmentation of risk (sketched in code below).

[Figure: new model versus old model]
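To make the new-model-versus-old-model comparison concrete, here is a minimal sketch of the workflow above: rank each model’s predictions into deciles and compare actual burn cost across the bands. The data is entirely synthetic and the added attribute is hypothetical; a real exercise would use an insurer’s own experience data.

```python
# Compare an old and a new pricing model by ranking each model's
# predictions into deciles and examining actual burn cost per band.
# All data here is synthetic, purely to illustrate the workflow.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000

# Synthetic "true" risk plus a new attribute the old model ignores.
base_risk = rng.gamma(2.0, 50.0, n)
new_attr = rng.normal(size=n)
actual_loss = base_risk * np.exp(0.3 * new_attr) * rng.lognormal(0.0, 0.5, n)

old_pred = base_risk                            # old model: base risk only
new_pred = base_risk * np.exp(0.3 * new_attr)   # new model: adds the factor

df = pd.DataFrame({"actual": actual_loss, "old": old_pred, "new": new_pred})

for model in ("old", "new"):
    # Decile bands by predicted risk, lowest (0) to highest (9).
    bands = pd.qcut(df[model].rank(method="first"), 10, labels=False)
    lift = df.groupby(bands)["actual"].mean()
    spread = lift.iloc[-1] / lift.iloc[0]       # worst band vs best band
    print(f"{model} model: top/bottom decile burn-cost ratio = {spread:.1f}")
```

A higher top-to-bottom ratio, with burn cost degrading smoothly across the bands, is the signature of better risk segmentation.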

Taking risk identification to the next level with better segmentation leads to better scoring in the underwriting and customer-targeting process. A large part of what our Analytics and Modelling teams at LexisNexis® do is about reducing residual errors (the difference between predicted and actual loss): explaining some previously unexplainable risk and separating it from genuine statistical noise, using ever-growing sources of data.
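One way to picture that residual work, as a sketch with synthetic data and a hypothetical enrichment attribute: take the gap between predicted and actual losses, and test whether a candidate attribute explains any of it or whether what remains really is noise.

```python
# Sketch of residual analysis: does a candidate attribute explain
# any of the current model's residual error, or is it just noise?
# Synthetic data; the candidate attribute is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

predicted = rng.gamma(2.0, 100.0, n)   # current model's predicted losses
candidate = rng.normal(size=n)         # new enrichment attribute
noise = rng.normal(0.0, 30.0, n)

# In this toy world, actual losses secretly depend on the attribute.
actual = predicted + 25.0 * candidate + noise

residual = actual - predicted          # the predicted-vs-actual gap
corr = np.corrcoef(residual, candidate)[0, 1]

print(f"residual/attribute correlation: {corr:.2f}")
# A correlation well away from zero suggests the attribute carries
# signal worth testing as a rating factor; near zero suggests the
# remaining error is statistical noise.
```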

We go to great lengths to investigate and prove the validity of our models and, in doing so, help our customers to find additional competitiveness and underwriting profit.

We’ve learnt that innovation can come from enriching pre-existing data sources and examining them through a different lens. With the contributory databases we support for the industry, and those we are building, we are able to create new features and new risk rating factors.

From a governance perspective, like other players in the market, we are using increasingly advanced rating models. But any rating model is only as good as the data behind it. In the realm of AI, we advise customers never to go “full cog”: never to deploy an end-to-end dynamic pricing model without human (actuary) intervention.

There’s a lot of optimism about, and potential in, fully automated, end-to-end, dynamic deep-learning pricing models and other AI systems. But today we are at a point where we need that human interface to take care of the checks and balances of governance, and to help pick palatable rating factors that are acceptable to the market.

For example, when looking at policy shopping behaviours as rating factors, we identified a number of segments: loyal customers were standard risk; prudent shoppers who switched every couple of years were the lowest risk; and people who switch every year looking for the cheapest price, regardless of coverage or excess, were the highest-risk segment. However, we can’t go to market with an attribute that might penalise high shopping activity while the government is advising people to ‘shop around’ as premiums rise. So there is a place for advanced analytics, but you still need human intervention across the full model, to see the big picture.
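Purely as an illustration of that segmentation (the boundaries below are invented for the example, not production rating logic), the bucketing might be sketched like this:

```python
# Illustrative bucketing of policy-shopping behaviour into risk
# segments, roughly as described above. Boundaries are assumptions
# made for this sketch, not actual rating rules.
def shopping_segment(switches_in_5_years: int) -> str:
    if switches_in_5_years == 0:
        return "loyal customer: standard risk"
    if switches_in_5_years <= 2:   # switches every couple of years
        return "prudent shopper: lowest risk"
    return "annual switcher: highest risk"

for n in (0, 2, 5):
    print(n, "switches:", shopping_segment(n))
```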

There’s a need to trust the owner of the business domain. Don’t go ahead and build something based on inadequate data.

AI has something of a confusing history to date. There is a lot of optimism if we think about the potential for new vehicle-derived data sources from the connected car, with ADAS (advanced driver assistance systems) and the data normalisation work we at LexisNexis® are doing across all these different devices and software sources. In all of this we need to keep in mind the possibility of human bias, or of importing bias from systems that were built for a different era.

The early history of AI, with computers like Deep Blue playing chess and programs proving mathematical theorems, actually progressed much faster than predicted; it turned out to be easier than most experts thought. It was believed there would be a logical progression from that to human tasks such as communication, abstract reasoning, language, speech and social culture, for predicting how consumers interact with products. In fact, human-level AI is still a far-off goal, and in human interactions, with bots and RPA in insurance or in business in general, it is proving very difficult to embed machines with human-like values.

I suspect that the point we are at now with AI is something like the world of physics 20 years before Isaac Newton came along and changed everything.

We still need a long series of breakthroughs in AI technology, and it will take a very new way of thinking and a new business culture. For now we are still finding out: what are the right questions to ask the machines?

Governance is applying pressure to use advanced analytics in certain ways, and there are still important questions to be answered. Certainly in the EU we don’t have special governance for deploying a regression model, so why would we need to create special governance for AI? And if AI is going to become more than a series of specific use cases or micro-solutions, how, as an industry, are we going to make that happen? Change is the only constant.

Follow the link to the LexisNexis Risk Solutions website to find out more about how we support insurers.
