Beyond the Buzz: Do You Have GenAI FOMO?

September 20, 2023

Alex Sun

CEO, Enlyte

Tech leaders are feeling pressured to integrate generative artificial intelligence (GenAI) into their programs, but what do we really need to understand about leveraging AI effectively and responsibly for P&C?

Growing mainstream use of GenAI tools like ChatGPT has supercharged the desire to adopt this technology in every industry, including P&C. GenAI enables interfaces that let users engage with AI through natural language, dramatically improving usability.

GenAI has also introduced methods that allow some models to advance at a more rapid rate, prompting the technology community to invest significantly in more powerful computing solutions. All of this creates a powerful, virtuous cycle of advancement. As a result, tech leaders are being pressed now more than ever to deliver programs that integrate GenAI into their claims workflows.

“If businesses don't stay on top of the latest trends and start delivering value immediately, they risk alienating customers and falling behind competitors. That's why it's crucial to pay close attention to the rise of generative AI but also ensure any implementation is done responsibly.”

—Forbes, 2023

Innovation with Industry Expertise

While we at Enlyte are equally excited and encouraged by the opportunities GenAI offers, as industry veterans it is also our responsibility to ensure that AI FOMO does not drive new technology implementations without proper due diligence. With the influx of FinTech startups entering the industry and promising to automate claims overnight, it’s easy for companies to take shortcuts in implementing AI and risk damage to carriers and their customers. Often, these companies lack the intricate knowledge and experience in claims management to understand its complexity, or the long-standing partnerships needed for connectivity across the entire workers’ comp or auto claims landscape. Whether implementing AI algorithms or large language model (LLM) tools like ChatGPT, rushing ahead without expertise can easily compromise claim accuracy, data security and confidentiality.

Cigna, for example, currently faces a class action lawsuit over charges that it illegally used an AI algorithm to deny hundreds of thousands of claims without a physician’s review.1 While the case is likely to help shape the rapidly changing legal landscape of AI, it also illustrates why giving AI too much authority right away may not be the best first step. New technology, we believe, shouldn’t replace human judgment where it’s needed; instead, it should augment expertise and make it easier to prioritize human experience and intervention.

This is the premise behind the development we have done in our Auto Physical Damage team with the Mitchell Intelligent Solutions portfolio. Mitchell Intelligent Review, for example, combines cloud computing, Mitchell-authored vehicle data and the company’s machine-learning and computer-vision models to scan photos of collision damage and automatically evaluate the labor operations entered on the estimate. The AI then flags problematic estimates that require a closer look by a trained appraiser. Automating this traditionally manual, time-consuming and resource-intensive task is intended to help carriers increase estimate accuracy, ensure quality and pinpoint workflows or areas of the business in need of improvement. It also gives insurers the ability to review every estimate written and then assists them in identifying the specific appraisals they should focus on to accelerate settlement times for policyholders. We took great care in refining our proprietary AI models, and our team of data scientists continues to leverage Enlyte’s comprehensive data—along with a human-machine feedback loop—to expand the type and accuracy of the predictions made.

When it comes to the hype around LLMs and GenAI, casualty industry professionals need to be even more diligent in using this technology, especially when it comes to privacy concerns in claims processes. You wouldn’t want to run a claimant’s medical or personally identifiable information (PII) through a public system like ChatGPT without knowing where the information is going and who is securing it. Ethical questions about how and where to implement these technologies can only be answered by those with sufficient experience and expertise in the industry to know where the opportunities lie, while proper usage must be trusted to those with appropriate security and technology infrastructure.

Opportunities Abound

The good news is there are many practical application opportunities for AI in our industry. These include customer service, triage, fraud detection, and property damage and bodily injury applications, just to name a few. At Enlyte, we are looking at these areas and others, combining our experience in auto and workers’ comp claims with our proficiency in advanced technologies, to provide guidance on where these technologies make the most sense across auto and casualty claims.

AI has been, and continues to be, a powerful opportunity to leverage data (be it medical billing data, repair information, photos of damaged vehicles or images from litigation demands) to improve task automation and enable advanced decision support for claims professionals as they seek to help individuals return to work, regain optimal health or get back on the road.

As technology leaders, we’re just as excited about the potential of LLM and GenAI technology. As with any new product or service, however, solutions need to be developed by those with vast industry knowledge, tailored to users’ specific needs, and held to the high standards our industry requires for data integrity, confidentiality and the trust our customers expect. Meeting these demands won’t be easy, but I believe that with the right mix of experience and innovation, the opportunity of GenAI is even better than the buzz.