GenAI: Uses and Risks
At first glance, generative AI sounds like something out of science fiction: a seemingly all-knowing computer that can answer any question it is posed. It can do everything from writing code to composing a poem to answering trivia questions. Some even worry that it will one day replace white-collar jobs. It’s clear that generative AI is a powerful new technology. But new technologies also pose challenges that actuaries should consider before applying them in a wider context.
Taking a step back, what is generative AI? Simply put, generative AI (also referred to as GenAI) is a type of artificial intelligence that creates new content, such as text or images, in response to user prompts. Some examples include GitHub Copilot, ChatGPT, Google Gemini and DALL-E. These tools use sophisticated machine learning models (including large language models, or LLMs) that are fit to very large training datasets. The quality and size of this training data drive the model’s ability to produce sensible predictions for inputs it has never seen, and they give GenAI tools their characteristic versatility.
There are many potential applications of GenAI in an actuarial context. Some of the most common include coding assistance, analysis of unstructured data and summarization of content. The value proposition of GenAI is self-evident: if it can be trusted to quickly handle rote tasks, the actuary is freed to focus on more abstract work and deliver a better work product.
One of the most popular applications of GenAI is as a coding assistant. GenAI can produce working code in a matter of seconds where an intermediate coder might have taken hours to accomplish the same task. This can be immensely helpful, particularly when working in an unfamiliar programming language or on an unfamiliar task. However, the actuary should take care that the prompt explains the intended function of the code clearly and unambiguously. The tool might “misunderstand” the intended calculation and provide code that does not actually perform the desired task. GenAI may also produce code written for a different version of a language or library than the user has installed, which can cause incompatibility issues. Users should verify all calculations and make certain that the code delivered by GenAI runs on their systems and produces the intended result.
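One lightweight way to put this verification into practice is to check generated code against a value worked out by hand before relying on it. The sketch below assumes a hypothetical GenAI-produced function for the present value of an annuity-immediate; the point is the verification step, not the function itself.

```python
# Suppose a GenAI assistant produced this function (hypothetical output --
# the important part is that we do not trust it until we test it).
def annuity_pv(payment: float, rate: float, periods: int) -> float:
    """Present value of an annuity-immediate: level payments at each period end."""
    return payment * (1 - (1 + rate) ** -periods) / rate

# Verify against a hand calculation: a 3-year annuity of 100 at 5% interest
# is simply the sum of the three discounted payments.
expected = sum(100 / 1.05 ** t for t in range(1, 4))
assert abs(annuity_pv(100, 0.05, 3) - expected) < 1e-9
print(round(annuity_pv(100, 0.05, 3), 2))  # 272.32
```

A handful of such spot checks against known answers catches both “misunderstood” calculations and version-related breakage before the code reaches production.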
An important point to keep in mind with respect to coding applications is that the usefulness of GenAI is a function of complexity. It can write a script fairly well, but not an entire code base; as the complexity of the code increases, the usefulness of GenAI decreases. In these cases, it’s important for the coder to understand how to leverage GenAI to its full potential. This means knowing how to construct an appropriate prompt, understanding the output provided by the tool, and then applying that output to the specific use case. The user needs to know enough about coding to verify that the generated code is appropriate for the intended use. Coders should never become dependent on GenAI, because there will come a point where the tool being constructed gets too complicated for the AI to deliver value. At that point, the coder must step in and apply their own expertise.
With respect to utilizing GenAI in the model-building process, it’s critical to understand GenAI’s limitations and biases. LLMs are a black box: it is generally not possible to explain why a given input produces a given output. Therefore, when GenAI is applied in a model-building context, the actuary must take steps to ensure the resulting models do not produce unfairly biased or discriminatory results. Actuaries must also comply with professional standards, which require them to understand the extent to which external models influence the results of their analyses. Blindly following the output of a GenAI model trained on data the actuary has never seen could be ethically problematic.
One very important consideration when using GenAI is corporate intellectual property and the protection of sensitive data. Users must exercise great caution not to divulge trade secrets, proprietary information or sensitive data such as personally identifiable information (PII) in an open setting. Many GenAI tools retain data from user prompts for use as training data in future iterations of their models, so confidential information entered into a prompt could end up in a future model’s training set. Most importantly, data belonging to the policyholder must be protected from release. Users of GenAI tools should be transparent about their desire to use these tools for business purposes and work with their management and IT stakeholders to develop a compliant way of doing so.
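One practical safeguard is to scrub obvious PII from text before it ever leaves the firm. The sketch below is a minimal illustration only: the SSN and email patterns are simplified assumptions, and a real deployment would use a vetted, company-approved redaction tool rather than two regular expressions.

```python
import re

# Illustrative patterns only -- real PII detection is much harder than this.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII with placeholder tokens before prompting a GenAI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the claim for John, SSN 123-45-6789, email jdoe@example.com."
print(redact(prompt))
# Summarize the claim for John, SSN [SSN], email [EMAIL].
```

Even with such scrubbing in place, the appropriate control is organizational: management and IT should decide which tools may receive which data.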
An interesting potential application of GenAI is analyzing unstructured data. Particularly in operations or claims contexts, there is potential for GenAI to deliver meaningful insight into unstructured data collected by insurance companies. For example, notes made by adjusters over the course of a claim could be used for fraud detection purposes. GenAI could deliver meaningful suggestions on how to improve the efficiency of operations or detect ways to optimize underwriting activities. GenAI tools can ingest large amounts of data and deliver bullet-point summaries of operational activities. This has the potential to improve line of sight across the organization and enable better collaboration.
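The fraud-screening example above can be sketched as a simple prompt-building step. Here `call_genai` is a stub standing in for whatever approved GenAI service an organization actually uses; the real call, its parameters and its output format are assumptions and will differ in practice.

```python
def call_genai(prompt: str) -> str:
    # Placeholder for the organization's approved GenAI service.
    return "<model response would appear here>"

def build_fraud_screen_prompt(claim_id: str, notes: list) -> str:
    """Assemble unstructured adjuster notes into a single review prompt."""
    joined = "\n".join(f"- {note}" for note in notes)
    return (
        f"Review the adjuster notes for claim {claim_id} and list any "
        f"statements that are inconsistent with each other:\n{joined}"
    )

notes = [
    "Insured reports vehicle stolen from driveway overnight.",
    "Neighbor states vehicle was not at the residence that week.",
]
prompt = build_fraud_screen_prompt("CLM-001", notes)
response = call_genai(prompt)  # output must still be reviewed by a human
```

The design point is that the model only flags candidate inconsistencies; a human reviewer, not the tool, makes any determination about fraud.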
Another interesting application of GenAI is summarizing content. On a more administrative level, GenAI can summarize emails or take meeting minutes. It could conceivably take a 1,000+ page rate filing and provide a bullet-point summary of the actions described in the filing. This reduces the administrative burden on users and unlocks more time for other important work.
All that said, GenAI tools are never perfect. Sometimes they fabricate their own “facts,” a failure known as a “hallucination.” There have been several recent examples in the media of lawyers caught citing fabricated cases that did not exist because of their blind reliance on GenAI. In all cases, actuaries should ensure that the output provided by GenAI makes sense and comports with their actuarial judgement. We should never assume that the output of GenAI is correct: trust but verify.
In closing, it’s obvious that GenAI is a very powerful new tool. Actuaries should consider whether they can take advantage of it to increase their productivity. At the same time, there are important practical and ethical considerations when using these tools. Never rely on GenAI as though it were an expert. We as actuaries should apply our own professional judgement to the output of these tools and react accordingly. In that way, GenAI is just another tool in the actuary’s toolbox.