Insurance companies using AI for underwriting and due diligence amid cyber threats
The insurance industry is adapting to the rise of generative artificial intelligence (AI) by incorporating it into underwriting processes, even as it contends with looming cyber threats that may themselves be powered by AI.
Assessing the risk of prospective clients looking to buy a policy has traditionally consumed a substantial share of underwriters' time with administrative tasks. A 2022 report by Accenture found that underwriters spent about 40% of their time on such tasks, representing an efficiency loss of up to $160 billion over five years across the insurance industry.
Using AI-informed automation in the underwriting workflow can accelerate processing times and yield insights that underwriters might not be able to track down manually.
“We want to make sure we have good clients on the books,” Van Carlson, the founder and CEO of SRA 831(b) Admin, a strategic risk alternatives company, which offers plans to small- and medium-sized businesses seeking to self-insure their risk, told FOX Business. “We’ve actually had AI in the past catch some issues with clients.”
Carlson said AI is helping insurance companies dig deeper during the underwriting process into the risks associated with companies and individuals shopping for a policy.
“Being able to search a name through AI where maybe there’s some articles, maybe there’s some pending lawsuits, maybe there’s some of those things that physically an underwriter, a person, may not get to,” he explained. “I think with AI, you can go out and grab all that stuff pretty quickly.”
“It’s going to be harder to hide if you’ve got issues going on and you’re going through an underwriting process and you’re trying to get a professional liability insurance policy, for example,” Carlson added.
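For illustration, here is a minimal sketch in Python of what such an AI-assisted screening step might look like. The data sources, helper names (fetch_news_mentions, fetch_court_filings) and flag thresholds are hypothetical placeholders for this article, not SRA 831(b) Admin's actual workflow or any vendor's API.

```python
# Hypothetical sketch of an AI-assisted underwriting pre-check.
# Helpers and thresholds are illustrative placeholders only.
from dataclasses import dataclass, field


@dataclass
class ScreeningReport:
    applicant: str
    news_mentions: list[str] = field(default_factory=list)
    pending_lawsuits: list[str] = field(default_factory=list)
    flags: list[str] = field(default_factory=list)


def fetch_news_mentions(name: str) -> list[str]:
    # Stub: a real system would query a news index or search service here.
    return []


def fetch_court_filings(name: str) -> list[str]:
    # Stub: a real system would query public court-records sources here.
    return []


def screen_applicant(name: str) -> ScreeningReport:
    """Pull together public signals an underwriter might not reach manually."""
    report = ScreeningReport(applicant=name)
    report.news_mentions = fetch_news_mentions(name)
    report.pending_lawsuits = fetch_court_filings(name)

    # Rule-based flags for a human underwriter to review, not to auto-decline on.
    if report.pending_lawsuits:
        report.flags.append(f"{len(report.pending_lawsuits)} pending lawsuit(s) found")
    if len(report.news_mentions) > 10:
        report.flags.append("Unusually heavy press coverage; manual review suggested")
    return report


if __name__ == "__main__":
    # Hypothetical applicant name used purely for demonstration.
    print(screen_applicant("Acme Logistics LLC"))
```

The point of a setup like this is not to replace the underwriter but to surface the "articles and pending lawsuits" Carlson describes before a human makes the call.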
The insurance industry’s adoption of AI comes as businesses face growing threats from fraudsters who leverage the tech to create fictitious businesses or use deepfakes to carry out fraud.
Carlson said that scammers are masking phone calls to conceal their identities, attempting to hijack emails and formulating long-term plans to rip off businesses, which has insurance companies on edge as AI proliferates.
“They have a lot of clients out there, unfortunately, that they’re spying on when there are opportunities to step in and put themselves in the middle of the transaction, and that’s the really scary part of this. And sometimes, are you dealing with real people or not? And it’s just the beginning of this,” he said.
“It’s a real big exposure to small to middle market business owners, and traditional insurance companies that are offering cyber coverages on a standalone basis on a policy aren’t going to be in a big hurry to start looking at coverages for this.”
Carlson said companies should step up cybersecurity training for their workforces so employees can more easily spot potential scams, and should put protocols in place to prevent money from being unwittingly transferred to a fraudster.
“I always tell people, slow down. Look at the emails if it doesn’t sound right, smell right. And to the voicemails, I think one way that you’re going to have to handle AI, is, unfortunately, through verbal passwords,” he explained.
“When somebody calls in and says, ‘Hey, I want this money moved to this account,’ you’ve got to have protocols there. Either you have passwords, or you hang up, call the company back, you talk to the individual, get a second confirmation, because once this money leaves your account, especially in a wire transaction, it’s gone forever.”
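Carlson's advice amounts to an out-of-band verification protocol. Below is a minimal sketch of how it might be formalized, assuming a shared verbal passphrase stored as a hash and a callback placed to a phone number already on file; the function names and data fields are illustrative for this article, not any company's actual controls.

```python
# Hypothetical sketch of a wire-transfer verification protocol:
# verbal password plus an out-of-band callback before money moves.
import hashlib
import hmac


def passphrase_matches(spoken: str, stored_sha256_hex: str) -> bool:
    """Compare a spoken passphrase against its stored SHA-256 hash."""
    digest = hashlib.sha256(spoken.strip().lower().encode()).hexdigest()
    return hmac.compare_digest(digest, stored_sha256_hex)


def callback_confirmation(phone_on_file: str, amount: float) -> bool:
    # Stub: in practice a person hangs up, dials the number already on file
    # (never a number the requester supplies), and records the outcome.
    return False


def approve_wire(request: dict, client_on_file: dict) -> bool:
    """Require both checks before any wire leaves the account."""
    # 1. Verbal password: reject immediately if it does not match.
    if not passphrase_matches(request["spoken_passphrase"],
                              client_on_file["passphrase_sha256"]):
        return False
    # 2. Second confirmation via callback to the number on file.
    return callback_confirmation(client_on_file["phone_on_file"], request["amount"])
```

The design mirrors the quote: neither the inbound call nor the email is trusted on its own, and the callback goes to contact details the company already holds, because a wired payment is effectively unrecoverable once it leaves the account.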