As artificial intelligence is increasingly integrated into the business landscape, experts are raising concerns over the risks it could introduce to data security — particularly with the rise of generative AI models such as ChatGPT. Forbes recently reported on how the platforms could affect the data privacy of businesses, and companies like Apple and Samsung have taken steps to prevent employees from using the systems for the sake of confidentiality.
“There are so many concerning things with that situation,” said Allison Arnold, Broker, Professional Liability, Burns & Wilcox, Indianapolis, Indiana. “What it really comes down to is if you are going to use AI, you have to be educated on what you are using it for and which AI tech is going to get you the outcome you are looking for, whether it is in education, at work, or in your daily life.”
Until more is known about the risks, organizations should tread lightly when it comes to integrating generative AI into their operations, said Gino Franco, Broker, Professional Liability, Burns & Wilcox, Denver, Colorado. They should also ensure that they are protected with Cyber & Privacy Liability Insurance in the event their data is compromised.
“You are essentially asking a bot to do the work of a human, which cannot fully be done,” Franco said. “If it is not monitored in the correct way, that opens them up to security issues.”
‘Checks and balances’ needed for AI
AI tools rely on gathering data points, the International Association of Privacy Professionals noted in a March report, and there are currently some uncertainties about the personal information they could collect about users, due in part to evolving privacy laws. In recent months, mounting concerns over generative AI have caught the attention of government agencies. U.S. officials have called for regulations on AI and the White House recently announced plans for federal research on the issue, ABC News reported May 23. In Canada, regulators are set to investigate ChatGPT parent company OpenAI over data collection and usage concerns, Firstpost reported May 26.
These are important steps, Arnold said, especially as more businesses and individuals use the technology. Two months after ChatGPT launched in late 2022, it reached 100 million monthly active users and broke records for the speed with which its user base grew, Reuters reported in February.
“These technologies can have dire effects if we do not have control of them,” Arnold said. “We really need a solid checks-and-balances system when we are dealing with AI because they are built to grow and learn, and that can be very morally confusing and could cause some issues down the road.”
In Europe, the General Data Protection Regulation addresses data privacy and security, but no equivalent national law exists in the U.S., she added. “If we had some sort of blanket law to protect all of us, that could probably help our world moving forward,” Arnold said. “I really do not think we can regulate or control our own personal data too much.”
As the risk grows, Cyber & Privacy Liability Insurance could become even more important for business owners. When a business experiences a data breach, whether due to AI platforms or other vulnerabilities, the policy can help pay for breach response, investigation, ransomware payments, lawsuit settlements, and more. Cyber & Privacy Liability Insurance policies can also provide risk management services to help companies reduce their chance of being targeted. Excess Liability Insurance can provide additional liability limits if needed.
Like other technologies, the use of AI is a potential opening for cybercriminals, Franco explained. “You are allowing a gateway for more access for cybercriminals to come in,” he said. “It is opening up a door, but this is the world we live in. It is about finding a way to mitigate that risk. If you do not protect your business against cyber threats, it could really ruin a business.”
Managing workplace AI hazards
Generative AI is currently being used in a wide range of industries. According to a 2022 survey of professionals in the U.S., 37% of marketing and advertising professionals and 35% of technology professionals had used AI to help with tasks at work, Statista reported in May. Industries with lower use rates included accounting at 16% and health care at 15%.
According to research released in April from KPMG, about 65% of U.S. companies were looking into how generative AI could improve their operations, compared to about 37% of companies in Canada. Other research showed that Canadian lawyers, however, were more open to generative AI in the workplace compared to attorneys in the U.S. and U.K., Canadian Lawyer reported May 8.
Companies that use AI tools could face a variety of data security risks, Franco said. In addition to the concerns associated with employees sharing sensitive company data with chatbots, any automated processes that use AI could introduce vulnerabilities. “What is really scary is that these AI systems evolve and adapt as much as us humans, but a lot quicker,” he said. “They may take a certain algorithm and adapt to it, and possibly change that algorithm to benefit someone and/or something other than the user who initiated.”
While the ways businesses could be targeted based on their AI use will vary between industries, general best practices apply and should be reviewed regularly with a cybersecurity professional, Franco said. As companies work to strengthen their defenses, restricting chatbot use appears to be a wise start, according to Arnold.
“I do hope companies move toward having a flat-out restriction against using it,” she said. “The fact that these AI bots and companies have no guarantee of privacy is very concerning. It also pushes liability to the user and does not take on liability.”
Companies that allow chatbot use should fully research the platforms they utilize, including how data is secured and whether or not it is ever shared. “Who is it shared with? Is it ever destroyed or removed from that system? It goes back to doing your research and due diligence,” Arnold said. “It is growing very quickly, and I can just imagine that the majority of individuals using this are not thinking about their data privacy. It could really wreak some havoc if they are not careful.”
Even with policies against employee chatbot use, the rapid integration of AI means “you cannot really stop it,” Franco said. “It is already in place in so much of what we do,” he said. “It is a scary world, but what we can do is have conversations and collaborations within our industries. We know the technology is here. How do we secure ourselves and keep ourselves safe? That is a huge question.”
Importance of the ‘human factor’
Generative AI is expected to continue to change how many businesses operate, including further personalization of digital interactions, increased software accessibility, and the ability to create code to automate actions, Harvard Business Review detailed in an April report. Adding guidelines for the software is critical, the article noted.
“The capabilities of AI are pretty endless,” Arnold said. “Taking some steps away from your employees who have to do certain things on a daily basis could be beneficial, but if the AI does not know to double-check something or take precautions, then that really ramps up a lot of the risk for cybersecurity claims scenarios happening. They are eliminating that human factor, which could cause some major issues.”
In March, OpenAI confirmed a data breach of its own ChatGPT platform, underscoring the data privacy worries many have about the service, Security Intelligence reported. The system was taken offline temporarily after the breach, which OpenAI noted in a release was caused by “a bug in an open-source library which allowed some users to see titles from another active user’s chat history.”
Today’s Cyber & Privacy Liability Insurance policies generally do not restrict coverage for breaches caused by or related to the use of AI platforms. That could change over time, Arnold and Franco agreed.
“I think that could start to happen as AI becomes more utilized in our everyday lives,” Arnold said of coverage forms eventually excluding AI tools. For now, business owners should speak with their insurance broker about their cybersecurity coverage and make sure they have risk management systems in place. “If they speak with their insurance professional, they would probably advise them not to use AI in any capacity,” she said.
Cyber & Privacy Liability Insurance policies “come with a lot more than just monies to make yourself whole again after a data breach,” she added. “They bring a lot of expertise to the table, including forensic investigators to figure out how the hacker got into the system, and they know every state’s notification requirements. They will handle that for the business, and they can also help with any reputational harm.”
Arnold said that if she were a business owner, she would not want to go through a cyber incident without a policy whose response team could walk her through every step of the process. “It can get very, very complicated and costly, and the quicker you can have someone help you get it resolved and move on, the better off you are going to be,” she said.
Businesses should also consider having dedicated IT security professionals on staff to oversee processes, including the implementation of any generative AI platforms, Franco said. “That is going to be the best way to do it,” he said. “We have seen companies with the highest security levels they can have, though, and issues still happen. You will never be able to completely stop it, but you want to work with IT professionals who are more cybersecurity-savvy, and also have that insurance coverage in place.”
As generative AI continues to evolve, strong, conscious decision-making is needed around where the technology is headed, Arnold emphasized. “The good it can do is really incredible, but the bad it can do can be astronomical,” she said. “Do your research, do your due diligence, and ask the right questions about that data privacy. Right now, that does not seem to be at the forefront of AI conversations.”