GenAI Is Putting Data in Danger, But Companies Are Adopting It Anyway


(Ayesha Kanwal/Shutterstock)

There are serious questions about maintaining the privacy and security of data when using generative AI applications, yet companies are rushing headlong to adopt GenAI anyway. That’s the conclusion of a new study released last week by Immuta, which also found some security and privacy benefits to GenAI.

Immuta, a provider of data governance and security solutions, surveyed roughly 700 data professionals on their organizations’ GenAI and data activities, and shared the results in its AI Security & Governance Report.

The report paints a dark picture of looming data security and privacy challenges as companies rush to take advantage of GenAI capabilities made available through large language models (LLMs) such as GPT-4, Llama 3, and others.

“In their eagerness to embrace [LLMs] and keep up with the rapid pace of adoption, employees at all levels are sending vast amounts of data into unknown and unproven AI models,” Immuta says in its report. “The potentially devastating security costs of doing so aren’t yet clear.”

Half of the data professionals surveyed by Immuta say their organization has four or more AI systems or applications in place. However, serious privacy and security concerns are accompanying the GenAI rollouts.

According to the survey, 55% of respondents say inadvertent exposure of sensitive information by LLMs is one of the biggest threats. Slightly fewer (52%) say they’re worried their users will expose sensitive data to an LLM via prompts.

You can download Immuta’s AI Security & Governance Report here.

On the security front, 52% of those surveyed say they worry about adversarial attacks by malicious actors via AI models. And slightly more (57%) say that they’ve seen “a significant increase in AI-powered attacks in the past year.”

All told, 80% of those surveyed say GenAI is making it more difficult to maintain security, according to Immuta’s report. The challenge is compounded by the nature of public LLMs, such as OpenAI’s ChatGPT, which can use the information users submit as source material for subsequent training runs. This presents “a higher risk of attack and other cascading security threats,” Immuta says.

“These models are very expensive to train and maintain and do forensic analysis on, so they carry a lot of uncertainty,” says Joe Regensburger, vice president of research at Immuta. “We’re not sure of their impact or the scope.”
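The report stops short of prescribing fixes, but one widely used mitigation for prompt-borne leakage is to redact sensitive values before a prompt ever leaves the organization. Below is a minimal Python sketch of that idea; the regex patterns and the scrub_prompt helper are illustrative assumptions for this article, not Immuta’s method or a production-grade PII detector.

```python
import re

# Illustrative patterns for common sensitive fields. A real deployment would
# use a vetted PII-detection library, not ad hoc regexes (an assumption for
# this sketch, not Immuta's approach).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens so the raw
    values never leave the organization's boundary."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com, SSN 123-45-6789, about the renewal."
    print(scrub_prompt(raw))
    # -> Contact [EMAIL], SSN [SSN], about the renewal.
```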

Despite the security challenges posed by GenAI, 85% of data professionals surveyed by Immuta are confident that they can address any concerns about using the technology. What’s more, two-thirds say they’re confident in their ability to maintain data privacy in the age of AI.

“In the age of cloud and AI, data security and governance complexities are mounting,” Sanjeev Mohan, principal at SanjMo, says in the report. “It’s simply not possible to use legacy approaches to manage data security across hundreds of data products.”

The top three ethical issues of AI, per Immuta’s report

While GenAI raises privacy and security risks, data professionals are also looking to GenAI to provide new tools and techniques for automating privacy and security work, according to the survey.

Specifically, 13% are looking to AI to help identify phishing attacks and support security awareness training, 12% see a role for it in incident response, and 10% say it can help with threat simulation and red teaming. Data augmentation and masking, audits and reporting, and streamlining security operations center (SOC) teamwork and operations round out the potential uses.
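The report doesn’t detail implementations, but masking in this context typically means replacing identifying values with consistent surrogate tokens before data reaches an AI pipeline. Here is a minimal sketch of deterministic masking using an HMAC; MASKING_KEY, mask_value, and the inline sample data are hypothetical, and production systems would lean on a governance platform and managed keys rather than hand-rolled hashing.

```python
import csv
import hashlib
import hmac
import io

# Illustrative secret for this sketch; in practice the key would come from a
# key-management service, never from source code.
MASKING_KEY = b"replace-with-managed-secret"

def mask_value(value: str) -> str:
    """Deterministically mask a value with an HMAC: equal inputs yield equal
    tokens, so joins and group-bys still work, but raw values stay hidden."""
    digest = hmac.new(MASKING_KEY, value.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def mask_column(rows, column):
    """Mask one column in a stream of dict rows before it feeds an AI pipeline."""
    for row in rows:
        row[column] = mask_value(row[column])
        yield row

if __name__ == "__main__":
    # Hypothetical sample data standing in for a customer table.
    data = io.StringIO("email,plan\nalice@example.com,pro\nbob@example.com,free\n")
    for row in mask_column(csv.DictReader(data), "email"):
        print(row)
```

Because identical inputs map to identical tokens, masked columns can still be joined and aggregated, which is what makes this style of masking compatible with the data augmentation use case the survey mentions.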

“AI and machine learning are able to automate processes and quickly analyze vast data sets to improve threat detection, and enable advanced encryption methods to secure data,” Matt DiAntonio, vice president of product management at Immuta, said in a press release.

At the end of the day, it’s clear that advancements in AI are changing the nature of data security and privacy work. Companies must work to stay on top of rapidly changing threats and opportunities, DiAntonio said.

“As organizations mature on their AI journeys, it is critical to de-risk data to prevent unintended or malicious exposure of sensitive data to AI models,” he said. “Adopting an airtight security and governance strategy around generative AI data pipelines and outputs is imperative to this de-risking.”

Related Items:

New Cisco Study Highlights the Impact of Data Security and Privacy Concerns on GenAI Adoption

ChatGPT Growth Spurs GenAI-Data Lockdowns

Bridging Intent with Action: The Ethical Journey of AI Democratization