The retail industry is among the leaders in generative AI adoption, but a new report highlights the security costs that accompany it.
According to cybersecurity firm Netskope, the retail sector has all but universally adopted the technology, with 95% of organisations now using generative AI applications. That’s a huge jump from 73% just a year ago, showing just how fast retailers are scrambling to avoid being left behind.
However, this AI gold rush comes with a dark side. As organisations weave these tools into the fabric of their operations, they are opening up a massive new attack surface and fresh avenues for sensitive data leaks.
The report’s findings show a sector in transition, moving from chaotic early adoption to a more controlled, corporate-led approach. There’s been a shift away from staff using their personal AI accounts, which has more than halved from 74% to 36% since the beginning of the year. In its place, usage of company-approved GenAI tools has more than doubled, climbing from 21% to 52% in the same timeframe. It’s a sign that businesses are waking up to the dangers of “shadow AI” and trying to get a handle on the situation.
In the battle for the retail desktop, ChatGPT remains king, used by 81% of organisations. Yet its dominance is not absolute. Google Gemini has made inroads with 60% adoption, and Microsoft’s two Copilot tools are hot on their heels at 56% and 51% respectively. ChatGPT’s popularity has recently seen its first-ever dip, while Microsoft 365 Copilot’s usage has surged, likely thanks to its deep integration with the productivity tools many employees use every day.
Beneath the surface of this generative AI adoption by the retail industry lies a growing security nightmare. The very thing that makes these tools useful – their ability to process information – is also their biggest weakness. Retailers are seeing alarming amounts of sensitive data being fed into them.
The most common type of data exposed is the company’s own source code, making up 47% of all data policy violations in GenAI apps. Close behind is regulated data, like confidential customer and business information, at 39%.
In response, a growing number of retailers are simply banning apps they deem too risky. The app most frequently finding itself on the blocklist is ZeroGPT, with 47% of organisations banning it over concerns it stores user content and has even been caught redirecting data to third-party sites.
This newfound caution is pushing the retail industry towards more serious, enterprise-grade generative AI platforms from major cloud providers. These platforms offer far greater control, allowing companies to host models privately and build their own custom tools.
OpenAI via Azure and Amazon Bedrock are tied for the lead, each used by 16% of retail companies. But these are no silver bullets: a simple misconfiguration could inadvertently connect a powerful AI directly to a company’s crown jewels, creating the potential for a catastrophic breach.
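To make the distinction concrete, here is a minimal sketch (illustrative, not taken from the report) of what the enterprise-grade pattern looks like in practice: a backend invoking a model through Amazon Bedrock’s runtime API, so prompts stay inside the company’s own cloud account rather than flowing through a consumer app. The region and model ID are assumptions for the example.

```python
import boto3

# Minimal sketch (illustrative, not from the Netskope report): invoke a model
# through Amazon Bedrock so prompts stay within the company's own AWS account.
# Region and model ID are assumptions; any model enabled in the account works.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarise this quarter's returns policy."}],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```

A real deployment would add IAM scoping, network controls, and logging on top, but the point stands: the prompt path stays under the company’s control, which is exactly where a misconfiguration would undo the benefit.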
The threat isn’t just from employees using AI in their browsers. The report finds that 63% of organisations are now connecting directly to OpenAI’s API, embedding AI deep into their backend systems and automated workflows.
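For illustration, a direct API integration of the kind the report describes can be as small as the sketch below, using OpenAI’s official Python SDK; the model name and prompts are placeholder assumptions, and this is precisely the backend path that data protection policies also need to cover.

```python
from openai import OpenAI

# Minimal sketch of a backend workflow calling OpenAI's API directly, the
# integration path 63% of retail organisations now use per the report.
# Model name and prompts are illustrative assumptions.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Classify customer emails by topic."},
        {"role": "user", "content": "Where is my refund for order 1234?"},
    ],
)

print(completion.choices[0].message.content)
```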
This AI-specific risk is part of a wider, troubling pattern of poor cloud security hygiene. Attackers are increasingly using trusted names to deliver malware, knowing that an employee is more likely to click a link from a familiar service. Microsoft OneDrive is the most common culprit, with 11% of retailers hit by malware from the platform every month, while the developer hub GitHub is used in 9.7% of attacks.
The long-standing problem of employees using personal apps at work continues to pour fuel on the fire. Social media sites like Facebook and LinkedIn are used in nearly every retail environment (96% and 94% respectively), alongside personal cloud storage accounts. It’s on these unapproved personal services that the worst data breaches happen. When employees upload files to personal apps, 76% of the resulting policy violations involve regulated data.
For security leaders in retail, the era of casual generative AI experimentation is over. Netskope’s findings are a warning that organisations must act decisively: gain full visibility of all web traffic, block high-risk applications, and enforce strict data protection policies that control what information can be sent where.
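At its simplest, “enforce strict data protection policies” can mean a pre-flight check on outbound prompts. The sketch below is a deliberately naive illustration, not a substitute for a real DLP product: it blocks prompts that appear to contain regulated data before they reach any GenAI API, with simplified patterns as assumptions.

```python
import re

# Deliberately naive sketch of a pre-flight data protection check; the
# patterns are simplified assumptions, not a real DLP ruleset.
BLOCKED_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> None:
    """Raise ValueError if the prompt appears to contain regulated data."""
    for label, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            raise ValueError(f"blocked: prompt appears to contain a {label}")

for prompt in ("Summarise our Q3 returns process",
               "Customer card 4111 1111 1111 1111 was declined"):
    try:
        screen_prompt(prompt)
        print("allowed:", prompt)
    except ValueError as err:
        print(err)
```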
Without adequate governance, the next innovation could easily become the next headline-making breach.