According to OpenAI, enterprise AI has graduated from the sandbox and is now being used for daily operations with deep workflow integrations.
New data from the company shows that firms are now assigning complex, multi-step workflows to models rather than simply asking for text summaries. The figures illustrate a marked shift in how organisations deploy generative models.
With OpenAI’s platform now serving over 800 million users weekly, a “flywheel” effect is driving consumer familiarity into professional environments. The company’s latest report notes that over a million business customers now use these tools, and the goal is now even deeper integration.
This evolution presents two realities for decision-makers: productivity gains are concrete, but a growing divide between “frontier” adopters and the median enterprise suggests that value depends heavily on usage intensity.
From chatbots to deep reasoning
The best metric for corporate deployment maturity is not seat count, but task complexity
OpenAI reports that ChatGPT message volume has grown eightfold year-over-year, but a better indicator for enterprise architects is the consumption of API reasoning tokens, which suggests deeper integrations are taking place. This figure has increased nearly 320-fold per organisation—evidence that companies are systematically wiring more intelligent models into their products to handle logic rather than basic queries.
The rise of configurable interfaces supports this view. Weekly users of Custom GPTs and Projects (tools that allow workers to instruct models with specific institutional knowledge) have increased approximately 19x this year. Roughly 20 percent of all enterprise messages are now processed via these customised environments, indicating that standardisation is now a prerequisite for professional use.
For enterprise leaders auditing the ROI of AI seats, the data offers evidence on time savings. On average, users attribute 40 to 60 minutes of time saved per active day to the technology. The impact varies by function: data science, engineering, and communication professionals report higher savings (averaging 60 to 80 minutes daily).
Beyond efficiency, the software is altering role boundaries. There is a specific effect on technical capability, particularly regarding code generation.
Among enterprise users, OpenAI says that coding-related messages have risen across all business functions. Outside of engineering, IT, and research roles, coding queries have grown by an average of 36 percent over the past six months. Non-technical teams are using the tools to perform analysis that previously required specialised developers.
Operational improvements extend across departments. Survey data shows 87 percent of IT workers report faster issue resolution, while 75 percent of HR professionals see improved employee engagement.
Widening enterprise AI competence gap
OpenAI’s data suggests that a split is forming between organisations that simply provide access to tools and those that embed the technology deeply into their operating models. The report identifies a “frontier” class of workers – those in the 95th percentile of adoption intensity – who generate six times more messages than the median worker.
This disparity is stark at the organisational level. Frontier firms generate approximately twice as many messages per seat as the median enterprise and seven times more messages to custom GPTs. Leading firms are not just using the tools more frequently; they are investing in the infrastructure and standardisation required to make AI a persistent part of operations.
Users who engage across a wider variety of tasks (roughly seven distinct types) report saving five times more time than those who limit their usage to three or four basic functions. Benefits correlate directly with the depth of use, implying that a “light touch” deployment plan may fail to deliver the anticipated ROI.
While the professional services, finance, and technology sectors were early adopters and maintain the largest scale of usage, other industries are sprinting to catch up. The technology sector leads with 11x year-over-year growth, but healthcare and manufacturing follow closely with 8x and 7x growth respectively.
Global adoption patterns also challenge the notion that this is solely a US-centric phenomenon. International usage is surging, with markets such as Australia, Brazil, the Netherlands, and France showing business customer growth rates exceeding 140 percent year-over-year. Japan has also surfaced as a key market, holding the largest number of corporate API customers outside of the US.
OpenAI: Deep AI integrations accelerate enterprise workflows
Examples of deployment highlight how these tools influence key business metrics. Retailer Lowe’s deployed an associate-facing tool to over 1,700 stores, resulting in a customer satisfaction score increase of 200 basis points (two percentage points) when associates used the system. Furthermore, when online customers engaged with the retailer’s AI tool, conversion rates more than doubled.
In the pharmaceutical sector, Moderna used enterprise AI to speed up the drafting of Target Product Profiles (TPPs), a process that typically involves weeks of cross-functional effort. By automating the extraction of key facts from massive evidence packs, the company reduced core analytical steps from weeks to hours.
Financial services firm BBVA leveraged the technology to fix a bottleneck in legal validation for corporate signatory authority. By building a generative AI solution to handle standard legal queries, the bank automated over 9,000 queries annually, effectively freeing up the equivalent of three full-time employees for higher-value tasks.
However, the transition to production-grade AI requires more than software procurement; it necessitates organisational readiness. The primary blockers for many organisations are no longer model capabilities, but implementation and internal structures.
Leading firms consistently enable deep system integration by “turning on” connectors that give models secure access to company data. Yet, roughly one in four enterprises has not taken this step, limiting their models to generic knowledge rather than specific organisational context.
Successful deployment relies on executive sponsorship that sets explicit mandates and encourages the codification of institutional knowledge into reusable assets.
As the technology continues to evolve, organisations must adjust their approach. OpenAI’s data suggests that success now depends on delegating complex workflows with deep integrations rather than just asking for outputs, treating AI as a primary engine for enterprise revenue growth.
See also: AWS re:Invent 2025: Frontier AI agents replace chatbots

