NVIDIA Deploys GPT-5.5-Powered Codex Across 10,000+ Employees, Marking a New Benchmark for Enterprise AI Adoption
NVIDIA has deployed GPT-5.5-powered Codex to over 10,000 employees on its own GB200 infrastructure — setting a new benchmark for what enterprise AI adoption looks like at operational scale.

NVIDIA has moved AI agents from pilot programmes into full enterprise operations, deploying OpenAI’s GPT-5.5-powered Codex application to more than 10,000 employees across engineering, product, legal, marketing, finance, sales, HR, operations, and developer programmes, all running on NVIDIA’s own GB200 NVL72 rack-scale infrastructure.

The rollout, announced on April 23 via NVIDIA’s official blog, is one of the most significant internal enterprise AI deployments disclosed by a major technology company to date, and it signals a decisive shift from AI experimentation to AI execution at scale.

What Was Deployed and How

Codex, OpenAI’s agentic coding application, is now powered by GPT-5.5 and runs on NVIDIA’s GB200 NVL72 rack-scale systems. The hardware underpinning this deployment is not incidental; it is central to the economic argument NVIDIA is making. The GB200 NVL72 systems deliver 35x lower cost per million tokens and 50x higher token output per second per megawatt than prior-generation systems, a cost-efficiency threshold that NVIDIA argues makes frontier-model inference viable at genuine enterprise scale for the first time.
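
To put the stated efficiency gains in perspective, the short calculation below applies the 35x and 50x ratios to a hypothetical prior-generation baseline. The baseline figures are illustrative assumptions, not numbers disclosed by NVIDIA; only the ratios come from the announcement.

```python
# Illustrative arithmetic only: the baseline values below are hypothetical
# assumptions; only the 35x and 50x ratios come from NVIDIA's stated
# GB200 NVL72 comparison with prior-generation systems.

BASELINE_COST_PER_M_TOKENS = 10.00       # assumed prior-gen cost (USD per 1M tokens)
BASELINE_TOKENS_PER_SEC_PER_MW = 1.0e6   # assumed prior-gen throughput per megawatt

COST_REDUCTION = 35     # stated: 35x lower cost per million tokens
THROUGHPUT_GAIN = 50    # stated: 50x higher token output per second per megawatt

gb200_cost_per_m_tokens = BASELINE_COST_PER_M_TOKENS / COST_REDUCTION
gb200_tokens_per_sec_per_mw = BASELINE_TOKENS_PER_SEC_PER_MW * THROUGHPUT_GAIN

print(f"Hypothetical GB200 cost per 1M tokens: ${gb200_cost_per_m_tokens:.2f}")
print(f"Hypothetical GB200 tokens/sec per MW:  {gb200_tokens_per_sec_per_mw:,.0f}")
```

Under these assumed baselines, the same inference budget buys roughly 35 times more tokens, which is the core of the case that per-employee agent compute becomes economically defensible.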

The deployment was built around a security-first architecture. NVIDIA IT provisioned a cloud virtual machine for every employee, giving each agent a dedicated sandbox in which to operate at full capability while remaining fully auditable. A zero-data-retention policy governs the deployment, and agents access production systems through command-line interfaces with read-only permissions.
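
One way to picture the read-only boundary is a thin allow-list wrapper sitting between the agent and the production command line. The sketch below is a hypothetical illustration of that pattern; the command list and function names are assumptions, not details disclosed about NVIDIA’s internal tooling.

```python
# Hypothetical sketch of a read-only CLI gate for an agent sandbox.
# The allow-lists and names here are illustrative assumptions, not
# details from NVIDIA's deployment.
import shlex
import subprocess

READ_ONLY_COMMANDS = {"cat", "ls", "grep", "head", "tail", "git"}   # assumed allow-list
READ_ONLY_GIT_SUBCOMMANDS = {"status", "log", "diff", "show"}       # assumed allow-list

def run_read_only(command: str) -> str:
    """Run a shell command only if it appears on the read-only allow-list."""
    parts = shlex.split(command)
    if not parts or parts[0] not in READ_ONLY_COMMANDS:
        raise PermissionError(f"Command not permitted: {command}")
    if parts[0] == "git" and (len(parts) < 2 or parts[1] not in READ_ONLY_GIT_SUBCOMMANDS):
        raise PermissionError(f"Git subcommand not permitted: {command}")
    result = subprocess.run(parts, capture_output=True, text=True, check=True)
    return result.stdout

# The agent can inspect state (e.g. run_read_only("git status")) but any
# mutating command is rejected before it reaches a production system.
```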

Measurable Operational Gains

The productivity metrics disclosed in the official announcement are specific and operationally significant. Debugging cycles that once stretched across days are closing in hours. Experimentation that previously required weeks is turning into overnight progress in complex, multi-file codebases. Teams are shipping end-to-end features from natural-language prompts, with stronger reliability and fewer wasted cycles than with earlier models.

These are not projections. They are reported outcomes from active internal use over several weeks, a distinction that separates this announcement from the category of AI enthusiasm that has dominated corporate communications for the past two years.

A Strategic Infrastructure Commitment

The deployment also disclosed the scale of OpenAI’s infrastructure commitment to NVIDIA’s hardware ecosystem. OpenAI has committed to deploying more than 10 gigawatts of NVIDIA systems for its next-generation AI infrastructure, a buildout that will put millions of NVIDIA GPUs at the foundation of OpenAI’s model training and inference for years ahead.

That figure, 10 gigawatts, is no symbolic commitment. It places this partnership among the largest technology infrastructure commitments in recorded history, with direct implications for data centre capacity planning, power grid demand, and semiconductor supply chains across the United States, Europe, Asia and, increasingly, emerging markets investing in sovereign AI infrastructure.

The partnership between NVIDIA and OpenAI spans more than a decade, beginning in 2016 when NVIDIA founder and CEO Jensen Huang hand-delivered the first DGX-1 AI supercomputer to OpenAI’s San Francisco headquarters. The Codex rollout is the most commercially visible output of that relationship to date.

Why This Deployment Sets a New Reference Point

What distinguishes this announcement from the volume of AI deployment claims now circulating across the enterprise technology sector is the specificity of what was disclosed: a named model, named infrastructure, a verifiable employee count, and operational metrics tied to real workflows across multiple business functions, not just engineering.

For technology decision-makers in South Africa, across the African continent, and in emerging markets evaluating enterprise AI readiness, the NVIDIA-OpenAI deployment provides the most concrete reference architecture available for what large-scale AI adoption actually looks like in practice: dedicated compute per employee, sandboxed security environments, zero-data retention governance, and measurable productivity outcomes as the threshold for success, not model benchmarks.

The cost economics made possible by GB200 infrastructure are also significant for markets outside North America. As the cost of frontier AI inference continues to decline, the deployment window for enterprise-grade AI across banking, logistics, energy, and professional services in emerging economies moves closer.

As Jensen Huang told employees in a company-wide message urging adoption of Codex: “Let’s jump to lightspeed. Welcome to the age of AI.” That internal directive, now public, carries institutional weight that no press release could replicate.
