Palantir is integrating Nvidia CUDA-X and Nemotron models into its AI platform for operational use


Palantir is now working directly with Nvidia to fuse CUDA‑X accelerated computing and Nemotron open AI models into the core of its Ontology framework, according to details revealed Tuesday during Nvidia CEO Jensen Huang's keynote.
The full-stack integration is aimed at operational AI that runs complex systems in real time. The two companies are designing a new stack for AI infrastructure: analytics tools, reference workflows, automation layers, and customizable agents that go beyond generic large language models.
This tech will sit at the heart of the Palantir AI Platform, built for institutions with live data pipelines.
Huang explained during the announcement, “By combining Palantir’s powerful AI-driven platform with Nvidia CUDA‑X accelerated computing and Nemotron open AI models, we’re creating a next-generation engine to fuel AI-specialized applications and agents that run the world’s most complex industrial and operational pipelines.”
Retail and telecom firms connect to new Nvidia stack
Lowe’s is one of the first companies to use the new integrated setup. The retailer is building a digital replica of its global supply chain, which will let its teams make real-time decisions based on AI suggestions.
This virtual copy of its logistics will allow dynamic optimization, supporting cost control, supply chain flexibility, and better customer service without having to rely on after-the-fact reporting.
Meanwhile, Nvidia is teaming up with partners including Booz Allen, Cisco, MITRE, ODC, and T-Mobile to assemble what they’re calling America’s first AI-native wireless stack for 6G.
The stack is powered by Nvidia AI Aerial, the company’s infrastructure platform built for high‑throughput AI network workloads. This stack will support public safety through multimodal sensing systems and also allow AI‑driven spectrum sensing and agility across wireless infrastructure. It’s built to handle what’s next, not what’s current.
Eli Lilly, Uber and automakers build out Nvidia-powered AI systems
On the healthcare front, Eli Lilly is joining forces with Nvidia to build a supercomputer and AI factory aimed at drug discovery.
The system is slated to be completed in December and go online in January, and it will be fully owned and operated by Eli Lilly. The company said it will use more than 1,000 Nvidia Blackwell Ultra GPUs connected through a single high-speed, unified network.
These GPUs will power the AI factory, where researchers can train and deploy models to speed up drug development.
Diogo Rau, Eli Lilly’s Chief Information and Digital Officer, said the current drug approval process takes a decade. With this AI factory, the goal is to cut that time drastically.
“The things that we’re talking about discovering with this kind of power that we have right now, we’re really going to see those benefits in 2030,” he said.
Thomas Fuchs, Chief AI Officer at Eli Lilly, called the machine “a novel scientific instrument… like an enormous microscope for biologists.” He said the system lets researchers simulate millions of experiments to test potential drugs and uncover more treatment options than ever before.
Nvidia is also now working with Uber to expand its DRIVE platform for autonomous ride-hailing, building toward a global ride-hailing network that combines autonomous vehicles and human drivers. Automakers including Stellantis, Lucid, and Mercedes-Benz are building Level 4-ready vehicles that integrate with the Nvidia DRIVE platform.
Others, such as Aurora, Volvo, and Waabi, are applying Level 4 technology to long-haul freight, where consistent road conditions make autonomous systems easier to deploy.