🌍 What would it take for AI to scale while making #datacenters and the planet more sustainable? AI is supercharging demand for data centers, and leaders like NVIDIA, Autodesk, Arcadis, ArtifexAI, and Amazon Web Services (AWS) are proving that sustainable design, efficient #acceleratedcomputing, and circular approaches to energy and water can cut carbon, energy use, and water waste while performance keeps rising. 🔗 Explore how AI, design, and infrastructure leaders are reshaping data center #sustainability: https://nvda.ws/44eawe0
About us
The NVIDIA accelerated computing platform is optimized for energy efficiency while accelerating AI performance, helping enterprises deploy secure, future-ready AI data centers.
- Website
- https://www.nvidia.com/en-us/data-center/
- Industry
- Computer Hardware Manufacturing
- Company size
- 10,001+ employees
- Headquarters
- Santa Clara, California
Updates
-
By running on the most powerful NVIDIA AI infrastructure and software, including the NVIDIA Holoscan platform for real-time edge computing and systems with #NVIDIARTXPRO 6000 Blackwell Server Edition GPUs, Palantir Technologies, TWG AI, and Teton Ridge are bringing cutting-edge computer vision and edge AI to modernize the rodeo, accelerating and elevating the sport experience for athletes, fans, and partners.
We are teaming up with powerhouse innovators Palantir Technologies + NVIDIA + TWG Global AI to bring real-time edge AI to rodeo. Every ride analyzed instantly in the arena. Better athlete + animal performance. Smarter rider/stock pairing and next-level broadcasts on the Cowboy Channel. The West just went full tech. Read more here: https://lnkd.in/g9mnmdn4 #Rodeo #AI #Palantir #NVIDIA #TetonRidge #CowboyChannel
-
How is AI supercomputing transforming the semiconductor industry? Join NVIDIA’s Timothy Costa at the Global Semiconductor Executive Summit as he breaks down how #acceleratedcomputing and AI are powering a new era of smart manufacturing, from AI-driven chip design and physics-accurate simulation to autonomous robotics and digital twins. 🗓️ Wednesday, December 17 | 10:30 AM JST 🔗 Learn more: https://nvda.ws/4pFHL2J #SEMICONJapan
-
California Polytechnic State University-San Luis Obispo is launching a new #AIfactory, powered by an #NVIDIADGX BasePOD with DGX B200 systems. The AI factory, to be built in collaboration with Mark III Systems, will provide students and faculty the resources to tackle challenges in industry and the community. Learn more now ⬇️
Cal Poly is making a major investment in the future of applied AI. Our new AI Factory, powered by NVIDIA DGX technology, will allow students, faculty and regional partners to train large-scale models on campus and accelerate research in areas like medical imaging, climate monitoring and advanced manufacturing. This positions Cal Poly among a select group of universities with this level of capability. Read more about what’s coming: https://lnkd.in/guXG3R8c
-
Mixture of Experts (MoE) is the AI architecture pushing the industry toward a future where massive capability, efficiency, and scale coexist. NVIDIA GB200 NVL72 unlocks this potential today, and our roadmap with the NVIDIA Vera Rubin architecture will continue to expand the horizons of frontier models. For #IT leaders, this means lower computational costs and better performance per dollar and per watt. Prepare your infrastructure for the next era of AI reasoning and cost optimization. Learn how NVIDIA GB200 NVL72 scales complex MoE models and delivers a 10x leap in inference performance.
What do the world’s most intelligent open-source AI models have in common? 🧠 They all use a Mixture of Experts (MoE) architecture. Learn why NVIDIA Blackwell NVL72 is the only rack-scale system capable of delivering a 10x inference performance leap across a broad range of MoEs today, including Kimi K2 Thinking, DeepSeek-R1, and Mistral Large 3. 🔗: https://nvda.ws/4pJRsMX
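A rough intuition for why MoE improves performance per dollar and per watt: a router activates only a few experts for each token, so per-token compute scales with the number of active experts while total model capacity scales with the number of experts overall. The toy NumPy sketch below illustrates just that routing step; the sizes, weights, and the moe_layer function are invented for illustration and are not taken from any of the models named above.

```python
# Toy top-k Mixture-of-Experts routing (illustrative only; real MoE layers are
# far larger and run on GPU frameworks, not NumPy).
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2                 # made-up sizes
tokens = rng.standard_normal((16, d_model))          # a small batch of token embeddings

experts = rng.standard_normal((n_experts, d_model, d_model)) * 0.02  # one weight matrix per "expert"
router_w = rng.standard_normal((d_model, n_experts)) * 0.02          # router projection

def moe_layer(x):
    logits = x @ router_w                            # router scores, shape (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # indices of the top-k experts per token
    gates = np.take_along_axis(logits, top, axis=-1)
    gates = np.exp(gates) / np.exp(gates).sum(-1, keepdims=True)     # softmax over the chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                      # only k of n experts run for each token
        for slot in range(top_k):
            e = top[t, slot]
            out[t] += gates[t, slot] * (x[t] @ experts[e])
    return out

y = moe_layer(tokens)
print(y.shape)                                       # (16, 64)
print(f"experts active per token: {top_k} of {n_experts}")
```

In frontier MoE models the same routing decision also determines which GPUs do the work, which is why a large, fast interconnect domain such as the 72-GPU NVLink domain in GB200 NVL72 matters for serving these models efficiently.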
-
🚨 NVIDIA CUDA 13.1 brings the largest update to the #CUDA platform in 20 years. Discover the new features and updates for improving performance and driving #acceleratedcomputing, including:
✅ CUDA Tile: future-proof kernels by abstracting Tensor Cores.
✅ Performance wins: up to 4x speedup with cuBLAS Grouped GEMM and 2x in cuSOLVER on Blackwell.
✅ Green Contexts: finer GPU resource control via the Runtime API.
✅ New tooling: Nsight Compute now profiles CUDA Tile kernels.
✅ New CUDA Programming Guide: everything CUDA, for novices and experts.
🔗 Download the toolkit and see the next era of GPU programming: https://nvda.ws/4pljv5w
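For readers unfamiliar with the term, a grouped GEMM bundles many independent matrix multiplications with different shapes into a single call so the GPU can schedule them together. The NumPy sketch below shows only what that operation computes; the shapes are hypothetical, and the plain Python loop is not the cuBLAS interface or a guide to its performance.

```python
# Conceptual sketch of a grouped GEMM: C_i = A_i @ B_i for several groups with
# different shapes. cuBLAS exposes this as one batched call; here a loop stands
# in for that single launch purely to show the math.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (m, n, k) shapes per group, e.g. per-expert GEMMs in an MoE layer.
group_shapes = [(128, 64, 256), (32, 64, 256), (200, 64, 256)]

A = [rng.standard_normal((m, k)) for m, n, k in group_shapes]
B = [rng.standard_normal((k, n)) for m, n, k in group_shapes]

C = [a @ b for a, b in zip(A, B)]                    # one result per group

for (m, n, k), c in zip(group_shapes, C):
    print(f"({m}x{k}) @ ({k}x{n}) -> {c.shape}")
```

The 4x figure in the post refers to the real cuBLAS implementation on Blackwell hardware, not to anything measurable from this sketch.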
-
NVIDIA Data Center reposted this
Following our partnership announcement at NVIDIA GTC earlier this summer, we had a chance to test NVIDIA's new DGX Cloud Lepton. We found Lepton's ability to unify different cloud providers into a single platform particularly useful. During our evaluation, the plug-and-play setup and intuitive log surfacing let our team focus on what matters: advancing AI for biological applications. When training and evaluating dozens of model variants for Latent-X, multi-node GPU scaling meant minimal downtime and strong throughput. We went from zero to training in hours, not days. DGX Cloud Lepton helped us iterate faster, and we're thankful to NVIDIA for supporting our research. 🖥️ Read more on the partnership announcement here: https://lnkd.in/et8TDYwT
-
🤝 Amazon Web Services (AWS) and NVIDIA have been collaborating for over 15 years with extreme co-design across software, silicon, and the entire AI infrastructure. Announced at #AWSreInvent, AWS and NVIDIA are boosting AI inference performance for frontier mixture-of-experts models through:
✅ AWS support for NVIDIA NVLink Fusion to deploy Trainium4 chips, Graviton CPUs, and the AWS Nitro System.
✅ Amazon EC2 P6e-GB300 UltraServers, powered by NVIDIA GB300 NVL72 systems.
Hear from Dion Harris, Sr. Director of HPC and AI Infrastructure Solutions at NVIDIA.
🔗 Read the announcement: https://nvda.ws/48TotRb
-
🎉 We are expanding our collaboration with Palantir Technologies to launch Chain Reaction, which will accelerate NVIDIA AI infrastructure installations across the U.S. By using AI to improve complex supply chains, we will support gigawatt-scale #AIfactory buildouts across power generation, power distribution, construction, and data center operations. Learn more now ⤵️
The bottleneck to AI innovation is no longer algorithms; it is power and compute. America is at an inflection point in the energy infrastructure buildout, and it requires software built for an entirely different scale. Today, alongside NVIDIA and CenterPoint Energy, we are launching Chain Reaction to address this directly by accelerating the AI buildout with energy producers, power distributors, data centers and infrastructure builders.
-
AI has three critical phases: training, post-training, and inference. Each has different system demands, and meeting them efficiently requires full-stack co-design across compute, networking, and software. In this discussion, Alex Divinsky and Dion Harris break down why #inference is becoming the core economic engine of the #AIfactory, how disaggregated serving is shaping modern model performance, and how system-level optimization is delivering 10× performance gains for large-scale inference architectures like mixture-of-experts (MoE). A toy sketch of the disaggregated-serving idea follows the episode link below.
E21: NVIDIA'S HUGE AI Chip Breakthroughs Change Everything
https://www.youtube.com/
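For context on the terminology in this episode, disaggregated serving splits inference into a prefill stage that processes the prompt and a decode stage that generates tokens one at a time, run on separate hardware pools that can be batched and scaled independently. The toy Python sketch below shows only the control flow of that split; the queues, worker functions, and "KV state" strings are illustrative stand-ins, not any real serving stack.

```python
# Toy sketch of disaggregated serving: prefill and decode as separate worker
# pools connected by a hand-off queue. No real model, KV cache, or GPU
# scheduling is involved.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Request:
    prompt: str
    max_new_tokens: int

def prefill_worker(inbox: Queue, handoff: Queue):
    """Compute-bound phase: process the whole prompt once, emit a 'KV state'."""
    while not inbox.empty():
        req = inbox.get()
        kv_state = f"<kv for {len(req.prompt.split())} prompt tokens>"   # stand-in for a real KV cache
        handoff.put((req, kv_state))

def decode_worker(handoff: Queue):
    """Bandwidth-bound phase: generate tokens one at a time from the KV state."""
    while not handoff.empty():
        req, kv_state = handoff.get()
        tokens = [f"tok{i}" for i in range(req.max_new_tokens)]          # stand-in for sampling
        print(f"{req.prompt!r}: decoded {len(tokens)} tokens using {kv_state}")

inbox, handoff = Queue(), Queue()
inbox.put(Request("why is inference the economic engine of the AI factory", 4))
inbox.put(Request("explain mixture of experts", 3))

prefill_worker(inbox, handoff)   # in a real deployment these pools run on different GPU nodes
decode_worker(handoff)
```

In production the two pools exchange the actual KV cache over the network, which is why interconnect bandwidth and system-level co-design feature so prominently in the discussion above.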