NVIDIA Data Center

Computer Hardware Manufacturing

Santa Clara, California 234,844 followers

About us

The NVIDIA accelerated computing platform is optimized for energy efficiency while accelerating AI performance, helping enterprises deploy secure, future-ready AI data centers.

Website
https://www.nvidia.com/en-us/data-center/
Industry
Computer Hardware Manufacturing
Company size
10,001+ employees
Headquarters
Santa Clara, California

Updates

  • By running on the most powerful NVIDIA AI infrastructure and software, including the NVIDIA Holoscan platform for real-time edge computing and systems with #NVIDIARTXPRO 6000 Blackwell Server Edition GPUs, Palantir Technologies, TWG AI, and Teton Ridge are bringing cutting-edge computer vision and edge AI to modernize the rodeo, accelerating and elevating the sport experience for athletes, fans, and partners.

  • How is AI supercomputing transforming the semiconductor industry? Join NVIDIA’s Timothy Costa at the Global Semiconductor Executive Summit as he breaks down how #acceleratedcomputing and AI are powering a new era of smart manufacturing, from AI-driven chip design and physics-accurate simulation to autonomous robotics and digital twins.
    🗓️ Wednesday, December 17 | 10:30 AM JST
    🔗 Learn more: https://nvda.ws/4pFHL2J
    #SEMICONJapan

  • California Polytechnic State University-San Luis Obispo is launching a new #AIfactory, powered by an #NVIDIADGX BasePOD with DGX B200 systems. The AI factory, to be built in collaboration with Mark III Systems, will provide students and faculty the resources to tackle challenges in industry and the community. Learn more now ⬇️

    Cal Poly Engineering:

    Cal Poly is making a major investment in the future of applied AI. Our new AI Factory, powered by NVIDIA DGX technology, will allow students, faculty and regional partners to train large-scale models on campus and accelerate research in areas like medical imaging, climate monitoring and advanced manufacturing. This positions Cal Poly among a select group of universities with this level of capability. Read more about what’s coming: https://lnkd.in/guXG3R8c

  • Mixture of Experts (MoE) is the AI architecture pushing the industry toward a future where massive capability, efficiency, and scale coexist. NVIDIA GB200 NVL72 unlocks this potential today, and our roadmap with the NVIDIA Vera Rubin architecture will continue to expand the horizons of frontier models. For #IT leaders, this means lower computational costs and better performance per dollar and per watt. Prepare your infrastructure for the next era of AI reasoning and cost optimization. Learn how NVIDIA GB200 NVL72 scales complex MoE models and delivers 10x the performance.

    NVIDIA:

    What do the world’s most intelligent open-source AI models have in common? 🧠 They all use a Mixture of Experts (MoE) architecture. Learn why NVIDIA Blackwell NVL72 is the only rack-scale system capable of delivering a 10x inference performance leap across a broad range of MoEs today, including Kimi K2 Thinking, DeepSeek-R1, and Mistral Large 3. 🔗: https://nvda.ws/4pJRsMX

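To make the Mixture of Experts idea behind these models concrete, here is a minimal plain-Python sketch of top-k expert routing. The expert count, top-k value, toy experts, and router weights are all hypothetical illustrations, not NVIDIA's or any model's actual implementation.

```python
# Illustrative sketch of top-k routing in a Mixture of Experts (MoE) layer.
# Real MoE models use learned routers and large feed-forward experts on GPUs;
# this toy version only shows the routing semantics.
import math

NUM_EXPERTS = 4
TOP_K = 2

# Toy experts: each is a simple function standing in for a feed-forward block.
experts = [lambda x, s=s: [v * s for v in x] for s in (0.5, 1.0, 1.5, 2.0)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(token, router_weights):
    # Router: score each expert for this token (a dot product here).
    logits = [sum(w * x for w, x in zip(row, token)) for row in router_weights]
    probs = softmax(logits)
    # Keep only the top-k experts; the rest are skipped entirely, which is
    # why MoE raises model capacity without raising per-token compute.
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    norm = sum(probs[i] for i in top)
    out = [0.0] * len(token)
    for i in top:
        weight = probs[i] / norm
        out = [o + weight * e for o, e in zip(out, experts[i](token))]
    return out, top

router = [[0.1, 0.2], [0.4, 0.3], [0.2, 0.9], [0.8, 0.1]]
y, chosen = moe_layer([1.0, 2.0], router)
```

With top-2 routing, each token touches only 2 of the 4 experts: total parameters grow with the expert count while per-token compute stays roughly constant, which is the efficiency property these posts attribute to MoE.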
  • 🚨 NVIDIA CUDA 13.1 brings the largest update to the #CUDA platform in 20 years. Discover the new features and updates for improving performance and driving #acceleratedcomputing, including:
    ✅ CUDA Tile: future-proof kernels by abstracting Tensor Cores.
    ✅ Performance wins: up to 4x speedup with cuBLAS Grouped GEMM and 2x in cuSOLVER on Blackwell.
    ✅ Green Contexts: finer GPU resource control via the Runtime API.
    ✅ New tooling: Nsight Compute now profiles CUDA Tile kernels.
    ✅ New CUDA Programming Guide: everything CUDA for novices and experts.
    🔗 Download the toolkit and see the next era of GPU programming: https://nvda.ws/4pljv5w

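To illustrate what a grouped GEMM computes, here is a minimal plain-Python sketch of the semantics: many independent matrix multiplies, each with its own shapes, submitted as one batch. The shapes and values below are made up for illustration; cuBLAS executes the whole group in fused GPU launches, which this toy loop does not attempt to model.

```python
# Illustrative sketch of grouped GEMM semantics: a batch of independent
# matrix multiplies of varying shapes handled by a single call.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def grouped_gemm(pairs):
    # Each (A, B) pair may have a different shape; the group is one API call.
    return [matmul(a, b) for a, b in pairs]

group = [
    ([[1, 2]], [[3], [4]]),                 # 1x2 @ 2x1 -> 1x1
    ([[1, 0], [0, 1]], [[5, 6], [7, 8]]),   # 2x2 @ 2x2 (identity @ B -> B)
]
results = grouped_gemm(group)
```

Grouping matters on GPUs because launching many small GEMMs one at a time leaves the device underutilized; batching them into one call amortizes launch overhead across the group.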
  • NVIDIA Data Center reposted this

    Latent Labs:

    Following our partnership announcement at NVIDIA GTC earlier this summer, we had a chance to test NVIDIA's new DGX Lepton Cloud. We found Lepton's ability to unify different cloud providers into a single platform particularly useful. During our evaluation, the plug-and-play setup and intuitive log surfacing let our team focus on what matters—advancing AI for biological applications. When training and evaluating dozens of model variants for Latent-X, multi-node GPU scaling meant minimal downtime and strong throughput. We went from zero to training in hours, not days. DGX Lepton Cloud helped us iterate faster and we're thankful to NVIDIA for supporting our research. 🖥️ Read more on the partnership announcement here: https://lnkd.in/et8TDYwT

  • 🤝 Amazon Web Services (AWS) and NVIDIA have been collaborating for over 15 years, with extreme co-design across software, silicon, and the entire AI infrastructure. Announced at #AWSreInvent, AWS and NVIDIA are boosting AI inference performance for frontier mixture-of-experts models through:
    ✅ AWS support for NVIDIA NVLink Fusion to deploy Trainium4 chips, Graviton CPUs, and the AWS Nitro System.
    ✅ Amazon EC2 P6e-GB300 UltraServers, powered by NVIDIA GB300 NVL72 systems.
    Hear from Dion Harris, Sr. Director of HPC and AI Infrastructure Solutions at NVIDIA.
    🔗 Read the announcement: https://nvda.ws/48TotRb

  • 🎉 We are expanding our collaboration with Palantir Technologies to launch Chain Reaction, which will accelerate NVIDIA AI infrastructure installations across the U.S. By using AI to streamline complex supply chains, we will support gigawatt-scale #AIfactory buildouts across power generation, power distribution, construction, and data center operations. Learn more now ⤵️

    Palantir Technologies:

    The bottleneck to AI innovation is no longer algorithms; it is power and compute. America is at an inflection point in the energy infrastructure buildout, and it requires software built for an entirely different scale. Today, alongside NVIDIA and CenterPoint Energy, we are launching Chain Reaction to address this directly by accelerating the AI buildout with energy producers, power distributors, data centers and infrastructure builders.

  • AI has three critical phases: training, post-training, and inference. Each has different system demands, and meeting them efficiently requires full-stack co-design across compute, networking, and software. In this discussion, Alex Divinsky and Dion Harris break down why #inference is becoming the core economic engine of the #AIfactory, how disaggregated serving is shaping modern model performance, and how system-level optimization is delivering 10x performance gains for large-scale inference architectures like mixture-of-experts (MoE).

  • 📣 NVIDIA NVLink is the most widely deployed scale-up networking solution, unlocking incredible performance for AI training, inference, and reasoning workloads. Fifth-generation NVLink and NVLink Switch in NVIDIA GB200 NVL72 help deliver 10x the performance and 10x the revenue of the previous-generation NVIDIA H200 on mixture-of-experts LLMs. The sixth generation will offer 3.6 TB/s of per-GPU bandwidth and 260 TB/s of total scale-up bandwidth. Hyperscalers such as Amazon Web Services (AWS) can deploy custom ASICs via NVIDIA NVLink Fusion, and AWS announced plans to integrate NVLink Fusion in its next-generation Trainium4 deployment to tackle demanding AI workloads. #NVLinkFusion #NVIDIABlackwell

