SuperX Launches XN9160-B300 AI Server with NVIDIA for Next-Gen Compute
SuperX introduces the XN9160-B300 AI server featuring eight NVIDIA Blackwell B300 GPUs, unified memory, and high-performance interconnects. Designed for AI training, inference, and HPC across data centers and research environments.

SuperX AI Technology Limited (NASDAQ: SUPX) has announced its latest flagship, the SuperX XN9160-B300 AI Server, equipped with eight NVIDIA Blackwell B300 GPUs to deliver peak performance for AI training, inference, and high-performance computing (HPC) workloads.

Designed for scalability, efficiency, and modularity, the XN9160-B300 is built to address the demands of data centers, AI factories, and scientific research environments.

Key Specifications & Architecture

  • Hardware & Chassis: The AI server is housed in an 8U form factor and packs dual Intel Xeon 6 CPUs, 32 DDR5 memory slots, and high-speed networking.

  • GPU & Memory: It integrates NVIDIA’s HGX B300 module with eight B300 GPUs, providing 2,304 GB of unified HBM3E memory (288 GB per GPU), reducing the need for memory offloading and enabling training and inference of very large models.

  • Interconnect & Networking: Connectivity includes 8 × 800 Gb/s InfiniBand or dual 400 Gb/s Ethernet, plus 5th-generation NVLink, ensuring ultra-low latency and high throughput for distributed workloads.

  • Performance Gains: NVIDIA’s Blackwell Ultra architecture delivers roughly 50% more NVFP4 compute throughput and roughly 50% more HBM capacity than the previous-generation Blackwell GPUs.
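The memory figures above can be sanity-checked with some back-of-envelope arithmetic. The sketch below uses the article's per-GPU capacity; the 1-trillion-parameter model and the 4-bits-per-weight (NVFP4) storage assumption are ours, purely for illustration:

```python
# Back-of-envelope check of the XN9160-B300 memory figures cited in the article.
GPUS = 8
HBM3E_PER_GPU_GB = 288                # per-GPU HBM3E capacity cited for the B300

total_gb = GPUS * HBM3E_PER_GPU_GB
print(f"Total unified HBM3E: {total_gb} GB")   # 2304 GB, matching the 2,304 GB figure

# Hypothetical example (our assumption, not from the article): a 1-trillion-parameter
# model with weights stored at 4 bits (0.5 bytes) per parameter, as with NVFP4.
params = 1_000_000_000_000
weight_gb = params * 0.5 / 1e9
print(f"1T params at 4 bits/param ≈ {weight_gb:.0f} GB of weights")
print(f"Fits in unified memory: {weight_gb < total_gb}")
```

On these assumptions, the weights alone of a 1T-parameter model occupy about 500 GB, comfortably inside the 2,304 GB pool, which is the kind of headroom the "large model training and inference" claim refers to.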

Use Cases & Target Markets

SuperX positions the XN9160-B300 server for a wide range of high-demand applications, such as:

  • AI model training & inference: Particularly for foundation models, multimodal systems, and long-context models

  • Scientific & HPC work: Climate modeling, genomics, physics simulations, and large-scale research

  • Enterprise & financial analytics: Real-time risk modeling, quantitative simulations, and data-intensive workflows

  • Edge & data center transformation: Building AI “superpods” or next-gen compute clusters

Why This Matters

  • Pushing AI infrastructure forward: This server marks a step toward hardware that can support ever-larger models with fewer bottlenecks.

  • Efficiency under load: With memory unified across GPUs and high-speed interconnects, performance and scaling become more seamless.

  • Modular & future proof: The design supports upgrades, maintainability, and integration into evolving AI data center ecosystems.


