ASUS just rolled out an update to its top-tier AI server, the ESC A8A-E12U, adding support for AMD’s latest Instinct MI350 series GPUs. That means businesses, research labs, and cloud service providers can now tap into next-gen performance for AI and HPC tasks without needing to overhaul their current infrastructure.

ASUS ESC A8A-E12U with AMD Instinct MI350 Series

The AMD Instinct MI350 Series GPUs are built on the 4th Gen AMD CDNA architecture and pack 288GB of HBM3E memory with up to 8TB/s of memory bandwidth per GPU, letting them handle larger AI models and simulations more efficiently than the previous generation. They also support low-precision compute formats such as FP4 and FP6, which boost throughput for machine learning inference and generative AI workloads. And because they are drop-in compatible with systems built around the earlier MI300 Series, customers can upgrade without a complete rebuild.
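To put those numbers in perspective, here is a rough back-of-the-envelope sketch (not an ASUS or AMD figure) of how much memory a dense model's weights alone occupy at different precisions, and whether they fit inside a single GPU's 288GB of HBM3E. The model sizes are hypothetical, and real deployments also need room for KV cache, activations, and framework overhead.

```python
# Back-of-the-envelope: weights-only footprint of a dense model at different
# precisions, compared with the 288GB of HBM3E on a single MI350 Series GPU.
# Illustrative arithmetic only -- real workloads need extra memory for
# KV cache, activations, and framework overhead.

HBM_PER_GPU_GB = 288

def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB for a dense model."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for model_b in (70, 180, 405):  # hypothetical model sizes (billions of parameters)
    for fmt, bits in (("FP16", 16), ("FP8", 8), ("FP6", 6), ("FP4", 4)):
        size = weights_gb(model_b, bits)
        verdict = "fits on one GPU" if size <= HBM_PER_GPU_GB else "spans multiple GPUs"
        print(f"{model_b}B @ {fmt}: {size:7.1f} GB -> {verdict}")
```

The pattern is the point: dropping from FP16 to FP4 cuts the footprint by 4x, which is why the new low-precision formats pair so well with the larger HBM3E capacity.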

When paired with dual AMD EPYC 9005 processors, the ESC A8A-E12U takes full advantage of the MI350's architecture. It's especially well-suited for workloads like training large language models, fine-tuning generative AI models, and running scientific simulations. With that much memory capacity and bandwidth per GPU, large models fit on fewer accelerators, which cuts power draw, simplifies scaling, and lowers the total cost of ownership.
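As a quick illustration of the "fewer GPUs" argument, the sketch below estimates the minimum GPU count needed just to hold a model's training state, using the common rule of thumb of roughly 16 bytes per parameter for mixed-precision training with Adam. The model sizes are assumptions for illustration, not quoted benchmarks for this server.

```python
import math

# Rule-of-thumb memory for mixed-precision training with Adam:
# ~16 bytes/parameter (FP16 weights + FP16 gradients + FP32 master weights
# + two FP32 optimizer moments), before counting activations.
# All figures here are illustrative assumptions, not measured numbers.

BYTES_PER_PARAM_TRAINING = 16
HBM_PER_GPU_GB = 288

def min_gpus_for_training(params_billions: float) -> int:
    """Minimum GPUs needed to hold weights + optimizer state, ignoring activations."""
    total_gb = params_billions * 1e9 * BYTES_PER_PARAM_TRAINING / 1e9
    return math.ceil(total_gb / HBM_PER_GPU_GB)

for model_b in (13, 70, 180):  # hypothetical model sizes (billions of parameters)
    print(f"{model_b}B model: at least {min_gpus_for_training(model_b)} x 288GB GPUs "
          f"for weights and optimizer state")
```

More memory per GPU shifts those minimums down, which is where the power, scaling, and TCO savings come from.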

On top of that, the MI350 Series includes a strong suite of security features, including Secure Boot, DICE attestation, encrypted GPU-to-GPU communication, and SR-IOV support, making it a solid fit for industries that need secure, multi-tenant environments, such as finance, healthcare, and government.
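For context on the SR-IOV piece, the snippet below is a minimal sketch of the generic Linux sysfs interface an administrator might use to inspect and enable virtual functions on an SR-IOV-capable device. The PCI address is a placeholder, and real MI350 partitioning for multi-tenant use goes through AMD's virtualization tooling and the host hypervisor rather than this bare-bones example.

```python
from pathlib import Path

# Minimal sketch of the generic Linux SR-IOV sysfs interface, for illustration
# only. The PCI address is a hypothetical placeholder; actual GPU partitioning
# is handled by the vendor's virtualization stack and the hypervisor.

PCI_ADDR = "0000:c1:00.0"  # placeholder physical-function address
sriov_dir = Path(f"/sys/bus/pci/devices/{PCI_ADDR}")

# How many virtual functions the device advertises
total_vfs = int((sriov_dir / "sriov_totalvfs").read_text())
print(f"Device {PCI_ADDR} supports up to {total_vfs} virtual functions")

# Enabling N virtual functions requires root and an SR-IOV-capable device:
# (sriov_dir / "sriov_numvfs").write_text(str(min(4, total_vfs)))
```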
