ThinkSystem SR685a V3
Many businesses are still searching for ways to put generative AI to work, even though ChatGPT sparked broad interest in the field some 20 months ago. Robert Daigle of Lenovo says that utilising AI can be difficult when dealing with large amounts of data: expensive GPU deployments and sizable language models are often required. But that is only one possible path. According to Robert, some businesses need help even knowing where to begin.
Although Lenovo is well known worldwide as a dominant producer of personal and business systems, the multinational also offers AI services and solutions to companies across a variety of industries. Robert heads Lenovo’s worldwide AI division, which helps customers set objectives, solve problems, and create infrastructure that maximises the benefits of AI.
AMD and Lenovo: Appropriately sized AI for businesses
Robert stated, “You don’t need a GPU for everything. Not every use case requires the highest-performance interconnected accelerators. If you’re attempting to train a model with more than 100 billion parameters from scratch, that’s excellent, and it’s a wise decision to have that much computing power in a system with networked GPUs. However, many commercial clients will only be considering inferencing and perhaps some fine-tuning, and there is also a trend towards smaller language models.”
According to Robert, CPUs and lightweight accelerators are more than sufficient for inferencing and smaller language models. Giving customers additional choice and freedom is therefore a crucial component of Lenovo’s “AI for All” agenda, and Lenovo does this by offering a broad selection of both hardware and software.
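As a rough illustration of that point, the sketch below runs inference for a small language model entirely on CPU. It assumes the Hugging Face transformers library is available, and the model name is merely an example of a smaller model, not a Lenovo recommendation.

```python
# Minimal sketch: CPU-only inference with a small language model.
# Assumes the Hugging Face "transformers" library is installed; the model
# name is an illustrative example of a smaller LLM, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-2"  # hypothetical choice of "smaller" model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # loads on CPU by default

prompt = "Summarise why right-sized infrastructure matters for enterprise AI:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```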
AMD’s Instinct MI300X GPUs
In April, Lenovo unveiled a line of HCI appliances and servers that handle “compute-intensive workloads” and hybrid, AI-centric infrastructure. Among these is the ThinkSystem SR685a V3, powered by AMD Instinct MI300X GPUs (with other GPU options available) and 4th Generation AMD EPYC processors. The system is designed to work with both public and on-premises AI cloud services.
Lenovo also helps customers address security, bias in AI, and environmental protection. To support ethical and responsible AI, Lenovo shares a multi-step assessment procedure that was originally developed for internal use.
Rack server ThinkSystem SR685a V3

- Designed with Compute-Heavy AI in Mind
- Form Factor: Rack server (8U/2S)
- Processor: Two 4th Generation AMD EPYC Processors
- Supported GPUs: 8x NVIDIA H100/H200/B100 or 8x AMD Instinct MI300X
- GPU interconnect: NVIDIA NVLink or AMD Infinity Fabric for the fastest GPU-to-GPU transfers
- Memory: Up to 24x TruDDR5 RDIMM slots
- Expansion Slots: Up to 10x PCIe Gen5 x16 adapters
On the subject of energy, Robert said that businesses are understandably concerned about electricity usage, noting he was astounded that some data centres consume as much electricity as small nations. For businesses running LLMs, he suggested energy-efficient processors, such as AMD EPYC CPUs, and possibly liquid cooling as answers.
ThinkSystem SR685a V3 Features
Boosted Processing for AI and HPC
The ThinkSystem SR685a V3 is an 8U, two-socket rack server designed for demanding AI and HPC tasks. Built on industry-leading 4th Generation AMD EPYC Processors, it is accelerated for intensive computation, supports 8x of the newest GPUs, and interconnects them at the fastest transfer rates via AMD’s Infinity Fabric or NVIDIA’s NVLink. With this compute capability, it can handle tasks such as scientific research, financial technology, modelling, training, and rendering.
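To make the multi-GPU training scenario concrete, here is a minimal sketch of single-node data-parallel training across all eight accelerators with PyTorch, launched via `torchrun --nproc_per_node=8 train.py`. The model, data, and hyperparameters are placeholders rather than a Lenovo-provided recipe; the collective communication library (NCCL on NVIDIA, RCCL on ROCm) is what actually rides the NVLink or Infinity Fabric links.

```python
# Minimal sketch: single-node distributed data-parallel training on 8 GPUs.
# Launch with: torchrun --nproc_per_node=8 train.py
# Model and data are placeholders; gradients are all-reduced over the GPU fabric.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")           # maps to RCCL on ROCm builds
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = torch.nn.Linear(1024, 1024).to(device)    # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                                # placeholder training loop
        x = torch.randn(32, 1024, device=device)      # placeholder batch
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                                # all-reduce across the 8 GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```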
Choice and Adaptability
The open-architecture design of the ThinkSystem SR685a V3 provides a variety of choices, allowing it to be tailored to specific workload requirements.
You’ll find everything you need, whether you decide to use NVIDIA H100/H200/B100 GPUs or the MI300X to run AMD on AMD; whichever suits you best will depend on availability and your workload requirements.
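Because ROCm builds of PyTorch expose AMD Instinct GPUs through the same `torch.cuda` API used for NVIDIA devices, a short check like the sketch below reports whichever eight accelerators are installed without vendor-specific code; the printed names and memory sizes will naturally differ between the two configurations.

```python
# Minimal sketch: confirm all eight accelerators are visible to PyTorch.
# Works on both CUDA and ROCm builds, since ROCm reuses the torch.cuda API.
import torch

count = torch.cuda.device_count()
print(f"Visible GPUs: {count}")
for i in range(count):
    props = torch.cuda.get_device_properties(i)
    print(f"  GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
```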
Superior User Experience
With the ThinkSystem SR685a V3, Lenovo goes above and beyond the norm to provide a premium user experience, incorporating easy systems administration, robust power delivery, and thermals ready for next-generation GPUs.

- With power supplies that provide complete N+N redundancy without throttling, this workhorse offers the highest level of resilience.
- Air-cooled with N+1 hot-swap fans, the system has enough thermal headroom to support both current and next-generation GPUs with higher power draw.
- Lenovo XClarity systems-management software comes pre-installed on the ThinkSystem SR685a V3 to facilitate deployment and management (see the telemetry sketch after this list).
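As a rough illustration of out-of-band monitoring, the sketch below polls generic DMTF Redfish endpoints for power and thermal telemetry. The BMC address, credentials, and property names are assumptions drawn from the standard Redfish schema rather than from Lenovo’s XCC2 documentation.

```python
# Minimal sketch: read power and fan telemetry from the server's BMC over the
# standard DMTF Redfish API. Address, credentials, and property names are
# placeholders/assumptions, not values taken from Lenovo documentation.
import requests

BMC = "https://192.0.2.10"          # placeholder BMC (XCC2) address
AUTH = ("USERID", "PASSW0RD")       # placeholder credentials

def redfish_get(path):
    resp = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

# First chassis in the standard Redfish chassis collection
chassis = redfish_get("/redfish/v1/Chassis")["Members"][0]["@odata.id"]
power = redfish_get(f"{chassis}/Power")
thermal = redfish_get(f"{chassis}/Thermal")

for supply in power.get("PowerSupplies", []):
    print(supply.get("Name"), supply.get("PowerInputWatts"))
for fan in thermal.get("Fans", []):
    print(fan.get("Name"), fan.get("Reading"), fan.get("ReadingUnits"))
```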
ThinkSystem SR685a V3 Specifications
Feature | Specification
---|---
Form Factor | 8U rack
Processor | 2x 4th Generation AMD EPYC Processors, up to 400W
Memory | Up to 3TB using 24x DDR5 DIMMs at maximum frequency
GPU | 8x high-performance GPUs: 8x AMD Instinct MI300X with Infinity Fabric interconnect at 896 GB/s, or 8x NVIDIA H100/H200/B100 with NVLink interconnect at 900 GB/s
I/O Expansion | Up to 10x PCIe Gen5 x16 FHHL adapters (8 front, 2 rear): 8x in front connected to a PCIe switch for GPU connectivity; 2x PCIe, or 1x PCIe + 1x OCP 3.0, in the rear connected to the CPUs
Storage | Up to 16x 2.5″ hot-swap NVMe SSDs; up to 2x M.2 for boot (RAID support)
Power | Up to 8x hot-swap power supplies, allowing full N+N redundancy
Cooling | Air-cooled with N+1 hot-swap fan solution
Management | XClarity Controller 2 (XCC2), which provides advanced service-processor control, monitoring, and alerting functions. XCC2 consolidates the service-processor functionality, super I/O, video controller, and remote-presence capabilities into a single chip on the server system board
OS Support | RHEL, Ubuntu, AlmaLinux, Rocky Linux, ESXi