Chipmaker Ampere Computing has joined forces with Qualcomm to develop a server solution designed specifically for artificial intelligence (AI) applications. The collaboration pairs Ampere’s Arm-based central processing units (CPUs) with Qualcomm’s Cloud AI 100 Ultra accelerator chips.

The target market for this offering is AI inference, which involves running pre-trained AI models to generate predictions or classifications based on new data. This is in contrast to AI training, the computationally intensive process of creating those models in the first place.
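To make the distinction concrete, here is a minimal, hypothetical sketch in plain NumPy (unrelated to Ampere’s or Qualcomm’s actual software stacks): training iteratively updates a model’s weights, while inference is a single forward pass through those frozen weights for each new input.

```python
import numpy as np

# Toy logistic-regression "model" -- illustrative only.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))               # training data
y_train = (X_train.sum(axis=1) > 0).astype(float) # training labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Training: repeatedly adjust weights (compute-intensive, done once) ---
w = np.zeros(3)
for _ in range(500):
    preds = sigmoid(X_train @ w)
    grad = X_train.T @ (preds - y_train) / len(y_train)
    w -= 0.1 * grad                               # gradient-descent update

# --- Inference: one forward pass with frozen weights (done per request) ---
x_new = rng.normal(size=3)                        # new, unseen data point
prediction = sigmoid(x_new @ w)                   # no weight updates here
print(f"P(class=1) = {prediction:.3f}")
```

In production, that forward pass may run millions of times a day, which is why inference efficiency, rather than raw training throughput, tends to drive the operating costs of serving AI models.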


Traditionally, Nvidia’s graphics processing units (GPUs) have dominated the AI inference market. Ampere argues, however, that its Altra CPUs, designed specifically for cloud and AI workloads, can offer comparable performance at lower power consumption. By pairing these CPUs with Qualcomm’s specialized AI accelerators, the two companies aim to deliver an even more efficient and scalable solution for running large language models and other demanding AI applications.

This partnership is significant for a few reasons. Firstly, it highlights the growing importance of Arm-based processors in the data center, a market traditionally dominated by x86 chips from Intel and AMD. Secondly, it demonstrates a collaborative approach between two major chipmakers to address the specific needs of AI workloads. Finally, the solution promises greater efficiency, potentially reducing the operational costs of running large AI models.

While the specifics of the server design and its performance benchmarks have yet to be revealed, the announcement signals a new contender in the AI server market. With demand for efficient AI inference rising, Ampere and Qualcomm’s collaboration has the potential to shake up the established players in this rapidly evolving space.
