Intel has lifted the veil on a second generation of Gaudi accelerators that could drastically reduce the time needed to train large-scale AI models.
Announced at Intel Vision 2022 in Dallas, the Gaudi 2 processors are built on a 7nm process, feature 24 onboard 100GbE RoCE ports, and pack the most memory of any accelerator on the market (96GB of HBM2e).
The new processors are a product of Habana Labs, the Israeli company acquired by Intel in 2019, and are designed for servers dedicated to deep learning workloads.
Training AI models
In recent years, a number of large-scale natural language processing (NLP) and computer vision models have emerged that offer far superior performance to previous entries in the respective disciplines.
The problem is that training these multi-billion parameter models is incredibly computationally intensive, and therefore expensive and time-consuming, a limiting factor in the development of the technology.
However, with the new Gaudi 2 accelerators, the cost and time required to develop new sophisticated AI models will be significantly reduced, according to Intel.
According to Eitan Medina, COO at Habana Labs, price/performance is a key factor for customers, and was therefore a priority during the development of the second-generation accelerators.
Benchmarks presented at Intel Vision suggest that Gaudi 2 processors deliver roughly twice the training throughput of Nvidia’s A100 GPU on popular NLP and computer vision workloads (BERT and ResNet-50).
At the same time, the new Gaudi chips are expected to deliver cost savings of around 40% on both types of workloads, again compared to A100 GPUs.
“Intel is advancing AI and value for data center customers with Habana accelerators, which are the optimal solution for deep learning dedicated servers,” Medina said. “We think this category will be extremely important.”
Gaudi 2 processors are available to customers immediately and, as with the previous generation, are likely to power AWS cloud instances further down the line.