Israeli startup Habana Labs, a developer of AI deep learning processors for data centers, has introduced its second-generation AI processor, sharpening its rivalry with Nvidia. Intel acquired Habana Labs for $2 billion in 2019.
“These new processors fill a market void by offering clients high-performance, high-efficiency deep learning compute options for both training workloads and inference deployments in the data center, while lowering the AI entry barrier for all businesses,” Intel said.
The Gaudi2 can process 5,425 images per second, while Nvidia’s A100 handles 2,930 per second.
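Taken at face value, those throughput figures imply roughly a 1.85× advantage for Gaudi2 on this benchmark; a quick back-of-the-envelope check:

```python
# Reported images-per-second throughput figures from the article.
gaudi2_ips = 5425
a100_ips = 2930

# Relative speedup of Gaudi2 over the A100 on this benchmark.
speedup = gaudi2_ips / a100_ips
print(f"Gaudi2 advantage: {speedup:.2f}x")  # → Gaudi2 advantage: 1.85x
```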
According to the company, Nvidia dominates the market for AI deep learning processors, and the Gaudi2 is Intel’s bid to become more competitive as the data center market grows by 36.7% annually.
Habana Labs said the new processor is already available to customers, and supply chain disruptions have not affected distribution because Intel is able to supply the necessary chips.
The move from the original Gaudi’s 16 nm process to 7 nm gives Gaudi2 considerable gains in compute, memory, and networking. Mobileye, a developer of autonomous-driving technology and another Israeli Intel subsidiary, has already placed orders for the Gaudi2 processor.
What Customers and Partners are Saying:
Mobileye Vice President of Research and Development Gaby Hayon said: “As a world leader in automotive and driver-assistance systems, training cutting-edge deep learning models for tasks such as object detection and segmentation that enable vehicles to sense and understand their surroundings is mission-critical to Mobileye’s business and vision.”
“As training such models is time-consuming and costly, multiple teams across Mobileye have chosen to use Gaudi-accelerated training machines, either on Amazon EC2 DL1 instances or on-prem. Those teams consistently see significant cost savings relative to existing GPU-based instances across model types, enabling them to achieve much better time-to-market for existing models, or to train much larger and more complex models aimed at exploiting the advantages of the Gaudi architecture. We’re excited to see Gaudi2’s leap in performance, as our industry depends on the ability to push the boundaries with large-scale high-performance deep learning training accelerators.”