Intel Brings Its First Artificial Intelligence-Powered Processor “Nervana”

Semiconductor maker Intel has introduced its first high-performance neural network processor for artificial intelligence, named Nervana. Showcased at the "Hot Chips 2019" conference at Stanford University in California, the processor comes in two variants: the Intel Nervana NNP-T for training and the Intel Nervana NNP-I for inference.

The AI-driven NNP chips are meant to meet rising demand for high-speed computing in fields such as AI and robotics. According to Intel, the Nervana NNP-T is designed from the ground up to train deep learning models at scale, pushing the boundaries of deep learning training. The processor is built to balance two key real-world considerations: training a network as fast as possible and doing so within a given power budget. Intel says that social networking giant Facebook has already started using the chip.

Nervana is built with flexibility in mind, striking a balance among compute, communication, and memory, the company said. The Intel Nervana NNP-I (code-named Springhill) is purpose-built for inference and intended to speed up deep learning deployment at scale. It introduces specialised deep learning acceleration, leverages Intel's 10nm process technology with Ice Lake cores to deliver industry-leading performance, and communicates with the motherboard over M.2, a slot typically used for solid-state storage.
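Intel did not detail the NNP-I's link configuration in the announcement, but an M.2 card typically rides a four-lane PCIe 3.0 connection. As a rough sanity check on whether such a host link can keep an inference accelerator fed (the arithmetic and the assumed figures below are illustrative, not Intel's), this sketch estimates how many image-sized inputs per second fit through the link:

```python
# Back-of-the-envelope: can an M.2 host link keep the accelerator fed?
# Assumes a PCIe 3.0 x4 link (~985 MB/s usable per lane), the common
# M.2 configuration; Intel did not publish the NNP-I's lane count.
PCIE3_LANE_BYTES_PER_SEC = 985e6   # usable bandwidth per PCIe 3.0 lane
LANES = 4                          # typical M.2 slot wiring (assumption)

link_bw = PCIE3_LANE_BYTES_PER_SEC * LANES

# One 224x224 RGB image in FP32 (a common CNN input, chosen as an example).
image_bytes = 224 * 224 * 3 * 4

print(f"link bandwidth: {link_bw / 1e9:.2f} GB/s")
print(f"max images/s over the link: {link_bw / image_bytes:,.0f}")
```

At roughly 3.9 GB/s, the link comfortably moves several thousand such inputs per second, which is consistent with M.2 being a plausible interface for an inference (rather than training) part.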

Intended for large computing centers, the Nervana NNP-I was developed at Intel's facility in Haifa, Israel, and grew out of the company's $120 million investment in three Israeli AI startups, including NeuroBlade and Habana Labs, Intel said. It also ships with a capable software stack that supports all major deep learning frameworks. The main features of the Nervana NNP-I include:

- 12 Inference Compute Engines (ICE)
- Intel IA cores with AVX and VNNI support
- 4×32 and 2×64 LPDDR4x memory configurations
- Dynamic power management and FIVR technology
- 24 MB last-level cache (LLC) for fast inter-ICE and IA data sharing
- Hardware-based synchronisation for ICE-to-ICE communication
- 75 MB of total on-die SRAM
- 68 GB/s of DRAM bandwidth
- Two further generations already in planning/design
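The memory figures in the list above give a feel for the chip's compute-memory balance. As a back-of-the-envelope illustration (the arithmetic and the example model size are ours, not Intel's), the sketch below checks how often the quoted 68 GB/s of DRAM bandwidth could stream a weight set too large for the 75 MB of on-die SRAM:

```python
# Rough check using the figures Intel quoted for the NNP-I:
# 75 MB of on-die SRAM and 68 GB/s of DRAM bandwidth.
SRAM_BYTES = 75 * 1024**2    # 75 MB on-die SRAM
DRAM_BW = 68e9               # 68 GB/s DRAM bandwidth

# Example only: a 200 MB set of INT8 weights (not a published figure).
model_bytes = 200e6

if model_bytes <= SRAM_BYTES:
    print("weights fit in SRAM; DRAM traffic is activations only")
else:
    # Worst case: re-stream the full weight set from DRAM every pass.
    print(f"~{DRAM_BW / model_bytes:.0f} full weight reads per second")
```

Models that fit in the large SRAM avoid DRAM traffic for weights entirely, which is the kind of compute-memory balance the company emphasises.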

Naveen Rao, general manager of Intel's Artificial Intelligence Products Group, said: "In order to reach a future situation of 'AI everywhere', we have to deal with huge amounts of data generated and make sure organisations are equipped with what they need to make effective use of the data and process them where they are collected."

Intel also expects the AI chip to run alongside its Xeon-based servers at large companies, as the need for complex computation in AI increases in the years to come.
