Intel may not be ready to manufacture 7nm processors just yet, but it is showing off its artificial intelligence (AI) know-how with a pair of new AI processors. The American chipmaker detailed its Nervana neural network processors (NNPs) at the Hot Chips 2019 conference, explaining how they would come in handy for deep learning training and inference at scale in large data centers.
Intel has worked on two AI processors, namely the NNP-T (codenamed Spring Crest) and the NNP-I (codenamed Spring Hill), to make it possible for users to process data at scale. The former is built from the ground up to train a neural network as fast as possible while staying within a power budget. But we are more interested in the latter, as the Nervana NNP-I (Spring Hill) processor is designed to handle data center inference workloads.
Intel talks up the benefits of having a dedicated inference accelerator, i.e., the NNP-I, in a data center, saying it is easy to program, has short latencies, enables fast code porting, and supports all major deep learning frameworks. The Spring Hill processor is based on the 10-nanometer Ice Lake architecture, which should help it cope with heavy workloads while consuming minimal energy across the data center.
“Dedicated accelerators like the Intel Nervana NNPs are built from the ground up, with a focus on AI to provide customers the right intelligence at the right time,” says Intel in an official blog post. These chips were developed at the company’s facility in Haifa, Israel, following Intel’s investments in AI startups such as Habana Labs and NeuroBlade.
The first company to start using the new neural network processors is Facebook, which is understandable given the boatloads of data it has to process. Intel further adds that its new NNP chips will complement Xeon processors in meeting the growing demand for complex AI computations at big corporations.