Intel has announced a new wave of artificial intelligence hardware. Its Nervana Neural Network Processors, a new class of AI silicon, are designed to accelerate the development and deployment of AI systems from the cloud to the edge.
At a gathering of industry influencers, Intel demonstrated its new Nervana Neural Network Processors for Training (NNP-T1000) and Inference (NNP-I1000). These are Intel’s first purpose-built ASICs aimed at deep learning, offering the scale and efficiency demanded by cloud and data centre customers.
At the same event, Intel also revealed its next-generation Movidius Myriad Vision Processing Unit (VPU), aimed at edge media, computer vision and inference applications. Intel’s AI solutions are expected to generate over $3.5 billion in revenue in 2019, and these new products further bolster the company’s portfolio.
“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge.” – Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group.
Intel Nervana NNPs offer a full software stack, developed with open components and deep learning framework integration. The new chips are in production now and are being delivered to customers. The NNP-T balances compute, communication and memory to allow near-linear, energy-efficient scaling from small clusters right up to the largest pod supercomputers. The Intel Nervana NNP-I is a power- and budget-efficient solution ideal for running intense, multimodal inference at real-world scale.
KitGuru says: The addition of the Nervana Neural Network Processors to the company’s portfolio of AI solutions should boost deep learning training and AI inference performance across data centre and edge deployments for years to come.