Intel 14th Gen Meteor Lake Brings Dedicated NPU for Offline AI Processing


At its Innovation 2023 event, Intel announced the 14th Gen Meteor Lake processor lineup, which brings a wide variety of improvements and new features. While the Intel 4 process, a disaggregated architecture, and Intel Arc integrated graphics are exciting upgrades on their own, many users will appreciate the AI features built into Meteor Lake chips. Intel is bringing artificial intelligence to modern PCs through an all-new neural processing unit (NPU) in its 14th Gen Meteor Lake processors. But what is the NPU, and how will it help? Let’s talk about it below.

Intel Brings NPU to Client Chips for the First Time

While artificial intelligence is widely used online, running it in the cloud has limitations, the most prominent being high latency and connectivity issues. A Neural Processing Unit (NPU) is a dedicated block on the processor built specifically for AI tasks. So, instead of relying on the cloud, all AI-related processing is done on the device itself. Intel’s new NPU will be able to do exactly that.
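To make that concrete, here is a minimal sketch of what on-device inference looks like from the software side, using Intel’s OpenVINO toolkit (2023.1 or newer), which exposes the NPU as a compile target. The model path and input handling are placeholder assumptions, not Intel-published sample code:

```python
# Minimal sketch: run a model locally on the Meteor Lake NPU via OpenVINO.
# Assumes a static-shape OpenVINO IR model saved as "model.xml" (placeholder).
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")          # hypothetical model path
compiled = core.compile_model(model, "NPU")   # errors out if no NPU or driver

# Dummy input matching the model's first input shape
x = np.random.rand(*compiled.inputs[0].shape).astype(np.float32)
result = compiled(x)                          # inference happens on-device,
print(result[0].shape)                        # no cloud round-trip involved
```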

While Intel has been experimenting with AI for a while, this is the first time the company is bringing an NPU to its client silicon. The NPU on Meteor Lake chips is a dedicated low-power AI engine that can handle sustained AI workloads both offline and online.

Image Courtesy: Intel

This means that instead of relying on internet-based AI services to do the heavy lifting, you will be able to use the Intel NPU for the same jobs, such as AI image editing, audio separation, and more. Having an on-device NPU for AI processing and inference will undoubtedly bring a lot of advantages.

Since there is no network round-trip, users can expect lightning-fast processing and output. Furthermore, Intel’s NPU will help improve privacy and security, as data stays on the device under the chip’s security protocols. It is safe to say you will be able to use AI far more freely, and make it part of your daily workflow.
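As a rough illustration of that latency benefit, you can time local inference directly. This continues the earlier sketch and reuses its `compiled` model and `x` input; a cloud-based service would add a network round-trip on top of every request:

```python
# Rough latency check for on-device inference (reuses `compiled` and `x`
# from the earlier sketch). There is no network hop anywhere in this loop.
import time

samples = []
for _ in range(50):
    t0 = time.perf_counter()
    compiled(x)
    samples.append(time.perf_counter() - t0)
print(f"median on-device latency: {sorted(samples)[25] * 1000:.2f} ms")
```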

Intel NPU Can Sustain AI Workloads While Managing Power Dynamically

The Intel NPU is divided into two main parts, each with its own job: Device Management and Compute Management. The former supports Microsoft’s new driver model, called the Microsoft Compute Driver Model (MCDM), which is vital for facilitating AI processing on the device. Thanks to this driver model, users will also be able to see the Intel NPU in the Windows Task Manager alongside the CPU and GPU.
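Task Manager visibility aside, applications can check that the driver stack has enumerated the NPU. A small sketch, again assuming OpenVINO, which queries the same device list the OS exposes:

```python
# Sketch: list the compute devices the runtime can see. On a Meteor Lake
# system with the NPU driver installed, 'NPU' should appear alongside
# 'CPU' and 'GPU'.
import openvino as ov

core = ov.Core()
for device in core.available_devices:
    print(device, "->", core.get_property(device, "FULL_DEVICE_NAME"))
```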

Image Courtesy: Intel

Intel has been working on this driver model with Microsoft for the last six years. That collaboration has allowed the company to ensure the NPU works through tasks efficiently while dynamically managing power. In practice, the NPU can quickly drop into a low-power state to sustain long-running workloads and ramp back up when needed.
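Applications cannot drive those power-state transitions directly; the driver handles them. What an app can do, as a hedged example using OpenVINO’s standard performance hints, is declare whether a workload is latency-sensitive or a sustained batch job, and let the driver manage power accordingly:

```python
# Sketch: standard OpenVINO performance hints. How the NPU driver maps
# these to power states is its own business; the hints are just a signal.
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical model path

# Latency-oriented compile for interactive, bursty use...
interactive = core.compile_model(model, "NPU", {"PERFORMANCE_HINT": "LATENCY"})
# ...or throughput-oriented for sustained background workloads.
sustained = core.compile_model(model, "NPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
```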

Image Courtesy: Intel

The Intel Meteor Lake NPU employs a multi-engine architecture, with two neural compute engines built inside it. These can work on two different workloads simultaneously or team up on the same one. Within each neural compute engine lies the Inference Pipeline, the core component that carries out the NPU’s actual computation.
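User code never addresses the two engines individually; the driver schedules work across them. Still, a sketch of two asynchronous inference requests in flight at once shows the kind of parallelism the dual-engine design is built for (same hypothetical model as before):

```python
# Sketch: two inference requests submitted concurrently. Scheduling across
# the NPU's two neural compute engines is handled by the driver.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")          # hypothetical model path
compiled = core.compile_model(model, "NPU")

req_a = compiled.create_infer_request()
req_b = compiled.create_infer_request()

x = np.random.rand(*compiled.inputs[0].shape).astype(np.float32)
req_a.start_async({0: x})                     # two workloads submitted...
req_b.start_async({0: x})
req_a.wait()                                  # ...and completed independently
req_b.wait()
print(req_a.get_output_tensor().data.shape)
```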

Intel’s NPU Makes On-Device AI Possible

While the actual workings of the NPU go far deeper, users can expect a lot of firsts with the new Meteor Lake architecture. With on-device AI processing coming to Intel’s consumer chips, it will be exciting to see the range of applications it enables. Also, the presence of the Intel 14th Gen Meteor Lake NPU in the Task Manager, while it sounds simple, signals that this is hardware ordinary consumers can monitor and take advantage of. However, we will have to wait for the rollout to see how far we can push it.

Featured Image Courtesy: Intel
