Published 30 July 2025
Written by Elliott Lee-Hearn
If you’ve been keeping up with the latest in AI hardware, you’ve likely heard of NPUs, or neural processing units, which are increasingly found in edge AI devices. But according to Quadric CEO Veerbhan Kheterpal, NPUs as we know them aren’t built for the future. In a recent conversation with ipXchange, he introduced us to something entirely new: a general-purpose neural processing unit (GPNPU) that aims to outlast and outperform today’s task-specific accelerators.
The Problem with NPUs
The problem with most NPUs, Kheterpal says, is that they’re designed to support only a fixed set of models. That means a chip built for today’s AI landscape may be obsolete within a few years—or even months—as algorithms evolve. In contrast, Quadric’s general-purpose neural processing unit is reprogrammable and adaptable by design. It’s not locked to any specific family of models and can be updated via software, keeping pace with future developments in AI.
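To make that distinction concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the class names, operator names, and the install_kernel method are hypothetical, not Quadric’s API. A fixed-function accelerator exposes only the operators baked into the silicon, so a model that needs a newer operator simply cannot run on it, whereas a programmable core can pick up the missing kernel as a software update.

```python
import numpy as np


# Hypothetical fixed-function accelerator: it supports only the operators
# that were hard-wired into the silicon at design time.
class FixedFunctionNPU:
    SUPPORTED_OPS = {"conv2d", "relu", "maxpool"}

    def run(self, op_name, x):
        if op_name not in self.SUPPORTED_OPS:
            raise NotImplementedError(
                f"'{op_name}' is not in the chip's fixed operator set; "
                "running it needs new silicon or a slow host fallback."
            )
        return x  # stand-in for the real computation


# Hypothetical programmable core: new operators arrive as software kernels.
class ProgrammableNPU:
    def __init__(self):
        self.kernels = {
            "conv2d": lambda x: x,                      # placeholder kernel
            "relu": lambda x: np.maximum(x, 0),
        }

    def install_kernel(self, op_name, fn):
        """A firmware/SDK update adds support for a new operator."""
        self.kernels[op_name] = fn

    def run(self, op_name, x):
        return self.kernels[op_name](x)


# A newer model introduces an operator the fixed chip never anticipated.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


x = np.array([1.0, 2.0, 3.0])

gpnpu = ProgrammableNPU()
gpnpu.install_kernel("softmax", softmax)  # software update, same silicon
print(gpnpu.run("softmax", x))

try:
    FixedFunctionNPU().run("softmax", x)  # the fixed chip cannot keep up
except NotImplementedError as err:
    print(err)
```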
GPNPU vs NPU
So what makes this GPNPU architecture different? The difference between an NPU and a GPNPU is akin to the difference between an FPGA and a CPU: one is configured for a fixed function, the other is programmed in software. Quadric’s Chimera IP combines the best of both worlds: the programmability of a CPU and the parallel data processing of an AI-specific tensor core. By blending instruction-set programmability with efficient data movement, this general-purpose neural processing unit can handle full pipelines, from pre-processing to AI inference to post-processing, without offloading tasks to external components.
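As a rough illustration of what running the whole pipeline on one core means in practice, the sketch below writes pre-processing, inference, and post-processing as ordinary code for a single programmable target instead of bouncing intermediate buffers between a CPU, a DSP, and a fixed-function NPU. Again, this is a hypothetical sketch in plain Python, not Quadric’s Chimera SDK; the function names and shapes are invented for illustration.

```python
import numpy as np


def preprocess(frame):
    """Normalise the input on the same core; in a heterogeneous design this
    stage would typically run on a CPU or DSP and be copied to the NPU."""
    return (frame.astype(np.float32) / 255.0) - 0.5


def infer(tensor, weights):
    """Stand-in for the tensor-heavy inference stage (matrix multiply plus
    a non-linearity), the part a conventional NPU accelerates."""
    return np.maximum(tensor @ weights, 0)


def postprocess(logits):
    """Decode the result (pick the top class); another stage that is often
    pushed back to the host CPU in a multi-accelerator design."""
    return int(np.argmax(logits))


def pipeline(frame, weights):
    # On a GPNPU-style core, all three stages are one program: no
    # device-to-device copies, no synchronisation between accelerators.
    return postprocess(infer(preprocess(frame).reshape(1, -1), weights))


frame = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # toy image
weights = np.random.randn(64, 10).astype(np.float32)            # toy model
print("predicted class:", pipeline(frame, weights))
```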
This approach isn’t just about flexibility. It also simplifies system architecture by reducing the need for multiple accelerators (CPU + NPU + DSP + GPU). In many embedded AI workloads, especially those requiring evolving or multi-model inference, the GPNPU can streamline both software and hardware design. According to Quadric, that leads to higher performance, lower latency, and reduced total cost of ownership.
Target Applications
Key applications for this general-purpose neural processing unit include automotive systems, industrial vision, smart cameras, and edge computing devices with long life cycles. In these environments, the ability to push software updates over time is critical. Quadric is already working with customers who see the value in flexible IP that can support next-generation models—including whatever comes after transformers.
The Future of GPNPUs
With Series A and B funding already secured, Quadric is pushing toward wide adoption. Kheterpal sees the GPNPU as the inevitable next step in AI compute. “Everyone’s already been burned by hardcoded silicon,” he told us. “Now they’re looking for solutions that don’t expire in a year.”
If you’re building an AI system that needs to adapt and survive beyond today’s models, it may be time to rethink the NPU—and consider going general-purpose.