Published
19 May 2025
Written by Harry Forster
Swiss Army Knife Meets Neural Net: BrainChip’s New Era of Edge AI
At Embedded World 2025, BrainChip’s CTO Tony Dawe brought a showstopper: the company’s TENNs algorithm for low-power LLMs at the edge. And no, this isn’t another “AI buzzword salad.” It’s a real, tangible large language model running on an FPGA, answering questions without an internet connection, without a data centre, and without blowing through your power budget.
This is what generative AI looks like when it’s reimagined for the real world—and a wristwatch battery.
What Is TENNs, and Why Should You Care?
TENNs (short for “Temporal Event-based Neural Networks”) is BrainChip’s new universal neural network formulation. It’s lightweight, power-efficient, and adaptable across a broad range of AI workloads—from audio denoising to keyword spotting, automatic speech recognition (ASR), LLMs, and beyond. But here’s the magic: unlike traditional RNNs or transformers, TENNs can be trained using standard workflows on GPUs, then folded into an efficient recurrent form for ultra-low-power inference on BrainChip IP.
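To get an intuition for that “train parallel, infer recurrent” idea, here is a toy sketch (illustrative only, not BrainChip’s actual implementation or parameters): a temporal convolution whose kernel decays exponentially, k[j] = b·aʲ, is mathematically equivalent to the one-line recurrence s[t] = a·s[t−1] + b·x[t]. The same weights learned in the parallel, GPU-friendly convolutional view can therefore run on-device as a constant-memory recurrent update.

```python
import numpy as np

# Hypothetical learned parameters for the toy kernel k[j] = b * a**j.
a, b = 0.9, 0.5

rng = np.random.default_rng(0)
x = rng.standard_normal(64)  # a dummy input signal
T = len(x)

# Convolutional ("training-time") view: each output attends to the
# whole input history at once, which parallelises well on a GPU.
kernel = b * a ** np.arange(T)
y_conv = np.array([sum(kernel[j] * x[t - j] for j in range(t + 1))
                   for t in range(T)])

# Recurrent ("inference-time") view: a single scalar of state,
# O(1) memory and O(1) work per timestep -- the edge-friendly form.
s, y_rec = 0.0, []
for xt in x:
    s = a * s + b * xt
    y_rec.append(s)

# Both views produce identical outputs.
assert np.allclose(y_conv, np.array(y_rec))
```

The equivalence is why no retraining is needed when switching forms: the recurrence is just an algebraic refactoring of the same linear operator.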
The result? A chip that does the job of the cloud, on-device. And quietly.
LLMs Without the Cloud: The Real Gamechanger
In one of the most engaging demos at the show, BrainChip ran a local large language model on an FPGA—entirely disconnected. Ask a question, and it replies intelligently. Want to embed that into a social robot, a smart appliance, or a care assistant for the elderly? You can.
Tony likened it to “an artificial son”—ready to help, explain, interpret… without phoning home to Silicon Valley. This is intuitive machine interaction, without compromising privacy or introducing latency.
Why It Matters: From Toast to Trust
Sure, the toaster example was funny—but this technology opens serious doors. With ageing populations and stretched care systems, BrainChip’s platform could enable private, always-on, voice-guided support systems for independent living.
From consumer devices to social robots, from wearables to vehicles, the TENNs algorithm for low-power LLMs at the edge sets a new benchmark. It’s not just impressive—it’s necessary.