Running language AI off the cloud can be difficult, so how do we get to a point where you can ask a robot to bring you a beer while you’re watching TV? Well, Synaptics is paving the way with the SL1680…
In ipXchange’s final interview with Synaptics at Embedded World 2024, Eamon chats with Rajib for another demonstration of the new Astra edge-AI platform.
This time around, the Astra Machina board uses Synaptics’ SL1680 AIoT solution to run part of a Large Language Model (LLM) at the edge within Senary’s platform. This means that a query can be qualified – prechecked – before accessing the cloud server backend, which then relays the more complex commands to the small robot you see in front of the screen.
Though the demonstration is small in scale, the AI technology behind it illustrates key use cases like home automation and smart home assistants. The beauty of splitting the AI workload between the edge and the cloud, using Synaptics’ SL1680 and the Astra Machina kit, is that if a command is not recognised at the edge, there is no need to access the cloud server or log what actions have occurred.
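The edge-first flow described above can be sketched in a few lines. This is a purely illustrative Python mock-up, not code from the Synaptics Astra SDK: the intent list, function names, and return values are all hypothetical, standing in for whatever on-device model actually performs the precheck.

```python
# Hypothetical sketch of the edge-first query flow: a lightweight
# on-device check decides whether a command is even worth forwarding
# to the cloud backend. All names here are illustrative and are NOT
# part of the Synaptics Astra software.

KNOWN_INTENTS = {"fetch", "bring", "turn on", "turn off"}

def edge_precheck(query: str) -> bool:
    """Return True if the query looks like a supported command."""
    q = query.lower()
    return any(intent in q for intent in KNOWN_INTENTS)

def handle_query(query: str) -> str:
    if not edge_precheck(query):
        # Unrecognised at the edge: reject locally, so the cloud
        # server is never contacted and no action is logged there.
        return "rejected-at-edge"
    # Recognised: relay the more complex interpretation to the
    # cloud LLM backend (stubbed out in this sketch).
    return "forwarded-to-cloud"
```

In the demo, the edge model on the SL1680 plays the role of `edge_precheck`, while the heavier LLM reasoning that drives the robot stays in the cloud.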
This is all made possible by the capabilities of Synaptics’ SL1680 embedded IoT platform, which pairs 4x Arm Cortex-A73 processing cores running at 2.1 GHz with an 8-TOPS NPU.
Synaptics’ accompanying software also makes it easy to port AI models onto these devices, so learn more about this exciting AIoT platform by following the link to the board page below. There, you can also apply to evaluate the technology for your own commercial projects.
Keep designing!
Want to see more demonstrations of Synaptics’ Astra platform? Check out our other interviews from Embedded World 2024: