
50 TOPS at 5 W?! SiMa.ai's MLSoC is an AI beast!

Some manufacturers make processors. Some make machine learning accelerators. But not many put both into a single chip so that bandwidth is no longer an issue, and they certainly don’t do it with this level of power efficiency…

In another AI-related interview from Embedded World 2024, ipXchange chats with Manuel about SiMa.ai's MLSoC (Machine Learning System-on-Chip). This multi-core processing SoC features 4x Arm Cortex-A65 dual-threaded processors, Synopsys' EV74 computer vision processor, and SiMa.ai's own Machine Learning Accelerator, which enables up to 50 TOPS performance at 10 TOPS per watt!
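The headline figure and the efficiency figure are consistent with each other, as a quick back-of-the-envelope check shows:

```python
# Sanity check on the quoted specs: peak throughput divided by
# efficiency gives the implied power draw of the accelerator.
peak_tops = 50                 # peak ML throughput, TOPS
efficiency = 10                # quoted efficiency, TOPS per watt

power_watts = peak_tops / efficiency
print(power_watts)  # 5.0 — matching the "50 TOPS at 5 W" headline
```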

If you’ve been watching our other interviews from the show, you’ll quickly realise that SiMa.ai’s MLSoC is a very power-efficient solution when it comes to processing machine learning algorithms that will take your product to the next level. Here’s a quote from the team about how they see this innovative product:

“What is uniquely valuable about our device is that we can run an entire computer vision pipeline on our device and not just the ML component. Not only is this a huge integration and development time advantage, this greatly helps performance, as quite often the bottleneck in the application is not the ML. By being able to run the whole application pipeline, SiMa.ai is uniquely positioned to accelerate the entire computer vision data flow, not just the ML part. We have taken a software-centric approach to build our products that include both the MLSoC as well as the Palette Software Platform.”

SiMa.ai’s MLSoC is also the fastest AI solution at this level of performance, beating market leader NVIDIA twice over for edge use cases, without the need for active cooling – i.e. fanless operation for lowest maintenance!

Manuel also emphasises the ease of use of SiMa.ai’s Machine Learning SoC for developers who are only just getting started in the world of artificial intelligence. SiMa.ai’s open-source interpreter accepts neural networks from all frameworks and converts them to the binary representation that will run on the MLSoC.
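To give a feel for what "converting a network to a binary representation" means, here is a minimal, purely illustrative sketch — this is not SiMa.ai's actual toolchain, and the opcodes and byte layout are invented for the example — of flattening a framework-neutral layer list into a blob an accelerator loader could consume:

```python
import struct

# Invented opcode table for this illustration only.
OPCODES = {"conv2d": 1, "relu": 2, "dense": 3}

def compile_to_binary(layers):
    """Pack a model as: uint32 layer count, then (opcode, param_count)
    pairs of little-endian uint32s per layer."""
    blob = struct.pack("<I", len(layers))           # header: layer count
    for op, params in layers:
        blob += struct.pack("<II", OPCODES[op], params)
    return blob

# A toy three-layer network described independently of any ML framework.
model = [("conv2d", 9408), ("relu", 0), ("dense", 1000)]
binary = compile_to_binary(model)
print(len(binary))  # 4-byte header + 8 bytes per layer = 28
```

Real compilers do vastly more (quantisation, scheduling, memory planning), but the shape of the job — framework-neutral graph in, device-specific binary out — is the same.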

The demonstration on the screen shows the MLSoC using machine learning algorithms to analyse four video streams, each running at 25 frames per second and identifying cars. All functions – pre-processing, labelling, inferencing, box generation, post-processing etc. – are running completely on SiMa.ai’s device, without the requirement for cloud processing. The only exception to this is the video signal output, and as an edge AI solution, ML model training must still be done externally to the chip.
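The stages the demo runs on-chip can be sketched as a simple pipeline. This is an illustrative mock-up with dummy data, not SiMa.ai's API — the function names and the hard-coded detection are invented — but it shows how four 25 fps streams add up to 100 frames per second through the full pre-process → infer → box → post-process chain:

```python
# Each stage is modelled as a plain function over a dummy frame.
def pre_process(frame):
    return {"tensor": frame}                       # resize/normalise stand-in

def infer(data):
    return {**data, "label": "car", "score": 0.93} # fake detection result

def draw_boxes(result):
    return {**result, "box": (10, 20, 110, 220)}   # fake bounding box

def post_process(result):
    return f"{result['label']}@{result['box']}"    # final labelled output

def run_pipeline(frame):
    return post_process(draw_boxes(infer(pre_process(frame))))

# Four streams at 25 fps each -> 100 frames per simulated second.
streams = {f"cam{i}": [f"frame{n}" for n in range(25)] for i in range(4)}
results = [run_pipeline(f) for frames in streams.values() for f in frames]
print(len(results))  # 100
```

The point of the demo is that every one of these stages runs on the one chip, so no frame ever leaves the device for processing.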

That said, since all the data is processed in real time on the chip, where other solutions might only provide the AI acceleration or digital signal processing, bandwidth is not an issue for SiMa.ai’s MLSoC.

This is surely a very exciting solution for engineers looking to implement machine learning in the most streamlined way possible, so follow the link to the board page below, and apply to evaluate SiMa.ai’s MLSoC today!

Keep designing!

Need more AI accelerator content? Check out these other interviews from Embedded World 2024:

Machine vision with the Hailo-8 AI accelerator

Fanless industrial computers for 24/7 AI tasks

How to run vision AI on batteries with Alif MCUs

Hailo’s M.2 card does gen-AI at the edge at 3.5 W

Cadence DSP provides LLM and LVM AI at the edge

MLSoC – Machine Learning System-on-Chip

Want AI acceleration, standard processing, DSP, and more in a single, power-efficient chip?

Apply for the development board now
