Building embedded AI usually takes longer than it should. Models are trained in one environment, deployed in another, and often only properly tested once they are flashed to the board. The result: long iteration cycles, too much guesswork, and performance problems discovered only after deployment. What if AI could automate the toolchain for building AI models themselves?
This webinar shows how that’s possible. You’ll see how ModelCat’s AI-powered platform can help engineering teams train, optimise, and benchmark embedded models in a fraction of the usual time, measuring how each model performs on real hardware during development. Combined with Alif Semiconductor’s Ensemble series, this makes it possible to go from dataset to optimised model in hours.
We’ll also present a real case study from Embedded World 2026, where the team built a live object-recognition system on Alif silicon in under 30 hours. Alongside that, you’ll see a live walkthrough of the ModelCat SaaS interface, including how models are generated, tuned, and benchmarked using Hardware-in-the-Loop (HIL) measurements of power, runtime, and accuracy.
What you’ll learn:
- Why embedded AI projects often take months to move from training to deployment
- How AI can be used to build and optimise AI models for embedded hardware
- What makes ModelCat different from traditional edge AI development workflows
- How Hardware-in-the-Loop benchmarking improves confidence by measuring power, runtime, and accuracy on real hardware
- Why the Alif Ensemble series is a strong target for high-performance embedded AI workloads
- How the team built a live AI object-recognition demo on Alif hardware in under 30 hours
- How to speed up edge AI deployment without giving up control over model performance
Registration is free.