Perhaps that title is a mouthful, but in another AI vision demonstration at Embedded World, this time at the OKdo stand, Kerry from OStream talks us through a cluster of boards running OStream’s PipeRunner edge AI computer vision solution.
Working with existing cameras, PipeRunner lets users convert media streams (video and audio) into a searchable dataset, injecting metadata based on what it picks out as useful content – cars, number plates, people, or animals, for example. That data can then be used to take immediate action in your application.
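To make the idea concrete, here's a minimal, hypothetical sketch of turning per-frame detections into searchable metadata records. This is not PipeRunner's actual API – the class names, fields, and search call are our own illustration of the general concept of querying metadata instead of raw video.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    stream_id: str    # which camera the frame came from
    timestamp: float  # seconds into the stream
    label: str        # e.g. "car", "person", "number_plate"
    confidence: float


class MetadataIndex:
    """Toy in-memory index of detections extracted from video streams."""

    def __init__(self) -> None:
        self._records: List[Detection] = []

    def ingest(self, detection: Detection) -> None:
        # In a real system the detection would come from a model
        # running over the live camera feed.
        self._records.append(detection)

    def search(self, label: str, min_confidence: float = 0.5) -> List[Detection]:
        # Search the lightweight metadata rather than the raw video.
        return [d for d in self._records
                if d.label == label and d.confidence >= min_confidence]


# Index a few detections and look up every "car" sighting.
index = MetadataIndex()
index.ingest(Detection("cam-01", 12.4, "car", 0.92))
index.ingest(Detection("cam-01", 12.4, "person", 0.81))
index.ingest(Detection("cam-02", 3.0, "car", 0.47))

for hit in index.search("car"):
    print(f"{hit.stream_id} @ {hit.timestamp}s: {hit.label} ({hit.confidence:.2f})")
```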
As Kerry shows us, searching the data is straightforward thanks to OStream's well-thought-out software stack, which also simplifies building AI pipelines. This demo features a low-cost carrier board for NVIDIA's Jetson Orin SoMs as the main source of processing power, which allows PipeRunner to run a pose detection model at 224 fps. Multiple boards can also be clustered to execute the AI pipeline at scale across many cameras.