Spawned from the Advanced R&D Lab of LG Electronics, AiM Future is accelerating the transition from centralized, cloud-native AI to the distributed intelligent edge. Future edge devices require concurrent multi-network inferencing to deliver true machine intelligence. Our product offering enables this future through a configurable AI/ML hardware accelerator, the NeuroMosAIc Processor (NMP), and its co-developed software, NeuroMosAIc Studio. Both hardware and software are highly portable and flexible, meeting edge performance, power, and cost requirements and enabling scalable solutions from always-on, battery-operated devices to high-performance edge infrastructure. The NMP hardware and software integrate into an Arm CPU-based system to achieve high utilization of available resources, ensuring the best inference-per-second-per-watt (IPS/W) efficiency when executing multiple AI models. The NMP readily performs the machine vision, audio, and multi-sensor data processing required in advanced edge-of-network applications.
Simultaneous On-device Inferencing in Autonomous Delivery Robots
Mobile robot assistants are changing industries from retail to healthcare. Devices once limited by the need for persistent connectivity are becoming autonomous thanks to edge AI technologies. Learn how Arm and the NeuroMosAIc Processor are enabling the future.
Simultaneous Machine Vision Inferencing on the NeuroMosAIc Processor
The number of sensors per edge device is increasing dramatically. To deliver accurate, real-time results, these devices must perform multiple inferences concurrently on the device. Here we show two separate vision models running on a single NMP.
Concurrent Multi-Model Inferencing for Edge Computing
As AI moves closer to the data, edge devices are adopting capabilities once only imagined in the cloud. The NeuroMosAIc Processor simultaneously executes multiple machine vision models using less than 1 TOPS of compute and significantly less than 1 watt of power.