Plumerai develops production-ready embedded AI through a relentless focus on the full AI stack: collecting and curating data, training algorithms, model architectures, inference engines, and hardware optimizations. Our AI consistently proves to be the most accurate in any environment while consuming minimal resources. With a tiny memory footprint of 1 MB, it runs efficiently on nearly any hardware architecture, reaching 55 frames/s on a single-core Arm Cortex-A CPU. Our Familiar Face Identification accurately distinguishes household members from strangers. Our inference engine for Arm Cortex-M is the fastest and smallest in the world. Plumerai has offices in London and Amsterdam.

Solution Briefs

  • Plumerai - Person Detection on Arm Cortex-M and Cortex-A

    Plumerai provides a highly accurate and efficient turnkey software solution for camera-based person detection on Arm Cortex-M (2-5 fps) and Arm Cortex-A (55 fps). Target applications include security cameras, video doorbells, and smart home cameras.

    Learn More

Insights

  • The world’s fastest deep learning inference software for Arm Cortex-M Blog

    Our inference software for Arm Cortex-M microcontrollers is the fastest and most memory-efficient in the world. It has 40% lower latency and uses 49% less RAM than TensorFlow Lite for Microcontrollers kernels while retaining the same accuracy.

    Learn More
  • Arm Tech Talk: Accelerating People Detection with Arm Helium vector extensions Blog

    Watch Cedric Nugteren showcase Plumerai’s People Detection on an Arm Cortex-M85 with Helium vector extensions, running at a blazing 13 fps with a 3.7x speed-up over Cortex-M7.

    Learn More
  • Great TinyML needs high-quality data Blog

    The use of BNNs helps us reduce the required memory, inference latency, and energy consumption of our AI models, but there is something we have been less vocal about that is at least as important for AI in the real world: high-quality data.

    Learn More
  • Demo of the world’s fastest inference engine for Arm Cortex-M Arm Tech Talk

    Plumerai recently announced its inference engine for 8-bit deep learning models on Arm Cortex-M microcontrollers. We showed that it is the world’s most efficient on MobileNetV2, beating TensorFlow Lite for Microcontrollers with CMSIS-NN kernels.

    Learn More
  • Advancing computer vision on the edge with different ML approaches Arm Tech Talk

    In this talk, we highlight how Plumerai revolutionizes computer vision on the edge through machine learning on Arm platforms.

    Learn More