The Latest in Artificial Intelligence Trends, Technologies, and Best Practices


This series of talks brings you the latest AI trends, technologies, and best practices from experts at Arm and our partner ecosystem, spanning cutting-edge research, real-world use cases, code examples, workshops, and demos.

If you are interested in our series of talks about AI and machine learning, sign up now.

Schedule of Upcoming AI Tech Talks

Advancing Computer Vision on the Edge with Different ML Approaches

Speakers: Davis Sawyer, Co-Founder & Chief Product Officer, Deeplite
Roeland Nusselder, CEO & Co-Founder, Plumerai
Deepak Mital, CEO & Co-Founder, Roviero
May 31st, 2022, 8:00 a.m. PST, 4:00 p.m. GMT

Computer vision at the edge is going through a huge transformation due to the advent of machine learning. In this talk, we highlight how Arm Partners Roviero, Deeplite, and Plumerai are revolutionizing computer vision at the edge through machine learning on Arm platforms. They will present the variety of approaches they use to perform computer vision tasks, from conventional neural networks to binary neural networks, running in hardware or software.

How to Run Object Detection on Arm Cortex-M7 Processors

Speaker: Louis Moreau, Senior DevRel Engineer, Edge Impulse
June 14th, 2022, 8:00 a.m. PST, 4:00 p.m. GMT

In this session, you'll learn to perform object detection on Arm MCUs with Edge Impulse FOMO. FOMO (Faster Objects, More Objects) is a novel machine-learning algorithm that enables object detection on all Arm microcontrollers for the first time. With this new architecture, you will be able to locate and count objects, as well as track multiple objects in an image in real time, using up to 30x less processing power and memory than MobileNet SSD or YOLOv5.

Nota.AI: A Hardware-aware Approach for Designing Neural Models

Speaker: Shinkook Choi, Tech Lead, Nota Inc
June 28th, 2022, 8:00 a.m. PST, 4:00 p.m. GMT

Since modern AI chipsets adopt different strategies for efficient operation, most neural network models are not sufficiently optimized for these devices in terms of latency and memory footprint.

In this talk, we present how we enable popular neural models to be deployed efficiently on Ethos-U65, a newly launched micro-NPU. To this end, we first examine various operation forms (e.g., convolution types and filter sizes) and identify suitable operations to improve the accuracy-latency trade-off.

Based on this investigation, we carefully redesign well-known convolutional blocks (e.g., inverted residual blocks and ghost blocks) and use these blocks to replace computationally inefficient blocks in given models. We demonstrate that the model variants obtained by our approach can significantly reduce inference time as well as memory budget without noticeable performance drops on Ethos-U65.
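To give a feel for why such block replacements pay off, here is a minimal, illustrative sketch (not Nota's actual method) comparing the parameter counts of a standard convolution, a depthwise-separable convolution, and a ghost module. The formulas follow the commonly published definitions of these blocks; the channel and kernel sizes are arbitrary examples chosen for illustration.

```python
# Illustrative parameter-count comparison for common convolutional blocks.
# Formulas follow the standard block definitions; the channel/kernel values
# below are arbitrary examples, not figures from the talk.

def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

def ghost_module_params(c_in, c_out, k, s=2, d=3):
    """Ghost module: a primary conv produces c_out/s 'intrinsic' channels,
    then cheap d x d depthwise ops generate the remaining channels."""
    m = c_out // s  # intrinsic feature maps from the primary convolution
    return c_in * m * k * k + m * (s - 1) * d * d

if __name__ == "__main__":
    c_in, c_out, k = 64, 128, 3
    print("standard:           ", standard_conv_params(c_in, c_out, k))
    print("depthwise-separable:", depthwise_separable_params(c_in, c_out, k))
    print("ghost (s=2):        ", ghost_module_params(c_in, c_out, k))
```

Parameter count is only a proxy for memory footprint; actual latency on a micro-NPU such as Ethos-U65 also depends on which operators the NPU accelerates natively, which is why hardware-aware evaluation of each candidate block matters.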

Featured Videos: Watch some of our past AI Tech Talks from our Partner Ecosystem


View the Arm AI Tech Talk YouTube Playlist

Every two weeks, we discuss and explore some of the latest trends, technologies, and best practices in the exciting world of AI, featuring incredible partners from the AI Ecosystem, as well as speakers across Arm.

View YouTube Playlist

Join the Arm AI Partner Program

If you are interested in presenting your project or learnings at our AI Tech Talk series and have an AI solution based on Arm IP, you can benefit from a host of co-marketing opportunities as part of the Arm AI Partner Program, including Arm AI Tech Talks. Find out more and join the program today.

Apply Now

Additional AI Resources

Discover additional content and other resources of interest

AI Ecosystem Catalog

Arm’s AI Ecosystem helps to deliver the next generation of AI solutions. Connect with Arm AI Ecosystem Partners through our catalog.

AI Technologies

Arm AI provides the most versatile and scalable environment for AI development by bringing together the best IP, tools, software and support.

AI Case Studies

Through our vast ecosystem, Arm already powers a wide range of devices and applications that rely on ML at the network edge and endpoints.