Expanding Applications for ML Through Research

As machine learning (ML) expands into more applications across all areas of compute and the wider technology agenda, our research continues to guide and inform this growth. Arm's advanced hardware, software, and tools provide the energy efficiency and performance required to support increasingly complex algorithms in this rapidly evolving field.

Key Research Threads

Our research covers a wide range of topics, all focused on developing the technology that will power future machine learning solutions.

Hear from Our ML Researchers

Want to know more about our work? Connect with our team at these events: 

Event Name | Location | Date | Speaker | Talk Title
Artificial Intelligence Festival | Virtual | June 8-12, 2020 | Matthew Mattina | ML On The Edge: Hardware and Models for Machine Learning on Constrained Platforms
tinyML Talks | Virtual | June 9, 2020 | Igor Fedorov | SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers
AI Hardware Summit | Munich, Germany | September 29-30, 2020 | Mark O’Connor | Hardware and Models for Deep Learning on Mobile and Embedded Platforms

Latest Publications

Title | Authors
Run-Time Efficient RNN Compression for Inference on Edge Devices | Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina
Compressing RNNs for IoT Devices by 15-38x Using Kronecker Products | Urmish Thakker, Jesse Beu, Dibakar Gope, Chu Zhou, Igor Fedorov, Ganesh Dasika, Matthew Mattina
SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers | Igor Fedorov, Ryan P. Adams, Matthew Mattina, Paul N. Whatmough
Measuring Scheduling Efficiency of RNNs for NLP Applications | Urmish Thakker, Ganesh Dasika, Jesse Beu, Matthew Mattina
Ternary Hybrid Neural-Tree Networks for Highly Constrained IoT Applications | Dibakar Gope, Ganesh Dasika, Matthew Mattina
Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs | Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse Beu, Matthew Mattina, Robert Mullins
Learning Low-Precision Neural Networks without Straight-Through Estimator (STE) | Zhi-Gang Liu, Matthew Mattina
RNN Compression using Hybrid Matrix Decomposition | Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina
DNN Engine: A 28-nm Timing-Error Tolerant Sparse Deep Neural Network Processor for IoT Applications | Paul N. Whatmough, Sae Kyu Lee, David Brooks, Gu-Yeon Wei
FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning | Paul N. Whatmough, Chuteng Zhou, Patrick Hansen, Shreyas Kolala Venkataramanaiah, Jae-sun Seo, Matthew Mattina
Efficient and Robust Machine Learning for Real-World Systems | Franz Pernkopf, Wolfgang Roth, Matthias Zoehrer, Lukas Pfeifenberger, Guenther Schindler, Holger Froening, Sebastian Tschiatschek, Robert Peharz, Matthew Mattina, Zoubin Ghahramani
Energy Efficient Hardware for On-Device CNN Inference via Transfer Learning | Paul Whatmough, Chuteng Zhou, Patrick Hansen, Matthew Mattina
SCALE-Sim: Systolic CNN Accelerator Simulator | Ananda Samajdar, Yuhao Zhu, Paul Whatmough, Matthew Mattina, Tushar Krishna
Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision | Yuhao Zhu, Anand Samajdar, Matthew Mattina, Paul Whatmough
Mobile Machine Learning Hardware at ARM: A Systems-on-Chip (SoC) Perspective | Yuhao Zhu, Matthew Mattina, Paul Whatmough
Meet the Team

Matthew Mattina

Senior Director of Machine Learning

Matthew Mattina is head of Arm’s Machine Learning Research Lab, where he leads a team of world-class researchers developing advanced hardware, software, and algorithms for machine learning.

Join the team! We are always looking for talented researchers across all areas of ML. In particular, we are keen to hear from experts in probabilistic ML, including Bayesian inference, Gaussian processes, variational inference, probabilistic models, and ensemble learning. See our current vacancies.

Join Our Team

Latest ML Research Blogs

Read more blogs on our community website.

SpArSe: Democratizing and Enabling TinyML on Arm M-class

Microcontrollers (MCUs) are truly the ubiquitous computers of our time. They are tiny, cheap, and low-power; they can often be powered indefinitely by a solar cell. They are in your watch and your fridge, and your car contains around 30 of them.

SCALE-Sim: A Cycle-Accurate NPU Simulator for Your Research Experiments

There are currently few simulator options for those working on NPU architecture, which severely limits architecture research. To address this, we have developed a simple, cycle-accurate architecture simulator in Python that specifically targets NPUs.
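To give a flavor of the first-order reasoning such a simulator supports, here is a minimal Python sketch. It is not SCALE-Sim itself: the function name and the simple fill/drain pipeline model are illustrative assumptions for estimating the cycle count of a matrix multiply mapped onto an idealized output-stationary systolic array, ignoring memory stalls.

import math

def systolic_gemm_cycles(M, N, K, rows=16, cols=16):
    """Rough cycle estimate for C = A (MxK) @ B (KxN) on a rows x cols
    output-stationary systolic array: one MAC per PE per cycle, no
    memory stalls (an idealized first-order bound, not SCALE-Sim's model)."""
    row_folds = math.ceil(M / rows)   # times the array must be refilled along M
    col_folds = math.ceil(N / cols)   # ... and along N
    # Each fold streams K operands through every PE, plus (rows + cols - 2)
    # cycles to fill and drain the diagonal pipeline.
    cycles_per_fold = K + rows + cols - 2
    return row_folds * col_folds * cycles_per_fold

# Example: a 256x256 output with K = 512 on a 32x32 array.
print(systolic_gemm_cycles(256, 256, 512, rows=32, cols=32))  # 36736 cycles (idealized)

A cycle-accurate simulator such as SCALE-Sim goes well beyond a closed-form estimate like this, modeling dataflow, SRAM traffic, and stalls, but the sketch shows the kind of sizing question it helps answer.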

TinyML Applications Require New Network Architectures

In this post, we walk through our work on developing efficient architectures for resource-constrained devices and discuss the best learning methodology for training Doped Kronecker Product models.
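As a rough illustration of the idea (a toy NumPy sketch, not the code from the post), a Kronecker product lets a large weight matrix be represented by two small factors, and "doping" adds a sparse correction term so a few weights can escape the rigid Kronecker structure:

import numpy as np

rng = np.random.default_rng(0)

# Two small factors stand in for one large weight matrix.
m1, n1, m2, n2 = 8, 8, 32, 32
A = rng.standard_normal((m1, n1))   # 8 x 8
B = rng.standard_normal((m2, n2))   # 32 x 32

W_kp = np.kron(A, B)                # dense equivalent: 256 x 256 = 65,536 weights
params_stored = A.size + B.size     # only 64 + 1,024 = 1,088 parameters (~60x fewer)

# "Doping": a very sparse additive matrix (here ~1% nonzeros) lets a few
# weights deviate from the Kronecker structure.
S = np.zeros_like(W_kp)
idx = rng.choice(W_kp.size, size=W_kp.size // 100, replace=False)
S.flat[idx] = rng.standard_normal(idx.size)

W = W_kp + S                        # matrix used at inference time

During training, only the factors and the sparse entries would be learned; the full matrix never needs to be stored.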

Explore the AI and Machine Learning Ecosystem at Arm

Learn More