Optimizing Machine Learning Workloads on Power-Efficient Devices

Software frameworks for neural networks, such as TensorFlow, PyTorch, and Caffe, have made it easier to build machine learning into everyday applications, but running these frameworks in an embedded environment can be difficult.

Limited budgets for power, memory, and compute all add to the challenge. At Arm, we’ve developed Arm NN, an inference engine that makes it easier to target different SoC architectures, enabling faster, higher-performance deployment of machine learning in embedded systems.
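To make that concrete, here is a minimal C++ sketch of how an application might run a model through Arm NN: parse a TensorFlow Lite file, optimize it against a preference-ordered list of backends (Mali GPU, Neon-accelerated CPU, or a portable reference implementation), and run inference. It follows the pattern of Arm NN's documented samples, but the model path and tensor names are hypothetical placeholders, error handling is omitted, and exact APIs may vary by Arm NN version.

```cpp
// Minimal Arm NN inference sketch. Model path and tensor names are placeholders.
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>
#include <vector>

int main()
{
    // Create the runtime that manages the backends available on this device.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

    // Parse a TensorFlow Lite model into an Arm NN network graph.
    armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Optimize for a preference-ordered list of backends: GPU first, then
    // Neon-accelerated CPU, falling back to the portable reference backend.
    std::vector<armnn::BackendId> backends = { armnn::Compute::GpuAcc,
                                               armnn::Compute::CpuAcc,
                                               armnn::Compute::CpuRef };
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, backends, runtime->GetDeviceSpec());

    // Load the optimized network and look up the input/output binding points.
    armnn::NetworkId netId;
    runtime->LoadNetwork(netId, std::move(optNet));

    auto inputBinding  = parser->GetNetworkInputBindingInfo(0, "input");   // placeholder name
    auto outputBinding = parser->GetNetworkOutputBindingInfo(0, "output"); // placeholder name

    // Allocate buffers sized from the model's tensor shapes.
    std::vector<float> inputData(inputBinding.second.GetNumElements());
    std::vector<float> outputData(outputBinding.second.GetNumElements());

    armnn::InputTensors inputTensors{
        { inputBinding.first, armnn::ConstTensor(inputBinding.second, inputData.data()) } };
    armnn::OutputTensors outputTensors{
        { outputBinding.first, armnn::Tensor(outputBinding.second, outputData.data()) } };

    // Run inference; the same application code runs on whichever backend was selected.
    runtime->EnqueueWorkload(netId, inputTensors, outputTensors);
    return 0;
}
```

The point of the backend list is portability: the application expresses a preference order once, and Arm NN assigns each layer to the first backend that supports it, so the same binary can exploit a GPU where one exists and still run on CPU-only parts.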
