Even Faster CNNs: Exploring the New Class of Winograd Algorithms

Convolutional Neural Networks (CNNs) are compute-intensive deep neural networks with increasingly complex architectures.

Over 90% of the operations performed by a CNN are convolutions, and many different algorithms – such as the fast Fourier transform (FFT) – have been proposed to accelerate them and reduce computation time. However, a new class of Winograd algorithms can make CNNs faster than ever before – without loss of accuracy – allowing models for classification and recognition to be deployed on low-power, Arm-based platforms.

Join us to discover:

  • The recently introduced class of algorithms that can reduce the arithmetic complexity of convolution layers with small filter sizes
  • The latest optimization techniques for the most common solutions, such as GEMM-based convolution
  • The design of Winograd algorithms, with an analysis of their complexity and the performance achieved on convolutional neural networks
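To give a flavor of how these algorithms cut arithmetic complexity, here is a minimal, self-contained sketch of the smallest member of the family, F(2,3): it produces 2 outputs of a 1-D convolution with a 3-tap filter using 4 multiplications instead of the 6 a direct computation needs. The function name and variable names are illustrative, not from any particular library; real implementations tile 2-D feature maps and precompute the filter transform once per layer.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): 2 outputs of a 1-D convolution (cross-correlation,
    as used in CNN layers) of a 4-element input tile d with a 3-tap
    filter g, using 4 multiplications instead of the direct method's 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform U = G g (in practice precomputed once per filter).
    u0 = g0
    u1 = (g0 + g1 + g2) / 2
    u2 = (g0 - g1 + g2) / 2
    u3 = g2
    # Input transform V = B^T d (additions/subtractions only).
    v0 = d0 - d2
    v1 = d1 + d2
    v2 = d2 - d1
    v3 = d1 - d3
    # Element-wise products: the only 4 multiplications.
    m0, m1, m2, m3 = u0 * v0, u1 * v1, u2 * v2, u3 * v3
    # Output transform Y = A^T m.
    return [m0 + m1 + m2, m1 - m2 - m3]

# Agrees with the direct computation:
# winograd_f23([1, 2, 3, 4], [1, 1, 1]) → [6.0, 9.0]
```

The multiplication savings grow with the tile size (e.g. the widely used 2-D variant F(2×2, 3×3) needs 16 multiplies instead of 36), which is why the approach pays off for the small 3×3 filters that dominate modern CNNs.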