Predicting Real-Time Neural Network Performance

Quantifying Performance

Timur Doumler shares more information on meeting real-time deadlines in audio programming.
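A common way to quantify this is the real-time factor: compute time divided by the duration of audio processed, where anything below 1.0 keeps up with real time. Below is a minimal sketch of measuring it; the function names and the trivial stand-in processor are hypothetical, not from the original article.

```python
import time

def real_time_factor(process_block, num_samples, sample_rate):
    """Measure the real-time factor (RTF) of an audio processing function.

    RTF = (time spent computing) / (duration of audio processed).
    An RTF below 1.0 means the processor keeps up with real time.
    """
    start = time.perf_counter()
    process_block(num_samples)  # process num_samples samples of audio
    elapsed = time.perf_counter() - start
    return elapsed / (num_samples / sample_rate)

# A trivial stand-in for a neural network, for illustration only:
def dummy_process(n):
    acc = 0.0
    for _ in range(n):
        acc += 1.0
    return acc

# 100 ms of audio at 48 kHz
rtf = real_time_factor(dummy_process, num_samples=4800, sample_rate=48000.0)
```

In a real measurement, `process_block` would run the neural network on a buffer of samples, averaged over many runs to smooth out timer noise.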

What’s A Good Score?

Predicting Performance

  • Implement the relevant neural network layers (using SIMD instructions wherever possible).
  • Count the operations used by the neural network as a function of the network hyper-parameters.
  • Measure the network performance for a variety of hyper-parameter choices.
  • Fit a regression to the measured timings to estimate how long each operation takes.
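The regression step can be sketched as a least-squares fit: each measured configuration contributes one equation relating its operation counts to its total measured time. The counts and "measured" times below are synthetic placeholders, not values from the article.

```python
import numpy as np

# Hypothetical data: each row holds the operation counts for one
# hyper-parameter choice (multiplies+hsums, adds, activations).
op_counts = np.array([
    [ 40.0,  6.5,  6.0],
    [ 80.0, 10.0,  8.0],
    [160.0, 18.0, 12.0],
    [320.0, 34.0, 20.0],
])

# Synthetic per-operation costs, used here to generate fake measurements;
# in practice t_measured would come from benchmarking the network.
true_costs = np.array([2e-9, 1e-9, 3e-9])
t_measured = op_counts @ true_costs

# Least-squares regression: solve op_counts @ per_op_time ≈ t_measured
per_op_time, *_ = np.linalg.lstsq(op_counts, t_measured, rcond=None)
```

With real benchmark data the system is noisy and can be ill-conditioned, so the fitted per-operation times should be read as model coefficients rather than physical timings.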

Example: Dense Network

Visualization of a Dense network with 2 inputs, 2 outputs, 2 hidden layers, and a hidden size of 8.

Counting operations for a Dense network with N inputs, O outputs, L hidden layers of size W, and SIMD vector width V gives:
  • SIMD Multiplies + SIMD HSums + Scalar Adds: W * (N + L * W + O) / V
  • SIMD Adds: ((L + 1) * W + O) / V
  • SIMD Activations: (L + 1) * W / V

The regression then yields per-operation time estimates:
  • SIMD Multiplies + SIMD HSums + Scalar Adds: 5.45500463e-03 seconds
  • SIMD Adds: 1.47277176e-25 seconds
  • SIMD ReLU Activations: 7.23480113e-26 seconds
Real-Time Factor for Dense/ReLU and Dense/Tanh networks of a given size. Networks that fall above the red line are too slow to run in real-time at 48 kHz.
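The operation-count formulas above can be evaluated directly for the example network. Here is a sketch, assuming a SIMD vector width of 4 (the vector width is an illustration choice, not stated in the article):

```python
# Operation counts for a Dense network, following the formulas above.
# N: inputs, O: outputs, L: hidden layers, W: hidden size, V: SIMD width.
def dense_op_counts(N, O, L, W, V):
    mul_hsum_add = W * (N + L * W + O) / V  # SIMD multiplies + hsums + scalar adds
    simd_adds = ((L + 1) * W + O) / V       # bias additions
    simd_activations = (L + 1) * W / V      # activation functions
    return mul_hsum_add, simd_adds, simd_activations

# The example network: 2 inputs, 2 outputs, 2 hidden layers, hidden size 8,
# with an assumed SIMD vector width of 4.
counts = dense_op_counts(N=2, O=2, L=2, W=8, V=4)
# counts == (40.0, 6.5, 6.0)
```

Multiplying each count by its fitted per-operation time and summing gives the predicted compute cost for that network size, which is what the Real-Time Factor plot compares against the 48 kHz deadline.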

Example: Recurrent Networks

Real-Time Factor for LSTM and GRU networks of a given size.

Why Is This Useful?





Jatin Chowdhury is a student.
