This repository demonstrates real-time neural network inference on continuous time-series data (e.g., microphone input) using MATLAB. It processes an incoming audio stream, computes a Short-Time Fourier Transform (STFT) in real time, and performs neural network enhancement/denoising before visualizing results live.
- Real-Time Processing: Streams data continuously from a live audio device.
- STFT Visualization: Displays the input waveform, raw STFT, and processed STFT in real time.
- Neural Network Integration: Supports asynchronous model inference using MATLAB's `parfeval`, allowing non-blocking neural network computation (via ONNX models).
- Custom Ring Buffer: Includes a robust multi-channel circular buffer (`myBuffer.m`) for managing streaming data with overlapping context windows.
- Live Display: Visualizes both input and processed spectrograms, updating several times per second.
| File | Description |
|---|---|
| `demo.m` | Main script that streams audio, computes STFT, runs neural net inference asynchronously, and plots results live. |
| `myBuffer.m` | Custom multi-channel circular buffer class for managing streaming input and processed data. |
```
┌────────────┐     ┌────────────┐     ┌──────────────┐     ┌───────────────┐
│ Audio Mic  │ →→→ │ STFT Block │ →→→ │  Neural Net  │ →→→ │ Real-Time GUI │
│(Live Input)│     │ (Sliding)  │     │ (ONNX model) │     │  Wave + STFT  │
└────────────┘     └────────────┘     └──────────────┘     └───────────────┘
```
Internally, the demo maintains three ring buffers:
- `bufferWaveform` — raw time-domain samples
- `bufferSTFTRaw` — input spectrogram
- `bufferSTFTProcessed` — neural net–enhanced spectrogram
Each buffer continuously updates and overwrites old data, keeping the visualization smooth and low latency.
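As a rough sketch, one iteration of the streaming loop might feed the first two buffers like this (the `deviceReader` object, the STFT window/hop settings, and the frame size are illustrative assumptions, not the exact `demo.m` code):

```matlab
% Illustrative per-iteration update -- names and parameters are assumptions
frame = deviceReader();                 % acquire the next audio frame from the mic
bufferWaveform.write(frame.');          % append raw time-domain samples
S = stft(frame, fs, 'Window', hann(512), 'OverlapLength', 384);
bufferSTFTRaw.write(abs(S));            % append new spectrogram columns
% bufferSTFTProcessed is written later, once the neural net returns its result
```

Because each buffer wraps around, writes stay O(frame size) regardless of how long the stream runs.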
- MATLAB R2023a or later
- Toolboxes:
- Audio Toolbox
- Parallel Computing Toolbox
- DSP System Toolbox
- (Optional) Deep Learning Toolbox for loading ONNX models
Clone this repository:

```
git clone https://github.com/vishalchoudhari11/Real-Time-Neural-Net-Inference-Demo.git
cd Real-Time-Neural-Net-Inference-Demo
```

Open MATLAB and add this folder to the path:

```matlab
addpath(genpath(pwd));
```

Run the demo:

```matlab
demo;
```

To use your own neural network:

- Export your trained model to ONNX.
- Place it under the `models/` folder.
- Update the path in `inferNeuralNet()` inside `demo.m`.
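Loading the ONNX model inside `inferNeuralNet()` could look roughly like the following (this is a sketch, not the repository's exact code; it assumes the Deep Learning Toolbox Converter for ONNX Model Format support package is installed, and the regression output-layer choice is an assumption):

```matlab
% Illustrative model loading and inference -- adapt to your network's I/O shape
net = importONNXNetwork('models/best_model.onnx', ...
    'OutputLayerType', 'regression');    % enhancement nets typically regress magnitudes
enhanced = predict(net, stftMagnitude);  % stftMagnitude: frequency-by-time input block
```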
The `myBuffer` class provides an efficient circular buffer interface for managing multi-channel streaming data. It supports:
- Continuous writes and overwrites
- Chronological readouts for plotting
- Windowed reads for neural net inference
- Safe update of frame counters post-processing
Example:
```matlab
buf = myBuffer(1, 16000);   % 1 channel, 1-second buffer at 16 kHz
buf.write(randn(1, 4000));  % write samples
data = buf.plotYield();     % get chronological view
imagesc(data);
```

The demo can perform asynchronous neural net inference:
- The STFT stream is passed to a background worker using `parfeval`.
- Inference can run using any ONNX-compatible model (e.g., speech enhancement, denoising, separation).
- The main thread continues streaming and updating the display without waiting for inference to finish.
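The non-blocking pattern can be sketched as follows (variable names here are assumptions): the main loop launches a future, keeps streaming, and only collects the output once the future reports it has finished, so the display never stalls on inference:

```matlab
% Launch inference on a worker; execution returns to the main loop immediately
pending = parfeval(@inferNeuralNet, 1, rawSTFTBlock);

% ... main loop keeps reading audio and refreshing plots ...

% Collect the result only when it is ready (non-blocking check)
if strcmp(pending.State, 'finished')
    processedBlock = fetchOutputs(pending);
    bufferSTFTProcessed.write(processedBlock);
end
```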
```matlab
% Inside demo.m
pending = parfeval(@inferNeuralNet, 1, 'infer', bufferSTFTRaw.processSamplesYield(...));
```

To adapt this for other streaming time-series data (e.g., physiological sensors, radar, EEG):
- Replace the audio input section with your data source.
- Adjust STFT parameters or preprocessing to match your domain.
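For instance, swapping the microphone for a generic chunked source might look like this, where `readNextChunk` is a hypothetical stand-in for your sensor, EEG, or radar driver (not part of this repository):

```matlab
% Replace the audio-acquisition step with any function that yields fixed-size chunks.
% readNextChunk() is hypothetical -- wire it to your own hardware or file reader.
chunk = readNextChunk();          % [numSamples x numChannels]
bufferWaveform.write(chunk.');    % the rest of the buffering/plotting path is unchanged
```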
The GUI displays:
- Waveform plot: Time-domain view of the recent buffer window.
- Raw STFT spectrogram: Frequency–time magnitude before processing.
- Processed STFT spectrogram: Output from neural net (enhanced or denoised).
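A three-panel layout along these lines can be created once and then refreshed by updating plot data each iteration rather than redrawing the axes (handle names and array sizes here are assumptions, not the exact `demo.m` code):

```matlab
% One-time figure setup -- sizes are placeholders
tiledlayout(3, 1);
axWave = nexttile; hWave = plot(axWave, zeros(1, 16000)); title(axWave, 'Waveform');
axRaw  = nexttile; hRaw  = imagesc(axRaw, zeros(257, 100)); title(axRaw, 'Raw STFT');
axProc = nexttile; hProc = imagesc(axProc, zeros(257, 100)); title(axProc, 'Processed STFT');

% Per-iteration refresh: update data on existing handles to keep redraws cheap
set(hWave, 'YData', bufferWaveform.plotYield());
set(hRaw,  'CData', bufferSTFTRaw.plotYield());
set(hProc, 'CData', bufferSTFTProcessed.plotYield());
drawnow limitrate
```

Updating `YData`/`CData` in place and throttling with `drawnow limitrate` is what keeps the display responsive at several updates per second.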
- Real-time speech enhancement / denoising
- EEG / EMG / physiological time-series processing with sliding neural inference
```
Real-Time-Neural-Net-Inference-Demo/
├── demo.m
├── myBuffer.m
├── models/
│   └── best_model.onnx
└── README.md
```
Pull requests and improvements are welcome! If you extend this framework (e.g., add new neural networks or visualization modes), please open a PR or issue.