This package is deprecated and no longer maintained.
Please use the new, actively maintained package instead:
react-native-sherpa-onnx → https://github.com/XDcobra/react-native-sherpa-onnx
Offline Speech-to-Text with sherpa-onnx for React Native
A React Native TurboModule that provides offline speech recognition capabilities using sherpa-onnx. Supports multiple model architectures including Zipformer/Transducer, Paraformer, NeMo CTC, and Whisper.
| Platform | Status |
|---|---|
| Android | ✅ Yes |
| iOS | ✅ Yes |
| Model Type | `modelType` Value | Description | Download Links |
|---|---|---|---|
| Zipformer/Transducer | `'transducer'` | Requires `encoder.onnx`, `decoder.onnx`, `joiner.onnx`, and `tokens.txt` | Download |
| Paraformer | `'paraformer'` | Requires `model.onnx` (or `model.int8.onnx`) and `tokens.txt` | Download |
| NeMo CTC | `'nemo_ctc'` | Requires `model.onnx` (or `model.int8.onnx`) and `tokens.txt` | Download |
| Whisper | `'whisper'` | Requires `encoder.onnx`, `decoder.onnx`, and `tokens.txt` | Download |
| WeNet CTC | `'wenet_ctc'` | Requires `model.onnx` (or `model.int8.onnx`) and `tokens.txt` | Download |
| SenseVoice | `'sense_voice'` | Requires `model.onnx` (or `model.int8.onnx`) and `tokens.txt` | Download |
| FunASR Nano | `'funasr_nano'` | Requires `encoder_adaptor.onnx`, `llm.onnx`, `embedding.onnx`, and a `tokenizer` directory | Download |
- ✅ Offline Speech Recognition - No internet connection required
- ✅ Multiple Model Types - Supports Zipformer/Transducer, Paraformer, NeMo CTC, Whisper, WeNet CTC, SenseVoice, and FunASR Nano models
- ✅ Model Quantization - Automatic detection and preference for quantized (int8) models
- ✅ Flexible Model Loading - Asset models, file system models, or auto-detection
- ✅ Android Support - Fully supported on Android
- ✅ iOS Support - Fully supported on iOS (requires sherpa-onnx XCFramework)
- ✅ TypeScript Support - Full TypeScript definitions included
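The quantization preference mentioned above can be pictured roughly like the sketch below. This is illustrative only; `pickModelFile` is a hypothetical helper, not part of the library's API.

```typescript
// Hypothetical sketch of the int8 preference rule (not the library's actual
// code): given the ONNX files found in a model directory, decide which one
// to load based on preferInt8.
function pickModelFile(files: string[], preferInt8?: boolean): string | undefined {
  const int8 = files.find((f) => f.endsWith('.int8.onnx'));
  const regular = files.find((f) => f.endsWith('.onnx') && !f.endsWith('.int8.onnx'));
  if (preferInt8 === true) return int8 ?? regular;  // quantized first
  if (preferInt8 === false) return regular ?? int8; // full-precision first
  return int8 ?? regular;                           // default: try int8, then regular
}
```

With both files present, the default and `preferInt8: true` resolve to the quantized model, while `preferInt8: false` resolves to the full-precision one; if only one file exists, it is used either way.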
```sh
npm install react-native-sherpa-onnx-stt
```

No additional setup required. The library automatically handles native dependencies via Gradle.
The sherpa-onnx XCFramework is required and must be obtained separately. After obtaining the framework, install the CocoaPods dependencies:
```sh
cd ios
pod install
```

Note: The XCFramework is not bundled with the npm package due to its size. You must obtain it before running `pod install`.
- Use the prebuilt version (if available):
  - The XCFramework may be included in the repository at `ios/Frameworks/sherpa_onnx.xcframework`
  - If present, no additional steps are required
- Build locally (requires macOS):

  ```sh
  git clone https://github.com/k2-fsa/sherpa-onnx.git
  cd sherpa-onnx
  git checkout v1.12.23
  # Note: ONNX Runtime is required for building sherpa-onnx
  # Make sure ONNX Runtime dependencies are installed
  ./build-ios.sh
  cp -r build-ios/sherpa_onnx.xcframework /path/to/your/project/node_modules/react-native-sherpa-onnx-stt/ios/Frameworks/
  ```
Important: Building sherpa-onnx requires ONNX Runtime. Make sure all dependencies are installed before running `build-ios.sh`.

Replace `/path/to/your/project/` with the actual path to your React Native project. The framework should be copied to `node_modules/react-native-sherpa-onnx-stt/ios/Frameworks/` in your project.

The Podspec will automatically detect and use the framework if it exists in `ios/Frameworks/`.
Note: The iOS implementation uses the same C++ wrapper as Android, ensuring consistent behavior across platforms.
```ts
import {
  initializeSherpaOnnx,
  transcribeFile,
  assetModelPath,
} from 'react-native-sherpa-onnx-stt';

// Initialize with a model
await initializeSherpaOnnx({
  modelPath: assetModelPath('models/sherpa-onnx-model'),
  preferInt8: true, // Optional: prefer quantized models
});

// Transcribe an audio file
const transcription = await transcribeFile('path/to/audio.wav');
console.log('Transcription:', transcription);
```

```ts
import {
  initializeSherpaOnnx,
  assetModelPath,
  autoModelPath,
} from 'react-native-sherpa-onnx-stt';

// Option 1: Asset model (bundled in app)
await initializeSherpaOnnx({
  modelPath: assetModelPath('models/sherpa-onnx-model'),
  preferInt8: true, // Prefer quantized models
});

// Option 2: Auto-detect (tries asset, then file system)
await initializeSherpaOnnx({
  modelPath: autoModelPath('models/sherpa-onnx-model'),
});

// Option 3: Simple string (backward compatible)
await initializeSherpaOnnx('models/sherpa-onnx-model');
```

```ts
import { transcribeFile } from 'react-native-sherpa-onnx-stt';

// Transcribe a WAV file (16kHz, mono, 16-bit PCM)
const result = await transcribeFile('path/to/audio.wav');
console.log('Transcription:', result);
```

Control whether to prefer quantized (int8) or regular models:

```ts
// Default: try int8 first, then regular
await initializeSherpaOnnx({ modelPath: 'models/my-model' });

// Explicitly prefer int8 models (smaller, faster)
await initializeSherpaOnnx({
  modelPath: 'models/my-model',
  preferInt8: true,
});

// Explicitly prefer regular models (higher accuracy)
await initializeSherpaOnnx({
  modelPath: 'models/my-model',
  preferInt8: false,
});
```

For robustness, you can explicitly specify the model type to avoid auto-detection issues:

```ts
// Explicitly specify model type
await initializeSherpaOnnx({
  modelPath: 'models/sherpa-onnx-nemo-parakeet-tdt-ctc-en',
  modelType: 'nemo_ctc', // 'transducer', 'paraformer', 'nemo_ctc', 'whisper', or 'auto' (default)
});

// Auto-detection (default behavior)
await initializeSherpaOnnx({
  modelPath: 'models/my-model',
  // modelType defaults to 'auto'
});
```

```ts
import { unloadSherpaOnnx } from 'react-native-sherpa-onnx-stt';

// Release resources when done
await unloadSherpaOnnx();
```

The library does not bundle models. You must provide your own models. See MODEL_SETUP.md for detailed setup instructions.
- Zipformer/Transducer: Requires `encoder.onnx`, `decoder.onnx`, `joiner.onnx`, and `tokens.txt`
- Paraformer: Requires `model.onnx` (or `model.int8.onnx`) and `tokens.txt`
- NeMo CTC: Requires `model.onnx` (or `model.int8.onnx`) and `tokens.txt`
- Whisper: Requires `encoder.onnx`, `decoder.onnx`, and `tokens.txt`
- WeNet CTC: Requires `model.onnx` (or `model.int8.onnx`) and `tokens.txt`
- SenseVoice: Requires `model.onnx` (or `model.int8.onnx`) and `tokens.txt`
- FunASR Nano: Requires `encoder_adaptor.onnx`, `llm.onnx`, `embedding.onnx`, and a `tokenizer` directory
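As a rough illustration of how the `'auto'` mode could tell these file layouts apart (a sketch only; `detectModelType` is a hypothetical helper, not the library's real detection code):

```typescript
// Illustrative sketch: infer a model type from the files present in a
// model directory, following the per-type file requirements listed above.
function detectModelType(files: string[]): string {
  const has = (name: string) => files.includes(name);
  if (has('encoder_adaptor.onnx') && has('llm.onnx')) return 'funasr_nano';
  if (has('joiner.onnx')) return 'transducer'; // encoder + decoder + joiner
  if (has('encoder.onnx') && has('decoder.onnx')) return 'whisper';
  if (has('model.onnx') || has('model.int8.onnx')) {
    // Paraformer, NeMo CTC, WeNet CTC, and SenseVoice all ship a single
    // model.onnx, so filenames alone cannot distinguish them; pass an
    // explicit modelType for these.
    return 'single_model_file';
  }
  return 'unknown';
}
```

Because several model types share the single `model.onnx` layout, passing `modelType` explicitly is the reliable choice for those.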
Place models in:

- Android: `android/app/src/main/assets/models/`
- iOS: Add to Xcode project as folder reference
Initialize the speech recognition engine with a model.
Parameters:
- `options.modelPath`: Model path configuration (see MODEL_SETUP.md)
- `options.preferInt8` (optional): Prefer quantized models (`true`), regular models (`false`), or auto-detect (`undefined`, default)
- `options.modelType` (optional): Explicit model type (`'transducer'`, `'paraformer'`, `'nemo_ctc'`, `'whisper'`, `'wenet_ctc'`, `'sense_voice'`, `'funasr_nano'`), or auto-detect (`'auto'`, default)
Returns: `Promise<void>`
Transcribe an audio file.
Parameters:
- `filePath`: Path to WAV file (16kHz, mono, 16-bit PCM)

Returns: `Promise<string>` - Transcribed text
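Since `transcribeFile` expects a specific WAV format, it can be useful to sanity-check a file's header before transcribing. The helper below is a hypothetical pre-flight check, not part of the library, and assumes a canonical 44-byte RIFF header with the `fmt ` chunk at byte offset 12.

```typescript
// Hypothetical pre-flight check: verify that a WAV file's header matches
// the format transcribeFile expects -- 16 kHz, mono, 16-bit PCM.
function isSupportedWav(bytes: Uint8Array): boolean {
  if (bytes.length < 44) return false;
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const tag = (off: number) =>
    String.fromCharCode(bytes[off], bytes[off + 1], bytes[off + 2], bytes[off + 3]);
  if (tag(0) !== 'RIFF' || tag(8) !== 'WAVE' || tag(12) !== 'fmt ') return false;
  const audioFormat = view.getUint16(20, true);   // 1 = PCM
  const channels = view.getUint16(22, true);      // 1 = mono
  const sampleRate = view.getUint32(24, true);    // 16000 Hz
  const bitsPerSample = view.getUint16(34, true); // 16-bit samples
  return audioFormat === 1 && channels === 1 && sampleRate === 16000 && bitsPerSample === 16;
}
```

Files in another format (e.g. 44.1 kHz stereo) should be resampled to 16 kHz mono 16-bit PCM before being passed to `transcribeFile`.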
Release resources and unload the model.
Returns: `Promise<void>`
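A common usage pattern is to guarantee the model is unloaded even when transcription throws. The sketch below shows that pattern with a hypothetical `withModel` helper, written against injected functions so it stands on its own; in an app the arguments would be `() => initializeSherpaOnnx(...)`, `unloadSherpaOnnx`, and `() => transcribeFile(...)`.

```typescript
// Lifecycle sketch: initialize, do the work, and always release resources,
// whether the work succeeds or throws.
async function withModel<T>(
  initialize: () => Promise<void>,
  unload: () => Promise<void>,
  work: () => Promise<T>,
): Promise<T> {
  await initialize();
  try {
    return await work();
  } finally {
    await unload(); // runs on success and on error alike
  }
}
```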
Resolve a model path configuration to an absolute path.
Parameters:
- `config`: Model path configuration object

Returns: `Promise<string>` - Absolute path to model directory
- React Native >= 0.70
- Android API 24+ (Android 7.0+)
- iOS 13.0+ (requires sherpa-onnx XCFramework - see iOS Setup below)
We provide example applications to help you get started with react-native-sherpa-onnx-stt:
The example app included in this repository demonstrates basic audio-to-text transcription capabilities. It includes:
- Multiple model type support (Zipformer, Paraformer, NeMo CTC, Whisper, WeNet CTC, SenseVoice, FunASR Nano)
- Model selection and configuration
- Audio file transcription
- Test audio files for different languages
Getting started:
```sh
cd example
yarn install
yarn android # or yarn ios
```

A comprehensive comparison app that demonstrates video-to-text transcription using react-native-sherpa-onnx-stt alongside other speech-to-text solutions:
Repository: mobile-videototext-stt-comparison
Features:
- Video to audio conversion (using native APIs)
- Audio to text transcription
- Video to text (video → WAV → text)
- Comparison between different STT providers
- Performance benchmarking
This app showcases how to integrate react-native-sherpa-onnx-stt into a real-world application that processes video files and converts them to text.
MIT
Made with create-react-native-library