A high-performance native speech-to-text module for the FastJava ecosystem. Ultra-low latency via JNI-based whisper.cpp and real-time cloud streaming.
FastSTT provides professional-grade speech recognition with minimal latency, unifying high-performance local processing (Whisper) with lightning-fast cloud backends (Deepgram/OpenAI) under a single Java API.
- 🎙️ Local Whisper: Native C++ integration via whisper.cpp for 100% offline privacy.
- ⚡ Cloud Streaming: Real-time WebSocket integration with Deepgram and OpenAI.
- 📦 Zero-Copy: Audio buffers are passed directly via JNI from FastAudioCapture.
- 🛠️ Integrated Installer: Built-in downloader for GGML models (Tiny, Base, Small).
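The zero-copy feature relies on direct `ByteBuffer`s: native code can obtain the address of a direct buffer via JNI (`GetDirectBufferAddress`) and read the samples without copying them out of the JVM. A minimal JDK-only sketch of allocating such a buffer for a 100 ms chunk of 16 kHz mono 16-bit PCM (buffer sizes here are illustrative, not FastSTT's actual internals):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DirectAudioBuffer {
    // 16 kHz mono, 16-bit PCM: a 100 ms chunk is 1600 samples (3200 bytes).
    static final int SAMPLES = 1600;

    static ByteBuffer newChunk() {
        // allocateDirect reserves off-heap memory; JNI code can access it
        // via GetDirectBufferAddress with no intermediate copy into C++.
        return ByteBuffer.allocateDirect(SAMPLES * 2)
                         .order(ByteOrder.LITTLE_ENDIAN);
    }

    public static void main(String[] args) {
        ByteBuffer chunk = newChunk();
        System.out.println("direct=" + chunk.isDirect()
                + " capacity=" + chunk.capacity());
    }
}
```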
Add the JitPack repository and the dependencies to your `pom.xml`:

```xml
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

<dependencies>
    <!-- FastSTT Library -->
    <dependency>
        <groupId>io.github.andrestubbe</groupId>
        <artifactId>faststt</artifactId>
        <version>0.1.0</version>
    </dependency>
    <!-- FastCore (Required Native Loader) -->
    <dependency>
        <groupId>com.github.andrestubbe</groupId>
        <artifactId>fastcore</artifactId>
        <version>v1.0.0</version>
    </dependency>
</dependencies>
```

For Gradle, add the repository and dependencies to your `build.gradle`:

```groovy
repositories {
    maven { url 'https://jitpack.io' }
}

dependencies {
    implementation 'io.github.andrestubbe:faststt:0.1.0'
    implementation 'com.github.andrestubbe:fastcore:v1.0.0'
}
```

Alternatively, download the latest JARs directly and add them to your classpath:
- 📦 faststt-v0.1.0.jar (The Core Library)
- ⚙️ fastcore-v1.0.0.jar (The Mandatory Native Loader)
> [!IMPORTANT]
> Both JARs must be in your classpath for the native JNI calls to function correctly.
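A quick runtime check can catch a missing JAR early instead of failing later with an `UnsatisfiedLinkError`. The helper below is a hypothetical sketch (not part of FastSTT) that scans the `java.class.path` system property for both artifact names:

```java
import java.util.ArrayList;
import java.util.List;

public class ClasspathCheck {
    /** Returns the required artifact names absent from the given classpath string. */
    static List<String> missing(String classpath) {
        List<String> absent = new ArrayList<>();
        for (String jar : new String[]{"faststt", "fastcore"}) {
            if (!classpath.contains(jar)) {
                absent.add(jar);
            }
        }
        return absent;
    }

    public static void main(String[] args) {
        String cp = System.getProperty("java.class.path");
        for (String jar : missing(cp)) {
            System.err.println("WARNING: " + jar + " JAR not found on classpath");
        }
    }
}
```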
FastSTT comes with a built-in installer to help you download and manage Whisper models.
- Clone this repository.
- Run `run-installer.bat`.
- Choose Option 1 to download a Whisper model (e.g., `base.bin`).
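A downloaded model can be sanity-checked by its header: GGML container files, as used by whisper.cpp, begin with the little-endian magic `0x67676d6c` ("ggml"). The following JDK-only sketch (an illustration, not part of the installer) verifies that magic:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class GgmlCheck {
    // GGML files start with the little-endian uint32 0x67676d6c ("ggml").
    static final int GGML_MAGIC = 0x67676d6c;

    static boolean hasGgmlMagic(byte[] header) {
        if (header.length < 4) return false;
        int magic = (header[0] & 0xff)
                  | (header[1] & 0xff) << 8
                  | (header[2] & 0xff) << 16
                  | (header[3] & 0xff) << 24;
        return magic == GGML_MAGIC;
    }

    public static void main(String[] args) throws IOException {
        if (args.length == 0) {
            System.out.println("usage: java GgmlCheck <model.bin>");
            return;
        }
        try (InputStream in = Files.newInputStream(Path.of(args[0]))) {
            byte[] header = in.readNBytes(4);
            System.out.println(hasGgmlMagic(header)
                    ? "valid GGML model" : "not a GGML file");
        }
    }
}
```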
MIT License — See LICENSE for details.
Part of the FastJava Ecosystem — Making the JVM faster.