Turn-taking Gallery
A collection of turn-taking algorithms and Voice Activity Detection (VAD) models ready to use with FastRTC. Click on the tags below to find the model you're looking for!
Gallery
HumAware VAD
Description: HumAware-VAD is a fine-tuned version of Silero-VAD, specifically trained to distinguish humming from actual speech. Standard VAD models often misclassify humming as speech, leading to inaccurate speech segmentation. HumAware-VAD improves detection accuracy in environments with background humming, music, and vocal sounds.
Walkie Talkie
Description: The user's turn ends when they finish a sentence with the word "over". For example, "Hello, how are you? Over." would end the user's turn and trigger the response. This is intended as a simple reference implementation of a custom turn-taking algorithm.
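The core of the Walkie Talkie rule, checking whether a transcript ends with the word "over", can be sketched without any framework code. The helper name turn_is_over is illustrative, not part of the package:

```python
import re


def turn_is_over(transcript: str) -> bool:
    """Return True when the user's utterance ends with the word 'over'.

    Strips trailing punctuation and ignores case, so "Over." and
    "over!" both end the turn.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    return bool(words) and words[-1] == "over"


print(turn_is_over("Hello, how are you? Over."))  # True
print(turn_is_over("Hold on, not done yet"))      # False
```

In a real implementation this check would run on the transcription of the user's most recent audio, and a True result would end the turn and trigger the response.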
What is this for?
By default, FastRTC uses the ReplyOnPause class to handle turn-taking. However, you may want to tweak this behavior to better fit your use case.
In this gallery, you can find a collection of turn-taking algorithms and VAD models that you can use to customize the turn-taking behavior to your needs. Each card contains install and usage instructions.
How to add your own Turn-taking Algorithm or VAD model
Turn-taking Algorithm
- Typically you will want to subclass the ReplyOnPause class and override the determine_pause method.
- Then package your class into a pip-installable package and publish it to PyPI.
- Open a PR to add your model to the gallery!
Example Implementation
See the Walkie Talkie package for an example implementation of a turn-taking algorithm.
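As a framework-free sketch of the kind of decision such an override makes, the function below ends the turn when the tail of the audio buffer is quiet for long enough. It is standalone and illustrative; in FastRTC this logic would live inside the determine_pause override of your ReplyOnPause subclass, and the threshold values are arbitrary:

```python
import numpy as np


def determine_pause(audio: np.ndarray, sample_rate: int,
                    energy_threshold: float = 0.01,
                    min_silence_s: float = 0.5) -> bool:
    """Return True when the last min_silence_s seconds of audio are silent."""
    needed = int(min_silence_s * sample_rate)
    if audio.size < needed:
        return False  # not enough audio yet to judge
    tail = audio[-needed:].astype(np.float64)
    rms = float(np.sqrt(np.mean(tail * tail)))  # energy of the trailing window
    return rms < energy_threshold


sr = 16_000
speech = 0.2 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)
silence = np.zeros(sr, dtype=np.float32)
print(determine_pause(np.concatenate([speech, silence]), sr))  # True: trailing silence
print(determine_pause(speech, sr))                             # False: still talking
```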
VAD Model
- Your model can be implemented in any framework you want, but it must implement the PauseDetectionModel protocol:

```python
ModelOptions: TypeAlias = Any


class PauseDetectionModel(Protocol):
    def vad(
        self,
        audio: tuple[int, NDArray[np.int16] | NDArray[np.float32]],
        options: ModelOptions,
    ) -> tuple[float, list[AudioChunk]]: ...

    def warmup(
        self,
    ) -> None: ...
```
- The vad method should take the audio tuple and return a tuple of the form (speech_duration, list[AudioChunk]), where speech_duration is the duration of the human speech in the audio chunk and AudioChunk is a dictionary with the fields (start, end), where start and end are the start and end times of the human speech in the audio array.
- The audio tuple should be of the form (sample_rate, audio_array), where sample_rate is the sample rate of the audio array and audio_array is a numpy array of the audio data. It can be of type np.int16 or np.float32.
- The warmup method is optional, but recommended so the model can be warmed up when the server starts.
- Once you have your model implemented, you can use it in the ReplyOnPause class by passing in the model and any options you need:

```python
from fastrtc import ReplyOnPause, Stream
from your_model import YourModel, YourModelOptions


def echo(audio):
    yield audio


model = YourModel()  # implements the PauseDetectionModel protocol
reply_on_pause = ReplyOnPause(
    echo,
    model=model,
    options=YourModelOptions(),
)
stream = Stream(reply_on_pause, mode="send-receive", modality="audio")
stream.ui.launch()
```
- Open a PR to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
Package Naming Convention
It is recommended to name your package fastrtc-<package-name> so developers can easily find it on PyPI.