# FastRTC

The Real-Time Communication Library for Python.

Turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.
## Installation
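Install the base package from PyPI (the commands below assume the published package name, `fastrtc`):

```bash
pip install fastrtc
```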
To use built-in pause detection (see ReplyOnPause), speech-to-text (see Speech To Text), and text-to-speech (see Text To Speech), install the `vad`, `stt`, and `tts` extras:
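```bash
pip install "fastrtc[vad, stt, tts]"
```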
## Quickstart

Import the `Stream` class and pass in a handler. The `Stream` has three main methods:

- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with Gradio.
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
- `.mount(app)`: Mount the stream on a FastAPI app. Perfect for integrating with your existing production system.
**Echo audio:**

```python
from fastrtc import Stream, ReplyOnPause
import numpy as np


def echo(audio: tuple[int, np.ndarray]):
    # The function will be passed the audio until the user pauses.
    # Implement any iterator that yields audio.
    # See "LLM Voice Chat" for a more complete example.
    yield audio


stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
)
```
**LLM voice chat:**

```python
import os

from fastrtc import ReplyOnPause, Stream, get_stt_model, get_tts_model
from openai import OpenAI

sambanova_client = OpenAI(
    api_key=os.getenv("SAMBANOVA_API_KEY"), base_url="https://api.sambanova.ai/v1"
)
stt_model = get_stt_model()
tts_model = get_tts_model()


def echo(audio):
    # Transcribe the user's speech, send it to the LLM,
    # then stream the response back as synthesized audio.
    prompt = stt_model.stt(audio)
    response = sambanova_client.chat.completions.create(
        model="Meta-Llama-3.2-3B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=200,
    )
    prompt = response.choices[0].message.content
    for audio_chunk in tts_model.stream_tts_sync(prompt):
        yield audio_chunk


stream = Stream(ReplyOnPause(echo), modality="audio", mode="send-receive")
```
**Object detection:**

```python
import cv2
import gradio as gr
from huggingface_hub import hf_hub_download

from fastrtc import Stream

# git clone https://huggingface.co/spaces/fastrtc/object-detection
# for the YOLOv10 implementation
from inference import YOLOv10

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)
model = YOLOv10(model_file)


def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return cv2.resize(new_image, (500, 500))


stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ],
)
```
Run the stream with any of the three methods described in the Quickstart. For example:
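```python
stream.ui.launch()  # built-in Gradio UI

# Or get a free temporary phone number (Hugging Face token required):
# stream.fastphone()
```

To serve the stream from your own app instead, mount it on FastAPI with `.mount(app)` as described above.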
Learn more about the `Stream` in the user guide.
## Key Features

- **Automatic Voice Detection and Turn Taking** built in, so you only need to worry about the logic for responding to the user.
- **Automatic UI** - Use the `.ui.launch()` method to launch the WebRTC-enabled built-in Gradio UI.
- **Automatic WebRTC Support** - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend (see the sketch after this list)!
- **WebSocket Support** - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebSocket endpoint for your own frontend!
- **Automatic Telephone Support** - Use the `.fastphone()` method of the stream to launch the application and get a free temporary phone number!
- **Completely customizable backend** - A `Stream` can easily be mounted on a FastAPI app, so you can extend it to fit your production application. See the Talk To Claude demo for an example of how to serve a custom JS frontend.
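A minimal sketch of the FastAPI integration, reusing the echo handler pattern from the Quickstart (the module path in the uvicorn comment is illustrative):

```python
from fastapi import FastAPI

from fastrtc import ReplyOnPause, Stream


def echo(audio):
    # Play the user's audio back once they pause.
    yield audio


app = FastAPI()
stream = Stream(ReplyOnPause(echo), modality="audio", mode="send-receive")

# Adds the stream's WebRTC and WebSocket endpoints to the FastAPI app,
# so a custom frontend can connect to them.
stream.mount(app)

# Run with: uvicorn app:app
```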
## Examples

See the cookbook.

Follow and join our organization on Hugging Face!