CleanStream is an OBS plugin that removes unwanted words and utterances from live audio streams using real-time, local AI.
- Add the plugin to any audio-generating source
- Adjust the settings
Check out the latest releases for downloads and install instructions.
This video walkthrough (YouTube) will explain various parts of the code if you're looking to learn from what I've discovered.
The filter runs Whisper in real time to detect words in small chunks of the incoming audio. For each chunk it produces a decision, which then determines whether the audio rendering plays the original audio or, for example, a beep or silence. The processing happens in a separate thread, so there is a built-in lag/delay mechanism to make sure the audio decision (play, beep, silence) stays in sync with the actual audio playback, based on the timestamp. The built-in delay is adaptive, since some systems (e.g. with CUDA) can make decisions faster.
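Below is a minimal, self-contained C++ sketch of that mechanism, assuming a timestamped chunk queue with an adaptive delay. Every name and constant in it (DelayedFilter, the 1 kHz beep, the 100 ms floor on the delay) is a hypothetical illustration, not the plugin's actual code, which hooks into the OBS audio filter API.

```cpp
// Hypothetical sketch of the delayed play/beep/silence mechanism described
// above; the real plugin integrates with the OBS audio filter API instead.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

enum class Decision { Play, Beep, Silence };

struct Chunk {
    uint64_t timestamp_ns;               // capture timestamp of the first sample
    std::vector<float> samples;          // mono PCM for this chunk
    Decision decision = Decision::Play;  // default: pass the audio through
};

class DelayedFilter {
public:
    explicit DelayedFilter(uint64_t initial_delay_ns) : delay_ns_(initial_delay_ns) {}

    // Audio thread: enqueue the incoming chunk, and release the oldest chunk
    // once it has aged past the current delay, with its decision applied.
    // Returns nullptr while the buffer is still filling.
    const std::vector<float> *push_and_pop(Chunk &&incoming, uint64_t now_ns) {
        std::lock_guard<std::mutex> lk(mutex_);
        queue_.push_back(std::move(incoming));

        Chunk &oldest = queue_.front();
        if (now_ns - oldest.timestamp_ns < delay_ns_)
            return nullptr;  // decision thread still has time to catch up

        apply_decision(oldest);
        out_ = std::move(oldest.samples);
        queue_.pop_front();
        return &out_;
    }

    // Inference thread: record Whisper's verdict for the chunk with the
    // matching timestamp, and adapt the delay to the observed inference time
    // (fast systems, e.g. with CUDA, end up with less added latency).
    void set_decision(uint64_t timestamp_ns, Decision d, uint64_t inference_ns) {
        std::lock_guard<std::mutex> lk(mutex_);
        for (Chunk &c : queue_) {
            if (c.timestamp_ns == timestamp_ns) {
                c.decision = d;
                break;
            }
        }
        delay_ns_ = std::max<uint64_t>(inference_ns * 3 / 2, 100'000'000);  // >= 100 ms
    }

private:
    static void apply_decision(Chunk &c) {
        switch (c.decision) {
        case Decision::Play:
            break;  // leave the original audio untouched
        case Decision::Silence:
            std::fill(c.samples.begin(), c.samples.end(), 0.0f);
            break;
        case Decision::Beep:  // replace the chunk with a 1 kHz tone at 48 kHz
            for (size_t i = 0; i < c.samples.size(); ++i)
                c.samples[i] = 0.2f * std::sin(2.0f * 3.14159265f * 1000.0f * i / 48000.0f);
            break;
        }
    }

    std::mutex mutex_;
    std::deque<Chunk> queue_;
    std::vector<float> out_;
    uint64_t delay_ns_;
};
```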
Here is an illustration of the process:
- OBS version 32+: use plugin versions 0.2.0+
- OBS version 30+: use plugin versions 0.0.4+
- OBS version 29: use plugin versions 0.0.2+
- OBS version 28: use plugin version 0.0.1
We do not support older versions of OBS since the plugin is using newer APIs.
CleanStream is an OBS plugin that removes unwanted words and utterances from live audio streams: filler sounds such as "uh" and "um", as well as other words you can configure, like profanity.
See our resource on the OBS Forums for additional information.
It uses a neural network (OpenAI Whisper) to predict the speech in real time and remove the unwanted words.
It uses the Whisper.cpp project from ggerganov to run the Whisper network very efficiently.
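As a rough idea of what driving whisper.cpp on a single chunk can look like, here is a hedged sketch that transcribes a short buffer and checks the text against a configurable word list. The whisper.cpp calls are taken from its public C API (exact names and parameters can differ between versions); the helper function, model path, and word-matching logic are assumptions for illustration, not the plugin's actual implementation.

```cpp
// Rough sketch: run one audio chunk through whisper.cpp and check the
// transcript against a configurable word list. Illustrative only.
#include <whisper.h>

#include <algorithm>
#include <cctype>
#include <string>
#include <vector>

// Hypothetical helper, not part of the plugin: returns true if the chunk's
// transcript contains any of the unwanted words.
bool chunk_has_unwanted_word(whisper_context *ctx,
                             const std::vector<float> &pcm_16khz_mono,
                             const std::vector<std::string> &unwanted) {
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.print_progress = false;
    params.print_realtime = false;
    params.single_segment = true;  // we only feed a small chunk at a time
    params.language = "en";

    if (whisper_full(ctx, params, pcm_16khz_mono.data(), (int)pcm_16khz_mono.size()) != 0)
        return false;  // inference failed; let the original audio through

    for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
        std::string text = whisper_full_get_segment_text(ctx, i);
        std::transform(text.begin(), text.end(), text.begin(), ::tolower);
        for (const std::string &word : unwanted)
            if (text.find(word) != std::string::npos)
                return true;  // this chunk should be beeped or silenced
    }
    return false;
}

int main() {
    // The model path is a placeholder; point it at whichever ggml Whisper model you use.
    whisper_context *ctx = whisper_init_from_file_with_params(
        "ggml-tiny.en.bin", whisper_context_default_params());
    if (!ctx) return 1;

    std::vector<float> chunk(WHISPER_SAMPLE_RATE / 2, 0.0f);  // 0.5 s of silence at 16 kHz
    bool flagged = chunk_has_unwanted_word(ctx, chunk, {"uh", "um"});
    (void)flagged;  // in the plugin this would drive the play/beep/silence decision

    whisper_free(ctx);
    return 0;
}
```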
It is working, though, and you can try it out. Please report any issues you find. 🙏 (submit an issue or meet us on https://discord.gg/KbjGU2vvUz)
We're working on improving the plugin and adding more features. If you have any ideas or suggestions, please open an issue.
The plugin was built and tested on macOS, Windows, and Ubuntu Linux. Help with building and packaging for other OSs is appreciated.
The CI build pipelines take care of the heavy lifting. Use them to build the plugin locally.
Start by cloning this repo to a directory of your choice.
Set the MACOS_ARCH environment variable to x86_64 or arm64, as appropriate for the architecture you want to build for:

```sh
$ export MACOS_ARCH="arm64"
```

Using the CI pipeline scripts, locally you would just call the zsh script:

```sh
$ ./.github/scripts/build-macos.zsh
```

The above script should succeed, and the plugin files will reside in the ./release folder off the root. Copy the files to the OBS plugins directory, e.g. /Users/you/Library/Application Support/obs-studio/obs-plugins.
To get the .pkg installer file, run:

```sh
$ ./.github/scripts/package-macos.zsh -c Release -t macos-x86_64
```

Note that the outputs in e.g. build_x86_64 may end up in the Release folder rather than the install folder that package-macos.zsh expects, so you may need to rename the folder from build_x86_64/Release to build_x86_64/install.
For the Linux build, set the ACCELERATION environment variable to generic, nvidia, or amd to use the appropriate pre-built whisper libraries. nvidia includes CUDA support, amd includes ROCm support via hipblas, and all variants include Vulkan, OpenCL, and OpenBLAS support.

```sh
$ export ACCELERATION="nvidia"
```

Use the CI scripts again:

```sh
$ ./.github/scripts/build-linux.sh
```

For the Windows build, set the ACCELERATION environment variable to generic, nvidia, or amd to use the appropriate pre-built whisper libraries. nvidia includes CUDA support, amd includes ROCm support via hipblas, and all variants include Vulkan and OpenBLAS support.
```powershell
> $env:ACCELERATION = "nvidia"
```

Use the CI scripts again, for example:

```powershell
> .github/scripts/Build-Windows.ps1 -Configuration Release
```

The build should exist in the ./release folder off the root. You can manually install the files in the OBS directory:

```powershell
> Copy-Item -Recurse -Force "release\Release\*" -Destination "C:\Program Files\obs-studio\"
```
