A Gradio-based browser interface for Whisper. You can use it as an Easy Subtitle Generator!
If you wish to try this on Colab, you can do so here!
The app is able to run with Pinokio.
To run with Docker:

1. Install and launch Docker Desktop.
2. Clone the repository:
```
git clone https://github.com/jhj0517/Whisper-WebUI.git
```
3. Build the image:
```
docker compose build
```
4. Run the container:
```
docker compose up
```
5. Open the WebUI in your browser at http://localhost:7860.

If needed, update the docker-compose.yaml to match your environment.
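For Nvidia GPU passthrough, the relevant part of docker-compose.yaml usually looks something like the sketch below (the service name and exact keys here are illustrative; check the repository's own docker-compose.yaml for the actual file):

```yaml
services:
  whisper-webui:
    build: .
    ports:
      - "7860:7860"        # host:container port for the Gradio UI
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```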
To run this WebUI locally, you need git, Python (3.10 <= python <= 3.12), and FFmpeg.
By default, the WebUI assumes you're using an Nvidia GPU and CUDA 12.8. If you're using Intel hardware or a different CUDA version, read requirements.txt and edit the --extra-index-url to match your device.
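For example, for the default CUDA 12.8 setup the top of requirements.txt would contain a PyTorch wheel index line like the one below (the `cu128` tag follows the standard PyTorch index naming; swap it for your CUDA version, e.g. `cu126`, or for the CPU index if you have no GPU):

```
--extra-index-url https://download.pytorch.org/whl/cu128
torch
torchaudio
```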
Please follow the links below to install the necessary software:
Python 3.10 ~ 3.12 is recommended. After installing FFmpeg, make sure to add the FFmpeg/bin folder to your system PATH!
1. Clone the repository:
```
git clone https://github.com/jhj0517/Whisper-WebUI.git
```
2. Run install.bat or install.sh to install dependencies. (It will create a venv directory and install dependencies there.)
3. Run start-webui.bat or start-webui.sh. (It will run python app.py after activating the venv.)

You can also run the project with command line arguments if you like; see the wiki for a guide to the arguments.
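For reference, the install and start scripts boil down to roughly the following manual steps (a sketch of the typical venv workflow; the actual scripts may differ in details):

```shell
# Create and activate a virtual environment in the project directory
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate

# Install dependencies (edit --extra-index-url first if needed)
pip install -r requirements.txt

# Launch the WebUI; it will be served at http://localhost:7860
python app.py
```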
This project is integrated with faster-whisper by default for better VRAM usage and transcription speed.
According to faster-whisper, the efficiency of the optimized whisper model is as follows:
| Implementation | Precision | Beam size | Time | Max. GPU memory | Max. CPU memory |
|---|---|---|---|---|---|
| openai/whisper | fp16 | 5 | 4m30s | 11325MB | 9439MB |
| faster-whisper | fp16 | 5 | 54s | 4755MB | 3244MB |
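To illustrate what the faster-whisper backend does under the hood, the library is typically used like this (a minimal sketch; the model name and audio file are placeholders, and a CUDA device is assumed, matching the benchmark setup):

```python
from faster_whisper import WhisperModel

# compute_type="float16" corresponds to the fp16 precision in the table above
model = WhisperModel("large-v2", device="cuda", compute_type="float16")

# beam_size=5 matches the benchmark configuration
segments, info = model.transcribe("audio.mp3", beam_size=5)

print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

Note that `segments` is a generator, so the transcription only runs as you iterate over it.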
If you want to use an implementation other than faster-whisper, use the --whisper_type argument with the repository name. Read the wiki for more info about the CLI arguments.
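For example, a launch selecting a different implementation might look like this (the argument value shown is an assumption based on the implementation's repository name; see the wiki for the exact accepted values):

```
python app.py --whisper_type whisper
```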
If you want to use a fine-tuned model, manually place the model files in models/Whisper/ under the subdirectory corresponding to the implementation.
Alternatively, if you enter a Hugging Face repo id (e.g., deepdml/faster-whisper-large-v3-turbo-ct2) in the "Model" dropdown, the model will be downloaded into that directory automatically.
If you're interested in deploying this app as a REST API, please check out /backend.
Any PRs that add a translation to translation.yaml would be greatly appreciated!