
MTranServer

中文 | English | 日本語 | Français | Deutsch

A high-performance offline translation model server with minimal resource requirements - no GPU needed. Average response time of 50ms per request. Supports translation of major languages worldwide.

Note: This model server focuses on offline translation, response speed, cross-platform deployment, and local execution to achieve unlimited free translation. Due to model size and optimization constraints, the translation quality will not match that of large language models. For high-quality translation, consider using online large language model APIs.

v4 has optimized memory usage, further improved speed, and enhanced stability. If you are using an old version, it is recommended to upgrade immediately!

Online Demo

| Website | TOKEN | Other Interface | Provider |
| --- | --- | --- | --- |
| ipacel.cc | __IpacEL_MT_API_TOKEN__ | Immersive Translate: https://MTranServer.ipacel.cc/imme?token=__IpacEL_MT_API_TOKEN__ | @ApliNi |

Thanks to community contributors for providing trial services for users!

Usage Guide

The desktop app now supports one-click launch on Windows, macOS, and Linux.

Desktop

Manual Download

Download the latest desktop version for your platform from Releases, install and launch directly.

After the desktop app launches, it will create a tray menu to conveniently manage the service.

The program includes a simple UI and online debug documentation.

For detailed usage instructions, see Ecosystem Projects.

Preview (the latest version may look different):

UI

Documentation

Server

It is recommended to use the desktop app or Docker deployment for better performance and convenience. Manual server deployment is for advanced users.

Quick Start

Programmers can start the server directly via command line:

npx mtranserver@latest

npx can be replaced with any package manager you prefer, such as bunx, pnpx, etc.

Important Note:

  • When translating a language pair for the first time, the server automatically downloads the corresponding translation model (unless offline mode is enabled). This may take some time depending on your network speed and the model size.

  • After the model is downloaded, subsequent translation requests enjoy millisecond-level response times. It is recommended to run one test translation before regular use so the server can pre-download and load the model.

  • The program is updated frequently. If you encounter problems, please try updating to the latest version.
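To illustrate the warm-up step, here is a minimal sketch that sends one translation request to trigger the model download. It assumes the server runs at the default http://localhost:8989 and uses the DeepLX-compatible /deeplx endpoint (see Compatible Interfaces below), whose request shape follows the public DeepLX convention.

```typescript
// Warm-up sketch (assumptions: default host/port, DeepLX-compatible
// endpoint enabled). One request triggers the en->zh model download.
const SERVER = "http://localhost:8989";

// Build a DeepLX-style request body (text / source_lang / target_lang,
// with uppercase language codes, per the public DeepLX convention).
function warmupBody(text: string, source: string, target: string) {
  return {
    text,
    source_lang: source.toUpperCase(),
    target_lang: target.toUpperCase(),
  };
}

// Send one test translation so the server pre-downloads and loads the model.
async function warmup(): Promise<void> {
  const res = await fetch(`${SERVER}/deeplx`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(warmupBody("Hello, world!", "en", "zh")),
  });
  console.log(res.ok ? "model loaded" : `warm-up failed: HTTP ${res.status}`);
}
```

The first call may take a while as the model downloads; subsequent calls should return in milliseconds.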

Quick Install

npm i -g mtranserver@latest

npm can be replaced with any package manager you prefer, such as bun, pnpm, etc.

Then run mtranserver.

Docker Compose Deployment

Create a compose.yml file in an empty directory, with the following content:

```yaml
services:
  mtranserver:
    image: xxnuo/mtranserver:latest
    container_name: mtranserver
    restart: unless-stopped
    ports:
      - "8989:8989"
    environment:
      - MT_HOST=0.0.0.0
      - MT_PORT=8989
      - MT_OFFLINE=false
      # - MT_API_TOKEN=your_secret_token_here
    volumes:
      - ./models:/app/models
```

Then pull the image and start the service:

```shell
docker pull xxnuo/mtranserver:latest
docker compose up -d
```

Ecosystem Projects

IDE Plugins

MTranCode Comment Translation Plugin

Supports VS Code, Cursor, Augment, and other VS Code-based IDEs.

Search for MTranCode in the extension marketplace to install the comment translation plugin.

The plugin defaults to calling the server at http://localhost:8989 for comment and code translation. You can adjust it in settings.

This plugin is forked from vscode-comment-translate.

Browser Extension

TODO: Under active development.

If you develop a derivative project, feel free to submit a PR. I will add your project to the ecosystem list.

By the way, the project is also published to npm; you can call its simple library interface from other programs to add translation functionality. See the TypeScript type definitions for details.

Compatible Interfaces

The server provides compatible endpoints for multiple translation plugins:

| Endpoint | Method | Description | Supported Plugins |
| --- | --- | --- | --- |
| /imme | POST | Immersive Translate plugin endpoint | Immersive Translate |
| /kiss | POST | Kiss Translator plugin endpoint | Kiss Translator |
| /deepl | POST | DeepL API v2-compatible endpoint | Clients supporting the DeepL API |
| /deeplx | POST | DeepLX-compatible endpoint | Clients supporting the DeepLX API |
| /hcfy | POST | Selection Translator-compatible endpoint | Selection Translator |
| /google/language/translate/v2 | POST | Google Translate API v2-compatible endpoint | Clients supporting the Google Translate API |
| /google/translate_a/single | GET | Google translate_a/single-compatible endpoint | Clients supporting Google web translation |
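As an illustration, a minimal client for the Google Translate v2-compatible endpoint might look like the sketch below. The request and response shapes are assumed to follow the public Google Translate API v2 convention, and the server address is assumed to be the default http://localhost:8989.

```typescript
// Sketch of a client for the Google Translate v2-compatible endpoint.
// Assumptions: default server address, no API token configured, and
// request/response shapes per the public Google Translate API v2.
const BASE = "http://localhost:8989";

interface GoogleV2Response {
  data: { translations: { translatedText: string }[] };
}

// Build the URL and JSON body for one translation request.
function googleV2Request(q: string, source: string, target: string) {
  return {
    url: `${BASE}/google/language/translate/v2`,
    body: { q, source, target, format: "text" },
  };
}

// Pull the translated string out of a v2-style response.
function extractTranslation(res: GoogleV2Response): string {
  return res.data.translations[0].translatedText;
}

async function translate(q: string, source: string, target: string): Promise<string> {
  const { url, body } = googleV2Request(q, source, target);
  const r = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return extractTranslation((await r.json()) as GoogleV2Response);
}
```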

Plugin Configuration Guide:

Note:

  • Immersive Translation - Enable Beta features in developer mode in Settings to see Custom API Settings under Translation Services (official tutorial with images). Then increase the Maximum Requests per Second in Custom API Settings to fully utilize server performance. I set Maximum Requests per Second to 512 and Maximum Paragraphs per Request to 1. You can adjust based on your server hardware.

  • Kiss Translator - Scroll down in Settings page to find the custom interface Custom. Similarly, set Maximum Concurrent Requests and Request Interval Time to fully utilize server performance. I set Maximum Concurrent Requests to 100 and Request Interval Time to 1. You can adjust based on your server configuration.

Configure the plugin's custom interface address according to the table below.

| Name | URL | Plugin Setting |
| --- | --- | --- |
| Immersive Translate (no password) | http://localhost:8989/imme | Custom API Settings - API URL |
| Immersive Translate (with password) | http://localhost:8989/imme?token=your_token | Same as above; replace your_token with your MT_API_TOKEN value |
| Kiss Translator (no password) | http://localhost:8989/kiss | Interface Settings - Custom - URL |
| Kiss Translator (with password) | http://localhost:8989/kiss | Same as above; fill KEY with your_token |
| DeepL-compatible | http://localhost:8989/deepl | Use DeepL-Auth-Key or Bearer authentication |
| DeepLX-compatible | http://localhost:8989/deeplx | Supports token parameter or Bearer authentication |
| Google-compatible | http://localhost:8989/google/language/translate/v2 | Use key parameter or Bearer authentication |
| Selection Translator | http://localhost:8989/hcfy | Supports token parameter or Bearer authentication |
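The authentication variants in the table can be sketched as small helpers. This is a sketch assuming MT_API_TOKEN is set on the server; the header names follow the upstream DeepL and Bearer conventions, and the query-parameter form matches the URLs shown above.

```typescript
// Sketch of the three authentication styles listed in the table.
// Assumption: MT_API_TOKEN is configured on the server side.

// Bearer header, accepted by the DeepL/DeepLX/Google-compatible endpoints.
function bearerHeaders(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}`, "Content-Type": "application/json" };
}

// DeepL-style auth header for the /deepl endpoint.
function deeplHeaders(token: string): Record<string, string> {
  return { Authorization: `DeepL-Auth-Key ${token}`, "Content-Type": "application/json" };
}

// Query-parameter auth, e.g. the Immersive Translate URL in the table.
function tokenUrl(base: string, path: string, token: string): string {
  return `${base}${path}?token=${encodeURIComponent(token)}`;
}
```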

Regular users can start using the service after setting up the plugin interface address according to the table above.

Command Line Options

```
./mtranserver [options]

Options:
  -version, -v              Show version information
  -log-level string         Log level (debug, info, warn, error) (default "warn")
  -config-dir string        Configuration directory (default "~/.config/mtran/server")
  -model-dir string         Model directory (default "~/.config/mtran/models")
  -host string              Server host address (default "0.0.0.0")
  -port string              Server port (default "8989")
  -ui                       Enable Web UI (default true)
  -offline                  Enable offline mode; disable automatic model download (default false)
  -worker-idle-timeout int  Worker idle timeout in seconds (default 300)
  --download pairs...       Download models for specific language pairs (e.g. --download en_zh zh_en)
  --languages               List all supported language pairs for download

Note: --download and --languages require network access and do not work in offline mode.

Examples:
  ./mtranserver --host 127.0.0.1 --port 8080
  ./mtranserver --ui --offline
  ./mtranserver -v
```

Comparison with Similar Projects

Here are some similar projects; you can try them if you have other needs:

| Project Name | Memory Usage | Concurrency | Translation Quality | Speed | Additional Info |
| --- | --- | --- | --- | --- | --- |
| NLLB | Very High | Poor | Average | Slow | The Android port RTranslator has many optimizations, but resource usage is still high and it is not fast |
| LibreTranslate | Very High | Average | Average | Medium | A mid-range CPU processes 3 sentences/s; a high-end CPU processes 15-20 sentences/s. Details |
| OPUS-MT | High | Average | Below Average | Fast | Performance Tests |
| Any LLM | Extremely High | Dynamic | Very Good | Very Slow | High hardware requirements. For high-concurrency translation, the vLLM framework is recommended |
| MTranServer (this project) | Low | High | Average | Ultra Fast | 50ms average response time per request |

The table reflects simple, non-rigorous testing of CPU-only, English-to-Chinese scenarios with non-quantized models; it is for reference only.

Advanced Configuration Guide

Please refer to API_en.md and the API documentation after startup.

Star History

Star History Chart

Thanks

Bergamot Project for the awesome idea of local translation.

Mozilla for the models.