Live Demo | Quick Start | FAQ | Chrome Extension
Development Docs | Vercel Deployment Guide | MCP Deployment Guide | DeepWiki Docs | ZRead Docs
Prompt Optimizer is a powerful AI prompt optimization tool that helps you write better AI prompts and improve the quality of AI outputs. It supports four usage methods: web application, desktop application, Chrome extension, and Docker deployment.
### 1. Role-playing Dialogue: Unleashing the Potential of Small Models
In cost-effective production scenarios or privacy-focused local deployments, structured prompts enable small models to consistently enter character roles, providing immersive and highly consistent role-playing experiences that effectively unleash their potential.
### 2. Knowledge Graph Extraction: Ensuring Production Environment Stability
In production environments that require programmatic processing, high-quality prompts can significantly lower the level of model intelligence required, allowing more economical small models to reliably produce output in the specified format. This tool aims to help developers reach that goal quickly, accelerating development, improving stability, and cutting costs while boosting efficiency.
### 3. Poetry Writing: Assisting Creative Exploration and Requirement Customization
When working with a powerful AI, our goal is not just to get a "good" answer, but to get the "desired," unique answer. This tool helps refine vague inspiration (like "write a poem") into concrete requirements (which theme, which imagery, which emotions), assisting you in exploring, discovering, and precisely expressing your creativity to co-create unique works with AI.
For detailed usage instructions, please refer to the Image Mode Documentation
Direct access: https://prompt.always200.com
This is a pure frontend project with all data stored locally in your browser and never uploaded to any server, making the online version both safe and reliable to use.
Method 1: One-click deployment to your own Vercel:
Method 2: Fork the project and import to Vercel (Recommended):
- `ACCESS_PASSWORD`: Set an access password to enable access restriction
- `VITE_OPENAI_API_KEY` etc.: Configure API keys for various AI service providers

For more detailed deployment steps and important notes, please check the Vercel Deployment Guide.
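Environment variables can be set in the Vercel dashboard; as an aside, the same variables can also be added from the terminal with the Vercel CLI (a sketch, assuming the CLI is installed and the project is linked — the CLI prompts for each value):

```bash
# Add variables to the production environment
vercel env add ACCESS_PASSWORD production
vercel env add VITE_OPENAI_API_KEY production

# Redeploy so the new variables take effect
vercel --prod
```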
Download the latest version from GitHub Releases. We provide both installer and archive formats for each platform.
- Installer (`*.exe`, `*.dmg`, `*.AppImage`, etc.): strongly recommended, as it supports automatic updates.
- Archive (`*.zip`): extract and use, but it cannot auto-update.

Core advantages of the desktop application:
- Automatic updates: the installer version (`.exe`, `.dmg`) can automatically check for and update to the latest version.

### 5. Docker Deployment

<details>
<summary>Click to view Docker deployment steps</summary>

```bash
# ACCESS_USERNAME is optional (defaults to "admin"); ACCESS_PASSWORD enables password protection
docker run -d -p 8081:80 \
  -e VITE_OPENAI_API_KEY=your_key \
  -e ACCESS_USERNAME=your_username \
  -e ACCESS_PASSWORD=your_password \
  --restart unless-stopped \
  --name prompt-optimizer \
  linshen/prompt-optimizer
```
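Once the container is running, you can confirm it is up and watch its logs; the web interface is then reachable on the mapped port:

```bash
# Confirm the container is running and tail its logs
docker ps --filter name=prompt-optimizer
docker logs -f prompt-optimizer

# Web interface: http://localhost:8081
```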
</details>

### 6. Docker Compose Deployment

<details>
<summary>Click to view Docker Compose deployment steps</summary>

```bash
# 1. Clone the repository
git clone https://github.com/linshenkx/prompt-optimizer.git
cd prompt-optimizer

# 2. Optional: create a .env file for API keys and authentication
cat > .env << EOF
# API Key Configuration
VITE_OPENAI_API_KEY=your_openai_api_key
VITE_GEMINI_API_KEY=your_gemini_api_key
VITE_DEEPSEEK_API_KEY=your_deepseek_api_key
VITE_ZHIPU_API_KEY=your_zhipu_api_key
VITE_SILICONFLOW_API_KEY=your_siliconflow_api_key

# Basic Authentication (Password Protection)
ACCESS_USERNAME=your_username  # Optional, defaults to "admin"
ACCESS_PASSWORD=your_password  # Set access password
EOF

# 3. Start the service
docker compose up -d

# 4. View logs
docker compose logs -f

# 5. Access the service
#    Web interface: http://localhost:8081
#    MCP server:    http://localhost:8081/mcp
```
You can also directly edit the docker-compose.yml file to customize your configuration:
```yaml
services:
  prompt-optimizer:
    # Use Docker Hub image
    image: linshen/prompt-optimizer:latest
    container_name: prompt-optimizer
    restart: unless-stopped
    ports:
      - "8081:80"  # Web application port (MCP server accessible via the /mcp path)
    environment:
      - VITE_OPENAI_API_KEY=your_openai_key
      - VITE_GEMINI_API_KEY=your_gemini_key
      # Access control (optional)
      - ACCESS_USERNAME=admin
      - ACCESS_PASSWORD=your_password
```
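After editing `docker-compose.yml`, re-run Compose; it recreates the container whenever its configuration has changed:

```bash
docker compose up -d
```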
</details>

Prompt Optimizer now supports the Model Context Protocol (MCP), enabling integration with AI applications that support MCP, such as Claude Desktop.
When running via Docker, the MCP Server automatically starts and can be accessed via http://ip:port/mcp.
MCP Server requires API key configuration to function properly. Main MCP-specific configurations:
```bash
# MCP Server Configuration
MCP_DEFAULT_MODEL_PROVIDER=openai  # Options: openai, gemini, deepseek, siliconflow, zhipu, custom
MCP_LOG_LEVEL=info                 # Log level
```
In a Docker environment, the MCP Server runs alongside the web application. You can access the MCP service through the same port as the web application at the /mcp path.
For example, if you map the container's port 80 to port 8081 on the host:
```bash
docker run -d -p 8081:80 \
  -e VITE_OPENAI_API_KEY=your-openai-key \
  -e MCP_DEFAULT_MODEL_PROVIDER=openai \
  --name prompt-optimizer \
  linshen/prompt-optimizer
```
The MCP Server will then be accessible at http://localhost:8081/mcp.
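A quick way to confirm the endpoint is routed correctly (a sketch; the endpoint speaks the MCP protocol, so a plain GET will likely return an error status rather than a useful body — any HTTP response still confirms the `/mcp` path is being served):

```bash
curl -i http://localhost:8081/mcp
```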
To use Prompt Optimizer in Claude Desktop, you need to add the service configuration to Claude Desktop's configuration file.
Find Claude Desktop's configuration directory:
- Windows: `%APPDATA%\Claude\services`
- macOS: `~/Library/Application Support/Claude/services`
- Linux: `~/.config/Claude/services`

Edit or create the `services.json` file, adding the following content:
```json
{
  "services": [
    {
      "name": "Prompt Optimizer",
      "url": "http://localhost:8081/mcp"
    }
  ]
}
```
Make sure to replace localhost:8081 with the actual address and port where you've deployed Prompt Optimizer.
For more detailed information, please refer to the MCP Server User Guide.
Supported models: OpenAI, Gemini, DeepSeek, Zhipu AI, SiliconFlow, and custom APIs (OpenAI-compatible interface)
In addition to API keys, you can configure advanced LLM parameters for each model individually. These parameters are configured through a field called llmParams, which allows you to specify any parameters supported by the LLM SDK in key-value pairs for fine-grained control over model behavior.
Advanced LLM Parameter Configuration Examples:
{"temperature": 0.7, "max_tokens": 4096, "timeout": 60000}{"temperature": 0.8, "maxOutputTokens": 2048, "topP": 0.95}{"temperature": 0.5, "top_p": 0.9, "frequency_penalty": 0.1}For more detailed information about llmParams configuration, please refer to the LLM Parameters Configuration Guide.
Configure environment variables through the -e parameter when deploying with Docker:
```bash
-e VITE_OPENAI_API_KEY=your_key
-e VITE_GEMINI_API_KEY=your_key
-e VITE_DEEPSEEK_API_KEY=your_key
-e VITE_ZHIPU_API_KEY=your_key
-e VITE_SILICONFLOW_API_KEY=your_key

# Multiple Custom Models Configuration (Unlimited Quantity)
-e VITE_CUSTOM_API_KEY_ollama=dummy_key
-e VITE_CUSTOM_API_BASE_URL_ollama=http://localhost:11434/v1
-e VITE_CUSTOM_API_MODEL_ollama=qwen2.5:7b
```
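Putting these flags together, a deployment backed by a local Ollama instance might look like the following sketch (values are placeholders; whether `localhost` is reachable depends on where requests originate — the web UI calls the API from your browser, while the MCP server calls it from inside the container, where `host.docker.internal` may be needed instead):

```bash
docker run -d -p 8081:80 \
  -e VITE_CUSTOM_API_KEY_ollama=dummy_key \
  -e VITE_CUSTOM_API_BASE_URL_ollama=http://localhost:11434/v1 \
  -e VITE_CUSTOM_API_MODEL_ollama=qwen2.5:7b \
  --restart unless-stopped \
  --name prompt-optimizer \
  linshen/prompt-optimizer
```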
📖 Detailed Configuration Guide: See Multiple Custom Models Documentation for complete configuration methods and advanced usage
For detailed documentation, see Development Documentation
```bash
# 1. Clone the project
git clone https://github.com/linshenkx/prompt-optimizer.git
cd prompt-optimizer

# 2. Install dependencies
pnpm install

# 3. Start the development server
pnpm dev        # Main development command: builds core/ui and runs the web app
pnpm dev:web    # Run the web app only
pnpm dev:fresh  # Complete reset and restart of the development environment
```
For detailed project status, see Project Status Document
A: Most connection failures are caused by Cross-Origin Resource Sharing (CORS) issues. As this project is a pure frontend application, browsers block direct access to API services from different origins for security reasons. Model services will reject direct requests from browsers if CORS policies are not correctly configured.
A: Ollama fully supports the OpenAI standard interface; you just need to configure the correct CORS policy:
- Set `OLLAMA_ORIGINS=*` to allow requests from any origin
- Set `OLLAMA_HOST=0.0.0.0:11434` to listen on all network interfaces
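On Linux, where Ollama typically runs as a systemd service, these variables can be set in a service override (a sketch following Ollama's documented approach; on macOS or Windows, set them as ordinary environment variables before starting Ollama):

```bash
# Open an override file for the Ollama service and add:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl edit ollama.service

# Reload units and restart Ollama so the variables take effect
sudo systemctl daemon-reload
sudo systemctl restart ollama
```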
A: These platforms typically have strict CORS restrictions. Recommended solutions:

1. Use the desktop application (most recommended)
2. Use a self-deployed API proxy service (professional solution)
Note: All web versions (including online version, Vercel deployment, Docker deployment) are pure frontend applications and subject to browser CORS restrictions. Only the desktop version or using an API proxy service can solve CORS issues.
A: This is caused by the browser's Mixed Content security policy. For security reasons, browsers block secure HTTPS pages (like the online version) from sending requests to insecure HTTP addresses (like your local Ollama service).
Solutions: To bypass this limitation, you need to have the application and API under the same protocol (e.g., both HTTP). We recommend the following approaches:
- Serve the app over plain HTTP (e.g., a local Docker deployment at http://localhost:8081), so that both the app and local Ollama use HTTP

A: This is because the application has not been signed with an Apple Developer certificate. Due to the high cost of Apple Developer accounts, the desktop application is currently unsigned.
Solution: Run the following command in Terminal to remove the quarantine attribute:
```bash
# For installed applications
xattr -rd com.apple.quarantine /Applications/PromptOptimizer.app

# For downloaded .dmg files (run before installation)
xattr -rd com.apple.quarantine ~/Downloads/PromptOptimizer-*.dmg
```
After running the command, you can open the application normally.
1. Create a feature branch (`git checkout -b feature/AmazingFeature`)
2. Commit your changes (`git commit -m 'Add some feature'`)
3. Push to the branch (`git push origin feature/AmazingFeature`)

Tip: when developing with the Cursor tool, it is recommended to do the following before committing:
Thanks to all the developers who have contributed to this project!
This project is licensed under AGPL-3.0.
In simple terms: You can freely use, modify, and commercialize this project, but if you turn it into a website or service for others, you must share your source code.
What you can do:
- Freely use, modify, and distribute this project
- Use it for commercial purposes

What you must do:
- Open-source your code if you provide the project (or a modified version) to others as a website or service
- Retain the AGPL-3.0 license and copyright notices
Core principle: Commercial use is allowed, but not closed-source.
If this project is helpful to you, please consider giving it a Star ⭐️