This is the backend API for DeepWiki, providing smart code analysis and AI-powered documentation generation.
```bash
# From the project root
pip install -r api/requirements.txt
```
Create a `.env` file in the project root:

```
# Required API Keys
GOOGLE_API_KEY=your_google_api_key          # Required for Google Gemini models
OPENAI_API_KEY=your_openai_api_key          # Required for embeddings and OpenAI models

# Optional API Keys
OPENROUTER_API_KEY=your_openrouter_api_key  # Required only if using OpenRouter models

# AWS Bedrock Configuration
AWS_ACCESS_KEY_ID=your_aws_access_key_id    # Required for AWS Bedrock models
AWS_SECRET_ACCESS_KEY=your_aws_secret_key   # Required for AWS Bedrock models
AWS_REGION=us-east-1                        # Optional, defaults to us-east-1
AWS_ROLE_ARN=your_aws_role_arn              # Optional, for role-based authentication

# OpenAI API Configuration
OPENAI_BASE_URL=https://custom-api-endpoint.com/v1  # Optional, for custom OpenAI API endpoints

# Ollama host
OLLAMA_HOST=https://your_ollama_host        # Optional, defaults to http://localhost:11434

# Server Configuration
PORT=8001                                   # Optional, defaults to 8001
```
If you're not using Ollama mode, you need to configure an OpenAI API key for embeddings. Other API keys are only required when configuring and using models from the corresponding providers.
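The API reads these values from the environment at startup. The sketch below shows how the documented defaults could be applied; `load_provider_config` is an illustrative name, not DeepWiki's actual loader.

```python
import os

def load_provider_config(env=None):
    """Collect the documented settings, applying the stated defaults.

    Illustrative helper: the keys and fallback values mirror the
    .env sample above, not DeepWiki's internal code.
    """
    env = os.environ if env is None else env
    return {
        "google_api_key": env.get("GOOGLE_API_KEY"),
        "openai_api_key": env.get("OPENAI_API_KEY"),
        "aws_region": env.get("AWS_REGION", "us-east-1"),
        "ollama_host": env.get("OLLAMA_HOST", "http://localhost:11434"),
        "port": int(env.get("PORT", "8001")),
    }
```

With an empty environment, every optional key falls back to its documented default, so a bare local setup still starts on port 8001 against a local Ollama host.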
💡 Where to get these keys:
- Get a Google API key from Google AI Studio
- Get an OpenAI API key from OpenAI Platform
- Get an OpenRouter API key from OpenRouter
- Get AWS credentials from AWS IAM Console
DeepWiki supports multiple LLM providers. The environment variables above are required depending on which providers you want to use:
- `GOOGLE_API_KEY`: Required for Google Gemini models
- `OPENAI_API_KEY`: Required for embeddings and OpenAI models
- `OPENROUTER_API_KEY`: Required only if using OpenRouter models
- `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`: Required for AWS Bedrock models

The `OPENAI_BASE_URL` variable lets you point the OpenAI client at a custom endpoint. This is useful, for example, when using an OpenAI-compatible API provided by another organization:
```
OPENAI_BASE_URL=https://custom-openai-endpoint.com/v1
```
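Resolving the override can be sketched as a one-line fallback; `effective_openai_base_url` is an illustrative helper, not part of DeepWiki's codebase.

```python
import os

def effective_openai_base_url(env=None):
    """Return OPENAI_BASE_URL if set, else the official OpenAI endpoint.

    Illustrative sketch of the fallback behavior described above.
    """
    env = os.environ if env is None else env
    return env.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
```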
DeepWiki now uses JSON configuration files to manage various system components instead of hardcoded values:
- `generator.json`: Configuration for text generation models
- `embedder.json`: Configuration for embedding models and text processing
- `repo.json`: Configuration for repository handling

These files are located in `api/config/` by default. You can customize the configuration directory location using the environment variable:

```
DEEPWIKI_CONFIG_DIR=/path/to/custom/config/dir  # Optional, for custom config file location
```
This allows you to maintain different configurations for various environments or deployment scenarios without modifying the code.
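The lookup order can be sketched as follows; `config_path` is a hypothetical helper, not DeepWiki's actual resolver, but it reflects the default directory and override described above.

```python
import os
from pathlib import Path

def config_path(name, env=None):
    """Resolve a config file such as 'generator.json'.

    Illustrative sketch: honors DEEPWIKI_CONFIG_DIR when set and
    falls back to the default api/config/ directory otherwise.
    """
    env = os.environ if env is None else env
    base = env.get("DEEPWIKI_CONFIG_DIR", "api/config")
    return Path(base) / name
```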
```bash
# From the project root
python -m api.main
```

The API will be available at `http://localhost:8001`.
When you provide a GitHub repository URL, the API clones the repository locally, reads its files, and creates embeddings for retrieval. When you ask a question, it retrieves the most relevant context from those embeddings and streams an AI-generated answer.
`GET /`: Returns basic API information and available endpoints.
`POST /chat/completions/stream`: Streams an AI-generated response about a GitHub repository.
Request Body:
```json
{
  "repo_url": "https://github.com/username/repo",
  "messages": [
    {
      "role": "user",
      "content": "What does this repository do?"
    }
  ],
  "filePath": "optional/path/to/file.py" // Optional
}
```
Response: A streaming response with the generated text.
```python
import requests

# API endpoint
url = "http://localhost:8001/chat/completions/stream"

# Request data
payload = {
    "repo_url": "https://github.com/AsyncFuncAI/deepwiki-open",
    "messages": [
        {
            "role": "user",
            "content": "Explain how React components work"
        }
    ]
}

# Make streaming request
response = requests.post(url, json=payload, stream=True)

# Process the streaming response
for chunk in response.iter_content(chunk_size=None):
    if chunk:
        print(chunk.decode('utf-8'), end='', flush=True)
```
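Before sending a request, the body shape shown above can be checked client-side. `validate_chat_request` is a hypothetical helper for illustration; the server performs its own validation.

```python
def validate_chat_request(payload):
    """Return a list of problems with a request body of the shape shown above.

    Hypothetical client-side check, not part of the DeepWiki API.
    """
    errors = []
    # repo_url must be a non-empty string
    if not isinstance(payload.get("repo_url"), str) or not payload["repo_url"]:
        errors.append("repo_url is required")
    # messages must be a non-empty list of {role, content} objects
    messages = payload.get("messages")
    if not isinstance(messages, list) or not messages:
        errors.append("messages must be a non-empty list")
    else:
        for i, msg in enumerate(messages):
            if not isinstance(msg, dict) or "role" not in msg or "content" not in msg:
                errors.append(f"messages[{i}] needs 'role' and 'content'")
    return errors
```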
All data is stored locally on your machine:
- `~/.adalflow/repos/`
- `~/.adalflow/databases/`
- `~/.adalflow/wikicache/`

No cloud storage is used: everything runs on your computer!
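If you need to locate or clean these stores programmatically, the paths can be derived from the home directory; `adalflow_dirs` is an illustrative helper reflecting the directories listed above.

```python
from pathlib import Path

def adalflow_dirs(home=None):
    """Map each local store name to its path under ~/.adalflow.

    Illustrative helper, not part of DeepWiki's codebase.
    """
    base = (Path.home() if home is None else Path(home)) / ".adalflow"
    return {name: base / name for name in ("repos", "databases", "wikicache")}
```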