This Docker setup replaces the Kimi interface with Ollama and FinGPT for generating alpha factors using the WorldQuant Brain API.

For GPU acceleration, you will need an NVIDIA GPU and the NVIDIA Container Toolkit installed on the host.
Set up credentials:

```bash
# Copy the example credentials file
cp credential.example.txt credential.txt

# Edit credential.txt with your WorldQuant Brain credentials
# Format: ["username", "password"]
```
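A quick way to validate the file before starting the stack is a minimal sketch like the one below. It only assumes the two-element JSON array format documented above; `load_credentials` is a hypothetical helper, not part of the codebase:

```python
import json

def load_credentials(path="credential.txt"):
    """Load and validate WorldQuant Brain credentials.

    Expects the file to contain a two-element JSON array:
    ["username", "password"].
    """
    with open(path) as f:
        creds = json.load(f)
    if not (isinstance(creds, list) and len(creds) == 2):
        raise ValueError('credential.txt must be ["username", "password"]')
    return creds[0], creds[1]
```

Running this before `docker-compose up` catches a malformed file early, instead of failing inside the container at the first authentication attempt.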
Create necessary directories:

```bash
mkdir -p results logs
```
Build and run with Docker Compose:

CPU only:

```bash
docker-compose up --build
```

With GPU acceleration:

```bash
# Windows
start_gpu.bat

# Linux/Mac
docker-compose -f docker-compose.gpu.yml up --build
```
- `naive-ollma`: Main application container
- `ollama-webui` (optional): Web interface for Ollama
- `alpha-dashboard` (optional): Alpha Generator Dashboard
The system now includes an integrated orchestrator that manages:
- `./credential.txt`: WorldQuant Brain credentials (read-only)
- `./results/`: Generated alpha results and batch files
- `./logs/`: Application logs
- `ollama_data`: Persistent Ollama models and data

The alpha dashboard provides comprehensive monitoring and control:
You can modify the docker-compose.yml to adjust:
- `OLLAMA_HOST`: Ollama host (default: `0.0.0.0`)
- `OLLAMA_ORIGINS`: CORS origins (default: `*`)
- `PYTHONUNBUFFERED`: Python output buffering (default: `1`)

The application accepts these arguments (modify in docker-compose.yml):
- `--batch-size`: Number of alphas per batch (default: 3)
- `--sleep-time`: Sleep between batches in seconds (default: 30)
- `--max-concurrent`: Maximum concurrent simulations (default: 3)
- `--log-level`: Logging level (default: INFO)
- `--ollama-url`: Ollama API URL (default: http://localhost:11434)
- `--mode`: Operation mode (`continuous`, `daily`, `generator`, `miner`, `submitter`)
- `--mining-interval`: Hours between mining runs (default: 6)

The Docker setup now runs the alpha generator and expression miner concurrently:
- Both respect the `--max-concurrent` limit (default: 3)
- `hopeful_alphas.json` serves as the coordination mechanism

```bash
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f naive-ollma

# Stop services
docker-compose down
```
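The hand-off between generator and miner described above can be pictured with a small sketch. The record fields shown here (`expression`, `sharpe`) are hypothetical stand-ins; the real schema of `hopeful_alphas.json` is defined by the generator code:

```python
import json
import os

HOPEFUL_PATH = "hopeful_alphas.json"

def record_hopeful(expression, sharpe):
    """Generator side: append a promising alpha to the shared file."""
    entries = []
    if os.path.exists(HOPEFUL_PATH):
        with open(HOPEFUL_PATH) as f:
            entries = json.load(f)
    # Hypothetical record shape for illustration only
    entries.append({"expression": expression, "sharpe": sharpe})
    with open(HOPEFUL_PATH, "w") as f:
        json.dump(entries, f, indent=2)

def drain_hopeful():
    """Miner/submitter side: take all pending alphas and clear the file."""
    if not os.path.exists(HOPEFUL_PATH):
        return []
    with open(HOPEFUL_PATH) as f:
        entries = json.load(f)
    with open(HOPEFUL_PATH, "w") as f:
        json.dump([], f)
    return entries
```

A shared JSON file is a simple coordination mechanism that works here because only one process writes at a time within each role; a higher-throughput setup would want a queue or file locking.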
The system includes a comprehensive web dashboard for monitoring and control:
Access URLs:
Dashboard Features:
Manual Controls:
Application Logs:

```bash
docker-compose logs -f naive-ollma
```
Web Interface:
Results:
- Check the `./results/` directory for batch results
- Check `hopeful_alphas.json` for promising alphas

Ollama Model Issues:
```bash
# Check if FinGPT model is available
docker-compose exec naive-ollma ollama list

# Pull model manually if needed
docker-compose exec naive-ollma ollama pull fingpt
```
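The same check can be scripted against Ollama's REST API: `GET /api/tags` returns the installed models as JSON. The helper names below are illustrative, not part of this repo:

```python
import json
from urllib.request import urlopen

def parse_models(tags_json):
    """Extract model names from an Ollama /api/tags response body."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]

def model_available(name, base_url="http://localhost:11434"):
    """Return True if an installed model's name starts with `name`."""
    with urlopen(f"{base_url}/api/tags") as resp:
        names = parse_models(resp.read())
    return any(n.startswith(name) for n in names)
```

For example, `model_available("fingpt")` would be True after the `ollama pull fingpt` step above, since installed models are listed with their tag (e.g. `fingpt:latest`).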
Authentication Issues:

- Verify the `credential.txt` format: `["username", "password"]`

Resource Issues:
Network Issues:
```bash
# Test Ollama connectivity
curl http://localhost:11434/api/tags

# Test WorldQuant Brain connectivity
docker-compose exec naive-ollma python -c "
import requests
from requests.auth import HTTPBasicAuth
import json

with open('credential.txt') as f:
    creds = json.load(f)

sess = requests.Session()
sess.auth = HTTPBasicAuth(creds[0], creds[1])
resp = sess.post('https://api.worldquantbrain.com/authentication')
print(f'Auth status: {resp.status_code}')
"
```
To use a different model instead of FinGPT:
- Modify the `generate_alpha_ideas_with_ollama` method in `alpha_generator_ollama.py`

For production use:
Increase batch size:

```yaml
command: ["--batch-size", "10", "--sleep-time", "60"]
```
Add resource limits:

```yaml
deploy:
  resources:
    limits:
      memory: 16G
      cpus: '4.0'
```
Use external Ollama service:

```yaml
environment:
  - OLLAMA_API_BASE_URL=http://external-ollama:11434/api
```
Backup models:

```bash
docker run --rm -v naive-ollma_ollama_data:/data -v $(pwd):/backup alpine tar czf /backup/ollama_backup.tar.gz -C /data .
```
Restore models:

```bash
docker run --rm -v naive-ollma_ollama_data:/data -v $(pwd):/backup alpine tar xzf /backup/ollama_backup.tar.gz -C /data
```
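Before relying on a backup, it is worth confirming the archive is readable. A small sketch (the `verify_backup` helper is hypothetical; only the tarball name comes from the command above):

```python
import tarfile

def verify_backup(path="ollama_backup.tar.gz"):
    """Open the backup archive and list its members.

    Raises if the file is missing or corrupt; otherwise returns
    the member names so you can eyeball the model files.
    """
    with tarfile.open(path, "r:gz") as tar:
        return tar.getnames()
```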
Model Optimization:
Resource Management:
Network Optimization:
For issues related to: