ChatDKU is an agentic RAG system with a web frontend, a Flask backend, and optional Django services. This repository is open-source and designed to run on local machines or servers.
- Copy the environment template:

  ```sh
  cp .env.example .env
  ```

- Start the stack:

  ```sh
  docker compose up --build
  ```

- Open the app at http://localhost:3005
To use a local secrets file without committing it, create `.env.local` and run:

```sh
docker compose --env-file .env.local up --build
```
Key variables (see `.env.example`):

- Core: `LLM_API_KEY`, `LLM_BASE_URL`, `TEI_URL`, `EMBEDDING_MODEL`
- Storage: `REDIS_HOST`, `REDIS_PASSWORD`, `CHROMA_HOST`, `CHROMA_DB_PORT`
- Frontend: `NEXT_PUBLIC_API_BASE_URL`, `NEXT_PUBLIC_DICTATION_WS_URL`
- Next.js server proxy: `BACKEND_INTERNAL_URL`, `BACKEND_FEEDBACK_URL`
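A minimal `.env` sketch covering the variables above. Every value here is a placeholder for illustration (the IPs, ports, hostnames, and key are assumptions, not repository defaults); substitute your own:

```sh
# Core (all values are placeholders)
LLM_API_KEY=sk-replace-me
LLM_BASE_URL=http://10.0.0.5:18085/v1
TEI_URL=http://10.0.0.5:8080
EMBEDDING_MODEL=bge-m3
# Storage (hostnames assume docker compose service names)
REDIS_HOST=redis
REDIS_PASSWORD=replace-me
CHROMA_HOST=chroma
CHROMA_DB_PORT=8000
# Frontend (browser-facing URLs; paths are illustrative)
NEXT_PUBLIC_API_BASE_URL=http://localhost:3005/api
NEXT_PUBLIC_DICTATION_WS_URL=ws://localhost:3005/ws
# Next.js server proxy (container-internal URLs; illustrative)
BACKEND_INTERNAL_URL=http://backend:5000
BACKEND_FEEDBACK_URL=http://backend:5000/feedback
```

Check `.env.example` for the authoritative variable list and any defaults.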
- LLM: an OpenAI-compatible server (sglang). The default guide uses port 18085, so set `LLM_BASE_URL=http://<server-ip>:18085/v1` and `LLM_API_KEY`.
- Embedding: TEI with `bge-m3` on port 8080, so set `TEI_URL=http://<server-ip>:8080`.
- Full server setup: see `Documentations/Deployment-Guide_ZH.md`.
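Once the model servers are up, you can sanity-check both endpoints. A minimal sketch, using a placeholder server IP of `10.0.0.5`: `GET /v1/models` is the standard OpenAI-compatible model-listing route and `POST /embed` is TEI's embedding route; the `curl` lines are commented out because they need live servers:

```sh
# Placeholder address for illustration; substitute your server's IP.
SERVER_IP=10.0.0.5
LLM_BASE_URL="http://${SERVER_IP}:18085/v1"
TEI_URL="http://${SERVER_IP}:8080"
echo "LLM endpoint: ${LLM_BASE_URL}"
echo "TEI endpoint: ${TEI_URL}"
# Uncomment once the servers are running:
# curl -s "${LLM_BASE_URL}/models" -H "Authorization: Bearer ${LLM_API_KEY}"
# curl -s "${TEI_URL}/embed" -H 'Content-Type: application/json' -d '{"inputs":"hello"}'
```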
Run ingestion from `chatdku/chatdku/ingestion`:

```sh
python update_data.py --data_dir ./data --user_id Chat_DKU -v True
python load_chroma.py --nodes_path ./data/nodes.json --collection_name chatdku_docs
python -m chatdku.chatdku.ingestion.load_redis --nodes_path ./data/nodes.json --index_name chatdku
```
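The three steps above can be chained in a small wrapper script. This is a hypothetical sketch (the `run` helper and `DRY_RUN` flag are not part of the repository); by default it only prints each command so you can review the pipeline before executing it:

```sh
#!/usr/bin/env sh
# Hypothetical ingestion wrapper; the three commands themselves come from the README.
# DRY_RUN=1 (the default) prints each command instead of running it.
set -eu
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}
run python update_data.py --data_dir ./data --user_id Chat_DKU -v True
run python load_chroma.py --nodes_path ./data/nodes.json --collection_name chatdku_docs
run python -m chatdku.chatdku.ingestion.load_redis --nodes_path ./data/nodes.json --index_name chatdku
```

Set `DRY_RUN=0` to actually execute the pipeline; `set -eu` stops it at the first failing step.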
To enable the optional Django services, start the stack with the `django` profile:

```sh
docker compose --profile django up --build
```

This starts PostgreSQL, Django, and Celery in addition to the default services.
- `chatdku/`: core agent logic, ingestion, backend, and frontend
- `scraper/`: recursive web scraper
- `benchmarks/`: benchmarking scripts
- `Documentations/`: internal docs (sanitized)
- No production data or secrets are included in this repository.
- Use `.env` or `.env.local` to provide your own credentials.