# 🌲 Crumbforest Debian Setup Guide (Raspi Ready)
This guide documents the specific setup steps for Debian-based systems (including Raspberry Pi and Ubuntu). It supplements the standard README.md.
## 1. System Services (Headless / Persistent)
On Debian/Raspi, we use Systemd User Services to keep components running in the background without needing open terminal windows.
### Qdrant (Vector DB)

- Service: `~/.config/systemd/user/qdrant.service`
- Port: 6333
- Usage: Stores the RAG knowledge (CrumbCodex).
- Note: The native binary often omits the Web Dashboard (`/dashboard`) to save resources. Use the API or a snapshot export for visualization.
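For reference, a minimal user unit for Qdrant might look like the sketch below. The binary location and config path are assumptions; adjust them to your actual install (`%h` expands to your home directory in systemd user units):

```ini
# ~/.config/systemd/user/qdrant.service (sketch; paths are assumptions)
[Unit]
Description=Qdrant Vector Database
After=network.target

[Service]
ExecStart=%h/.local/bin/qdrant --config-path %h/.terminal_dojo/qdrant/config.yaml
Restart=on-failure

[Install]
WantedBy=default.target
```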
### Ollama (AI Model Server)

- Service: `~/.config/systemd/user/ollama.service`
- Port: 11434
- Usage: Serves local AI models (e.g. `tinyllama` or embedding models).
**Enable Services:**

```bash
systemctl --user enable --now qdrant ollama
systemctl --user status qdrant ollama
```
## 2. RAG Knowledge Setup (Nullfeld)
The "Nullfeld" (Zero Field) is the vector database containing our documentation.
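Conceptually, each entry in the collection pairs a 384-dimension embedding with the text it came from. A stored point might look roughly like this (the payload field names are assumptions for illustration, not the actual schema):

```python
# Hypothetical shape of one stored point; payload fields are assumptions.
point = {
    "id": 1,
    "vector": [0.0] * 384,  # all-minilm embedding of the chunk
    "payload": {
        "source": "CrumbCodex/intro.md",
        "text": "Welcome to the Crumbforest...",
    },
}
print(len(point["vector"]))  # → 384
```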
### Ingestion Sources

- CrumbCodex: `~/Documents/crumbdocs/CrumbCodex`
- OZM-Keks-Handbuch: `~/Documents/crumbdocs/OZM-Keks-Handbuch-v1`
### Running Ingestion

If you add new Markdown files, re-run the ingestion script:

```bash
python3 missions/tools/terminal_dojo/scripts/ingest_knowledge.py ~/Documents/crumbdocs
```
Embedding model used: `all-minilm` (384 dimensions).
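The ingestion script itself is not reproduced here, but its core idea (split Markdown into chunks, embed each chunk with `all-minilm`, upsert into Qdrant) can be sketched as follows. `chunk_markdown` is a hypothetical helper for illustration, not the script's actual implementation:

```python
# Illustrative only: split Markdown into heading-delimited chunks,
# each small enough to embed as one all-minilm vector.
# The real ingest_knowledge.py may chunk differently.
def chunk_markdown(text: str) -> list[str]:
    chunks, current = [], []
    for line in text.splitlines():
        # start a new chunk at every heading
        if line.startswith("#") and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return [c for c in chunks if c]

doc = "# Intro\nHello forest.\n## Setup\nInstall things."
print(chunk_markdown(doc))
# → ['# Intro\nHello forest.', '## Setup\nInstall things.']
```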
### Data Visualization

Since the native Qdrant build has no dashboard:

- Create a snapshot: `curl -X POST http://localhost:6333/collections/crumbforest_knowledge/snapshots`
- Export: Find the snapshot file in `~/.terminal_dojo/qdrant/snapshots/`.
- View: Import it into a Dockerized Qdrant instance on your desktop.
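To pick the right file for export, a small helper can list snapshots newest-first. The directory layout matches the path above; the helper itself is an assumption, not part of the toolkit:

```python
from pathlib import Path

# List *.snapshot files under the Qdrant snapshot directory,
# newest first; returns [] if no snapshots exist yet.
def list_snapshots(base: str = "~/.terminal_dojo/qdrant/snapshots") -> list[Path]:
    root = Path(base).expanduser()
    if not root.is_dir():
        return []
    files = [p for p in root.rglob("*.snapshot") if p.is_file()]
    return sorted(files, key=lambda p: p.stat().st_mtime, reverse=True)

for snap in list_snapshots():
    print(snap)
```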
## 3. Crew Shell Resilience

The `crew` command (`~/.terminal_dojo/crew.sh`) is patched for robustness:

- Local Mode: If `ollama` is running with a valid chat model (e.g. `tinyllama`, `llama3`), requests are handled locally.
- Cloud Fallback: If local models fail (common on low-power devices), it automatically falls back to the OpenRouter API (via `waldwaechter.sh`).
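The fallback decision can be illustrated with the sketch below. The real logic lives in shell inside `crew.sh`; the function name and model list here are examples, not the script's actual code:

```python
# Illustrative model-selection logic: prefer a local chat model,
# otherwise fall back to the cloud (OpenRouter via waldwaechter.sh).
# The recognised model names are examples only.
CHAT_MODELS = ("tinyllama", "llama3")

def pick_backend(installed: list[str], ollama_up: bool) -> str:
    # Strip Ollama tags like ":latest" before comparing names.
    if ollama_up and any(m.split(":")[0] in CHAT_MODELS for m in installed):
        return "local"
    return "cloud"

print(pick_backend(["tinyllama:latest"], ollama_up=True))   # → local
print(pick_backend(["all-minilm:latest"], ollama_up=True))  # → cloud
```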
**Check Status:**

```bash
crew_doctor
```

This should return: ✅ Alle Checks bestanden! System gesund 💚 ("All checks passed! System healthy").