🌲 Core Philosophy: "Was kostet die Frage eines Kindes?" ("What does a child's question cost?")
In the forest: priceless. In the system: measured. Pedagogically: teaches mindful questioning.
Changes:
- Added .env.template for API configuration
  • OpenRouter, Claude API, OpenAI support
  • Ollama (local AI) as free alternative
  • Qdrant vector database configuration
  • Token budget system for mindful learning
  • Parental controls (PIN, reports)
- New AI Doktor module in mission selector
  • Shows configured APIs and models
  • Displays token budget & tracking status
  • Lists active AI characters (Waldwächter)
  • Links to token logs viewer
  • Guides setup if .env missing
- Extended CLAUDE.md with Token Philosophy
  • Educational reasoning behind token tracking
  • Why it teaches reflection and quality
  • Budget system explanation
  • Implementation details
Philosophy:
Token tracking isn't restriction - it's mindfulness training.
Just as we teach children not to waste water, food, or paper, we teach
them not to waste computational resources. Children learn to:
- Think before asking
- Value AI's thinking time
- Ask better quality questions
- Research independently first
Result: More thoughtful questions, deeper learning, respect for resources.
CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
CF_Zero_V1 Project Overview
CF_Zero_V1 is an educational Bash learning platform called "Crumbforest" that uses character-based AI assistants to teach command-line concepts through interactive missions. The project combines shell scripting, Python-based computer vision, and OpenRouter API integration to create an immersive learning experience.
Key Commands
Mission System
# Launch the interactive mission selector
./crumb-mission-selector.sh
# Run individual missions directly
bash missions/basics/fridolin.sh # Navigation basics
bash missions/basics/balu.sh # File creation
bash missions/basics/noko.sh # File reading
bash missions/advanced/dns_mission.sh # DNS tools
bash missions/advanced/ssh_security.sh # SSH basics
AI Character Assistants (Crumbforest Roles)
All roles require OPENROUTER_API_KEY environment variable:
export OPENROUTER_API_KEY="your-key-here"
# Character-based CLI assistants
./crumbforest_roles/deepbit_zero.sh "How do I use grep?"
./crumbforest_roles/bugsy_zero.sh "Explain loops in bash"
./crumbforest_roles/schnippsi_zero.sh "What is curl?"
./crumbforest_roles/tobi_zero.sh "How to use jq?"
Camera Vision System (SnakeCam)
# Start the Flask-based gesture recognition app
cd snake_camera_vision_v2
python app.py
# Access at http://localhost:5000
# View gesture detection module
python gestures/gestures_v4.py
Token Usage Monitoring
# View token usage across all AI assistants
./log_tokens_viewer_v4.sh
# Clean up malformed token logs
./fix_token_logs.sh
# Check individual agent logs
cat ~/.deepbit_logs/token_log.json
cat ~/.bugsy_logs/token_log.json
Architecture Overview
Mission System Architecture
Metadata-Driven Design:
- Each mission consists of two files: mission_name.sh (executable) and mission_name.meta.json (metadata)
- The mission selector (crumb-mission-selector.sh) dynamically loads missions by scanning the mission directories
- Metadata structure:
{
  "icon": "🦊",
  "title": "Mission Title",
  "description": "What this teaches",
  "category": "basics|advanced|challenges",
  "enabled": true
}
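The scan step can be sketched in a few lines of bash with jq. The directory layout and metadata keys follow the structure documented above, but the real crumb-mission-selector.sh may differ in detail; the sample mission file here is created in a temp directory purely for illustration:

```shell
#!/usr/bin/env bash
# Build a throwaway mission tree so the sketch is self-contained
MISSION_ROOT=$(mktemp -d)
mkdir -p "$MISSION_ROOT/basics"
cat > "$MISSION_ROOT/basics/fridolin.meta.json" << 'EOF'
{ "icon": "🦊", "title": "Navigation basics", "description": "pwd, ls, cd", "category": "basics", "enabled": true }
EOF

# Scan every category folder and print enabled missions as menu entries
for meta in "$MISSION_ROOT"/*/*.meta.json; do
  if [ "$(jq -r '.enabled' "$meta")" = "true" ]; then
    printf '%s %s - %s\n' \
      "$(jq -r '.icon' "$meta")" \
      "$(jq -r '.title' "$meta")" \
      "$(jq -r '.description' "$meta")"
  fi
done
```

Because disabled missions are filtered by the `enabled` flag, content can be hidden from the menu without deleting files.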
Mission Categories:
- missions/basics/ - Beginner missions (navigation, file operations)
- missions/advanced/ - Advanced topics (DNS, SSH, networking)
- missions/challenges/ - Interactive challenges (future)
AI Assistant Architecture
Character-Based Learning Roles: Each role is a specialized bash script that wraps OpenRouter API calls with distinct personalities:
- Deepbit (octopus) - Explains Bash concepts poetically to children
- Bugsy - Debugging and troubleshooting assistant
- Schnippsi - General shell command helper
- Tobi - JSON/data processing expert
- Templatus - HTML architecture assistant
Common Pattern:
- Accept question as command-line argument
- Store conversation history in ~/.{role}_logs/{role}_history.json
- Track token usage in ~/.{role}_logs/token_log.json
- Use the OpenRouter API with the openai/gpt-3.5-turbo model
- Respond in the same language as the input question
API Flow:
Question → JSON Payload → OpenRouter API → Response → Log History → Display
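A minimal sketch of that flow, assuming the payload shape of the OpenRouter chat-completions endpoint (the actual role scripts may build it differently). The network step is shown as a comment and a canned response stands in for it, so only the payload-building and response-extraction stages run:

```shell
#!/usr/bin/env bash
QUESTION="How do I use grep?"

# Question → JSON payload
PAYLOAD=$(jq -n --arg q "$QUESTION" '{
  model: "openai/gpt-3.5-turbo",
  messages: [{role: "user", content: $q}]
}')
echo "$PAYLOAD"

# The real scripts would now POST the payload, e.g.:
#   curl -s https://openrouter.ai/api/v1/chat/completions \
#     -H "Authorization: Bearer $OPENROUTER_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$PAYLOAD"
# Here a canned response demonstrates the extraction step instead:
RESPONSE='{"choices":[{"message":{"content":"Use grep PATTERN FILE"}}],"usage":{"total_tokens":42}}'
echo "$RESPONSE" | jq -r '.choices[0].message.content'
```

The `usage` object in the response is what feeds the token logging described below.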
Log Structure:
- Request: ~/.{role}_logs/{role}_request.json
- Response: ~/.{role}_logs/{role}_response.json
- History: ~/.{role}_logs/{role}_history.json
- Token usage: ~/.{role}_logs/token_log.json
Camera Vision System (SnakeCam)
Technology Stack:
- Flask web server with MJPEG video streaming
- OpenCV for camera capture and image processing
- Custom gesture detection using HSV color space and contour analysis
Key Components:
- app.py - Flask application with video streaming endpoints
- gestures/gestures_v4.py - Hand gesture detection algorithm
- Detection method: Skin color detection → Contour analysis → Convexity defects → Gesture classification
- Current gestures: "wave" detection based on defect count, area, and aspect ratio
Endpoints:
- / - Main camera interface
- /video_feed - MJPEG stream
- /log_answer - Log user responses with mood and gesture
- /shutdown - Clean camera release
Project Structure
CF_Zero_V1/
├── crumb-mission-selector.sh # Main mission launcher (metadata-driven)
├── missions/ # Educational missions
│ ├── basics/ # Beginner tutorials
│ │ ├── fridolin.sh/meta.json # Navigation (pwd, ls, cd)
│ │ ├── balu.sh/meta.json # File creation (mkdir, touch, echo)
│ │ └── noko.sh/meta.json # File reading (cat, grep)
│ └── advanced/ # Advanced topics
│ ├── dns_mission.sh/meta.json # DNS tools (dig, nslookup, host)
│ └── ssh_security.sh/meta.json # SSH basics
├── crumbforest_roles/ # AI character assistants
│ ├── deepbit_zero.sh # Poetic Bash explainer
│ ├── bugsy_zero.sh # Debugging helper
│ ├── schnippsi_zero.sh # Shell command assistant
│ └── tobi_zero.sh # JSON/data expert
├── snake_camera_vision_v2/ # Flask gesture recognition app
│ ├── app.py # Main Flask server
│ └── gestures/ # Gesture detection modules
│ ├── gestures_v4.py # Hand detection algorithm
│ └── gestures_debug_test.py # Debug version with visualization
├── log_tokens_viewer_v4.sh # Token usage viewer
└── fix_token_logs.sh # Clean malformed logs
Important Implementation Notes
Adding New Missions
1. Create two files in the appropriate category folder:
   touch missions/basics/new_mission.sh
   touch missions/basics/new_mission.meta.json
   chmod +x missions/basics/new_mission.sh
2. Metadata must include:
   - icon: Emoji for menu display
   - title: Human-readable name
   - description: What the mission teaches
   - category: basics/advanced/challenges
   - enabled: true/false
3. Mission script should:
   - Use cat << 'EOF' for multi-line instructions
   - Include interactive prompts with read -p
   - Provide examples before asking the user to try
   - End with a success message
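A mission skeleton following those conventions might look like this; the mission content and greeting are invented for illustration, and the script is written to a temp file so it can be run non-interactively by piping an answer in:

```shell
#!/usr/bin/env bash
# Write a hypothetical mission script that uses the documented pattern:
# heredoc instructions, a read -p prompt, and a closing success message.
MISSION=$(mktemp)
cat > "$MISSION" << 'SCRIPT'
#!/bin/bash
cat << 'EOF'
🦝 Welcome, forest explorer!
Today we practice listing files. Try this first:
    ls -l
EOF
read -p "Type the command you just tried: " answer
echo "You tried: $answer"
echo "✅ Mission complete - well done!"
SCRIPT
chmod +x "$MISSION"

# Run it non-interactively by piping an answer into the prompt
printf 'ls -l\n' | bash "$MISSION"
```

The matching new_mission.meta.json would supply the icon, title, description, category, and enabled flag for the selector menu.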
Creating New AI Assistants
1. Copy the template from an existing role (e.g., deepbit_zero.sh)
2. Update these variables:
   - LOGDIR - Log directory path
   - System prompt in the jq -n command
   - Character name in echo statements
3. Create the log directory structure automatically via mkdir -p
4. Initialize empty JSON arrays for history and token logs
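The bootstrap steps above can be sketched like this; the role name "fuchsi" and the temp directory standing in for $HOME are illustrative, not part of the actual project:

```shell
#!/usr/bin/env bash
ROLE="fuchsi"                     # hypothetical new character
BASE=$(mktemp -d)                 # stand-in for $HOME in this sketch
LOGDIR="$BASE/.${ROLE}_logs"

# Create the log directory structure
mkdir -p "$LOGDIR"

# Initialize empty JSON arrays so jq can append cleanly on first use
[ -f "$LOGDIR/${ROLE}_history.json" ] || echo "[]" > "$LOGDIR/${ROLE}_history.json"
[ -f "$LOGDIR/token_log.json" ]      || echo "[]" > "$LOGDIR/token_log.json"

ls "$LOGDIR"
```

Guarding the initialization with `[ -f … ] ||` keeps existing history intact when the script runs again.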
Token Log Format
Each role logs API usage in this format:
{
"zeit": "2025-06-18 19:05:33",
"rolle": "deepbit",
"usage": {
"prompt_tokens": 45,
"completion_tokens": 123,
"total_tokens": 168
}
}
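Appending an entry in that format and totalling usage are each a short jq expression; this is a sketch against a temp file, not the scripts' actual logging code:

```shell
#!/usr/bin/env bash
LOG=$(mktemp)
echo "[]" > "$LOG"

# Append one usage entry (field names match the format shown above)
jq --arg zeit "2025-06-18 19:05:33" --arg rolle "deepbit" \
   --argjson usage '{"prompt_tokens":45,"completion_tokens":123,"total_tokens":168}' \
   '. + [{zeit: $zeit, rolle: $rolle, usage: $usage}]' \
   "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"

# Sum total tokens across all logged calls
jq '[.[].usage.total_tokens] | add' "$LOG"
```

Writing to a temp file and then moving it avoids truncating the log if jq fails mid-way.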
Gesture Detection Tuning
Located in snake_camera_vision_v2/gestures/gestures_v4.py:
- HSV Range: lower_skin = [0, 30, 60], upper_skin = [20, 150, 255]
- ROI: Fixed at (100, 100) with 150x150 size
- Area Threshold: 2500-15000 pixels for valid hand detection
- Defect Count: 3-10 convexity defects for "wave" gesture
- Aspect Ratio: 1.3-2.3 for hand shape validation
Environment Requirements
AI Assistants
- OPENROUTER_API_KEY must be exported
- jq for JSON processing
- curl for API calls
Camera System
- Python 3.x
- Flask (pip install flask)
- OpenCV (pip install opencv-python)
- Webcam at /dev/video0 or default camera index 0
Mission System
- Bash 4.0+
- jq for metadata parsing
- Standard Unix tools (ls, cat, grep, etc.)
Philosophy and Design Principles
"Waldwächter" (Forest Guardian) Philosophy:
- Transparency over magic
- Metadata-driven extensibility
- Educational and interactive
- No code changes required to add new content
- Multilingual support (responds in input language)
Character-Based Learning:
- Each AI assistant has distinct personality and teaching style
- Poetic and metaphorical explanations for complex concepts
- Designed for children and beginners
- Conversational history maintained across sessions
Token Philosophy - "Was kostet die Frage eines Kindes?"
In the forest, a child's question is priceless. However, in our digital age, questions to AI assistants consume computational resources (tokens). This creates a beautiful teaching moment:
"What does a question cost?"
- In the forest: Unbezahlbar (priceless) - every question holds infinite value
- In the system: Measured in tokens - a concrete, understandable metric
- Pedagogically: Token tracking teaches mindful questioning
Why Token Tracking is Educational
- Teaches Reflection: Children learn to think before asking
- Creates Awareness: Understanding that resources have value
- Builds Quality: Better questions lead to better answers
- Encourages Research: Try to find answers independently first
- Develops Patience: Not every thought needs an immediate AI response
Token Budget System
The .env configuration allows setting a DAILY_TOKEN_BUDGET:
- 0 = Unlimited (default) - trust and freedom
- >0 = Daily limit (e.g., 10000 tokens ≈ 20 thoughtful questions)
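A budget check against the shared log format could look like this. Only the DAILY_TOKEN_BUDGET name and the log format come from this document; the enforcement logic, messages, and sample data are hypothetical:

```shell
#!/usr/bin/env bash
DAILY_TOKEN_BUDGET=10000

# Sample token log in the documented format
LOG=$(mktemp)
cat > "$LOG" << 'EOF'
[ {"zeit":"2025-06-18 19:05:33","rolle":"deepbit","usage":{"total_tokens":168}},
  {"zeit":"2025-06-18 20:11:02","rolle":"bugsy","usage":{"total_tokens":523}} ]
EOF

# Sum total_tokens for entries whose timestamp starts with today's date
TODAY="2025-06-18"
USED=$(jq --arg d "$TODAY" \
  '[.[] | select(.zeit | startswith($d)) | .usage.total_tokens] | add // 0' "$LOG")

if [ "$DAILY_TOKEN_BUDGET" -gt 0 ] && [ "$USED" -ge "$DAILY_TOKEN_BUDGET" ]; then
  echo "🌲 Budget reached ($USED/$DAILY_TOKEN_BUDGET) - time to think offline!"
else
  echo "Tokens used today: $USED of $DAILY_TOKEN_BUDGET"
fi
```

The `// 0` fallback keeps the sum numeric on days with no logged questions, so the comparison never fails.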
This isn't about restriction - it's about mindfulness. Just as we teach children to:
- Not waste water
- Not waste food
- Not waste paper
We can teach them:
- Not to waste computational resources
- To value the AI's "thinking time"
- To appreciate the cost of knowledge
Implementation
All AI character scripts (deepbit_zero.sh, bugsy_zero.sh, etc.) log token usage to:
~/.{character}_logs/token_log.json
View with: ./log_tokens_viewer_v4.sh
This creates a transparent feedback loop where children can see:
- How many tokens each question consumed
- Which questions were "expensive" vs "cheap"
- Their daily/weekly usage patterns
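One way to surface "expensive" versus "cheap" calls from the shared log format, shown here as a jq sketch over sample data rather than the actual log_tokens_viewer_v4.sh logic:

```shell
#!/usr/bin/env bash
# Sample log in the documented format
LOG=$(mktemp)
cat > "$LOG" << 'EOF'
[ {"zeit":"2025-06-18 19:05:33","rolle":"deepbit","usage":{"total_tokens":168}},
  {"zeit":"2025-06-18 20:11:02","rolle":"bugsy","usage":{"total_tokens":523}},
  {"zeit":"2025-06-19 08:30:00","rolle":"tobi","usage":{"total_tokens":61}} ]
EOF

# Sort by cost, most expensive question first
jq -r 'sort_by(-.usage.total_tokens)[] |
       "\(.usage.total_tokens)\t\(.rolle)\t\(.zeit)"' "$LOG"
```

Seeing the most expensive questions at the top makes the feedback loop concrete for a child reviewing their day.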
Result: More thoughtful questions, deeper learning, and respect for resources.