Branko May Trinkwald 12d53db354 🌲 WUHUUUU! The 15 Waldwächter (Forest Guardians) are complete! 🎉
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-21 15:52:53 +01:00

# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## CF_Zero_V1 Project Overview
CF_Zero_V1 is an educational Bash learning platform called "Crumbforest" that uses character-based AI assistants to teach command-line concepts through interactive missions. The project combines shell scripting, Python-based computer vision, and OpenRouter API integration to create an immersive learning experience.
## Key Commands
### Mission System
```bash
# Launch the interactive mission selector
./crumb-mission-selector.sh
# Run individual missions directly
bash missions/basics/fridolin.sh # Navigation basics
bash missions/basics/balu.sh # File creation
bash missions/basics/noko.sh # File reading
bash missions/advanced/dns_mission.sh # DNS tools
bash missions/advanced/ssh_security.sh # SSH basics
```
### AI Character Assistants (Crumbforest Roles)
All roles require the `OPENROUTER_API_KEY` environment variable, set in the `.env` file:
```bash
# Configure environment
cp .env.template .env
# Edit .env and add your OPENROUTER_API_KEY
# Character-based CLI assistants
./crumbforest_roles/mayaeule_zero.sh "What is friendship?"
./crumbforest_roles/deepbit_zero.sh "How do I use grep?"
./crumbforest_roles/bugsy_zero.sh "Explain loops in bash"
./crumbforest_roles/schnippsi_zero.sh "What is curl?"
./crumbforest_roles/tobi_zero.sh "How to use jq?"
./crumbforest_roles/templatus_zero.sh "Create HTML structure"
# Access CrumbCrew Command Central (all assistants in one shell)
# Run mission selector and choose option 9
./crumb-mission-selector.sh
```
### Camera Vision System (SnakeCam)
```bash
# Start the Flask-based gesture recognition app
cd snake_camera_vision_v2
python app.py
# Access at http://localhost:5000
# View gesture detection module
python gestures/gestures_v4.py
```
### Token Usage Monitoring
```bash
# View token usage across all AI assistants
./log_tokens_viewer_v4.sh
# Clean up malformed token logs
./fix_token_logs.sh
# Check individual agent logs
cat ~/.deepbit_logs/token_log.json
cat ~/.bugsy_logs/token_log.json
```
## Architecture Overview
### Mission System Architecture
**Metadata-Driven Design:**
- Each mission consists of two files: `mission_name.sh` (executable) and `mission_name.meta.json` (metadata)
- The mission selector (`crumb-mission-selector.sh`) dynamically loads missions by scanning directories
- Metadata structure:
```json
{
  "icon": "🦊",
  "title": "Mission Title",
  "description": "What this teaches",
  "category": "basics|advanced|challenges",
  "enabled": true
}
```
**Mission Categories:**
- `missions/basics/` - Beginner missions (navigation, file operations)
- `missions/advanced/` - Advanced topics (DNS, SSH, networking)
- `missions/challenges/` - Interactive challenges (future)
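The directory scan can be sketched as follows. This is a hedged illustration of the metadata-driven approach, not the actual `crumb-mission-selector.sh` logic; the demo mission and temp directory exist only to keep the sketch self-contained:

```shell
# Hypothetical sketch of a metadata-driven mission scan (not the real
# selector code). Builds a throwaway missions tree for demonstration.
tmp=$(mktemp -d)
mkdir -p "$tmp/missions/basics"
cat > "$tmp/missions/basics/demo.meta.json" <<'EOF'
{"icon": "🦊", "title": "Demo Mission", "description": "Sketch only",
 "category": "basics", "enabled": true}
EOF

MENU=""
for meta in "$tmp"/missions/*/*.meta.json; do
  [ -e "$meta" ] || continue
  # skip disabled missions, mirroring the "enabled" flag in the metadata
  [ "$(jq -r '.enabled' "$meta")" = "true" ] || continue
  MENU="${MENU}$(jq -r '.icon + "  " + .title' "$meta")
"
done
printf '%s' "$MENU"
```

Because the selector only reads metadata, adding a mission never requires editing the selector itself.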
### AI Assistant Architecture
**Character-Based Learning Roles (Waldwächter - Forest Guardians):**
Each role is a specialized bash script that wraps OpenRouter API calls with distinct personalities:
- **Maya-Eule** (owl) - Wise guide with Qdrant memory integration for contextual conversations
- **Deepbit** (octopus) - Explains Bash concepts poetically to children
- **Bugsy** - Debugging and troubleshooting assistant
- **Schnippsi** - General shell command helper
- **Tobi** - JSON/data processing expert (alias "Capacitoby" for electronics)
- **Templatus** - HTML architecture assistant
- **Schraubbär** - Heavy-duty hardware specialist (welding, tools, mechanical systems)
**Common Pattern:**
1. Accept question as command-line argument
2. Store conversation history in `~/.{role}_logs/{role}_history.json`
3. Track token usage in `~/.{role}_logs/token_log.json`
4. Use OpenRouter API with `openai/gpt-3.5-turbo` model
5. Respond in the same language as the input question
**API Flow:**
```text
Question → JSON Payload → OpenRouter API → Response → Log History → Display
```
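A minimal sketch of this shared pattern; the variable names, the temp log directory, and the commented-out curl call are illustrative assumptions, not the real role scripts:

```shell
# Hypothetical sketch of the shared role pattern; not the actual scripts.
ROLE="deepbit"
QUESTION="${1:-How do I use grep?}"        # 1. question as CLI argument
LOGDIR="$(mktemp -d)"                      # the real scripts use ~/.${ROLE}_logs
mkdir -p "$LOGDIR"

# Build the request payload with the documented model.
PAYLOAD=$(jq -n --arg q "$QUESTION" \
  '{model: "openai/gpt-3.5-turbo",
    messages: [{role: "user", content: $q}]}')

# The real scripts would send this to OpenRouter, roughly:
#   curl -s https://openrouter.ai/api/v1/chat/completions \
#     -H "Authorization: Bearer $OPENROUTER_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$PAYLOAD"

# 2. append the question to the conversation history
HISTORY="${LOGDIR}/${ROLE}_history.json"
[ -s "$HISTORY" ] || echo '[]' > "$HISTORY"
jq --arg q "$QUESTION" '. + [{question: $q}]' "$HISTORY" > "${HISTORY}.tmp" \
  && mv "${HISTORY}.tmp" "$HISTORY"
```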
**Log Structure:**
- Request: `~/.{role}_logs/{role}_request.json`
- Response: `~/.{role}_logs/{role}_response.json`
- History: `~/.{role}_logs/{role}_history.json`
- Token usage: `~/.{role}_logs/token_log.json`
**Waldwächter Library (`lib/waldwaechter.sh`):**
Missions can source this library to make all AI assistants available as shell functions:
```bash
# In a mission script
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/../../lib/waldwaechter.sh"
# Now all AI assistants are available as commands
templatus "Create an HTML5 structure"
schnippsi "Show me CSS for a button"
bugsy "Debug this error"
mayaeule "What is friendship?"
```
The library automatically:
1. Loads `.env` file with API keys
2. Defines functions for all AI characters
3. Exports functions for use in subshells
4. Sets correct paths to role scripts in `crumbforest_roles/`
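These four steps might look roughly like the following; the loop and `eval` are assumptions about the implementation, not the actual `lib/waldwaechter.sh`:

```shell
# Hypothetical sketch of the library's setup steps; details may differ
# from the real lib/waldwaechter.sh.
WALD_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"
ROLES_DIR="${WALD_DIR}/../crumbforest_roles"

# 1. load .env with API keys, if present
if [ -f "${WALD_DIR}/../.env" ]; then . "${WALD_DIR}/../.env"; fi

# 2. + 4. define one wrapper function per character, pointing at its script
for role in mayaeule deepbit bugsy schnippsi templatus tobi schraubaer; do
  eval "${role}() { bash \"${ROLES_DIR}/${role}_zero.sh\" \"\$@\"; }"
  export -f "$role"   # 3. make the function visible in subshells
done
```

`export -f` is what lets mission scripts call `templatus` or `bugsy` even from subshells they spawn.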
**CrumbCrew Command Central:**
Position 9 in the mission selector opens an interactive shell with all Waldwächter pre-loaded:
- Custom prompt: `(🌲 CrumbCrew) user@host:path$`
- Commands: `mayaeule`, `deepbit`, `bugsy`, `schnippsi`, `templatus`, `tobi`, `schraubaer`
- Utilities: `crew_status`, `crew_tokens`, `crew_memory`
**Inter-Character Communication (Crew Memory):**
Characters can reference each other's conversation histories through JSON log files:
```bash
# Schraubbär automatically checks if question mentions other crew members
schraubaer "Tobi sagte wir brauchen 5V, welches Netzteil empfiehlst du?"
# ("Tobi said we need 5V, which power supply do you recommend?")
# The script reads Tobi's recent conversations and includes them as context
# Keeps it lightweight for Raspberry Pi Zero - simple JSON file memory
```
Each character stores conversations in `~/.{character}_logs/{character}_history.json`:
- Lightweight JSON format for Raspi Zero compatibility
- Last 3 conversations from referenced character are included as context
- Characters detect mentions of other crew members (tobi, schnecki, schnippsi, etc.)
- System prompt automatically enriched with crew context
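The lookup itself can be sketched with `jq`; the history schema and the `-3` slice follow the description above, while the sample data and temp path are made up:

```shell
# Hypothetical sketch of the crew-memory lookup; sample data is made up.
# A temp dir stands in for the real ~/.tobi_logs/ to keep the sketch harmless.
mention="tobi"
hist="$(mktemp -d)/${mention}_history.json"
cat > "$hist" <<'EOF'
[{"question": "What is jq?",   "answer": "A JSON processor."},
 {"question": "Which supply?", "answer": "We need a stable 5V."},
 {"question": "Parse JSON?",   "answer": "Use jq -r."},
 {"question": "Datasheet?",    "answer": "Check the pinout first."}]
EOF

# Last 3 conversations of the mentioned character become extra context
CREW_CONTEXT=$(jq -c '.[-3:]' "$hist")
echo "$CREW_CONTEXT"
```

The resulting JSON snippet is then folded into the asking character's system prompt.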
### Camera Vision System (SnakeCam)
**Technology Stack:**
- Flask web server with MJPEG video streaming
- OpenCV for camera capture and image processing
- Custom gesture detection using HSV color space and contour analysis
**Key Components:**
- `app.py` - Flask application with video streaming endpoints
- `gestures/gestures_v4.py` - Hand gesture detection algorithm
- Detection method: Skin color detection → Contour analysis → Convexity defects → Gesture classification
- Current gestures: "wave" detection based on defect count, area, and aspect ratio
**Endpoints:**
- `/` - Main camera interface
- `/video_feed` - MJPEG stream
- `/log_answer` - Log user responses with mood and gesture
- `/shutdown` - Clean camera release
## Project Structure
```
CF_Zero_V1/
├── crumb-mission-selector.sh        # Main mission launcher (metadata-driven)
├── missions/                        # Educational missions
│   ├── basics/                      # Beginner tutorials
│   │   ├── fridolin.sh / .meta.json # Navigation (pwd, ls, cd)
│   │   ├── balu.sh / .meta.json     # File creation (mkdir, touch, echo)
│   │   └── noko.sh / .meta.json     # File reading (cat, grep)
│   └── advanced/                    # Advanced topics
│       ├── dns_mission.sh / .meta.json  # DNS tools (dig, nslookup, host)
│       └── ssh_security.sh / .meta.json # SSH basics
├── crumbforest_roles/               # AI character assistants
│   ├── deepbit_zero.sh              # Poetic Bash explainer
│   ├── bugsy_zero.sh                # Debugging helper
│   ├── schnippsi_zero.sh            # Shell command assistant
│   └── tobi_zero.sh                 # JSON/data expert
├── snake_camera_vision_v2/          # Flask gesture recognition app
│   ├── app.py                       # Main Flask server
│   └── gestures/                    # Gesture detection modules
│       ├── gestures_v4.py           # Hand detection algorithm
│       └── gestures_debug_test.py   # Debug version with visualization
├── log_tokens_viewer_v4.sh          # Token usage viewer
└── fix_token_logs.sh                # Clean malformed logs
```
## Important Implementation Notes
### Adding New Missions
1. Create two files in appropriate category folder:
```bash
touch missions/basics/new_mission.sh
touch missions/basics/new_mission.meta.json
chmod +x missions/basics/new_mission.sh
```
2. Metadata must include:
- `icon`: Emoji for menu display
- `title`: Human-readable name
- `description`: What the mission teaches
- `category`: basics/advanced/challenges
- `enabled`: true/false
3. Mission script should:
- Use `cat << 'EOF'` for multi-line instructions
- Include interactive prompts with `read -p`
- Provide examples before asking user to try
- End with success message
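A hypothetical skeleton following these conventions (the mission content is made up; here it is written to a temp file and run non-interactively just to demonstrate the shape):

```shell
# Write a hypothetical mission skeleton and exercise it non-interactively.
mission=$(mktemp)
cat > "$mission" <<'SCRIPT'
#!/usr/bin/env bash
cat << 'EOF'
🌲 Welcome to the demo mission!
Today we practice the ls command.
Example:   ls -la    # list all files, including hidden ones
EOF
read -p "Now try it yourself, then press Enter... " _
echo "✅ Well done - mission complete!"
SCRIPT
chmod +x "$mission"
OUT=$(echo | bash "$mission")   # simulate the learner pressing Enter
printf '%s\n' "$OUT"
```

The quoted heredoc (`<< 'EOF'`) keeps the instructions verbatim, so `$` and backticks in the lesson text are not expanded.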
### Creating New AI Assistants
1. Copy template from existing role (e.g., `deepbit_zero.sh`)
2. Update these variables:
- `LOGDIR` - Log directory path
- System prompt in `jq -n` command
- Character name in echo statements
3. Create log directory structure automatically via `mkdir -p`
4. Initialize empty JSON arrays for history and token logs
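Steps 3-4 can be sketched like this; the role name is hypothetical and a temp location stands in for the real home-directory path:

```shell
# Hypothetical init for a new role's log directory; the real scripts use
# ~/.{role}_logs - a temp dir is used here to keep the sketch harmless.
ROLE="newowl"                        # made-up character name
LOGDIR="$(mktemp -d)/.${ROLE}_logs"
mkdir -p "$LOGDIR"
for f in "${ROLE}_history.json" "token_log.json"; do
  if [ ! -s "${LOGDIR}/${f}" ]; then
    echo '[]' > "${LOGDIR}/${f}"     # empty JSON array, per step 4
  fi
done
```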
### Token Log Format
Each role logs API usage in this format:
```json
{
  "zeit": "2025-06-18 19:05:33",
  "rolle": "deepbit",
  "usage": {
    "prompt_tokens": 45,
    "completion_tokens": 123,
    "total_tokens": 168
  }
}
```
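Given that format, per-role totals can be computed with `jq`. This is a small sketch (not the real `log_tokens_viewer_v4.sh`) and assumes the log file is a JSON array of such entries:

```shell
# Sum total_tokens from a log in the documented format (sample data).
log=$(mktemp)
cat > "$log" <<'EOF'
[{"zeit": "2025-06-18 19:05:33", "rolle": "deepbit",
  "usage": {"prompt_tokens": 45, "completion_tokens": 123, "total_tokens": 168}},
 {"zeit": "2025-06-18 19:10:02", "rolle": "deepbit",
  "usage": {"prompt_tokens": 30, "completion_tokens": 70, "total_tokens": 100}}]
EOF
TOTAL=$(jq '[.[].usage.total_tokens] | add' "$log")
echo "deepbit total: $TOTAL tokens"   # 168 + 100 = 268
```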
### Gesture Detection Tuning
Located in `snake_camera_vision_v2/gestures/gestures_v4.py`:
- **HSV Range**: `lower_skin=[0,30,60]`, `upper_skin=[20,150,255]`
- **ROI**: Fixed at (100,100) with 150x150 size
- **Area Threshold**: 2500-15000 pixels for valid hand detection
- **Defect Count**: 3-10 convexity defects for "wave" gesture
- **Aspect Ratio**: 1.3-2.3 for hand shape validation
## Environment Requirements
### AI Assistants
- `OPENROUTER_API_KEY` must be exported
- `jq` for JSON processing
- `curl` for API calls
### Camera System
- Python 3.x
- Flask (`pip install flask`)
- OpenCV (`pip install opencv-python`)
- Webcam at `/dev/video0` or default camera index 0
### Mission System
- Bash 4.0+
- `jq` for metadata parsing
- Standard Unix tools (ls, cat, grep, etc.)
## Philosophy and Design Principles
**"Waldwächter" (Forest Guardian) Philosophy:**
- Transparency over magic
- Metadata-driven extensibility
- Educational and interactive
- No code changes required to add new content
- Multilingual support (responds in input language)
**Character-Based Learning:**
- Each AI assistant has distinct personality and teaching style
- Poetic and metaphorical explanations for complex concepts
- Designed for children and beginners
- Conversational history maintained across sessions
**Token Philosophy - "Was kostet die Frage eines Kindes?" (What does a child's question cost?)**
In the forest, a child's question is priceless. However, in our digital age, questions to AI assistants consume computational resources (tokens). This creates a beautiful teaching moment:
*"What does a question cost?"*
- **In the forest:** Unbezahlbar (priceless) - every question holds infinite value
- **In the system:** Measured in tokens - a concrete, understandable metric
- **Pedagogically:** Token tracking teaches **mindful questioning**
### Why Token Tracking is Educational
1. **Teaches Reflection:** Children learn to think before asking
2. **Creates Awareness:** Understanding that resources have value
3. **Builds Quality:** Better questions lead to better answers
4. **Encourages Research:** Try to find answers independently first
5. **Develops Patience:** Not every thought needs an immediate AI response
### Token Budget System
The `.env` configuration allows setting a `DAILY_TOKEN_BUDGET`:
- `0` = Unlimited (default) - trust and freedom
- `>0` = Daily limit (e.g., 10000 tokens ≈ 20 thoughtful questions)
This isn't about restriction - it's about **mindfulness**. Just as we teach children to:
- Not waste water
- Not waste food
- Not waste paper
We can teach them:
- Not to waste computational resources
- To value the AI's "thinking time"
- To appreciate the cost of knowledge
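One way such a budget could be enforced is sketched below, under the assumption that today's entries are found by matching the `zeit` prefix; the sample log and threshold are made up, and the real scripts may do this differently:

```shell
# Hypothetical daily-budget check; sample log and threshold are made up.
DAILY_TOKEN_BUDGET=10000
log=$(mktemp)
cat > "$log" <<'EOF'
[{"zeit": "2025-06-18 19:05:33", "rolle": "deepbit",
  "usage": {"prompt_tokens": 45, "completion_tokens": 123, "total_tokens": 168}}]
EOF
today=$(date +%F)
used=$(jq --arg d "$today" \
  '[.[] | select(.zeit | startswith($d)) | .usage.total_tokens] | add // 0' "$log")
if [ "$DAILY_TOKEN_BUDGET" -gt 0 ] && [ "$used" -ge "$DAILY_TOKEN_BUDGET" ]; then
  echo "🌲 Budget reached - time to think offline for a while!"
else
  echo "Used today: $used of $DAILY_TOKEN_BUDGET tokens"
fi
```

Setting `DAILY_TOKEN_BUDGET=0` skips the limit branch entirely, matching the "trust and freedom" default.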
### Implementation
All AI character scripts (`deepbit_zero.sh`, `bugsy_zero.sh`, etc.) log token usage to:
- `~/.{character}_logs/token_log.json`
View with: `./log_tokens_viewer_v4.sh`
This creates a transparent feedback loop where children can see:
- How many tokens each question consumed
- Which questions were "expensive" vs "cheap"
- Their daily/weekly usage patterns
**Result:** More thoughtful questions, deeper learning, and respect for resources.