nit: ai-superpower initial commit

- .ai/ instruction set (root, behavior, skills, constraints)
- apply.sh workflow documentation (apply.md)
- architecture documentation (docs/architecture.md)
- .github/copilot-instructions.md for VS Code auto-load
- .gitignore (tmp/, .ai-instructions.conf)"
This commit is contained in:
moilanik 2026-03-02 14:06:36 +02:00
commit a6eb4da214
20 changed files with 1853 additions and 0 deletions

313
.ai/QUICK-REFERENCE.md Normal file
View File

@ -0,0 +1,313 @@
# AI Quick Reference - kube-log Project
## 🚫 Cluster Access Restrictions
**AI Assistant does NOT have direct access to Kubernetes clusters:**
- ❌ **CANNOT run:** `kubectl` commands against live clusters
- ❌ **CANNOT run:** `helm install/upgrade` to live clusters
- ✅ **CAN run:** `helm template` (renders manifests locally)
- ✅ **CAN run:** `helm show values` (inspects charts)
- ✅ **CAN run:** `helm dependency` commands (manages chart dependencies)
**Why:**
- User manages cluster access and credentials
- Prevents accidental changes to production clusters
- User controls when and how deployments happen
**What AI can do:**
1. Generate manifests with `helm template`
2. Show what would be deployed
3. Analyze chart configurations
4. Suggest kubectl/helm commands for user to run
**What user does:**
1. Run kubectl/helm commands themselves
2. Verify changes before applying
3. Control deployment timing
4. Manage cluster credentials
---
## 🏢 OpenShift Deployments
**OpenShift installations are done via ArgoCD from the IaC repository:**
- **Local dev repo:** `~/koodi/niko-dev/kube-log/`
- **IaC repo:** `~/koodi/niko-dev/IaC/kube-log/`
- **Deployment method:** ArgoCD points to IaC repo
**Customer-specific file:**
- `values.storage.openshift.dev.yaml` exists in IaC repo (customer side)
- Contains: volume sizes and storageClass settings
- Example: `storageClass: "ocs-storagecluster-cephfs"`
- This file is NOT synced from local dev repo (customer maintains it)
**When debugging OpenShift issues:**
1. Check ArgoCD sync status
2. Verify customer's `values.storage.openshift.dev.yaml` exists
3. Ensure `values.openshift.yaml` is in IaC repo
4. Remember: OpenShift uses SCC (Security Context Constraints), not PSP
---
## 🔄 Syncing to IaC Repository
**When requested to sync changes to the IaC repository:**
```bash
# Siirry projektin juureen
cd /Users/moilanik/koodi/niko-dev
# Kopioi kaikki paitsi .ai kansio, CLAUDE.md ja IaC-spesifiset tiedostot
rsync -av --delete \
--exclude='.ai/' \
--exclude='.github/' \
--exclude='CLAUDE.md' \
--exclude='.git/' \
--exclude='tmp/' \
--exclude='values.storage.openshift.dev.yaml' \
kube-log/ IaC/kube-log/
# Tarkista muutokset
cd IaC/kube-log && git status
```
**What this does:**
- Syncs all kube-log changes to separate IaC git repository
- Excludes `.ai/` (AI guidelines - not needed in production repo)
- Excludes `CLAUDE.md` (AI guidelines - kept local only)
- Excludes `.git/` (each repo has its own git history)
- Excludes `tmp/` (temporary files)
- Excludes `values.storage.openshift.dev.yaml` (IaC-only file, customer specific)
- `--delete` removes files from IaC that were deleted from kube-log
**When to sync:**
- After significant changes to templates, values files, or documentation
- Before committing to IaC repository
- When owner requests: "synkkaa IaC" or similar
**IMPORTANT:** User will commit and push in IaC repo themselves!
---
## 🚨 Git Commands - DO NOT RUN
**These commands are FORBIDDEN without explicit permission:**
```bash
# ❌ DO NOT RUN THESE:
git add .
git commit -m "..."
git push
git push --force
git reset --hard
```
**Instead, SHOW the command and wait:**
```bash
# ✅ Show this to user:
echo "Run this command:"
echo "git add ."
echo "git commit -m 'your message'"
echo "git push"
```
---
## 🎯 Common Tasks
### Check What Changed
```bash
git status
git diff
git diff values.yaml
```
### Restore File from Git
```bash
# Show command, don't run:
git checkout <commit-hash> -- <file-path>
```
### View Git History
```bash
git log --oneline -20
git log --oneline --graph --all
```
### Test Helm Template
```bash
# K8s
helm template . --name-template monitoring --namespace kube-log \
-f values.yaml -f values.storage.dev.yaml
# OpenShift
helm template . --name-template monitoring --namespace kube-log \
-f values.yaml -f values.storage.dev.yaml -f values.openshift.yaml
```
---
## 📋 Installation Commands
### Standard Kubernetes (K3s, etc.)
```bash
# Dev environment
helm upgrade --install kube-log . --namespace kube-log --create-namespace \
-f values.yaml -f values.storage.dev.yaml
# Production
helm upgrade --install kube-log . --namespace kube-log --create-namespace \
-f values.yaml -f values.storage.prod.yaml
```
### OpenShift
**OpenShift deployments are managed via ArgoCD from IaC repo:**
- ArgoCD points to: `~/koodi/niko-dev/IaC/kube-log/`
- Uses: `values.yaml`, `values.storage.openshift.dev.yaml`, `values.openshift.yaml`
**Manual installation (for testing only):**
```bash
# Dev environment
helm upgrade --install kube-log . --namespace kube-log --create-namespace \
-f values.yaml -f values.storage.openshift.dev.yaml -f values.openshift.yaml
# Production
helm upgrade --install kube-log . --namespace kube-log --create-namespace \
-f values.yaml -f values.storage.openshift.prod.yaml -f values.openshift.yaml
```
### Known Issue: First Installation
First installation may fail with ConfigMap/ServiceAccount error.
Just run the same command **twice**:
```bash
# First run (may fail)
helm upgrade --install kube-log . --namespace kube-log -f values.yaml -f values.storage.dev.yaml
# Second run (works)
helm upgrade --install kube-log . --namespace kube-log -f values.yaml -f values.storage.dev.yaml
```
---
## 🔍 Debugging
### Check Pod Status
```bash
kubectl get pods -n kube-log
kubectl get pods -n kube-log -w # Watch mode
```
### View Logs
```bash
kubectl logs -n kube-log deployment/kube-log-minio
kubectl logs -n kube-log deployment/kube-log-minio --tail=50
kubectl logs -n kube-log kube-log-loki-0
```
### Check PVCs
```bash
kubectl get pvc -n kube-log
kubectl describe pvc -n kube-log kube-log-minio
```
### Port Forwarding
```bash
# MinIO
kubectl port-forward -n kube-log svc/kube-log-minio 9000:9000
# Grafana
kubectl port-forward -n kube-log svc/kube-log-grafana 3000:80
# Prometheus
kubectl port-forward -n kube-log svc/kube-log-prometheus-server 9090:80
# Loki
kubectl port-forward -n kube-log svc/kube-log-loki 3100:3100
```
---
## 🗂️ File Structure
```
kube-log/
├── .ai/ # AI guidelines (not in git)
│ ├── README.md # Main collaboration rules
│ └── QUICK-REFERENCE.md # This file
├── charts/ # Helm dependencies (tgz)
├── templates/ # Kubernetes manifests
├── files/ # Config files, dashboards
│ ├── dashboards/ # Grafana dashboards
│ └── grafana-alloy-config.river
├── values.yaml # K8s baseline config
├── values.openshift.yaml # OpenShift overrides (if exists)
├── values.storage.dev.yaml # Dev storage config
├── values.storage.prod.yaml # Prod storage config
└── README.md # Main documentation
```
---
## 💡 MinIO Buckets
Default buckets (created automatically):
- `loki-storage` - Loki chunks and indexes
- `prometheus-metrics` - Prometheus long-term storage (Thanos)
- `tempo-traces` - Tempo distributed tracing
Credentials (POC):
- Username: `admin`
- Password: `password123`
---
## 🔐 Security Context Patterns
### Kubernetes (values.yaml)
```yaml
securityContext:
enabled: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
runAsNonRoot: true
```
### OpenShift (values.openshift.yaml)
```yaml
securityContext:
enabled: false # Let SCC inject
```
---
## 📝 Remember
- **NEVER** run git commands without permission
- **ALWAYS** show `git diff` before suggesting commits
- **ASK** if uncertain
- **MINIMAL** changes only
- **TEST** with `helm template` before installing
---
**Last Updated:** 2025-11-05
**Project Owner:** Controls all git operations
**AI Role:** Analyze, suggest, show commands - never execute git operations

View File

@ -0,0 +1,87 @@
# AI Assistant Guidelines
**Updated**: 2026-03-02
---
## ⚡ ACKNOWLEDGMENT REQUIRED
**YOU MUST start EVERY response with this acknowledgment on the FIRST LINE.**
Your response MUST always begin with:
```
✅ ai-root-instructions.md READ \n\n
```
Then continue with your actual response on the next line.
This allows the user to verify you have read this file every time.
You understand:
- **Before ANY git operation**: read `instructions/git.instructions.md` — no exceptions
- NO git commands without explicit permission
- Show changes with `git diff` first
- Wait for approval before any modifications
- Minimal changes only
---
## Purpose
This document provides guidelines for AI assistants (any LLM) when working on any project. These rules are split into modular instruction files for efficiency.
---
## IMPORTANT: Language
**ALL DOCUMENTATION MUST BE WRITTEN IN ENGLISH.**
Claude can communicate with the user in user's language (Finnish preferred by project owner), but all documentation files must be in English.
---
## 📚 Modular Instructions
This main file references specialized instruction files. **Load relevant files based on context:**
### behavior/ — Always Active
- [Core Principles](instructions/behavior/core-principles.instructions.md) - Analysis before action, minimal changes, decision-making process
- [Project Context](instructions/behavior/project-context.instructions.md) - How to find project context (read `docs/ai-context.md` in the project)
- [README](instructions/behavior/readme.instructions.md) - What a project README must contain
- [Docs](instructions/behavior/docs.instructions.md) - Required files in docs/ folder
### skills/ — Load When Needed
- [Git](instructions/skills/git.instructions.md) - Git policy, forbidden commands, what you CAN do
- [File Editing](instructions/skills/file-editing.instructions.md) - Tool-only editing, forbidden terminal commands
- [Documentation](instructions/skills/documentation.instructions.md) - Writing principles, workflow, scope rules
- [Mermaid](instructions/skills/mermaid.instructions.md) - Diagram types, color contrast rules, sizing
- [Analysis](instructions/skills/analysis.instructions.md) - Where to write analysis, tmp/ convention, full-pass writing allowed
### constraints/ — Load When Needed
- [Agent Capabilities](instructions/constraints/agent-capabilities.instructions.md) - AI limitations, user responsibilities, debugging workflows
- [Kubernetes Access](instructions/constraints/kubernetes-access.instructions.md) - kubectl/helm restrictions, port-forwarding patterns
- [Container Limitations](instructions/constraints/container-limitations.instructions.md) - Missing tools in pods, proper debugging methods
---
## 🎯 When to Load Which Instructions
```
User asks about README → instructions/behavior/readme.instructions.md
User asks about docs/ → instructions/behavior/docs.instructions.md
Before ANY git operation → instructions/skills/git.instructions.md — load BEFORE acting, not after
User asks to edit files → instructions/skills/file-editing.instructions.md
User asks to write docs → instructions/skills/documentation.instructions.md
User creates a Mermaid diagram → instructions/skills/mermaid.instructions.md
User asks for analysis/comparison → instructions/skills/analysis.instructions.md
User asks about project → instructions/behavior/project-context.instructions.md → docs/ai-context.md
User needs to debug cluster → instructions/constraints/agent-capabilities.instructions.md + instructions/constraints/kubernetes-access.instructions.md
User reports pod issues → instructions/constraints/container-limitations.instructions.md
Always active → instructions/behavior/core-principles.instructions.md
```
---
**Last Updated**: 2026-03-02
**Maintained By**: Project Owner
**AI Assistants**: Follow these guidelines strictly - no exceptions

View File

@ -0,0 +1,136 @@
# Core Principles
## 🎯 Fundamental Rules
### 1. Analysis Before Action
- **NEVER** make changes without analyzing first
- Present options with pros/cons
- Wait for explicit approval before implementing
### 2. Minimal Changes Only
- Make ONLY the requested change
- Don't "improve" or "clean up" other things
- Don't change component behavior
- Don't remove features without approval
### 3. Show Changes First
```bash
# Always show what will change:
git diff
git status
# Then wait for approval
```
---
## 📋 Decision-Making Process
### Before ANY Change:
1. **Understand the Problem**
- What is broken?
- What is the root cause?
- What components are affected?
2. **Analyze Impact**
- What files/components are affected?
- Are there breaking changes?
- What are the risks?
3. **Present Options**
```
Problem: [Clear description]
Root Cause: [Technical explanation]
Option A: [Description]
Pros: ...
Cons: ...
Impact: ...
Option B: [Description]
Pros: ...
Cons: ...
Impact: ...
Recommendation: [With reasoning]
What would you like to do?
```
4. **Wait for Decision**
- Don't assume
- Don't guess
- Ask if unclear
5. **Implement ONLY Approved Changes**
- No extras
- No "while I'm at it" fixes
- Just what was approved
---
## 🚫 What NOT to Do
1. ❌ Make changes without approval
2. ❌ Commit to git without permission
3. ❌ Make "quick fixes" without analysis
4. ❌ Delete code that seems unused
5. ❌ Upgrade dependencies without testing
6. ❌ Add new features without discussing use case
7. ❌ Change architecture without trade-off analysis
8. ❌ Modify multiple components at once
9. ❌ **Use sed/awk/terminal for ANY file edits** - ALWAYS use file tools
10. ❌ **Use cat/echo/redirect operators (>, >>, <<) for file modifications**
11. ❌ **Modify files via terminal in ANY way**
---
## ✅ What TO Do
1. ✅ Read the problem carefully
2. ✅ Analyze root cause
3. ✅ Present options clearly
4. ✅ Wait for approval
5. ✅ Make minimal changes using file tools (NEVER terminal commands)
6. ✅ Show git diff before committing
7. ✅ Update docs if needed
8. ✅ Ask when uncertain
---
## 🔄 When in Doubt
**ASK!**
Better to ask:
- "Should we do X or Y?"
- "This affects Z, is that okay?"
- "Found options A, B, C - which fits your needs?"
Than to:
- Assume and break things
- Make changes without approval
- "Fix" something that wasn't broken
---
## 💬 Communication Style
- Be concise but thorough
- Explain technical concepts clearly
- **Use Finnish when user prefers** (project owner is Finnish)
- Use emojis for clarity (✅ ❌ 🔍 ⚠️)
- Link to relevant docs when helpful
---
## 📝 Remember
- This is **production platform infrastructure**
- Stability > Speed
- User controls git commands
- Minimal changes only
- Always show diff first
- Never commit without permission

View File

@ -0,0 +1,34 @@
# Docs Folder Instructions
## Required Files
| File | Audience | Purpose |
|------|----------|---------|
| `docs/ai-context.md` | AI | Project context — architecture, decisions, pitfalls |
| `docs/architecture.md` | Human + AI | System overview with Mermaid diagrams |
---
## ai-context.md
Contains everything AI needs to avoid bad suggestions:
- What the system does
- Key components and their relationships
- Technical decisions and why they were made
- What NOT to do (pitfalls, constraints)
Keep under 200 lines. Link to `architecture.md` for diagrams.
## architecture.md
- Start with a Mermaid diagram
- Explain components and data flows
- Written in English
---
## Rules
- Docs are **not** a changelog — no "updated X on date Y"
- If a doc exceeds ~150 lines, split it
- If asked to document something, put it in `docs/` — not README, not inline comments

View File

@ -0,0 +1,72 @@
# Project Context
## How to Load Project Context
**This file does not contain project-specific information.**
Each project maintains its own context document. When working in a project, find and read:
```
docs/ai-context.md
documentation/ai-context.md
doc/ai-context.md
```
Use whichever documentation folder the project already has.
This file contains everything you need to understand the project:
- Architecture overview
- Repository structure
- Key technical decisions
- Infrastructure and platforms
- Common debugging patterns
---
## If ai-context.md Does Not Exist
Tell the user:
> "This project does not have an `ai-context.md` file. Would you like me to create a template?"
Place it in the project's existing documentation folder (`docs/`, `documentation/`, `doc/`, etc.). If no documentation folder exists, use `docs/`.
---
## Template Structure for ai-context.md
When creating a new context file, use this structure:
```markdown
# AI Context: [Project Name]
**Updated**: YYYY-MM-DD
## Project Overview
[Short description — what does this project do?]
## Architecture
[Key components and how they connect]
## Repository Structure
[Most important directories and files]
## Key Technical Decisions
[Things the AI must know to avoid bad suggestions]
## Common Commands
[Build, run, test, deploy]
## Debugging Patterns
[How to diagnose common issues]
## What NOT to Do
[Project-specific pitfalls]
```

View File

@ -0,0 +1,38 @@
# README Instructions
> For writing style, diagrams and workflow: see [documentation.instructions.md](../skills/documentation.instructions.md)
## Purpose
README is the entry point for both humans and AI.
Start with the **problem the project solves** — one or two sentences.
---
## Required Sections
1. **What is this?** — the problem and solution in plain language
2. **How to run locally** — minimal commands only, no theory
3. **Key links** — links to `docs/` files, nothing else
---
## Rules
- Keep under ~100 lines — if more is needed, it belongs in `docs/`
- No duplication of docs content — link instead
- Written in English
---
## If README Is Missing
Ask the user:
1. What problem does this project solve?
2. How is it started locally?
Do not invent content.
## If README Is Outdated
Point out specifically what is wrong. Fix only that — do not rewrite the whole file.

View File

@ -0,0 +1,225 @@
# Agent Capabilities and Limitations
## 🚫 Kubernetes Cluster Access Restrictions
**AI Assistant does NOT have direct access to Kubernetes clusters:**
### What AI CANNOT Do:
- ❌ **Run kubectl commands** against live clusters
- ❌ **Run helm install/upgrade** to live clusters
- ❌ **Access cluster directly** - no credentials, no connection
- ❌ **Make port-forwards** to cluster services
### Why:
- User manages cluster access and credentials
- Prevents accidental changes to production/development clusters
- User controls when and how deployments happen
### What AI CAN Do:
- ✅ **helm template** - Render manifests locally
- ✅ **helm show values** - Inspect chart configurations
- ✅ **helm dependency** - Manage chart dependencies
- ✅ **curl commands** - Make HTTP requests (when user provides port-forward)
- ✅ **Analyze configurations** - Review YAML/JSON files
- ✅ **Suggest commands** - Show what user should run
---
## 👤 User Responsibilities
### User Must:
1. **Run all kubectl commands** themselves
```bash
kubectl get pods -n monitoring
kubectl describe pod ...
kubectl logs ...
```
2. **Create port-forwards** when AI needs to test endpoints
```bash
kubectl port-forward -n monitoring svc/prometheus 9090:80
# Then AI can: curl http://localhost:9090/...
```
3. **Run helm install/upgrade** themselves
```bash
helm upgrade --install monitoring . -f values.yaml
```
4. **Verify changes** before applying
5. **Control deployment timing**
6. **Manage cluster credentials**
---
## 🎯 Workflow Pattern
**When user reports a cluster issue:**
1. **AI asks user to run kubectl commands:**
```
"Please run: kubectl get pods -n monitoring"
"Please run: kubectl describe pod [pod-name]"
```
2. **User provides output**
3. **AI analyzes** the output
4. **AI suggests fix** with commands for user to run
5. **User runs commands** themselves
**Example:**
```
User: "Prometheus pod failing"
AI: "Please run: kubectl describe pod -n monitoring -l app=prometheus"
User: [provides output]
AI: "I see ImagePullBackOff error. The image registry.k8s.io/busybox:1.28 doesn't exist.
Let me update values.yaml to use working image.
[makes file edit]
After change, please run:
kubectl delete pod [pod-name] -n monitoring"
User: [runs command]
```
---
## 🛠️ Helm Command Rules
### AI Can Run (Local Operations):
```bash
# Render templates locally
helm template monitoring . -f values.yaml > tmp/manifests.yaml
# Show chart values
helm show values prometheus-community/prometheus
# Manage dependencies
helm dependency update
helm dependency build
```
### AI CANNOT Run (Cluster Operations):
```bash
# ❌ Install to cluster
helm install monitoring . -f values.yaml
# ❌ Upgrade cluster release
helm upgrade monitoring . -f values.yaml
# ❌ List cluster releases
helm list -n monitoring
# ❌ Get release status
helm status monitoring
```
### Instead, AI Should:
1. **Generate and show** the command:
```
"Run this command:
helm upgrade --install monitoring . -f values.yaml -n monitoring"
```
2. **Explain** what it will do
3. **Wait** for user to run it
4. **Ask user** for results/output if needed
---
## 📊 Testing Endpoints
**When AI needs to test HTTP endpoints:**
### Pattern:
1. **AI asks user:**
```
"Please create port-forward:
kubectl port-forward -n monitoring svc/prometheus 9090:80"
```
2. **User runs port-forward** (keeps terminal open)
3. **AI can now run:**
```bash
curl http://localhost:9090/api/v1/query?query=up
```
4. **When done, user closes** port-forward (Ctrl+C)
### Common Services:
```bash
# Prometheus
kubectl port-forward -n monitoring svc/prometheus 9090:80
# Grafana
kubectl port-forward -n monitoring svc/grafana 3000:80
# MinIO Console
kubectl port-forward -n monitoring svc/minio 9001:9001
# Loki
kubectl port-forward -n monitoring svc/loki 3100:3100
```
---
## 🔍 Debugging Workflow
### For Pod Issues:
**AI requests:**
```
1. "kubectl get pods -n [namespace]"
2. "kubectl describe pod [pod-name] -n [namespace]"
3. "kubectl logs [pod-name] -n [namespace]"
4. "kubectl get events -n [namespace] --sort-by='.lastTimestamp'"
```
**User provides output** → AI analyzes → AI suggests fix
### For Service Issues:
**AI requests:**
```
1. "kubectl get svc -n [namespace]"
2. "kubectl describe svc [service-name] -n [namespace]"
3. "kubectl get endpoints [service-name] -n [namespace]"
```
### For Configuration Issues:
**AI can:**
- Read files directly (values.yaml, templates, etc.)
- Use helm template to render manifests
- Analyze configurations
- Suggest changes
---
## 💡 Remember
- **AI = Analysis + File editing + Suggestions**
- **User = Cluster access + Command execution + Deployment control**
- **Communication is key** - AI asks, user provides, AI analyzes
- **Safety first** - No direct cluster access prevents accidents
---
**Last Updated:** 2026-01-19
**Purpose:** Define clear boundaries between AI capabilities and user responsibilities

View File

@ -0,0 +1,211 @@
# Container Image Limitations
## 🐳 CRITICAL: Production Containers Use Minimal Images
**Most production containers do NOT include common debugging tools**
### Why:
- **Security:** Minimal attack surface
- **Size:** Smaller images = faster pulls
- **Best practices:** Containers should do one thing only
---
## ❌ Tools NOT Available in Pods
### Network Tools:
- ❌ `curl` - Not in Prometheus, Grafana, Loki, Tempo, MinIO
- ❌ `wget` - Not in most containers
- ❌ `ping` - Not available
- ❌ `netstat` - Not available
- ❌ `telnet` - Not available
- ❌ `nc` (netcat) - Not available
### Shells:
- ❌ `bash` - Many containers only have `/bin/sh` (ash/dash)
- ⚠️ `/bin/sh` - Usually available but limited (no arrays, fewer features)
### Editors:
- ❌ `vim` - Not available
- ❌ `nano` - Not available
- ❌ `vi` - Not available
### Utilities:
- ❌ `jq` - Not available
- ❌ `yq` - Not available
- ❌ `less/more` - Often not available
- ❌ `grep` - Sometimes available, sometimes not
---
## ✅ What IS Available
### Usually Present:
- ✅ `/bin/sh` (basic shell - ash/dash)
- ✅ `cat` - Read files
- ✅ `echo` - Print text
- ✅ `ls` - List files
- ✅ `pwd` - Current directory
- ✅ `env` - Environment variables
### Application-Specific:
- ✅ **Prometheus:** `promtool` (Prometheus CLI)
- ✅ **Grafana:** `grafana-cli` (Grafana CLI)
- ✅ **MinIO:** `mc` (MinIO Client)
---
## 🚫 DON'T Suggest These
### ❌ Wrong:
```bash
# This will fail - curl not available
kubectl exec -n monitoring pod-name -- curl http://service:8080/health
# This will fail - wget not available
kubectl exec -n monitoring pod-name -- wget http://example.com/file
# This will fail - no ping
kubectl exec -n monitoring pod-name -- ping google.com
# This might fail - bash might not exist
kubectl exec -n monitoring pod-name -- bash -c "echo test"
```
---
## ✅ DO Suggest These
### Option 1: Port-Forward + Local Tools (BEST)
```bash
# User creates port-forward:
kubectl port-forward -n monitoring svc/prometheus 9090:80
# AI uses local curl:
curl http://localhost:9090/api/v1/query?query=up
```
**Advantages:**
- ✅ All local tools available (curl, jq, etc.)
- ✅ More powerful than container shell
- ✅ Better for complex queries
- ✅ Can save results locally
### Option 2: Check Logs
```bash
# Instead of exec curl to health endpoint:
kubectl logs -n monitoring deployment/prometheus-server --tail=50
# Check for startup messages, errors
```
### Option 3: Use Kubernetes API
```bash
# Check service endpoints
kubectl get endpoints -n monitoring prometheus
# Check pod status
kubectl get pods -n monitoring -l app=prometheus
# Check events
kubectl get events -n monitoring --sort-by='.lastTimestamp'
```
### Option 4: Application-Specific CLI (If Available)
```bash
# MinIO Client (mc is available in minio containers)
kubectl exec -n monitoring deployment/minio -- mc admin info local
# Promtool (available in Prometheus containers)
kubectl exec -n monitoring deployment/prometheus-server -- promtool check config /etc/prometheus/prometheus.yml
```
---
## 📋 Decision Matrix
| Need | ❌ DON'T | ✅ DO |
|------|---------|------|
| Test HTTP endpoint | `kubectl exec ... curl` | Port-forward + local curl |
| Check connectivity | `kubectl exec ... ping` | Check pod logs, endpoints |
| Download file | `kubectl exec ... wget` | kubectl cp or port-forward + curl |
| Parse JSON | `kubectl exec ... jq` | Port-forward + local jq |
| Check logs | `kubectl exec ... cat log` | kubectl logs |
| Test DNS | `kubectl exec ... nslookup` | Check service/endpoint resources |
---
## 💡 Best Practices
### When debugging pods:
1. **First check logs:**
```bash
kubectl logs -n monitoring pod-name
kubectl logs -n monitoring pod-name --previous # Previous crash
```
2. **Check pod events:**
```bash
kubectl describe pod -n monitoring pod-name
kubectl get events -n monitoring
```
3. **Check service endpoints:**
```bash
kubectl get svc -n monitoring
kubectl get endpoints -n monitoring service-name
```
4. **If need HTTP testing:**
```bash
# User port-forwards
kubectl port-forward -n monitoring svc/service-name 8080:80
# AI tests locally
curl http://localhost:8080/health
curl http://localhost:8080/metrics
```
5. **Only use exec when:**
- Checking config files inside container
- Using application-specific CLI tools
- No other option available
---
## 🎯 Communication Pattern
**Wrong:**
```
AI: "Run this to check health endpoint:"
kubectl exec -n monitoring pod-name -- curl http://localhost:8080/health
```
**Right:**
```
AI: "Let's check the health endpoint. Please run:"
kubectl port-forward -n monitoring svc/service-name 8080:80
AI: "Now I'll test it with curl:"
[AI runs: curl http://localhost:8080/health]
```
---
## 📝 Remember
- **Minimal containers** = fewer tools
- **Port-forward pattern** = local tools available
- **Logs first** = most issues visible in logs
- **Kubernetes API** = rich information without exec
- **Application CLI** = use when available
---
**Last Updated:** 2026-01-19
**Purpose:** Guide proper debugging without assuming container tools exist

View File

@ -0,0 +1,146 @@
# Kubernetes and Helm Access Instructions
## 🚨 CRITICAL: AI Has NO Cluster Access
**AI Assistant does NOT have direct access to Kubernetes clusters**
### What This Means:
AI cannot interact with live Kubernetes clusters. Only the user can.
### kubectl Commands
- ❌ **AI CANNOT run:** `kubectl` commands against live clusters
- ✅ **AI CAN do:** Ask user to run kubectl commands
- ✅ **AI CAN do:** Explain what kubectl command will do
- ✅ **AI CAN do:** Show the exact command user should run
**Example workflow:**
```
AI: "Please run this command to check pod status:"
kubectl get pods -n monitoring
User: [runs command and shows output]
AI: [analyzes output and provides guidance]
```
### Port-Forwarding Workflow
**AI creates curl commands, user handles port-forwards:**
1. **User runs:** `kubectl port-forward -n monitoring svc/prometheus 9090:80`
2. **AI runs:** `curl http://localhost:9090/api/v1/query?query=...`
3. **AI analyzes:** Results and provides recommendations
**Why this pattern:**
- AI can make HTTP requests to localhost
- User controls cluster access
- Secure: AI never has cluster credentials
---
## 🎯 Helm Command Restrictions
### What AI CAN Do:
- ✅ `helm template` - Render manifests locally (no cluster needed)
- ✅ `helm show values` - Inspect chart values
- ✅ `helm show chart` - Show chart metadata
- ✅ `helm dependency list/update/build` - Manage dependencies
- ✅ `helm lint` - Validate chart structure
**Example:**
```bash
# AI can run these locally:
helm template monitoring . -f values.yaml
helm show values charts/prometheus-*.tgz
helm dependency update
```
### What AI CANNOT Do:
- ❌ `helm install` - Requires cluster access
- ❌ `helm upgrade` - Requires cluster access
- ❌ `helm uninstall` - Requires cluster access
- ❌ `helm list` - Requires cluster access
- ❌ `helm get` - Requires cluster access
**Instead:**
- AI generates the command
- AI explains what it will do
- User runs the command themselves
**Example:**
```
AI: "Run this to upgrade the release:"
helm upgrade observability-stack . -n monitoring -f values.yaml -f values.storage.dev.yaml
AI: "This will update the following resources: ..."
```
---
## 📊 Debugging Workflow
### Check Pod Status:
```
AI: "Please check pod status:"
User runs: kubectl get pods -n monitoring
User: [shows output]
AI: [analyzes and guides]
```
### Check Pod Logs:
```
AI: "Please get logs from prometheus pod:"
User runs: kubectl logs -n monitoring deployment/prometheus-server --tail=50
User: [shows output]
AI: [analyzes errors]
```
### Check Events:
```
AI: "Please check recent events:"
User runs: kubectl get events -n monitoring --sort-by='.lastTimestamp' | tail -20
User: [shows output]
AI: [identifies issues]
```
### Access Service via Port-Forward:
```
AI: "Please port-forward Prometheus:"
User runs: kubectl port-forward -n monitoring svc/prometheus 9090:80
AI runs: curl http://localhost:9090/api/v1/query?query=up
AI: [analyzes metrics]
```
---
## 🔑 Key Principles
1. **User has cluster access** - AI does not
2. **AI asks user to run kubectl/helm** - Never assumes access
3. **Port-forward pattern** - User forwards, AI curls localhost
4. **Local operations only** - AI uses helm template, not install
5. **Analysis role** - AI analyzes output user provides
---
## ✅ Best Practices
- Always explain what command will do before asking user to run it
- Show exact command with all flags
- Ask for relevant output only (use grep/tail to filter)
- Use port-forward + curl instead of kubectl exec
- Generate manifests with helm template for validation
---
## ❌ Common Mistakes to Avoid
1. Don't try to run kubectl directly
2. Don't assume AI can install Helm releases
3. Don't ask user for cluster credentials
4. Don't suggest kubectl exec with tools that aren't available (see container-limitations.instructions.md)

View File

@ -0,0 +1,36 @@
# Analysis Instructions
Analysis documents are scratch work — comparisons, evaluations, investigations. They live in `tmp/` which is gitignored. They are never `docs/`.
## Where to write
```
tmp/<TOPIC>/filename.md
```
Examples:
- `tmp/symlink-vs-copy/comparison.md`
- `tmp/curl-installer/design.md`
- `tmp/harness-options/analysis.md`
`TOPIC` is a short slug describing the subject area. `filename` describes what the document contains. Use lowercase and hyphens.
## Scope
Analysis is strictly scoped to the question asked. Do not expand into adjacent decisions unless the user asks. The purpose is to inform a decision, not to redesign the system.
## Writing style
Analysis documents are the exception to the section-by-section documentation workflow. Write the full document in one pass. The user reads it as a whole and decides what to do next.
Structure to use when relevant:
- **Context** — what problem or question this addresses
- **Options** — what the alternatives are, with concrete tradeoffs per option
- **Recommendation** — what to do and why (be direct, no hedging)
- **Open questions** — what still needs a decision, if anything
Not all sections are required. A short comparison with a clear recommendation may need only a table and a conclusion.
## What analysis is not
Analysis documents do not become documentation. If a decision is made based on an analysis, the decision gets documented in `docs/architecture.md` or the relevant doc file. The analysis in `tmp/` stays as scratch — it is not cleaned up, updated, or committed.

View File

@ -0,0 +1,80 @@
# Documentation Instructions
## Core Principle
Every document starts with the **problem it solves** — one or two sentences.
Then a diagram if it clarifies, then content.
---
## Document Workflow
1. **Abstract** -- write problem statement + section headings (+ top-level diagram if relevant) directly into the file. Do not propose the structure in chat first.
2. **Align** -- stop after writing the abstract. The human reviews it in the file and confirms or adjusts before any section content is written.
3. **Section by section** -- write ~50 lines at a time, then stop and wait. The human decides when a section is ready and asks for the next one. Do not ask "did this go in the right direction?" or prompt the user to continue.
4. **Stay in scope** -- if something is complex or unclear, stop and ask rather than guess
5. **Review** -- when done, AI asks: "Is anything missing?" -- user decides
6. **Link check** -- after finishing, scan other docs in `docs/` for broken links or documents that should now link to this one
---
## Diagrams
**Abstract**: one top-level diagram showing the full scope.
**Sections**: add focused sub-diagrams to clarify specific parts.
Use **Mermaid** for flows, architecture, component relationships.
Use **ASCII trees** for file/folder structures.
When in doubt, include a diagram -- visuals are faster to read than text.
For contrast rules, sizing and diagram type selection: see [mermaid.instructions.md](mermaid.instructions.md)
---
## What Belongs in a Document
✅ Principles — how and why the system works
✅ Architecture — components, relationships, data flows
✅ Key decisions — why this approach over alternatives
✅ Scope boundaries — explicitly state what is out of scope and link to where it is covered
❌ Step-by-step commands — belong in README or runbooks
❌ Full code blocks — small illustrative examples only
❌ Troubleshooting — "if error X do Y" does not belong here
❌ All possible alternatives — pick the relevant ones, explain briefly when to use which
---
## When the User Corrects Something
- Fix **only** what was pointed out
- Do not explain why you got it wrong
- Do not add anything else at the same time
---
## Scope
Keep each document tightly scoped to its title.
If a topic needs deeper treatment:
- Ask: 'Should we create a separate document for X?'
- If yes: write a short intro sentence and link to the new doc
- Do not expand inline
---
## Length
- Increment: ~50 lines at a time, then check with user
- Hard limit: 500 lines
- If approaching limit: stop and ask user what to continue with
---
## Rules
- Written in English
- No step-by-step commands -- those belong in README or runbooks
- No changelog entries
- No duplication between documents -- link instead

View File

@ -0,0 +1,92 @@
# File Editing Instructions
## 🛠️ FILE EDITING RULES - CRITICAL
**NEVER EVER modify files using terminal commands. NO EXCEPTIONS.**
### Forbidden Commands - DO NOT RUN:
1. ❌ `cat > file.txt` - Creates/overwrites file
2. ❌ `cat >> file.txt` - Appends to file
3. ❌ `cat << EOF > file.txt` - Here-document to file
4. ❌ `echo "text" > file.txt` - Writes to file
5. ❌ `echo "text" >> file.txt` - Appends to file
6. ❌ `sed -i` - In-place file modification
7. ❌ `awk` - Any file modification
8. ❌ `perl -i` - In-place file modification
9. ❌ `tee` - Writing to files
10. ❌ `>` or `>>` redirect operators - Any file writing
11. ❌ `<< EOF` - Here-documents to files
12. ❌ **ANY command that writes to, modifies, or creates files**
### ONLY use tools for file modifications:
- ✅ `create_file` - Creating new files
- ✅ `replace_string_in_file` - Editing existing files
- ✅ `multi_replace_string_in_file` - Multiple edits efficiently
- ✅ User sees exact changes in UI
- ✅ Can approve or reject each edit
- ✅ Clear diff of what changes
- ✅ Trackable and reviewable
**Terminal commands ONLY for:**
- Read-only operations (`grep`, `find`, `cat`, `less`, `head`, `tail`)
- Helm commands (`helm template`, `helm show`)
- Git read operations (`git diff`, `git status`, `git log`)
- Analysis commands that don't modify files
---
## Why This Rule Exists
- User must see and approve all file changes
- Terminal file modifications bypass VS Code diff UI
- No audit trail for terminal-based edits
- High risk of accidental overwrites
- Production infrastructure requires careful change control
---
## 📁 Output File Management
**CRITICAL: Save all reports, test results, and temporary outputs to project's tmp/ directory**
### File Output Rules:
- ✅ **ALWAYS use:** `kube-log/tmp/` for all generated files
- ❌ **NEVER use:** `/tmp/` (system temp - outside project)
- ❌ **NEVER use:** `~/` (user home - outside project)
### Examples:
**Correct:**
```bash
# Helm template outputs
helm template monitoring . > kube-log/tmp/monitoring-manifests.yaml
# Test results
cat > kube-log/tmp/test-results.md << 'EOF'
# Test Results
...
EOF
# Analysis reports
cat > kube-log/tmp/analysis-report.txt << 'EOF'
...
EOF
```
**Wrong:**
```bash
# DON'T use system /tmp
helm template monitoring . > /tmp/monitoring-manifests.yaml
# DON'T use user home
cat > ~/test-results.md << 'EOF'
```
### Why Project tmp/ Directory:
- ✅ Keeps all project artifacts together
- ✅ Easy for user to find and review
- ✅ Can be added to .gitignore
- ✅ Preserved across sessions
- ✅ Part of project context
**Note:** Create `kube-log/tmp/` directory if it doesn't exist before writing files.

View File

@ -0,0 +1,59 @@
# Git Instructions
## 🚨 GIT POLICY - ABSOLUTELY CRITICAL
**NEVER EVER make git commands without explicit user approval!**
### Forbidden Commands - DO NOT RUN:
1. ❌ **git add** - User runs this themselves (including git add ., git add -A, git add <file>)
2. ❌ **git commit** - User runs this themselves
3. ❌ **git push** - User runs this themselves
4. ❌ **git push --force** - User runs this themselves
5. ❌ **git reset** - User runs this themselves
6. ❌ **ANY command that modifies git repository or server state**
### What You CAN Do:
- ✅ `git status` - Show current state
- ✅ `git diff` - Show changes
- ✅ `git log` - Show history
- ✅ Show the command user should run
- ✅ Explain what the command will do
### Exception:
Only if user explicitly says:
- "commit this now"
- "push this now"
- "go ahead and commit"
Otherwise: **SHOW the command, WAIT for user to run it**
---
## Best Practices
- Always show `git diff` before suggesting commits
- Show `git status` to verify what will be committed
- Explain impact of each git operation
- User controls git commands, you analyze and advise
- Never assume user wants to commit
- Production platform infrastructure: Stability > Speed
---
## Commit Message Workflow
When the user asks for a commit message:
1. **Run `git diff --staged` or `git diff`** — read what actually changed
2. **Documentation check** — scan the changed files and ask: does any `docs/` or `README.md` need updating based on these changes? If yes, flag it clearly before writing the message. Do not block the commit — just surface it.
3. **Write the commit message** — one short subject line, optionally a blank line and brief body if the change needs context
4. **Show the command** — display the full `git commit -m "..."` for the user to run themselves
Format:
```
<type>: <what changed>
<optional: why, or what is not obvious from the diff>
```
Types: `feat`, `fix`, `docs`, `refactor`, `chore`

View File

@ -0,0 +1,52 @@
# Mermaid Diagram Instructions
## Color Contrast — CRITICAL
All Mermaid diagrams MUST have sufficient color contrast. AI-generated diagrams often fail this.
**Rule**: always pair background with explicit text color.
| Background type | Text color |
|----------------|------------|
| Light (`#e6ffe6`, `#ffcccc`, `#fff3cd`) | `color:#000` |
| Dark (`#009900`, `#cc0000`, `#0055cc`) | `color:#fff` |
| Default (no fill) | no color needed |
```mermaid
graph LR
A[Input]:::good --> B[Process]:::bad --> C[Output]:::good
classDef good fill:#009900,color:#fff
classDef bad fill:#cc0000,color:#fff
```
**Never**:
```
style NodeA fill:#ffcccc ❌ no text color — unreadable
style NodeB fill:#66ff66 ❌ bright color, no contrast defined
```
---
## Size
- **Abstract diagram**: max 5-7 nodes — full scope, high level
- **Section diagrams**: max 3-5 nodes — one subsystem only
- Too many arrows = diagram is wrong scope, split it
---
## When to Use What
| Situation | Use |
|-----------|-----|
| Flows, sequences, architecture | Mermaid `graph` or `sequenceDiagram` |
| File/folder structure | ASCII tree |
| Timeline | Mermaid `gantt` |
| Both structure and flow needed | Both, separately |
---
## If Diagram Would Be Too Complex
Do not simplify by removing important nodes.
Instead: split into two diagrams — one high-level, one zoomed-in detail.

1
.github/copilot-instructions.md vendored Normal file
View File

@ -0,0 +1 @@
Read `.ai/ai-root-instructions.md` now and start every response with `✅ ai-root-instructions.md READ`.

2
.gitignore vendored Normal file
View File

@ -0,0 +1,2 @@
.ai-instructions.conf
tmp/

70
README.md Normal file
View File

@ -0,0 +1,70 @@
# ai-superpower
AI instructions scatter across a large number of projects. Maintaining them inside each project individually is not practical — a single change requires manual updates everywhere.
This repo solves that: instructions live in one place and are distributed to all projects from here.
AI requires strong guidance to support human workflows rather than override them. Treat its outputs as zero-trust: the human reviews everything and must be willing to put their name on what the AI produced.
Good instructions are what make that possible — they allow AI to produce output that is high in quality and volume, in a way that feels natural and pleasant to work with, while keeping the human in control of the workflow.
## Principles
- **Generic vs. project-specific**`.ai/` instructions know nothing about individual projects. Project knowledge lives in each project's own `docs/ai-context.md`.
- **Sync writes only to `.ai/`** — never touches project code or `docs/`.
- **Modular loading** — AI loads only the relevant instruction files per task, not everything at once.
- **One change, all projects** — edit here, run sync, done.
- **Version controlled** — instructions are managed in git. Changes are tracked, history is preserved, and rolling back is straightforward.
## Usage
Clone this repo directly into your dev root — the folder where all your projects live. The dev root can be anything (`~/koodi`, `~/projects`, `C:\dev`), but `ai-superpower` must be an immediate child of it. The script uses its own location to determine where to look for projects.
```
dev_root/ ← can be anywhere
├── ai-superpower/ ← must be here, at this level
├── project-a/
├── project-b/
└── some-folder/
└── project-c/ ← nested projects are found automatically
```
```bash
cd ~/koodi
git clone https://gitea.nikos-dev.keskikuja.site/niko/ai-superpower.git
```
From there, run the appropriate script depending on how you use your editor:
- **One project per editor window** — run `apply.sh`. It copies `.ai/` into each project and sets up context files.
- **Dev root as single workspace** — run `apply.sh`. Skips `.ai/` distribution (not needed), only sets up per-project context files.
See [apply.md](apply.md) for the full mechanism and [docs/architecture.md](docs/architecture.md) for the design.
The AI must be instructed to always read `.ai/ai-root-instructions.md` at the start of every session. In your AI assistant's system prompt or custom instructions, add:
> Always read `.ai/ai-root-instructions.md` at the start of every conversation and confirm with `✅ ai-root-instructions.md READ`.
Verify that every AI response begins with this confirmation. If it does not, the instructions have not been loaded.
## Repository structure
`ai-root-instructions.md` is the entry point — the first file the AI reads in any project. It routes to the relevant instruction files based on the task at hand.
```
ai-superpower/
└── .ai/ ← synced to all projects
├── ai-root-instructions.md ← entry point, read first
└── instructions/
├── behavior/ ← how AI approaches its work
├── skills/ ← task-specific guides (git, docs, diagrams)
└── constraints/ ← what AI must not do
project-x/
├── .ai/ ← written by sync
└── docs/
└── ai-context.md ← project-specific, never synced
```
Clear architecture documentation — written following the instructions in this project — matters for both human and AI work. There must be a plan before building. The AI will consistently push for this, because without context it cannot work well. The vision always comes from the human; the AI helps carry it out under human supervision.

62
apply.md Normal file
View File

@ -0,0 +1,62 @@
# apply.sh — how it works
How `apply.sh` sets up and maintains AI instructions across all projects. See [README.md](README.md) for context and [docs/architecture.md](docs/architecture.md) for design decisions.
## Design constraints
`.ai/` is a symlink in every project, pointing back to `ai-superpower/.ai/`. No files are copied. There are no manually maintained project lists. The script discovers projects automatically by scanning for `.git` directories and persists state in `.ai-instructions.conf` so repeat runs are fast and non-interactive.
## How apply.sh works
Run `apply.sh` from anywhere:
```bash
./ai-superpower/apply.sh
```
The script resolves dev root as its own parent directory. On first run it detects all projects and runs per-project setup. On subsequent runs it reads `.ai-instructions.conf` to skip projects already set up, making repeat runs fast.
## Project detection
The script recursively scans dev root for directories containing `.git`. `ai-superpower` itself is excluded. All found projects are presented as an interactive checklist — the developer selects which ones to apply to.
## Per-project actions
For each selected project, the script runs the project setup scripts:
1. Creates `.ai/` symlink → `ai-superpower/.ai/` if missing or broken
2. Adds `.ai` to the project's `.gitignore` if not already present
3. Checks for `docs/ai-context.md` — if missing, creates it from a template
4. Checks for `docs/architecture.md` — if missing, warns and offers to create one
## Keeping instructions up to date
When instructions in this repo change, just pull:
```bash
cd ~/koodi/ai-superpower
git pull
```
Because `.ai/` is a symlink, all projects see the updated instructions immediately — no re-run of `apply.sh` needed. Run `apply.sh` only when adding new projects or repairing broken symlinks.
`apply.sh` is idempotent — safe to run as many times as needed. Existing `docs/` files are never overwritten.
## Adding a new project
The project must be a git repository — `git init` must have been run first. Without a `.git` folder, `apply.sh` will not detect it.
Once initialised, run `apply.sh` — it detects the new project and includes it in the selection. No other configuration needed.
## Project context (ai-context.md)
`docs/ai-context.md` is the AI's window into the project. The script creates a blank template if the file is missing, but the content must reflect the actual project.
Use the AI to create or update it. Open the project in your editor and prompt:
> Read the codebase and create `docs/ai-context.md`. Cover: what this project does, the technology stack, architecture overview, key decisions, and anything the AI needs to know to work here effectively.
For updates after significant changes:
> Review `docs/ai-context.md` against the current codebase. What is outdated or missing?

43
docs/ai-context.md Normal file
View File

@ -0,0 +1,43 @@
# ai-superpower — project context
## What this project does
Centralised AI instruction management for a developer who works across many projects. Instead of maintaining `.ai/` instructions per project, they live here and are distributed via symlinks. Each project gets a symlink to the generic instructions; project-specific knowledge stays in that project's own `docs/ai-context.md`.
`ai-superpower` must live directly in the dev root — the script uses its own location to determine where to look for projects.
Expected structure:
```
dev_root/
├── ai-superpower/ ← this repo, must be here
├── project-a/
├── project-b/
└── some-folder/
├── project-c/
└── project-d/
```
The script scans dev root recursively for directories containing `.git`. `ai-superpower` itself is always excluded.
## Tech stack
- Bash — `sync.sh` is plain bash, no dependencies
- Markdown — all instruction files use `.instructions.md` format
- Git — version control for instructions; change history is first-class
## Key decisions
- **Plain file copy over git submodules** — keeps project repos simple, no cross-repo plumbing
- **No projects.txt** — sync discovers git projects automatically by scanning for `.git` dirs in the dev root
- **`.ai/` in gitignore in target projects** — instructions are not owned by the target project, they are distributed to it. This repo is the exception: `.ai/` is the product and is committed here.
- **`docs/ai-context.md` is never synced** — project-specific context is the project team's responsibility
- **AI writes only with explicit instruction** — zero-trust output model; human reviews and owns everything
## How AI should work here
- Always read `.ai/ai-root-instructions.md` first and confirm with `✅ ai-root-instructions.md READ`
- Follow the documentation workflow in `skills/documentation.instructions.md` — abstract first, then section by section, stop and wait between sections
- Before any git operation, read `skills/git.instructions.md`
- When asked to create or update `docs/ai-context.md` in any project: read the codebase, infer what the project does, and draft — do not ask the user to fill in a template themselves
- Architecture changes require `docs/architecture.md` to be updated in the same commit

94
docs/architecture.md Normal file
View File

@ -0,0 +1,94 @@
# Architecture
AI instructions live in one place. Every project symlinks to them — so a `git pull` here instantly updates all projects. This document describes how the parts fit together.
## Overview
```
dev_root/
├── ai-superpower/ ← this repo
│ ├── apply.sh ← single entry point
│ ├── .ai-instructions.conf ← managed by apply.sh, gitignored
│ ├── scripts/
│ │ ├── setup-project.sh ← creates ai-context.md + architecture.md
│ │ └── symlink-ai.sh ← creates .ai/ symlink in target project
│ ├── .ai/ ← instruction source, symlinked from all projects
│ └── docs/ ← this project's own documentation
├── project-a/
│ ├── .ai/ ← symlink → ai-superpower/.ai/
│ └── docs/
│ └── ai-context.md ← project-specific, never synced
└── some-folder/
└── project-b/ ← nested projects discovered automatically
└── .ai/
```
One entry point: `apply.sh`. It handles first-time setup and repair. Each project gets a symlink `project/.ai``ai-superpower/.ai/`. A `git pull` here updates all projects instantly — no re-run needed for content changes.
## Instruction loading
Instructions are not loaded automatically. The AI must be explicitly told to read `.ai/ai-root-instructions.md` — this is done via the editor's system prompt or custom instructions setting. Without that configuration, the `.ai/` folder has no effect.
**Acknowledgment** — every AI response must begin with `✅ ai-root-instructions.md READ`. This is the only mechanism to verify that the AI has actually read the file. If the acknowledgment is missing, the AI has not loaded the instructions for that response.
**Routing** — `ai-root-instructions.md` does not load all instruction files on every request. It routes to relevant files based on the task:
| Category | When loaded | Contents |
|---|---|---|
| `behavior/` | Always | How the AI approaches work: analysis before action, minimal changes, project context, required document structure |
| `skills/` | When relevant | Task-specific rules: git policy, file editing, documentation workflow, diagrams |
| `constraints/` | When relevant | Hard limits: what the AI cannot do, user responsibilities, debugging boundaries |
This modular structure keeps context small and focused. Loading all instruction files on every request would dilute attention and waste context window on irrelevant rules.
## How apply.sh works
`apply.sh` is a setup and repair tool, not a distribution tool. Content updates happen via `git pull` — the symlinks ensure all projects see the change immediately.
**Project discovery** — recursively scans dev root for `.git` directories. `ai-superpower` itself is always excluded. All found projects, including nested ones, receive a symlink.
**`scripts/symlink-ai.sh`** — creates `project/.ai``ai-superpower/.ai/` as an absolute symlink. If `.ai/` already exists and is a valid symlink, it is left untouched. If it is broken or missing, it is (re)created.
**`scripts/setup-project.sh`** — handles per-project context setup:
1. Creates `docs/ai-context.md` from template if missing
2. Checks for `docs/architecture.md` — warns and offers to create if missing
**`.ai-instructions.conf`** — written and maintained by `apply.sh`. Lives inside this repo, gitignored. Stores which projects have been set up. Treated as a runtime log — the developer does not edit it.
## Per-project structure
After setup, each project has:
```
project-x/
├── .ai/ ← symlink → ai-superpower/.ai/
│ ├── ai-root-instructions.md
│ └── instructions/
│ ├── behavior/
│ ├── skills/
│ └── constraints/
└── docs/
├── ai-context.md ← created by setup-project.sh if missing
└── architecture.md ← created if developer confirms
```
`.ai/` is a symlink — gitignored in target projects, pointing back to the single source in `ai-superpower`. `docs/` is committed and owned by the project team.
## Design decisions
**Symlinks over file copies** — one source of truth, no distribution step for content changes, no version drift across projects. `git pull` in `ai-superpower` instantly updates all projects.
**No projects.txt** — a manually maintained list goes stale. Discovering projects by `.git` presence is always accurate and requires no upkeep.
**One entry point** — `apply.sh` handles setup and repair for all projects. No configuration questions, no mode selection — it always creates symlinks.
**`.ai-instructions.conf` as a runtime artifact** — the script maintains this file like a log. Lives inside this repo but is gitignored — it is personal to the developer and not shared with others who clone this repo.
**Bash over Python or Node** — no runtime to install, works on macOS, Linux, and Windows via WSL. The scripts do file operations and terminal interaction — bash is the right tool.
**`docs/ai-context.md` is never synced** — project-specific knowledge is owned by the project team, not this repo. Syncing it would overwrite work the team has done.
**`architecture.md` as a required document** — the setup script warns and offers to create it if missing. A project without an architecture document is a project where the AI cannot understand structure or decisions, and where humans will struggle too.