Changes take effect after restarting Eva.
Channels
Active message channels and their connection status
Skills
Agent skills mounted from container/skills/
API Keys
Each key is its own named channel with independent identity and permissions
| Name | Key | Created | Last Used | Requests | Status | Actions |
|---|---|---|---|---|---|---|
API Documentation
Base URL: http://<host>:<HTTP_API_PORT>
Auth: X-API-Key: your-key or Authorization: Bearer your-key
Text Chat — POST /api/v1/chat
curl -X POST http://host:PORT/api/v1/chat \
-H "X-API-Key: YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"message": "Turn on the living room lights"}'
Audio Chat — POST /api/v1/chat/audio
curl -X POST http://host:PORT/api/v1/chat/audio \
-H "X-API-Key: YOUR_KEY" \
-F "audio=@recording.mp3"
Both endpoints return {"id":"...","status":"queued"}. Responses arrive via the SSE stream: GET /api/stream
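The curl calls above can be mirrored from Python. A minimal sketch of assembling the text-chat request; BASE_URL and API_KEY are placeholders (substitute your own host, port, and key), and the helper name is ours, not part of the API:

```python
import json

BASE_URL = "http://host:PORT"  # placeholder, as in the curl examples
API_KEY = "YOUR_KEY"           # placeholder

def build_chat_request(message: str) -> dict:
    """Assemble the parts of a POST /api/v1/chat call."""
    return {
        "url": f"{BASE_URL}/api/v1/chat",
        "headers": {
            "X-API-Key": API_KEY,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"message": message}),
    }

req = build_chat_request("Turn on the living room lights")
print(req["url"])  # http://host:PORT/api/v1/chat
```

The parts can be sent with any HTTP client; the returned id can then be matched against events arriving on the GET /api/stream SSE stream.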
Create API Key
API Key Created
Copy this key now — it won't be shown again in full.
Local Model
Route simple intents to a local LLM for faster, cheaper responses
How it works: Simple intents (lights, timers, time queries) are classified first.
High-confidence matches go to the local model. Everything else goes to Claude.
Requires llama-cpp-python or ollama running locally.
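The routing described above can be sketched roughly as follows. The patterns, confidence values, and threshold are illustrative assumptions, not Eva's actual classifier:

```python
import re

# Toy intent patterns; a real classifier would be more robust.
SIMPLE_INTENTS = {
    "lights": re.compile(r"\b(light|lamp)s?\b", re.I),
    "timer": re.compile(r"\btimer\b", re.I),
    "time": re.compile(r"\bwhat time\b", re.I),
}

def classify(message: str):
    """Return (intent, confidence), or (None, 0.0) if no simple intent matches."""
    for intent, pattern in SIMPLE_INTENTS.items():
        if pattern.search(message):
            return intent, 0.9  # toy fixed confidence
    return None, 0.0

def route(message: str, threshold: float = 0.8) -> str:
    """High-confidence simple intents go local; everything else to Claude."""
    intent, conf = classify(message)
    return "local" if intent and conf >= threshold else "claude"

print(route("turn on the lights"))  # local
print(route("summarize my inbox"))  # claude
```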
Configuration
Available Models
Download Model
Quick-start Instructions
Option A: llama-cpp-python
pip install 'llama-cpp-python[server]'
python -m llama_cpp.server --model /path/to/model.gguf --host 0.0.0.0 --port 8080
Option B: Ollama
ollama pull qwen2.5:0.5b # tiny fast model
ollama serve
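Once either server is running, the local model can be queried over HTTP. A sketch against Ollama's documented /api/generate endpoint, using the model pulled above and Ollama's default port 11434:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "qwen2.5:0.5b") -> dict:
    """Assemble a non-streaming generate request body."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send the prompt to the local Ollama server and return its reply."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

llama-cpp-python's server speaks a different (OpenAI-compatible) protocol, so a client for option A would target its /v1 endpoints instead.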
GitHub Backup
Auto-commit code, memory, skills and config to a GitHub branch
What gets backed up: All code, memory files, skills, and config.
.env secrets are redacted — keys, tokens, and passwords are replaced with REDACTED.
Backup only runs if there are uncommitted changes.
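The redaction step might look roughly like this; the key patterns below are an assumption about what counts as a secret, not the exact list Eva uses:

```python
import re

# Match .env lines whose key name contains a secret-looking word.
SECRET_RE = re.compile(
    r"^(\w*(KEY|TOKEN|PASSWORD|SECRET)\w*)=.*$", re.I | re.M)

def redact_env(text: str) -> str:
    """Replace the value of any secret-looking .env entry with REDACTED."""
    return SECRET_RE.sub(r"\1=REDACTED", text)

env = "API_KEY=abc123\nHOST=example.com\nDB_PASSWORD=hunter2\n"
print(redact_env(env))
# API_KEY=REDACTED
# HOST=example.com
# DB_PASSWORD=REDACTED
```

The "only runs if there are uncommitted changes" guard typically amounts to checking that git status --porcelain produces output before committing.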