# Getting Started
The fastest way to try GlassFlow AI Runtime is with Docker Compose. This starts all platform services locally — you only need Docker installed.
## Prerequisites
- Docker and Docker Compose v2+
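To confirm both are installed before continuing, you can check the versions from a terminal:

```bash
# Both commands should print a version string;
# "docker compose" (with a space) is the v2 plugin.
docker --version || echo "Docker not installed"
docker compose version || echo "Compose v2 plugin not installed"
```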
## Start the stack

Clone the repository and start all services:

```bash
git clone https://github.com/glassflow/glassflow-ai-runtime.git
cd glassflow-ai-runtime
docker compose up -d
```

This starts:
| Service | URL | Description |
|---|---|---|
| UI | http://localhost:3000 | Web interface |
| Control Plane API | http://localhost:8080 | Management API |
| Receiver (Data Plane) | http://localhost:4318 | OTLP ingest + agent output |
| Pipeline | (internal) | Filter/transform + agent forwarding |
| Sink | (internal) | Webhook/Slack dispatch |
| NATS | localhost:4222 | Message streaming |
| PostgreSQL | localhost:5432 | Config and auth storage |
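Once the containers are up, you can sanity-check the stack. This sketch assumes the default ports from the table above and only verifies that the HTTP services answer:

```bash
# Show container status for the stack
docker compose ps

# Probe the HTTP-facing services from the table above
for url in http://localhost:3000 http://localhost:8080; do
  curl -sf -o /dev/null "$url" && echo "OK   $url" || echo "DOWN $url"
done
```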
## Create an account
Open http://localhost:3000 and click Sign Up to create your first user account.
## Create a project
- After logging in, click Create Project and give it a name.
- Navigate to API Keys and create a key — you’ll need this to send data and connect agents.
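For the shell commands in this guide, it is convenient to keep the key in an environment variable. The variable name below is just a convention used here, not something the platform requires:

```bash
# Paste the key you created in the UI (placeholder value shown)
export GLASSFLOW_API_KEY="YOUR_API_KEY"
```

You can then pass it to curl as `-H "X-API-Key: $GLASSFLOW_API_KEY"`.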
## Send test data
Send a test OTLP log to the receiver:
```bash
curl -X POST http://localhost:4318/v1/logs \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{
    "resourceLogs": [{
      "resource": {
        "attributes": [
          {"key": "service.name", "value": {"stringValue": "my-service"}}
        ]
      },
      "scopeLogs": [{
        "logRecords": [{
          "timeUnixNano": "1700000000000000000",
          "severityNumber": 17,
          "severityText": "ERROR",
          "body": {"stringValue": "Connection refused: database pool exhausted"}
        }]
      }]
    }]
  }'
```

## Configure the pipeline
In the UI, go to your project’s Pipeline page:
- Optionally enable a filter (e.g. `severityNumber >= 17` to only process errors)
- Optionally add transforms to extract or reshape fields
- Set the Agent Endpoint URL — this is where the pipeline sends batched data (e.g. `http://agent:8000/process`)
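You can preview what a `severityNumber >= 17` filter keeps by applying the same predicate locally. This is a sketch using `python3` against a hand-written sample, not the pipeline's actual filter engine; in OTLP, severityNumber 17–20 maps to ERROR and 21–24 to FATAL, so `>= 17` keeps errors and fatals:

```bash
# A small sample with one INFO and one ERROR record
cat > /tmp/sample.json <<'EOF'
{"resourceLogs":[{"scopeLogs":[{"logRecords":[
  {"severityNumber":9,"severityText":"INFO","body":{"stringValue":"healthy"}},
  {"severityNumber":17,"severityText":"ERROR","body":{"stringValue":"Connection refused"}}
]}]}]}
EOF

# Keep only records matching the filter from the UI
python3 - <<'EOF'
import json
data = json.load(open("/tmp/sample.json"))
for rl in data["resourceLogs"]:
    for sl in rl["scopeLogs"]:
        for rec in sl["logRecords"]:
            if rec["severityNumber"] >= 17:  # the filter expression
                print(rec["severityText"], rec["body"]["stringValue"])
EOF
# prints: ERROR Connection refused
```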
## Connect an agent

See Agents for how to build and connect an AI agent. The bundled example agent at `agents/otlp-error-summary/` classifies and enriches error logs using OpenAI.
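The pipeline POSTs batched data to the Agent Endpoint URL you configured. As a quick smoke test you can send a request to that endpoint yourself; the request body here is purely illustrative and is not the pipeline's exact batch format (see Agents for the real contract), and it assumes an agent listening on `localhost:8000`:

```bash
# Hypothetical smoke test against a locally running agent
curl -sf -X POST http://localhost:8000/process \
  -H "Content-Type: application/json" \
  -d '[{"severityText": "ERROR", "body": "Connection refused"}]' \
  || echo "agent not reachable"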
## Next steps
- Architecture — understand the data flow
- Installation — deploy to Kubernetes
- Configuration — pipeline, filter, transform, and sink setup