Architecture
GlassFlow AI Runtime is built as a set of loosely coupled services connected by NATS JetStream. Each service has a single responsibility, and the platform separates the control plane (management) from the data plane (telemetry ingestion and output).
Services
Control Plane
The control plane is a Go service that handles:
- User authentication (JWT-based signup/login)
- Project management (CRUD, API keys, members)
- Pipeline configuration (filter expressions, transforms, agent endpoint)
- Sink configuration (webhook URLs, Slack webhooks)
- Internal APIs consumed by other services to fetch project configs
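As an illustration, the pipeline and sink configuration the control plane stores for a project might look like the following (field names are hypothetical and shown only to make the pieces concrete; they are not the actual schema):

```json
{
  "pipeline": {
    "filter": "severityNumber >= 17",
    "transforms": [
      { "field": "body", "rename": "message" }
    ],
    "agent_endpoint": "http://my-agent:8000/process"
  },
  "sinks": [
    { "type": "webhook", "url": "https://example.com/hook", "method": "POST" },
    { "type": "slack", "webhook_url": "https://hooks.slack.com/services/..." }
  ]
}
```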
It stores all state in PostgreSQL and connects to NATS for service coordination. The web UI talks exclusively to the control plane API.
Receiver (Data Plane)
The receiver is the data ingestion service. It exposes two groups of HTTP endpoints:
- OTLP ingest (`/v1/logs`, `/v1/traces`, `/v1/metrics`) — accepts telemetry from your applications. Validates the `X-API-Key` header against the control plane, unpacks OTLP log batches into flat JSON records, and publishes each record to the project’s NATS raw stream.
- Agent output (`/internal/agent-output`) — accepts enriched results from AI agents via the GlassFlow SDK. Validates the API key and publishes the payload directly to the project’s NATS output stream.
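For example, a minimal OTLP/HTTP log export to the receiver could look like the sketch below. The endpoint URL and API key are placeholders; the payload follows the standard OTLP JSON encoding:

```python
import json

# OTLP/HTTP JSON log payload, as accepted by the receiver's /v1/logs endpoint.
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "my-service"}},
        ]},
        "scopeLogs": [{
            "scope": {"name": "my-library"},
            "logRecords": [{
                "severityNumber": 17,
                "severityText": "ERROR",
                "body": {"stringValue": "Connection refused: database pool exhausted"},
            }],
        }],
    }]
}

headers = {
    "Content-Type": "application/json",
    "X-API-Key": "gf_live_xxx",  # placeholder project API key
}

body = json.dumps(payload)
# Sending is a plain HTTP POST, e.g. with requests:
# requests.post("http://localhost:4318/v1/logs", headers=headers, data=body)
```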
The receiver is the only service that external clients talk to for data. The control plane never touches telemetry payloads.
Pipeline
The pipeline service combines two logical stages in a single process:
- Filter & Transform — consumes from the project’s raw NATS stream, applies expr-lang filter expressions and field transforms, and drops non-matching records.
- Bridge — collects filtered records into batches (default: 10), marshals them into a JSON array, and POSTs the batch to the project’s configured agent endpoint.
A pipeline is started only for projects that have an agent endpoint URL configured; for projects without an agent, the raw stream is simply not consumed.
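The filter-and-batch behavior can be sketched in a few lines of Python. The real service is written in Go and evaluates expr-lang expressions; the predicate and batch size here are illustrative stand-ins:

```python
# Illustrative sketch of the pipeline's filter + batch stages.
# This predicate stands in for a filter expression like "severityNumber >= 17".

def matches(record: dict) -> bool:
    return record.get("severityNumber", 0) >= 17  # keep ERROR and above

def batch(records, size=10):
    """Collect matching records into batches; each batch would then be
    JSON-marshalled and POSTed to the project's agent endpoint."""
    current = []
    for record in records:
        if not matches(record):
            continue  # dropped, never reaches the agent
        current.append(record)
        if len(current) == size:
            yield current
            current = []
    if current:  # flush the final partial batch
        yield current

records = [
    {"severityNumber": 9, "body": "healthy heartbeat"},
    {"severityNumber": 17, "body": "connection refused"},
    {"severityNumber": 21, "body": "out of memory"},
]
batches = list(batch(records, size=2))
```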
Sink
The sink service consumes from the project’s NATS output stream (where agent results land) and dispatches each message to configured destinations:
- Webhook — HTTP POST to a URL with configurable method and headers
- Slack — Posts to a Slack incoming webhook URL
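A Slack dispatch, for instance, amounts to POSTing a small JSON payload to the incoming-webhook URL. A sketch (the real sink is part of the Go service, and the message fields read here are illustrative):

```python
import json

def slack_payload(message: dict) -> str:
    """Build the JSON body for a Slack incoming webhook from an
    agent-output message. The 'summary' field is an assumed shape."""
    text = message.get("summary") or json.dumps(message)
    return json.dumps({"text": text})

# An enriched record as it might land on the output stream:
msg = {"summary": "3 pods crash-looping in production", "severity": "ERROR"}
body = slack_payload(msg)
# Sending would be: requests.post(slack_webhook_url, data=body,
#                                 headers={"Content-Type": "application/json"})
```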
AI Agent (external)
Agents are not part of the platform — they are standalone services that you deploy separately. An agent:
- Receives a JSON array of log records via HTTP POST from the pipeline
- Processes them (classification, enrichment, summarization, etc.)
- Sends results back to the receiver via the GlassFlow Python SDK
See Agents for details on building agents.
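In essence, an agent is an HTTP handler that accepts a JSON array of flat records and sends back enriched results. A sketch of the processing step (the function name and the enrichment logic are hypothetical; see Agents for real SDK usage):

```python
# Hypothetical agent logic: receive a batch of flat log records from the
# pipeline, enrich each one, and hand the results to the GlassFlow SDK
# for delivery back to the receiver's /internal/agent-output endpoint.

def process_batch(records: list[dict]) -> list[dict]:
    enriched = []
    for record in records:
        enriched.append({
            **record,
            # A real agent might call an LLM here; this classification
            # by severityNumber is a stand-in.
            "classification": "incident" if record.get("severityNumber", 0) >= 17 else "noise",
        })
    return enriched

batch = [
    {"body": "db pool exhausted", "severityNumber": 17},
    {"body": "request served", "severityNumber": 9},
]
results = process_batch(batch)
# In a real agent, `results` would go back via the SDK, which POSTs them
# to the receiver with the project's API key.
```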
Data flow
Your Application
│
│ OTLP HTTP (logs/traces/metrics)
│ X-API-Key header
▼
┌─────────────────────┐
│ Receiver │ ← Data Plane
│ (port 4318) │
└────────┬────────────┘
│ Flat JSON records
▼
NATS Raw Stream
(gf-ai-raw-{projectID})
│
▼
┌─────────────────────┐
│ Pipeline │
│ ┌───────────────┐ │
│ │ 1. Filter │ │ ← Drop non-matching records
│ │ 2. Transform │ │ ← Reshape fields
│ │ 3. Batch │ │ ← Collect into JSON array
│ │ 4. Forward │ │ ← HTTP POST to agent
│ └───────────────┘ │
└────────┬────────────┘
│ JSON array batch
▼
┌─────────────────────┐
│ AI Agent │ ← External (your deployment)
│ (e.g. port 8000) │
└────────┬────────────┘
│ GlassFlow SDK
│ POST /internal/agent-output
▼
┌─────────────────────┐
│ Receiver │ ← Agent output ingest
└────────┬────────────┘
│
▼
NATS Output Stream
(gf-ai-output-{projectID})
│
▼
┌─────────────────────┐
│ Sink │
│ → Webhook │
│ → Slack │
└─────────────────────┘
NATS streams
Each project gets two dedicated JetStream streams:
| Stream | Subject | Producer | Consumer |
|---|---|---|---|
| `gf-ai-raw-{id}` | `gf-ai-raw.{id}.in` | Receiver | Pipeline |
| `gf-ai-output-{id}` | `gf-ai-output.{id}.out` | Receiver (agent output) | Sink |
All streams use durable consumers with explicit ACK policy, ensuring no data is lost even if a service restarts.
There is no intermediate “filtered” stream. The pipeline processes records in-memory: raw stream → filter → transform → batch → HTTP forward. This reduces NATS overhead and simplifies the architecture.
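The per-project naming scheme from the table above can be captured in a couple of helpers (a sketch; the platform's actual naming code is Go):

```python
# Per-project JetStream naming, following the table above.

def raw_stream(project_id: str) -> tuple[str, str]:
    """(stream name, subject) for raw telemetry: Receiver -> Pipeline."""
    return f"gf-ai-raw-{project_id}", f"gf-ai-raw.{project_id}.in"

def output_stream(project_id: str) -> tuple[str, str]:
    """(stream name, subject) for agent results: Receiver -> Sink."""
    return f"gf-ai-output-{project_id}", f"gf-ai-output.{project_id}.out"

# Consumers bind to these subjects with durable names and explicit ACKs,
# so a restarted service resumes from its last acknowledged message.
name, subject = raw_stream("p123")
```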
Flat log record format
The receiver unpacks OTLP log batches into individual flat JSON records. Each record looks like this:
```json
{
  "resourceAttributes": {
    "service.name": "my-service",
    "k8s.pod.name": "my-pod-abc123",
    "k8s.namespace.name": "production"
  },
  "scope": {
    "name": "my-library"
  },
  "severityNumber": 17,
  "severityText": "ERROR",
  "body": "Connection refused: database pool exhausted",
  "attributes": {
    "error.type": "ConnectionError"
  },
  "traceId": "abc123...",
  "spanId": "def456...",
  "timestamp": "2024-01-15T10:30:00Z"
}
```
This is the format that filter expressions operate on and that agents receive in batches.
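The receiver's unpacking step can be approximated as follows. This is a simplified sketch: the real implementation is Go and also handles traces, metrics, timestamps, and more fields:

```python
def flatten_logs(otlp: dict) -> list[dict]:
    """Unpack an OTLP/HTTP JSON log batch into flat records of the shape
    shown above. Simplified: only a subset of fields is carried over."""
    records = []
    for rl in otlp.get("resourceLogs", []):
        resource_attrs = {
            kv["key"]: kv["value"].get("stringValue")
            for kv in rl.get("resource", {}).get("attributes", [])
        }
        for sl in rl.get("scopeLogs", []):
            scope = {"name": sl.get("scope", {}).get("name")}
            for lr in sl.get("logRecords", []):
                records.append({
                    "resourceAttributes": resource_attrs,
                    "scope": scope,
                    "severityNumber": lr.get("severityNumber"),
                    "severityText": lr.get("severityText"),
                    "body": lr.get("body", {}).get("stringValue"),
                    "attributes": {
                        kv["key"]: kv["value"].get("stringValue")
                        for kv in lr.get("attributes", [])
                    },
                })
    return records

batch = {"resourceLogs": [{
    "resource": {"attributes": [
        {"key": "service.name", "value": {"stringValue": "my-service"}},
    ]},
    "scopeLogs": [{"scope": {"name": "my-library"}, "logRecords": [
        {"severityNumber": 17, "severityText": "ERROR",
         "body": {"stringValue": "Connection refused"}},
    ]}],
}]}
flat = flatten_logs(batch)
```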
Technology choices
| Component | Technology | Why |
|---|---|---|
| Backend | Go + Huma framework | Low overhead, fast startup, single binary per service |
| Frontend | Next.js 15 + React 19 | App Router, server components, standalone mode |
| Messaging | NATS JetStream | Durable streaming, lightweight, per-project isolation |
| Database | PostgreSQL | Reliable, widely supported, handles auth + config |
| AI Agents | Python + OpenAI Agents SDK | Rich AI ecosystem, easy to build custom agents |
| SDK | Python (requests) | Simple HTTP client, no complex dependencies |