Introduction

GlassFlow AI Runtime

GlassFlow AI Runtime is an open-source platform for building real-time AI-powered telemetry pipelines. It receives OTLP logs and traces, filters and transforms them, routes batches to AI agents for classification and enrichment, and delivers results to downstream sinks like Slack and webhooks.

What it does

  • Receives OTLP telemetry data (logs, traces, metrics) via a high-throughput data plane
  • Filters and transforms records using expression-based rules before they reach the agent
  • Batches and forwards filtered data to external AI agents for processing
  • Routes enriched output from agents to configured sinks (Slack, webhooks)
  • Manages everything through a control plane API and web UI
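For orientation, here is a minimal sketch of what ingesting a log record into the data plane could look like over OTLP/HTTP with JSON encoding. The payload shape follows the standard OTLP logs schema; the endpoint URL and port are assumptions about your deployment, not documented values.

```python
import json
import time
import urllib.request

# Hypothetical data-plane endpoint; the actual host and port
# depend on how you deploy GlassFlow.
OTLP_LOGS_URL = "http://localhost:4318/v1/logs"

def build_otlp_logs_payload(body: str, severity: str = "INFO") -> dict:
    """Build a minimal OTLP/HTTP JSON payload with one log record."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": "checkout"}}
            ]},
            "scopeLogs": [{
                "logRecords": [{
                    "timeUnixNano": str(time.time_ns()),
                    "severityText": severity,
                    "body": {"stringValue": body},
                }]
            }],
        }]
    }

def send_logs(payload: dict) -> None:
    """POST the payload to the (assumed) OTLP logs endpoint."""
    req = urllib.request.Request(
        OTLP_LOGS_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

payload = build_otlp_logs_payload("payment declined", severity="ERROR")
```

In practice you would point an OpenTelemetry Collector or SDK exporter at the same endpoint rather than hand-building payloads.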

Key features

  • Separate control and data planes — management API stays isolated from the high-throughput data path
  • Pluggable AI agents — agents are standalone services (Python, Go, any language) that receive batched data and send results back via the GlassFlow SDK
  • Per-project pipelines — each project has its own filter rules, transforms, agent endpoint, and sinks
  • NATS JetStream backbone — durable, exactly-once message delivery between pipeline stages
  • Helm chart for Kubernetes — production-ready deployment with bundled NATS and PostgreSQL
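Since agents are standalone services, the agent side can be as simple as an HTTP handler that accepts a batch and returns enriched records. The sketch below is illustrative only: the batch shape, the `category` field, and the endpoint are assumptions, not the actual GlassFlow SDK contract, which is documented on the Agents page.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify(record: dict) -> dict:
    """Toy classifier standing in for a real AI model call.

    Tags a record as 'incident' if its body mentions an error;
    the 'category' field name is an illustrative assumption.
    """
    body = record.get("body", "").lower()
    record["category"] = "incident" if "error" in body else "routine"
    return record

class AgentHandler(BaseHTTPRequestHandler):
    """Hypothetical agent endpoint: accepts a JSON batch of records,
    responds with the same records enriched by classify()."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        batch = json.loads(self.rfile.read(length))
        enriched = [classify(r) for r in batch.get("records", [])]
        out = json.dumps({"records": enriched}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

# To run the agent locally (blocks forever):
# HTTPServer(("0.0.0.0", 8080), AgentHandler).serve_forever()
```

A real agent would replace `classify()` with a model or LLM call and report results back through the GlassFlow SDK.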

Getting started

  1. Getting Started — run GlassFlow locally with Docker Compose
  2. Architecture — understand the system design and data flow
  3. Installation — deploy to Kubernetes with Helm
  4. Configuration — set up pipelines, filters, transforms, and sinks
  5. Agents — build and connect your own AI agent
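To see how the pieces from steps 4 and 5 relate, a per-project pipeline definition might look roughly like the fragment below. Every key name here is a guess for illustration; the Configuration page documents the actual schema.

```yaml
# Illustrative only — key names are assumptions, not the documented schema.
project: checkout
filter: severity == "ERROR"        # expression-based rule applied before the agent
transform:
  drop_fields: [user_email]        # redact before batching
agent:
  endpoint: http://my-agent:8080/batch
sinks:
  - type: slack
    channel: "#incidents"
  - type: webhook
    url: https://example.com/hook
```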