AI Feature Observability
You built AI features with the Vercel AI SDK and stored everything in Supabase. Now Dreambase connects your usage tables, Vercel API data, and PostHog events into one unified observability layer so you can actually improve what you shipped.
"We shipped the AI feature two weeks ago and we have no idea if it's actually working."

The Value
See exactly how your AI features are performing, from token costs to user behavior, without building a custom observability stack.
Engineers building AI products on Supabase generate a wealth of signals across multiple platforms. Dreambase connects Vercel AI SDK data, PostHog feature events, and your Supabase usage tables into one place so you can monitor, debug, and improve your AI features with real data.
The Problem
AI features are live, but nobody actually knows whether they're working well:
- Token usage, model costs, and latency are logged but never surfaced in a useful way
- Feature adoption data lives in PostHog and product usage data lives in Supabase, with no connection between them
- Vercel AI SDK completions and errors are scattered across logs with no aggregate view
- Engineers have to write custom queries every time someone asks how the AI feature is performing
- No clear signal on which AI features drive retention and which ones users ignore
The Dreambase Solution

Vercel API Integration
Connect Vercel via the Dreambase data catalog marketplace. Pull AI SDK usage, deployment metrics, function performance, and completion data directly into your analytics workspace alongside your Supabase records.
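To make that concrete, here is a minimal sketch of the kind of data the Vercel REST API exposes, using the public `GET /v6/deployments` endpoint; `VERCEL_TOKEN` is a placeholder for your own access token, and Dreambase handles these calls for you once the integration is connected:

```ts
// Minimal sketch: list recent Vercel deployments via the public REST API.
// Assumes VERCEL_TOKEN holds a valid Vercel access token.
async function listRecentDeployments(): Promise<void> {
  const res = await fetch("https://api.vercel.com/v6/deployments?limit=20", {
    headers: { Authorization: `Bearer ${process.env.VERCEL_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Vercel API error: ${res.status}`);
  const { deployments } = await res.json();
  for (const d of deployments) {
    // uid, state, and created (ms epoch) are documented response fields.
    console.log(d.uid, d.state, new Date(d.created).toISOString());
  }
}

listRecentDeployments();
```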
PostHog MCP Integration
Connect PostHog via MCP to bring feature flag data, user events, and product analytics into Dreambase. Combine PostHog behavioral events with your Supabase AI usage tables for a complete picture of how users interact with your AI features.
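On the behavioral side, the events Dreambase pulls in are whatever you already capture with PostHog. A typical capture call might look like the sketch below; the event name and properties are illustrative, not a required schema:

```ts
import posthog from "posthog-js";

// Initialize once on the client. The project key and host are placeholders.
posthog.init("phc_your_project_key", { api_host: "https://us.i.posthog.com" });

// Hypothetical event for an AI feature interaction; name events however
// you like, since Dreambase reads whatever PostHog has recorded.
posthog.capture("ai_feature_used", {
  feature: "thread_summary",
  model: "gpt-4o-mini",
});
```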
Supabase AI Usage Tables
Your Vercel AI SDK completions, token counts, model selections, latency, and error rates are already being logged to Supabase. Dreambase reads those tables directly and surfaces them as clean dashboards without any additional instrumentation.
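If you have not wired that logging up yet, it only takes a few lines. Here is a minimal sketch, assuming AI SDK v4 usage field names (v5 renames them to `inputTokens`/`outputTokens`) and a hypothetical `ai_usage` table; adjust both to your own schema:

```ts
import { createClient } from "@supabase/supabase-js";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
);

// Run a completion and record its usage. The ai_usage table and its
// columns are assumptions; match them to your own schema.
export async function completeAndLog(prompt: string, userId: string) {
  const startedAt = Date.now();
  const { text, usage } = await generateText({
    model: openai("gpt-4o-mini"),
    prompt,
  });

  await supabase.from("ai_usage").insert({
    user_id: userId,
    model: "gpt-4o-mini",
    prompt_tokens: usage.promptTokens,
    completion_tokens: usage.completionTokens,
    latency_ms: Date.now() - startedAt,
  });

  return text;
}
```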
Analyst Agent for Deep Dives
When something looks off, use the Analyst Agent to investigate. It understands your schema, your AI usage patterns, and Supabase best practices, and can help you root-cause issues in minutes instead of hours.
Before & After
| Before | After |
|---|---|
| Token costs logged but never summarized | Live cost per user, per feature, and per model in one dashboard |
| PostHog events and Supabase usage data siloed | Behavioral events and AI usage combined in one view |
| Engineers write custom queries for every performance question | Self-serve AI observability for the whole team |
| No visibility into which AI features users actually adopt | Feature adoption tracked from PostHog events alongside usage depth from Supabase |
| Debugging slow completions requires digging through logs | Latency and error trends surfaced automatically |
| Vercel deployment data disconnected from product impact | Vercel and Supabase data unified in one workspace |
Never guess how effective your AI features are again
Shipping an AI feature is the easy part. Understanding whether it is actually working is where most teams get stuck.
You built the feature with the Vercel AI SDK. Completions are being logged to Supabase. Token counts, model selections, latency, and errors are all there in your usage tables. PostHog is tracking which users are clicking into the AI feature and how often they come back. Vercel has deployment and function performance data. You have more signal about your AI feature than you realize. The problem is that it is scattered across three platforms and nobody has connected the dots.
Dreambase connects them in minutes.
Start with your Supabase AI usage tables. If you are logging Vercel AI SDK completions to Supabase, Dreambase reads those tables directly and turns them into dashboards without any additional instrumentation. Token usage by user. Cost per completion by model. Average latency trends over time. Error rates by feature. Dreambase answers all of these from data you are already collecting.
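As a rough sketch of one of those questions, here is cost per completion by model computed client-side from the same hypothetical `ai_usage` table; the per-token prices are placeholders to replace with your provider's actual rates, and this is the kind of query Dreambase builds for you:

```ts
import { createClient } from "@supabase/supabase-js";

// Placeholder per-1K-token prices; swap in your provider's real rates.
const PRICE_PER_1K: Record<string, { input: number; output: number }> = {
  "gpt-4o-mini": { input: 0.00015, output: 0.0006 },
};

async function costPerCompletionByModel(): Promise<void> {
  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!,
  );
  // The ai_usage table and its columns are assumptions; match your schema.
  const { data, error } = await supabase
    .from("ai_usage")
    .select("model, prompt_tokens, completion_tokens");
  if (error || !data) throw error ?? new Error("no usage rows found");

  const totals = new Map<string, { cost: number; count: number }>();
  for (const row of data) {
    const price = PRICE_PER_1K[row.model];
    if (!price) continue;
    const cost =
      (row.prompt_tokens / 1000) * price.input +
      (row.completion_tokens / 1000) * price.output;
    const agg = totals.get(row.model) ?? { cost: 0, count: 0 };
    agg.cost += cost;
    agg.count += 1;
    totals.set(row.model, agg);
  }
  for (const [model, { cost, count }] of totals) {
    console.log(model, "avg cost per completion:", (cost / count).toFixed(5));
  }
}

costPerCompletionByModel();
```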
From there, bring in your external sources. The Dreambase data catalog marketplace includes a Vercel API integration so you can pull deployment metrics, function performance, and AI SDK usage data directly into your workspace. Paste the OpenAPI URL and every endpoint is configured in seconds. For PostHog, connect via MCP and your behavioral event data flows in alongside your Supabase usage records. Now you can see not just how the AI is performing technically, but how users are actually engaging with it.
The combination is what makes this powerful. You can see that average completion latency spiked on Tuesday, cross-reference it with a Vercel deployment that went out Tuesday morning, and check PostHog to see whether the engagement drop that followed was caused by the latency or something else entirely. Three platforms, one unified view, no custom query required.
Topics in the Data Dictionary lock in your AI observability definitions so the whole engineering team is working from the same metrics. Token cost per active user. AI feature retention at day 7. Completion success rate by model. Define them once and every dashboard, every report, and every quick insight your team generates uses the same logic automatically.
When something requires deeper investigation, the Analyst Agent goes further. Give it context about an anomaly you noticed and it will dig into your Supabase schema, analyze usage patterns, and help you root cause the issue with the kind of reasoning you would normally get from a senior data engineer. Except it is available at 11pm when the issue is actually happening.
For engineering teams shipping AI features on Supabase, this is the observability layer you would have had to build yourself. Dreambase gives it to you in minutes.
