What You’ll Learn

  • Why internal AI dashboards are a growing freelance niche
  • How to wire up Vercel AI SDK with Next.js for streaming responses
  • A pattern for mixing chat interfaces with structured data views
  • How structured output turns chat into something operationally useful
  • The architecture I use for dashboard projects that stay maintainable

Clients keep asking for the same thing: “We want an internal tool where our team can ask questions about our data and get useful answers.”

That sounds like ChatGPT. But what they actually want is a dashboard that understands their domain, connects to their systems, and presents results in a way their team can act on without becoming prompt engineers.

This is where Next.js and the Vercel AI SDK fit well. You get streaming, structured output, server actions, and a React frontend that can render both chat and data views in the same interface.

The Shape of the Project

Most AI dashboards I build have the same three layers:

  1. A chat interface where users describe what they want
  2. A processing layer that turns the request into structured actions
  3. Data views that render results as tables, charts, or summaries

The chat part is the entry point, but it is not the product. The product is the structured result that the team can actually use.
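One way to sketch the processing layer (step 2 above) is as a discriminated union of actions the model can request, so everything downstream works with typed data instead of free text. The action names and shapes here are illustrative, not from any particular SDK:

```typescript
// A hypothetical action vocabulary for the processing layer.
// The chat layer asks the model to pick one of these; the data
// views render whichever action comes back.
type DashboardAction =
  | { kind: 'query'; table: string; filters: Record<string, string> }
  | { kind: 'summarize'; topic: string }
  | { kind: 'chart'; metric: string; groupBy: string };

// Turn a parsed action into a label the results panel can display.
function describeAction(action: DashboardAction): string {
  switch (action.kind) {
    case 'query':
      return `Query ${action.table} with ${Object.keys(action.filters).length} filter(s)`;
    case 'summarize':
      return `Summarize ${action.topic}`;
    case 'chart':
      return `Chart ${action.metric} grouped by ${action.groupBy}`;
  }
}
```

The exhaustive `switch` means the compiler flags any new action kind you add but forget to render.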

Streaming Chat with Vercel AI SDK

The basic streaming setup in Next.js is straightforward. Here is a minimal server action:

'use server';

import { streamText, type CoreMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createStreamableValue } from 'ai/rsc';

export async function chat(messages: CoreMessage[]) {
  const stream = createStreamableValue('');

  (async () => {
    try {
      const result = await streamText({
        model: openai('gpt-4o'),
        system: 'You are a helpful assistant for an internal operations dashboard.',
        messages,
      });

      // Forward each text chunk to the client as it arrives
      for await (const text of result.textStream) {
        stream.update(text);
      }

      stream.done();
    } catch (err) {
      // Surface failures to the client instead of leaving the stream open
      stream.error(err);
    }
  })();

  return { output: stream.value };
}

On the frontend, you consume it with useStreamableValue:

'use client';

import { useStreamableValue, type StreamableValue } from 'ai/rsc';

function ChatMessage({ stream }: { stream: StreamableValue<string> }) {
  const [text] = useStreamableValue(stream);
  return <div className="message">{text}</div>;
}

This gives you real-time streaming with minimal boilerplate.

Where Structured Output Changes Everything

Streaming text is nice for conversation. But for a dashboard, you usually want the model to produce data, not prose.

This is where generateObject becomes the most important function in the project:

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const analysisSchema = z.object({
  summary: z.string(),
  metrics: z.array(z.object({
    label: z.string(),
    value: z.number(),
    trend: z.enum(['up', 'down', 'stable']),
  })),
  recommendations: z.array(z.string()),
});

export async function analyzeData(prompt: string, context: string) {
  const result = await generateObject({
    model: openai('gpt-4o'),
    schema: analysisSchema,
    system: 'Analyze the provided data and return structured insights.',
    prompt: `${prompt}\n\nData:\n${context}`,
  });

  return result.object;
}

Now instead of a wall of text, you get typed data that your React components can render directly:

import { z } from 'zod';
import { analysisSchema } from './analyze'; // adjust to wherever the schema is defined

function AnalysisView({ data }: { data: z.infer<typeof analysisSchema> }) {
  return (
    <div>
      <p className="summary">{data.summary}</p>
      <div className="metrics-grid">
        {data.metrics.map((m) => (
          <div key={m.label} className="metric-card">
            <span className="label">{m.label}</span>
            <span className="value">{m.value}</span>
            <span className={`trend trend-${m.trend}`}>{m.trend}</span>
          </div>
        ))}
      </div>
      <ul className="recommendations">
        {data.recommendations.map((r, i) => (
          <li key={i}>{r}</li>
        ))}
      </ul>
    </div>
  );
}

This is the pattern that separates a “chatbot wrapper” from a real dashboard.

Connecting to Client Data

The AI layer is only useful if it has access to the client’s actual data. I usually connect through one of these approaches:

Direct database queries

For internal tools where security is controlled:

import { db } from '@/lib/db';

export async function getOrderContext(customerId: string) {
  const orders = await db.order.findMany({
    where: { customerId },
    orderBy: { createdAt: 'desc' },
    take: 20,
  });

  return JSON.stringify(orders);
}

API integration

For clients with existing services:

export async function getTicketContext(query: string) {
  const res = await fetch(`${process.env.HELPDESK_API}/tickets?q=${encodeURIComponent(query)}`, {
    headers: { Authorization: `Bearer ${process.env.HELPDESK_KEY}` },
  });

  if (!res.ok) {
    throw new Error(`Helpdesk API returned ${res.status}`);
  }

  return res.json();
}

Either way, the data gets serialized and passed to the model as context. The model analyzes it and returns structured output that the dashboard renders.

The Layout Pattern I Keep Using

Most of these dashboards end up with a split layout:

  • Left panel: chat history and input
  • Right panel: structured results, tables, charts

The chat drives the analysis. The results panel shows the output. Users can keep chatting to refine or ask follow-up questions, and the results panel updates.

'use client';

import { useState } from 'react';
import { z } from 'zod';
import { analysisSchema } from './analyze'; // adjust to wherever the schema is defined

export default function Dashboard() {
  const [results, setResults] =
    useState<z.infer<typeof analysisSchema> | null>(null);

  return (
    <div className="dashboard-layout">
      <aside className="chat-panel">
        <ChatInterface onResult={setResults} />
      </aside>
      <main className="results-panel">
        {results ? <AnalysisView data={results} /> : <EmptyState />}
      </main>
    </div>
  );
}

This is simple, but it works well because it matches how people actually want to interact with data: ask a question, see the answer in a useful format, refine if needed.

Mistakes I Avoid

Treating everything as chat

If the user asks “show me this month’s revenue by region” and gets a paragraph of text, the dashboard has failed. Structured output should be the default for data questions.
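A crude but workable default is to route anything that sounds like a data question through the structured path and everything else through plain chat. A heuristic sketch, where the keyword list is purely illustrative and would be tuned per client:

```typescript
// Keywords that suggest the user wants data, not conversation.
// This list is a placeholder; a real dashboard would tune it
// per domain (or ask the model itself to classify the intent).
const DATA_KEYWORDS = ['revenue', 'orders', 'metric', 'count', 'average', 'by region', 'trend'];

// Decide whether a message should go through the structured path
// (generateObject) or plain streaming chat.
function wantsStructuredOutput(message: string): boolean {
  const lower = message.toLowerCase();
  return DATA_KEYWORDS.some((k) => lower.includes(k));
}
```

Even this blunt check catches the failure mode above: "show me this month's revenue by region" gets a table, not a paragraph.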

Sending too much context

Dumping an entire database into the prompt does not work. I select relevant records, summarize when needed, and keep context focused.
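One minimal way to keep context focused is a hard character budget during serialization. A sketch, where the record shape and budget are illustrative:

```typescript
// Illustrative record shape; a real project would use the
// actual database row type.
interface OrderRecord {
  id: string;
  total: number;
  createdAt: string;
}

// Serialize the most recent records, stopping before a character
// budget is exceeded, so the prompt never balloons past what the
// model needs to answer the question.
function buildContext(records: OrderRecord[], maxChars = 4000): string {
  const lines: string[] = [];
  let used = 0;
  for (const r of records) {
    const line = JSON.stringify(r);
    // +1 accounts for the newline separator added by join below
    if (used + line.length + 1 > maxChars) break;
    lines.push(line);
    used += line.length + 1;
  }
  return lines.join('\n');
}
```

A character budget is a rough proxy for tokens, but it is cheap, deterministic, and good enough to stop the "dump the whole table" failure mode.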

Skipping error states

AI calls fail. APIs time out. The dashboard needs loading states, error messages, and graceful fallbacks. A spinner that never resolves is worse than a clear error.

Over-designing the first version

The first version should be one chat input and one results view. Add tabs, filters, and saved queries after the core flow works.

Final Thought

AI dashboards are one of the most practical things you can build for a client right now. The technology is straightforward — Next.js, Vercel AI SDK, and a schema — but the value comes from connecting it to real data and presenting results in a format the team can actually use.

If you need an AI-powered internal tool or dashboard built for your team, take a look at my portfolio: voidcraft-site.vercel.app.