# streaming-markdown-react

React components for streaming-safe Markdown and AI chat interfaces.

[npm](https://www.npmjs.com/package/streaming-markdown-react)

> Prefer Chinese docs? See [README.zh-CN.md](./README.zh-CN.md).

## Highlights
- **Streaming-safe rendering**: `useSmoothStream` queues graphemes so partially streamed Markdown never breaks code fences or inline structures.
- **Shiki-powered code blocks**: `useShikiHighlight` lazy-loads themes and languages, falling back gracefully while syntax highlighting boots.
- **Message-aware primitives**: `MessageItem`, `MessageBlockRenderer`, and `MessageBlockStore` model complex assistant replies (thinking, tool calls, media, and more).
- **Highly customizable**: extend `react-markdown` via the `components` prop, swap the default `CodeBlock`, or plug in your own themes and callbacks.
- **Tiny API surface**: stream text, toggle `status`, and receive `onComplete` when everything has flushed. No heavy state machines required.

## Installation

```bash
pnpm add streaming-markdown-react
# or
npm install streaming-markdown-react
# or
yarn add streaming-markdown-react
```

## Basic Usage

```tsx
import { StreamingMarkdown, type StreamingStatus } from 'streaming-markdown-react';

export function MessageBubble({
  text,
  status,
}: {
  text: string;
  status: StreamingStatus;
}) {
  return (
    <StreamingMarkdown
      status={status}
      className="prose prose-neutral max-w-none"
      onComplete={() => console.log('stream finished')}
    >
      {text}
    </StreamingMarkdown>
  );
}
```

Pass the latest chunked Markdown through `children`, keep `status="streaming"` until the LLM closes the stream, and use `onComplete` for follow-up UI work once every queued token has been painted.

## Streaming Example

```tsx
import { useEffect, useState } from 'react';
import { StreamingMarkdown, type StreamingStatus } from 'streaming-markdown-react';

export function LiveAssistantMessage({ stream }: { stream: ReadableStream<string> }) {
  const [text, setText] = useState('');
  const [status, setStatus] = useState<StreamingStatus>('streaming');

  useEffect(() => {
    const reader = stream.getReader();
    let cancelled = false;

    async function read() {
      try {
        while (!cancelled) {
          const { value, done } = await reader.read();
          if (done) {
            setStatus('success');
            break;
          }
          setText((prev) => prev + (value ?? ''));
        }
      } catch {
        if (!cancelled) setStatus('error');
      }
    }

    read();
    return () => {
      cancelled = true;
      // Cancel rather than releaseLock(): releasing the lock while a
      // read is still pending rejects that read instead of ending cleanly.
      reader.cancel().catch(() => {});
    };
  }, [stream]);

  return (
    <StreamingMarkdown
      status={status}
      minDelay={12}
      onComplete={() => console.log('assistant block done')}
    >
      {text}
    </StreamingMarkdown>
  );
}
```

`minDelay` throttles animation frames for high-throughput streams. `status` is driven by the caller: set it to `'success'` as soon as upstream tokenization ends, and the internal queue drains before `onComplete` fires.

## Components & Hooks

| Export | Description |
| --- | --- |
| `StreamingMarkdown` | Streaming-safe Markdown renderer with GFM support and overridable components. |
| `StreamingStatus` | `'idle' \| 'streaming' \| 'success' \| 'error'` helper union for UI state. |
| `MessageItem` | Splits assistant responses into typed blocks backed by `MessageBlockStore`. |
| `MessageBlockRenderer` | Default renderer for text, thinking, tool, media, and error blocks. |
| `MessageBlockStore` | Lightweight in-memory store for diffing and hydrating message blocks. |
| `useSmoothStream` | Grapheme-level streaming queue powered by `Intl.Segmenter`. |
| `useShikiHighlight` | Lazy-loaded Shiki highlighter with light/dark themes. |
| `CodeBlock` | Default code block component; wrap or replace it for custom UI. |

## StreamingMarkdown Props

| Prop | Type | Description |
| --- | --- | --- |
| `children` | `ReactNode` | Markdown (partial or complete) to render. |
| `className` | `string` | Utility classes for the container. |
| `components` | `Partial<Components>` | Extend or override `react-markdown` element renderers. |
| `status` | `StreamingStatus` | Controls the internal streaming lifecycle. |
| `onComplete` | `() => void` | Fires once the queue drains after the stream finishes. |
| `minDelay` | `number` | Minimum milliseconds between animation frames (default `10`). |
| `blockId` | `string` | Reserved for coordinating multi-block updates. |

## Customization

- **Override Markdown elements**: provide a `components` map to inject callouts, alerts, or custom typography.

  ```tsx
  <StreamingMarkdown
    components={{
      blockquote: (props) => (
        <div className="rounded-lg border-l-4 border-amber-500 bg-amber-50 p-3 text-sm">
          {props.children}
        </div>
      ),
    }}
  >
    {text}
  </StreamingMarkdown>
  ```

- **Theme-aware code blocks**: use the exported `CodeBlock` or compose `useShikiHighlight` with your own chrome.

  ```tsx
  import { CodeBlock, useShikiHighlight } from 'streaming-markdown-react';
  ```

- **Message-first UIs**: `MessageItem` and `MessageBlockRenderer` coordinate per-block rendering so chat transcripts stay in sync during streaming diffs.

## Type-safe Message Blocks

All message-related types (`Message`, `MessageBlock`, `MessageMetadata`, etc.) are exported so your AI pipeline and UI can share a single contract.

```ts
// MessageBlockType is used as a value below, so it must be a regular
// import; a type-only import would fail to compile.
import { MessageBlockType, type Message } from 'streaming-markdown-react';

const assistant: Message = {
  id: 'msg-1',
  role: 'assistant',
  blocks: [
    {
      id: 'block-1',
      type: MessageBlockType.MAIN_TEXT,
      content: 'Here is your SQL query...',
    },
  ],
};
```
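
To sketch how typed blocks might be accumulated while deltas stream in, here is a small reducer over local stand-in types that mirror the example above (the types and `appendTextDelta` are illustrative assumptions, not the package's actual `MessageBlockStore` API):

```typescript
// Local stand-in types mirroring the shapes in the example above.
type BlockType = 'MAIN_TEXT' | 'THINKING' | 'TOOL' | 'ERROR';

interface Block {
  id: string;
  type: BlockType;
  content: string;
}

interface ChatMessage {
  id: string;
  role: 'user' | 'assistant';
  blocks: Block[];
}

// Append a streamed text delta to the trailing MAIN_TEXT block,
// creating a fresh block when the message doesn't end with text yet.
function appendTextDelta(msg: ChatMessage, delta: string): ChatMessage {
  const last = msg.blocks[msg.blocks.length - 1];
  if (last && last.type === 'MAIN_TEXT') {
    const blocks = msg.blocks.slice(0, -1);
    blocks.push({ ...last, content: last.content + delta });
    return { ...msg, blocks };
  }
  return {
    ...msg,
    blocks: [
      ...msg.blocks,
      { id: `b-${msg.blocks.length + 1}`, type: 'MAIN_TEXT', content: delta },
    ],
  };
}

let msg: ChatMessage = { id: 'msg-1', role: 'assistant', blocks: [] };
for (const delta of ['Here is ', 'your SQL ', 'query...']) {
  msg = appendTextDelta(msg, delta);
}
console.log(msg.blocks.length, msg.blocks[0].content);
// 1 "Here is your SQL query..."
```

Returning a new message object on every delta keeps the reducer friendly to React's reference-equality checks, which is the same reason immutable updates suit streaming chat stores.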

## Development Playground

This repository also serves as a **development playground** for `streaming-markdown-react`. The root project is a full-featured Next.js AI chatbot that demonstrates the package in action.

### Playground Features

- [Next.js](https://nextjs.org) App Router with React Server Components
- [AI SDK](https://ai-sdk.dev/docs/introduction) integration with xAI (Grok) models via the Vercel AI Gateway
- [shadcn/ui](https://ui.shadcn.com) components styled with Tailwind CSS
- [Neon Serverless Postgres](https://vercel.com/marketplace/neon) for chat history
- [Vercel Blob](https://vercel.com/storage/blob) for file storage
- [Auth.js](https://authjs.dev) authentication

### Running the Playground Locally

1. Install dependencies:

```bash
pnpm install
```

2. Set up environment variables (see `.env.example`; never commit `.env` files, as they contain provider secrets):

```bash
# For Vercel users (run `vercel link` once, then):
vercel env pull

# Or manually create .env.local with:
# - POSTGRES_URL
# - AUTH_SECRET
# - AI_GATEWAY_API_KEY (for non-Vercel deployments)
```

3. Run database migrations:

```bash
pnpm db:migrate
```

4. Start the development server:

```bash
pnpm dev
```

The playground will run on [localhost:4000](http://localhost:4000).

### Development Commands

```bash
pnpm dev         # Start the dev server
pnpm build       # Build for production
pnpm lint        # Check code with Ultracite
pnpm format      # Auto-fix formatting

# Database (Drizzle ORM)
pnpm db:migrate  # Apply migrations
pnpm db:generate # Generate new migrations
pnpm db:studio   # Open Drizzle Studio GUI

# Testing
pnpm test        # Run Playwright e2e tests
```

For detailed development instructions, see [packages/streaming-markdown/README.md](packages/streaming-markdown/README.md).

## License

MIT © 2024-present. Feel free to use it in production or open-source projects.