# agent-chat
Val Town is a collaborative website to build and scale JavaScript apps. Deploy APIs, crons, & store data, all from the browser, in milliseconds.
A clean AI chat interface built with React, Hono, and a custom AI agent.
- ✅ Clean, minimal chat interface
- ✅ AI Chat powered by custom agent
- ✅ Robust streaming proxy with proper format conversion and error handling
- ✅ Real-time streaming responses with proper OpenAI-compatible format
- ✅ Auto-scrolling to latest messages
- ✅ Enhanced loading indicators with animated dots
- ✅ Centered layout (1/2 screen width)
- ✅ Input anchored to bottom
- ✅ Locked viewport for stable experience
- ✅ Prompt buttons for quick conversation starters
- ✅ Tailwind CSS styling
- Set the `AGENT_API_KEY` environment variable with your agent's API key (see the snippet below).
- The chat will automatically connect to your agent at: https://abrinz--3be1cc2632ad11f080f5569c3dd06744.web.val.run/
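For reference, here is a minimal sketch of how the backend might read that key. Val Town runs on Deno, so `Deno.env.get` is available; the exact handling in `/backend/index.ts` may differ.

```ts
// Read the agent credentials from the environment (Deno runtime on Val Town).
const AGENT_API_KEY = Deno.env.get("AGENT_API_KEY");
if (!AGENT_API_KEY) {
  throw new Error("Missing AGENT_API_KEY environment variable");
}
```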
Get a copy of this starter template by clicking the Remix button in the top-right.
- The entrypoint is `/backend/index.ts`. That's the backend HTTP server, which also serves all the frontend assets.
- The client-side entrypoint is `/frontend/index.html`, which in turn imports `/frontend/index.tsx`, which in turn imports the React app from `/frontend/components/App.tsx`.
- The chat feature uses:
  - `/frontend/components/Chat.tsx` - React component with the useChat hook
  - The `/api/chat` endpoint in `/backend/index.ts` - Proxies requests to your agent (see the sketch below)
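A minimal sketch of that proxy route, assuming plain Hono; `AGENT_URL`, the Bearer auth scheme, and the error shape are illustrative, not taken from the project:

```ts
// Sketch of the /api/chat proxy: forward the useChat request body to the agent
// (request format described below) and stream the agent's reply back.
import { Hono } from "npm:hono";

const app = new Hono();
// Agent endpoint from the setup section
const AGENT_URL = "https://abrinz--3be1cc2632ad11f080f5569c3dd06744.web.val.run/";

app.post("/api/chat", async (c) => {
  const { messages } = await c.req.json();

  const agentRes = await fetch(AGENT_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Auth scheme is an assumption; adjust to your agent's requirements
      "Authorization": `Bearer ${Deno.env.get("AGENT_API_KEY")}`,
    },
    body: JSON.stringify({ messages, streamResults: true }),
  });

  if (!agentRes.ok || !agentRes.body) {
    return c.json({ error: "Agent request failed" }, 502);
  }

  // The real proxy converts the agent's custom stream to OpenAI-compatible SSE
  // (see the streaming section below); this sketch passes the body through as-is.
  return new Response(agentRes.body, {
    headers: { "Content-Type": "text/event-stream" },
  });
});

// Val Town HTTP vals export a fetch handler
export default app.fetch;
```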
The chat interface features:
- Centered layout - Takes up 1/2 of screen width and full height
- useChat hook from Vercel AI SDK handles message state and streaming (see the component sketch after this list)
- Robust streaming proxy - Properly converts agent's custom format to OpenAI-compatible SSE
- Custom agent integration - Seamlessly connects to your agent API with full error handling
- Real-time streaming - Word-by-word responses with proper buffering and parsing
- Auto-scrolling - Automatically scrolls to show latest messages
- Enhanced loading state - Animated dots while AI is thinking
- Bottom-anchored input for optimal UX
- Locked viewport prevents zooming and jumpiness
- Prompt buttons for quick conversation starters
- Clean design with Tailwind CSS
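A minimal sketch of that useChat wiring, assuming the classic `ai/react` hook API from the Vercel AI SDK; the import path, styling, and markup are simplified placeholders, and the real `Chat.tsx` adds auto-scrolling, prompt buttons, and the animated loading dots:

```tsx
/** @jsxImportSource https://esm.sh/react */
// Import path is an assumption; the project may pin a specific version.
import { useChat } from "https://esm.sh/ai/react";

export function Chat() {
  // useChat manages message state and streams responses from the backend proxy route
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: "/api/chat",
  });

  return (
    <div className="flex flex-col h-screen w-1/2 mx-auto">
      <div className="flex-1 overflow-y-auto">
        {messages.map((m) => (
          <div key={m.id}>
            <strong>{m.role === "user" ? "You" : "AI"}:</strong> {m.content}
          </div>
        ))}
        {isLoading && <div>…</div>}
      </div>
      {/* Bottom-anchored input */}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something" />
      </form>
    </div>
  );
}
```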
The chat sends requests to your agent in this format:
{ "messages": [ {"role": "user", "content": "Hello"} ], "streamResults": true }
The agent should return either:
- Streaming: Custom format (`f:`, `0:"content"`, `e:`/`d:` markers) which gets converted to an OpenAI-compatible `text/event-stream` (illustrated below)
- Non-streaming: JSON with `content`, `message`, or `choices[0].message.content`
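For illustration, a streaming response in that custom format might look like the following; the `0:"..."` text deltas match the markers above, while the `f:`/`e:`/`d:` payloads shown here are assumed, not taken from the agent's spec:

```
f:{"messageId":"msg-1"}
0:"Hello"
0:" there!"
e:{"finishReason":"stop"}
d:{"finishReason":"stop"}
```

The proxy would translate each text delta into an SSE event such as `data: {"choices":[{"index":0,"delta":{"content":"Hello"}}]}` and finish with `data: [DONE]`.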
The backend features a robust streaming proxy (sketched after this list) that:
- Properly parses the agent's custom streaming format line by line
- Handles incomplete chunks and buffering correctly
- Converts to OpenAI-compatible Server-Sent Events in real-time
- Provides graceful error handling and recovery
- Works seamlessly with the Vercel AI SDK
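A minimal sketch of that conversion under the assumptions above; `convertAgentStream` is an illustrative name, not the project's actual export, and the metadata markers are simply dropped:

```ts
// Convert the agent's line-delimited custom format into OpenAI-compatible SSE.
function convertAgentStream(agentBody: ReadableStream<Uint8Array>): ReadableStream<Uint8Array> {
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  let buffer = ""; // holds any incomplete line between network chunks

  return new ReadableStream({
    async start(controller) {
      const reader = agentBody.getReader();
      const emit = (payload: unknown) =>
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(payload)}\n\n`));
      try {
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          buffer += decoder.decode(value, { stream: true });
          const lines = buffer.split("\n");
          buffer = lines.pop() ?? ""; // keep the trailing partial line for the next chunk
          for (const line of lines) {
            if (line.startsWith('0:')) {
              // 0:"content" carries a JSON-encoded text delta
              const text = JSON.parse(line.slice(2));
              emit({ choices: [{ index: 0, delta: { content: text } }] });
            }
            // f:, e:, and d: metadata markers are ignored in this sketch
          }
        }
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
        controller.close();
      } catch (err) {
        controller.error(err); // surface parse/network failures to the consumer
      }
    },
  });
}
```

The single-line buffer is what lets the proxy handle chunks that split a marker across reads, matching the buffering behavior described above.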
- See the Vercel AI SDK documentation for more chat features and customization options.