React Hono AI Chat

A clean AI chat interface built with React, Hono, and a custom AI agent.

Features

  • ✅ Clean, minimal chat interface
  • ✅ AI chat powered by a custom agent
  • ✅ Streaming proxy that converts the agent's format to OpenAI-compatible SSE, with error handling
  • ✅ Real-time, word-by-word streaming responses
  • ✅ Auto-scrolling to the latest messages
  • ✅ Animated-dot loading indicators
  • ✅ Centered layout (1/2 screen width)
  • ✅ Input anchored to the bottom
  • ✅ Locked viewport for a stable experience
  • ✅ Prompt buttons for quick conversation starters
  • ✅ Tailwind CSS styling

Setup

  1. Set the AGENT_API_KEY environment variable with your agent's API key
  2. The chat will automatically connect to your agent at: https://abrinz--3be1cc2632ad11f080f5569c3dd06744.web.val.run/

Getting started

Get a copy of this starter template by clicking the Remix button in the top-right.

How it works

  • The entrypoint is /backend/index.ts. That's the backend HTTP server, which also serves all the frontend assets.

  • The client-side entrypoint is /frontend/index.html

    • which in turn imports /frontend/index.tsx
    • which in turn imports the React app from /frontend/components/App.tsx.
  • The chat feature uses:

    • /frontend/components/Chat.tsx - React component with useChat hook
    • /api/chat endpoint in /backend/index.ts - Proxies requests to your agent
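
The /api/chat proxy can be sketched roughly as below. This is a minimal illustration, not the actual backend code: the `proxyChat` name, the placeholder agent URL, the Authorization header, and the injectable `fetchFn` parameter are all assumptions made for clarity and testability.

```typescript
// Hedged sketch of a chat proxy handler (names and header scheme assumed).
const AGENT_URL = "https://example.web.val.run/"; // placeholder, not the real agent URL

async function proxyChat(
  req: Request,
  apiKey: string,
  fetchFn: typeof fetch = fetch,
): Promise<Response> {
  const body = await req.json();
  // Forward the client's messages to the agent, forcing streaming on.
  return fetchFn(AGENT_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ ...body, streamResults: true }),
  });
}
```

Injecting `fetchFn` keeps the handler testable without a live agent; the real backend presumably reads the key from the AGENT_API_KEY environment variable.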

Chat Implementation

The chat interface features:

  • Centered layout - takes up half the screen width and full height
  • useChat hook - the Vercel AI SDK hook handles message state and streaming
  • Streaming proxy - converts the agent's custom format to OpenAI-compatible SSE
  • Custom agent integration - connects to your agent API with error handling
  • Real-time streaming - word-by-word responses with proper buffering and parsing
  • Auto-scrolling - automatically scrolls to show the latest messages
  • Loading state - animated dots while the AI is thinking
  • Bottom-anchored input - for optimal UX
  • Locked viewport - prevents zooming and jumpiness
  • Prompt buttons - quick conversation starters
  • Clean design - styled with Tailwind CSS

Agent Integration

The chat sends requests to your agent in this format:

```json
{
  "messages": [
    { "role": "user", "content": "Hello" }
  ],
  "streamResults": true
}
```
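
The request shape can be written down as TypeScript types. These interfaces are inferred from the example above, not taken from a published schema for the agent:

```typescript
// Types inferred from the documented request body (an assumption, not the
// agent's published schema).
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

interface AgentRequest {
  messages: ChatMessage[];
  streamResults: boolean;
}

// Example payload matching the shape shown above.
const example: AgentRequest = {
  messages: [{ role: "user", content: "Hello" }],
  streamResults: true,
};
```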

The agent should return either:

  • Streaming: the custom format (`f:`, `0:"content"`, and `e:`/`d:` markers), which the proxy converts to an OpenAI-compatible `text/event-stream`
  • Non-streaming: JSON with `content`, `message`, or `choices[0].message.content`
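
For the non-streaming case, extracting the assistant text means checking those three fields in turn. A minimal sketch, assuming the field priority shown (the `extractContent` name and the ordering are illustrative, not taken from the backend):

```typescript
// Hedged sketch: pull assistant text out of a non-streaming agent reply by
// checking the three documented fields. Field priority is an assumption.
function extractContent(json: any): string | null {
  if (typeof json?.content === "string") return json.content;
  if (typeof json?.message === "string") return json.message;
  const choice = json?.choices?.[0]?.message?.content;
  return typeof choice === "string" ? choice : null;
}
```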

The backend's streaming proxy:

  • Parses the agent's custom streaming format line by line
  • Buffers incomplete chunks so partial lines are handled correctly
  • Converts output to OpenAI-compatible Server-Sent Events in real time
  • Handles errors gracefully and recovers where possible
  • Works with the Vercel AI SDK
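
The core of the conversion can be sketched for a single line. This assumes only that `0:"text"` lines carry content (per the markers listed above); buffering across chunks, the `f:`/`e:`/`d:` markers, and error recovery are deliberately elided:

```typescript
// Hedged sketch of one step of the format conversion: turn one of the
// agent's `0:"text"` lines into an OpenAI-style SSE `data:` event.
// Non-content markers (f:, e:, d:) are skipped by returning null.
function customLineToSse(line: string): string | null {
  if (!line.startsWith("0:")) return null; // only content chunks carry text
  const text = JSON.parse(line.slice(2)); // `0:"Hello"` -> "Hello"
  const event = {
    choices: [{ delta: { content: text }, index: 0, finish_reason: null }],
  };
  return `data: ${JSON.stringify(event)}\n\n`;
}
```

Because each content payload is a JSON string literal, `JSON.parse` handles escapes (quotes, newlines) without hand-rolled unescaping.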

Further resources

  • Vercel AI SDK Documentation for more chat features and customization options.