Chat SDK is a free, open-source template built with Next.js and the AI SDK that helps you quickly build powerful chatbot applications protected by the Arthur GenAI Engine, an enterprise-grade AI safety and content-filtering layer.
Built with Vercel - The platform for frontend developers. Deploy, preview, and ship faster.
Protected by Arthur - Transform your AI pilots into robust, enterprise-grade applications.
Read Chat SDK Docs · Read Arthur AI Docs · Quick Start · Features · Model Providers · Testing · Arthur AI Setup · Deploy · Documentation
- Clone this repository
- Set up Arthur AI (see Arthur AI Setup)
- Configure environment variables (see Running Locally)
- Install dependencies: `pnpm install`
- Run locally: `pnpm dev`
- Visit localhost:3000
📚 Need more details? Check out our Setup Guide for comprehensive installation instructions.
- Next.js App Router
  - Advanced routing for seamless navigation and performance
  - React Server Components (RSCs) and Server Actions for server-side rendering and increased performance
- AI SDK
  - Unified API for generating text, structured objects, and tool calls with LLMs
  - Hooks for building dynamic chat and generative user interfaces
  - Supports xAI (default), OpenAI, Fireworks, and other model providers
- shadcn/ui
  - Styling with Tailwind CSS
  - Component primitives from Radix UI for accessibility and flexibility
- Data Persistence
  - Neon Serverless Postgres for saving chat history and user data
  - Vercel Blob for efficient file storage
- Auth.js
  - Simple and secure authentication
- Arthur AI Guardrails
  - Enterprise AI Safety: Real-time content filtering and safety controls for production AI applications
  - PII Detection: Advanced detection and blocking of Personally Identifiable Information (SSN, email, phone, etc.)
  - Toxicity Detection: Automatic filtering of harmful, inappropriate, or toxic content
  - Prompt Injection Detection: Protection against malicious prompt injection attacks
  - Hallucination Checks: Detection and prevention of AI-generated false or misleading information
  - Sensitive Data Check: Custom training capabilities to detect organization-specific sensitive information
  - Customizable Rules: Flexible validation rules tailored to your organization's specific needs
  - Active Enforcement: Real-time blocking and redaction of violating content before it reaches your AI models
  - Compliance Ready: Built-in support for enterprise compliance requirements (GDPR, HIPAA, etc.)
  - Production Deployment: Self-hosted Arthur AI Engine for enterprise-grade reliability and security
This template ships with xAI `grok-2-1212` as the default chat model. However, with the AI SDK, you can switch LLM providers to OpenAI, Anthropic, Cohere, and many more with just a few lines of code.
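For example, here is a minimal sketch of the provider swap. It assumes the `@ai-sdk/xai` and `@ai-sdk/openai` provider packages are installed; the template's actual provider wiring may live in a different file.

```ts
// Illustrative provider swap with the AI SDK; not the template's exact wiring.
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';
import { openai } from '@ai-sdk/openai';

// Default model shipped with the template:
const grok = xai('grok-2-1212');

// Switching providers is a one-line change, for example to OpenAI:
const gpt = openai('gpt-4o');

const { text } = await generateText({
  model: grok, // swap to `gpt` (or any other provider's model) here
  prompt: 'Say hello to the Chat SDK.',
});
console.log(text);
```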
You will need to set the environment variables below to run the Next.js AI Chatbot. It's recommended you use Vercel Environment Variables for this, but a `.env` file is all that is necessary.
Use Arthur AI's Docker-based local deployment option for local Vercel development runs (`vercel dev`).
Required Environment Variables:
- `AUTH_SECRET` - Secret for authentication (generate with `openssl rand -base64 32`)
- `ARTHUR_API_KEY` - Your Arthur AI API key
- `ARTHUR_TASK_ID` - Your Arthur AI model ID
- `ARTHUR_API_BASE` - Arthur AI API base URL (for production, use your self-hosted engine endpoint)
- `ARTHUR_USE_GUARDRAILS` - Set to `true` to enable content blocking (PII/toxicity), `false` for logging only
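If you want a quick startup check, here is a minimal sketch (not part of the template) that fails fast when any of the variables above is missing, assuming plain `process.env` access:

```ts
// Illustrative startup check for the required variables listed above.
// This helper is hypothetical; the template may validate configuration differently.
const required = [
  'AUTH_SECRET',
  'ARTHUR_API_KEY',
  'ARTHUR_TASK_ID',
  'ARTHUR_API_BASE',
  'ARTHUR_USE_GUARDRAILS',
] as const;

export function assertEnv(): void {
  const missing = required.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}
```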
Note: You should not commit your `.env` file or it will expose secrets that will allow others to control access to your various AI and authentication provider accounts.
- Install Vercel CLI: `npm i -g vercel`
- Link local instance with Vercel and GitHub accounts (creates `.vercel` directory): `vercel link`
- Download your environment variables: `vercel env pull`
```bash
pnpm install
pnpm dev
```
This project uses Playwright for end-to-end testing and Vitest for unit testing.
```bash
# Run all E2E tests (Playwright)
pnpm test:e2e

# Run all unit tests (Vitest)
pnpm test:unit

# Run unit tests with UI
pnpm test:unit:ui
```
- Unit Tests: Fast feedback on PRs (< 30 seconds) for middleware, utilities, and business logic
- E2E Tests: Comprehensive validation on merge to main for user flows and integrations
- CI/CD Ready: Optimized for automated pipelines with clear separation of concerns
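As a shape reference, here is a minimal Vitest unit test; the `formatChatTitle` helper below is hypothetical, so substitute one of the template's real utilities when writing your own tests.

```ts
import { describe, expect, it } from 'vitest';

// Hypothetical utility under test; replace with a real helper from the template.
function formatChatTitle(title: string): string {
  return title.trim().slice(0, 80) || 'Untitled chat';
}

describe('formatChatTitle', () => {
  it('trims whitespace and caps the length', () => {
    expect(formatChatTitle('  Hello world  ')).toBe('Hello world');
    expect(formatChatTitle('x'.repeat(100))).toHaveLength(80);
  });

  it('falls back to a default for empty input', () => {
    expect(formatChatTitle('   ')).toBe('Untitled chat');
  });
});
```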
🧪 Comprehensive Testing Guide: For detailed testing instructions, examples, and troubleshooting, see our Testing Documentation.
This project uses modern development tools for code quality and consistency:
```bash
# Lint and format code
pnpm lint

# Format code only
pnpm format

# Type checking
pnpm build
```
- ESLint v9: Latest linting with TypeScript support
- Biome: Fast formatting and linting
- TypeScript: Full type safety
- Prettier: Code formatting (via Biome)
🔧 Development Workflow: For detailed development guidelines, contribution standards, and best practices, see our Contributing Guide.
The Arthur Platform provides enterprise-grade AI safety that goes beyond simple content filtering. It actively enforces guardrails in real time, keeping your AI applications safe, compliant, and trustworthy so you can scale with confidence.
The Arthur GenAI Engine provides comprehensive AI safety through five core detection systems:
- 🔒 PII Detection: Automatically identifies and blocks sensitive personal information (SSN, email, phone, credit cards, etc.)
- 🚫 Toxicity Detection: Filters harmful content and inappropriate responses
- ⚡ Prompt Injection Detection: Protects against malicious prompt attacks
- 🎯 Hallucination Checks: Detects and prevents AI-generated false or misleading information
- 📊 Sensitive Data Check: Custom training capabilities to detect organization-specific sensitive information
The Arthur Control Plane provides enterprise-grade monitoring and management:
- 📈 Dashboards: Real-time visibility into AI safety metrics and performance
- 📊 Metrics: Comprehensive analytics on content filtering, violations, and compliance
- 🚨 Alerts: Proactive notifications for safety violations and compliance issues
- 🎛️ Management: Centralized control over safety rules and configurations
The Arthur Platform provides enterprise-grade AI safety with real-time content filtering and compliance controls.
Quick Setup:
- Sign up at platform.arthur.ai/signup
- Deploy the Arthur GenAI Engine locally or in production
- Configure your safety metrics and model
- Add your API credentials to environment variables
📋 Detailed Setup Guide: For step-by-step Arthur AI configuration, see our Setup Guide.
Arthur Platform provides active enforcement of AI safety rules, ensuring that violating content never reaches your AI models through real-time content validation and blocking.
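Conceptually, the enforcement flow looks like the sketch below: validate the prompt first, and only forward it to the model if it passes. The `validatePrompt` function here is a placeholder standing in for a call to the Arthur GenAI Engine, not its real API; see the Arthur AI docs for the actual endpoints and payloads.

```ts
// Conceptual sketch of active enforcement: validate content before it ever
// reaches the model provider. `validatePrompt` is a placeholder only.
type GuardrailResult = { passed: boolean; failedRules: string[] };

async function validatePrompt(_prompt: string): Promise<GuardrailResult> {
  // Placeholder: a real implementation would call the self-hosted Arthur
  // GenAI Engine configured via ARTHUR_API_BASE / ARTHUR_API_KEY / ARTHUR_TASK_ID.
  return { passed: true, failedRules: [] };
}

export async function guardedChat(
  prompt: string,
  callModel: (prompt: string) => Promise<string>,
): Promise<string> {
  if (process.env.ARTHUR_USE_GUARDRAILS === 'true') {
    const { passed, failedRules } = await validatePrompt(prompt);
    if (!passed) {
      // Blocked before any tokens are sent to the LLM provider.
      return `Request blocked by guardrails: ${failedRules.join(', ')}`;
    }
  }
  return callModel(prompt);
}
```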
🔧 Technical Details: For implementation details and troubleshooting, see our Setup Guide and Troubleshooting Guide.
Build and deploy your AI chatbot with enterprise-grade safety and compliance. This template includes Arthur Platform's active guardrails to ensure your AI applications remain safe, compliant, and trustworthy.
- 🛡️ Built-in Safety: Enterprise-grade AI guardrails with Arthur Platform
- 🔒 Compliance Ready: HIPAA, GDPR, and regulatory compliance support
- ⚡ Production Ready: Self-hosted Arthur AI Engine for enterprise reliability
- 🎯 Customizable: Tailor safety rules to your specific requirements
- 📊 Monitoring: Real-time safety monitoring and compliance reporting
Production Deployment Note: Deploy a self-hosted Arthur AI Engine instance on AWS or Kubernetes as described in the Arthur AI documentation for production Vercel deployments. Update `ARTHUR_API_BASE` to point to your deployed engine endpoint.
🚨 Need Help? If you encounter issues during setup or deployment, check our Troubleshooting Guide for common solutions and debugging tips.
This project includes comprehensive documentation to help you get started and succeed:
- 📋 Setup Guide - Detailed installation and configuration instructions
- 🧪 Testing Guide - Complete testing strategy and examples
- 🔧 Contributing Guide - Development workflow and contribution standards
- 🚨 Troubleshooting Guide - Common issues and solutions
- 📚 Documentation Index - Overview of all available documentation
For quick navigation and comprehensive information, start with our Documentation Index.