# Backend Setup

Create an AI backend for VectraDocs plugins using our CLI tool.

The VectraDocs frontend plugins (VitePress, Docusaurus, Scalar) require a backend service to communicate with AI models. This page shows you how to create one in seconds using our CLI.
## Why do I need a backend?
The frontend plugins run in the browser and cannot securely store API keys. A backend:
- Keeps your API keys safe (never exposed to users)
- Handles CORS (allows your docs site to connect)
- Streams responses (for real-time AI output)
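All of the generated backends follow the same gatekeeping pattern before forwarding a request to the LLM. The sketch below illustrates that logic only; the function and field names are hypothetical, not the actual generated code.

```javascript
// Illustrative sketch of the checks a VectraDocs backend performs
// before proxying a chat request to the LLM provider.
function gateRequest({ method, origin, apiKey }, config) {
  // CORS: only the configured docs site may call the backend
  if (origin !== config.frontendUrl) {
    return { status: 403, reason: "origin not allowed" };
  }
  // Auth: the frontend must present the shared API key
  if (apiKey !== config.apiKey) {
    return { status: 401, reason: "bad or missing API key" };
  }
  if (method !== "POST") {
    return { status: 405, reason: "method not allowed" };
  }
  // All checks passed: the real backend would now call the LLM
  // (using its own secret key) and stream the response back.
  return { status: 200, reason: "ok" };
}
```

The LLM provider's key never leaves the backend; the frontend only ever holds the shared `API_KEY`.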
## Quick Start

Run this command in your terminal:

```bash
npx create-vetradocs-backend@latest
```

The CLI will guide you through:

1. Select backend type: Node.js (Express) or Cloudflare Workers
2. Enter project name: e.g., `my-docs-backend`
3. Configure frontend URL: for CORS (e.g., `http://localhost:5173`)
4. Generate API key: auto-generates a secure key for you
## Backend Options
### Option 1: Node.js (Express) (Recommended)
A standard Node.js server that proxies requests to any OpenAI-compatible API.
Best for:
- Full control over the LLM provider
- Deploying to Vercel, Railway, Render, or your own VPS
- Using OpenAI, Anthropic, Groq, Together AI, or local Ollama
Prerequisites:
- Node.js 18+
- An API key from your LLM provider
Step-by-step Setup:
```bash
# 1. Create the backend
npx create-vetradocs-backend@latest
# Select "Node.js (Express)" and enter a project name

# 2. Navigate to the project
cd chat-backend

# 3. Install dependencies
npm install
```

Step 4 is configuring the environment. The CLI already created `.env` with your `API_KEY`; now add your LLM provider credentials by editing `.env`:

```
PORT=3000
FRONTEND_URL=http://localhost:5173
API_KEY=auto-generated-key-here

# Add your LLM provider:
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-your-openai-key
LLM_MODEL=gpt-4o
```

Start the server:

```bash
npm run dev   # Development with auto-reload
npm start     # Production
```

Your backend is now running at `http://localhost:3000`.
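Because the Node.js backend proxies to any OpenAI-compatible API, pointing it at a local Ollama instance is just a `.env` change. This is a sketch assuming Ollama's OpenAI-compatible endpoint at `http://localhost:11434/v1`; use a model name you have actually pulled, and note that Ollama ignores the API key, so any placeholder works:

```
LLM_BASE_URL=http://localhost:11434/v1
LLM_API_KEY=ollama
LLM_MODEL=llama3
```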
### Option 2: Cloudflare Workers
A serverless backend that runs on Cloudflare's edge network using Workers AI.
Best for:
- Zero infrastructure management
- Free tier (Llama 3 models are free)
- Global low-latency responses
Prerequisites:
- A Cloudflare account
- The `wrangler` CLI: `npm install -g wrangler`
Step-by-step Setup:
```bash
# 1. Create the backend
npx create-vetradocs-backend@latest
# Select "Cloudflare Workers" and enter a project name

# 2. Navigate to the project
cd chat-workers

# 3. Install dependencies
npm install

# 4. Log in to Cloudflare
npx wrangler login

# 5. Deploy
npm run deploy
```

Your worker will be deployed to a URL like `https://chat-workers.your-username.workers.dev`.

Configure secrets via the Cloudflare Dashboard under Settings > Variables:

| Variable | Description |
|---|---|
| `API_KEY` | Secret key for frontend authentication |
| `FRONTEND_URL` | Your docs URL for CORS (e.g., `https://docs.example.com`) |
| `AI_MODEL` | Model to use (default: `@cf/meta/llama-3-8b-instruct`) |
## Environment Variables Reference

### Node.js Backend

| Variable | Required | Description |
|---|---|---|
| `PORT` | No | Server port (default: `3000`) |
| `FRONTEND_URL` | Yes | Your docs site URL for CORS |
| `API_KEY` | Yes | Secret key the frontend uses to authenticate |
| `LLM_BASE_URL` | Yes | LLM provider endpoint (e.g., `https://api.openai.com/v1`) |
| `LLM_API_KEY` | Yes | Your LLM provider API key |
| `LLM_MODEL` | Yes | Model name (e.g., `gpt-4o`, `claude-3-sonnet`) |

### Cloudflare Workers Backend

| Variable | Required | Description |
|---|---|---|
| `API_KEY` | Yes | Secret key for authentication |
| `FRONTEND_URL` | Yes | Your docs site URL for CORS |
| `AI_MODEL` | No | Cloudflare AI model (default: `@cf/meta/llama-3-8b-instruct`) |
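The Required column above can be enforced as a startup check so a missing variable fails fast with a clear message instead of a confusing error on the first chat request. This is a hypothetical sketch for the Node.js backend; the generated server may do this differently:

```javascript
// Required variables for the Node.js backend, per the reference table above.
const REQUIRED = ["FRONTEND_URL", "API_KEY", "LLM_BASE_URL", "LLM_API_KEY", "LLM_MODEL"];

// Returns the names of required variables missing or blank in the given env object.
function missingEnv(env, required = REQUIRED) {
  return required.filter((name) => !env[name] || env[name].trim() === "");
}

// Call once at startup, e.g. checkEnv(process.env).
function checkEnv(env) {
  const missing = missingEnv(env);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}
```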
## Connecting Your Frontend
Once your backend is deployed, configure your frontend to use it.
### VitePress Plugin

Add to the `.env` file in your VitePress project root:

```
VITE_VETRADOCS_BACKEND_URL=https://your-backend.com
VITE_VETRADOCS_API_KEY=your-generated-api-key
```

### Docusaurus Plugin

Pass props to the components in `src/theme/Root.js`:

```jsx
<VetradocsChat
  apiEndpoint="https://your-backend.com"
  apiKey="your-generated-api-key"
/>

<VetradocsFloatingBar
  apiEndpoint="https://your-backend.com"
  apiKey="your-generated-api-key"
/>
```

### Scalar / Web Component

Add attributes to the custom element:

```html
<vetradocs-widget
  api-endpoint="https://your-backend.com"
  api-key="your-generated-api-key"
></vetradocs-widget>
```

## Security Best Practices
- Use strong API keys: The CLI generates secure random keys. Never use simple passwords.
- Set `FRONTEND_URL` correctly: This prevents other websites from using your backend.
- Use HTTPS in production: Always deploy your backend with SSL.
- Keep LLM keys secret: Never expose `LLM_API_KEY` to the frontend.
## Deployment Options

### Node.js Backend
| Platform | Command | Notes |
|---|---|---|
| Vercel | `npx vercel` | Automatic HTTPS, free tier |
| Railway | Connect GitHub repo | Easy deploys |
| Render | Connect GitHub repo | Free tier available |
| VPS | `npm start` with PM2 | Full control |
### Cloudflare Workers

The worker is deployed globally as soon as you run `npm run deploy`; no additional steps are needed.
## Troubleshooting

### CORS Errors

Check that `FRONTEND_URL` matches your docs site URL exactly (including `http://` or `https://`).
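The comparison is an exact string match on the request's `Origin` header, so scheme, host, and port must all agree. A sketch of the check, assuming the backend compares `Origin` against `FRONTEND_URL` (illustrative names):

```javascript
// Exact-match origin check: scheme, host, and port must all agree.
// An Origin header never includes a path or trailing slash, so
// FRONTEND_URL must not include one either.
function originAllowed(origin, frontendUrl) {
  return origin === frontendUrl;
}

// Typical mismatches that cause CORS errors:
originAllowed("http://localhost:5173", "https://localhost:5173"); // false: scheme differs
originAllowed("http://localhost:5173", "http://localhost:3000");  // false: port differs
originAllowed("http://localhost:5173", "http://localhost:5173/"); // false: trailing slash
```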
### 401 Unauthorized

Verify that the `API_KEY` in your backend matches the key you configured in the frontend.
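If the keys look identical but you still get a 401, check for stray whitespace picked up while copy-pasting. A hypothetical diagnostic helper (not part of the backend) that surfaces the common cases:

```javascript
// Explains why two API keys that "look the same" fail to match.
function explainKeyMismatch(frontendKey, backendKey) {
  if (frontendKey === backendKey) return "keys match";
  if (frontendKey.trim() === backendKey.trim()) {
    return "keys differ only by whitespace; check for a copy-paste artifact";
  }
  if (frontendKey.length !== backendKey.length) {
    return `length mismatch: ${frontendKey.length} vs ${backendKey.length}`;
  }
  return "keys differ";
}
```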
### AI responses are empty

- Check backend logs for errors
- Verify `LLM_API_KEY` is valid
- Ensure `LLM_BASE_URL` has no trailing slash
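A trailing slash matters because the backend appends a path such as `/chat/completions` to `LLM_BASE_URL`, and a doubled slash can break some providers. A small illustrative normalizer (hypothetical helper, shown only to explain the failure mode):

```javascript
// Strip any trailing slashes so path joins stay clean.
function normalizeBaseUrl(baseUrl) {
  return baseUrl.replace(/\/+$/, "");
}

// "https://api.openai.com/v1/" -> ".../v1/chat/completions",
// not ".../v1//chat/completions"
function chatCompletionsUrl(baseUrl) {
  return normalizeBaseUrl(baseUrl) + "/chat/completions";
}
```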
## Next Steps
- VitePress Plugin: Add AI chat to VitePress
- Docusaurus Plugin: Add AI chat to Docusaurus
- Scalar Plugin: Add AI chat to any website