Vetradocs

Backend Setup

Create an AI backend for Vetradocs plugins using our CLI tool.

The Vetradocs frontend plugins (VitePress, Docusaurus, Scalar) require a backend service to communicate with AI models. This page shows you how to create one in seconds using our CLI.

Why do I need a backend?

The frontend plugins run in the browser and cannot securely store API keys. A backend (sketched after this list):

  1. Keeps your API keys safe (never exposed to users)
  2. Handles CORS (allows your docs site to connect)
  3. Streams responses (for real-time AI output)
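
Concretely, the request flow looks like the sketch below. The /api/chat route and payload shape are illustrative assumptions, not the plugins' exact wire format; the point is which key travels where.

// Hypothetical request flow (route and payload shape are assumptions).
// Browser -> your backend: only the public backend API_KEY travels here.
const response = await fetch('https://your-backend.com/api/chat', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer your-generated-api-key', // backend API_KEY, not the LLM key
  },
  body: JSON.stringify({ messages: [{ role: 'user', content: 'How do I install?' }] }),
});
// Your backend -> LLM provider: the secret LLM key is attached server-side
// and never reaches the browser.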

Quick Start

Run this command in your terminal:

npx create-vetradocs-backend@latest

The CLI will guide you through:

  1. Select backend type: Node.js (Express) or Cloudflare Workers
  2. Enter project name: e.g., my-docs-backend
  3. Configure Frontend URL: For CORS (e.g., http://localhost:5173)
  4. Generate API Key: Auto-generates a secure key for you

Backend Options

Option 1: Node.js (Express)

A standard Node.js server that proxies requests to any OpenAI-compatible API.
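
As a rough sketch (illustrative, not the generated code verbatim, and with an assumed /api/chat route), the whole server reduces to one Express route that checks the frontend's key, restricts CORS to FRONTEND_URL, and relays the provider's stream:

// Minimal sketch of the proxy, using the env vars you configure below.
const express = require('express');
const cors = require('cors');
const { Readable } = require('stream');

const app = express();
app.use(cors({ origin: process.env.FRONTEND_URL })); // only your docs site may call this
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  // Authenticate the frontend with the shared API_KEY
  if (req.headers.authorization !== `Bearer ${process.env.API_KEY}`) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  // Forward to any OpenAI-compatible API; LLM_API_KEY stays server-side
  const upstream = await fetch(`${process.env.LLM_BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LLM_API_KEY}`,
    },
    body: JSON.stringify({
      model: process.env.LLM_MODEL,
      messages: req.body.messages,
      stream: true, // relay tokens as they arrive
    }),
  });
  res.setHeader('Content-Type', 'text/event-stream');
  Readable.fromWeb(upstream.body).pipe(res); // pipe the SSE stream to the browser
});

app.listen(process.env.PORT || 3000);

The generated project fleshes this out, but the security model is the same: LLM_API_KEY never leaves the server.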

Best for:

  • Full control over the LLM provider
  • Deploying to Vercel, Railway, Render, or your own VPS
  • Using OpenAI, Anthropic, Groq, Together AI, or local Ollama

Prerequisites:

  • Node.js 18+
  • An API key from your LLM provider

Step-by-step Setup:

# 1. Create the backend
npx create-vetradocs-backend@latest
# Select "Node.js (Express)" and enter a project name

# 2. Navigate to the project
cd chat-backend

# 3. Install dependencies
npm install

# 4. Configure environment
# The CLI already created .env with your API_KEY.
# Now add your LLM provider credentials:

Edit .env:

PORT=3000
FRONTEND_URL=http://localhost:5173
API_KEY=auto-generated-key-here

# Add your LLM provider:
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-your-openai-key
LLM_MODEL=gpt-4o

Start the server:

npm run dev    # Development with auto-reload
npm start      # Production

Your backend is now running at http://localhost:3000.
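
To smoke-test it from another terminal, something like the script below works against the sketch earlier on this page; the real route may differ, so check the generated project's README. Save it as test.mjs and run node test.mjs:

// Hypothetical smoke test (route and payload shape are assumptions)
const res = await fetch('http://localhost:3000/api/chat', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer auto-generated-key-here', // API_KEY from .env
  },
  body: JSON.stringify({ messages: [{ role: 'user', content: 'ping' }] }),
});
console.log(res.status); // expect 200; 401 means the API keys do not match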


Option 2: Cloudflare Workers

A serverless backend that runs on Cloudflare's edge network using Workers AI.
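
Conceptually this is the same proxy pattern as the Express option, except inference runs on Cloudflare's own models through the Workers AI binding, so no external LLM key is needed. A minimal sketch (illustrative, not the generated Worker verbatim):

// Minimal sketch of a Workers AI proxy (requires an AI binding in wrangler.toml).
export default {
  async fetch(request, env) {
    // Restrict CORS to the configured docs site
    const headers = {
      'Access-Control-Allow-Origin': env.FRONTEND_URL,
      'Access-Control-Allow-Headers': 'Authorization, Content-Type',
    };
    if (request.method === 'OPTIONS') return new Response(null, { headers });

    // Authenticate the frontend with the shared API_KEY secret
    if (request.headers.get('Authorization') !== `Bearer ${env.API_KEY}`) {
      return new Response('Unauthorized', { status: 401, headers });
    }

    // Run inference on Cloudflare's edge; no LLM_API_KEY involved
    const { messages } = await request.json();
    const result = await env.AI.run(
      env.AI_MODEL || '@cf/meta/llama-3-8b-instruct',
      { messages }
    );
    return Response.json(result, { headers });
  },
};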

Best for:

  • Zero infrastructure management
  • Free tier (Llama 3 models are free)
  • Global low-latency responses

Prerequisites:

  • Node.js 18+
  • A Cloudflare account (the free tier works)

Step-by-step Setup:
Step-by-step Setup:

# 1. Create the backend
npx create-vetradocs-backend@latest
# Select "Cloudflare Workers" and enter a project name

# 2. Navigate to the project
cd chat-workers

# 3. Install dependencies
npm install

# 4. Login to Cloudflare
npx wrangler login

# 5. Deploy
npm run deploy

Your worker will be deployed to a URL like:

https://chat-workers.your-username.workers.dev

Configure secrets (via Cloudflare Dashboard > Settings > Variables, or with npx wrangler secret put <NAME>):

  Variable       Description
  API_KEY        Secret key for frontend authentication
  FRONTEND_URL   Your docs URL for CORS (e.g., https://docs.example.com)
  AI_MODEL       Model to use (default: @cf/meta/llama-3-8b-instruct)

Environment Variables Reference

Node.js Backend

  Variable       Required   Description
  PORT           No         Server port (default: 3000)
  FRONTEND_URL   Yes        Your docs site URL for CORS
  API_KEY        Yes        Secret key the frontend uses to authenticate
  LLM_BASE_URL   Yes        LLM provider endpoint (e.g., https://api.openai.com/v1)
  LLM_API_KEY    Yes        Your LLM provider API key
  LLM_MODEL      Yes        Model name (e.g., gpt-4o, claude-3-sonnet)

Cloudflare Workers Backend

  Variable       Required   Description
  API_KEY        Yes        Secret key for authentication
  FRONTEND_URL   Yes        Your docs site URL for CORS
  AI_MODEL       No         Cloudflare AI model (default: @cf/meta/llama-3-8b-instruct)

Connecting Your Frontend

Once your backend is deployed, configure your frontend to use it.

VitePress Plugin

Add to your .env file in the VitePress project root:

VITE_VETRADOCS_BACKEND_URL=https://your-backend.com
VITE_VETRADOCS_API_KEY=your-generated-api-key
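
Vite inlines any VITE_-prefixed variable into the client bundle at build time, so both values are public by design; the plugin reads them roughly like this, which is exactly why LLM_API_KEY must never get a VITE_ prefix:

// Roughly how the plugin picks these up (names from the .env above).
// VITE_-prefixed vars are embedded in the shipped JavaScript, i.e. public.
const backendUrl = import.meta.env.VITE_VETRADOCS_BACKEND_URL;
const apiKey = import.meta.env.VITE_VETRADOCS_API_KEY;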

Docusaurus Plugin

Add the components in src/theme/Root.js and pass the backend details as props. A minimal wrapper looks like this (the import path is a placeholder; use the package name from the Docusaurus plugin's install instructions):

// src/theme/Root.js — the import path below is an assumption
import { VetradocsChat, VetradocsFloatingBar } from 'vetradocs-docusaurus';

export default function Root({ children }) {
  return (
    <>
      {children}
      <VetradocsChat apiEndpoint="https://your-backend.com" apiKey="your-generated-api-key" />
      <VetradocsFloatingBar apiEndpoint="https://your-backend.com" apiKey="your-generated-api-key" />
    </>
  );
}

Scalar / Web Component

Add attributes to the custom element:

<vetradocs-widget
  api-endpoint="https://your-backend.com"
  api-key="your-generated-api-key"
></vetradocs-widget>

Security Best Practices

  1. Use strong API keys: The CLI generates secure random keys; never use simple passwords. (A key-rotation snippet follows this list.)
  2. Set FRONTEND_URL correctly: This prevents other websites from using your backend.
  3. Use HTTPS in production: Always deploy your backend with SSL.
  4. Keep LLM keys secret: Never expose LLM_API_KEY to the frontend.
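
If you ever need to rotate the key, a 32-byte random hex string (comparable to what the CLI generates, though its exact method may differ) takes one line of Node:

// Generate a replacement API_KEY
const crypto = require('crypto');
console.log(crypto.randomBytes(32).toString('hex')); // 64-char random hex key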

Deployment Options

Node.js Backend

  Platform   Command               Notes
  Vercel     npx vercel            Automatic HTTPS, free tier
  Railway    Connect GitHub repo   Easy deploys
  Render     Connect GitHub repo   Free tier available
  VPS        npm start with PM2    Full control

Cloudflare Workers

Already deployed globally when you run npm run deploy. No additional steps needed.


Troubleshooting

CORS Errors

Check that FRONTEND_URL matches your docs site URL exactly (including http:// or https://).
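
One way to verify is to send a preflight request yourself and inspect what the backend echoes back (the route below is an assumption; any route on your backend will do). Run with Node 18+ as an .mjs file:

// Check which origin the backend allows
const res = await fetch('http://localhost:3000/api/chat', {
  method: 'OPTIONS',
  headers: {
    Origin: 'http://localhost:5173',
    'Access-Control-Request-Method': 'POST',
  },
});
// Should print your docs site URL; null means the origin was rejected
console.log(res.headers.get('access-control-allow-origin'));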

401 Unauthorized

Verify the API_KEY in your backend matches what you configured in the frontend.

AI responses are empty

  1. Check backend logs for errors
  2. Verify LLM_API_KEY is valid (a direct check follows below)
  3. Ensure LLM_BASE_URL has no trailing slash
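
For the second and third points, you can query the provider directly with the same credentials the backend uses; /models is a standard endpoint on OpenAI-compatible APIs:

// Direct check of the LLM credentials (run with the backend's .env loaded)
const res = await fetch(`${process.env.LLM_BASE_URL}/models`, {
  headers: { Authorization: `Bearer ${process.env.LLM_API_KEY}` },
});
console.log(res.status); // 200 = key and base URL are good; 401 = bad key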
