VectraDocs

Installation

How to set up and run VectraDocs locally or deploy it

Installation Guide

Follow these steps to get your AI-powered documentation site up and running.

Choose Your Path

Using VitePress, Docusaurus, or another framework?
You don't need to clone this repo. Instead, install our framework plugins.


Prerequisites

Before you begin, ensure you have:

Requirement      Version   Notes
Node.js          18+       Required for Next.js 15
npm/pnpm/yarn    Latest    Package manager
Git              Any       For cloning the repository
LLM API Key      -         OpenAI, Anthropic, or local Ollama
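
You can confirm your installed Node.js version meets the requirement with:

node --version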

Step 1: Clone the Repository

git clone https://github.com/iotserver24/VectraDocs.git
cd VectraDocs

The repository structure:

VectraDocs/
├── app/                 # Next.js App Router
│   ├── api/chat/        # AI chat endpoint
│   └── docs/            # Documentation pages
├── content/docs/        # Your MDX documentation files
├── components/          # React components (AI chat, footer)
├── scripts/             # Build scripts (search index)
├── public/              # Static assets + search-index.json
└── packages/            # CLI tools (create-vetradocs-backend)

Step 2: Install Dependencies

npm install

This installs the following core packages:

Package                            Purpose
fumadocs-core, fumadocs-ui         Documentation framework with MDX support
langchain, @langchain/openai       AI orchestration and LLM integration
@orama/orama                       Client-side vector search engine
@orama/plugin-data-persistence     Persist and load the search index
react-markdown, remark-gfm         Render AI responses as rich Markdown
lucide-react                       Beautiful icons

Step 3: Configure Environment Variables

Create your environment file:

cp .env.example .env.local

Open .env.local and configure your LLM provider.

Option A: OpenAI

LLM_BASE_URL="https://api.openai.com/v1"
LLM_API_KEY="sk-your-openai-api-key"
LLM_MODEL="gpt-4o"

Get your API key from OpenAI Platform.

Option B: Anthropic Claude

LLM_BASE_URL="https://api.anthropic.com/v1"
LLM_API_KEY="sk-ant-your-key"
LLM_MODEL="claude-3-sonnet-20240229"

Option C: Local Ollama (Free & Private)

  1. Download and install Ollama.
  2. Pull a model: ollama pull llama3
  3. Start Ollama: ollama serve
  4. Configure .env.local:
LLM_BASE_URL="http://localhost:11434/v1"
LLM_API_KEY="ollama"
LLM_MODEL="llama3"

Option D: Any OpenAI-Compatible API

VectraDocs works with any API that follows the OpenAI chat completions format:

  • Groq: https://api.groq.com/openai/v1
  • Together AI: https://api.together.xyz/v1
  • Fireworks AI: https://api.fireworks.ai/inference/v1
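
As an illustration, a Groq configuration might look like this (the key format and model name are examples; check your provider's dashboard for current values):

LLM_BASE_URL="https://api.groq.com/openai/v1"
LLM_API_KEY="gsk_your-groq-api-key"
LLM_MODEL="llama-3.1-8b-instant"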

Step 4: Build the Search Index

The AI assistant uses Orama to search your documentation. Before it can answer questions, you must build the search index:

npm run build:index

This script (scripts/build-index.mjs):

  1. Scans all .mdx files in content/docs/
  2. Extracts text content
  3. Creates a vector search index
  4. Saves it to public/search-index.json
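
Conceptually, the script does something like this minimal sketch (the actual scripts/build-index.mjs may use a different schema, chunking strategy, and frontmatter handling):

import { readdirSync, readFileSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'
import { create, insert } from '@orama/orama'
import { persist } from '@orama/plugin-data-persistence'

// Schema mirrors the fields the chat endpoint searches on (assumed here).
const db = await create({
  schema: { title: 'string', content: 'string', url: 'string' },
})

// Index one document per .mdx file (a real script would also walk
// subfolders and strip frontmatter/JSX before indexing).
for (const file of readdirSync('content/docs')) {
  if (!file.endsWith('.mdx')) continue
  const slug = file.replace(/\.mdx$/, '')
  await insert(db, {
    title: slug,
    content: readFileSync(join('content/docs', file), 'utf8'),
    url: `/docs/${slug}`,
  })
}

// Serialize the index so the client can load it at runtime.
writeFileSync('public/search-index.json', await persist(db, 'json'))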

Important

Run this command every time you add or edit documentation!
The AI won't know about changes until you rebuild the index.
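
If you'd rather not remember this manually, one option is to chain the indexer into your build script in package.json (assuming the default Next.js scripts):

"scripts": {
  "build:index": "node scripts/build-index.mjs",
  "build": "npm run build:index && next build"
}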


Step 5: Run the Development Server

Start the Next.js development server:

npm run dev

Open http://localhost:3000 in your browser.

You should see:

  • ✅ The documentation homepage
  • ✅ A floating "Ask AI" bar at the bottom
  • ✅ Clicking it opens the AI chat sidebar

Step 6: Production Build (Optional)

To build for production:

npm run build
npm start

Or deploy to Vercel:

npx vercel

Troubleshooting

AI says "I don't know" or gives wrong answers

  1. Make sure you ran npm run build:index after editing docs.
  2. Check that .env.local has valid LLM credentials.
  3. Verify the model name is correct for your provider.

Chat endpoint returns 500 error

  1. Check terminal for error messages.
  2. Verify LLM_BASE_URL is correct (no trailing slash).
  3. For Ollama, ensure it's running: ollama serve.
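
To check the LLM connection independently of the app, you can call the chat completions endpoint directly with the values from .env.local (a standard OpenAI-style request):

curl "$LLM_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$LLM_MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"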

Search index is empty

  1. Ensure your docs are in content/docs/ with .mdx extension.
  2. Each file needs frontmatter with title and description.
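
A minimal frontmatter block looks like this (the title and description values are up to you):

---
title: Installation
description: How to set up and run VectraDocs
---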

Next Steps
