Installation Guide
How to set up and run VectraDocs locally or deploy it.
Follow these steps to get your AI-powered documentation site up and running.
Choose Your Path
Using VitePress, Docusaurus, or another framework?
You don't need to clone this repo. Instead, install our plugins (see Backend Setup under Next Steps).
Prerequisites
Before you begin, ensure you have:
| Requirement | Version | Notes |
|---|---|---|
| Node.js | 18+ | Required for Next.js 15 |
| npm/pnpm/yarn | Latest | Package manager |
| Git | Any | For cloning the repository |
| LLM API Key | - | OpenAI, Anthropic, or local Ollama |
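Want to double-check the Node requirement before going further? A throwaway script works (the filename here is arbitrary):

```js
// check-node.mjs: verify Node 18+ (required by Next.js 15)
const major = Number(process.versions.node.split(".")[0]);
if (major < 18) {
  console.error(`Node ${process.version} is too old; VectraDocs needs 18+.`);
  process.exit(1);
}
console.log(`Node ${process.version} OK`);
```

Run it with `node check-node.mjs`.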
Step 1: Clone the Repository
```bash
git clone https://github.com/iotserver24/VectraDocs.git
cd VectraDocs
```
The repository structure:
```
VectraDocs/
├── app/                  # Next.js App Router
│   ├── api/chat/         # AI chat endpoint
│   └── docs/             # Documentation pages
├── content/docs/         # Your MDX documentation files
├── components/           # React components (AI chat, footer)
├── scripts/              # Build scripts (search index)
├── public/               # Static assets + search-index.json
└── packages/             # CLI tools (create-vetradocs-backend)
```
Step 2: Install Dependencies
```bash
npm install
```
This installs the following core packages:
| Package | Purpose |
|---|---|
| `fumadocs-core`, `fumadocs-ui` | Documentation framework with MDX support |
| `langchain`, `@langchain/openai` | AI orchestration and LLM integration |
| `@orama/orama` | Client-side vector search engine |
| `@orama/plugin-data-persistence` | Persist and load the search index |
| `react-markdown`, `remark-gfm` | Render AI responses as rich Markdown |
| `lucide-react` | Beautiful icons |
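To illustrate how two of these fit together: the chat UI can render model output through react-markdown with remark-gfm so that tables and lists in AI answers display as rich Markdown rather than plain text. A minimal sketch (the component name and prop are illustrative, not the repo's actual component):

```jsx
// AnswerMarkdown.jsx: illustrative rendering of an AI response as rich Markdown
import Markdown from "react-markdown";
import remarkGfm from "remark-gfm";

export function AnswerMarkdown({ content }) {
  // remark-gfm adds GitHub-flavored extensions: tables, strikethrough, task lists
  return <Markdown remarkPlugins={[remarkGfm]}>{content}</Markdown>;
}
```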
Step 3: Configure Environment Variables
Create your environment file:
```bash
cp .env.example .env.local
```
Open `.env.local` and configure your LLM provider:
Option A: OpenAI (Recommended for beginners)
```
LLM_BASE_URL="https://api.openai.com/v1"
LLM_API_KEY="sk-your-openai-api-key"
LLM_MODEL="gpt-4o"
```
Get your API key from the OpenAI Platform.
Option B: Anthropic Claude
```
LLM_BASE_URL="https://api.anthropic.com/v1"
LLM_API_KEY="sk-ant-your-key"
LLM_MODEL="claude-3-sonnet-20240229"
```
Option C: Local Ollama (Free & Private)
- Download and install Ollama.
- Pull a model:
  ```bash
  ollama pull llama3
  ```
- Start Ollama:
  ```bash
  ollama serve
  ```
- Configure `.env.local`:
  ```
  LLM_BASE_URL="http://localhost:11434/v1"
  LLM_API_KEY="ollama"
  LLM_MODEL="llama3"
  ```

Option D: Any OpenAI-Compatible API
VectraDocs works with any API that follows the OpenAI chat completions format:
- Groq: https://api.groq.com/openai/v1
- Together AI: https://api.together.xyz/v1
- Fireworks AI: https://api.fireworks.ai/inference/v1
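Whichever option you choose, these three variables are all that is needed to construct a LangChain client against an OpenAI-compatible server. A simplified sketch of the wiring (the repo's actual route handler in app/api/chat/ will look different; this only shows how the env vars plug in):

```js
// llm-sketch.mjs: one ChatOpenAI client covers OpenAI, Ollama, Groq, and friends
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: process.env.LLM_MODEL,    // e.g. "gpt-4o" or "llama3"
  apiKey: process.env.LLM_API_KEY, // "ollama" works as a placeholder for local Ollama
  configuration: { baseURL: process.env.LLM_BASE_URL }, // any OpenAI-compatible endpoint
});

const reply = await llm.invoke("Say hello in one sentence.");
console.log(reply.content);
```

This is why options A through D all share the same three variables: the client only needs a base URL, a key, and a model name.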
Step 4: Build the Search Index
The AI assistant uses Orama to search your documentation. Before it can answer questions, you must build the search index:
```bash
npm run build:index
```
This script (`scripts/build-index.mjs`):
- Scans all `.mdx` files in `content/docs/`
- Extracts text content
- Creates a vector search index
- Saves it to `public/search-index.json`
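For a feel of what such a script does, here is a stripped-down sketch using the same Orama packages; the real `scripts/build-index.mjs` will differ in its schema and text extraction:

```js
// build-index-sketch.mjs: simplified outline of indexing content/docs
// into public/search-index.json
import { readdir, readFile, writeFile } from "node:fs/promises";
import path from "node:path";
import { create, insert } from "@orama/orama";
import { persist } from "@orama/plugin-data-persistence";

const db = await create({
  schema: { title: "string", content: "string", url: "string" },
});

// Recursively index every .mdx file under content/docs/
async function walk(dir) {
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      await walk(full);
    } else if (entry.name.endsWith(".mdx")) {
      const raw = await readFile(full, "utf8");
      const title = /title:\s*["']?([^"'\n]+)/.exec(raw)?.[1] ?? entry.name; // crude frontmatter grab
      const content = raw.replace(/^---[\s\S]*?---/, ""); // drop the frontmatter block
      await insert(db, { title, content, url: full });
    }
  }
}

await walk("content/docs");
await writeFile("public/search-index.json", await persist(db, "json"));
```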
Important
Run this command every time you add or edit documentation!
The AI won't know about changes until you rebuild the index.
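One way to make forgetting impossible: npm runs a `predev` script automatically before `dev`, so you can chain the index build into `package.json` (optional; shown here assuming no `predev` script exists yet):

```json
{
  "scripts": {
    "predev": "npm run build:index"
  }
}
```

With that in place, `npm run dev` always starts from a fresh index.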
Step 5: Run the Development Server
Start the Next.js development server:
```bash
npm run dev
```
Open http://localhost:3000 in your browser.
You should see:
- ✅ The documentation homepage
- ✅ A floating "Ask AI" bar at the bottom
- ✅ An AI chat sidebar that opens when you click the bar
Step 6: Production Build (Optional)
To build for production:
```bash
npm run build
npm start
```
Or deploy to Vercel:
```bash
npx vercel
```
Remember to set your `LLM_*` environment variables in the Vercel project settings; `.env.local` stays on your machine and is not deployed.
Troubleshooting
AI says "I don't know" or gives wrong answers
- Make sure you ran `npm run build:index` after editing docs.
- Check that `.env.local` has valid LLM credentials.
- Verify the model name is correct for your provider.
Chat endpoint returns 500 error
- Check the terminal for error messages.
- Verify `LLM_BASE_URL` is correct (no trailing slash).
- For Ollama, ensure it's running: `ollama serve`.
Search index is empty
- Ensure your docs are in `content/docs/` with the `.mdx` extension.
- Each file needs frontmatter with `title` and `description`.
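To confirm the index actually contains documents, you can load it back with the persistence plugin the project already uses. A minimal sketch, assuming the index was persisted in the plugin's "json" format:

```js
// inspect-index.mjs: sanity-check the generated search index
import { readFile } from "node:fs/promises";
import { count } from "@orama/orama";
import { restore } from "@orama/plugin-data-persistence";

const raw = await readFile("public/search-index.json", "utf8");
const db = await restore("json", raw); // assumes the index was persisted with format "json"
console.log(`Indexed documents: ${await count(db)}`);
```

If the count is 0, revisit the two checks above.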
Next Steps
- AI Configuration: Customize prompts, models, and UI.
- Backend Setup: Create backends for VitePress/Docusaurus plugins.