VectraDocs

Introduction

Welcome to the Ultimate AI-Powered Documentation Starter

VectraDocs

VectraDocs is a next-generation, open-source documentation starter kit designed to provide a premium, AI-native experience out of the box. Built on top of Next.js 15, Fumadocs, LangChain, and Orama, it combines a beautiful UI with powerful client-side RAG (Retrieval-Augmented Generation) search.

100% Open Source

All VectraDocs components are free and open source under the MIT license.

Why "VectraDocs"?

Traditional documentation sites are static. Users search for keywords and hope for matches. VectraDocs changes the game by embedding a context-aware AI Assistant directly into the reading experience.

Key Features

  • 🧠 Context-Aware AI Chat: An intelligent assistant that reads your documentation and answers user questions instantly.
  • ⚡ Client-Side RAG: Powered by Orama, the search index is built at build time and queries run entirely in the browser or at the edge.
  • 💬 Premium UI Experience:
    • Floating Action Bar: A sleek, non-intrusive input bar that expands as you type.
    • Rich Markdown Rendering: The chat supports code blocks, bold text, lists, and links.
    • Code Copying: One-click copy for all code snippets generated by the AI.
  • 🛠️ Easy Configuration: Precise control over the System Prompt, LLM Model (OpenAI, Ollama, Anthropic), and UI styling.
  • 🚀 Next.js 15 & React 19: Built on the bleeding edge for maximum performance.
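To make the configuration point concrete, a config for the assistant might look like the sketch below. The file name and every field (`provider`, `systemPrompt`, `ui`, and so on) are illustrative assumptions, not VectraDocs' actual API; check the kit's source for the real surface.

```typescript
// Hypothetical assistant config — field names are illustrative,
// not VectraDocs' actual API.
export const assistantConfig = {
  // Which LLM backend to use: "openai" | "ollama" | "anthropic"
  provider: "openai",
  model: "gpt-4o-mini",
  // System prompt that scopes the assistant to your documentation
  systemPrompt:
    "You are a documentation assistant. Answer only from the provided context.",
  ui: {
    // Floating action bar placement and accent color
    position: "bottom-center",
    accentColor: "#6d28d9",
  },
};
```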

Ecosystem Overview

VectraDocs isn't just one tool — it's an ecosystem of components that work together.

Component                  | Description                                                        | Links
---------------------------|--------------------------------------------------------------------|-------------
VectraDocs (Next.js)       | This full starter kit with AI built-in.                            | GitHub
vetradocs-vitepress        | VitePress plugin for Vue-based docs.                               | npm · GitHub
vetradocs-docusaurus       | Docusaurus plugin for React-based docs.                            | npm · GitHub
vetradocs-scalar           | Web Component for Scalar or any HTML site.                         | npm · GitHub
create-vetradocs-backend   | CLI to scaffold AI backends (Node.js/Express, Cloudflare Workers). | npm · GitHub


Architecture

VectraDocs uses a Retrieval-Augmented Generation (RAG) approach:

  1. Ingestion (scripts/build-index.mjs): Scans your .mdx files at build time and creates a search index.
  2. Storage: The index is saved as a JSON file in public/search-index.json.
  3. Retrieval (app/api/chat/route.ts): When a user asks a question, the API loads the index, finds relevant docs, and feeds them to the LLM.
  4. Generation: The LLM (via LangChain) generates a streaming response based only on your documentation.
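The ingestion step (1) could be sketched as follows. This is not the actual `scripts/build-index.mjs` (which indexes into Orama); `chunkMdx` and the record shape are hypothetical, shown only to illustrate turning `.mdx` source into indexable records.

```typescript
// Hypothetical build-time chunker: turns one .mdx file into records
// that a search index (e.g. Orama) could ingest.
type IndexRecord = { id: string; title: string; content: string };

function chunkMdx(slug: string, source: string): IndexRecord[] {
  // Drop YAML frontmatter if present.
  const body = source.replace(/^---[\s\S]*?---\s*/, "");
  // Split on level-1/2 headings; each section becomes one record.
  const records: IndexRecord[] = [];
  const parts = body.split(/^#{1,2}\s+/m).filter((p) => p.trim());
  parts.forEach((part, i) => {
    const [heading, ...rest] = part.split("\n");
    records.push({
      id: `${slug}#${i}`,
      title: heading.trim(),
      content: rest.join("\n").trim(),
    });
  });
  return records;
}
```

At build time, records from every page would be collected and written to `public/search-index.json` for the client to load.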

This approach ensures the AI only answers based on YOUR content, reducing hallucinations.
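Steps 3 and 4 can be sketched in miniature. The real retrieval uses Orama's full-text search and the generation step goes through LangChain; here a naive keyword-overlap scorer and a plain prompt builder stand in for both, and all names (`DocChunk`, `retrieve`, `buildPrompt`) are hypothetical.

```typescript
// Hypothetical shape of an indexed chunk loaded from search-index.json.
type DocChunk = { id: string; title: string; content: string };

// Naive keyword-overlap scorer standing in for Orama's full-text search.
function score(query: string, doc: DocChunk): number {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  const text = (doc.title + " " + doc.content).toLowerCase();
  return terms.filter((t) => text.includes(t)).length;
}

// Retrieval: rank chunks against the question and keep the top k.
function retrieve(index: DocChunk[], query: string, k: number): DocChunk[] {
  return [...index]
    .map((doc) => ({ doc, s: score(query, doc) }))
    .filter((r) => r.s > 0)
    .sort((a, b) => b.s - a.s)
    .slice(0, k)
    .map((r) => r.doc);
}

// Generation input: the retrieved chunks become the LLM's only context,
// which is what keeps answers grounded in your documentation.
function buildPrompt(chunks: DocChunk[], question: string): string {
  const context = chunks
    .map((c) => `## ${c.title}\n${c.content}`)
    .join("\n\n");
  return `Answer using ONLY this documentation:\n\n${context}\n\nQuestion: ${question}`;
}
```

Restricting the prompt to retrieved chunks is the design choice that reduces hallucinations: the model is asked to answer from your content or not at all.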


Open Source

VectraDocs and all its plugins are MIT licensed and free to use.

Contributions are welcome!


Next Steps

Ready to build? Check out the Installation Guide to get started.
