Get started

Install, configure, and start using Codehere in under 2 minutes. Free, open source, works offline.

Quick Start

1. Install Codehere

npm install -g codehere

Requires Node.js 20+

2. Set your AI provider API key

# Option A: OpenAI (auto-selected when key is present)
export OPENAI_API_KEY=sk-...

# Option B: Cohere (free trial: 1000 calls/month)
export COHERE_API_KEY=...

# Option C: OpenRouter (400+ models, pay per use)
export OPENROUTER_API_KEY=...

# Option D: Ollama (fully offline, no key needed)
# Install ollama at ollama.com, then:
export CODEHERE_AI_PROVIDER=local

Auto-selection order: OpenAI → Cohere → OpenRouter → Ollama. Set CODEHERE_AI_PROVIDER to override.
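The auto-selection order above can be pictured as a simple chain of checks. This is an illustrative sketch only; `detect_provider` is our own helper, not a Codehere command:

```shell
# Illustrative sketch of the documented auto-selection order.
# detect_provider is a stand-in, not part of Codehere itself.
detect_provider() {
  if   [ -n "$CODEHERE_AI_PROVIDER" ]; then echo "$CODEHERE_AI_PROVIDER"
  elif [ -n "$OPENAI_API_KEY" ];       then echo "openai"
  elif [ -n "$COHERE_API_KEY" ];       then echo "cohere"
  elif [ -n "$OPENROUTER_API_KEY" ];   then echo "openrouter"
  else                                      echo "local"
  fi
}
detect_provider
```

Setting CODEHERE_AI_PROVIDER wins over every key, which is why it works as an override.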

3. Index your repository

cd your-project
codehere index

Creates a local SQLite database in .codehere-cache/
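Because the cache lives in .codehere-cache/, a small guard can skip re-indexing when it already exists. A sketch under that assumption; `ensure_index` is our own helper, not a built-in:

```shell
# Re-index only when the local cache directory is missing.
# The .codehere-cache/ path comes from the docs above; ensure_index is our own helper.
ensure_index() {
  if [ -d .codehere-cache ]; then
    echo "index cache found, skipping"
  else
    codehere index
  fi
}
```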

4. Start using it

# Ask questions about your codebase
codehere ask "where is authentication handled?"

# Plan a change
codehere plan "add rate limiting to the API"

# Fix a file — safety pipeline runs automatically
codehere fix src/api.ts "add input validation"

# Describe a bug and get a fix
codehere fix "null pointer in auth middleware"

Commands Reference

codehere index (stable)

Build a semantic search index of your codebase. Stores embeddings in local SQLite.

codehere ask "question" (stable)

Ask natural language questions about your codebase. Returns relevant files and AI-generated answers.

--json Output as JSON
codehere plan "goal" (stable)

Break down a task into steps with affected files, dependencies, and risk assessment.

--review Review before executing
--execute Execute the plan
codehere fix <file> "instruction" (stable)

AI-assisted code editing with an automatic safety pipeline: it scans the instruction for prompt injection, generates the fix, scans the output for vulnerabilities, then applies the edit.

--explain Show AI reasoning
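The safety pipeline can be pictured as a chain of guards. Every function below is a stand-in for illustration; Codehere's real internals differ:

```shell
# Sketch of the fix safety pipeline: scan input -> generate -> scan output -> apply.
# All four stage functions are stand-ins, not Codehere internals.
scan_instruction() { case "$1" in *"ignore previous"*) return 1 ;; *) return 0 ;; esac; }
generate_fix()     { echo "patch for: $1"; }   # stand-in for the AI call
scan_output()      { [ -n "$1" ]; }            # stand-in for the SAST scan
apply_patch()      { echo "applied: $1"; }

run_fix() {
  scan_instruction "$1" || { echo "blocked: possible prompt injection"; return 1; }
  patch="$(generate_fix "$1")"
  scan_output "$patch" || { echo "blocked: unsafe output"; return 1; }
  apply_patch "$patch"
}
```

The point of the ordering is that a malicious instruction is rejected before any model call, and a bad model output is rejected before anything touches your files.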
codehere fix "description" (stable)

Describe a bug and get a fix suggestion. Safety pipeline runs automatically.

codehere orchestrate "goal" (stable)

Multi-step task execution with coordination across files.

--review Review at each step
codehere health (stable)

Check system status — provider connectivity, index status, configuration.

codehere config (stable)

View and manage configuration.

codehere setup (stable)

Interactive setup wizard for first-time configuration.

codehere models (stable)

List available AI models for your configured provider.

Provider Configuration

Codehere works with multiple AI providers. Pick one that fits your needs:

OpenAI (Recommended)

GPT-4o, o1, o3-mini. Best quality. Auto-selected when OPENAI_API_KEY is present.

export OPENAI_API_KEY=sk-...

Get your key at platform.openai.com. Usage is billed by OpenAI directly.

Cohere

Command-R, Command-R+. Good for code understanding. Free trial: 1000 calls/month.

export COHERE_API_KEY=...

Get your key at dashboard.cohere.com. Auto-selected when OPENAI_API_KEY is absent.

OpenRouter

Access 400+ models through a single API. Pay per use.

export OPENROUTER_API_KEY=...

Get your key at openrouter.ai. Note: OpenRouter doesn't support embeddings — indexing uses a local fallback.

Ollama (Fully Offline)

Zero network calls. No API key. Your code never leaves your machine.

export CODEHERE_AI_PROVIDER=local
# Ollama runs at http://localhost:11434 by default

Install from ollama.com. Run `ollama serve` before using. Needs 8GB+ RAM.
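A quick reachability check before pointing Codehere at a local Ollama. This assumes the default port from above; `check_ollama` is our own helper:

```shell
# Report whether an Ollama server answers on the default port (11434).
check_ollama() {
  if curl -fsS --max-time 2 http://localhost:11434/ >/dev/null 2>&1; then
    echo "ollama: reachable"
  else
    echo "ollama: not running"
  fi
}
check_ollama
```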

Using .env files

Instead of exporting environment variables, create a .env file in your project root:

OPENAI_API_KEY=sk-your-key-here
# COHERE_API_KEY=your-cohere-key
# OPENROUTER_API_KEY=your-openrouter-key

⚠️ Add .env to your .gitignore — never commit API keys.
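One way to make that safeguard stick is to append .env to .gitignore only if it isn't already listed:

```shell
# Append .env to .gitignore unless an exact-match line already exists.
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```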

VS Code Extension (Upcoming · v0.2.0)

The VS Code extension is in development and has not been tested with real users. It is not published to the VS Code Marketplace yet.

We will add full documentation here once it has been validated. Expected in v0.2.0. Until then, use the CLI — it works today.

Troubleshooting

"API key not found"

Make sure you've set the right environment variable for your provider. Run `codehere health` to check.
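To see at a glance which provider variables your shell actually exports, a small diagnostic helper (our own, not a Codehere command):

```shell
# Print which provider-related environment variables are set in this shell.
show_keys() {
  for v in OPENAI_API_KEY COHERE_API_KEY OPENROUTER_API_KEY CODEHERE_AI_PROVIDER; do
    if [ -n "${!v}" ]; then echo "$v: set"; else echo "$v: not set"; fi
  done
}
show_keys
```

Remember that variables exported in one terminal are not visible in another; put them in a .env file if they keep disappearing.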

"429 Too Many Requests"

You've hit your provider's rate limit. Wait a few minutes, or upgrade your API key tier.
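For transient 429s in scripts, a generic retry-with-backoff wrapper helps; the helper name and delays here are our own choices, not Codehere behavior:

```shell
# Retry a command up to 3 times, waiting a little longer after each failure.
with_retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge 3 ] && return 1
    sleep "$n"   # waits 1s, then 2s; tune for your provider's limits
  done
}
```

Usage: `with_retry codehere ask "where is auth handled?"`.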

Indexing is very slow

Large repos (100K+ lines) take time. Try indexing a subdirectory first, or use a faster embedding provider.

"No embeddings found" when asking questions

Run `codehere index` first. Embeddings are per-repository and stored locally.

Ollama not connecting

Make sure Ollama is running (`ollama serve`) and accessible at http://localhost:11434.

Poor quality answers

Try a stronger model. GPT-4o or Command-R+ gives better results than smaller models. Also make sure your repo is indexed.

Security & Privacy

  • Never commit API keys to version control — use .env files (add to .gitignore)
  • All data stored locally in .codehere-cache/ per repository
  • No telemetry, no analytics, no data sent to Codehere servers — ever
  • Use Ollama for fully offline operation — zero network calls
  • Every AI-generated edit is security-scanned (SAST) before applying
  • Audit trails stored locally for every operation

Ready to get started?

Free, open source, works offline. Install and index your first repo in under 2 minutes.