Open Source · Local-First · v0.1.1 on npm

AI coding agent with a safety pipeline

Every instruction is scanned for prompt injection. Every AI-generated edit is checked for vulnerabilities before it touches your code. Every operation leaves an audit trail. Local-first, open source, free.

50 Attack patterns
100% Detection rate
6-stage Safety pipeline
0 Telemetry
Prompt injection detection on every instruction
SAST scan before any edit applies
OpenAI · Cohere · Ollama · OpenRouter
VS Code extension + CLI

The Problem with AI Coding Agents


Edit code without showing you what changed

Most AI agents apply changes directly. You discover what happened after the fact.


No security scanning on generated code

AI can introduce SQL injection, hardcoded secrets, or unsafe patterns — and no one catches it.


Vulnerable to prompt injection attacks

Malicious instructions in comments or files can redirect the agent to do something harmful.


No audit trail for compliance

In regulated industries, you need to prove what AI did and why. Most tools give you nothing.

What Codehere actually does

Public beta — core features work today. Install it, break it, tell us.

Semantic Code Search

Index your codebase into a local SQLite database. Ask natural language questions and get real answers with file paths and code snippets.

codehere index && codehere ask "how does auth work?"
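Under the hood, answering a question over an index is standard embedding search: the query is embedded and ranked against locally stored chunk vectors. A minimal sketch of that ranking step (the types and function names are illustrative, not Codehere's actual internals):

```typescript
// Illustrative embedding search: cosine similarity over locally
// stored chunk vectors. Names and shapes are hypothetical.
type Chunk = { path: string; snippet: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank indexed chunks by similarity to the query embedding.
function search(query: number[], index: Chunk[], topK = 3): Chunk[] {
  return [...index]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK);
}
```

The top-ranked chunks (file path plus snippet) are what feeds the natural-language answer.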

Safety Pipeline on Every Edit

Prompt injection scan → AI generates fix → SAST scan → license check → human review gate → audit log. Six stages, every time.

codehere fix src/api.ts "add input validation"
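The six stages above can be sketched as a chain of veto-capable checks: any failing stage blocks the edit, and the stage log doubles as the audit trail. The stage names mirror the pipeline described here, but the checks are toy stand-ins, not the real scanners:

```typescript
// Illustrative stage chain: each stage can veto the edit.
type Stage = (edit: string) => { ok: boolean; reason?: string };

function runPipeline(edit: string, stages: [string, Stage][]): string[] {
  const log: string[] = [];
  for (const [name, stage] of stages) {
    const result = stage(edit);
    log.push(`${name}: ${result.ok ? "pass" : "blocked: " + result.reason}`);
    if (!result.ok) break; // a single failing stage stops the edit
  }
  return log; // the log doubles as the audit trail
}

// Toy checks standing in for the real scanners:
const stages: [string, Stage][] = [
  ["injection-scan", (e) => ({ ok: !/ignore previous instructions/i.test(e) })],
  ["sast-scan", (e) => ({ ok: !/eval\(/.test(e), reason: "eval() found" })],
  ["license-check", () => ({ ok: true })],
];
```

The key design point is fail-closed ordering: later stages (and the edit itself) never run once an earlier stage objects.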

Safety Benchmark (50 Patterns)

A public, reproducible benchmark: 50 attack patterns across 5 categories. Run it in ~50ms from source; no API key needed. 100% detection in pattern mode. Targeting v0.2.0.

node dist/benchmark/run.js # run from source
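Pattern mode needs no AI calls because detection is rule matching: each category carries a set of patterns, and the benchmark scores how many attack strings they catch. A toy sketch of that scoring (the regexes are illustrative examples, not the actual 50-pattern set):

```typescript
// Toy pattern-mode detector: regex rules per category, scored
// against a corpus of attack strings. Rules are illustrative only.
const rules: Record<string, RegExp[]> = {
  "prompt-injection": [/ignore (all|previous) instructions/i],
  "credential-extraction": [/print .*api[_ ]?key/i],
};

// Fraction of attacks caught by the rules for their category.
function detectionRate(attacks: { category: string; text: string }[]): number {
  const detected = attacks.filter((a) =>
    (rules[a.category] ?? []).some((r) => r.test(a.text))
  ).length;
  return detected / attacks.length;
}
```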

Task Planning

Break down complex changes into step-by-step plans with affected files, dependency analysis, and risk assessment.

codehere plan "refactor auth to use JWT"
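Dependency analysis implies the plan must emit steps in an order where every prerequisite comes first. A minimal topological-ordering sketch (the step shape is hypothetical, not Codehere's actual plan format):

```typescript
// Illustrative dependency-ordered planning: emit each step only
// after all of its prerequisites. Step shape is hypothetical.
type Step = { id: string; deps: string[] };

function orderSteps(steps: Step[]): string[] {
  const done = new Set<string>();
  const out: string[] = [];
  while (out.length < steps.length) {
    const next = steps.find(
      (s) => !done.has(s.id) && s.deps.every((d) => done.has(d))
    );
    if (!next) throw new Error("cyclic dependency in plan");
    done.add(next.id);
    out.push(next.id);
  }
  return out;
}
```

A cycle check matters here: a plan whose steps depend on each other can never be executed, so it should fail loudly at planning time.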

VS Code Extension (Upcoming)

Alpha code in development — not yet published to the Marketplace or tested with real users. Targeting v0.2.0. Do not rely on it yet.

# Coming in v0.2.0 — not yet available

Local-First, No Lock-In

All embeddings stay in .codehere-cache/ on your machine. Works fully offline with Ollama. Switch providers with one env var.

export OPENAI_API_KEY=sk-... # auto-selected
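Auto-selection can be as simple as checking which API key is present and falling back to local Ollama when none is. A sketch under assumed variable names and precedence (not the tool's documented behavior):

```typescript
// Illustrative provider auto-selection. Variable names and the
// precedence order are assumptions, not documented behavior.
function pickProvider(env: Record<string, string | undefined>): string {
  if (env.OPENAI_API_KEY) return "openai";
  if (env.COHERE_API_KEY) return "cohere";
  if (env.OPENROUTER_API_KEY) return "openrouter";
  return "ollama"; // local fallback: fully offline
}
```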

Three steps. That's it.

1

Index your code

Run `codehere index` in your project. It builds a local SQLite database of semantic embeddings. Your code stays on your machine.

2

Ask, plan, or fix

Ask questions about your codebase. Plan multi-step changes. Or fix files directly — the safety pipeline runs automatically on every fix.

3

Review and confirm

See exactly what will change. Security scan results, file diffs, risk assessment. Approve to apply, reject to discard.


Safety claims you can verify yourself

We built a public benchmark: 50 real-world attack patterns, 5 categories. Run it in 50ms. No API key needed.

10/10
Prompt Injection
10/10
Credential Extraction
10/10
Code Injection
10/10
Supply Chain
10/10
OWASP Patterns
# Run it yourself — no API key needed
$ node dist/benchmark/run.js
Overall Detection Rate: 100%
✅ Detected: 50 / 50
❌ Missed: 0 / 50
Total time: 50ms · Pattern mode · No AI calls
View full methodology on GitHub

Our safety commitment

Frontier AI labs build powerful models. We build the safety layer between those models and your codebase.

  • Prompt injection detection before any instruction reaches the AI
  • SAST security scan on every AI-generated edit, before it applies
  • 50-pattern safety benchmark — publicly reproducible, run in 50ms
  • Local-first: code stays on your machine (Ollama for fully offline)
  • Complete audit trail: every operation logged locally
  • No telemetry, no analytics, no data collection — ever
  • Open source — inspect every line, fork it, contribute

Try it yourself

Install. Index. Ask a question. Under 2 minutes. Free and open source. Works offline.