You're searching for a better linter. What you actually need is custom AI code review — AI agents that read your codebase, understand context, and enforce your rules in plain English.
$100 in free usage and 1,000 free agent credits every month.
req.query.name flows through formatFilter() (which does not sanitize) into an unparameterized SQL query. Use parameterized queries instead of string interpolation.

Static analysis tools were built for a different era. Here's what's broken.
Industry-standard static analysis tools have 80%+ false positive rates. Your team ignores most alerts — and the real ones slip through.
Writing a custom Semgrep or SonarQube rule takes days. Learning the DSL is a project in itself. Most teams never bother.
Every new developer is another license fee, whether they trigger one review or a thousand. Costs grow faster than your team.
Scheduled scans find issues after code is already merged. By the time the alert fires, the bug is in production.
Macroscope is custom AI code review — not a one-size-fits-all linter. Every dial is yours: rules, scope, model, severity, integrations. All versioned with your code.
A .macroscope/check-run-agents/name.md file IS the rule. Write what to check in natural language — no DSL, no YAML, no AST patterns. The agent reads it and enforces it.
Limit each check to specific paths, file globs, or languages with include/exclude patterns. Run in full_diff mode for whole-PR context, or code_object mode for parallel per-function analysis.
Each agent specifies which LLM to use — Claude Opus for security audits, Sonnet for lightweight checks, GPT-5 variants for specific tasks. Tune reasoning depth and effort level per agent.
Set conclusion to failure to block merging when the agent finds issues, or leave it as neutral for advisory-only feedback. You control severity per check.
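As a sketch, a scoped advisory check might look like this. The `title`, `model`, and `conclusion` keys match the Security Review example on this page; the `include` and `mode` keys (and the model name) are inferred from the descriptions above, not confirmed field names:

```markdown
---
title: No Raw SQL
model: claude-sonnet        # lighter model for a lightweight check
include:
  - "src/server/**/*.js"    # scope to backend code only
mode: full_diff             # whole-PR context (vs. per-function code_object)
conclusion: neutral         # advisory-only; set to failure to block merges
---
Flag any database query built by string interpolation.
Require parameterized queries for all user-supplied values.
```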
Agents read the full repo — not just the diff. They can grep, navigate references, run git log and blame, read config files. They understand context like a human reviewer.
Agents pull context from your team's tools — Sentry for errors, Linear for tickets, BigQuery for data, LaunchDarkly for flags, Slack for notifications. Connect any MCP server.
98% precision in our published benchmark — highest of any AI code review tool. Highest bug-detection rate across 8 languages.
Read the benchmark results



Define a rule in markdown. See the result on your next PR.
```markdown
---
title: Security Review
model: claude-opus-4-6
tools:
  - browse_code
  - git_tools
conclusion: failure
---
Review for security vulnerabilities:

- SQL injection, XSS, CSRF
- Hardcoded secrets or credentials
- Insecure authentication patterns

Check imports and data flow across files, not just the changed lines.
```
Catches a SQL injection that ESLint would miss — the vulnerability spans two files with user input flowing through a helper function into an unparameterized query.
SQL injection: user input from req.query flows through formatFilter() into raw SQL in db.query()
Missing CSRF token validation on POST /api/transfer
No hardcoded secrets detected
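The cross-file flow behind that first finding can be sketched in JavaScript. The file names and the exact shape of the formatFilter helper are illustrative, following the finding above:

```javascript
// filters.js -- helper that formats a filter clause (does NOT sanitize)
function formatFilter(field, value) {
  // Interpolates the value verbatim into a raw SQL fragment.
  return `${field} = '${value}'`;
}

// routes.js -- attacker-controlled input flows into an unparameterized query
function buildQuery(req) {
  const clause = formatFilter("name", req.query.name); // req.query.name is user input
  return `SELECT * FROM users WHERE ${clause}`;
}

// A crafted value escapes the string literal and injects SQL:
const sql = buildQuery({ query: { name: "x' OR '1'='1" } });
// sql: SELECT * FROM users WHERE name = 'x' OR '1'='1'
```

Each file looks harmless on its own; the vulnerability exists only in the flow between them, which is what a line-local pattern matcher cannot see.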
Macroscope is not a static analysis tool. It's a different category — here's the difference.
| Legacy static analysis | Macroscope (custom AI code review) |
|---|---|
| Regex / AST pattern matching | Codebase-aware AI agents |
| Custom rules = YAML or DSL | Custom rules = plain English markdown |
| Applies to entire codebase uniformly | Scope to specific paths, globs, or languages per rule |
| One model, no tuning | Choose model + reasoning depth + effort per check |
| Pass / fail only | Block merge or advise — per check |
| Scheduled scans (nightly / CI) | Runs on every PR open + push |
| 80%+ false positive rate (industry) | 98% precision (published benchmark) |
| Misses semantic bugs | Built to catch semantic bugs |
| Per-seat licensing | Usage-based, no seat fees |
| Configuration = engineer-week | Configuration = markdown file, versioned with code |
Deep AST-level analysis with full reference graphs across your entire codebase.
No. Macroscope is custom AI code review — a different category entirely. Static analyzers match patterns against your code using regex, AST rules, or DSL. Macroscope is an AI agent that reads your codebase and applies your team's rules, written in plain English. The difference matters: static analysis finds syntax patterns, Macroscope finds semantic bugs.
No — Macroscope is a different category. It's custom AI code review, not a static analyzer. Static analyzers match patterns against your code with regex, AST, or DSL rules. Macroscope is an AI agent that reads your codebase and applies your team's rules — written in plain English in a markdown file.
Most teams running SAST or dependency-scanning tools keep them alongside Macroscope. Macroscope replaces the code review side of the stack, not SAST.
Yes. Macroscope catches security vulnerabilities including SQL injection, XSS, CSRF, hardcoded secrets, insecure authentication patterns, and more. Because it understands your codebase semantically rather than matching regex patterns, it catches vulnerabilities that static pattern matchers miss — like injection risks that span multiple files or auth bypasses hidden in complex control flow.
ESLint catches syntax patterns — formatting issues, unused variables, simple anti-patterns. Macroscope catches semantic bugs — null dereferences, race conditions, logic errors, breaking API changes, security vulnerabilities. They operate at different levels. ESLint tells you about a missing semicolon; Macroscope tells you about a null pointer that will crash in production. Most teams keep ESLint for formatting and use Macroscope for real bug detection.
Macroscope supports Python, TypeScript, JavaScript, Kotlin, Java, Rust, Swift, and Go with deep AST-level analysis. It builds full reference graphs for each language, tracking function calls, type references, and imports across your entire codebase.
A rule is a markdown file in .macroscope/check-run-agents/ in your repository. Frontmatter sets the dials — model, reasoning depth, effort level, include/exclude file globs, and whether failures block merging. The body is plain-English instructions describing what the agent should check for.
Each rule becomes an agentic check run on every PR. Agents can browse your full codebase, run git commands, read GitHub metadata, and pull context from connected integrations like Sentry, Linear, and BigQuery. Rules are versioned with your code — they go through PR review like any other change. See the full docs.
Macroscope has 98% precision per our published benchmark — the highest of any AI code reviewer. That works out to a roughly 2% false positive rate, far below the 80%+ false positive rates typical of regex-based static analysis tools. Macroscope achieves this by understanding what your code does semantically, not just pattern-matching against syntax. See our benchmark results.
$100 in free usage and 1,000 free agent credits every month. After that, pricing is usage-based with no per-seat fees. Code review is $0.05 per KB reviewed, and agent credits are $0.01 each.
Most teams pay less than they would with per-seat static analysis tools. You can set monthly budget limits and per-review caps to control costs.
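At the published rates above, a rough monthly cost sketch. The PR and credit volumes are hypothetical, and it assumes the $100 monthly free usage offsets the combined bill:

```javascript
// Published rates (from the pricing above); volumes are hypothetical.
const COST_PER_KB = 0.05;      // code review: $0.05 per KB reviewed
const COST_PER_CREDIT = 0.01;  // agent credits: $0.01 each

// Hypothetical month: 200 PRs averaging 8 KB reviewed,
// plus 3,000 agent credits used (first 1,000 are free).
const reviewCost = 200 * 8 * COST_PER_KB;                      // $80.00
const creditCost = Math.max(0, 3000 - 1000) * COST_PER_CREDIT; // $20.00

// Net bill after the $100 monthly free usage allowance:
const total = Math.max(0, reviewCost + creditCost - 100);      // $0.00
```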
About 60 seconds. Install the Macroscope GitHub App, and it starts reviewing your pull requests immediately. For custom rules, add a markdown file to your repo — no configuration files, no CI pipeline changes, no YAML to write. The agent picks up the rule on the next PR.
No per-seat fees. Pay for what you use. Most teams pay less than legacy static analysis tools.
$100 in free usage to get started. No credit card required.
1,000 free agent credits every month for custom check run agents.
No per-seat fees. Ever. Pricing scales with usage, not headcount.
60 seconds to set up. Install the GitHub App and start catching bugs immediately.
Replace linter noise with custom AI code review that actually reads your code. Get started in 60 seconds.
$100 in free usage and 1,000 free agent credits every month. No seat fees.
Join teams building with Macroscope to catch more bugs and merge PRs faster than ever. Simple usage-based pricing. No seat fees, no surprises.