How to Set Up AI Code Review on GitHub in 5 Minutes
A step-by-step guide to setting up AI code review on GitHub in under 5 minutes — install, first review, customizing with Check Run Agents, spend controls, and Slack integration.
Setting up AI code review on GitHub is usually the first question teams ask when they start looking at automated code review. The good news: it's faster than setting up most CI integrations. With Macroscope, you can go from zero to your first AI-powered GitHub code review in under five minutes, with no configuration required before the first review runs.
TL;DR — GitHub Code Review Setup in 5 Minutes
- Step 1 (1 min): Install the Macroscope GitHub App, pick which repositories to connect, and activate your subscription in the dashboard ($100 free credit is applied automatically)
- Step 2 (1 min): Push a pull request to a connected repository
- Step 3 (2-3 min): Macroscope's AI code reviewer analyzes the diff, builds a reference graph of your codebase, and leaves review comments directly on the PR
- Step 4 (optional): Add Check Run Agents by dropping `.md` files into a `.macroscope/` directory in your repo
- Step 5 (optional): Set spend controls, connect Slack, and add Jira/Linear/Sentry integrations from the dashboard
- New workspaces get $100 in free credits — enough to run AI code review for weeks on most teams before you pay anything
What "AI Code Review on GitHub" Actually Means
Before walking through the setup, it helps to know what you're signing up for. AI code review on GitHub means an AI reviewer runs automatically on every pull request you open, the same way GitHub Actions or a linter would, and leaves review comments directly on the PR. The reviewer:
- Reads the diff and the surrounding code
- Builds a reference graph of your repository to understand cross-file relationships
- Identifies bugs, security issues, logic errors, and structural problems
- Leaves comments at the exact line where each issue appears
- Optionally summarizes the PR, suggests fixes, and blocks merges when custom rules fail
No CLI tool. No per-machine installation. No changes to your developer workflow. The AI reviewer lives inside your GitHub pull request workflow and shows up as another reviewer on the PR.
Prerequisites
To set up AI code review on GitHub, you need:
- A GitHub account with admin access to the repository or organization you want to connect
- A pull request workflow — most teams already have this. If you already merge via PRs on GitHub, you're ready.
- A browser. That's it.
You do not need:
- Any CI configuration
- A YAML config file
- To change your branch protection rules
- To install anything locally
- To write custom prompts
All of those are optional once you're up and running. For the initial setup, a five-minute install is enough.
Step 1: Install the Macroscope GitHub App (1 minute)
The entire GitHub code review setup starts with installing the Macroscope GitHub App.
- Go to macroscope.com and click Get Started.
- Sign in with GitHub. Macroscope authenticates via GitHub OAuth, so there is no separate account to create.
- Choose which GitHub organization or user account to install into. If you have admin access to multiple orgs, you can install into any of them.
- Pick which repositories to connect. You have two options:
- All repositories — Macroscope reviews every PR in the org (recommended for teams that want blanket coverage)
- Selected repositories — pick specific repos for a scoped pilot
- Click Install.
- Activate your subscription from the Macroscope dashboard. This is required to enable AI code review immediately — without it, your first PR will open but no review will appear. Activation is a one-click step and applies the $100 free credit to your workspace.
Macroscope is now installed as a GitHub App on the chosen scope, and code review is enabled on your workspace. You can see the GitHub App in your GitHub organization's Installed GitHub Apps settings at any time.
If you ever need to add or remove repositories later, you can do it from GitHub's app settings page or from the Macroscope dashboard — no re-installation required.
Step 2: Push a Pull Request (1 minute)
The second step of AI code review setup on GitHub is: do what you already do. Push a PR to any connected repository.
- Make a change on a branch in any repository you connected in Step 1.
- Push the branch to GitHub.
- Open a pull request against your main branch (or whatever branch you review against — Macroscope respects your existing PR workflow).
That's it. You don't need to tag a reviewer, add a label, or run a command. Macroscope detects the new PR automatically via GitHub webhooks the moment it's opened.
Step 3: Your First AI Code Review (2-3 minutes)
Once you open the pull request, the AI code review starts automatically. Within 2-3 minutes, you'll see:
- A PR summary — Macroscope generates a written summary of what changed, organized by logical sections. This fills the "I don't have time to write a good PR description" gap that most engineering teams have.
- Review comments at specific lines — Macroscope leaves review comments directly on the lines that have issues. Bug-finding comments include a clear description of the problem and often a suggested fix. Style or nit-level comments are kept to a minimum by default.
- A check run on the PR — Macroscope appears as a GitHub check (alongside your CI, tests, and other checks). The built-in Correctness Check completes as `SUCCESS` or `NEUTRAL` — it flags issues in review comments rather than failing the check. If you configure Check Run Agents with `conclusion: failure`, they appear as their own individual check runs that can fail and block merges via branch protection.
If Macroscope finds an issue, you can:
- Fix it manually — read the comment, make the change, push a new commit. Macroscope will re-review automatically.
- Reply "fix it for me" — Macroscope's Fix It For Me feature creates a branch, implements the fix, opens a PR, and iterates against your CI until tests pass. This is the fastest way to close a simple bug.
- Dismiss with a reply — if the comment isn't relevant, a reply tells Macroscope to treat this as false positive feedback and adjust future reviews.
This is the full out-of-the-box experience. No configuration required. For most teams, this is enough to start getting value on day one.
Step 4: Customize With Check Run Agents (Optional, 2 minutes)
Most teams eventually want to tell the AI reviewer about team-specific conventions. Macroscope's recommended way to do this is Check Run Agents: named, scoped AI agents defined as markdown files in a .macroscope/ directory at the root of your repository. Think of it as CI-as-code for AI code review.
Each Check Run Agent lives in its own .md file inside .macroscope/. For example, .macroscope/payment-flow-safety.md:
---
title: Payment Flow Safety
conclusion: failure
include:
- "src/payments/**"
- "src/billing/**"
---
Flag any change to the payments flow that does not include an
accompanying test file change in `tests/payments/`. Treat any
hard-coded currency conversion or tax calculation as a blocker.
The YAML frontmatter controls the agent's behavior:
- `title` — Name of the check, visible in the GitHub check runs list
- `conclusion: failure` — Makes the agent a merge-blocking check (omit for advisory checks)
- `include` / `exclude` — Glob patterns that scope which files this agent reviews. Use `include` to run only on specific paths (e.g., `"*.go"`, `"src/auth/**"`), `exclude` to skip paths (e.g., vendored dependencies or generated code)
The body of the file is the agent's instructions in natural language.
Commit the file to your repository. The next PR Macroscope reviews will run the agent as its own GitHub check run — it shows up alongside your CI, tests, and the built-in Correctness Check. If conclusion: failure is set and the agent flags an issue, GitHub branch protection can block the merge the same way a failing test would.
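Mechanically, a Check Run Agent file is just YAML frontmatter followed by a markdown body of instructions. As a rough illustration of that file shape (this is a simplified sketch, not Macroscope's actual parser), splitting one of these files into its two parts looks like:

```python
# Simplified sketch of splitting a .macroscope/*.md agent file into
# YAML frontmatter and a natural-language instruction body.
# Illustrative only -- not Macroscope's actual parser.

def split_agent_file(text: str) -> tuple[str, str]:
    """Return (frontmatter, body) for an agent file."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return "", text  # no frontmatter: whole file is instructions
    # Find the closing '---' that ends the frontmatter block.
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            frontmatter = "\n".join(lines[1:i])
            body = "\n".join(lines[i + 1:]).strip()
            return frontmatter, body
    return "", text  # unterminated frontmatter: treat as plain body

example = """---
title: Payment Flow Safety
conclusion: failure
---
Flag any change to the payments flow that does not include tests."""

fm, body = split_agent_file(example)
print(fm)    # the title/conclusion configuration lines
print(body)  # the agent's plain-language instructions
```

The key point: the configuration (title, merge policy, file scope) is declarative, while the actual review criteria stay in plain English in the body.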
Older: Custom Rules in macroscope.md
If you have seen older documentation or examples that put rules inside a single macroscope.md file at the repo root, that's the earlier Custom Rules surface. Check Run Agents replaces Custom Rules for new setups — the recommended path is to put each rule in its own .macroscope/<rule-name>.md file with YAML frontmatter. This gives you better scoping, per-agent configurability, and individual check runs on every PR.
For advanced configuration — models, tool usage, rewriting file content, and agent composition — see the Check Run Agents guide.
Step 5: Set Spend Controls (Optional, 1 minute)
Macroscope uses usage-based pricing — $0.05 per KB of code reviewed. For most teams this works out to under $1 per review on average. Spend controls let you set hard limits to prevent surprise bills:
- Go to your workspace's Billing page on the Macroscope dashboard.
- Set a monthly spend limit — Macroscope will pause reviews for the rest of the month if you hit this. Default: off.
- Set a per-PR cap — the maximum spend on any single PR, with a default of $50. Reviews of huge PRs (700+ KB) pause when they hit this cap and resume on the next push.
- Set a per-review cap — the maximum for a single review pass, with a default of $10.
Most small teams running AI code review on GitHub never hit these limits. They exist for teams doing large refactors, migrations, or running many automated PRs from coding agents.
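The pricing arithmetic is simple enough to sketch. Assuming the published $0.05/KB rate, the $0.50 per-review floor mentioned in the pricing FAQ, and the default $10 per-review cap (treat the exact numbers as illustrative, since defaults can change):

```python
def estimate_review_cost(kb_reviewed: float,
                         rate_per_kb: float = 0.05,    # $0.05 per KB reviewed
                         floor: float = 0.50,           # minimum charge per review
                         per_review_cap: float = 10.0   # default per-review cap
                         ) -> float:
    """Back-of-the-envelope cost for a single review pass."""
    cost = max(kb_reviewed * rate_per_kb, floor)
    return round(min(cost, per_review_cap), 2)

print(estimate_review_cost(5))    # small PR: the floor applies -> 0.5
print(estimate_review_cost(19))   # typical PR: 19 KB * $0.05 -> 0.95
print(estimate_review_cost(700))  # huge PR: clipped at the per-review cap -> 10.0
```

In other words, a typical ~19 KB diff lands near the historical $0.95 average, and only very large diffs ever approach the caps.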
Step 6: Connect Slack (Optional, 1 minute)
Macroscope has deep Slack integration. From the dashboard:
- Go to Settings → Connections in the dashboard and click Connect Slack.
- Authorize Macroscope in your Slack workspace.
- Choose a default channel for review notifications, and optionally configure per-repo channels.
Once connected, Slack becomes a first-class surface for your AI code reviews:
- Review comments and PR summaries are posted to the channel when PRs are opened
- You can trigger Macroscope Agent queries in Slack (`@Macroscope what changed in the payments service this week?`)
- Sprint digests and weekly broadcasts are posted automatically
- You can reply to review threads from Slack without leaving the channel
Teams that live in Slack find this is the highest-leverage integration to set up after the core code review is running.
Step 7: Connect Jira, Linear, Sentry, PostHog (Optional)
Macroscope's code review uses external context to make reviews more useful. Connect each system from the dashboard:
- Jira or Linear — If a PR references a ticket, Macroscope reads the ticket description and acceptance criteria to contextualize the review
- Sentry — Macroscope factors in recent errors related to the code being changed
- PostHog — Feature-flag and analytics context factor into reviews of customer-facing code
- LaunchDarkly — Similar treatment for feature flags
Each integration takes about 30 seconds to set up and noticeably improves review quality for teams that use the connected system.
How Macroscope Works Under the Hood
AI code review on GitHub is built on a few moving parts. Knowing what happens during a review helps teams debug when something doesn't look right.
- Webhook trigger. When you push to a PR, GitHub sends a webhook to Macroscope.
- Codewalker parse. Macroscope runs a language-specific codewalker — a dedicated parser for Go, TypeScript, Python, Java, Kotlin, Swift, Rust, Ruby, Elixir, and others — to build an Abstract Syntax Tree of every relevant file.
- Reference graph construction. The codewalker connects every function, class, and variable into a reference graph spanning your whole repository. This is what enables cross-file bug detection.
- Diff analysis. Macroscope walks the PR's diff through the reference graph, identifying every caller, dependent, and type that the change touches.
- Bug detection. Language-specific rule engines plus Macroscope's AI reviewer evaluate the change against the full context.
- Custom rule application. Any Check Run Agents or `macroscope.md` rules run in this step.
- PR summary. A structured summary of what changed is written using the same context.
- GitHub comments posted. Review comments are posted to the PR via the GitHub API.
- Check run status updated. Pass, fail, or pending — visible in the PR's check runs section alongside your other CI.
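The reference-graph step above is the one that enables cross-file detection, and it can be pictured with a toy example. This sketch is purely illustrative (the real codewalkers build the graph from ASTs, not hand-written edges): given a map from each symbol to its direct callers, a change to one function can be traced to everything that transitively depends on it.

```python
# Toy reference graph: each symbol maps to the symbols that call it.
# Real codewalkers derive these edges from ASTs across the whole repo;
# here the edges are declared by hand for illustration.
references = {
    "billing.charge": ["api.checkout", "jobs.retry_payment"],
    "billing.refund": ["api.cancel_order"],
    "api.checkout":   ["routes.post_checkout"],
}

def impacted_by(symbol: str, callers: dict[str, list[str]]) -> set[str]:
    """Everything that directly or transitively references `symbol`."""
    seen: set[str] = set()
    stack = list(callers.get(symbol, []))
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(callers.get(s, []))
    return seen

print(sorted(impacted_by("billing.charge", references)))
# a change to billing.charge reaches api.checkout, jobs.retry_payment,
# and, transitively, routes.post_checkout
```

This is why a one-line change to a shared function can surface review comments about a caller in a completely different file.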
What to Expect in Your First Week
Most teams follow the same pattern after setting up AI code review on GitHub:
- Day 1 — First PR reviewed. Summary, one or two review comments. Often the team says "that's actually useful" and moves on.
- Days 2-3 — Macroscope finds a real bug. Team is surprised. Slack discussion about "how did it know."
- Days 4-7 — Team starts relying on PR summaries for daily review work. Authors begin pre-reviewing their own PRs based on Macroscope feedback before tagging humans.
- Week 2 — First Check Run Agent dropped into `.macroscope/`. Custom rules start running as their own GitHub check runs.
- Week 3+ — Slack integration, Fix It For Me usage, and Approvability auto-approval take hold. Team's PR cycle time starts visibly dropping.
None of this requires any further setup — it's the natural progression after the initial install.
Common Setup Issues
Q: I installed the app but I don't see a review on my new PR.
Check that the repository is listed in the installation scope on GitHub. If the PR's base branch is unusual (not main or master), the review still runs — Macroscope reviews PRs against whatever branch you target.
Q: Macroscope is leaving too many comments.
You can tune the review tone with a Check Run Agent — drop a file at `.macroscope/review-tone.md` (no `conclusion: failure`) containing a line like "Skip style nitpicks unless they impact readability." Macroscope's v3 engine already aggressively filters low-signal comments, but the explicit rule helps teams with specific preferences. You can also leave a thumbs-down reaction on any review comment you think was off-target — Macroscope uses those reactions to calibrate future reviews.
Q: Review is taking longer than 5 minutes.
For very large PRs (700+ KB), reviews can take up to 10-15 minutes. If a review hangs longer than that, check the Macroscope dashboard for any error banner on the workspace. Very rarely, a repo with an extremely large initial indexing job (tens of millions of lines) takes longer on its first review — this is a one-time cost.
Q: How do I pause AI code review temporarily?
From the dashboard, you can pause reviews on a per-repo or workspace-wide basis. Reviews resume immediately when un-paused. You can also uninstall the GitHub App entirely if you need a hard off switch — no data is retained after uninstall beyond what's required for audit compliance.
Advanced: Scaling Check Run Agents
Once your team is running AI code review on GitHub, the next step is expanding Check Run Agents for custom enforcement. Each agent is a separate .md file in .macroscope/ with YAML frontmatter controlling behavior — you can stack as many as you want, each with its own scope and merge policy.
A few patterns teams typically set up:
Security gate — .macroscope/security-review.md:
---
title: Security Review
conclusion: failure
include:
- "src/auth/**"
- "src/payments/**"
- "src/api/**"
exclude:
- "**/*.test.ts"
---
Flag any PR that introduces new HTTP endpoints without
authentication middleware, new SQL queries without parameterized
statements, or new file upload handlers without size limits.
Migration gate — .macroscope/migration-check.md:
---
title: Migration Check
conclusion: failure
include:
- "schema/migrations/**"
---
Flag any migration that drops a column, renames a table, or
changes a NOT NULL constraint without a backfill plan in the PR
description.
Advisory style nudge — .macroscope/typescript-style.md (no conclusion: failure):
---
title: TypeScript Style
include:
- "src/**/*.ts"
- "src/**/*.tsx"
exclude:
- "src/**/*.generated.ts"
---
Prefer `const` over `let` unless reassignment is required.
Flag `any` types and suggest concrete alternatives.
Each agent shows up as its own GitHub check run on every PR in its scope. Agents with conclusion: failure integrate into GitHub branch protection — the merge button is blocked until the check passes, exactly like a failing CI job. Agents without that directive are advisory and leave review comments without blocking.
This is the pattern most teams use for compliance, security, and architecture enforcement.
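To see how stacked agents scope themselves to a PR, here is a rough sketch of include/exclude matching using Python's shell-style globbing. Note the hedge: `fnmatch` is only an approximation of gitignore-style globs (it does not treat `**` or `/` specially), so Macroscope's exact glob semantics may differ — the point is the dispatch logic, not the pattern engine.

```python
from fnmatch import fnmatch

# Hypothetical agent configs mirroring the .macroscope/ examples above.
agents = {
    "Security Review":  {"include": ["src/auth/**", "src/payments/**", "src/api/**"],
                         "exclude": ["**/*.test.ts"]},
    "Migration Check":  {"include": ["schema/migrations/**"], "exclude": []},
    "TypeScript Style": {"include": ["src/**/*.ts", "src/**/*.tsx"],
                         "exclude": ["src/**/*.generated.ts"]},
}

def agents_for(changed_files: list[str]) -> list[str]:
    """Which agents would run as check runs for this set of changed files?"""
    triggered = []
    for name, cfg in agents.items():
        def in_scope(path: str) -> bool:
            return (any(fnmatch(path, p) for p in cfg["include"])
                    and not any(fnmatch(path, p) for p in cfg["exclude"]))
        # An agent runs if any changed file falls inside its scope.
        if any(in_scope(f) for f in changed_files):
            triggered.append(name)
    return triggered

print(agents_for(["src/auth/login.ts"]))
# -> ['Security Review', 'TypeScript Style']
print(agents_for(["schema/migrations/0042_add_index.sql"]))
# -> ['Migration Check']
```

Each triggered agent then surfaces as its own check run on the PR, which is what lets branch protection require some agents and treat others as advisory.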
What If You Already Have Another AI Code Reviewer?
If you already have CodeRabbit, Greptile, Cursor BugBot, or another AI code reviewer installed on GitHub, you can run Macroscope alongside it during evaluation. Both tools will review the same PRs and leave comments independently. Side-by-side evaluation on real PRs is the best way to compare review quality for your codebase before making a commitment.
For more on side-by-side comparisons, see Macroscope vs Greptile and CodeRabbit vs Macroscope.
Frequently Asked Questions
How long does it take to set up AI code review on GitHub?
With Macroscope, the GitHub code review setup takes under 5 minutes: install the GitHub App, pick your repositories, activate your subscription, and push a PR. The first AI review starts automatically and completes in 2-3 minutes for most PRs. No YAML, no configuration files, no CLI. Customizing with Check Run Agents (drop .md files into .macroscope/), connecting Slack, and setting spend controls are all optional and take another few minutes each when you want them.
Do I need to change my GitHub workflow to use AI code review?
No. AI code review on GitHub is designed to drop into your existing pull request workflow. You don't need to change branch protection rules, add a new CI step, or rework your review process. The AI reviewer runs on every PR automatically and posts comments the same way a human reviewer would.
What permissions does Macroscope need on my GitHub repo?
Macroscope requests read access to code, pull requests, and webhooks, and write access to post review comments and check run status on PRs. It does not have write access to your source branches and cannot push code (except via Fix It For Me, which pushes to a new branch it creates, never to your existing branches).
Does AI code review work with private GitHub repositories?
Yes. Macroscope supports public, private, and internal GitHub repositories. Code is processed securely and not used for model training. Macroscope is SOC 2 Type II certified and publishes its subprocessors and data handling practices in the trust center.
Can I use AI code review with GitHub Enterprise Server?
Macroscope currently supports GitHub.com. If your organization uses GitHub Enterprise Server (self-hosted GitHub), check the latest integration status on the Macroscope website or contact sales for enterprise deployment options.
How much does AI code review on GitHub cost?
Macroscope uses usage-based pricing: $0.05 per KB reviewed, with a $0.50 floor per review. The historical average is $0.95 per review. New workspaces get $100 in free credits, which is enough for most teams to run AI code review for weeks before paying anything. Spend controls let you cap monthly, per-PR, and per-review costs.
What languages does Macroscope support for GitHub code review?
Macroscope has native support for Go, TypeScript, JavaScript, Python, Java, Kotlin, Swift, Rust, Ruby, Elixir, Vue.js, and Starlark, with AST-based codewalkers for each. Other languages are supported through Macroscope's generic reviewer but without the cross-file reference graph. See the language support matrix in the documentation for the latest list.
Can I customize what the AI code reviewer checks?
Yes. The recommended way is Check Run Agents — individual .md files inside a .macroscope/ directory at your repo root, each with YAML frontmatter defining scope (include / exclude glob patterns) and whether the agent can block merges (conclusion: failure). The body of each file is plain-language instructions that Macroscope's AI interprets and applies. Each agent shows up as its own GitHub check run, so it integrates into GitHub branch protection the same way a CI job does. An older surface called Custom Rules lives in a single macroscope.md file at the repo root — Check Run Agents replaces Custom Rules for new setups.
Does the AI reviewer integrate with GitHub branch protection?
Yes. Macroscope's Check Run Agents appear as individual GitHub check runs on every PR. You can require any Check Run Agent to pass as part of GitHub branch protection rules — the merge button is blocked until the check passes, exactly like a failing CI job.
What happens if the AI code reviewer makes a mistake?
Reply to the review comment with feedback (a thumbs down, a short correction, or an explanation of why the comment is off-target). Macroscope uses that feedback to calibrate future reviews. You can also ignore any comment and merge anyway — the AI reviewer is advisory unless you configure a Check Run Agent to block.
Can I pause AI code review without uninstalling?
Yes. From the Macroscope dashboard, you can pause reviews per-repo or workspace-wide with one click. Reviews resume immediately when un-paused. This is the preferred pattern for teams that want to stop reviews during a migration or incident without losing their workspace configuration.
Is there a way to test AI code review before rolling it out to my whole team?
Yes. Install Macroscope on a scoped set of repositories — one or two, ideally ones where you control the review culture — and evaluate for a sprint. Once the team is comfortable, expand the install to the rest of the org. Most teams run a 1-2 week pilot on a single repo before org-wide rollout. The $100 free credit usually covers the entire pilot.
How does AI code review handle large pull requests?
Macroscope scales to large PRs (700+ KB) with per-PR spend caps to prevent runaway cost. Very large PRs may take 10-15 minutes to review fully. The review focuses on structural and cross-file issues first, then layers bug detection and custom rule enforcement on top. For teams that frequently push large PRs, the per-PR cap is the most important spend control to configure.
What is the easiest way to set up AI code review on GitHub?
The easiest way to set up AI code review on GitHub is with Macroscope: it's a GitHub App, so the install is a browser-only flow with no CLI tools, no YAML to write, and no CI config to edit. The install-to-first-review path is literally: click Get Started → sign in with GitHub → pick repositories → activate your subscription → push a PR. Your first AI code review appears on the PR within 2-3 minutes. Teams that want to customize behavior can add Check Run Agents later, but the zero-config default is enough to get real value on day one.
Do I need to write code to add AI code review to GitHub?
No. Setting up AI code review on GitHub with Macroscope is entirely configuration-free for the out-of-the-box experience — install the GitHub App and you're done. If you want custom rules, Check Run Agents are written in plain markdown with a small bit of YAML frontmatter (no code), so adding enforcement rules is closer to writing a README than writing software. For most teams, the only configuration you write is the text of the rule itself.
How does AI code review on GitHub compare to CodeRabbit or Greptile?
Macroscope, CodeRabbit, and Greptile all run as GitHub Apps and review PRs automatically, but they use different underlying approaches. Macroscope uses AST-based codewalkers and reports 48% bug detection on a 118-bug benchmark; CodeRabbit uses an AI pipeline plus learnings (46% on the same benchmark); Greptile uses agentic codebase search (24% on the same benchmark). For head-to-head comparisons, see CodeRabbit vs Macroscope and Macroscope vs Greptile.
Setting up AI code review on GitHub is the rare engineering tool install that's actually as fast as the marketing page claims. Five minutes to install, 2-3 minutes to first review, and everything past that — custom rules, Slack, integrations, auto-fix — is incremental. Most teams have their first bug caught by Macroscope within the first day.
Ready to get started? Install Macroscope on your GitHub organization — the $100 new-workspace credit covers the first few weeks for most teams, and your first AI code review on GitHub runs on the next PR you open.
