What Is PR Cycle Time? How to Measure and Improve Pull Request Speed
PR cycle time measures the total time from a developer's first commit to a pull request being merged. Learn its four phases and how to improve each.
PR cycle time is the total elapsed time from a developer's first commit on a branch to the pull request being merged into the target branch. It captures every phase of the code change lifecycle: writing the code, waiting for a reviewer, going through review, and merging. PR cycle time is the single most actionable metric for understanding how fast your engineering team delivers code changes, because it exposes exactly where time is being spent and where bottlenecks form.
Unlike broader metrics like lead time for changes (which extends to production deployment), PR cycle time focuses specifically on the code review and collaboration process. This makes it directly actionable for engineering teams. You cannot always control deployment pipelines or release schedules, but you can control how fast code moves through review.
Research consistently shows that PR cycle time has an outsized impact on developer productivity and satisfaction. GitHub's analysis of over 10 million pull requests found that PRs with cycle times under 24 hours are 40% less likely to contain defects than those that take over a week. Faster review does not just feel better. It produces better code.
What Are the Four Phases of PR Cycle Time?
PR cycle time breaks down into four distinct phases, each driven by different factors and improved by different strategies.
1. Coding Time
Coding time is the period from a developer's first commit to opening the pull request. This is the time spent writing code, running local tests, and preparing the change for review. It is the phase where the developer is in control.
Coding time is often the least problematic phase, but it can be inflated by unclear requirements, excessive scope, or the developer's hesitation to open a PR until the code is "perfect." Teams that practice draft PRs and early feedback tend to have shorter coding times because developers share work in progress rather than polishing in isolation.
2. Pickup Time
Pickup time is the period from when a pull request is opened (or marked ready for review) to when the first reviewer leaves a comment or starts reviewing. This is pure wait time. The developer has finished their work and is blocked until someone looks at it.
Pickup time is frequently the largest contributor to long cycle times. Analysis across engineering organizations shows that the median pickup time is 8 to 12 hours, but many teams have P90 pickup times exceeding 48 hours. This means one in ten PRs sits for more than two full days before anyone looks at it.
The causes are predictable: reviewer overload (a few senior engineers review most PRs), timezone gaps in distributed teams, and the absence of clear review assignment or SLA expectations.
3. Review Time
Review time is the period from first review activity to the final approval. It includes all review cycles: initial comments, author responses, follow-up reviews, and eventual approval. This phase captures the back-and-forth collaboration between author and reviewer.
Review time is influenced by PR size, code complexity, and the clarity of the change. A well-scoped PR with a clear description and small diff gets approved quickly. A sprawling PR with 800 changed lines and no description triggers multiple rounds of clarification.
4. Merge Time
Merge time is the period from final approval to the PR being merged. In many teams, this phase is near-instant because developers merge immediately after approval. But in teams with merge queues, required CI checks after approval, or manual merge processes, this phase can add hours or days.
Merge time also increases when PRs develop merge conflicts after approval. If main has moved significantly since the PR was approved, the author may need to rebase, re-run CI, and sometimes get re-approval.
What Are Good PR Cycle Time Benchmarks?
Benchmarks vary by team size, codebase complexity, and industry, but data from engineering analytics platforms and published research provides useful reference points.
| Phase | Elite | Good | Needs Improvement |
|---|---|---|---|
| Coding Time | Less than 1 day | 1-3 days | More than 5 days |
| Pickup Time | Less than 2 hours | 2-12 hours | More than 24 hours |
| Review Time | Less than 4 hours | 4-24 hours | More than 48 hours |
| Merge Time | Less than 1 hour | 1-4 hours | More than 24 hours |
| Total Cycle Time | Less than 1 day | 1-3 days | More than 5 days |
Elite teams consistently merge PRs within the same business day they are opened. The key driver is not faster coding. It is dramatically shorter pickup and review times, which come from smaller PRs, clear ownership, and systematic review practices.
The benchmarks also reveal an important pattern: total cycle time is dominated by wait time, not work time. In most teams, 60-70% of cycle time is pickup time plus merge time. Developers are not slow. The process around them is slow.
How Do You Measure PR Cycle Time?
Measuring PR cycle time accurately requires data from your version control system (GitHub, GitLab, or Bitbucket) and a clear definition of each phase boundary.
First commit timestamp marks the start of coding time. This comes from the git history of the PR's branch.
PR opened timestamp marks the end of coding time and the start of pickup time. For teams that use draft PRs, use the "ready for review" timestamp instead of the "opened" timestamp to avoid counting draft time as pickup time.
First review activity timestamp marks the end of pickup time and the start of review time. This is the timestamp of the first comment, review, or approval from a reviewer (not the PR author).
Final approval timestamp marks the end of review time and the start of merge time.
Merge timestamp marks the end of the entire cycle.
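Given the five timestamps above, the four phase durations fall out as simple differences. Here is a minimal Python sketch; the argument names are illustrative, and pulling the timestamps out of your VCS API is left out:

```python
from datetime import datetime

def phase_durations(first_commit, ready_for_review, first_review, approved, merged):
    """Split a PR's lifecycle into the four phases.

    Each argument is a datetime. For non-draft PRs, ready_for_review
    is simply the "opened" timestamp.
    """
    return {
        "coding": ready_for_review - first_commit,
        "pickup": first_review - ready_for_review,
        "review": approved - first_review,
        "merge": merged - approved,
    }

# A hypothetical PR that moves from first commit (9 AM) to merge (4 PM)
# within a single day:
ts = [datetime(2024, 5, 6, h) for h in (9, 11, 13, 15, 16)]
phases = phase_durations(*ts)
# coding 2h, pickup 2h, review 2h, merge 1h
```

Because the phases are defined as adjacent intervals, they always sum to total cycle time, so any growth in the total is attributable to a specific phase.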
Measurement Pitfalls
Counting weekends and after-hours. If a PR is opened at 5 PM Friday and first reviewed at 9 AM Monday, the raw pickup time is 64 hours, but in business hours the wait is effectively zero: review started the moment the workweek resumed. Decide whether you measure calendar time or business hours, and be consistent.
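If you opt for business hours, the overlap calculation is short to sketch. The version below assumes a 9-to-5, Monday-to-Friday workweek and ignores holidays; adjust both to your team's calendar:

```python
from datetime import datetime, timedelta

WORK_START, WORK_END = 9, 17  # assumed 9-to-5 workday

def business_seconds(start, end):
    """Seconds of [start, end] that fall inside business hours
    (Mon-Fri, WORK_START to WORK_END). No holiday calendar."""
    total = 0.0
    day = start.replace(hour=0, minute=0, second=0, microsecond=0)
    while day < end:
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            lo = max(start, day.replace(hour=WORK_START))
            hi = min(end, day.replace(hour=WORK_END))
            if hi > lo:
                total += (hi - lo).total_seconds()
        day += timedelta(days=1)
    return total

# A PR opened Friday 5 PM and first reviewed Monday 10 AM:
opened = datetime(2024, 5, 3, 17)    # Friday
reviewed = datetime(2024, 5, 6, 10)  # Monday
raw_hours = (reviewed - opened).total_seconds() / 3600       # 65.0
effective_hours = business_seconds(opened, reviewed) / 3600  # 1.0
```

The raw and business-hours numbers diverge most for distributed teams, which is exactly where the choice of convention changes the story the metric tells.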
Ignoring draft PRs. If developers open draft PRs for early feedback, counting from "PR opened" inflates coding time and deflates pickup time. Use "marked ready for review" as the boundary.
Averaging instead of using percentiles. A team's average cycle time can look healthy while a significant tail of PRs takes over a week. Track P50 (median), P75, and P90 to understand the full distribution. The P90 often reveals systemic issues that the median hides.
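For a quick look at the distribution without external dependencies, a nearest-rank percentile is enough. The cycle times below are made-up example data:

```python
import math

def nearest_rank(values, p):
    """Nearest-rank percentile: the smallest value with at least p%
    of the sample at or below it."""
    ordered = sorted(values)
    k = math.ceil(p / 100 * len(ordered))
    return ordered[max(k - 1, 0)]

# Hypothetical cycle times (hours) for ten merged PRs:
cycle_hours = [4, 5, 6, 8, 10, 12, 30, 48, 90, 200]
mean = sum(cycle_hours) / len(cycle_hours)  # 41.3
p50 = nearest_rank(cycle_hours, 50)         # 10
p75 = nearest_rank(cycle_hours, 75)         # 48
p90 = nearest_rank(cycle_hours, 90)         # 90
# The mean (~1.7 days) sits in the "good" band and the median looks
# excellent, but the P90 is nearly four days and the slowest PR took
# over a week -- the tail both summary numbers hide.
```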
Not segmenting by PR size. A 10-line bug fix and a 500-line feature refactor have fundamentally different expected cycle times. Benchmark and track cycle time by PR size category (small, medium, large) for meaningful comparisons.
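Segmenting is a one-pass bucket-and-aggregate. The thresholds below (100 and 400 changed lines) are illustrative; pick cutoffs that match your team's PR distribution:

```python
import statistics

def size_bucket(lines_changed):
    """Illustrative size buckets; tune the thresholds to your team."""
    if lines_changed < 100:
        return "small"
    if lines_changed < 400:
        return "medium"
    return "large"

def median_cycle_by_size(prs):
    """prs: iterable of (lines_changed, cycle_hours) pairs."""
    buckets = {}
    for lines, hours in prs:
        buckets.setdefault(size_bucket(lines), []).append(hours)
    return {bucket: statistics.median(hours) for bucket, hours in buckets.items()}

# Made-up sample: small PRs merge in hours, large ones in days.
sample = [(20, 4), (50, 6), (150, 20), (300, 30), (800, 72)]
medians = median_cycle_by_size(sample)
# {'small': 5.0, 'medium': 25.0, 'large': 72}
```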
How Do You Improve Each Phase of PR Cycle Time?
Reducing Coding Time
Keep PRs small. The single most impactful practice is breaking work into small, focused pull requests. Research from Google and Microsoft shows that PRs under 200 lines of code are reviewed 3x faster and have 40% fewer post-merge defects. If a feature requires 1,000 lines of changes, ship it as a series of 5 PRs rather than one monolith.
Use draft PRs for early direction checks. Open a draft PR after the first meaningful commit to get early feedback on approach before investing hours in implementation. This catches "wrong direction" issues early, when they are cheap to fix.
Write clear PR descriptions. A well-written description reduces review time by giving reviewers context upfront, and the discipline of writing it helps the author spot excessive scope before the PR is opened. Include what changed, why it changed, and how to verify it.
Reducing Pickup Time
Set review SLAs. Establish a team norm for maximum pickup time, such as "all PRs get first review within 4 business hours." Without an explicit expectation, reviews get deprioritized indefinitely.
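An SLA only works if breaches are visible. Here is a sketch of the check using illustrative field names rather than any specific API's schema; for simplicity it measures calendar time rather than business hours:

```python
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(hours=4)  # assumed team norm; tune to taste

def overdue_for_review(open_prs, now):
    """Return PR numbers that are past the pickup SLA.

    open_prs: dicts with 'number', 'ready_at' (datetime), and
    'first_review_at' (datetime or None) -- illustrative field names
    to be mapped from whatever your VCS API returns.
    """
    return [
        pr["number"]
        for pr in open_prs
        if pr["first_review_at"] is None and now - pr["ready_at"] > REVIEW_SLA
    ]

now = datetime(2024, 5, 6, 15)
prs = [
    {"number": 101, "ready_at": datetime(2024, 5, 6, 9), "first_review_at": None},
    {"number": 102, "ready_at": datetime(2024, 5, 6, 13), "first_review_at": None},
    {"number": 103, "ready_at": datetime(2024, 5, 6, 8),
     "first_review_at": datetime(2024, 5, 6, 10)},
]
overdue = overdue_for_review(prs, now)  # [101]
```

Posting the result to a team channel once or twice a day is usually enough to keep the SLA honest.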
Distribute review load. If two engineers review 70% of PRs, your team has a structural bottleneck. Use CODEOWNERS files and automated review assignment (round-robin or load-balanced) to spread reviews across the team. Tools like GitHub's auto-assign and review routing features make this systematic.
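Load-balanced routing like GitHub's can be approximated in a few lines if you run your own assignment bot. A sketch, assuming you already know each candidate's pending review count:

```python
def pick_reviewer(candidates, pending_reviews, author):
    """Load-balanced assignment: choose the eligible reviewer with the
    fewest pending review requests. pending_reviews maps reviewer -> count."""
    eligible = [r for r in candidates if r != author]
    return min(eligible, key=lambda r: pending_reviews.get(r, 0))

team = ["alice", "bob", "carol"]
load = {"alice": 3, "bob": 1, "carol": 2}
reviewer = pick_reviewer(team, load, author="bob")  # "carol"
```

Round-robin is even simpler (rotate through the list), but load balancing adapts better when one reviewer is out or swamped.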
Create review time blocks. Encourage the team to check for pending reviews at specific times, such as morning and after lunch. This creates a predictable cadence without requiring constant context switching.
Automate the first pass. AI code review tools can provide immediate feedback on a PR within minutes of opening. This does not replace human pickup, but it gives the author useful feedback during what would otherwise be dead wait time. Platforms like Macroscope provide automated first-pass review that catches issues before human reviewers engage, effectively reducing the impact of pickup delays.
Reducing Review Time
Review small PRs quickly. Small PRs (under 100 lines) should take a single review pass. If a small PR still requires multiple review rounds, the problem is usually not scope but misaligned expectations between author and reviewer, and that is better resolved in a quick conversation than through more review cycles.
Use inline suggestions. Instead of leaving a comment that says "this variable name should be more descriptive," leave a GitHub suggestion with the actual renamed variable. The author can accept it with one click rather than making a new commit.
Resolve discussions promptly. Open review threads that go unresolved for days are a common source of review time inflation. Authors should respond to all review comments within the same day. Reviewers should re-review promptly after the author addresses feedback.
Reducing Merge Time
Enable auto-merge. GitHub, GitLab, and Bitbucket all support auto-merge, which merges the PR automatically when all required checks pass after approval. This eliminates the delay between approval and someone remembering to click the merge button.
Use merge queues. For repositories with high commit velocity, merge queues (GitHub Merge Queue, Mergify, or Bors) test PRs against the latest main branch before merging, preventing broken builds without requiring manual rebasing.
Minimize required checks after approval. If your CI pipeline takes 45 minutes and must re-run after every rebase, merge time will always include at least one CI cycle. Optimize CI speed, or use merge queues that batch CI runs.
How Does PR Cycle Time Relate to Other Engineering Metrics?
PR cycle time is one component of the broader DORA metric "lead time for changes," which measures from first commit to production deployment. PR cycle time captures the code review portion. Deployment pipeline time captures the rest.
PR cycle time also correlates with several other important metrics:
Deployment frequency. Teams with fast cycle times deploy more often because code moves through the pipeline faster. Reducing cycle time from 5 days to 1 day can increase deployment frequency by 3-5x without any changes to deployment infrastructure.
Developer satisfaction. Multiple studies, including GitHub's SPACE framework research, show that slow code review is one of the top frustrations for software engineers. Improving cycle time directly improves how developers feel about their work.
Defect rate. Counterintuitively, faster cycle times correlate with fewer defects, not more. The mechanism is that faster review means smaller PRs, fresher reviewer context, and less time for the code to diverge from the rapidly evolving main branch.
Engineering intelligence platforms like Macroscope track PR cycle time alongside these related metrics, making it possible to see how improvements in review speed ripple through to deployment frequency, team throughput, and code quality. This connected view is what turns metrics from dashboard decoration into actionable intelligence.
Frequently Asked Questions
What is a good PR cycle time for a startup versus an enterprise?
Startups typically achieve cycle times of 4 to 12 hours because teams are small, communication is fast, and process overhead is low. Enterprise teams typically see cycle times of 2 to 5 days due to compliance requirements, larger teams with specialized reviewers, and more complex CI/CD pipelines. Both can improve, but the target benchmarks differ.
Should we count bot-generated PRs in our cycle time metrics?
No. Dependabot updates, automated formatting fixes, and other bot-generated PRs have fundamentally different characteristics than human-authored PRs. Including them distorts your metrics. Track them separately if you want to monitor bot PR efficiency, but exclude them from team cycle time analysis.
How does PR size affect cycle time?
PR size is the strongest predictor of cycle time. Data across thousands of repositories shows a roughly exponential relationship: a 100-line PR takes an average of 4 hours to merge, while a 500-line PR takes an average of 3 days. Beyond 1,000 lines, PRs are frequently abandoned rather than merged. The most effective way to reduce cycle time is to reduce PR size.
Is it better to optimize for cycle time or review thoroughness?
This is a false trade-off. Research from Google's engineering practices team shows that faster reviews are also more thorough, because reviewers engage more carefully with small, focused PRs than with large, overwhelming ones. A reviewer who spends 20 minutes on a 100-line PR catches more issues per line than a reviewer who spends 2 hours on a 1,000-line PR.
How do timezone differences affect PR cycle time?
Timezone gaps directly inflate pickup time. A PR opened at end-of-day in one timezone may not get reviewed until start-of-day in another, adding 12 to 16 hours of wait time. Strategies to mitigate this include overlapping review windows, automated first-pass review to provide immediate feedback, and structuring teams so that most PRs can be reviewed within a single timezone.
