The executive's guide: How engineering teams are balancing AI and human oversight in modern code reviews

July 29, 2025 // 2 min read


Balance AI-powered code review with essential human oversight to enhance the development experience.

Published via GitHub Executive Insights | Authored by Jared Bauer

Code review with fast turnaround times helps developers feel 20% more innovative by enabling them to move to their next idea quickly. And how developers feel directly affects the quality and momentum of their work. Now that AI is authoring more code, how are code review practices adapting in response? To answer this, we've been collecting perspectives from engineering teams across the industry about AI's impact on their workflows. What we've learned might just change your approach to code reviews, too. These emerging patterns highlight both unexpected challenges and promising opportunities that forward-thinking engineering leaders should be evaluating now.

What's new on the horizon

  • AI reviews first: Developers are using AI to review their code before human eyes ever see it (see the automation sketch after this list).

    “If I don’t see that someone else from my company has requested a review from Copilot, then I’m requesting it first. Then I’ll go do some other work, come back to the review, and read through the Copilot comments.”

    - Mikołaj Bogucki, Software Developer

  • "Needle-in-haystack" detection skills for spotting critical issues within large AI-generated changes. Successful teams are developing their intuition regarding where to focus on reviews that have been generated with AI and code segments that deserve extra attention.

  • Developers guard against confirmation bias by recognizing that AI tools won't catch every opportunity for improvement. They acknowledge the subtle psychological trap in which an AI review that returns minimal feedback can be misread as comprehensive validation. Forward-thinking teams deliberately counteract this tendency by maintaining healthy skepticism toward AI reviews and establishing clear expectations about what those reviews can realistically detect.

  • Strategic tool-switching between IDEs (for complex semantic reviews) and browsers (for simpler changes).

    “I will use [the GitHub web UI] for light technical reviews. If I know that I’m going to look through this code and I’m doing a full architectural review, where I want to sit down and make sure that their code makes sense and there aren’t too many impacts upstream and downstream, then I do that in VS Code.”

    - Jack Timmons, Senior Software Engineer
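
For teams that want "AI reviews first" to be the default rather than an individual habit, the request can be automated. Below is a minimal sketch in Python against GitHub's request-reviewers REST endpoint; the reviewer login `copilot-pull-request-reviewer[bot]`, the repository coordinates, and the `GITHUB_TOKEN` environment variable are illustrative assumptions, not details from the teams quoted here.

```python
# Minimal sketch: request a Copilot review on a pull request before any
# human reviewer is assigned, via GitHub's REST API.
import os

import requests

# Hypothetical coordinates; replace with your own.
OWNER, REPO, PR_NUMBER = "my-org", "my-repo", 42

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/requested_reviewers",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    # Assumed reviewer login for Copilot code review; confirm in your org's setup.
    json={"reviewers": ["copilot-pull-request-reviewer[bot]"]},
    timeout=30,
)
resp.raise_for_status()
print(f"Copilot review requested on PR #{PR_NUMBER}")
```

Run from a CI job, a sketch like this keeps the "Copilot first, then a human" ordering consistent across the team.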

What remains bedrock

  • Keeping changes small, so that the ease of writing code with AI doesn’t slow delivery by making pull requests too large for effective review (see the guardrail sketch after this list)

  • Tests as a necessary, but not sufficient, tool for ensuring quality. Engineers continue to play a critical role in testing, ensuring that test coverage remains comprehensive as the codebase evolves.

  • Human experience, oversight, and foresight for logical correctness and consistency with the codebase. This ensures that changes always align with organizational priorities.

    “I tend to think that if an AI agent writes code, it’s on me to clean it up before my name shows up in git blame.”

    - Jon Wiggins, Machine Learning Engineer
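
The "keep changes small" practice above can also be enforced mechanically. Here's a minimal sketch of a CI guardrail in Python that fails when a pull request exceeds a size budget; the 400-line threshold, repository coordinates, and `GITHUB_TOKEN` environment variable are assumptions for illustration, not a GitHub-recommended policy.

```python
# Minimal sketch: fail CI when a pull request is too large for effective
# human review, using the additions/deletions counts on the PR object.
import os
import sys

import requests

OWNER, REPO, PR_NUMBER = "my-org", "my-repo", 42  # hypothetical values
MAX_CHANGED_LINES = 400  # assumed team policy; tune to your review capacity

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()
pr = resp.json()

changed = pr["additions"] + pr["deletions"]
if changed > MAX_CHANGED_LINES:
    sys.exit(f"PR #{PR_NUMBER} changes {changed} lines "
             f"(budget {MAX_CHANGED_LINES}); consider splitting it.")
print(f"PR size OK: {changed} changed lines")
```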

This is just the beginning of what promises to be an evolution in how we ensure code quality. These observations aren't universal yet. Different teams are finding different paths. But these insights affirm our approach at GitHub. We've always believed that AI's greatest value isn't just in generating more code faster, but in elevating the entire development experience to produce better software.


Unlock engineering success with GitHub’s Engineering System Success Playbook (ESSP), our three-step guide to productivity, velocity, and impact.
