Getting started with AI coding tools on a Drupal project
AI coding tools range from simple chat assistants you can use in a browser right now, to agents that run directly in your codebase and make file changes autonomously. This page walks through going from "I have never used any of this" to "I have done something genuinely useful" — in order of increasing complexity and trust required.
You will need a coding agent installed and running in your project root to complete the tasks. If you are new to AI and just want to get a feel for what these tools are like first, a chat assistant like Claude.ai or ChatGPT is a good way to experiment — you can ask general Drupal questions, paste in a small snippet and ask it to explain or improve it, and get a sense of how the tools respond before committing to a full setup.
When you are ready to make the jump to coding agents, Claude Code is a widely used option with good Drupal community adoption.
Prefer open source and local tools? Ollama is an open source tool that lets you run AI models entirely on your own hardware — no data leaves your machine and no API keys or subscriptions are required. Open WebUI sits on top of Ollama and gives you a self-hosted chat interface that works in a browser, similar to ChatGPT. There is even a DDEV plugin if that is your local development environment. For a coding agent, Aider is the most established open source option and also works with Ollama. Output quality is currently lower than cloud-hosted frontier models, but the gap is narrowing.
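If you want to try the local route, a first smoke test can look something like the following sketch. The model name `qwen2.5-coder` is just an example from the Ollama model library, not a recommendation, and the prompt is arbitrary:

```shell
# Sketch of a first local-model session with Ollama.
# Assumes nothing beyond a shell; the model name is an example.
if command -v ollama >/dev/null 2>&1; then
  # Download an example code-oriented model and ask it a throwaway question.
  ollama pull qwen2.5-coder
  ollama run qwen2.5-coder "Explain what a Drupal service is in two sentences."
else
  echo "Ollama not installed; see https://ollama.com for install instructions."
fi
```

Once that works, pointing Aider or Open WebUI at the same Ollama instance gives you a fully local setup.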
The following tasks all assume you are using a coding agent like Claude Code, which has a greater ability to understand your specific codebase than a chat UI because it can examine the project's files for context.
Task 1: Build trust — fix a bug you already fixed
Before you rely on an AI coding tool for anything important, you need to know how well it understands your codebase. The fastest way to calibrate that is to give it a problem you already know the answer to.
Find a bug you fixed recently — something small and self-contained, ideally in the last few days while it is fresh in your mind. Revert your fix, then ask the agent to fix it:
```
I have a bug where [describe the bug and its symptoms].
Can you identify the cause and fix it?
```

Then compare what the agent produces against what you did. It does not need to match exactly — there is often more than one valid fix — but you should be able to answer these questions:
- Did it correctly identify the cause?
- Is the fix it produced safe and correct?
- Did it follow Drupal coding standards?
- Did it miss anything you had to handle?
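If the fix is already merged, plain git can stage the experiment. This self-contained demo builds a throwaway repository, commits a "bug" and its fix, then reverts the fix in the working tree without committing, which is the same move you would make on your real project (file names and commit messages here are illustrative):

```shell
# Self-contained demo of the revert-and-compare setup from Task 1.
# In a real project you would revert your actual fix commit instead
# of building this throwaway repository.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "buggy behaviour" > example.txt
git add example.txt
git commit -qm "Add feature (with bug)"

echo "fixed behaviour" > example.txt
git commit -qam "Fix the bug"

# Put the bug back in the working tree without committing,
# so the agent has something to rediscover.
git revert --no-commit HEAD
cat example.txt            # prints: buggy behaviour

# After comparing the agent's fix with yours, restore your fix:
git reset --hard -q HEAD
cat example.txt            # prints: fixed behaviour
```

On a real project, substitute your fix's commit hash for `HEAD` and skip the throwaway-repo setup.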
This is not about whether the AI "passed" — it is about building a calibrated picture of what this tool is actually good at on your specific codebase, before you start relying on it for things you cannot verify as easily.
Task 2: Understand unfamiliar code
Once you have a sense of how the tool performs on code you know well, you can start using it as a reading aid on code you do not.
Find a module or service in your codebase that you have never had reason to look at closely — something a previous developer wrote, or a contrib module your project depends on. Ask the agent to explain it:
```
Can you explain what this module does, how it works, and what
other parts of the codebase depend on it?
```

A good agent will give you a clear summary of the module's purpose, walk through the key classes and methods, and flag anything unusual or worth knowing. Because you built some trust in Task 1, you now have a sense of when to take the explanation at face value and when to verify it yourself.
This is one of the most reliable use cases for AI coding tools — reading and summarizing existing code rather than generating new code. It is hard for the agent to hallucinate something dangerous here, and the value is immediate.
Task 3: Generate something tedious
Now that you are comfortable reading and evaluating agent output, it is time to have it write something from scratch — something you know how to write yourself, but find tedious.
Good candidates for Drupal developers:
- A `hook_update_N()` config update hook
- A migration for a known data structure
- A config schema file for an existing module
- A PHPUnit kernel test for a service you just wrote
- Routing, controller, and service boilerplate for a new feature
Pick something where you could write it yourself if you had to — that way you can review the output with confidence. Give the agent enough context:
```
I need a hook_update_N() that migrates all existing values
of the field [field_name] on [content_type] nodes from
[old format] to [new format]. The site is running Drupal 10.3.
```

Review the output carefully before using it. Check that it follows Drupal coding standards, uses the right APIs, and handles edge cases. But notice how much faster you got to a reviewable first draft than if you had written it from scratch.
Where to go next
You have now used an AI coding tool for three genuinely useful Drupal tasks without any special setup. If you want to go further:
- AI tools for Drupal contributors — if you contribute to Drupal core or contrib, there is a whole set of ways AI can make that workflow faster and less painful, with guidance on using these tools in ways that respect maintainers' time
- Setting up AI tools for a Drupal project — how to give your agent persistent Drupal-specific context so it produces better output from the start of every session
- AI tools and projects in the Drupal ecosystem — what the community has built specifically for Drupal development
- Security and contrib considerations — what to look for when reviewing AI-generated code before it goes anywhere near production