Do my values need to change at the speed of AI?
I fully admit that I went into DrupalCon Chicago suspicious of AI. Not exactly “curious-suspicious.” It was more like arms-crossed, waiting-to-be-convinced suspicious. I’d been following the issue queue debates and noticing the tension between people who wanted more LLM involvement in our community and people who wanted less. I held some assumptions that led to discomfort, and I resisted participating in the community. I almost didn’t go this year, for the first time since 2009.
But at the last moment, I found a way to make it work, and I went to DrupalCon in spite of my skepticism. Over the course of the week, talking to people, sitting in sessions, and watching how the community was working through this, I started to notice something. Many people in the community doing thoughtful work with AI weren’t necessarily the ones who had picked a “side.” They were the ones drawing on something the Drupal community built years ago: our Values and Principles. Those conversations began to change my assumptions.
When the Conversation Gets Heated
There seems to be real tension in the Drupal community around AI right now. Proposals to ban AI-generated contributions from Drupal core, and related discussions about where to draw the line, have stirred some strong feelings. Dries wrote honestly about the bind he feels in “AI creates asymmetric pressure on Open Source,” naming both the real burden on maintainers and the risk of falling behind. He described feeling caught between two truths: maintainers hold everything together, and the people who depend on Drupal watch other platforms accelerate.
I understand why some of the debates can get heated. But I notice what happens when the conversation gets framed as a choice between two sides: people dig in and defend positions. The conversation gets smaller. After all, we do have to make specific choices, like whether or not to include AGENTS.md files in core.
I’m sorry to all of you who feel like your opinions are not welcome in our community. I serve on the Community Working Group as part of the Community Health Team, which focuses on the proactive side of community well-being. I’ve watched how quickly debates about tools and policies can turn into debates about people. Once that flip happens, finding common ground gets a lot harder. It is worrisome when AI conversations in the Drupal community start sliding in that direction, to the point where we have to reach for de-escalation techniques like “nudges.”
There’s a phrase I often recall when I catch myself wanting to push someone (or something) away: “don’t throw someone out of your heart.” It doesn’t mean you agree with everything. It means you keep the door open while you figure things out together. A community that sorts itself into opposing camps before consulting its own shared principles is skipping a step.
A Green Web Foundation piece that Mike Gifford shared in Drupal Slack over the weekend resonated with me. That article maps out four positions on AI: adoption, hype, refusal, resistance. The part that stuck with me was the invitation to move between them. Most AI discussions push you to pick a side and stay put. This piece encouraged readers to keep learning and to let their positions shift as necessary. That’s closer to how I think when I’m not under pressure to perform certainty. I suspect the same goes for most of us.
We Already Have What We Need
Here’s what struck me at DrupalCon: I didn’t hear people specifically naming our Values and Principles in the AI conversations. People talked about policy, tools, what to ban, and what to allow. But the framework we already built together, the one Dries maintains with community input, barely came up (at least in my conversations). When I went back and read through those values, I realized they already speak to the questions AI raises. Perhaps we just need to talk about them more.
“Impact gives us purpose” says we should build software that everyone can use and think beyond our own needs. I wrote about impact and generosity in the Drupal community back in 2020, and the principle holds up. So when we evaluate AI tools, the question becomes: does this widen access or narrow it? Drupal as a content management system, meeting large language models as content creation systems, opens real possibilities for the people who use what we build. But only if we stay focused on impact for others, not just convenience for ourselves.
“We foster a learning environment” and “prefer collaborative decision-making” describe how we should approach this: not as a top-down mandate, but through the messy, slow process of working it out together. That’s harder than picking a side. It’s also more honest.
And “Change is constant” reminds us we’ve absorbed big shifts before. Every code change that goes into Drupal core passes through the Core Gates. We have a long tradition of accepting change carefully, with review, with process, with room for disagreement. AI doesn’t require us to abandon our processes. Rather, I think AI should be understood as a reminder to trust our processes.
The value I keep coming back to, though, is “Every person is welcome, every behavior is not.” That distinction between people and behaviors gives us something practical to hold onto when the AI debates get heated. We can welcome contributors who use AI tools while still setting expectations about quality, attribution, and care. We don’t have to choose between openness and standards. (This may sound obvious. But go read some of the issue queue threads, and you’ll see how easy it is to lose that distinction in practice.)
The AI Initiative as a Model
When the Drupal community launched the AI Initiative in June 2025, it described four principles: AI-Human Partnership, Comprehensive Trust Infrastructure, True Freedom of Choice, and Community-Driven Innovation. Read those alongside the community’s Values and Principles and you can see the DNA. The AI Initiative didn’t need to invent a new ethical framework. It drew from one that already existed.
“Come for the code, stay for the community” is a phrase we say all the time. From my perspective, the AI Initiative lives it. It has 30+ organizations and over fifty sponsored contributors working across time zones. It has people from different Drupal agencies working together, showing up at conferences around the world wearing their Drupal shirts, not their agency logos. I don’t sense that these people do this because someone told them to pick a side; rather, they found common ground in how the community already works. That’s just the Drupal community doing what the Drupal community does so well: running a new technology through shared governance, the same way we’d handle any other contribution. Honestly, I am getting goose bumps just writing those words. I am proud of the Drupal community.
At Lullabot, where I work, we do something similar, but on a smaller scale. We’re employee-owned, so decisions about how we adopt AI go through real conversations, not executive decrees. We have company values like “Inspire & Empower” and engineering values that include “People matter more” and “Cultivate inclusivity.” I won’t pretend we’ve figured it all out. But the values keep the conversation from drifting into panic or hype. In fact, when Matt Westgate and Jeff Robbins started Lullabot in 2006, they modeled it after the Drupal community.
So it’s worth keeping front of mind that our actions in the Drupal community, not just the code we produce, radiate out into the world.
You Don’t Have to Pick a Side
I juggle values from a lot of different groups: the Drupal community, Lullabot, the Community Working Group, the Twin Cities Drupal user groups, various yoga and meditation groups, activist groups, my family, and countless others. You probably carry a similar mix, and you’ve probably noticed the same thing I have, which is that the principles across all groups tend to overlap. Common patterns I’ve noticed include: don’t cause harm, don’t take what isn’t given, and don’t deceive. Most people hold some version of these whether they’ve formalized them or not.
AI doesn’t change what matters. It changes what’s possible. The values help you sort through what’s possible and choose well. The four freedoms of free software still apply. The commitment to building something open, something that serves more people than just the ones who built it, still applies.
The pressure to declare (or conceal) a fixed position on AI was making me less thoughtful, not more. What I found at DrupalCon was something better than a position. The community reminded me that Drupal’s Values and Principles give me a way to keep deciding, situation by situation, without losing my footing. That is what finally made AI feel like something I could work with instead of something I had to fight or surrender to. I could start to envision a future where our AI tools aligned more closely with my values. I wish I’d gotten there sooner, but I suppose that’s how learning works.
Putting Values into Practice
Let’s consider a real-life example of what it might look like in practice to carry out Dries’s suggestion to “never submit code you don’t understand.”
Imagine a merge request shows up in an issue you’re following. The code looks competent, if generic. The text in the comment reads like it could have come from an LLM. Your gut says “slop,” but the values suggest a different first move. Instead of writing “this looks AI-generated,” you could open with a question about the contributor’s understanding of the problem. Ask what they tried before this approach. Engage with the person first. If the code has real problems, those problems exist regardless of how it was written, and you can address them on their merits. If the contributor can’t explain their own merge request, that tells you something too, but you’ve given them the chance to show up as a person rather than being tagged as a tool.
That’s “Every person is welcome, every behavior is not” in practice. Many of you already do this. You hold the standard (the code needs to be good, the contributor needs to understand it) without throwing the person out of your heart before you’ve even talked to them.
Making Room for All People
A Beloved Community doesn’t require perfect agreement. It requires the practice of making room for one another. The people showing up with concerns about AI-generated code deserve a seat at the table, and so do the people experimenting with these tools to solve problems they couldn’t solve before. Dries framed this tension in a helpful way: protecting maintainers and accelerating innovation shouldn’t be opposites. The values help us hold both.
I encourage you to go (re-)read the Values and Principles page before your next AI conversation. Bring those with you instead of a fixed position, and see what happens.