Caring about Software, Caring about People

A white masc person holding a microphone stands in front of a large projector screen. The screen says: “Minnebar 2026. What tech lessons should we learn from the ICE invasion? Paul Cantrell. Web: innig.net. Mastodon: @inthehands.”

What if caring about software is one way to care about people?

The most thought-provoking talk I attended at Minnebar on Saturday was Paul Cantrell’s “What tech lessons should we learn from the ICE invasion?” I shared his opening slide on Mastodon, and the response suggests the message resonated with more people than just me.

Paul teaches computer science at Macalester College (it looks like Macalester professors go by their first name). He spoke from his experiences in South Minneapolis working against a state actor using pervasive surveillance. He reminded us of a difficult truth: no one is coming to save us.

As people in our state faced kidnapping, detention, and state violence, the stakes changed. No clever app or technical platform could fix the situation we faced. I’m familiar with the refrain, “Let’s solve this with technology.” I’ve seen first-hand how it can become a way to avoid the work right in front of us.

Paul’s talk included multiple warnings about technological solutionism. He said that if we ask “Can we?” the answer is often “yes.” If we ask “Should we?” we get closer to the problem. He warned against caring about tech, then hunting for problems it can solve. Instead, he told us to care about people and try to understand what they need.

His warnings resonated with me. I also want to dig a little deeper into the tensions he highlighted.

Photo of a presentation slide contrasting “Caring about tech and trying to find problems it can solve” (over a red X) with “Caring about people and trying to understand what they need” (over a green heart), with a speaker holding a microphone at bottom right.

Start with values, not tools

I’ve been thinking about Paul’s talk through the values that shape my own work. For instance, whether I am building websites or teaching yoga, I consider concepts like non-harming, not stealing, simplicity, and contentment. For most of us, this is just basic decency.

To consider non-harming is to ask whether a tool puts people at risk. Not stealing informs my choices about how or if it makes sense to collect data that people did not freely offer. The principle of simplicity guides decisions about whether that extra feature helps someone, or whether it gives us the pleasure of feeling clever. Thoughts of contentment can slow down my eagerness to add more features.

When I start with values, I do not stop caring about technology. I care about it in a different way. I care less about novelty and more about defaults. Less about features and more about consent. Less about scale and more about whether people can trust the tool when the stakes are high.

This is part of why free software matters to me.

The problem with being the hero

Paul called Signal the “hero” of the response, and then went on to describe its many limitations. Signal was both essential and troublesome, due to its 1,000-person group limit, usability problems, and design choices that frustrated people active in the resistance.

Still, Signal helped because it gave people a flexible, encrypted way to communicate. He mentioned how CryptPad and Proton Docs helped for similar reasons. None of these tools solved the root problem. Rather, they helped people solve problems together.

As Evgeny Morozov put it in To Save Everything, Click Here: “Solutionism presumes rather than investigates the problems that it is trying to solve.” Both individuals and free software communities can fall into the same trap. They can fall in love with a tool, a process, a platform, a metric, and then forget to ask whether they have understood the need.

I see this in myself too. In my trauma-informed yoga classes, I try not to tell people what to do. I offer options and let people find their own way. So as Paul offered us a suite of “lessons,” I noticed the form as much as the content. Are we telling people what to do, or are we helping them discover what works for themselves?

These questions are not limited to engineers. We all face them, in small ways, every day.

Paul ended with a lesson that felt less like a command and more like a practice: “Don’t chase the feeling of being the hero who solved the problem. Just show up and do the work.”

I like that, and it reflects my lived experiences working professionally in tech since the late 1990s.

Presentation slide titled “Lesson” reading: “How does tech solve problems? Mostly, it doesn’t. People are amazing.” with a speaker holding a microphone in the lower right.

Sometimes software does help

If technology never helped, my work would make little sense, and agencies like Lullabot (where I have worked for the past 12 years) would not exist. Public-interest free software projects would not exist. There would be no Drupal.

The problem is not that technology helps. The problem comes when technology gets to define the problem.

For instance, as lead architect of the team that recently rebuilt the Maryland.gov platform, I was responsible for the technical choices. But the work didn’t start with tech. It started with people. We used surveys, forms, and conversations with residents to understand what was not working. We combed through lots of data the state provided us. We interviewed the state employees who were our stakeholders. Their dedication to helping the people of Maryland inspired me.

This certainly was not the same situation as the ICE invasion here in Minnesota. But in Maryland, the people who were suffering similarly guided our choices. People were losing jobs and looking for unemployment information. People needed SNAP benefits and were frustrated with the website. State employees were serving residents through systems that made the work harder than it needed to be. The pressure came from government systems too, including federal systems causing massive disruption in people’s lives around the nation’s capital.

Software helped the situation because it came after listening. It helped because the work started with what people needed: clearer paths, plain language, usable services, and sites people could reach under stress. The recent Maryland Digital Service Impact Report tells part of that story.

I would not call that tech solutionism. I would call it public service with software in the mix.

Free software is not magic

After Paul’s talk, I told him about my years of free software advocacy. He seemed skeptical of my enthusiasm for free software, and pushed back. He was right to do so. Free software will not save everything.

A bad tool with an open-source license can still cause harm. A free software project can still ignore the people who use it, and a community can still become careless with power. A license does not do the ethical work for us.

Still, free software gives communities more room to practice their values. We can inspect it, share it, repair it, host it, leave it, and build around it without asking permission from a company whose business model depends on tracking us.

The importance of these freedoms gets amplified when the threat is pervasive surveillance. After all, the most pervasive surveillance infrastructure probably runs on free software, and the default business model on the web treats human attention and behavior as raw material. But that doesn’t change the fact that people trying to protect their neighbors from being kidnapped or killed need tools they can trust without becoming products.

Some people might genuinely not care about being tracked, but most people tolerate being tracked more than they accept it. They put up with surveillance. They want safety. They want control over what they share and with whom. Free software does not guarantee any of that, but it can give us a better starting point.

Show up and do the work

I care about Drupal. I care about the Drupal community. If no one cared about the software, we would have worse software. Worse defaults. Worse accessibility, privacy, and public infrastructure.

Caring about software can be a political act. Governments, schools, nonprofits, mutual aid groups, and public-interest organizations need tools that do not depend on surveillance capitalism. They need software shaped by people who care about accessibility, translation, editorial workflows, security, privacy, and long-term maintenance.

As Paul said, defaults rule the world. Defaults are not neutral. They express values, whether we admit it or not. Free software lets us argue about those values in public. It lets us change the defaults. It lets us ask not only “Can we build this?” but “Who does this serve?” and “Who could this hurt?”

That kind of work rarely feels heroic. Much of it can be boring and repetitive. Someone fixes a bug. Someone improves a form label. Someone says no to a tracking pixel. Someone else asks whether we need a new database column.

Paul offered a list of these unglamorous practices: use Signal, learn how end-to-end encryption works, read the EFF’s Surveillance Self-Defense guide, don’t collect data just because you can, don’t add the column unless someone can explain the need and the risk, choose tools people can leave, choose tools communities can repair.

And yes, when it fits the need, choose free/libre and open-source software. Not because it will save us, not because it makes us pure, and not because it solves the problem by itself.

What would it look like to choose tools that reflect your values? Free software is one place to start. Try it and see what you notice.

Then show up and do the work.
