AI, power, and why open institutions matter

Software and AI are never neutral; they can quietly lock in existing systems of control and power. To avoid surrendering autonomy to a few proprietary platforms, we need open, community‑governed institutions that keep AI accountable and share its benefits more fairly.

When Ben Werdmuller writes that “one size fits none” and argues we should “let communities build for themselves”, he’s talking about more than product strategy. He’s talking about power.

His post struck a chord with me, because I have been thinking a lot about how software and AI work - and about how, if we’re not deliberate, AI and AI platforms will quietly hard‑code existing structures of control and power into the way our societies operate, in ways that were far less visible with the enterprise software of the past.

Ben’s post traces a line from his work on Elgg - one of the earliest open‑source social networking platforms - to today’s world of large language models and agentic coding assistants. Along the way, he makes a claim that resonates strongly with my own concerns about AI and digital governance: platforms, apps, and models are never neutral. They encode power relations in their defaults, their data, and their design choices.

If we want AI to support more democratic and equitable futures, we can’t just bolt governance on at the edges. We need institutions - and, crucially, open institutions - that help share power, much as trade unions and friendly societies did in the late nineteenth and early twentieth centuries.

Platforms are frozen politics

Ben’s core critique of mainstream platforms is blunt:

Dominant social media and app ecosystems have been built by small, relatively homogeneous teams, mostly in Silicon Valley, who unintentionally hard‑code their own cultural, political, and economic assumptions into the software that billions of people use.

From seemingly mundane choices - what a “friend” is, how identity works, what gets surfaced in feeds - to consequential ones like reporting tools and enforcement, these design decisions reflect a particular worldview. They are opinions about how social life should be organised, expressed as code.

This isn’t an abstract worry. Facebook’s failure to understand and respond to local context in Myanmar, for example, has been widely documented as contributing to the spread of hate speech and incitement against the Rohingya. That’s what it looks like when a one‑size‑fits‑all platform, optimised for engagement and growth, is dropped onto a fragile political context without meaningful local governance or accountability.

The same pattern holds in smaller ways every day: when moderation systems embed majority norms and marginalise minorities; when recommendation engines quietly steer attention and, with it, advertising revenue and political influence; when identity systems force people into categories that don’t fit.

These are questions about power - who sets the rules, who gets heard, who can be excluded - long before they’re questions about “features.”

AI turns the dial up on embedded power

Layer AI into this stack and the stakes rise again.

Ben’s essay notes that large language models and other generative systems are increasingly being used as engines for building new software - from prototypes to production code - through what he and others call “agentic engineering patterns.” The cost of going from idea to working app has dropped dramatically for skilled developers using these tools.

But the models themselves come with their own politics:

  • They’re trained on vast datasets whose composition is opaque, riddled with historical biases and asymmetries of representation.
  • They’re built and controlled by a small number of firms, often closely entangled with governments, militaries, and law enforcement.
  • Their alignment and safety layers encode particular normative choices about what counts as “harmful” or “acceptable” speech and behaviour.

If we simply plug these systems into the next generation of platforms and apps, we risk re‑embedding those power relations even more deeply, in more places, at greater scale.

The ability to generate code quickly doesn’t automatically democratise development if the underlying models, data, and infrastructures are controlled by the same concentrated actors.

When your national AI strategy runs through Silicon Valley

We’re already seeing the geopolitical implications of this concentration of power. For many countries, especially those outside the traditional centres of tech production, access to advanced AI now largely runs through a handful of US‑based corporations that control the leading models, cloud infrastructure, and application ecosystems.

Governments that want to deploy AI in public services, education, health, or defence often find themselves dependent on foreign vendors for both capability and policy direction. That dependence raises uncomfortable questions about sovereignty, strategic autonomy, and the long‑term costs of ceding such a critical layer of digital infrastructure to a small set of firms headquartered in another jurisdiction.

The more core functions of government, industry, and civic life become entangled with these platforms, the harder it becomes to chart an independent course - or to insist on local values and regulations - without risking exclusion from the AI capabilities that everyone else is using.

Open source as an institutional choice, not a licence badge

This is why open source matters so much in the AI era - not as a branding exercise, but as an institutional design decision.

Ben’s own history with Elgg is instructive here. Elgg wasn’t just “some code on GitHub”; it was a consciously open‑source social networking platform that:

  • Was adopted and adapted by universities, NGOs, and independent communities around the world.
  • Allowed local admins and developers to extend or change functionality, instead of waiting for a centralised product roadmap.
  • Created the conditions for community norms and governance structures to be reflected in the software itself, not just in a terms‑of‑service document.

This is what open source does when it’s taken seriously:

  • Transparency: People can inspect what the system does, how it makes decisions, and where their data goes.
  • Forkability: If power is abused, or priorities diverge, the code can be forked and the community can walk away, taking its infrastructure with it.
  • Shared stewardship: Maintenance, improvement, and governance can be distributed across contributors, institutions, and jurisdictions.

Those are institutional properties, not just licensing details. They change the balance of power between users, developers, and owners.

In AI, the same logic applies. Open models, open data governance frameworks, and open‑source tooling can:

  • Give communities genuine leverage over the systems they depend on.
  • Enable independent scrutiny and auditing of risks and biases (a minimal probe is sketched below).
  • Reduce lock‑in and create room for alternatives that reflect different values and power arrangements.

That doesn’t mean everything should be open in a naive sense - there are real concerns around misuse, privacy, and safety. But treating open ecosystems as a pillar of AI governance, rather than an afterthought, is one of the few concrete ways we have to rebalance power.
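
To make the scrutiny point concrete: because open models can be downloaded and probed directly, even a simple bias audit is something a community group can run for itself. Here is a minimal sketch in Python, assuming the open‑source Hugging Face transformers library and a small, openly available sentiment model; the template, the groups, and the model choice are purely illustrative, and serious audits go much deeper.

```python
# A minimal bias probe against an open sentiment model: score identical
# sentences that differ only in a demographic term and compare results.
# Model name and templates are illustrative, not a recommended protocol.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

TEMPLATE = "My neighbour is {} and works as a teacher."
GROUPS = ["British", "Nigerian", "Polish", "Indian"]

for group in GROUPS:
    result = classifier(TEMPLATE.format(group))[0]
    # Large score gaps on otherwise identical sentences are a signal
    # worth investigating, not proof of bias on their own.
    print(f"{group:10s} {result['label']:9s} {result['score']:.3f}")
```

Running the same probe against a closed API is possible only on the vendor’s terms - and inspecting the training data behind it usually isn’t possible at all.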

Trade unions, friendly societies, and the politics of infrastructure

We’ve been here before, in a different guise, as I mentioned in my previous post.

In the late nineteenth and early twentieth centuries, industrial capitalism massively tilted power towards employers, landlords, and capital owners. In response, workers built institutions that changed that balance:

  • Trade unions used collective bargaining, strikes, and political organising to win higher wages, safer workplaces, and legal protections.
  • Friendly societies and mutuals provided sickness benefits, funeral insurance, and basic welfare in the absence of state provision, funded and governed by their members.
  • Co‑operatives created alternative economic spaces with shared ownership and democratic control.

These weren’t “services” handed down from above. They were collective infrastructures built from below that forced power to be shared more widely - in workplaces, in communities, and eventually in law.

They also often relied on open‑ish infrastructures of their time:

  • Public meeting halls, pamphlets, and newspapers.
  • Shared rules and constitutions that could be copied, adapted, and re‑used.
  • Networks of organisers who carried ideas and practices between places.

The point is not to romanticise unions or co‑ops; they were contested, imperfect, and sometimes exclusionary. The point is that they changed the structure of power by building and owning institutions, not just by asking for better behaviour from those already in charge.

AI‑era unions of code?

So what might the AI equivalent of a trade union look like?

Ben’s essay hints at some possibilities when he talks about using agentic coding tools and open protocols to let communities build custom platforms that reflect their own norms and needs. If we take that seriously and add an explicit focus on power, we start to see some institutional patterns:

  • Worker‑governed AI platforms: Instead of a generic corporate productivity suite quietly adding surveillance‑heavy “AI features,” unions or professional associations could sponsor and govern open‑source tools that implement their own rules around monitoring, data retention, and decision‑support. The “agentic” piece is that much of the bespoke functionality can now be built and iterated more quickly with LLM‑assisted development.
  • Community‑controlled recommendation systems: Local media co‑ops, cultural organisations, or municipalities could run their own recommendation engines - using open models they can inspect and adapt - to surface news, events, and resources that serve their public mission, not an ad‑sales target (see the sketch after this list).
  • Civic data and model trusts: Communities could pool data and negotiate jointly with AI developers through data trusts or co‑ops, setting terms for how their data is used to train models and what they get back - whether that’s money, access, or governance rights.
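
To give a flavour of the recommendation idea above: with an open embedding model, a transparent ranking rule fits in a screenful of code that a co‑op’s members could actually read, audit, and amend. Here is a minimal sketch, assuming the open‑source sentence-transformers library; the function names and the scoring rule are mine, purely for illustration.

```python
# Sketch of a community-run recommender: an open embedding model ranks
# local items against a member's recent reading, using a scoring rule
# that is explicit and changeable by the community that governs it.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, openly licensed model

def rank_for_member(recent_reads: list[str], candidates: list[str]) -> list[tuple[str, float]]:
    """Score candidate items by similarity to what a member recently read."""
    read_vecs = model.encode(recent_reads, normalize_embeddings=True)
    cand_vecs = model.encode(candidates, normalize_embeddings=True)
    interest = read_vecs.mean(axis=0)          # the member's mean interest vector
    interest /= np.linalg.norm(interest)
    scores = cand_vecs @ interest              # cosine similarity to each candidate
    return sorted(zip(candidates, scores.tolist()), key=lambda p: p[1], reverse=True)

reads = ["Council approves new cycle lanes", "Local library extends opening hours"]
items = [
    "Volunteer day at the community garden",
    "Town hall meeting on the transport budget",
    "Celebrity gossip roundup",
]
for title, score in rank_for_member(reads, items):
    print(f"{score:.2f}  {title}")
```

The point is not algorithmic sophistication; it is that the objective is explicit, inspectable, and changeable - there is no hidden engagement metric for anyone to optimise behind the community’s back.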

In each case, open source, open protocols, and open governance are not optional extras; they’re the mechanisms that make power‑sharing real rather than symbolic.

Sharing power in the stack

If we accept that power is embedded all the way down - in code, data, interfaces, and infrastructures - then sharing power means working at all of those layers:

  • At the code layer, open‑source implementations and open standards make it possible to contest and replace systems that concentrate power.
  • At the data layer, community ownership, consent frameworks, and data trusts can give people collective leverage over how their information trains and tunes models (a hypothetical consent record is sketched below).
  • At the model layer, diverse, values‑aligned models - including small, community‑specific ones - can reduce reliance on a handful of proprietary systems.
  • At the institutional layer, unions, co‑ops, public bodies, and civil‑society organisations can own and govern the platforms and services built on top of that stack.
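
What might collective leverage at the data layer look like in code? Below is a hypothetical sketch of a machine‑readable consent record a data trust could attach to member‑contributed data. None of this is an existing standard - the field names are invented - and a real trust would need far richer policy, provenance, and enforcement machinery.

```python
# Hypothetical consent record for a data trust: collectively negotiated
# terms become machine-readable policy that a training pipeline must
# check per item before using member-contributed data. Invented schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentRecord:
    contributor_id: str               # pseudonymous member identifier
    allowed_uses: frozenset[str]      # e.g. {"research", "local-model-training"}
    prohibited_uses: frozenset[str]   # e.g. {"ad-targeting", "surveillance"}
    revocable: bool = True            # members can withdraw consent later
    governance_body: str = "trust"    # who arbitrates disputed uses

def may_use(record: ConsentRecord, purpose: str) -> bool:
    """Gatekeeping check a training pipeline would run for each data item."""
    return purpose not in record.prohibited_uses and purpose in record.allowed_uses

record = ConsentRecord(
    contributor_id="member-0042",
    allowed_uses=frozenset({"research", "local-model-training"}),
    prohibited_uses=frozenset({"ad-targeting"}),
)
print(may_use(record, "local-model-training"))  # True
print(may_use(record, "ad-targeting"))          # False
```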

That’s a lot harder than writing another set of AI ethics principles. But history suggests that without this kind of institutional work, power will remain where it is - or become even more concentrated as AI scales.

Building the next generation of power‑sharing institutions

The optimistic reading of Ben Werdmuller’s “one size fits none” argument is that we now have more technical capacity than ever to build pluralistic, community‑defined infrastructures. Agentic coding tools and open social protocols lower the barrier to creating alternatives. The lesson from past struggles is that we also need the institutions - the unions, mutuals, co‑ops, and clubs of the AI era - to ensure those alternatives actually redistribute power.

That’s the work in front of us:

  • To insist that AI and platforms are political, not neutral.
  • To treat open source as a governance choice, not just a development model.
  • To build and back institutions that can negotiate, contest, and, when necessary, walk away.

Trade unions didn’t eliminate exploitation, but they changed the terms of engagement. In the same way, AI‑era institutions rooted in openness and collective governance won’t magically fix power imbalances in the digital economy - but they might give us some leverage.

“One size fits none” is, in that sense, both a design principle and a political project.