Part II: Building Technologies of Liberation
From Resistance to Renaissance: How Communities Are Recoding Their Digital Destiny
Editor’s Note: Reclaiming the Future
Last week, we traced how historical exclusion has been encoded into our digital systems. From AI-driven lending discrimination to predictive policing, we saw how today’s technologies don’t just reflect inequality—they accelerate it.
But recognizing injustice is only the first step. Just as The Year of Reset made clear, true transformation requires both understanding and action, and technological transformation demands the same. Resetting our relationship with technology means moving beyond critique toward building something new.
Part 2, “Building Technologies of Liberation,” shifts the focus from analysis to action. Across the world, communities are not just resisting harmful technologies but actively designing alternatives. From Indigenous data sovereignty initiatives to platform cooperatives reclaiming economic power, from open-source environmental justice tools to community-controlled internet infrastructure—liberatory technology is already being built.
Technology has never been neutral. It has always reflected the values of those who create and control it. The question before us is clear: Will we continue to encode systems of exclusion, or will we reimagine technology as a tool for liberation?
🚀 As you read Part 2, consider:
What does truly liberatory technology look like?
How can we ensure that the tools we build serve justice, not control?
What role do technologists, policymakers, and everyday people play in shaping this future?
📚 If you missed Part 1, you can read it here.
The future isn’t written in code—it’s written by us. Let’s build.
From redlined neighborhoods to biased algorithms, from segregated schools to digital tracking, from slave patrols to predictive policing—we’ve traced how technology amplifies historical patterns of exclusion. These systems share more than just their outcomes; they share a logic of control that has evolved from explicit to encoded, from visible to invisible, from human decision to algorithmic prediction. At each stage, claims of objectivity mask the same fundamental project: containing Black mobility and perpetuating racial hierarchies.
But in mapping these patterns, we’ve also discovered possibilities for intervention. The very technologies that hide systemic racism behind mathematical models also make bias more traceable, more challengeable, more susceptible to collective resistance. Communities are already forcing transparency in predictive systems, demanding control over surveillance technologies, and building alternative models prioritizing liberation over control.
Part 2 examines how to dismantle these digital architectures of oppression and how to build technologies of liberation in their place. Through case studies of successful resistance and innovation, we explore how learning engineers, technologists, and communities are already developing new approaches: ethical frameworks that center justice over efficiency, design practices that amplify rather than suppress community knowledge, and data systems that serve collective liberation rather than control. These aren’t just theoretical possibilities. They are emerging practices that point toward a different technological future.
Resistance and Reimagining: From Bans to Alternatives
The movement to transform technology from a tool of oppression into an instrument of liberation isn’t theoretical—it’s already happening. Across the globe, communities and technologists are resisting harmful systems and building new ones that serve justice rather than control.
Consider facial recognition technology. When San Francisco banned government use of facial recognition in 2019, it wasn’t just rejecting a problematic tool; it was asserting community control over surveillance infrastructure. This victory inspired similar bans in Boston, Portland, and other cities across the United States. More importantly, it demonstrated that technological “progress” isn’t inevitable; communities can decide which technologies serve them and which don’t.
But resistance goes beyond bans. In Detroit, the Our Data Bodies project helps residents understand, challenge, and reshape how their personal information is collected and used. By conducting “data justice audits” and developing community-centered privacy guidelines, they create models for how technology can protect rather than surveil marginalized communities. Similar initiatives in London’s Brixton neighborhood and Rio de Janeiro’s favelas are developing local alternatives to top-down “smart city” surveillance.
Learning engineers are also reimagining their role. The Algorithmic Justice League combines technical audits with advocacy and art, exposing bias in AI systems while showcasing alternative approaches. Their work on facial recognition bias led major tech companies to pause or abandon their systems—proving that technical expertise can serve accountability rather than automation. Meanwhile, the Design Justice Network is developing new principles for technology design that center the expertise of marginalized communities rather than treating them as subjects to be studied or problems to be solved.
These resistance movements extend beyond institutional challenges to reimagine the very foundations of technological power. Indigenous data sovereignty initiatives across Australia, New Zealand, and North America assert control over how their communities’ information is collected, stored, and used. The CARE Principles for Indigenous Data Governance (Collective benefit, Authority to control, Responsibility, Ethics) offer an alternative to Western data practices that have historically extracted knowledge from communities without benefit or consent.
Worker-led movements are similarly challenging surveillance from below. Amazon warehouse workers have organized to expose and resist productivity tracking algorithms that enforce dangerous speeds and deny basic human needs. Gig workers across Brazil, India, and South Africa are building cooperative platforms owned by drivers rather than corporations. These efforts demonstrate that those most impacted by technological control can lead in developing alternatives.
Meanwhile, community technologists are creating open-source tools that serve liberation rather than surveillance. From encrypted communication platforms developed by and for activists to community-owned internet infrastructure in Detroit and Harlem, these projects prove that technology can be governed by and for the communities it serves. The Movement for Black Lives’ technology working groups are developing security tools designed to protect organizers and communities from state surveillance.
These diverse approaches—bans, audits, alternative design principles, data sovereignty, worker resistance, and community-owned infrastructure—share a common thread: they shift power over technology from corporations and state institutions to the communities most impacted by technological harm. They demonstrate that the question isn’t just whether technology is biased but who controls it, who designs it, and whose interests it serves.
Beyond Bias Audits: Transforming Tech Governance
Corporate promises to “audit for bias” or “enhance diversity” in AI systems miss the fundamental issue: the problem isn’t just flawed algorithms—it’s who controls technology in the first place. While bias audits might catch discriminatory patterns in facial recognition or lending algorithms, they leave the same power structures that produced those systems intact. Real transformation requires shifting governance itself—from corporate control to community power. This means redefining who makes technology decisions, whose knowledge shapes its design, and whose interests it serves.
This shift is already happening. Indigenous data sovereignty movements aren’t just demanding better privacy protections—they’re asserting fundamental rights over how their communities’ information is collected, used, and shared. The CARE Principles don’t simply add “cultural considerations” to Western data models—they challenge the entire paradigm of data ownership. Instead of treating data as a resource to be extracted, they propose alternative models based on collective rights, stewardship, and ethical responsibility.
Similarly, grassroots initiatives are moving beyond demanding “algorithmic accountability” to building new governance structures. In Barcelona, the DECODE project gives citizens direct control over their personal data through democratic “data commons” that serve community needs rather than corporate profits. In the United States, the Our Data Bodies project isn’t just auditing existing systems—it’s developing community-controlled infrastructure for data collection and use, proving that surveillance isn’t inevitable when communities govern their own technology.
These shifts extend across the Global South. In Brazil’s quilombola communities—descendants of escaped enslaved Africans—tech justice organizers are developing community-led data governance models that resist state surveillance while preserving cultural knowledge. In Kenya, activists have successfully pushed for stricter regulations on biometric data collection, preventing mass exploitation under the guise of “security.” These movements aren’t just reacting to oppressive technologies—they are preemptively shaping policies, tools, and practices that serve their communities’ interests.
Learning engineers and technologists have an essential role in this transformation. Rather than simply implementing corporate ethics guidelines or conducting bias audits, they must help build governance structures that ensure public and community control. This means:
Moving from compliance to collective governance:
- Instead of corporations “self-regulating” AI systems, communities must have real decision-making power over technology development and deployment.
- This requires new participatory design models where impacted communities—not just tech firms—shape how AI systems are built and used.

Embedding protection into policy and practice:
- From Kenya’s biometric laws to Indigenous data frameworks, communities are creating legal shields against technological exploitation.
- These protections don’t just restrict harmful tech—they actively promote alternative models of development that center collective benefit.

Building infrastructure for community control:
- Technical tools must support rather than supplant community governance.
- This means developing platforms communities can truly own and control, not just systems they can audit or oversee.
- For learning engineers and technologists, this requires designing with open-source transparency, ensuring communities have direct access to audit, modify, and govern their own tech.
- It means building decentralized and federated systems rather than centralized control structures, and developing tools that embed community values at the architectural level (see the sketch below).
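To make that last point concrete, here is a minimal, purely illustrative sketch of what “values at the architectural level” can mean: rules set through a community’s own governance process are encoded as a first-class access gate that the software must consult, rather than left to an after-the-fact audit. Everything here—the CommunityPolicy and AccessRequest names, the example rules—is hypothetical and not drawn from any of the projects described above.

```python
# Illustrative sketch only. Community-set rules are modeled as data that an
# access gate must check before any record is shared; the names and rules
# below are hypothetical placeholders, not any real project's schema.
from dataclasses import dataclass, field


@dataclass
class CommunityPolicy:
    """Rules set and amended through the community's own governance process."""
    allowed_purposes: set = field(default_factory=lambda: {"community_research"})
    require_collective_consent: bool = True
    prohibited_recipients: set = field(default_factory=lambda: {"law_enforcement"})


@dataclass
class AccessRequest:
    requester: str
    purpose: str
    has_collective_consent: bool


def grant_access(request: AccessRequest, policy: CommunityPolicy) -> bool:
    """Return True only if every community-defined rule is satisfied."""
    if request.requester in policy.prohibited_recipients:
        return False
    if request.purpose not in policy.allowed_purposes:
        return False
    if policy.require_collective_consent and not request.has_collective_consent:
        return False
    return True


if __name__ == "__main__":
    policy = CommunityPolicy()
    print(grant_access(AccessRequest("neighborhood_health_team", "community_research", True), policy))  # True
    print(grant_access(AccessRequest("law_enforcement", "community_research", True), policy))           # False
```

The specific rules matter less than where they live: in an artifact the community itself writes and amends, which the system cannot bypass.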
The path forward is clear: we must move beyond corporate AI ethics initiatives that merely tweak existing systems. Real transformation means building new structures of governance and control—ones that ensure technology serves the many rather than surveilling and extracting from the marginalized. The question isn’t just how to make AI systems less biased but how to fundamentally shift power over technology itself.
Building Liberatory Technologies: From Principles to Practice
Shifting governance is only the beginning. To transform technology from a system of control into a tool for liberation, we must reimagine not only who governs it but also how it is designed, built, and deployed. If we focus solely on governance while leaving the underlying technical structures intact, we risk replicating old hierarchies under new management. The next step is not just taking power over existing technologies but creating new systems that embed justice, autonomy, and collective care at every level—from infrastructure to algorithms, from design principles to real-world applications.
What does liberatory technology look like in practice? Consider how the Design Justice Network is reimagining AI development. Instead of starting with technical specifications or efficiency metrics, they begin by asking: Who is this technology for? Whose knowledge matters in its design? How will it impact the most marginalized? Their projects don’t just “include diverse perspectives”—they fundamentally reshape who has the power to define technological problems and solutions.
In Detroit, the Michigan Environmental Justice Coalition isn’t just fighting against environmental racism—they’re building community-owned air quality monitoring systems that generate data on their own terms. Their network of sensors, maintained by local residents and connected through mesh networks, provides independence from state and corporate monitoring systems that have historically undercounted pollution in Black neighborhoods. This isn’t just about collecting better data—it’s about communities controlling their own technological infrastructure.
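As a rough, hypothetical illustration of what “generating data on their own terms” can look like at the level of a single node, the sketch below logs a particulate reading to storage the community itself holds. The sensor stub, file path, and field names are assumptions for illustration, not the coalition’s actual stack.

```python
# Purely illustrative: a resident-maintained sensor node that keeps its raw
# readings on community-owned storage instead of a third-party platform.
# read_pm25() is a stand-in for a real sensor driver.
import csv
import random
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("community_air_quality.csv")  # lives on hardware the community controls


def read_pm25() -> float:
    """Simulated PM2.5 reading in micrograms per cubic meter."""
    return round(random.uniform(5.0, 60.0), 1)


def record_reading(sensor_id: str) -> None:
    """Append a timestamped reading so residents keep the raw data themselves."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp_utc", "sensor_id", "pm25_ugm3"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), sensor_id, read_pm25()])


if __name__ == "__main__":
    record_reading("block-club-sensor-03")
```

Residents could then share these files over their own mesh network on their own schedule and terms, rather than routing readings through a vendor’s cloud.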
This reimagining extends globally. In Mexico, Indigenous communities are developing their own telecommunications infrastructure through Telecomunicaciones Indígenas Comunitarias (TIC A.C.), creating autonomous cellular networks that operate independently of corporate providers. In South Africa, the amaHlathi cooperative is building community-owned internet infrastructure using local knowledge systems to design networks that serve rural areas typically ignored by mainstream providers. These aren’t just technical projects—they’re examples of communities reclaiming technological sovereignty.
Learning engineers are also developing new tools that embed liberation into their core architecture. The Public Lab’s open-source environmental monitoring platforms don’t just make data collection more accessible—they reshape who gets to be a scientist and what counts as valid knowledge. The Digital Democracy project works directly with Indigenous communities to build mapping tools that protect land rights while preserving traditional ways of understanding territory and resources.
But perhaps most revolutionary are the emerging platforms for collective ownership and decision-making. Platform cooperatives like Driver’s Seat give gig workers control over their data and working conditions. The Economic Space Agency is building financial tools that enable communities to create their own economic systems outside of extractive banking. These projects demonstrate that technology can serve collective liberation rather than individual profit—if we design it intentionally for that purpose.
These examples aren’t just isolated projects—they represent an emerging blueprint for technological transformation. When Indigenous communities build autonomous networks, when workers develop cooperative platforms, when residents create their own environmental monitoring systems, they demonstrate that technology can be redesigned to serve liberation rather than control. More importantly, they show that this redesign must happen at every level: from the physical infrastructure that connects us, to the algorithms that process our data, to the governance structures that determine how these systems evolve.
For learning engineers and technologists, this means moving beyond asking, “How do we make this system less harmful?” to “How do we build systems that actively advance justice?” It means recognizing that every technical choice—from data collection methods to user interface design—either reinforces existing power structures or helps to transform them. When we design AI systems, we’re not just writing algorithms; we’re encoding values, embedding power relations, and shaping how communities can resist or reinforce oppression.
The path forward requires both technical expertise and political clarity. We need engineers who can build secure communication platforms for activists and robust data systems for community scientists. But we also need technologists who understand that no technical solution is neutral—that every platform, protocol, and program either serves liberation or maintains control. The question isn’t whether technology will shape power relations but whether it will democratize power or concentrate it.
These movements for liberatory technology show us that another digital future is possible—one where technology serves collective liberation rather than surveillance and extraction. The tools for this transformation already exist. The question is whether those of us building tomorrow’s systems will have the courage to use them.
Rewriting the Future: From Digital Oppression to Technological Liberation
From slave patrols to predictive policing, from redlining maps to risk assessment algorithms, from colonial banks to digital financial exclusion—we’ve traced how technologies of control evolve but rarely disappear. Today’s AI systems and digital platforms aren’t neutral innovations; they’re the latest iterations of historically rooted systems designed to surveil, contain, and extract from marginalized communities. When facial recognition automates racial profiling or lending algorithms perpetuate redlining, they’re not glitching—they’re working exactly as their predecessors did, just with more efficiency and less visibility.
But alongside this history of technological oppression runs another story—one of resistance, reimagining, and rebuilding. From Indigenous communities creating autonomous networks to environmental justice activists developing their own monitoring systems, from platform cooperatives transforming the gig economy to community scientists redefining what counts as knowledge—people are not just fighting against harmful technologies but building liberatory alternatives. These aren’t isolated projects but seeds of a different technological future, one where communities control their own digital infrastructure, data, and technological destiny.
For learning engineers and technologists, this history and these examples of resistance demand more than just ethical guidelines or bias audits. They require us to fundamentally reimagine our role. Every line of code we write, every algorithm we design, every platform we build either reinforces existing power structures or helps to transform them. There is no neutral ground. The question isn’t whether our technical choices will shape power relations but whether they will democratize power or concentrate it, whether they will serve liberation or control.
The tools and knowledge for technological liberation already exist. Communities are building them—from Indigenous data sovereignty initiatives to Black-led platform cooperatives, from feminist tech collectives to disability justice hackers. These movements show us that technology can not only serve one community’s liberation but also help dismantle intersecting systems of oppression. The question isn’t whether change is possible but whether those of us designing tomorrow’s systems will have the courage to stop coding oppression and start engineering freedom.
This isn’t just about writing better algorithms or conducting more thorough audits. It’s about fundamentally reimagining technology’s role in society. Will we build systems that concentrate power or distribute it? That extract from communities or empower them? That surveil and control or protect and liberate? For learning engineers, technologists, and communities alike, the future depends not just on how we answer these questions, but on what we do—now—to build technologies of liberation. The time for neutral observation has passed. The moment for transformative action is here.