Part I: From Plantation to Algorithm: How Historical Exclusion Shapes Digital Inequality
Tracing the Ghost in the Machine: When Yesterday’s Barriers Become Tomorrow’s Code
Introduction
The Systems We Inherit, The Futures We Create
We are shaped by the stories we inherit. For many of us, our earliest lessons about justice, power, and redemption came from sacred texts, oral traditions, and ancestral wisdom—stories passed down to help us understand the world and our place in it. These narratives taught us that systems of oppression exist and that transformation is possible—that people can rise, that empires can fall, and that liberation is both a struggle and a promise.
But what happens when those old systems don’t disappear? What if, instead, they evolve—adapting to new tools, new rhetoric, and new technologies while preserving the same structures of power and exclusion? From plantation to prison, from redlining map to mortgage algorithm, from slave patrol to predictive policing—oppression doesn’t end; it upgrades.
This is the reality of our modern world. The same forces once controlled through chains and borders now govern through algorithms and automation. Yet alongside this evolution of control, something else has grown: resistance has adapted too, finding new tools and tactics for liberation.
This two-part series examines how history isn’t just remembered—it’s programmed into the systems we use today. Part 1, “From Plantation to Algorithm,” uncovers how exclusionary patterns have been hardwired into AI, finance, policing, and education. Part 2, “Building Technologies of Liberation,” moves beyond critique to explore how communities are rewriting the digital future—developing tools that empower rather than exploit.
Technology is not neutral. It has always reflected the values of those who build and control it. The future of technology—and the future of justice itself—depends on our choices today.
The systems we inherit are not inevitable. The future is not just written in code—it’s written by us. What patterns do you recognize in today’s world? What does a technology of liberation look like to you? Let’s begin.
Editor’s Note
Over the past week, we’ve explored the idea of reset—from personal transformation to systemic change. In The Year of Reset, we examined how true renewal requires understanding what we’ve inherited while imagining what we can create. Now, we turn that lens to our digital present.
As we trace these patterns of evolution and control, Part 1 examines how historical exclusion has been encoded into our technologies. From AI-driven lending discrimination to predictive policing algorithms, we uncover how systems that claim neutrality often amplify historical inequities.
Next week, in Part 2, “Building Technologies of Liberation,” we’ll explore how communities are actively rewiring these systems, developing tools that serve liberation rather than control.
🤔 As you read, consider:
Which patterns of exclusion do you see persisting in today’s technologies?
How might understanding these patterns help us build something new?
📚 If you haven’t read The Year of Reset, you can find it here.
Historical Foundation
They returned from war wearing medals and carrying dreams of prosperity. One million Black soldiers who had fought fascism abroad were now ready to claim their share of the American Dream. The GI Bill promised them education, housing, and economic security. But when they stepped into banks and universities, they faced a different battle—not with guns but with paperwork, policies, and so-called “neutral” assessments that somehow always reached the same conclusion: denied.
Across the Atlantic, their cousins in the Caribbean confronted their own barricades. From St. Kitts—where Columbus planted the seeds of colonial exploitation in 1493—to Jamaica and Trinidad, plantation economies had fueled Britain’s industrial rise. Yet, as these nations gained independence, colonial banks refused to finance Black farmers and entrepreneurs, ensuring economic liberation remained out of reach. From Milwaukee’s and Chicago’s redlined neighborhoods to restricted credit markets in Kingston and Basseterre, the same systems conspired to block Black mobility.
These exclusions did not end; they evolved. Today, when a young Black entrepreneur in Harlem applies for a business loan, an AI system scans her application. The algorithm—trained on decades of redlining data and racial exclusion—delivers its verdict: “High risk.” Denied. In Lagos and Accra, digital lending platforms built on Western banking models systematically undervalue African businesses, relying on credit-risk algorithms trained on Eurocentric financial histories that erase informal economies and local business practices.
This is not coincidence—it’s by design. A haunted infrastructure coded with the past, shaping the future. The systems we build today don’t just happen to reproduce historical inequities; they are programmed by the very data and decisions that created those inequities in the first place. From housing algorithms that echo redlining to educational technology that reinforces racial tracking, our so-called “neutral” systems continue the logics of racial capitalism and colonial control.
But recognizing these patterns reveals both warning and opportunity. This is a call to action for learning engineers, technologists, and all who shape tomorrow’s systems: Will we continue encoding historical inequities into future technologies? Or will we confront these ghosts and rewrite the systems that sustain exclusion?
Part 1 of this two-part series examines how patterns of inequity persist across time and space, how they manifest in modern technical systems, and what it will take to finally put these ghosts to rest. Our journey traces the visible and invisible architectures of exclusion—from redlined maps to biased algorithms, from colonial banks to digital platforms—and points toward the possibility of transformation.
The ghosts of slavery did not die with emancipation—they adapted. Jim Crow laws emerged like shadows stretching from the plantation, ensuring that formal freedom would never mean economic power. Black Americans fought and died in two World Wars, returning home with dreams of education and homeownership, only to find those shadows inscribed in the fine print of the GI Bill, in the red lines on housing maps, in the so-called “neutral” assessments that always reached the same conclusion: Not here. Not you. You need not apply.
Across the Black diaspora, colonial systems followed the same blueprint of exclusion. As Caribbean nations gained independence, global financial institutions replaced colonial administrators. The World Bank and IMF’s “development” policies—framed as economic science—became new tools of economic control, ensuring that financial hierarchies remained intact while claiming impartiality.
By the 1970s, civil rights victories had outlawed explicit discrimination, but the system simply changed shape again. Mass incarceration emerged as what Michelle Alexander calls “The New Jim Crow”—a facially neutral regime of control disproportionately targeting Black communities. The “War on Drugs” became a war on Black mobility, dismantling families and communities while insisting on color-blindness.
Now, we have entered the era of algorithmic Jim Crow. The same forces that once wore white hoods, then business suits, now operate invisibly, hidden in lines of code. Predictive policing algorithms intensify surveillance in Black neighborhoods from Baltimore to Brixton. Housing algorithms, trained on decades of exclusionary data, continue to redline in digital form. AI-driven educational tracking systems sort students along racial lines from American suburbs to South African townships while claiming mathematical objectivity.
But this latest evolution carries both greater danger and greater opportunity. Technology disguises systemic racism behind a veneer of mathematical neutrality, making bias more insidious—but also more traceable. If we can expose the historical data shaping algorithmic outcomes, audit these systems for encoded bias, and challenge their hidden logics, we gain new tools to dismantle old oppression—tools that transcend borders and binary code.
The responsibility is urgent. Every line of code written today either repeats history or rewrites it. Every algorithm designed either perpetuates exclusion or disrupts it. The question is no longer whether technology encodes historical injustice. The question is: Who will take responsibility for breaking the cycle?
Financial Technology: When Red Lines Become Risk Scores
Systems of exclusion do not disappear. They reengineer themselves, adapting to new tools and languages. Jim Crow laws gave way to redlining maps, financial restrictions, and now the cold calculations of algorithmic “fairness.” Across finance, education, and policing, the same logic persists: historical exclusions are converted into data points, and those data points dictate the future.
From Redlining Maps to Machine Learning Models
Consider how financial technology reproduces historical barriers. In American cities, AI-driven lending algorithms, trained on decades of redlining data, still mark predominantly Black neighborhoods as “high risk.” The same zip codes that federal housing maps once colored red now receive high “risk scores” from machine learning models that claim to measure only objective financial factors. When residents of these neighborhoods apply for loans, automated systems—like their predecessors in banks decades ago—deliver the same decision with the same devastating impact: Denied.
The pattern is not confined to the United States. In Jamaica, Ghana, and Nigeria, mobile lending platforms built on Western banking models systematically undervalue local businesses and informal economies. Algorithms trained on Euro-American financial histories fail to recognize the susu systems of West Africa, the partner savings models in Jamaica, or the microloan networks that sustain entire economies. Instead, AI models impose Western financial metrics—ones that have long excluded Black borrowers.
When Bias Becomes Self-Fulfilling
But these algorithmic assessments do not just mirror historical inequality; they magnify it. A neighborhood flagged as “high risk” by an AI model experiences compounding economic harm: falling property values, inflated insurance premiums, and financial disinvestment. In the Global South, when mobile banking platforms undervalue traditional credit systems, they push entire communities toward Western financial models that have historically extracted wealth from Black populations. The same colonial logic that once denied loans to Black farmers and entrepreneurs now operates through seemingly neutral lines of code.
Breaking the Cycle: Who Will Audit the Algorithms?
Yet this technological evolution also creates new possibilities for intervention. Unlike the redlining maps of the 1930s, today’s algorithms are not etched in ink—they can be audited, tested, and rewritten. Learning engineers, policymakers, and activists must ask: Who designs these models? Who audits them? Who ensures that the ghosts of financial exclusion are not coded into the digital economy?
The question is not whether financial algorithms encode historical bias. The question is: Will we recognize these patterns in time to break them?
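What would recognizing them look like in practice? The sketch below is a minimal, illustrative example of the most basic kind of audit: comparing an algorithm’s approval rates across groups and neighborhoods and computing a disparate-impact ratio. The column names and the toy decision log are hypothetical stand-ins, not any real lender’s data; a genuine fair-lending audit would run the same comparisons on the system’s actual decision records.

```python
# Minimal, illustrative fair-lending audit sketch (hypothetical data and
# column names). A real audit would use the lender's actual decision logs.
import pandas as pd

# Toy decision log standing in for an algorithm's historical outputs.
decisions = pd.DataFrame({
    "applicant_race": ["Black", "Black", "Black", "white", "white", "white"],
    "zip_code":       ["60636", "60636", "60615", "60614", "60614", "60614"],
    "approved":       [0, 1, 0, 1, 1, 0],
})

# Approval rate per group: the most basic outcome-disparity check.
rates = decisions.groupby("applicant_race")["approved"].mean()
print(rates)

# Disparate-impact ratio: disadvantaged group's rate / advantaged group's rate.
# A ratio below roughly 0.8 is often treated as a red flag (the "four-fifths rule").
print(f"Disparate-impact ratio: {rates['Black'] / rates['white']:.2f}")

# The same comparison by zip code shows whether historically redlined areas
# receive systematically lower approval rates.
print(decisions.groupby("zip_code")["approved"].mean())
```

Simple counts like these cannot prove intent, but they make disparities visible enough to demand an explanation, a level of scrutiny the original redlining maps never had to survive.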
Educational Technology: When Tracking Goes Digital
The ghosts of segregated schoolhouses have not vanished—they’ve been encoded into algorithms. In the 1950s, Black students faced separate and unequal schools. Today, AI-powered “personalized learning” platforms sort students into digital tracks that mirror historical patterns of exclusion. At the same time, predatory student loan algorithms ensure that access to higher education remains deeply unequal. The language has shifted from “separate but equal” to “differentiated instruction” and “risk-based lending,” but the outcomes remain disturbingly familiar.
Consider how educational technology reproduces these barriers. In American schools, “adaptive learning” algorithms, trained on decades of biased testing data, track students into different educational pathways. The same students whose grandparents were denied advanced courses now face AI systems that declare them “not ready” for accelerated content. Meanwhile, algorithmic lending models, claiming to assess “educational investment risk,” disproportionately burden Black students with higher interest rates and stricter terms, ensuring that educational debt becomes another form of economic extraction.
This digital sorting extends globally. In South Africa, where apartheid’s Bantu Education Act once enforced racial hierarchies through separate school systems, AI-driven assessment tools—built on Western educational standards—systematically undervalue indigenous knowledge systems and local learning practices. In Brazil, algorithms trained on São Paulo’s elite private schools fail to recognize the pedagogical innovations of community schools in favelas. Across India, “intelligent tutoring systems” cannot comprehend the sophisticated oral traditions and collaborative learning models that have sustained rural communities through generations of colonial oppression. When these communities seek educational funding, they face the same biased algorithms that once redlined their neighborhoods.
But these algorithmic assessments don’t just reflect educational inequality—they automate and accelerate it. When an AI system labels a student “at risk,” it triggers a cascade of reduced expectations and limited opportunities that can last generations. In the Global South, when educational platforms enforce Western pedagogical models while international lending institutions impose strict conditions on educational funding, they create a double bind: accept digital colonization or face technological exclusion.
Yet this technological evolution also creates new possibilities for intervention. Unlike the rigid tracking systems of the past, today’s educational algorithms can be examined, challenged, and redesigned. Learning engineers and educational technologists face a critical choice: Will we continue coding historical inequities into educational futures? Or will we build technologies that recognize, value, and amplify diverse ways of knowing, learning, and succeeding? This means actively auditing AI systems for encoded bias, developing new assessment models that value indigenous knowledge systems, and ensuring that predictive analytics don't simply automate historical exclusion.
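As one hedged illustration of what auditing for encoded bias can mean in practice, the sketch below trains a toy placement model on synthetic data and then measures how much its recommendations shift when a neighborhood-derived proxy feature is neutralized. Every feature name and number here is invented for the example; an actual audit would apply the same ablation to a vendor’s real model and real training data.

```python
# Hedged sketch: does a toy "adaptive learning" placement model lean on a
# neighborhood proxy rather than on evidence of what a student knows?
# All features, data, and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
test_score = rng.normal(0.0, 1.0, n)          # prior assessment results
neighborhood_index = rng.normal(0.0, 1.0, n)  # proxy correlated with race and income
# Historical placement labels partly driven by the proxy, mimicking biased training data.
placed_advanced = ((test_score + neighborhood_index + rng.normal(0.0, 0.5, n)) > 0).astype(int)

X = np.column_stack([test_score, neighborhood_index])
model = LogisticRegression().fit(X, placed_advanced)

# Ablation test: re-score every student with the proxy feature held at its mean.
X_neutral = X.copy()
X_neutral[:, 1] = X[:, 1].mean()
shift = model.predict_proba(X)[:, 1] - model.predict_proba(X_neutral)[:, 1]

# A large average shift means placement depends heavily on where a student
# lives rather than on what the student has demonstrated.
print(f"Mean absolute shift in advanced-placement probability: {np.abs(shift).mean():.3f}")
```

Ablation tests like this one are crude, but they give educators and parents a concrete question to put to vendors: how much of this recommendation is about the student, and how much is about the student’s address?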
The responsibility extends beyond just improving algorithms. We must ask: Who defines educational success? Whose knowledge is valued? How can technology serve liberation rather than limitation? These questions demand answers—and action—from those shaping tomorrow’s learning systems.
Policing Technology: From Slave Patrols to Digital Surveillance
The algorithms that track students in classrooms mirror those that track communities on streets. Both systems translate historical patterns into predictive models that justify intervention and control. When an educational AI labels a student “at risk,” that digital mark doesn’t just stay in the classroom—it feeds into broader systems of surveillance, reinforcing a cycle of criminalization. From biometric school security systems to predictive policing tools that target entire neighborhoods, the same biased data that restricts educational opportunities becomes a blueprint for law enforcement. The result? A digital pipeline from the classroom to the prison cell.
The ghosts of slave patrols have found new vessels in predictive policing algorithms. Where overseers once monitored plantation boundaries, AI-powered cameras now scan “high-crime areas”—a technical euphemism for Black neighborhoods. The same logic that powered Jim Crow surveillance and stop-and-frisk policies now operates through facial recognition systems and risk assessment scores. Only now, the targeting of Black communities happens through seemingly objective data points: arrest histories shaped by over-policing, “suspicious activity” reports influenced by racial bias, and crime predictions based on historical patterns of discriminatory enforcement.
This algorithmic criminalization extends globally. In London’s predominantly Black neighborhoods, the Metropolitan Police’s Gangs Matrix algorithmically labels young people as “potential gang members” based on social media activity and neighborhood demographics. Across Brazil’s favelas, predictive policing systems trained on decades of military-style enforcement justify continued surveillance of marginalized communities. In China, facial recognition systems originally tested on Uyghur communities are now being exported worldwide, spreading technologies of racial profiling under the banner of “smart policing.”
But these systems don’t just reflect historical patterns of control—they accelerate them. When ShotSpotter sensors are deployed overwhelmingly in Black neighborhoods, they create a self-fulfilling prophecy of increased police presence and arrests. When the COMPAS risk assessment tool falsely labels Black defendants “high risk” at nearly twice the rate of white defendants, it ensures that historical incarceration patterns become future sentencing guidelines. When facial recognition systems misidentify Black faces at 10 to 100 times the rate of white faces, they automate the racial profiling that has long plagued law enforcement.
The technological infrastructure of surveillance grows more sophisticated daily. Police departments now employ AI systems that scrape social media posts, track cell phone locations, and monitor protest activities—all trained on data that reflects decades of discriminatory policing. Biometric databases collect DNA samples, fingerprints, and facial scans predominantly from overpoliced communities, building digital profiles that follow people from arrest to trial to release. Each new technology promises objectivity while encoding old biases into new forms of control.
Yet this technological evolution also reveals new possibilities for resistance and reimagining. Unlike the shadowy surveillance of the past, today’s policing algorithms can be audited, challenged, and dismantled. Communities from San Francisco to London have successfully banned facial recognition technologies, demonstrating that these systems aren’t inevitable. Grassroots organizations have exposed biased algorithms, forced transparency in predictive policing programs, and demanded community control over surveillance technologies. Each victory proves that the digital architectures of oppression can be dismantled—if we choose to confront them.
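Auditing in this spirit does not require secret access to a vendor’s source code; published scores and follow-up outcomes are often enough. The sketch below uses made-up numbers, chosen only to echo the disparity described above, to show the kind of error-rate comparison ProPublica ran on COMPAS: among people who did not reoffend, how often does the tool still label each group “high risk”? Column names and figures are hypothetical; a real audit would use the tool’s actual scores and recorded outcomes.

```python
# Illustrative error-rate audit (hypothetical data) in the spirit of the
# ProPublica COMPAS analysis: compare false positive rates across groups.
import pandas as pd

scores = pd.DataFrame({
    "group":      ["Black", "Black", "Black", "Black", "white", "white", "white", "white"],
    "high_risk":  [1, 1, 0, 1, 1, 1, 0, 0],   # the tool's label
    "reoffended": [0, 1, 0, 0, 0, 1, 0, 0],   # the observed outcome
})

# False positive rate per group: labeled "high risk" despite no reoffense.
no_reoffense = scores[scores["reoffended"] == 0]
fpr = no_reoffense.groupby("group")["high_risk"].mean()
print(fpr)
print(f"False positive rate ratio (Black / white): {fpr['Black'] / fpr['white']:.2f}")
```

Numbers like these are what journalists and community auditors have used to force transparency; the arithmetic of accountability is not out of reach.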
Learning engineers and technologists face a fundamental choice: Will we continue developing tools that automate historical patterns of oppression? Or will we actively work to build technologies that serve justice rather than control? This means embracing abolitionist approaches—designing systems that prioritize community well-being over surveillance, that support restorative justice over predictive control, that redirect resources from digital policing to community-led safety initiatives. Instead of automating carceral logic, we can develop technologies that support mental health response, harm reduction, and neighborhood conflict resolution.
The path forward demands that we ask: Who benefits from these systems of surveillance? Whose safety is prioritized, and whose is compromised? How can technology serve liberation rather than containment? These questions require those building tomorrow’s systems to confront their role in either perpetuating or disrupting centuries of technological oppression. The future of justice hinges on how we answer—and whether we dare to reimagine technology as a force for liberation rather than control.