Editor’s Note: Power Beneath the Surface
Over the past weeks, we’ve looked at what it means to intervene in unjust systems and what happens when those interventions fade. “Intentional Interventions” traced how small nudges and systemic shifts can help students navigate barriers not of their making. “The Hollowing” offered a quieter, deeper reckoning—exploring what we lose, spiritually and morally, when exclusion becomes embedded in institutional routines.
This week, we turn our attention to the infrastructure behind the interventions—the unseen architecture shaping who is visible, who is supported, and who is left behind. “The Data Gatekeepers” invites us to ask not only who benefits from equity efforts but also who controls the systems that determine which students matter in the first place.
From biased inputs to opaque algorithms, we explore how data systems can reproduce or resist inequality.
As AI becomes more central to admissions, advising, and evaluation, the question is no longer just whether our tools are fair, but whether they are accountable, and to whom.
Where gatekeepers once stood in visible positions, they now operate through code, through data structures, through algorithmic design that few can see and even fewer can question. This shift isn’t inherently liberating or oppressive. Its impact depends entirely on who controls these systems and whose values shape them.
As you read this week’s essay, consider: How might data governance models redistribute power rather than reinforce existing hierarchies? What would it mean for communities to have meaningful control over the data that shapes their educational opportunities?
This essay expands our lens from frontline actions to the deeper systems behind them, from the moral weight of exclusion to the concrete ways data shapes opportunity. It prepares us to wrestle with the choices we face as designers, technologists, data scientists, educators, policymakers, and citizens.
Let’s begin.
— Dr. G
From Information to Power
In previous weeks, we’ve traced how educational barriers evolve, from explicit exclusion to systemic inequity, and examined how algorithms not only encode these patterns but operationalize and scale them in ways few students understand or control. We’ve explored how small nudges can help students navigate complex systems and how belonging interventions can transform educational experiences. As we’ve seen throughout this series, biased inputs create biased outputs. But now we turn to a deeper question: Who controls the data that shapes these educational pathways?
The stories of students like Arya, Jon, and K’Vonte reveal how access to information and supportive systems dramatically impacts their journeys. But behind these experiences lies a deeper structure of power: Who owns the data about students? Who designs the systems that interpret it? And most importantly, who benefits from the current arrangement?
As week 7 illustrated, even well-designed interventions can’t achieve their full potential if the underlying data systems reinforce existing hierarchies of advantage. The challenge isn’t just creating better interventions, but democratizing the very infrastructure that determines who gets which opportunities.
The Invisible Architecture of Control
The shift from human to algorithmic gatekeeping marks a profound transfer of power. Where once admission officers made decisions with visible bias, now algorithms make similar judgments with invisible bias, hidden behind a veneer of mathematical objectivity.
This power now resides in code rather than policy, in data structures rather than explicit rules. The gatekeepers haven’t disappeared. They’ve transformed.
Research suggests that just as certain cultural knowledge confers advantage in traditional educational settings, algorithmic systems reward their own form of digital know-how — what some call “technological capital.” Students who know how to optimize their profiles and present themselves in algorithm-friendly ways gain advantage. This often reproduces existing patterns of privilege.
Yet most students have no idea how these systems evaluate them, what data is collected about them, or how this information shapes their opportunities. This opacity isn’t accidental. It reflects a deliberate concentration of power in the hands of those who design and control these systems.
The Corporation, The Institution, The Student
Educational algorithms are designed primarily by technology companies, implemented by institutional administrators, and governed by policies that rarely include those most affected. This creates a troubling power imbalance. Student data becomes a commodity extracted for institutional gain rather than a resource for student empowerment.
The teams building these technologies seldom reflect student diversity. A 2019 report by the AI Now Institute found that only 2.5% of Google’s workforce was Black, and women made up just 15% of AI research staff at Facebook and 10% at Google — disparities that remain largely unresolved today. This homogeneity shapes what problems are deemed worth solving and what “success” looks like in algorithmic design.
In modeling terms, homogeneity among developers narrows the variance of thought, experience, and assumptions built into algorithmic systems. This leads to poorly calibrated models for students whose data—or lives—fall outside dominant patterns. But the issue is more than technical: it’s about the stories, needs, and aspirations that get left out entirely when diverse lived experiences aren’t part of the design conversation.
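One way to make that calibration claim inspectable is a per-group calibration check: compare a model’s average predicted probability of success with the observed outcome rate inside each group of students. The sketch below is a minimal illustration; the column names, group labels, and synthetic numbers are assumptions for the example, not data from any real system.

```python
# A minimal sketch of a per-group calibration check. The column names,
# group labels, and synthetic numbers are illustrative assumptions,
# not data from any real system.
import numpy as np
import pandas as pd

def calibration_by_group(df, group_col, pred_col, outcome_col):
    """Mean predicted probability vs. observed outcome rate, per group."""
    summary = (
        df.groupby(group_col)
          .agg(mean_predicted=(pred_col, "mean"),
               observed_rate=(outcome_col, "mean"),
               n_students=(outcome_col, "size"))
    )
    # A large gap for any group is a sign the model is poorly calibrated
    # for students whose data fall outside the patterns it was built on.
    summary["calibration_gap"] = summary["mean_predicted"] - summary["observed_rate"]
    return summary

# Purely synthetic example, just to show the shape of the output:
rng = np.random.default_rng(0)
students = pd.DataFrame({
    "group": rng.choice(["majority", "underrepresented"], size=1000, p=[0.9, 0.1]),
    "predicted_prob": rng.uniform(0, 1, size=1000),
    "enrolled": rng.integers(0, 2, size=1000),
})
print(calibration_by_group(students, "group", "predicted_prob", "enrolled"))
```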
The same predictive model produces dramatically different outcomes depending on institutional resources. At well-funded universities, analytics might trigger additional supports for struggling students. At resource-poor institutions, the same flags might lead to course restrictions or even push-out. The technology amplifies existing institutional inequities.
Perhaps most troubling are the “data deserts” that make certain students invisible to algorithms altogether. When systems train on historical data from predominantly white, middle-class students, they struggle to accurately assess students from different backgrounds. For these students, often from communities with limited historical representation in higher education, the algorithms don’t just make wrong predictions; they operate with insufficient context, effectively erasing their potential and their unique paths to success. Many admissions algorithms still incorporate variables like legacy status, extracurricular activities, and zip codes — factors that correlate strongly with historical patterns of privilege.
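Below is a hedged, fully synthetic simulation of that dynamic, assuming (purely for illustration) two student groups whose success depends on different signals. It is a sketch of the general phenomenon, not a claim about any real admissions model.

```python
# A synthetic illustration of a "data desert": train a simple model on data in
# which one group of students is barely represented and whose route to success
# differs from the majority's. All numbers here are assumptions for the sketch,
# not real admissions data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, signal_feature):
    """Success depends on a different feature for each group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, signal_feature] > 0).astype(int)
    return X, y

# Training data: 950 majority students, 50 underrepresented students.
X_maj, y_maj = make_group(950, signal_feature=0)
X_min, y_min = make_group(50, signal_feature=1)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluate on fresh samples from each group.
for name, feat in [("majority", 0), ("underrepresented", 1)]:
    X_test, y_test = make_group(2000, signal_feature=feat)
    print(f"{name:>17} accuracy: {model.score(X_test, y_test):.2f}")
# The model tends to track the majority's pattern closely while hovering near
# chance for the underrepresented group: it is not so much "wrong about" those
# students as operating with almost no context about them.
```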
Alternative Models: From Extraction to Stewardship
Alternative models for governing educational data are emerging — models that treat data not as a resource to extract but as a shared responsibility to care for. The fundamental question is one of control. Should data primarily serve corporate interests, institutional needs, or the communities most affected by its use?
This question becomes even more urgent when we consider the financial incentives at play: the educational technology market exceeded $250 billion in 2024, with algorithmic decision-making systems representing one of its fastest-growing segments.
Indigenous data sovereignty offers powerful alternative principles. The CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics) reimagine who controls and benefits from data. Rather than treating data as a resource to extract, these principles center community ownership and benefit.
Design justice approaches center marginalized communities in technological design, rethinking the design process itself and using collaborative practices to address the deepest challenges those communities face.
Practical tools like algorithmic impact assessments provide frameworks for evaluating systems before implementation. These assessments ask crucial questions about who benefits, who might be harmed, and how impacts will be measured and addressed.
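As a rough sketch of what such an assessment might record, consider the structure below. The fields and their names are assumptions made for this illustration, not a reproduction of any institution’s or government’s impact-assessment framework.

```python
# A rough sketch of what an algorithmic impact assessment record might capture
# before deployment. The fields and their names are assumptions made for this
# illustration, not a reproduction of any specific framework.
from dataclasses import dataclass

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    decision_supported: str            # e.g., "advising outreach priority"
    data_sources: list[str]
    intended_beneficiaries: list[str]
    groups_at_risk_of_harm: list[str]
    community_reviewers: list[str]     # who beyond the vendor and IT signed off
    harm_metrics: list[str]            # how impact will be measured after launch
    review_date: str

    def open_questions(self) -> list[str]:
        """Flag any crucial section still left unanswered."""
        required = ("intended_beneficiaries", "groups_at_risk_of_harm",
                    "community_reviewers", "harm_metrics")
        return [f"'{name}' has not been answered yet"
                for name in required if not getattr(self, name)]
```

The value of even a simple structure like this is that empty answers become visible before a system goes live, rather than after harm has occurred.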
These alternative approaches share a commitment to democratic control of technology. They embody the idea that those most affected by technological systems should have meaningful input into their design, implementation, and governance.
From Individual Privacy to Collective Data Rights
Traditional approaches to data governance focus almost exclusively on individual privacy, giving users some control over what information they share. But this framework misses the collective dimension of data’s power. Even if individual students can opt out of certain data collection, the patterns extracted from their peers still shape the algorithms that determine their opportunities.
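One way to see this concretely: in the hypothetical, synthetic sketch below, a simple model is retrained with a single student’s record removed, and the score it would assign to that same student barely moves. The data and scenario are invented for illustration only.

```python
# A hypothetical, synthetic illustration of why individual opt-out offers only
# limited protection: retrain a simple model with one student's record removed
# and compare the score it would still assign to a student just like them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(5000, 3))
y = (X @ np.array([1.0, -0.5, 0.25]) + rng.normal(scale=0.5, size=5000) > 0).astype(int)

full_model = LogisticRegression().fit(X, y)

i = 123                                    # the student who "opts out"
optout_model = LogisticRegression().fit(np.delete(X, i, axis=0), np.delete(y, i))

p_full = full_model.predict_proba(X[i:i+1])[0, 1]
p_optout = optout_model.predict_proba(X[i:i+1])[0, 1]
print(f"score with their record included: {p_full:.3f}")
print(f"score after they opt out:         {p_optout:.3f}")
# The pattern learned from thousands of peers still determines how this student
# would be scored; removing one record barely changes the model at all.
```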
A more empowering approach would recognize collective data rights — the idea that communities, not just individuals, have interests in how their data is used. This might mean:
Student unions having seats on data governance boards
Community representatives participating in algorithm audits
Shared ownership models where the value generated from student data flows back to those communities
The transition from data extraction to data stewardship requires us to ask not just “How do we protect individual privacy?” but “How do we ensure that data serves the communities it comes from?”
Reclaiming the Future of Opportunity
As educational pathways become increasingly filtered through AI systems, the question is not just whether algorithms are fair, but whether they are accountable — to students, to communities, and to justice itself.
The small decisions we make now about what data we use, whose stories we trust, and what success looks like will shape not just who gets into college but who gets to dream. The architecture of algorithmic systems being built today will determine educational pathways for future generations.
Where week 7 showed how interventions can help students navigate existing systems, today we recognize that true transformation requires changing who controls the underlying infrastructure of opportunity itself.
We now understand that the benefits of past equity policies weren’t distributed evenly. Even well-intentioned efforts often reinforced familiar hierarchies. As we shift from history to design, the next frontier isn’t just who benefits. It’s who decides.
The gatekeepers have changed. But the fundamental question remains: who controls the future of opportunity?
The answer depends not on technology itself, but on the values and priorities that shape its design and use — and whether those values reflect the communities these systems are meant to serve.
But what if the maps these systems provide are themselves distorted, reflecting historical inequities rather than true possibility?
Next week, we’ll explore how students navigate this terrain, where the most visible paths aren’t always the most attainable, and where even the most determined travelers can lose their way.
As always, many thanks for reading, for spending time, and for providing guidance in your comments.
Dr. G