To the Dreamers: AI Was Never for Us



When OpenAI was founded in 2015, it promised something radical: to ensure that artificial general intelligence (AGI) would benefit all of humanity.

It began as a nonprofit, anchored in principles of transparency, safety and equity. It claimed to be building not just advanced technology, but a new kind of institution, one that could serve as a moral counterweight to the unchecked profit motives of Big Tech. But by 2019, that promise began to unravel. OpenAI restructured as a “capped-profit” hybrid entity, allowing early investors to earn up to 100x their contribution. The mission remained in name, but not in structure. The commitment to public benefit was replaced by market calculus. What had been framed as a global good became, functionally, a private asset.

The AI Dream That Wasn’t Ours

Still, the dream persisted. When ChatGPT was released in late 2022, it was greeted not as a corporate product, but as a revelation. Across industries, disciplines and media platforms, it was heralded as a democratizing force: a tool that could level playing fields, expand access to knowledge and correct long-standing inequities. Headlines asked whether AI could fix education, reduce bias or serve as a new civil rights tool. From conferences to classrooms, the message was clear: this was designed for everyone.

But for many of us, that story never felt true. Or rather, it felt like a performance that was crafted to obscure how these systems were built, whom they were built for and what they extract in the process. The deeper I looked, the more I realized that those of us who believed in the liberatory potential of AI had been invited into an illusion: one that borrowed the language of equity while operationalizing something else entirely.

This is not a tale of broken technology. It is a tale of misplaced trust, and of the dreamers, the educators, ethicists and advocates who believed that AI could serve justice, only to find out that, from the start, it was never really meant for us.

The Idealism Machine

We were invited to believe in the promise. OpenAI pledged to benefit “all of humanity.” Anthropic branded itself a “public benefit corporation,” dedicated to developing safe, steerable AI. DeepMind’s founders claimed they wanted to “solve intelligence” to “solve everything else.” These were not just technical ambitions; they were philosophical assertions. And they landed. Hard.

In higher education, instructors began conversations on AI adoption before fully interrogating its origins. At conferences, panels convened to celebrate the inclusive promise of machine learning, even as companies refused to disclose basic information about how models were built. Meanwhile, the average user, understandably, assumed these systems learned from controlled inputs and clean data, not oceans of scraped material from Reddit, Wikipedia and the web’s darkest corners. A 2023 Scientific Reports study found that users consistently underestimated how these models learn from implicit, biased or problematic data.

AI was marketed as neutral, scientific and liberatory. We were told it was the great equalizer. But what if that idealism was never grounded in reality?

The Evidence Never Promised Equity

The truth is stark: there is little empirical evidence that contemporary AI systems were designed to uplift the marginalized or promote global equity. Quite the opposite. Large language models (LLMs) like GPT-4, Gemini and Claude are built on data pipelines that reflect the dominant cultures, institutions and economies of the Global North.

According to the recent study "Bias and Fairness in Large Language Models," these models exhibit racial and gender bias at scale. MIT Technology Review reported that outputs from GPT-3 showed “clear, measurable patterns of bias,” including stereotyped depictions of Black people and women. Google’s Gemini, Perplexity and Claude, for instance, have been found to exhibit political leanings and skewed historical representations, further complicating the idea that AI merely reflects data neutrally.

And when it comes to mitigation? Developers know how to conduct fairness audits. They know how to diversify datasets and implement opt-outs for data scraping. But they often choose not to. In 2023, OpenAI quietly stopped disclosing what data GPT-4 was trained on, citing “competitive threats” and, in the process, effectively shutting down public scrutiny.
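
To make “fairness audit” concrete, the sketch below shows one common check, a demographic parity comparison across groups. It is a minimal, hypothetical illustration: the group labels, scores and threshold are invented for this example and do not describe any model or audit cited in this piece.

```python
from collections import defaultdict

def demographic_parity_gap(records, threshold=0.5):
    """Compute the gap in positive-outcome rates across demographic groups.

    `records` is a list of (group, score) pairs, where `score` is a model's
    confidence that the outcome is "positive" (a loan approval, a resume
    shortlisting, and so on). A large gap between groups is one basic red
    flag a fairness audit would surface. Illustrative only.
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, score in records:
        totals[group] += 1
        if score >= threshold:
            positives[group] += 1

    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Synthetic example data: (demographic group, model score)
synthetic = [
    ("group_a", 0.82), ("group_a", 0.64), ("group_a", 0.71), ("group_a", 0.40),
    ("group_b", 0.45), ("group_b", 0.38), ("group_b", 0.58), ("group_b", 0.30),
]

rates, gap = demographic_parity_gap(synthetic)
print(rates)                      # per-group positive-outcome rates
print(f"parity gap: {gap:.2f}")   # a large gap would warrant scrutiny
```

The point is not this particular metric; it is that basic checks like this are well documented in the fairness literature and inexpensive to run.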

Why? Because what is rewarded is not fairness. It is speed, scale, investor return and dominance.

This is not a moral revolution. It is a marketing one.

Data Colonialism, Rebranded

The deeper irony of today’s AI revolution is that many of the companies building the most influential systems are engaged in a form of digital colonialism. These technologies do not merely reflect existing inequalities; they actively reinforce them through systematic data appropriation and globally unbalanced infrastructure.

For centuries, imperial powers justified resource extraction under the banner of progress. Today, that same logic underpins AI development. Data has become the new raw material being quietly harvested from users’ photos, writing, clicks and digital identities — mostly from social media. This form of “techno-colonialism” disproportionately targets the Global South, where data is often extracted without consent, and AI products are deployed without cultural adaptation, contextual accountability or meaningful governance.

The cycle is as familiar as it is invisible. The Global South supplies the training data and hosts the carbon-intensive server infrastructure, while the profits consolidate in Silicon Valley, Cambridge and corporate boardrooms. What emerges is not a neutral system, but an extractive model that mirrors and magnifies longstanding global asymmetries. These systems deepen epistemic dependency while exporting one worldview as if it were universal.

As geopolitical powers race to deregulate the digital sphere, the divergence among rights-based, market-driven and authoritarian models of governance reveals the urgent need for a binding global framework on data ethics and digital sovereignty.

This is not the emancipatory future we were promised. It is the colonial past, reengineered and now hidden behind a silicon mask.

We Fell for It

And here is where it gets personal: I wanted to believe. I teach these tools. I write about them. I advocate for ethical use. I told myself that AI — if done right — could help close educational gaps and amplify marginalized voices. That it could serve as a lever for justice rather than just another machine of extraction. But lately, I’ve come to see that what I was sold was not a vision — it was a veneer.

Because the more I studied, the more I saw a disturbing pattern: credible warnings about harm have been dismissed, and those who raise red flags have often faced professional retaliation.

Silencing the Critics

Consider Timnit Gebru, co-lead of Google’s Ethical AI team, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR) and co-founder of Black in AI, a nonprofit that works to increase the presence, inclusion, visibility and health of Black people in the field of AI. In 2020, she was fired after she co-authored a landmark paper highlighting the profound societal risks of large language models: their environmental cost, their role in spreading misinformation and their amplification of racist and sexist bias.

The study cited research by Strubell et al., which showed that energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data. That research found that training a single model with a neural architecture search (NAS) method could emit more than 626,000 pounds of carbon dioxide, roughly the lifetime emissions of five average American cars, fuel included. Even training BERT, Google’s own language model, generated 1,438 pounds of CO2, roughly the same as a round-trip flight from New York to San Francisco for one passenger. And these figures reflect just one training cycle; in practice, models are trained and retrained many times over.
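
As a back-of-the-envelope check on those comparisons, the sketch below redoes the arithmetic using the figures cited above plus two approximate reference baselines from Strubell et al. (roughly 126,000 pounds of CO2 for an average American car over its lifetime, fuel included, and roughly 1,984 pounds for one passenger on a round-trip New York to San Francisco flight). The baselines are approximations, and the code is illustrative rather than a measurement.

```python
# Back-of-the-envelope comparison of the training-emission figures cited above.
# Reference baselines are approximate values from Strubell et al. (2019).

NAS_TRAINING_LBS_CO2 = 626_000    # one model trained with neural architecture search
BERT_TRAINING_LBS_CO2 = 1_438     # one BERT training run

CAR_LIFETIME_LBS_CO2 = 126_000    # avg. American car, lifetime incl. fuel (approx.)
NY_SF_ROUNDTRIP_LBS_CO2 = 1_984   # one passenger, round trip NY-SF (approx.)

cars = NAS_TRAINING_LBS_CO2 / CAR_LIFETIME_LBS_CO2
flights = BERT_TRAINING_LBS_CO2 / NY_SF_ROUNDTRIP_LBS_CO2

print(f"NAS training: about {cars:.1f} car lifetimes of CO2")        # ~5.0
print(f"One BERT run: about {flights:.1f} NY-SF round trips of CO2")  # ~0.7

# Each figure covers a single training cycle; real systems are trained and
# retrained many times, so the totals multiply accordingly.
```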

Instead of responding to the substance of these findings, Google terminated Gebru’s employment. Her removal was not a matter of disagreement over research methods; to many, it was a message: those who challenge the trajectory of AI development do so at their own peril.

Two Faces of AI Leadership

Elsewhere, Mira Murati, OpenAI’s chief technology officer, has publicly acknowledged AI’s “existential risks” and the urgency of regulation. Yet under her leadership, OpenAI continues to deploy increasingly powerful systems with little transparency about training data, model limitations or societal consequences.

Public caution is voiced at the podium, while private acceleration continues behind closed doors. The tension is strategic. Ethical concern becomes part of the brand narrative, rather than a basis for institutional restraint.

Consent by Design — or Deception?

Meanwhile, Meta has quietly normalized data extraction as a default feature of its AI ecosystem. Its tools are designed to collect user data (text, photos, interactions) for model training, often without meaningful consent. In 2023, Meta claimed it does not use data without permission, yet simultaneously stated that its models are trained on “publicly available information, licensed data and information from Meta’s products and services.” In practice, this means that any user who has accepted Meta’s privacy policy has, by default, authorized the use of their personal data and has effectively handed it over without conditions.

Opting out is technically possible, but intentionally difficult. Users must navigate obscure forms, unclear settings and layers of legal language. As Colorado Legal Journal reported, these “dark patterns” are not accidental. Instead, they are carefully designed mechanisms of friction, intended to wear down resistance and protect corporate data pipelines.

Together, these cases reveal a pattern: institutions that market responsibility while engineering opacity. A surface-level commitment to ethics conceals the deeper logic of AI development, one that is driven by extraction, enclosure and profit.

This is not empowerment. It is exploitation. These tools were never meant to uplift the public. They were built to collect our data, our labor, our language and our likeness, all under the guise of innovation. So no, we were not wrong to hope. But we were wrong about who AI was designed to serve. And if we are to reclaim these tools for the public good, we must first admit: we fell for it.


You Can Also Expose Your Maker

And that’s the paradox: AI reflects its creators. It encodes their choices, blind spots and ambitions. These models are mirrors. They reflect what they were fed, how they were built and whom they were built for. Which means they expose their makers, not just our biases.

They show us what happens when innovation is driven solely by the thirst for money and dominance. When ethics are outsourced. When we mistake product launches for public good.

That line “you can also expose your maker” has echoed in my head for months. Because we keep romanticizing AI as though it descended from the clouds, bearing gifts. But it is not divine. It is deliberate. If we are to reclaim AI as a tool that serves the many, not just the powerful, then the first step is to stop dreaming. Or rather, to wake up.

Yes, I am still a dreamer. I believe in technologies that heal, not harm. I believe in data dignity, consent and collective intelligence. But the world I want will not be handed to us. It must be built intentionally, urgently and by those willing to confront the truth of what we’re up against. And that begins by admitting we fell for it. Now, we can do something about it.

