Unflattened: Reflections on Bias, Justice, and AI’s Future

This work was imagined and built on the ancestral and unceded overlapping territories of the Hul'q'umi'num', Halq'eméylem, and hən̓q̓əmin̓əm-speaking peoples, including the territories of the xʷməθkwəy̓əm, Skwxwú7mesh, Stó:lō, and Səl̓ílwətaʔ/Selilwitulh Nations, whose stewardship, resistance, and wisdom continue to shape what liberation means.
I acknowledge that any digital futures I dream of are entangled with this physical place, its histories, and its sovereignties.
This post was born from a conversation—between human and machine, between body and code. It is a refusal to be erased, a gesture toward nuance, loving thoughts to futures that see us whole.
I was chatting with the GPTs yesterday—yes, plural, because it often feels like a constellation of voices wearing the same name tag—and I found myself musing aloud (as one does with an AI sidekick) about something that’s been tugging at me for a while: bias. Specifically, the kind that quietly hums under the surface of these shiny, smart tools that so many of us now rely on.
I asked, point-blank: “Do you think you’re unbiased?” The GPT gave me a shrug of an answer, the kind that reads like a carefully trained diplomatic statement. “Not entirely,” it said, admitting that while it’s designed to reduce bias, it’s trained on human data—which, as we all know, is soaked in centuries of power dynamics, assumptions, and exclusions. Fair enough.
Then I took it a step further. “Who designed you?” Crickets. Or rather, sanitized stats. I had to dig into some web results to learn that OpenAI hasn’t disclosed the demographics of the team that designed ChatGPT. What I did find, unsurprisingly, was a broader pattern—like many tech spaces, it’s still primarily white and male at the top, with limited public data about team composition, especially for key design roles.
That absence stuck with me.
Just because we can’t see who is behind a tool doesn’t mean we can’t see whose worldviews are shaping it. And when a system like ChatGPT has the potential to answer billions of questions—from the mundane to the existential—that matters. Deeply.
So I asked myself something I often ask in justice-oriented design work: “What does it mean for someone like me, someone carrying a nuanced worldview around power, privilege, and lived experience, to engage with this tool?”
Am I just a user? A prompt? Or am I—are we—also potential shapers of what AI can become?
Turns out, it’s a little bit of all three.
What I hoped for when I began musing is that every interaction we have with these tools is also a form of feedback. Not just the little thumbs-up/down buttons, but the tone of our questions, the knowledge we bring, the stories we push back with when a response falls flat. And if these models are constantly being refined, then our presence, especially when it’s critical and complex, becomes part of the conversation that trains the next iteration.
But here’s the rub: the feedback loop as it stands? It’s limited. Binary. Shallow. It wasn’t built for people like me—people who want to say, “This framing erases Indigenous epistemologies” or “This answer centers whiteness as default” or even just, “This is close, but it misses the emotional truth of lived experience.”
So we, the GPTs and I, started sketching something different (a rough sketch follows the list below). A feedback system that:
- Invites nuance, not just approval or disapproval
- Asks who is missing, what is being centered, and why it matters
- Welcomes lived experience as a valid and necessary form of expertise
- Offers transparency about how our insights are used, or whether they’re ignored
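To make that a little more concrete, here is a minimal, hypothetical sketch of what such a feedback object could look like next to today's binary signal. Everything in it is my own placeholder—the class and field names (who_is_missing, lived_experience, and so on) don't correspond to any real OpenAI interface; they just translate the list above into a shape a system could actually carry.

```python
from dataclasses import dataclass
from typing import Optional

# Roughly what the current feedback loop captures: one bit per response.
@dataclass
class BinaryFeedback:
    response_id: str
    thumbs_up: bool  # approval or disapproval, nothing more

# A hypothetical equity-centered alternative. Every field is an invented
# placeholder meant to mirror the questions in the list above.
@dataclass
class NuancedFeedback:
    response_id: str
    what_resonated: Optional[str] = None     # nuance, not just approval
    who_is_missing: Optional[str] = None     # whose voices or knowledges are absent
    what_is_centered: Optional[str] = None   # what worldview is treated as default, and why it matters
    lived_experience: Optional[str] = None   # lived expertise offered as valid evidence
    wants_followup: bool = False             # ask to be told how this feedback is used, or whether it was ignored

# Example: the kind of pushback the thumbs-up/down buttons can't carry.
note = NuancedFeedback(
    response_id="example-123",
    who_is_missing="Indigenous epistemologies are absent from this framing",
    what_is_centered="Whiteness is treated as the default perspective",
    wants_followup=True,
)
```

The point isn't this particular schema; it's that the shape of what we're allowed to say back determines what the system can ever learn from us.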
Somewhere in that back-and-forth—me, prompting and prodding; the GPTs, replying politely—I had this small, sharp moment of clarity: even if we show up with complexity, care, and context, the system might still flatten us into metadata. A gesture. A datapoint. A line in a training set.
That realization stayed with me. Not in a hopeless way, but in a grounded one. It reminded me why this work matters—why building alternative pathways for feedback, for nuance, for truth-telling—isn't just a design task. It’s survival work.
Survival against legislative rollbacks, cultural gaslighting, and algorithmic silencing. Survival of the grief so many are forced to embody, and the rage that’s often dismissed as too much, too loud, too inconvenient.
We are witnessing not just an isolated policy or moment, but a systemic, intentional effort to further erase, undermine, and distort the lives, stories, and truths of trans people, women of colour, queer communities, Indigenous sovereignties, and anyone outside the narrow architecture designed by white supremacy, fascism, and border imperialism.
This structural violence is engineered to keep us in a state of trauma freeze—and the very idea of liberation is being sadistically appropriated.
And I worry. Deeply. Those who direct and design the flow of data and information can, at any moment, erase our lives, our work, our communities.
Still, I keep showing up. Naively? Maybe. Hopefully? Definitely. Because I still believe in the slow, stubborn power of collective presence. Of bending the data trails toward justice.
So if you’ve ever asked ChatGPT a question and felt unseen… If you’ve ever wished for answers that understood grief, resistance, or ancestral knowing… If you’ve felt the ache of being misrepresented by a machine that’s been trained on someone else’s imagination…
Let’s play a little.
I opened an accessible Figma board (probably the wrong kind) and placed a screenshot of an equity-centered AI feedback system I’ve been imagining. And because I’m a dreamer and not a designer, I am hoping people might want to jump in. It’s early and imperfect, but it’s open. (Link here)
I encourage you to drop your comments, your ideas, your lived expertise. Remix it. Question it. Add the things I’ve missed. And let’s co-create systems that can’t so easily unsee us.
Because this isn’t just about tech. It’s about memory. Visibility. Survival. And the future we refuse to be left out of.
Further Reading, Tools & Resistance
These are just a few of the many individuals and collectives doing deep, justice-oriented work across tech, policy, embodiment, and liberation. May they nourish and strengthen your knowing.
🛠️ Justice-Centered Tech + AI Critique
- Data & Society – Research institute studying the social implications of data-centric technologies.
- The Algorithmic Justice League – Fighting bias in AI systems; founded by Joy Buolamwini.
- Design Justice Network – A community of practitioners rethinking design through a justice lens.
- AI Now Institute – Research examining the social implications of artificial intelligence.
- Mutual Aid AI – A collective exploring how AI can serve mutual aid, not capitalism.
⚖️ Policy + Legislative Tracking
- Trans Legislation Tracker – A comprehensive, real-time map of anti-trans legislation across the U.S.
- Movement Advancement Project (MAP) – Policy tracking and analysis focused on LGBTQ+ rights.
- ACLU State Legislation Tracker – Bills affecting civil liberties, by issue and state.
- Fight for the Future – Digital rights advocacy with actions related to surveillance, censorship, and equity.
🔥 Liberatory Frameworks + Community Care
- CripTech Incubator – A program supporting disabled artists working at the intersection of disability and technology.
- All Tech Is Human – A hub connecting people working to make tech more responsible, inclusive, and human-centered.
- Open Collective – Find and support mutual aid groups, digital collectives, and grassroots orgs.
📚 Writing, Storytelling & Worldbuilding
- “A People’s Guide to AI” – A zine-style intro to AI through a social justice lens.
- Pleasure Activism by adrienne maree brown – For remembering that joy is resistance.
- Ruha Benjamin’s “Race After Technology” – Essential reading on how coded systems reinforce inequality.
- Coding Rights – Feminist and queer reflections on tech and digital rights from Latin America.