Acronyms that stress us out: AI and DEI

Don’t feel like reading right now? Listen to the recorded version of this article above!

When the two acronyms AI and DEI are mentioned in the same sentence, the conversation usually focuses on two areas:

  • Machine learning and the ways it replicates discrimination that exists in the world: The basic gist here is that machine learning is a type of AI (artificial intelligence) that allows computers to learn and improve from data without being explicitly programmed. This means if I give a computer millions of pictures of dogs, eventually it will be able to identify a dog in a video, and maybe it’ll be able to draw a dog, tell me a story about a dog, maybe even write a script for a movie from the perspective of a dog. The challenge is that because we are “teaching” computers through existing data, these data sets are riddled with our own biases and discriminatory frameworks. For example, if we feed it pictures of ‘a doctor’ and all of those doctors are white men, eventually it cannot compute a doctor being anything other than male and white (see the short sketch after this list).

  • AI and its impact on the environment: This one is particularly difficult to stomach because we are living in a climate crisis, and AI requires an incredible amount of energy and water to function. AI tools (like ChatGPT or image generators) run on massive computer servers that require constant electricity and cooling. Every time we use them, we’re tapping into those servers, which need huge amounts of energy and water to stay running and not overheat. That’s why AI has a growing carbon footprint, especially as use scales. This heavy resource demand feeds into an already precarious climate situation. Climate change does not affect us equally: the most marginalized people globally are already bearing, and will continue to bear, the brunt of the planet’s deterioration. So AI and its impact on the environment quickly becomes a significant talking point in a conversation about AI and DEI.
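
To make the “doctor” example above concrete, here is a minimal sketch in Python (using scikit-learn; the captions, labels, and model choice are invented for illustration and are not how any real image or text system is built). The point is simply that a model trained only on examples where every doctor is a man has nothing else it can learn:

```python
# Toy illustration: a classifier trained on deliberately skewed data can only
# reproduce the pattern it was given. Hypothetical captions and labels below.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

captions = [
    "a doctor in a white coat, he is reviewing charts",
    "a doctor at the hospital, he is talking to a patient",
    "a nurse at the clinic, she is preparing medication",
    "a nurse on the ward, she is checking vitals",
]
labels = ["man", "man", "woman", "woman"]  # who each (imaginary) image shows

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(captions)
model = LogisticRegression().fit(X, labels)

# Ask about a caption the model has never seen.
test = vectorizer.transform(["a doctor is smiling at the camera"])
print(model.predict(test))        # expected to lean toward 'man'
print(model.predict_proba(test))  # because 'doctor' only ever appeared with 'man'
```

The fix here is not cleverer code; it is training data that actually reflects the world we want the tool to describe.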

These are all incredibly critical conversations, made even more urgent by the rapid onslaught of artificial intelligence in everything we do. For that exact reason, however, we at QuakeLab feel there is a huge unmet need to have a more robust conversation about the intersection of AI and inequity, one that doesn’t simply end in a call for abstinence.

Why?

Try to think of five things you did today, and we can tell you how AI has been integrated into those five things, with or without your knowledge. If you sent a text message that included predictive text, that’s AI. If you called a bank, or any company that uses a call centre, you likely interacted with AI. Used any kind of digital map? You guessed it: AI was likely involved.

Laying out all the ways that AI is exacerbating existing inequities or creating new ways to oppress the most marginalized, and then leaving us with the call to action of “just don’t do it,” feels insufficient and unproductive.

So what we’re going to try to do is have a more robust conversation about how AI interacts with DEI, without evangelizing (AI is the future and you need to love it) or creating a zero-sum game (stop using AI or you’re the problem). We’ll work on landing at a point where we can show how AI has the potential to restructure, or is already restructuring, the systems that are a critical part of how we think about inequity, and how the restructuring of these systems and elements of society can harm or serve us. The goal here is to have a broad conversation about AI and equity that leaves us with a dynamic rather than nihilistic view of what comes next. We won’t be able to cover everything, but we’ll touch on a few key areas:

  1. AI and young people: The kids are alright?

  2. AI and large scale analysis

  3. AI and accessibility: A double-edged tool

AI and young people, specifically students 

Catherine (also known as @CatGPT on social media) shared some powerful insights about how students are being forced to choose between not using AI—specifically ChatGPT—or using it to manage increasingly overwhelming workloads. She notes that this isn’t a simple choice, but an effort to resist what might be “the greatest temptation to ever exist.” And we’d take it a step further: the student experience is not linear or uniform. For many students—especially those with disabilities or those experiencing financial precarity—AI tools can be vital. These tools offer them the ability to prioritize their health, manage time more effectively, and maintain income needed for survival. Yet the dominant response to student use of AI has largely been a moralistic one: just don’t use it.

Catherine also warns that young people, still developing critical skills like analysis and reasoning, are now at risk of bypassing those skills altogether—because AI tools are not just accessible, they’re deeply compelling by design. This risks widening the privilege gap even further. The students most able to “resist” AI are often those who already have access to tutors, flexible schedules, and support systems. Over time, this means those same students will be better prepared for a workforce that still rewards independent problem-solving and critical thinking, while others fall further behind.

But the answer isn’t blanket bans, like those some learning institutions have imposed. Instead, we need to critically assess who is using these tools and why—and build infrastructure that meets those needs. What supports are missing that push some students toward AI use out of necessity? What alternatives can be offered? And just as importantly, how can we incorporate AI into learning in a way that develops—not replaces—core skills? We even have the opportunity to use AI itself as a teaching tool: helping students learn to identify bias in its outputs and reflect on the limits of automated knowledge. That’s the kind of critical thinking that will actually prepare them for the world AI is shaping.

AI and large scale analysis

The idea that machine learning is replicating the worst of human behaviour is a fact, but it is not a stagnant one. Moreover, this replication is not a sin of the tool but of its creators. Perhaps there is now an opportunity to decipher what happens when the data we feed computers can help us correct the very thing we are worried about, and to use its ability to work through huge volumes of data to design better. There is a larger conversation to be had here about who gets to own these tools, but barring these roadblocks, we can envision an opportunity to collate all the data available to us through the incredible work done by activists, lawyers, DEI professionals and others. This data can be critical in identifying patterns within systems, policies, and legislation that will result in inequitable outcomes for specific groups of people. The ability to assess policies before they are enacted, legislation before it is passed, and models and frameworks before they are used could open up the ability for all of us to build equity as a technical skill. Ensuring that human beings are involved in aspects of this process would be critical, but it would mean everyone from small non-profits to individuals would have a powerful ally in supporting their efforts to design more equitable processes and outcomes in their world.
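
As a purely illustrative sketch of the kind of pattern-flagging described above: the snippet below scans a draft policy or job posting for phrases that equity practitioners often question. The phrase list, concerns, and example text are all invented, and a real review would involve far more than string matching; this is a toy, not a tool.

```python
# Toy, rule-based sketch of "flag a draft policy for human review".
# The phrase list below is hypothetical and for illustration only.
FLAGGED_PHRASES = {
    "native english speaker": "may exclude fluent multilingual candidates",
    "must have a driver's licence": "may exclude people who cannot drive",
    "able to lift 50 lbs": "is this requirement actually essential to the role?",
    "recent graduate": "may screen out older candidates",
}

def review_draft(text: str) -> list[tuple[str, str]]:
    """Return (phrase, concern) pairs found in a draft policy or posting."""
    lowered = text.lower()
    return [(phrase, concern) for phrase, concern in FLAGGED_PHRASES.items()
            if phrase in lowered]

draft = ("Candidates must be native English speakers and recent graduates, "
         "and must have a driver's licence.")
for phrase, concern in review_draft(draft):
    print(f"Flagged: '{phrase}' -> {concern}")
```

A machine-learning version of this idea would learn such patterns from the data activists, lawyers, and practitioners have already gathered rather than from a hand-written list, which is exactly why the quality and ownership of that data matters.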

AI and accessibility: A double-edged tool

AI presents an opportunity to revolutionize accessibility. Tools like speech-to-text, real-time captioning, AI-driven sign language interpretation, and screen readers are already making a tangible difference for disabled individuals. AI-powered personal assistants can support individuals with cognitive disabilities, providing reminders and assisting with daily tasks. However, accessibility isn’t just about availability—it’s about design. Are these tools affordable? Are disabled communities involved in their development? Are they deployed equitably?

Even the AI tools that claim to increase accessibility often reinforce exclusion. For example, automated captioning tools regularly fail for people with non-standard speech (e.g., accents or stutters), and AI-driven communication aids often aren’t trained on non-Western languages or dialects. Tools built without direct community input risk offering a narrow idea of what accessibility means: solving for the “average” disabled user while ignoring those at the margins.
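
One way to see the captioning problem is to measure accuracy separately for different groups of speakers instead of reporting a single average. The sketch below computes word error rate (a standard captioning metric: word-level edit distance divided by the length of what was actually said) per group; the transcripts and group labels are invented for illustration.

```python
# Hypothetical check: compare caption accuracy (word error rate) across
# speaker groups rather than reporting one overall number.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# (speaker group, what was said, what the captioning tool produced) -- invented
samples = [
    ("speech the tool treats as 'standard'", "please send the report today",
     "please send the report today"),
    ("speaker with a stutter", "please send the report today",
     "please send the the port today"),
    ("speaker with a strong accent", "please send the report today",
     "please sand the rapport to day"),
]

by_group: dict[str, list[float]] = {}
for group, said, captioned in samples:
    by_group.setdefault(group, []).append(word_error_rate(said, captioned))

for group, rates in by_group.items():
    print(f"{group}: average word error rate {sum(rates) / len(rates):.0%}")
```

If the error rate is several times higher for some speakers than for others, the tool is not “accessible” for them, whatever its headline accuracy claims.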

AI-driven accessibility tools have immense potential, but they must be intentionally designed to ensure they are not reinforcing digital divides. Investing in accessible AI that is community-driven and open-source can ensure that the technology serves those who need it most, rather than merely being a corporate afterthought.

How to use AI: Moving with the current, not getting swept under

Once again, AI is rapidly becoming an unavoidable part of the way we work, hire, educate, and make decisions. But stepping into this world without critical thought means getting swept under by the current. A good way to avoid that is to give equal priority to what AI can do and to how we should use it. Here’s how we can do this:

  1. Be self-reflective: What problem are we solving? Before implementing AI, ask: What problem are we actually trying to solve? In the mid to late 2000s, everyone and their grandma wanted an app. But good consultants would push back: What is the need? The tool should always come after identifying the need. The same applies to AI. Many institutions integrate AI simply because it exists and is marketed as “efficient.” But efficiency at the cost of equity is a dangerous trade-off. Instead of chasing technology for the sake of it, we must critically assess whether AI addresses a core issue or simply creates new ones. Here’s an example: your company wants to incorporate AI tools into its hiring, specifically in reviewing resumes. Why? Can you clearly articulate the problem this tool would be solving? If the answer is speed, efficiency, or the ability to cut down on human labour, then continue that line of questioning: Are you understaffed? Is there a reason you need to hire at breakneck speed? How is this prioritizing of speed over quality going to hurt or help long-term retention? ‘Why’ is a powerful question, and 'the need comes before the tool' will always be a useful North Star.

  2. Embrace regulation and accountability: AI should not operate in an unregulated vacuum. Governments, organizations, and communities must demand accountability. Policies that require bias audits, transparency in AI decision-making, and public input on AI systems are necessary to prevent technology from reinforcing systemic inequities. Accountability shouldn’t stop at audits. We need community oversight of AI tools, particularly those used in public services, education, healthcare, and policing. This means giving affected communities the power to pause, veto, or redesign systems before they go live. (For a sense of what one small audit check can look like in practice, see the sketch after this list.)

  3. Loudly and continuously advocate for high-quality data: AI is only as good as the data it is trained on. This is a major problem when we consider that Canada has historically been terrible at collecting disaggregated data, particularly on race, disability, and other identity factors. The result is that we are forced to rely on datasets that are not relevant to our own context, perpetuating data gaps that AI will then absorb and reinforce. And here’s what we have to remember: data gaps are the disappearance of entire populations. If a system isn’t trained on data that reflects you, you do not exist in its decision-making framework. AI doesn’t “fail” to recognize you; it has no reference for you at all. The need for better, more intentional data collection is not just an academic exercise; it is about ensuring that AI-driven decision-making does not systematically erase entire communities from opportunity, access, and protection. This is a serious design problem: we need to build infrastructure that prioritizes disaggregated, high-quality data collection as a core function, not an afterthought.
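
To make “bias audit” (point 2 above) less abstract, here is a simplified sketch of one check sometimes used in employment-selection analysis: the four-fifths (80%) rule of thumb, which compares selection rates across groups. The numbers are invented, and a real audit would look at far more than a single ratio; this is only meant to show that auditing an AI screening tool can start with very basic arithmetic.

```python
# Simplified four-fifths-rule check on hypothetical resume-screening outcomes.
# Invented numbers; a real bias audit would go much deeper than one ratio.
outcomes = {
    "group A": {"applicants": 200, "advanced": 60},
    "group B": {"applicants": 180, "advanced": 27},
}

rates = {group: o["advanced"] / o["applicants"] for group, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW: below the 0.8 threshold" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

A check like this does not tell you why the gap exists, only that one is there; that is where the community oversight and redesign described above come in.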

This is the conversation QuakeLab wants to have: not about rejecting AI, but about demanding and designing AI tools that are built to serve, not to exclude. While we’re at it, maybe we can find ways to leverage these tools without boiling our planet. If we use QuakeLab’s Equity Architecture as a lens, we can see how each pillar plays a role in shaping our relationship to AI: academic research helps identify systemic patterns of bias in algorithms; activism holds developers and governments accountable when AI harms marginalized communities; and professionalized equity work is how we actually build new systems inside organizations, through change management, audits, governance, and co-design. We need all three pillars active if AI is going to serve equity rather than undermine it.
