We’re all just making it up: How all good systems start

Every organization has a process for understanding what is working and what is not. Most of the time, that process takes the shape of an evaluation, an audit, or a performance review. These are the moments when teams pause, assess progress, and decide what to change. But if you look closely, you will notice that the processes we use to assess our work are rarely examined with the same scrutiny that we apply to the work itself.

A report is written, data is analyzed, recommendations are made, and everyone agrees that the next step is implementation. The pattern repeats, and yet many of the same issues persist. The reason is that the real work happens long before a report is drafted or a dataset is reviewed. It happens in how we define what is worth assessing, how we decide what data counts as meaningful, and how we design the systems that generate it. The outcomes that come later are shaped by those early decisions.

Most organizations treat analysis as neutral, technical, and objective. But analysis is always interpretive. It reflects the assumptions, priorities, and blind spots of the people and processes that create it. When a policy or product underperforms, the instinct is to look at the numbers, as if the data itself holds the explanation. More often than not, the data is performing exactly as designed. The limits are built into the framework that produced it.

Consider a city that measures road maintenance by how long it takes to fill potholes. That metric might show improvement every year, but it ignores the fact that sidewalks, curb cuts, and crosswalks deteriorate faster and receive less attention. To a wheelchair user, a stroller-pushing parent, or a delivery worker, the city’s claim of efficiency looks very different. The system isn’t broken. It is operating according to its own definition of success, one that never included those users in the first place.

Or think about a health organization evaluating its patient satisfaction scores. If the survey only captures English-language responses, the resulting data will validate the quality of care for some while obscuring the experiences of others. The assessment is precise, but not accurate. The design of the data collection tool determined what could be known about the system.

When we design or redesign anything, whether a process, a program, a policy, or a service, there are questions we ask automatically. How much will it cost? How quickly can it be done? Is it safe? Will it meet our compliance and performance standards? These are all reasonable questions. They help us make responsible, defensible decisions. But they are incomplete. The way we define cost, safety, and efficiency often assumes a single type of user, a single context, and a single perspective on success.

If we want to build systems that perform accurately across different users and conditions, we need to expand the kinds of questions we ask.

  1. Who is this system designed for, and who does it currently exclude?

  2. What information do we consider reliable, and who generated it?

  3. How do we define success, and who benefits most from that definition?

  4. What does efficiency look like, and who pays the cost when that efficiency fails?

  5. How do we define safety, and whose experience of safety is being prioritized?

When those questions are missing, analysis becomes repetition. The same problems are studied, the same findings confirmed, and the same solutions proposed. Each new cycle of evaluation seems to produce insight, but the systems underneath remain unchanged. The point of building equity into analysis is not to make it more compassionate or values-driven. It is to make it more technically accurate.

Equity is a framework that helps identify where assumptions are driving error and where design choices are creating barriers that appear inevitable but are not. When equity is part of the technical process, analysis becomes a tool for precision rather than confirmation. It helps us understand not only what our systems are producing, but also why they are producing those outcomes.

In education, for example, many assessment frameworks measure success by test scores and graduation rates. Those are clear, standardized metrics, but they tell us very little about how students experience learning or whether systems are designed for consistent access and support. When schools apply an equity lens to analysis, they start by asking which groups are being measured, what is being measured, and who is defining success. The resulting data becomes more accurate because it reflects a more complete picture of how the system functions.

In business, a company might track performance by profitability and productivity. Those measures seem neutral, but without equity built into the analysis, they conceal how policy design, workload distribution, or technology access might be affecting outcomes across teams. Adding equity to the assessment process doesn’t slow it down or make it sentimental. It makes it better. It sharpens understanding and reduces blind spots.

Good assessment and good design are not opposites. They depend on one another. Without equitable analysis, design becomes guesswork. Without equitable design, analysis becomes pattern recognition. Both produce systems that perform predictably but not always effectively. The goal is to build systems that are both stable and adaptive, capable of functioning accurately across a range of users and conditions.

At QuakeLab, we spend a lot of time thinking about how to make that shift tangible. This month, we built a diagnostic that helps organizations see how they define cost, safety, efficiency, risk, performance, and impact, and whether equity is built into those definitions. It is a short, structured reflection that reveals how your systems perform under complexity and how close they are to professionalized equity practice.

The goal is not to score high or low, but to understand the design logic that shapes your work. Because before you can build a better system, you have to understand how you analyze the one you already have.

When organizations build equity into their analytical and design processes, their assessments become more useful. The recommendations that follow are more grounded in reality, the policies that emerge are more durable, and the systems they build perform better across contexts. In other words, the work of analysis becomes what it was always meant to be: the work before the work.

If you would like to explore how your organization defines cost, safety, efficiency, risk, performance, and impact, you can access QuakeLab’s Design Integrity Diagnostic. Submit your email below to access the tool and begin your own reflection.
