Let’s shift the fundamental unit of analysis — from the unit to the household, and from price to fit.
Most municipal housing analysis is organized around a single question: how many units exist or need to exist at each price point? The AMI bands are the answer categories. Goals are set in those terms, progress is measured in those terms, and funding tools are designed around them. It is a tidy, internally consistent system with one significant flaw: it was designed for federal program administration, not local housing strategy. HUD needed a standardized eligibility threshold that could be applied uniformly across hundreds of cities. Most city councils inherited that threshold as their primary analytical lens — but a city council is not HUD. It governs a specific place with specific people, a specific labor market, specific geography, and a specific housing stock. None of that specificity survives the translation into AMI bands.
The better question is: which households need what kind of unit, in what location, and where do those matches currently fail?
When analysis is organized around households rather than units, several things change simultaneously.
The policy target becomes a person, not a number. A statement like “we need X units at 50% AMI” is a production target. A statement like “the people staffing our hospitals, schools, and restaurants cannot afford to live in the community they serve, and nothing in our housing pipeline reaches them” is a policy problem. One is easier to deprioritize. The other is harder to look away from.
Location becomes a first-order variable rather than an afterthought. The AMI framework treats a qualifying unit anywhere in a city as equivalent to a qualifying unit near transit, employment, schools, and childcare. They are not equivalent. For a household without reliable transportation, the location difference is the difference between a functional housing situation and one that fails on every practical dimension despite meeting the price threshold. When analysis starts with the household, proximity to what that household actually needs becomes part of the definition of adequacy — not a secondary consideration.
The gaps become visible in a way aggregate counts conceal. A jurisdiction can technically meet regional allocation targets while systematically failing specific household types, specific neighborhoods, or specific life circumstances. The households most likely to fall through are those who sit between program thresholds — earning too much for subsidized programs, too little to compete in the market. They are invisible not because the data doesn’t exist to find them but because the analytical framework isn’t organized to look for them.
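The "between thresholds" population described above can be made concrete with a toy filter. Everything in this sketch is illustrative: the cutoffs, the household records, and the field names are hypothetical placeholders, not actual program rules or real data.

```python
# Hypothetical cutoffs, expressed as a fraction of Area Median Income (AMI).
SUBSIDY_CEILING = 0.60   # above this, the household no longer qualifies for subsidy
MARKET_FLOOR = 0.90      # below this, market-rate units are assumed out of reach

# Illustrative household records, not real data.
households = [
    {"id": "H1", "income_pct_ami": 0.45},  # reached by subsidy programs
    {"id": "H2", "income_pct_ami": 0.72},  # the gap: too much for subsidy, too little for market
    {"id": "H3", "income_pct_ami": 1.10},  # competes in the market unaided
]

def in_policy_gap(h):
    """True when a household sits above the subsidy ceiling but below market reach."""
    return SUBSIDY_CEILING < h["income_pct_ami"] < MARKET_FLOOR

gap = [h["id"] for h in households if in_policy_gap(h)]
print(gap)  # ['H2']
```

The point of the sketch is that the gap population is trivially computable once the analysis is framed this way; under a unit-count framing, no query ever asks the question.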
The definition of success changes. Under the unit-and-price framework, success is production — units built at the right price point. Under a household-and-fit framework, success is matching — households that were previously unserved finding housing that works for their actual situation in a location that supports their actual life. That is a harder standard and a harder outcome to measure. It is also closer to what housing policy is actually meant to accomplish.
This reorientation has a practical consequence for how limited policy tools get deployed. If the question is “how many units at X% AMI do we have,” the answer drives toward production targets and subsidy programs. If the question is “which household types are unmatched and why,” the answer might point toward zoning reform, transit investment, bridge programs for transitional need, or data infrastructure that makes the mismatch visible enough to act on. The tool should follow the diagnosis. In most council chambers today, the tools are derived from a federal eligibility framework applied to a local problem it was never designed to diagnose.
The AMI number is not the goal. It is one input among several that together determine whether a real household can live in a community. Keeping that distinction clear is the beginning of more honest — and more effective — local housing policy.
