The aggregate health score was one of the big ideas of the CS software era. Roll up all your signals — product usage, support tickets, NPS, engagement, login frequency, whatever else you're tracking — into a single number that tells you how healthy each account is. Green, yellow, red. Glanceable. Scalable. A dashboard your VP can screenshot for the board deck.

It also might be the thing most quietly destroying your ability to actually understand your customers.

Here's the problem with aggregate health scores. They average out the signal.

A customer who logs in every day but never gets past the same three features looks healthy. A customer who had one breakthrough month and has been declining ever since looks healthy if you're averaging across their lifetime. A customer who is using the product in completely the wrong way — technically active, headed toward churn — looks healthy until the cancellation email arrives.

The score tells you a number. It doesn't tell you what's happening.
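To make the averaging problem concrete, here is a minimal sketch. The usage numbers, function names, and the three-week trend window are all illustrative assumptions, not a real scoring model — the point is only that a lifetime average erases trajectory:

```python
# Hypothetical weekly-usage series for two accounts (numbers are illustrative).
# "steady" holds a plateau; "declining" had one breakthrough stretch, then a slide.
from statistics import mean

steady    = [50, 50, 50, 50, 50, 50]   # flat, consistent engagement
declining = [90, 80, 60, 40, 20, 10]   # early peak, steady decline since

def aggregate_score(usage):
    """Lifetime average -- the kind of roll-up an aggregate health score does."""
    return mean(usage)

def recent_trend(usage, window=3):
    """Net change over the last few periods -- the signal the average throws away."""
    recent = usage[-window:]
    return recent[-1] - recent[0]

print(aggregate_score(steady), aggregate_score(declining))  # identical: 50 vs 50
print(recent_trend(steady), recent_trend(declining))        # very different: 0 vs -30
```

Both accounts score exactly the same on the aggregate, even though one is stable and the other is two or three weeks from going dark.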

Worse, it creates a false sense of coverage. If every account has a health score, the implicit assumption is that every account is being understood. But a score derived from activity metrics is not understanding. It's a proxy for understanding, and a lossy one. The things that actually determine whether a customer succeeds — whether they're using the product in ways that produce their specific desired outcome, whether the right people are engaged, whether they've hit the milestones that predict expansion and advocacy — those things almost never show up cleanly in an aggregate score.

The CS software era created this artifact for a reason. When you have hundreds or thousands of accounts and a small team, you need some way to prioritize. The health score was the answer the tooling gave us. And it works well enough that most CS teams never questioned whether it was the right answer.

It's not.

The right move is to back into it from the other direction. Start with the outcome. What does a successful customer actually look like at month three, month six, month twelve? What did they do? What did they adopt? What results did they see? What milestones did they hit?

Then measure whether your current customers are on that path. Not whether they're active. Not whether they opened your last email. Whether they're doing the specific things that your successful customers did at this stage of their journey.
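One way to sketch that kind of measurement: define the milestones your successful customers had hit by each stage, then check each account against the set for its stage. The stage names and milestones below are illustrative assumptions, not a real schema:

```python
# Hypothetical success-path map: what successful customers had done by each stage.
# Stage keys and milestone names are made up for illustration.
SUCCESS_PATH = {
    "month_3": {"completed_onboarding", "invited_team", "created_first_report"},
    "month_6": {"automated_workflow", "integrated_crm"},
}

def missing_milestones(stage, completed):
    """Return the stage milestones this account has not yet hit."""
    return SUCCESS_PATH[stage] - completed

# An account that is plenty "active" but hasn't done the things that predict success:
account = {"completed_onboarding", "invited_team", "logged_in_daily"}
print(missing_milestones("month_3", account))  # {'created_first_report'}
```

Daily logins don't appear anywhere in the check; the only question is whether this account has done what successful month-three customers did.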

That's a very different signal. It's harder to build. It requires you to actually know what success looks like for your customers, which turns out to be a harder question than most CS teams have honestly sat with.

But it's the signal that means something. A customer on the success path who hits a usage dip is a different situation than a customer who was never on the path and has been declining for two months. The aggregate score treats them the same. The outcome-based signal treats them completely differently — because they are completely different.

The health score isn't going away. It's useful as a rough triage layer. But if it's the primary way your team understands account health, you're flying on instruments that weren't built for the terrain you're navigating.

Back into the usage path that produces successful outcomes for your specific customers. Define what meaningful progress looks like at each stage. Measure that. Build your early warning system around deviation from the success path, not deviation from average activity.
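The contrast between the two warning rules can be sketched side by side. All names, numbers, and thresholds below are illustrative assumptions; the point is that the two rules flag different accounts:

```python
# Two triage rules run over the same (made-up) accounts.
from statistics import mean

accounts = {
    "A": {"usage": [10, 10, 10], "milestones": {"completed_onboarding", "invited_team"}},
    "B": {"usage": [90, 90, 90], "milestones": {"completed_onboarding"}},
}
PATH_MONTH_3 = {"completed_onboarding", "invited_team"}  # hypothetical stage milestones

def flag_by_activity(acct, floor=30):
    """Aggregate-style rule: flag when average usage falls below a floor."""
    return mean(acct["usage"]) < floor

def flag_by_path(acct):
    """Outcome-style rule: flag when any stage milestone is missing."""
    return bool(PATH_MONTH_3 - acct["milestones"])

for name, acct in accounts.items():
    print(name, flag_by_activity(acct), flag_by_path(acct))
# Account A is quiet but on the path; account B is heavily active but off it.
```

The activity rule flags A and waves B through. The path rule does the opposite — and B, the technically-active account headed toward churn, is exactly the one the aggregate score misses.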

The score tells you something went wrong after it already went wrong.

The signal tells you it's about to.