
Why engineering dashboards don't change behavior

The dashboard graveyard

Every engineering org has the same story. Someone buys LinearB, Swarmia, Jellyfish, or Haystack. The first two weeks are exciting — DORA metrics, cycle time charts, deployment frequency graphs. Week three, half the team has forgotten the URL. By month two, only the person who bought it still logs in, and even they're checking it less.

This isn't a failure of any specific product. It's a failure of the delivery model. Dashboards are pull-based: they have the data, but you have to go get it. Engineers and engineering managers are already juggling GitHub, Linear, Sentry, Slack, and whatever else. Adding another tab to check is adding friction, not removing it.

Data without delivery is just noise

The metrics these tools surface are genuinely useful. Knowing that your cycle time is creeping up, that PRs are sitting unreviewed for three days, or that sprint commitments are consistently missed — that's valuable information. The problem is timing and context.

A chart showing that your average PR review time is 2.3 days is interesting in a retrospective. But it doesn't help you at 10am on Tuesday, when a specific PR has been open for three days and the person who should review it is already buried under four other reviews. The aggregate metric is correct. It's just not actionable in the moment.
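The difference between the aggregate and the moment can be made concrete. Here is a minimal sketch, in Python, of the per-PR, per-person check described above; the data structures, the three-day staleness cutoff, and the four-review overload threshold are all illustrative assumptions, not any product's actual logic:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PullRequest:
    number: int
    opened_at: datetime
    reviewer: str  # requested reviewer's username

def find_actionable_prs(open_prs, review_load, now,
                        stale_after=timedelta(days=3), overload_at=4):
    """Return PRs that are both stale and waiting on an overloaded reviewer.

    An average review time can't surface this: it takes per-PR,
    per-person state at a specific moment.
    """
    actionable = []
    for pr in open_prs:
        stale = now - pr.opened_at >= stale_after
        overloaded = review_load.get(pr.reviewer, 0) >= overload_at
        if stale and overloaded:
            actionable.append(pr)
    return actionable

now = datetime(2024, 5, 7, 10, 0)  # 10am on a Tuesday
prs = [
    PullRequest(101, now - timedelta(days=3), "alice"),
    PullRequest(102, now - timedelta(hours=6), "bob"),
]
load = {"alice": 4, "bob": 1}  # open review requests per person
print([pr.number for pr in find_actionable_prs(prs, load, now)])  # → [101]
```

The point of the sketch is that both conditions are cheap to evaluate, but neither appears in a cycle-time chart.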

Push beats pull

The alternative is push-based delivery: bring the insight to the person who needs it, at the moment they need it, in the tool they're already using. Instead of a cycle time chart, a Slack message: "hey, this PR has been open for three days and no one's looked at it — want me to assign someone?"

This is harder to build than a dashboard. You need to understand not just what the metrics are, but when they matter, who they matter to, and how to communicate them without being annoying. A dashboard can show everything and let you filter. A proactive system has to decide what's worth surfacing and what isn't.

The threshold matters enormously. Too many alerts and you become Slack noise; teams mute you within a week. Too few and nobody remembers you exist. The sweet spot is three to five genuinely useful proactive messages per week, and hitting it requires judgment, not just static thresholds.
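One simple way to enforce that discipline is an alert budget: score each candidate insight and only deliver the ones that clear a bar, capped per week. A minimal sketch, where the scoring scale, the 0.7 cutoff, and the five-per-week cap are assumptions for illustration:

```python
from datetime import datetime, timedelta

class AlertGate:
    """Weekly alert budget: only high-value insights go out, capped
    so the channel never becomes noise."""

    def __init__(self, weekly_budget=5, min_score=0.7):
        self.weekly_budget = weekly_budget
        self.min_score = min_score
        self.sent = []  # timestamps of alerts already delivered

    def should_send(self, score, now):
        # count alerts delivered in the trailing seven days
        week_ago = now - timedelta(days=7)
        recent = [t for t in self.sent if t > week_ago]
        if score < self.min_score or len(recent) >= self.weekly_budget:
            return False
        self.sent.append(now)
        return True

gate = AlertGate()
now = datetime(2024, 5, 7, 10, 0)
print(gate.should_send(0.9, now))  # high-value insight → True
print(gate.should_send(0.4, now))  # low-value → suppressed → False
```

A fixed budget like this is the crude version; the judgment the post describes lives in the scoring function, which has to weigh who is affected and whether the message is actionable right now.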

The real problem is cross-tool

Most dashboard tools connect to one or two data sources. They show you GitHub metrics or Linear metrics, but they don't connect signals across tools. A PR sitting open for five days is a GitHub signal. The ticket being marked done while that PR is unmerged is a Linear signal. The Sentry error that spiked right after the last deploy is a Sentry signal. No single-tool dashboard sees all three.

The most useful insights come from correlating these signals: this error started after that deploy, which was this PR, which was supposed to fix that ticket. An engineering manager who's been on the team for years can connect these dots from memory. A dashboard that only sees GitHub can't. A system that watches all three tools simultaneously can.
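The dot-connecting itself is mechanical once all three signals are in one place. A hedged sketch of the correlation step, linking an error spike to the most recent deploy inside a time window and then to the PR and ticket behind it; the two-hour window and the sha-to-PR mapping are assumptions (in practice that mapping might come from branch names or commit messages):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deploy:
    sha: str
    at: datetime

@dataclass
class ErrorSpike:
    fingerprint: str
    started_at: datetime

def blame_deploy(spike, deploys, window=timedelta(hours=2)):
    """Link an error spike to the most recent deploy inside the window."""
    candidates = [d for d in deploys
                  if timedelta(0) <= spike.started_at - d.at <= window]
    return max(candidates, key=lambda d: d.at, default=None)

# sha → (PR, ticket); a real system would derive this from the
# GitHub and Linear APIs rather than a hardcoded table
PR_TO_TICKET = {"abc123": ("PR #88", "ENG-142")}

deploys = [Deploy("abc123", datetime(2024, 5, 7, 9, 0))]
spike = ErrorSpike("NullPointerError", datetime(2024, 5, 7, 9, 40))
culprit = blame_deploy(spike, deploys)
if culprit:
    pr, ticket = PR_TO_TICKET[culprit.sha]
    print(f"{spike.fingerprint} started after deploy {culprit.sha} "
          f"({pr}, meant to fix {ticket})")
```

Each individual lookup is trivial; the value is that no single-tool dashboard has all three tables to join.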

What actually works

The teams that get value from engineering intelligence tools share a pattern: the tool meets them where they already are. Not in a new tab. Not in a weekly email that nobody reads. In Slack, during the workflow, with specific context about specific work.

This is why Gary lives in Slack instead of behind a login page. Not because dashboards are bad — the data they surface is real. But because the best insight in the world is useless if nobody sees it. The delivery model is the product.
