Note: At Haystack we're rethinking how metrics fit into engineering culture - what supports a strong engineering culture and what hurts it. We put a lot of thought into what goes into Haystack; this is one example.

Some common questions we get asked:

Can I see...

  - all my teams on one graph?
  - all members in a table?
  - who has the most reviews?
  - who has the fastest Cycle Time?

We get these questions frequently enough that we decided to write about it. It's worth explaining why a (seemingly) simple feature like comparisons doesn't exist in the app.

Here's why we don't have comparisons

Although we all have the best intentions, simple features like this are a slippery slope. Many of us are trying to figure out where we can improve and where we can help our team most effectively.

Often that starts with figuring out who we can help.

So it all makes sense why we get these feature requests so often. And trust us when we say - we've been tempted.

The unfortunate reality is that when presented with a side-by-side comparison of teams, projects, or members, we can't help but compare what we're seeing. This puts our teams at odds with one another.

When we compare teams we are:

  1. Establishing unhealthy (and unfair) competition
  2. Evaluating all teams/members against the same yardstick
  3. Not focusing on our core goal - improvement

Establishing unhealthy (and unfair) competition

When we compare teams/members against one another, we establish an unhealthy pattern in our team culture. As any engineer will tell you, all work is different. Should the Cycle Time (or Throughput) of a kernel engineer be compared to that of a frontend engineer?

The simple answer is no.

Evaluating all teams/members against the same yardstick

Teams shouldn't all be measured against the same yardstick. As we mentioned above, each team is different. You'll see this if you go to your Trends > Throughput page: we don't give you a generalized 'healthy area'. Rather, we calculate it for every team and project independently. There's a reason for this.

Each team should be evaluated independently.

Not focusing on our core goal - improvement

All this leads us to our final reason for not introducing comparisons into Haystack. We believe teams should be evaluated based on their own levels of improvement: each team should be compared to its own history rather than to other teams.

When we focus on cross-team comparisons, we're missing our core focus - is each team improving?

So what should we do then?

When we attempt to view teams side by side, we should ask ourselves - what are we really hoping to learn? Can we learn it by looking at our teams individually?

Of course, this takes time and we get that. But we believe it's time well spent and will give you a better view of how your teams are performing. By focusing on each team independently, you'll foster a much healthier culture of improvement and protect your teams from unfair comparisons or rankings.

Trust us, your engineers will thank you.

Not Satisfied?

Neither are we. We're constantly iterating and improving Haystack - trying to design the best of both worlds.

If you have ideas on how to improve Haystack or introduce organization-wide metrics without unhealthy comparisons then let's talk it out. Message our founders at [email protected] or via our chat in the bottom right corner.
