Software Engineering Metrics FAQs

Some FAQs around how Haystack focuses our software engineering metrics on improving Developer Experience and engineering efficiency.

Written by Sagar Shewarmani
Updated over 2 years ago

At Haystack we're rethinking how metrics fit into engineering culture: what supports a strong engineering culture and what hurts it. We put a lot of thought into what goes into Haystack, and these FAQs cover some examples.

Contents

  • Why don't you track JIRA?

  • Can I compare teams/individuals in Haystack?

Why don't you track JIRA?

At Haystack, we believe it is essential that metrics are strong indicators of real-world performance and that they can be effectively improved by engineering teams.

Often, software engineering teams bring in Haystack because they've read the research behind the North Star metrics we track (in books like Accelerate). Soon after, product/delivery teams want to track other metrics they feel are important to them. At Haystack, we want to make sure we only present metrics that are healthy and productive for teams to track and improve against. That's why we've so far focused on engineering metrics instead of product/delivery metrics.

Metrics from project management tools like JIRA are subject to a great deal of uncertainty. Even when engineers press the "Start Work" and "Done" buttons at exactly the right moments, the data is still subject to excessive variance. In fact, many of our customers come to us looking for git metrics after being frustrated with the metrics available from project management tools.

Secondly, the metrics teams ask us to track are often counter-productive. For example, in the past we've heard requests to track Lead Time: the time from a ticket entering the backlog to it being completed. The reality is that in a truly agile team, a ticket is only prioritised when its business priority outweighs the other items in the backlog. When the ticket was filed is simply irrelevant to that process.

Instead, Change Lead Time lets a team measure the time from first commit to pull request merged. By shortening this time, teams don't just improve Developer Experience and remove bottlenecks to delivery; they also ensure a faster feedback cycle to ship and re-prioritise new work, allowing for short-feedback development cycles where ever greater business value is delivered.
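To make the definition concrete, here is a minimal sketch of how Change Lead Time could be computed from pull request data. The record structure, field names, and use of the median here are illustrative assumptions for this example, not a description of Haystack's actual implementation.

```python
from datetime import datetime, timedelta
from statistics import median

# Illustrative only: assumes each PR record carries its first-commit and merge
# timestamps. Haystack's real pipeline and field names may differ.
pull_requests = [
    {"first_commit_at": datetime(2023, 5, 1, 9, 0), "merged_at": datetime(2023, 5, 2, 15, 30)},
    {"first_commit_at": datetime(2023, 5, 3, 10, 0), "merged_at": datetime(2023, 5, 3, 17, 45)},
]

def change_lead_time(pr: dict) -> timedelta:
    """Time from the first commit on a branch to the pull request being merged."""
    return pr["merged_at"] - pr["first_commit_at"]

# Summarise with the median so one unusually long-running PR doesn't skew the picture.
lead_times = [change_lead_time(pr) for pr in pull_requests]
print("Median Change Lead Time:", median(lead_times))
```

Note that only merged pull requests contribute a data point, which matches the definition above: the clock starts at the first commit and stops at merge.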

Nevertheless, we do understand there are instances where it's productive to track engineering metrics outside Haystack. For example, teams may use systems like JIRA or Zendesk for tracking bugs. Having appropriate metrics and dashboards from your support system is important for measuring such things (for example, the number of bugs per team or the prioritisation of which bugs get worked on first). Likewise, some teams use JIRA or PagerDuty as part of their incident management process, so they extract data like Mean Time to Recovery from those tools.

These are healthy and appropriate uses of auxiliary data that can complement Haystack. We're also looking into how we can better track metrics like number of bugs or Mean Time to Recovery directly in Haystack, so if you're interested in tracking these, please reach out to us so we can make sure we build this functionality right.

Can I compare teams/individuals in Haystack?

Some common questions we get asked:

Can I see...

  • all my teams on one graph?

  • all members in a table?

  • who has the most reviews?

  • who has the fastest Change Lead Time?

We get these questions frequently enough that we decided to write about them. It's worth explaining why a (seemingly) simple feature like comparisons doesn't exist in the app.

Here's why we don't have comparisons

Although we all have the best intentions, simple features like this are a slippery slope. Many of us are trying to figure out where we can improve and where we can help our team the most effectively.

Often that starts with asking who we can help.

So it makes sense that we get these feature requests so often. And trust us when we say - we've been tempted.

The unfortunate reality is that when presented with a side-by-side comparison of teams, projects, or members, we can't help but compare what we're seeing. This puts our teams at odds with one another.

When we compare teams we are:

  1. Establishing unhealthy (and unfair) competition

  2. Evaluating all teams/members against the same yardstick

  3. Not focusing on our core goal - improvement

Establishing unhealthy (and unfair) competition

When we compare teams/members against one another we are establishing an unhealthy pattern in our team culture. As any engineer will tell you, all work is different. Should the Change Lead Time (or Throughput) of a kernel engineer be compared to that of a frontend engineer?

The simple answer is no.

Evaluating all teams/members against the same yardstick

All teams shouldn't be measured against the same yardstick. As we mentioned above, each team is different. You'll see this if you go to your Trends > Throughput page. We don't give you a generalised 'healthy area'. Rather, we calculate it for every team and project independently. There's a reason for this.

Each team should be evaluated independently.

Not focusing on our core goal - improvement

All this leads us to our final reason for not introducing comparisons into Haystack. We believe teams should be evaluated based on their own levels of improvement. Each team should be improving and compared to its own history rather than to other teams.

When we focus on cross-team comparisons, we're missing our core focus - is each team improving?

So what should we do then?

When we attempt to get a view of teams side by side we should ask ourselves - what are we really hoping to learn? Can this be done by looking at our teams individually?

Of course, this takes time, and we get that. But we believe it's time well spent and will give you a better view of how your teams are performing. By focusing on each team independently, you'll be able to foster a much healthier culture of improvement and protect your teams from unfair comparisons or rankings.

Trust us, your engineers will thank you.

Not satisfied with these answers? Send us feedback.

Neither are we. We're constantly iterating and improving Haystack - trying to design the best of both worlds.

If you have ideas on how to improve Haystack or introduce organisation metrics without unhealthy comparisons then let's talk it out. Message our founders at [email protected] or via our chat in the bottom right corner.
