Great process and technology help an engineering team deliver value to the business more quickly. Businesses have metrics like revenue to track how well they perform, but how can you do the same for your engineering team?

Engineering teams often risk measuring the wrong things. Focusing on overly local metrics (like, say, lines of code changed per Pull Request) could lead you to make changes that chase a meaningless local optimisation while degrading overall engineering team performance.

First and foremost, Haystack lets you track "North Star" metrics that give a global picture of how your engineering team is performing. These metrics capture the engineering process as a whole, whilst remaining within the control of engineering leaders, and they have been demonstrated to reflect an engineering team's ability to deliver business value. Indeed, if you've read books like "Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations", you may already be familiar with the power of these North Star metrics.

Getting the right measurements is one thing, but you also need to be able to drive improvements to these metrics. Haystack lets you break the "Cycle Time" North Star metric down into its component "Cycle Time Metrics" to identify whether slowdowns are occurring in the development process or the code review process. Review time can then be subdivided further into the "Review Time" metrics (First Response Time, Rework Time and Idle Completion Time) to diagnose exactly where the code review process is slowing down.
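
To make this breakdown concrete, here is a minimal sketch in Python of how the timings add up. The PullRequestTimeline class and its timestamp fields are illustrative stand-ins, not Haystack's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PullRequestTimeline:
    # Hypothetical timestamps, for illustration only.
    first_commit_at: datetime   # first commit on the branch
    opened_at: datetime         # Pull Request opened for review
    first_review_at: datetime   # first reviewer response
    last_rework_at: datetime    # last commit pushed after review began
    merged_at: datetime         # Pull Request merged

    @property
    def development_time(self) -> timedelta:
        return self.opened_at - self.first_commit_at

    @property
    def first_response_time(self) -> timedelta:
        return self.first_review_at - self.opened_at

    @property
    def rework_time(self) -> timedelta:
        return self.last_rework_at - self.first_review_at

    @property
    def idle_completion_time(self) -> timedelta:
        return self.merged_at - self.last_rework_at

    @property
    def review_time(self) -> timedelta:
        # Review Time = First Response Time + Rework Time + Idle Completion Time
        return self.first_response_time + self.rework_time + self.idle_completion_time

    @property
    def cycle_time(self) -> timedelta:
        # Cycle Time = development time + review time
        return self.development_time + self.review_time

pr = PullRequestTimeline(
    first_commit_at=datetime(2023, 5, 1, 9),
    opened_at=datetime(2023, 5, 2, 9),
    first_review_at=datetime(2023, 5, 2, 15),
    last_rework_at=datetime(2023, 5, 3, 11),
    merged_at=datetime(2023, 5, 3, 17),
)
print(pr.cycle_time)  # 2 days, 8:00:00
```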

Haystack also provides "Throughput" data to help identify a healthy, sustainable pace for your engineering team and prevent developers from burning out.

"Leading Indicators" can help give an early indication of trends that could impact your North Star objectives. These early indicators are vital both in helping drive improvements quickly, and at stopping poor practice from becoming a systematic issue. Haystack allows you to prevent Technical Debt before it happens by alerting on these risk factors.

North Star Metrics

North Star metrics give the team high-level goals on which to align. These metrics act as beacons, highlighting potential problems as they occur.

Change Lead Time

Change Lead Time is one of the Four Key DevOps Metrics defined in the Accelerate book and in Google’s State of DevOps reports.

It measures the most predictable and easiest-to-optimise part of the development lifecycle: first commit to deployment. This is also the most accurate part of the development lifecycle to measure, and tools like Haystack give you options to sharpen that accuracy further, such as setting global filters and excluding draft Pull Requests from the calculations.

This measure gives you insight into how best to improve the most repeatable part of the software development pipeline, helping your team become an elite performer. By taking features from developers to real-world users as quickly as possible, you deliver value sooner and gather roadmap-shaping feedback faster, all whilst the automation involved lets you ship software more reliably than ever before.

Indeed, Google’s State of DevOps reports (research led by Dr Nicole Forsgren) have shown that companies that do well on these DevOps metrics see 50% higher market cap growth over three years.

Change Lead Time is also fully within the control of the engineering team, meaning that it is the right metric for the engineering team to set their own goals around and use to measure their own performance.


Definition: Time from First Commit to Deployment.

Average: The average Haystack team has a Cycle Time (Haystack's measure of Change Lead Time) of less than 3 days.

Haystack lets you dive much deeper to find bottlenecks. In the Haystack Dashboard, you can drill down to understand where time is spent during the engineering process, including development time and the Review Time metrics described above (First Response Time, Rework Time and Idle Completion Time).
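
As a rough illustration of the Change Lead Time calculation itself (not Haystack's implementation), assuming you already have first-commit and deployment timestamps, and mirroring the draft Pull Request exclusion mentioned above:

```python
from datetime import datetime, timedelta
from statistics import mean

def change_lead_time(first_commit_at: datetime, deployed_at: datetime) -> timedelta:
    """Change Lead Time: time from first commit to deployment."""
    return deployed_at - first_commit_at

# Illustrative data: (first commit, deployment, draft flag) per change.
changes = [
    {"first_commit": datetime(2023, 5, 1, 9), "deployed": datetime(2023, 5, 3, 17), "draft": False},
    {"first_commit": datetime(2023, 5, 2, 10), "deployed": datetime(2023, 5, 4, 12), "draft": False},
    {"first_commit": datetime(2023, 5, 2, 14), "deployed": datetime(2023, 5, 9, 9), "draft": True},
]

# Exclude draft Pull Requests, as described above.
lead_times = [change_lead_time(c["first_commit"], c["deployed"])
              for c in changes if not c["draft"]]
avg_hours = mean(lt.total_seconds() for lt in lead_times) / 3600
print(f"Average Change Lead Time: {avg_hours:.1f} hours")  # 53.0 hours
```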

Deployment Frequency

Deployment Frequency helps identify the rate at which you are delivering new business value to your customers. Smaller deployments have less risk of going wrong and provide an opportunity to deliver value to your customers in shorter iterations, allowing you to learn quicker.


Definition: How often your team deploys to production.

Average: Most teams deploy at least once a day.
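
A back-of-the-envelope way to compute Deployment Frequency from a deployment log might look like this; the dates are made up for illustration:

```python
from collections import Counter
from datetime import date

# Illustrative deployment log: one entry per production deployment.
deployments = [
    date(2023, 5, 1), date(2023, 5, 1), date(2023, 5, 2),
    date(2023, 5, 3), date(2023, 5, 5),
]

days_observed = (max(deployments) - min(deployments)).days + 1
per_day = Counter(deployments)

print(f"{len(deployments) / days_observed:.2f} deployments/day "
      f"(deployed on {len(per_day)} of {days_observed} days)")
# 1.00 deployments/day (deployed on 4 of 5 days)
```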

Change Failure Rate (CFR)

Change Failure Rate is the percentage of deployments that caused a production failure; it tells us how robust and reliable our deployment process is.


Definition: Percentage of deployments which caused a failure in production.

Average: Typically less than 15%.
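
The calculation itself is a simple ratio; here is a minimal sketch, where which deployments count as "failures" depends on your own incident tracking:

```python
def change_failure_rate(total_deployments: int, failed_deployments: int) -> float:
    """Percentage of deployments that caused a failure in production."""
    if total_deployments == 0:
        return 0.0
    return 100 * failed_deployments / total_deployments

# Example: 3 incidents across 40 deployments -> 7.5%, under the 15% benchmark.
print(f"CFR: {change_failure_rate(40, 3):.1f}%")
```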

Throughput (Detecting Burnout)

Throughput gives us a sense of the team's bandwidth. It gives us a picture into how much work we can typically accomplish. Teams should aim for consistent throughput.

Definition: Number of merged pull requests

Average: The average Haystack team has a weekly Throughput of at least 3 pull requests per member

Haystack provides an overview of a team's Throughput and dynamically calculates a Healthy Area. Teams which exceed the healthy range risk burnout. Haystack can even notify you when your team is at risk of burnout via Weekly Recap notifications.

Below you can see an example of a team's throughput dropping below the Healthy Area in the week following a dramatic increase in throughput. This is a clear sign of team burnout.

[Image: Detecting software engineering team burnout with Haystack's Throughput metrics]
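
Haystack's exact Healthy Area calculation isn't documented here, but the idea can be sketched with a rolling band around recent throughput. The four-week window and one-standard-deviation band below are our assumptions, not Haystack's formula:

```python
from statistics import mean, stdev

# Illustrative weekly merged-PR counts per member: a spike, then a crash.
weekly_throughput = [3.0, 3.2, 2.8, 3.1, 5.9, 1.4]

def healthy_band(history, window=4):
    """Stand-in for a dynamic healthy range: rolling mean +/- one standard
    deviation over the preceding `window` weeks (assumed, not Haystack's formula)."""
    recent = history[-window:]
    return mean(recent) - stdev(recent), mean(recent) + stdev(recent)

for week in range(4, len(weekly_throughput)):
    low, high = healthy_band(weekly_throughput[:week])
    value = weekly_throughput[week]
    if value > high:
        print(f"Week {week}: {value} above healthy range - burnout risk")
    elif value < low:
        print(f"Week {week}: {value} below healthy range - possible burnout aftermath")
```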

Haystack additionally provides a set of drill-in metrics for Throughput (a rough sketch of these calculations follows the list):

  • Average Throughput: Average Pull Requests merged in a given timeframe

  • Merge Rate: Merged Pull Requests as a percentage of all open Pull Requests

  • Close Rate: Closed Pull Requests as a percentage of all open Pull Requests
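
As a sketch of how these ratios could be computed; treating "all open Pull Requests" as every PR open during the timeframe is our assumption about scope:

```python
def throughput_drill_in(merged: int, closed_unmerged: int, still_open: int) -> dict:
    """Drill-in ratios from Pull Request counts over a timeframe.

    Assumption: "all open Pull Requests" counts every PR that was open
    during the period (merged + closed without merging + still open).
    """
    all_open = merged + closed_unmerged + still_open
    return {
        "merge_rate": 100 * merged / all_open,           # Merge Rate (%)
        "close_rate": 100 * closed_unmerged / all_open,  # Close Rate (%)
    }

print(throughput_drill_in(merged=12, closed_unmerged=2, still_open=6))
# {'merge_rate': 60.0, 'close_rate': 10.0}
```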

Leading Indicators

Leading Indicators give insight into rule-of-thumb metrics that we have found generally drive North Star metrics. In other words, improving these metrics tends to improve your North Star metrics; conversely, when they degrade, that is a risk factor that your North Star metrics may soon worsen too. Leading Indicators provide real-time, fast-turnaround insight into how to improve and maintain your North Star metrics.

Leading Indicators form only a small part of how you manage risk as an Engineering Manager. Using Haystack's notifications feature, you can receive proactive daily health checks that warn you about multiple other critical risk factors, from Pull Requests being merged without review to Pull Requests stuck in back-and-forth discussion.

Pull Request Size

Pull Request Size gives us a sense of how large or complex pull requests are. Teams should aim to work in small batches and maintain an average pull request size of less than 200 lines of code.

Definition: Number of lines changed per pull request

Average: The average Haystack team has an average pull request size of 200 lines of code.
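
As a quick sketch of the calculation, assuming GitHub-style additions and deletions per Pull Request:

```python
def pull_request_size(additions: int, deletions: int) -> int:
    """Lines changed in a Pull Request (additions + deletions, GitHub-style)."""
    return additions + deletions

prs = [(120, 40), (30, 10), (250, 150)]  # (additions, deletions) per PR
sizes = [pull_request_size(a, d) for a, d in prs]
print(f"Average PR size: {sum(sizes) / len(sizes):.0f} lines")  # 200 - right at the guideline
```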

Work In Progress

Work in Progress (WIP) gives us a sense of how many outstanding pull requests are actively being worked on across the team. Teams should aim to reduce WIP per member and maintain an average of no more than 2 pull requests per member.

Definition: Number of open pull requests

Average: The average Haystack team has an average WIP of less than 2 per member.
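
The per-member figure is a simple ratio; a minimal sketch:

```python
def wip_per_member(open_pull_requests: int, team_size: int) -> float:
    """Open Pull Requests per team member; aim for no more than 2."""
    return open_pull_requests / team_size

print(wip_per_member(open_pull_requests=9, team_size=5))  # 1.8 -> within target
```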

Weekend Activity

Weekend Activity tells us how many members feel the need to work over weekends, giving us a sense of how overworked the team is. Teams should aim to reduce weekend work.

Definition: Sum of GitHub activities (commits, comments, pushes) over the weekend

Average: The average Haystack team has less than 1 weekend activity on average.
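
A sketch of the count, assuming a log of GitHub event timestamps (Python's weekday() returns 5 and 6 for Saturday and Sunday):

```python
from datetime import datetime

# Illustrative GitHub event log: (timestamp, event type).
events = [
    (datetime(2023, 5, 5, 16, 0), "push"),     # Friday
    (datetime(2023, 5, 6, 11, 0), "commit"),   # Saturday
    (datetime(2023, 5, 7, 20, 0), "comment"),  # Sunday
]

# weekday() returns 5 for Saturday and 6 for Sunday.
weekend_activity = sum(1 for ts, _ in events if ts.weekday() >= 5)
print(f"Weekend activity: {weekend_activity}")  # 2
```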
