What Is DORA?
DORA stands for DevOps Research and Assessment. It is an influential research program, now operated by Google, that aims to understand high performance in the context of software development and delivery. By analyzing data from thousands of software professionals and IT organizations, DORA has identified key metrics that correlate with successful DevOps practices. These metrics help organizations measure their development and operational performance, providing a benchmark to strive for or maintain.
Understanding the importance of these metrics is crucial for any organization looking to improve its software delivery performance. By focusing on the areas highlighted by DORA, companies can work towards achieving higher efficiency, better quality products, and more satisfied customers. This focus on continuous improvement and measurement is at the heart of the DevOps philosophy, making DORA’s findings a valuable resource for teams and organizations worldwide.
The DORA Metrics and How to Calculate Them
DORA identifies four key metrics as indicators of high performance in DevOps. For each metric, Google publishes benchmark numbers in its annual Accelerate State of DevOps report.
Deployment Frequency
Deployment frequency refers to how often a team deploys code to production. High-performing teams deploy code frequently, as this enables them to deliver value to their customers more quickly. Regular deployment also helps to reduce the risk associated with each deployment, as smaller, more frequent deployments are easier to test and troubleshoot than large, infrequent ones.
Google’s benchmark numbers:
- Elite performers: Multiple times a day
- High performers: Once a week to once a month
- Medium performers: Once a month to once every six months
- Low performers: Less than once every six months
How to calculate this metric:
The deployment frequency metric can be calculated by dividing the total number of deployments by the number of days (or weeks) in the period under consideration.
For example, if a team deployed 30 times over a month, the deployment frequency would be calculated as 30 deployments divided by 30 days, equating to a deployment frequency of once per day.
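To make the arithmetic concrete, here is a minimal Python sketch of this calculation. The deployment dates are hypothetical stand-ins for data you would pull from your CI/CD system.

```python
from datetime import date

# Hypothetical deployment dates, e.g. exported from a CI/CD system.
deployments = [date(2024, 5, d) for d in range(1, 31)]  # 30 deployments in May

period_days = 30  # length of the observation window
frequency_per_day = len(deployments) / period_days

print(f"Deployment frequency: {frequency_per_day:.1f} per day")  # 1.0 per day
```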
Mean Lead Time for Changes
The mean lead time for changes is the average amount of time it takes for a code commit to be deployed to production. This includes the time taken for code review, testing, and any other steps in the deployment pipeline. A shorter mean lead time indicates a more efficient and streamlined process, as it shows that changes can be implemented quickly and effectively.
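Google’s benchmark numbers:
- Elite performers: Less than one hour
- High performers: Between one day and one week
- Medium performers: Between one month and six months
- Low performers: More than six months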
How to calculate this metric:
Mean lead time for changes is calculated by averaging the time from commit to production deployment across all changes in a specified period. To compute it, sum the lead times of all changes deployed in the period and divide by the number of changes.
For example, if an organization made 50 changes in a month, with a total lead time of 1000 hours from commit to deployment, the mean lead time for changes would be 1000 hours divided by 50, resulting in an average lead time of 20 hours per change.
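Here is a minimal Python sketch of the same calculation, using hypothetical commit and deployment timestamps in place of real pipeline data.

```python
from datetime import datetime

# Hypothetical (commit_time, deploy_time) pairs for changes shipped this month.
changes = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 2, 5, 0)),   # 20 hours
    (datetime(2024, 5, 3, 10, 0), datetime(2024, 5, 4, 6, 0)),  # 20 hours
]

# Sum the commit-to-deploy durations, then divide by the number of changes.
total_hours = sum((deploy - commit).total_seconds() / 3600
                  for commit, deploy in changes)
mean_lead_time = total_hours / len(changes)

print(f"Mean lead time for changes: {mean_lead_time:.1f} hours")  # 20.0 hours
```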
Change Failure Rate
The change failure rate refers to the percentage of deployments that result in a failure, such as a service outage or a functionality issue. A lower change failure rate indicates a higher quality of code and a more reliable deployment process. It also reflects a team’s ability to prevent and catch issues before they affect end-users.
Google’s benchmark numbers:
- Elite performers: 0-15%
- High, medium, and low performers: 16-30%
How to calculate this metric:
Change failure rate is the percentage of deployments in a given time period that result in a failure. To calculate it, divide the number of failed deployments by the total number of deployments, then multiply by 100 to get a percentage.
For example, if there were 100 deployments in a month and 15 of those deployments failed, the change failure rate would be (15/100) * 100% = 15%.
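A minimal Python sketch of this arithmetic, using the hypothetical figures from the example above:

```python
def change_failure_rate(failed: int, total: int) -> float:
    """Percentage of deployments in the period that caused a failure."""
    return failed / total * 100

print(f"Change failure rate: {change_failure_rate(15, 100):.0f}%")  # 15%
```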
Mean Time to Recovery
The mean time to recovery is the average amount of time it takes for a team to recover from a failure or outage. This includes the time taken to identify the issue, develop a fix, and deploy the fix to production. A shorter mean time to recovery indicates the team’s ability to respond quickly and effectively to issues, minimizing the impact on end-users.
Google’s benchmark numbers:
- Elite performers: Less than one hour
- High performers: Less than one day
- Medium performers: One day to one week
- Low performers: Over six months
How to calculate this metric:
Mean time to recovery (MTTR) is calculated by averaging the total time taken to recover from all failures during a specified period. This involves summing up the total recovery time for all incidents and dividing by the number of incidents.
For example, if an organization experienced 10 incidents in a month, with a total recovery time of 40 hours, the mean time to recovery would be 40 hours divided by 10, resulting in an average recovery time of 4 hours per incident.
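A minimal Python sketch of this calculation, using hypothetical per-incident recovery times:

```python
# Hypothetical recovery times (in hours) for each incident in the period.
recovery_hours = [2, 6, 3, 5, 4, 4, 5, 3, 4, 4]  # 10 incidents, 40 hours total

# Average recovery time across all incidents.
mttr = sum(recovery_hours) / len(recovery_hours)

print(f"Mean time to recovery: {mttr:.1f} hours")  # 4.0 hours
```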
Why DORA Metrics Matter for DevOps
DORA metrics provide valuable insight into a team’s performance and are becoming an important tool for continuous improvement in the world of DevOps.
They allow teams to benchmark their performance against industry standards and identify areas where they are excelling or lagging. By doing so, DevOps teams can prioritize areas for improvement and track their progress over time.
DORA metrics also promote a culture of transparency and accountability within teams. They make it clear what is expected of each team member and provide a tangible measure of how individual contributions impact the team’s overall performance.
Lastly, these metrics provide a common language for discussing performance and improvement. This facilitates better communication between team members and stakeholders, leading to more effective decision-making.
Implementing DORA Metrics in Your Organization
Here are a few steps you can take to start using the DORA metrics in your DevOps team.
1. Baseline Current Performance
First, you need to understand where you stand currently—your baseline. This provides a clear picture of your organization’s current operational efficiency and effectiveness.
To establish your baseline, start collecting data for each of the DORA metrics. This might involve tracking and recording the frequency of deployments, the time taken for changes from commit to deployment, the time it takes to restore service after a failure, and the number of changes that result in failure.
2. Define Improvement Targets
Improvement targets are the goals you set for improving your DORA metrics. When defining targets, be realistic: consider your organization’s particular circumstances and constraints.
For instance, an organization with a mature DevOps culture might have different targets compared to a company that’s just starting out with DevOps. Your targets should be specific, measurable, achievable, relevant, and time-bound (SMART).
3. Foster a Culture of Continuous Improvement
DevOps relies on a culture of continuous improvement. This means that everyone in the organization is committed to constantly seeking ways to improve their work and the overall performance of the organization.
Creating a culture of continuous improvement requires strong leadership and a clear vision for improvement. It also requires a commitment to learning and development, and an environment that encourages feedback and open communication. You can use the DORA metrics to clarify to teams what continuous improvement means and how to quantify their success.
4. Adopt Tools to Automate the Tracking of DORA Metrics
Leverage tools and platforms that can help automate the tracking of these metrics. This saves time and resources, while also helping ensure accuracy and consistency in your data. For instance, you can use deployment automation tools to track the frequency of deployments. You can also use issue tracking systems to track the time to restore service. Many DevOps tools are adding specialized features that display relevant DORA metrics.
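If your tooling doesn’t yet expose DORA metrics directly, a lightweight starting point is to export deployment events and compute the metrics yourself. The sketch below assumes a hypothetical CSV export with `deployed_at` and `status` columns; the file name and column names are illustrative, not any specific tool’s format.

```python
import csv
from datetime import datetime

deploy_count = 0
failed_count = 0
first = last = None

# deployments.csv is a hypothetical, non-empty export from a CI/CD system,
# with one row per production deployment: deployed_at,status
with open("deployments.csv") as f:
    for row in csv.DictReader(f):
        deploy_count += 1
        if row["status"] == "failed":
            failed_count += 1
        ts = datetime.fromisoformat(row["deployed_at"])
        first = ts if first is None or ts < first else first
        last = ts if last is None or ts > last else last

period_days = max((last - first).days, 1)
print(f"Deployment frequency: {deploy_count / period_days:.2f} per day")
print(f"Change failure rate: {failed_count / deploy_count * 100:.0f}%")
```

A script like this is enough for a baseline; once the metrics prove useful, a dashboard in your existing DevOps platform is usually easier to maintain.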
5. Consider Integrating Additional Metrics to Complement DORA Metrics
While DORA metrics are useful for measuring DevOps performance, they are not the only metrics you should consider. There are several other metrics that can complement DORA metrics and provide a more holistic view of your organization’s performance.
These might include metrics related to customer satisfaction, business value delivered, operational efficiency, and team morale, among others. By integrating these additional metrics, you can gain a more comprehensive understanding of your organization’s performance and identify areas for improvement that might not be captured by DORA metrics alone.
Software Documentation for DevOps Teams with Swimm
Swimm’s knowledge management tool for code solves the challenges of documentation for dev teams. By treating software documentation like code, documentation and code are created and maintained together.
- Teams streamline documentation, sharing knowledge across teams and repositories.
- All documentation is saved as code so that your docs are easily read as Markdown files within the codebase and are reviewed on Git.
- Swimm’s IDE plugins with VS Code and JetBrains make documentation incredibly easy to find – right next to the code that the docs actually relate to.
- Swimm’s powerful code-coupled editor helps engineers create and edit docs quickly with slash editor commands and all the capabilities of rich text, Markdown, and live code snippets. The editor is available both from Swimm’s Web App and Swimm’s IDE plugins.
- Docs always stay up to date with Swimm’s patented Auto-sync feature.