Welcome to the March 2022 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by a review of the year. If you’re interested (and if you have access) you can view the full dashboard.

In March there were 175 alerts generated, resulting in 21 regression bugs being filed on average 5.4 days after the regressing change landed.
Sheriffing efficiency
- All alerts were triaged in an average of 1.1 days
- 96% of alerts were triaged within 3 days
- Valid regressions were associated with bugs in an average of 2.4 days
- 83% of valid regressions were associated with bugs within 5 days
- 8% of regression bugs had the culprit bug corrected
Regression culprit accuracy
This month a new metric is being reported in the sheriffing efficiency section, which relates to how accurately we identify culprit regression bugs. When a sheriff opens a regression bug, the ‘regressed by’ field is used to identify the bug that introduced the regression. This is determined by the sheriffs and informed by our regression detection tools. Sometimes we get this wrong, and the ‘regressed by’ field is later updated to reflect the correct culprit.

The new metric measures the percentage of regression bugs where this field has been modified, and we’ve established an initial target of <15%. This isn’t a perfect reflection of accuracy, and for several reasons won’t be used as a sheriffing KPI at this time. We believe this metric can be improved by working on our sheriffing guidelines around identifying culprits, but also by improving our test scheduling and regression detection algorithms.
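As a rough illustration of how such a metric could be computed, the sketch below walks Bugzilla-style bug-history records and treats any change to ‘regressed by’ beyond the initial one as a correction. The function names are hypothetical and the record shape is based on the general format of Bugzilla’s REST bug-history responses, not on the team’s actual query.

```python
from typing import Iterable


def regressed_by_modified(history: Iterable[dict]) -> bool:
    """Return True if the bug's 'regressed_by' field was changed again
    after it was first set, i.e. the culprit was corrected."""
    changes_to_field = 0
    for entry in history:
        for change in entry.get("changes", []):
            if change.get("field_name") == "regressed_by":
                changes_to_field += 1
    # The first change sets the field; any further change corrects it.
    return changes_to_field > 1


def culprit_correction_rate(histories: list[list[dict]]) -> float:
    """Percentage of regression bugs whose culprit was corrected."""
    if not histories:
        return 0.0
    corrected = sum(regressed_by_modified(h) for h in histories)
    return 100.0 * corrected / len(histories)
```

Under this sketch, a month with one corrected culprit out of two regression bugs would report a 50% correction rate, to be compared against the <15% target.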
Summary of alerts
Each month I’ll highlight the regressions and improvements found.
- 😍 6 bugs were associated with an improvement
- 🤐 10 regressions were accepted
- 🤩 5 regressions were fixed (or backed out)
- 🤥 0 regressions were invalid
- 🤗 0 regressions are assigned
- 😨 1 regression is unassigned
- 😵 2 regressions were reopened
Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that although I believe these metrics to be accurate at the time of writing, some of them may change over time.
I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.
The dashboard for March can be found here (for those with access).