TLDR: The Focus Meter turns delivery reliability, timeliness, rework, and escaped defects into a single 0–100 score that measures both individual and team performance to drive predictable delivery, higher quality, and fair incentives. It’s designed to be hard to game, adjustable to business priorities (velocity/flow/quality), and implementable across GitLab, Jira, or Azure DevOps via standardized ticket data and automated reporting.
Executive Summary
The Focus Meter is a unified performance framework that converts commitment reliability, timeliness, rework, and escaped defects into a single objective score used to drive predictable delivery, higher quality, and fair compensation. By combining individual and team results, it prevents gaming, reinforces collaboration, and ensures rewards reflect both personal execution and collective success. Adjustable weights allow leadership to emphasize velocity, workflow efficiency, or quality as priorities evolve, while the model’s grounding in behavioral psychology ensures that rewards are timely, motivating, and sustainable. By encouraging contributors to operate within the second standard deviation of performance (high output without burnout), the Focus Meter builds a healthy, accountable, and continuously improving engineering culture.
1. Purpose
The Focus Meter establishes an objective, quantitative framework for measuring engineering and QA productivity, delivery predictability, and product quality. It unifies commitment reliability, timeliness, rework load, and escaped defects into a single weighted metric used at both the individual and team levels. This framework supports transparent performance evaluation, data-driven coaching, and incentive-based compensation tied to measurable outcomes.
2. Benefits
- Predictable delivery through measurable commitment and completion behavior.
- Increased quality by monitoring rework and escaped defects.
- Reduced subjective performance evaluations by using numeric indicators.
- Fair and consistent compensation incentives tied to output quality and reliability.
- Automated reporting that avoids overhead for engineering managers.
- Alignment between development, QA, and organizational goals.
3. Scope and Applicability
- All software engineers, QA engineers, SDETs, and contributors participating in sprints, iterations, or monthly cycles.
- All GitLab, Jira, Azure DevOps, and other ticketing system work tracked through issues, story points, and status transitions.
- All stories, bugs, tasks, test plans, and regression work.
The Focus Meter is calculated per individual and per team. Team results influence total bonus allotment. Individual results determine personal payout within that allotment.
4. Definitions and Terminology
- Story Points: Numeric estimate representing the effort of a work item.
- Commitment Set: Work items assigned to a contributor at sprint or iteration start.
- Completed Items: Work items moved to Done within the period.
- Due Date: The defined sprint, iteration, or deadline date of a work item.
- Rework: Any reopened item or new bug created before release that is directly linked to the work item.
- Escaped Defects: Production bugs linked to features completed within the defined window.
- Focus Meter: Weighted composite score from 0 to 100.
- Weight Presets: Configurable emphasis on delivery, flow, or quality.
5. Metric Structure
The Focus Meter is calculated using four components:
- Commitment
- Timeliness
- Rework
- Escapes
FocusMeter = 100 × (wC × Commitment + wT × Timeliness + wR × Rework + wE × Escapes) ÷ (wC + wT + wR + wE)
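As an illustration, the formula above can be implemented in a few lines of Python (the function and parameter names are ours, not part of the framework; weights default to the Flow preset from Section 7):

```python
def focus_meter(commitment, timeliness, rework, escapes,
                w_c=3, w_t=4, w_r=2, w_e=1):
    """Weighted composite score on a 0-100 scale.

    Each component is expected as a value in [0, 1] (Commitment may
    slightly exceed 1.0 under the overdelivery cap in Section 6.1).
    """
    weighted = (w_c * commitment + w_t * timeliness
                + w_r * rework + w_e * escapes)
    return 100 * weighted / (w_c + w_t + w_r + w_e)
```

Because the weighted sum is divided by the total weight, any positive weights yield a score bounded by the component values themselves.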
6. Components
6.1 Commitment
Measures how much of the committed workload was completed.
Commitment = completed story points ÷ committed story points
- Cap the ratio at 1.1 to allow a modest reward for overdelivery.
- Alternatively, cap at 1.0 if overdelivery should not be rewarded.
6.2 Timeliness
Measures completion before due dates or sprint end.
Timeliness = on time story points ÷ all completed story points
LateFactor = max(0, 1 − days late ÷ grace period)
- Optionally, use weighted timeliness: late items contribute their story points scaled by LateFactor, granting partial credit for work finished within the grace period.
6.3 Rework
ReworkRate = rework story points ÷ all completed story points
Rework = 1 − min(ReworkRate ÷ Rmax, 1)
6.4 Escapes
EscapesRate = number of escaped bugs ÷ story point sum for release window
Escapes = 1 − min(EscapesRate ÷ Emax, 1)
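The four component formulas above can be sketched together in Python. The zero-denominator handling and the default Rmax/Emax thresholds are illustrative assumptions, not values prescribed by the framework:

```python
def commitment_score(completed_sp, committed_sp, cap=1.1):
    # Ratio of completed to committed story points, capped to allow
    # a modest reward for overdelivery (Section 6.1).
    if committed_sp == 0:
        return 1.0  # assumption: an empty commitment counts as met
    return min(completed_sp / committed_sp, cap)

def timeliness_score(on_time_sp, completed_sp):
    # Share of completed story points delivered on time (Section 6.2).
    return on_time_sp / completed_sp if completed_sp else 1.0

def late_factor(days_late, grace_period):
    # Partial credit for items finished within the grace period.
    return max(0.0, 1 - days_late / grace_period)

def rework_score(rework_sp, completed_sp, r_max=0.25):
    # 1 minus the rework rate, normalized by tolerance Rmax (Section 6.3).
    rate = rework_sp / completed_sp if completed_sp else 0.0
    return 1 - min(rate / r_max, 1.0)

def escapes_score(escaped_bugs, window_sp, e_max=0.05):
    # 1 minus escaped bugs per story point, normalized by Emax (Section 6.4).
    rate = escaped_bugs / window_sp if window_sp else 0.0
    return 1 - min(rate / e_max, 1.0)
```

Each function returns a value in [0, 1] (Commitment up to the cap), so the composite formula in Section 5 stays on the 0-100 scale.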
7. Weight Presets
Velocity preset
- Commitment 4
- Timeliness 3
- Rework 2
- Escapes 1
Flow preset
- Commitment 3
- Timeliness 4
- Rework 2
- Escapes 1
Quality preset
- Commitment 2
- Timeliness 2
- Rework 3
- Escapes 3
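Expressed as configuration, the three presets above might look like the following (the data structure is illustrative):

```python
# Weight presets from Section 7; each preset sums to 10,
# so relative emphasis is directly comparable across presets.
WEIGHT_PRESETS = {
    "velocity": {"w_c": 4, "w_t": 3, "w_r": 2, "w_e": 1},
    "flow":     {"w_c": 3, "w_t": 4, "w_r": 2, "w_e": 1},
    "quality":  {"w_c": 2, "w_t": 2, "w_r": 3, "w_e": 3},
}
```

Keeping the weight totals equal across presets means switching presets changes only the emphasis, not the overall scale of the score.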
8. Individual and Team Calculation
Individual metrics apply per contributor. Team metrics aggregate all contributors.
9. Compensation Framework
- Allocate base grant using seniority weights.
- Apply team factor (team focus meter ÷ 100).
- Apply individual factor (individual focus meter ÷ 100).
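A minimal payout sketch following the three steps above (the function name and the multiplicative combination of factors are assumptions consistent with Section 9):

```python
def payout(base_grant, seniority_weight, team_score, individual_score):
    """Bonus payout: base grant scaled by seniority, then by the
    team and individual Focus Meter factors (each score is 0-100)."""
    return (base_grant * seniority_weight
            * (team_score / 100)
            * (individual_score / 100))
```

Because the team and individual factors multiply, a strong individual score cannot fully offset a weak team score, which is what ties personal payout to collective success.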
10. Governance and Responsibilities
- CTO: Owns framework, reviews trends, ensures fairness.
- Engineering Managers: Validate mapping of fields and labels.
- PMs: Ensure due dates and commitments are accurate.
- QA Leads: Ensure consistent bug linking and attribution.
- DevOps or Data Engineering: Maintains nightly calculation job.
- Finance or HR: Applies payout rules.
11. Process Workflow
- Ensure fields are standardized.
- Run nightly calculation.
- Publish results.
- Apply compensation formulas.
12. Success Metrics
- Reduction in rework rate.
- Reduction in escaped defects.
- Improved commitment reliability.
- Improved on-time delivery.
- Reduced performance variance.
13. Psychology of a Reward System Built on the Focus Meter
13.1 Immediacy of Reward
- Frequent scoring cycles reinforce habits.
- Micro rewards (Bonusly, HeyTaco) strengthen motivation.
- Clear link between behavior and reward.
13.2 Resistance to Gaming
- Team × individual factor prevents gaming.
- High performers cannot carry chronic underperformers.
- Team incentives prevent selfish optimization.
13.3 Adjustable Weights
- Weights shift with business priorities.
- Allows seasonality: velocity, flow, or quality emphasis.
13.4 Maximizing Compensation
- Clear path to earning rewards.
- Transparent formula increases motivation.
13.5 Bell Curve Performance
- 2nd standard deviation = sustainable excellence.
- 3rd standard deviation = burnout or understaffing risk.
- 1st = underperformance or unclear expectations.
13.6 Team Behavior Shaping
- Peer accountability increases.
- Teams unblock each other.
- High performers elevate the group.
Appendix A: Example GitLab API Queries
curl --header "PRIVATE-TOKEN: <token>" "https://gitlab.example/api/v4/projects/<id>/issues?labels=sprint-24"
curl --header "PRIVATE-TOKEN: <token>" "https://gitlab.example/api/v4/projects/<id>/issues/<issue_id>/resource_state_events"
curl --header "PRIVATE-TOKEN: <token>" "https://gitlab.example/api/v4/projects/<id>/issues/<issue_id>/links"
Appendix B: Example SQL Queries
SELECT assignee,
       SUM(CASE WHEN completed_date <= due_date THEN story_points ELSE 0 END) AS on_time_sp,
       SUM(story_points) AS total_completed_sp
FROM completed_items
WHERE sprint_id = :sprint
GROUP BY assignee;

SELECT assignee,
       SUM(CASE WHEN reopened = TRUE THEN story_points ELSE 0 END) AS rework_sp,
       SUM(story_points) AS total_sp
FROM completed_items
WHERE sprint_id = :sprint
GROUP BY assignee;

SELECT feature_owner AS assignee,
       COUNT(*) AS escaped_bugs,
       SUM(feature_story_points) AS feature_sp
FROM escaped_defects
WHERE defect_created_date BETWEEN :start AND :end
GROUP BY feature_owner;
Appendix C: Implementation Examples
1. Common implementation pattern
Regardless of platform (GitLab, Jira, Azure DevOps), the pattern is:
- Standardize data: every ticket has an assignee, type (story, bug, test), story points, status, closed date, due date or sprint, and links to related bugs. Status transitions are recorded so you can detect reopens.
- Extract metrics per timebox: for each sprint or month, compute per user and per team: Commitment (committed vs completed story points), Timeliness (on-time vs all completed story points), Rework (story points in reopened items vs all completed SP), and Escapes (count of production bugs per story point completed in the prior N sprints).
- Store and calculate: either directly in the platform’s analytics (GitLab Insights, Jira dashboards, Azure DevOps Analytics views), or in a small warehouse (Postgres, BigQuery, etc.) fed by their APIs. Apply the Focus Meter formula per individual and per team.
- Visualize: dashboards with the Focus Meter by person and team per sprint, trend lines for each component, and outliers to discuss in 1:1s and retros.
2. GitLab implementation
2.1 Data model and conventions
In GitLab you mainly use:
- Issues
- Labels for type: type::story, type::bug, type::qa.
- Labels for sprint: sprint-2025-11-24 or similar.
- Story points: weight or a custom field if you are on GitLab Ultimate.
- Due date: GitLab built-in due date, or use milestones as iteration boundaries.
- Issue events
- State changes to detect reopened issues (Closed then Reopened).
- Issue links
- For mapping escaped bugs to their source stories.
Recommended label conventions:
- Reopened: rework or auto-detect via events.
- Production bug: env::prod or escaped.
- Story, bug, qa types via type::story, type::bug, type::test.
2.2 Calculating metrics
Approach 1: Use GitLab Analytics plus API
- Define the committed set
All issues with the sprint label and assigned to a dev at sprint start. You can snapshot assignment at sprint start with a simple script.
- Nightly script (Python or Node)
Use the GitLab REST API:
- List issues for a sprint and project: /projects/:id/issues?labels=sprint-XYZ
- Fetch events for each issue to detect reopens: /projects/:id/issues/:issue_iid/resource_state_events
- Fetch links to map bugs to stories: /projects/:id/issues/:issue_iid/links
- For each dev and sprint:
- Commitment: sum of SP completed vs SP committed.
- Timeliness: closed date versus due date or sprint end.
- Rework: issues that have a Closed then Reopened event or have a rework label.
- Escapes: count production bugs linked to stories completed in recent sprints.
- Store results
Write a CSV or JSON per sprint into a repo, or into a small DB. Optionally create synthetic “Metrics” issues to surface numbers in GitLab itself.
- Visualization in GitLab
Use GitLab Insights or custom charts:
- One chart for the Focus Meter per team per sprint.
- One chart per component (Commitment, Timeliness, Rework, Escapes).
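The reopen check in the nightly script can be a small pure function over the JSON returned by the resource_state_events endpoint (the "state" field values follow the GitLab API; the surrounding script is omitted):

```python
def was_reopened(state_events):
    """Detect a Closed -> Reopened transition from GitLab
    resource_state_events. Each event dict carries a 'state' field
    that is either 'closed' or 'reopened'."""
    seen_closed = False
    for event in state_events:  # events arrive in chronological order
        if event["state"] == "closed":
            seen_closed = True
        elif event["state"] == "reopened" and seen_closed:
            return True
    return False
```

Keeping this as a pure function over the API payload makes the rework detection easy to unit-test without hitting GitLab.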
Approach 2: Push to BI
- If you have GitLab data mirrored into a warehouse (through Fivetran, Meltano, or custom ETL), write SQL views that:
- Join issues, labels, events, and links.
- Materialize per sprint/per user Focus Meter rows.
- Use any BI tool on top.
3. Jira and Confluence implementation
3.1 Data model and conventions
Use Jira fields:
- Issue type: Story, Bug, Task, Test.
- Story points: custom field Story Points.
- Sprint: Jira Software sprint field.
- Due date: Jira Due Date field or treat sprint end as due date.
- Status: configured workflow with statuses that clearly indicate Done.
- Reopened: either the “Reopened” status or a transition back from Done.
Recommended additional:
- Label for production bugs: env-prod or escaped.
- Issue links: is caused by or relates to from bug to story.
3.2 Calculating metrics with Jira
Option 1: Jira Cloud with the Analytics / Data Lake
- If you have Jira Cloud with Analytics:
- Use the Atlassian Data Lake or the Jira REST API to pull:
- Issues with fields: assignee, story points, status, sprint, due date, resolution date.
- Issue change logs (to detect reopen events).
- Issue links between bugs and stories.
- Write SQL in the Analytics workspace (or your warehouse) to:
- Define committed set: issues in sprint with assignee at sprint start.
- Compute commitment, timeliness, rework, escapes per user and sprint.
- Apply the Focus Meter formula.
- Build a Jira dashboard:
- Use “External gadgets” or Atlassian Analytics charts.
- Show Focus Meter by team and by user.
- Link to Confluence page with detailed explanation and compensation rules.
Option 2: API plus external DB
- Use the Jira REST API:
- JQL for sprint and project, for example
project = XYZ AND sprint = 123 AND type in (Story, Bug, Task)
- With each issue:
- Read changelog to find when it was set to Done and if it was reopened.
- Read custom field for story points.
- Read link info to count escapes.
- Store in a DB, then:
- Create a focus_meter table with columns:
sprint_id, user_key, team, commitment, timeliness, rework, escapes, focus_score.
- Expose charts in your BI layer.
- Embed charts into Confluence via iframes or macros.
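Reading the changelog to find the Done timestamp and any reopen (as described in Option 2) can be sketched as follows; the changelog shape follows the Jira REST API, while the status names are assumptions for a typical workflow:

```python
def jira_reopen_and_done(changelog_histories, done_status="Done"):
    """Scan a Jira issue changelog for the last transition into Done
    and any transition back out of Done (a reopen). Each history has
    'created' and an 'items' list with 'field', 'fromString',
    'toString' entries, per the Jira REST API changelog shape."""
    done_at = None
    reopened = False
    for history in changelog_histories:
        for item in history.get("items", []):
            if item.get("field") != "status":
                continue
            if item.get("toString") == done_status:
                done_at = history.get("created")
            elif item.get("fromString") == done_status:
                reopened = True
    return done_at, reopened
```

The same function works whether the workflow uses an explicit "Reopened" status or any other transition out of Done, since only the "fromString" side is checked.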
4. Azure DevOps implementation
4.1 Data model and conventions
Use Azure DevOps Boards:
- Work item types: User Story, Bug, Task, Test Case.
- Story points: Story Points or Effort field.
- Iterations: ADO Iterations for sprints.
- Areas: for teams or subsystems.
- States: New, Active, Resolved, Closed, Reopened, etc.
Recommended:
- Tag production bugs: Tag ProdBug or a custom field.
- Use links: “Related” or “Caused By” between bugs and stories.
4.2 Using Analytics Views and OData
- Create Analytics Views in Azure DevOps:
- One view for Work Items with fields: Work item ID, Type, Story Points, State, Area, Iteration, Assigned To, Created Date, Closed Date, Due Date (if used).
- One view for Work Item Revisions or History for state transitions.
- One view for Work Item Links to find escapes.
- Connect with Power BI or direct OData:
- Use the OData feed from Azure DevOps Analytics.
- In Power BI:
- Load Work Items, Revisions, and Links views.
- Use Revisions to detect reopened items (Closed then Active again).
- Use Links and ProdBug tag to find escaped defects.
- Build measures in Power BI.
- Implement the Focus Meter as a DAX measure, for example:
Focus = 100 * (wC * Commitment + wT * Timeliness + wR * ReworkScore + wE * EscapesScore) / (wC + wT + wR + wE)
- Dashboards:
- Per team and per individual: Focus Meter by iteration.
- Component breakdown.
- Publish to ADO dashboards via Power BI integration.