Scrum Master Tasks – handover

The following tasks were practised with two Scrum teams working on a variety of web- and app-based projects in the public sector. The teams were two years into their journey and followed the Scrum framework and ceremonies; they were co-located and ran sprint planning as two teams, which had its pros and cons. This document formed part of the handover when I moved on to pastures new, and is a snapshot of our understanding and the process we followed. The teams released quarterly and had the remnants of a waterfall process. They had high technical excellence, with continuous integration and 70% automated (unit/front-end) test coverage. Features were tested twice: first in a code-integrated environment without user data, then every two months in regression with data loaded from production, which led to delays in quality feedback. An infrastructure uplift project, intended to make daily data integrations possible, was held up in the IT infrastructure department for nine months (as SM I felt partly to blame for this!). We had a large five-million-line open-source codebase with many independent features being developed simultaneously, up to six product owners at any one time across the two teams, nine developers per team, and embedded testers; operations/infrastructure support was still in a separate department.

Daily tasks

  • Stand-ups – three questions or walk the board; keep an eye on capacity (made visible at the stand-ups on a monitor beside the boards) and work in progress, and look for opportunities for collaborative work. If all items aren't going to be completed, look to guarantee the 'Must haves' by not starting the Shoulds and Coulds until later sprints.
  • Encourage the team to collaborate on completing the Musts. It's better to get fewer completed items in the sprint than many nearly-completed items; finishing helps items flow to the next stages/people in the process.
  • A sprint only really fails if a 'Must have' for the sprint isn't complete. We front-loaded the must-haves into the earlier sprints to protect them later in the release.
  • Prioritise removing blockers for the team and chase issues raised by the developers with the other IT support teams; if the developers can raise a request in a support system they should do this first, before the SM chases. This should involve face-to-face contact with infrastructure, data admin, operations and support, to ensure they have enough timely information to make a start and resolve the request.
  • Wait until sprint planning before items are bumped to the following sprint, for transparency across the whole team.

Weekly tasks

  • Set up release planning meetings, sizing meetings, sprint retrospectives for the whole Scrum team (plus an additional separate developer retrospective if required) and sprint planning meetings, and ensure demo/review meetings are in place before the planning meeting. Ideally hold the demos a day earlier to give the POs time to reflect on and fully understand what they saw at the demo, so they can provide considered feedback.
  • Capacity task planning – ask developers to estimate tasks in hours for the sprint and allow them to commit to the work (or not).
  • Identify blockages, impediments and queues, and escalate issues that can't be resolved by the team to the Dev Manager/POs/Chief PO.
  • Pick one improvement per sprint – keep a backlog of improvements visible on the wall, and progress improvements within the team and up the chain when necessary.
  • Align process and process documentation with other teams.
  • Improve the quality of handover information between all team members; encourage the initial handover to happen face to face, to help reduce context switching and improve the quality of information.
  • Create catalysts for knowledge sharing and pair working; encourage on-demand face-to-face code reviews.
  • The PO is responsible for updating acceptance criteria and story detail.
  • Check that new project feature titles are written in a way most people will understand, using 'domain' language. Use user stories for the features.
  • Try to keep meetings with the developers to the morning so they have a clear uninterrupted afternoon to focus on development.
  • Weekly Reports
    • Save a copy of the backlog in Excel just for your own change records.
    • Send defects report (as below)
    • Ask stakeholders to come to the board

Sprint – fortnightly tasks

Typical fortnightly Scrum ceremonies – it is unlikely a single Scrum Master will be available for all the demo/review meetings. Backlog refinement meetings will occur every other week with the development teams and POs as required, so they are worth having in the diary. It is essential that devs, testers and POs attend.
The development team might have set up their own development interest meeting to improve developer/tester processes – the SM can attend but should not lead this meeting.
These standard Scrum meetings/ceremonies are strictly in the following order to ensure we get timely and considered feedback.
  1. Feature demo/review and backlog refinement for the next sprint. This needs to happen a day before planning, to allow time for the POs to feed back on the newly demonstrated features – ideally adding feedback to the existing acceptance criteria in PBIs or creating new PBIs for the next sprint.
  2. Sprint planning part 1, the Priority Meeting (confirm what's in the next sprint, and the release forecast). This is the trading meeting where POs trade PBIs among themselves and move PBIs above and below the red-line forecast. We also hold a short 20-minute retrospective with the POs at this meeting.
  3. Sprint planning part 2, the How Meeting – mainly with the developers; if there is just one PO they should also be there to add information (for these teams this is done in step 1 because of the number of POs – eight for two teams, I know!).
    1. Sprint retrospective – begins with what was done in the last sprint and what wasn't, followed by suggestions for improvements the team or organisation can make.
    2. Capacity for developers is 5.5 h/day (75%), to encourage time for personal development, innovation and pairing, and to make room for unplanned defect work or consultation/analysis.
    3. Capacity for lead developers is 5 h/day (65%), to allow time for peer reviews, training, personal development, knowledge sharing, innovation, new project support and unplanned defects.
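The capacity figures above translate into a simple sprint-capacity calculation for task planning. A minimal sketch in Python, assuming a fortnightly (10 working day) sprint; the team sizes and leave days in the example are illustrative, not from our data:

```python
# Sketch of the sprint capacity calculation used at task planning.
# The hours-per-day figures come from the guidance above; team sizes
# and leave days are illustrative assumptions.

WORK_DAYS_PER_SPRINT = 10  # fortnightly sprint

def sprint_capacity(developers, lead_developers, leave_days=0):
    """Return plannable task hours for one sprint.

    developers:      head-count at 5.5 h/day (75%)
    lead_developers: head-count at 5.0 h/day (65%), leaving room for
                     peer reviews, training and unplanned defect work
    leave_days:      total person-days of planned leave (at 5.5 h/day)
    """
    dev_hours = developers * 5.5 * WORK_DAYS_PER_SPRINT
    lead_hours = lead_developers * 5.0 * WORK_DAYS_PER_SPRINT
    return dev_hours + lead_hours - leave_days * 5.5

# A team of 7 developers and 2 leads with 4 person-days of leave:
print(sprint_capacity(7, 2, leave_days=4))  # 7*55 + 2*50 - 22 = 463.0 hours
```

Showing the number at the stand-up monitor alongside the board makes the remaining-hours conversation concrete.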
Team Utilisation – based on historic data for teams on project work
Not factoring in the team tasks below will cause projects to overrun, with maintenance and single-point-of-failure (SPOF) issues.
  • A typical Scrum team's (5 devs, 2 testers) utilisation for project work is wrongly assumed to be good when it is 100%.
    • Unless the team is operating as a dedicated new-feature team (advisable for some complex projects), most Scrum teams (and this is typical in the industry) have other commitments outside new projects – they support existing projects, incrementally enhance ongoing features, fix defects, consult on future project design, and spend time sharing and sharpening their skills and methods.
    • New-project utilisation for such teams is around 70% (not including a contingency for developers leaving or new team configurations).
    • Accepted norms are needed for continuous improvement and for time on tasks that remove single points of failure.
    • If these steps are in place there is no need for an additional large ongoing training budget.
Scrum teams developing new products while also supporting/enhancing existing products
(the split was shown here as a % of a 37-hour working week per person, based on available data)
1. Defects and defect support
2. Pipeline future-project analysis consultation, preparing L0/L1 designs
3. Pairing, knowledge sharing, code review, removing SPOFs
4. Learning new and improved techniques/methods/automation, removing tech debt
5. Departmental meetings
6. Existing-feature incremental enhancements
7. New-project focused development, including sprint planning, reviews, demos and retrospectives
Expert developers will do more of 1 and 2 (3 & 4 might also be mainly new-project work); other developers will do more of 4 & 6, and so do less new feature development. Overall this gives ~70% team utilisation on new projects.
In dedicated feature-development mode, project utilisation is around 80% (not including a contingency for developers leaving or new team configurations); the expert developers in this team are required to consult on future project analysis and critical defects. Defects can be the hardest code to fix, often requiring unique experts found in this team – perhaps the only people in the organisation who understand the 'core' system code well enough to resolve defects in it. This knowledge needs sharing, so it requires time from this team. The remaining teams most often have less experienced developers, so need additional support from this feature team.
Scrum teams 'dedicated' to new development work
(the split was shown here as a % and hours of a 37-hour working week per person, based on available data)
1. Knowledge sharing on non-project work, such as defects and enhancements, removing SPoF
2. Pipeline project analysis consultation, preparing L0, L1 designs
3. Pairing, knowledge sharing, code review, removing SPOFs
4. Learning new improved techniques / methods / automation
5. Departmental and cross team meetings
6. New-project focused development, including sprint planning, reviews, demos and retrospectives
You could also allow 5% for key members being poached for short special projects, or for a team member leaving (around 20% of team capacity lost, depending on team size, until they are replaced/re-trained).
And consider leaving another 5% unallocated to provide breathing space for improvement and innovation.
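The deductions above reduce to simple arithmetic. A sketch, for illustration only: the 70%/80% base figures come from the text, and the two 5% deductions are applied on top as discussed:

```python
# Illustrative: derive plannable new-project hours per person per week
# from the 37-hour week, applying the utilisation and contingency
# figures discussed above. The remainder of each week goes on defects,
# consultation, knowledge sharing, meetings and enhancements.

WEEK_HOURS = 37

def plannable_project_hours(dedicated_team=False,
                            poaching_contingency=0.05,
                            innovation_buffer=0.05):
    base_utilisation = 0.80 if dedicated_team else 0.70
    effective = base_utilisation - poaching_contingency - innovation_buffer
    return WEEK_HOURS * effective

print(round(plannable_project_hours(), 1))                     # mixed team: 22.2 h/week
print(round(plannable_project_hours(dedicated_team=True), 1))  # dedicated: 25.9 h/week
```

Forecasting releases from these figures rather than the full 37 hours is what keeps the red line honest.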

Kanban and Scrum Board Protocols

This applies to Kanban boards used by service teams working on support and defect work, Scrum boards for teams working on development projects, and Kanban boards used for future project pipeline work.
  • The team name on the board should be consistent with the team name in tracking tool, unless the board reflects multiple teams. 
  • When two teams share a board, include '&' between the two team names to show that they are working together. Example: Flamingo & Indigo
The name of the team's permanent Scrum Master
  • Ideally cover should be provided by another Scrum Master; however, if a Scrum Master isn't available to cover, it can be a team member (but they have to know how to manage the day-to-day Scrum activities).
  • There should always be a 'go to' Scrum Master, so that the team have someone to go to should they need impediments removed.
(Scrum boards only)
Must include the following:
  • Sprint number (e.g. Sprint 5 of 8 or Sprint 62)
  • Sprint dates (e.g. 4th June – 17th June 2015)
  • Sprint focus (description)
  • Business owner information (Product Owners)
  • The planning schedules need to show who is and who isn’t available in the team – using standard notation
  • The schedule needs to show a minimum of 2 weeks, so needs printing weekly and updating with pen daily
  • The definitions of the statuses need to be visible on the board at all times
  • When the RAG Status is either Amber or Red, there must be a corresponding post-it note in the blocked section outlining what is causing the blocker
(Kanban board only)
To be used for:
  1. Something that is stalled by other priorities (e.g. has to be put on hold), or
  2. A challenge by the Scrum Master/team on the validity of the request
  • Everything in the car park must be dated (the date it entered the car park)
  • Everything in the car park must show an owner
For Scrum boards:
  • To show all Product Backlog Items (PBI) that are proposed candidates for the next sprint
  • These should be the top priorities from the full backlog list that are not yet ‘Approved’
For the Kanban boards:
  • To show small changes that are yet to be assigned to a team member
For Scrum boards:
All items in this section must be:
  • Prioritised
  • Estimated/sized
  • Clearly defined user story details including acceptance criteria
  • Meet the definition of ready
  • Logged in TFS/Jira (and at the ‘Approved’ status )
For Kanban boards:
  • Logged in TFS/Jira and assigned to a team member and “about” to be worked on
For Scrum boards:
  • Items can be listed at either 'user story' level or 'task' level
  • Items will always progress through the following stages: New, In Progress, Ready for Test, In Test and Done (visible progress must be seen)
For the Kanban boards:
  • Must show progress in a quantifiable way e.g. % complete, by days, by person, by stage
  • To show when a user story/task has been approved but cannot move into committed, or is committed but cannot be progressed or completed
  • Everything blocked must be dated (the date it became blocked)
  • Everything blocked must show an owner and the person who needs to take action on it
(Scrum board only)
Every board must show the burn down for the current sprint and must be updated on a daily basis
All post-it notes on the board should contain the following information:
  • The Ticket, SR or IR number
  • A description
  • Additional information like priority or story points can also be added if relevant
All items that are put into either the 'Car Park' or the 'Blocked' area of the board must also have the appropriate coloured index sticker attached to them, to highlight who the issue is being escalated to (and for ease of identification).

Maintaining the Scrum board 

The team creates post-its and updates the board as and when tasks are completed, and at stand-ups.
SM creates the leave calendar on the board for two sprints in advance for convenient reference.
SM adds the burndown to the board every other day and keeps the end of desk monitor displaying the sprint view and burndown.
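The burndown on that monitor is just remaining task hours per day against an ideal line. A minimal sketch, assuming hours are re-totalled daily; the figures are illustrative:

```python
# Minimal sketch of the daily burndown data behind the sprint view.
# remaining_by_day[i] is the total task hours still open at the end of
# day i; the ideal line drops linearly from the committed hours to zero.

def burndown(committed_hours, remaining_by_day, sprint_days=10):
    ideal = [committed_hours * (1 - d / sprint_days)
             for d in range(sprint_days + 1)]
    return list(zip(remaining_by_day, ideal))

# Illustrative figures for a 463-hour sprint, three days in:
for actual, ideal in burndown(463, [463, 440, 410, 395]):
    print(f"actual {actual:>5.0f}  ideal {ideal:>6.1f}")
```

Plotting the two series side by side is enough to spot a sprint drifting off track at the stand-up.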

Release Planning 

  • Set up the release planning session before Sprint 1 for that release.
    • POs create the Epics backlog for high level requirements – ordered by value.
    • Order all PBIs related to Epics in the same order as above
    • Assign all the PBIs to the next release (if these are sized, a red-line estimate will be immediately apparent – the release bottom line!)
    • Try, with the Three Amigos, to size or estimate at least 90% of items before doing a release forecast (highlight the un-sized items in the report)
    • Create a release 'red line' forecasting what is likely to be in and out of the release (do this weekly thereafter)
  • Create the initial release forecast (this is in the same format as the sprint report below) (see Creating release burn-down)
    • Report to contain release burn-down, team velocity over 6 sprints
    • Team capacity (reduced availability) for next 3 weeks
    • At risk items with explanation
    • Items that are not 'approved' for development in the upcoming 2 sprints – items that require more info from POs before they can be developed.
    • Calculate Metrics based on the TFS Release Iteration rather than using dates in TFS.
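The weekly red-line placement described above can be sketched as follows, assuming a backlog of sized PBIs in priority order and a velocity forecast taken as the average of recent sprints (un-sized items would be highlighted separately, as noted):

```python
# Sketch of the release 'red line': given PBIs ordered by priority with
# their sizes, and a velocity forecast (average of recent sprints times
# sprints remaining), everything above the cumulative budget is forecast
# in the release, everything below it is forecast out.

def red_line(ordered_sizes, recent_velocities, sprints_remaining):
    budget = sum(recent_velocities) / len(recent_velocities) * sprints_remaining
    total = 0.0
    for index, size in enumerate(ordered_sizes):
        total += size
        if total > budget:
            return index  # first item forecast to miss the release
    return len(ordered_sizes)  # everything fits

# Six sprint velocities averaging 20 points, 3 sprints left -> 60-point budget
sizes = [13, 8, 8, 5, 13, 8, 20, 5]
print(red_line(sizes, [18, 22, 20, 19, 21, 20], 3))  # -> 6: items 0-5 are in
```

Re-running this after each priority trade keeps the "above/below the red line" conversation with the POs grounded in the data.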

Sprint Reports

    • Send a sprint report the day after sprint planning to summarise the last sprint's progress – new defects worked on; completed/incomplete/not-started/blocked items – forecast progress towards the release (resetting the red line), and explain any new priority changes (using the promoted and demoted tags for the release). Send to LTS, the Development Manager and the developers.
    • Provide the weekly defect report to LTS and the developers, asking POs to prioritise the new issues (assigning a defect to Low will remove it from future reports; it will remain at the bottom of the TFS backlog).
We have found that using one view in TFS and one report in Excel provides a consistent view on all projects when a chief product owner needs to be aware of the whole backlog and relative priorities.
Running the Backlog report
This report is at the story/feature level (tasks are not in this report). We use the principle of visualising all the team's work commitments (from lean manufacturing) – essential for reliable delivery forecasts.


Defect workflow

  1. Defects arrive in Service Manager and are automatically copied to TFS/Jira.
  2. They are triaged by support members or a PO, and the status is set to 'Approved' in TFS once verified as repeatable.
  3. Testers triage defects that are over a day old and not yet set to 'Approved', liaising with support (ideally face to face) to work out why. Approved means the defect is re-creatable, with enough info for a developer to work on it.
  4. Send the product owners a list of the past week's medium and high defects, asking them to prioritise based on severity (L/M/H), and meet face to face with testers to confirm the impact of each issue and whether there is a workaround.

Definitions of Done at each development stage

The Centre for Agile has drafted this baseline DoD
In Progress
  • Acceptance criteria/definition on the story updated if there are any changes during development
  • Code written
  • Story adheres to the appropriate style guide (e.g. UI style, accessibility standards)
  • Unit tests written
  • Tested by the developer against the acceptance criteria
  • Code/unit tests peer reviewed
  • CI build successful
  • Test scripts written
  • Test scripts peer reviewed
  • L2/AiS documentation updated
Ready for Test
  • Appropriate test environments ready. This might include:
    • Application smoke tested
    • Test data created/updated
    • Schema updated
In Test
  • All acceptance criteria tested and passed
  • All test scripts executed and passed. This might include:
    • Style guide
    • Accessibility requirements
    • Device testing
  • Product Owner has signed off and accepted the story
Release testing also includes performance/load testing, which is not in the per-PBI definition above. This definition is currently being updated in the Agile CofE; a developer definition of code quality is also being worked on.

Tags used in TFS

We use various tags to identify, report on and manage areas of work:
Code is complete, reviewed and ready for test (the Test sub-task = RFT).
Promoted / Demoted
Item has moved in priority since the release planning meeting prior to Sprint 1.
Waiting for info
Information has been requested by the PO or another team; IT are now waiting for this information.
More info required
IT are requesting more information from a PO.
There is a dependency in development or testing with another team.
Waiting for Upstream
A feature or defect fix has been submitted to the open-source community; we are waiting for it to be reviewed/approved and made available to pull into our codebase.
This is probably a technical debt item.
A necessary enhancement to an existing feature.
We are pulling updated code from the community.
CI Defect
A defect automatically found by the automated test server. This is high priority to fix as it affects features due to be released and other developers.
Work to automate testing, build, delivery or anything else to increase speed and consistency.

Data integrity checks

It's worth running standard queries or reports to identify wrongly entered or outdated information in your agile tracker, such as TFS or Jira.
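A sketch of the kind of standard check I mean, run over an export of the backlog; the field names and thresholds are assumptions for illustration, not the actual TFS/Jira schema:

```python
# Illustrative data-integrity checks over a backlog export (e.g. a CSV
# pulled from TFS/Jira). Field names here are assumptions for the sketch.
from datetime import date

def integrity_issues(items, today=None, stale_after_days=90):
    today = today or date.today()
    issues = []
    for item in items:
        # Approved items should be sized before they enter a sprint.
        if item.get("state") == "Approved" and item.get("size") is None:
            issues.append((item["id"], "approved but not sized"))
        # Every item should sit somewhere in an iteration path.
        if item.get("iteration") is None:
            issues.append((item["id"], "no iteration path"))
        # Flag items nobody has touched in a long time.
        changed = item.get("changed")
        if changed and (today - changed).days > stale_after_days:
            issues.append((item["id"], "not updated for >90 days"))
    return issues

sample = [
    {"id": 101, "state": "Approved", "size": None, "iteration": "R3\\S5",
     "changed": date(2015, 1, 10)},
]
print(integrity_issues(sample, today=date(2015, 6, 1)))
```

Running a handful of checks like this weekly catches most of the drift before it distorts the reports.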
Release metrics velocity charts
Know your team's velocity for forecasting.
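The velocity figure itself is just completed points per sprint, averaged over a window; a minimal sketch, with an illustrative sprint history:

```python
# Sketch of the velocity figure behind the chart: completed points per
# sprint, averaged over the most recent sprints (we used six).

def rolling_velocity(points_per_sprint, window=6):
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

completed = [14, 18, 22, 20, 19, 21, 20]  # illustrative sprint history
print(rolling_velocity(completed))  # average of the last six -> 20.0
```

Plotting the per-sprint values alongside this average also shows how variable the team is, which matters as much as the average when forecasting.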

Release Metrics

The most accurate data is based on the Release Iteration Path as this is work for the release only.
Panda/CatJam – 2015 release iterations (development bugs / acceptance patches, as points on PBI/Defect):
  • December – 7 sprints, core: 30 (15%)
  • September – 5 sprints: 28 (34%)
  • June – 8 sprints, core-merge: 76 (48%)

Estimates versus actual accuracy

Create a report to work out the team's actual versus estimated time. This figure should be around 90% (i.e. slightly under-estimated); research suggests this is a healthy level of optimism (reference needed).
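A sketch of how that accuracy figure could be computed from task records; the field names are illustrative:

```python
# Sketch of the estimate-accuracy figure: total estimated task hours as
# a percentage of the actual hours booked. Around 90% means the team
# slightly under-estimates, which the text above treats as healthy.

def estimate_accuracy(tasks):
    estimated = sum(t["estimated"] for t in tasks)
    actual = sum(t["actual"] for t in tasks)
    return 100.0 * estimated / actual

sample = [
    {"estimated": 8, "actual": 10},
    {"estimated": 5, "actual": 4},
    {"estimated": 14, "actual": 16},
]
print(round(estimate_accuracy(sample)))  # 27/30 -> 90
```

Track the figure per sprint rather than per task: individual tasks vary wildly, but the sprint-level ratio settles quickly.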

Managing the backlog release planning and forecasting

Example backlog (in TFS)
Note that within each project the priorities are in rough MoSCoW order. 
These are some minimum tasks I used when handing over to a new Scrum Master.
