Imagine gathering requirements as being like fishing. You don’t want to catch every single fish in one haul, as by the time you had processed them all some would have gone off. Focus on catching the big fish first using a wide net; later you can use a finer net to catch the smaller fish. The same applies to requirements: the big requirements help you understand the product direction better.
Definition: Product Backlog refinement is the act of adding detail, estimates, and order to items in the Product Backlog.
Length: “Refinement usually consumes no more than 10% of the capacity of the Development Team” (1)
People: Product Owner, Development Team
Outcome: Product Backlog items become more transparent through refinement and the top items are deemed ‘Ready’ for a Sprint (i.e. can be ‘Done’ by the team within one Sprint)
Benefits
Product Backlog Refinement Benefits (2, page 103, Mastering Professional Scrum)
Increased Transparency – adding details for what you plan to deliver, and your progress
Clarification of Value – outcomes of what you are trying to achieve become clearer, and helps the team to build the right thing
Breaking things into consumable pieces – increases flexibility for the team to meet ‘Done’ in a Sprint
Reduction of dependencies
Forecasting
Incorporation of learning – gained learning is incorporated into the product
Facilitation Techniques
Fishbowl collaboration (3)
Three to five people collaborate around a whiteboard (the fish in the bowl)
Others in the room may observe but are not allowed to speak
If someone from outside the bowl wishes to speak they can dive into the bowl which bounces out one of the fish already in the bowl
People can leave anytime if they are not finding it valuable
This technique keeps the conversation small and productive
Three Amigos and BDD
Backlog Refinement is attended by someone to represent each discipline in rotation (i.e. Three Amigos of Product, Development, Testing)
Behaviour Driven Development is used to focus the conversation on the behaviour of the User Story
Use the question of ‘Which features in our product backlog are needed in order to have a successful product release?’
Test the list generated against the question ‘If we delivered all of our must-do PBIs except this one, would we still have a successful product release?’
‘Same-size’ items (all PBIs are roughly the same size)
‘Right size’ items (at least one item can be delivered in a Sprint)
Probabilistic Estimating (1, page 110)
Monte Carlo is an example of probabilistic estimating
Uses historical data with a statistical sampling method
Output is a range of possible future outcomes with a confidence level on that range
Embraces uncertainty of predicting the future
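As a sketch of the idea (the throughput numbers are invented for illustration), a Monte Carlo forecast resamples historical sprint throughput to answer ‘how many sprints to finish N items?’ as a range with confidence levels rather than a single number:

```python
import random

def monte_carlo_forecast(historical_throughput, backlog_items,
                         trials=10_000, seed=42):
    """Simulate how many sprints it takes to finish `backlog_items`,
    sampling each sprint's throughput from historical data."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        remaining, sprints = backlog_items, 0
        while remaining > 0:
            remaining -= rng.choice(historical_throughput)
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    # A range of possible outcomes with confidence levels, not one number
    return {p: outcomes[int(trials * p)] for p in (0.50, 0.85, 0.95)}

# Invented example: items finished in each of the last six sprints
forecast = monte_carlo_forecast([4, 6, 7, 5, 8, 3], backlog_items=30)
```

Reading the result as ‘85% of simulated futures finish within X sprints’ is what lets the forecast embrace, rather than hide, the uncertainty.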
Planning Poker
HOW (3, page 130)
Each person has a deck of cards with the Fibonacci numbers on them
Item to be estimated is brought to the table
Everyone pulls the card they think represents the right amount of effort
If everyone is within 2 cards of each other, add them all up, take the average and move on
If people are more than 3 cards apart, the low and high ends talk through their reasons, then everyone does another round
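The card rules above can be sketched in code. This is a hypothetical simplification: distance is measured in positions within the Fibonacci deck, and the gap between ‘within 2 cards’ and ‘more than 3 apart’ is collapsed into a single threshold:

```python
# Standard Planning Poker deck (Fibonacci-style values)
DECK = [1, 2, 3, 5, 8, 13, 21, 34]

def poker_round(votes):
    """Return an agreed estimate, or None if another round of
    discussion is needed because the votes are too far apart."""
    positions = [DECK.index(v) for v in votes]
    if max(positions) - min(positions) <= 2:
        # Within 2 cards of each other: average and move on
        return sum(votes) / len(votes)
    # Outliers explain their reasoning, then everyone re-votes
    return None

poker_round([3, 5, 8])   # spread of 2 cards: averaged estimate
poker_round([2, 21, 5])  # too far apart: None, discuss and go again
```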
Benefits
‘take a broad array of opinions, attempts to remove as much bias as possible, and with informed, yet anonymous, statements, narrows down opinions into a generally accepted estimate’ (3, page 129)
Avoids bandwagon effect (everyone jumping on an idea because they thought everyone else was on board) (3, page 125)
Avoids halo effect (when one characteristic of something influences how people perceive other, unrelated characteristics) (3, page 126)
Estimation Caution
Asymmetric nature of software estimation errors (2, page 203)
‘There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.’
Former US Secretary of Defense Donald Rumsfeld
Tasks are more likely to take longer than estimated than they are to take less time than estimated (i.e. estimation errors are asymmetric)
Developers estimate based on known knowns, and sometimes known unknowns.
Some error comes from misunderstanding these
Most comes from unknown unknowns
Larger tasks are likely to have more unknown unknowns, so the risk grows rapidly with size
The Cone of Uncertainty shows the likelihood of under- and over-estimation as symmetric
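An illustrative simulation (my own, not from the book) of why the errors end up asymmetric: if actual durations are right-skewed around the estimated median, roughly half of tasks still finish late, but the late ones can overrun by far more than the early ones can underrun, so the average exceeds the estimate:

```python
import math
import random

rng = random.Random(0)
estimate = 5.0  # median estimate in days

# Right-skewed (lognormal) actual durations around that median
durations = [estimate * math.exp(rng.gauss(0, 0.6)) for _ in range(100_000)]

average = sum(durations) / len(durations)
overruns = sum(1 for d in durations if d > estimate)  # ~half exceed the median
worst = max(durations) - estimate  # the overrun tail is long...
best = estimate - min(durations)   # ...the underrun side is bounded near zero
```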
Stories express something that will be valuable to a user (2)
Benefits of User Stories
They act as a placeholder to remind us to have conversations rather than follow written requirements contracts
They encourage interaction through the conversations
They are comprehensible by all, from developers through to the wider business, and contain no technical jargon
They are the right size to be able to plan and estimate
When working iteratively we don’t have to write all stories upfront and can refine as we go along the project
To make clear the purpose
Example from Star Trek: if the captain wants the date to be auto-logged rather than saying it out loud, is it to save him time or for audit logging? One has far stricter needs than the other! (4)
Focus on who it benefits
Example: as a commuter, I want a car to drive to work. Very different if the commuter is a suburban commuter vs a farmer in South Dakota (4)
Splitting Stories
Vertical Slicing
Always slice stories from the point of view of our customer
All stories must have value, which can be to our customer directly or indirectly
The technical layers that make up the value are to be kept together (like the layers of a birthday cake) as one layer on its own does not provide value
Splitting stories into the technical layers can lead to defining the solution and restricting abilities to creatively iterate
Use these scenarios to break the story down into smaller pieces of value
Cake Metaphor (3)
Cake Metaphor Doodle
To split the story down you could break it down horizontally and take each ingredient in turn
But if you slice horizontally you will just get egg and not know if the cake ingredients all work together to provide value (it could taste awful all together!)
Instead, bake a cupcake!
You get all the ingredients but a smaller size to check the recipe works
A Cupcake Doodle!
User Story Construction
The Three C’s
CARD
Description that defers the details
As a [user type]
I want [functionality]
So that [benefit]
CONVERSATION
Verbal conversations are best
Highlights that we don’t know all the detail
Reminder of conversations we have had, work that has been done, any wider context
CONFIRMATION
Acceptance Criteria
Must be met for a story to be considered Done
User point of view only
No design specifications unless the user wants them
Good User Story Guidance
INVEST Criteria
INDEPENDENT
Stories shouldn’t have to be completed in a specific order and should not rely on another story to be started
This is not always achievable
Common examples are when you need something to exist before you can build upon it or when one story reduces the complexity of another making it seem logical to do them in a specific order
How do we respond to this?
Don’t put dependent stories in the same sprint to keep the flow
Join together dependent stories
NEGOTIABLE
Stories are not contracts – they are short descriptions of functionality
Open to discussion with the team
Simplify, alter, add to in whatever way is best for the goal and the product
VALUABLE
Valuable to the user/customer
Technology assumptions have no value to the customer!
ESTIMABLE
Good size
Just enough information, but not too much to become confusing
If an investigation is needed before estimation, use a spike
SMALL
Lower Limit is coffee break size, i.e. big enough that you deserve a coffee break for finishing
Scrum team will highlight size issues in refinement
TESTABLE
Language is important
It must be specific, unlike “A user never has to wait for it to load”
“It takes two seconds max to load in 95% of cases” is much better
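A criterion phrased that way is directly checkable. A minimal sketch, assuming `sample_ms` is a list of measured load times in milliseconds (the function name and sample data are invented):

```python
def meets_criterion(sample_ms, limit_ms=2000, quantile=0.95):
    """True if at least `quantile` of the samples load within `limit_ms`."""
    within = sum(1 for t in sample_ms if t <= limit_ms)
    return within / len(sample_ms) >= quantile

# Invented measurements: one of ten loads took longer than two seconds
meets_criterion([500, 900, 1800, 2100, 700, 1200, 1500, 1600, 400, 1900])
```

Vague wording like ‘never has to wait’ offers nothing equivalent to test against.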
Acceptance Test Writing Guidance (2)
Add tests as long as they add value
Capture assumptions
Provide basic criteria for when story is Done
Questions to ask
What else do devs want to know?
What am I assuming?
What can go wrong?
Circumstances where the story might behave differently
Usability
Stress
Performance
Stories should be closed, not open ended
Use language to ensure that the stories have a definite closure
Continuous jobs are bad (e.g. managing an ad)
Instead use closed actions like “review responses” and “edit response”
User Story Card Example(3)
User Story Card Doodle
Story reference number
Author of the story
Value/ importance
Status/ release
Size/ estimate
Date created
Dependencies
Metrics (if relevant)
Description
Story title (this is the only mandatory field)
User Story Smells
Too Small
EVIDENCE: Frequent need to revise estimates depending on order
FIX: Combine the stories together
Not Independent
EVIDENCE: Difficulty in planning the iteration and lots of swapping stories in and out
FIX: Re-split the stories
Gold Plating
EVIDENCE: Extra wow factor being added that isn’t in the story
FIX: Increase the visibility in the Daily Scrum
Too Much Detail
EVIDENCE: Too long discussing the story
FIX: Strip the story back to its basics and focus on the user point of view
Trouble Prioritising
EVIDENCE: Stories cannot be prioritised logically
FIX: Stories may need to be broken down smaller or re-written from a value point of view
Using the practice of Behaviour Driven Development with a subset of the team representing all skill sets to refine user stories
Overview
Towards the end of 2018 I went to a workshop at Agile Leicester on Behaviour Driven Development (BDD) and Three Amigos. This article gives some references about where you can learn more about these techniques and then continues through the introduction of this technique to a team and the results that were achieved. I gave a talk back to the Agile Leicester community on this subject in early 2019 and the picture above is of that talk.
Motivation
The stories we were using were closer to contracts than stories
Research
I went to a workshop about BDD and Three Amigos at Agile Leicester given by Stuart Day.
We were keen on using the brilliant minds of all the team members to create the vision and to be a part of creating the solution rather than developing exactly what was written.
Our stories had lengthy acceptance criteria that weren’t focused on the user needs and stipulated exactly what the solution should be leaving no space for creativity.
There was not enough room to question, influence, and negotiate in the stories.
Experiment Hypotheses
By using BDD and three amigos we would:
Focus our communications on the value to a user of a feature and the behaviours that would help us ultimately achieve that value
This would support us in negotiating and sharing solution ideas, moving away from it being pre-decided
Spread the understanding of the story through the team and enable empowerment
This would allow the team to make decisions on the solution with the Product Owner
In-depth behaviour discussions would enable a pragmatic approach to the MVP
This would allow us to deliver smaller increments of working software to allow feedback
Method
To achieve our hypotheses we had to change how we worked.
Story Writing
First thing to change was how we wrote stories. I worked with our Product Owner to refocus the stories back on what the User wanted and only that. No more design specifications, no more interaction specifications that a user doesn’t need to gain the value. We stripped them right back to CARD and CONFIRMATION all from the user’s point of view. (see Ron Jeffries explain CARD, CONFIRMATION, and CONVERSATION here)
Three Amigos
We changed how we talked through stories as a team. We previously had backlog refinement where the Product Owner would present to the team the stories and then we would move on when everyone understood.
We started with four hour-long Three Amigos sessions per sprint to refine the stories ready for the next sprint.
The team would decide who turned up from each specialty (i.e. who for the QA amigo and who for the Developer amigo) and the Product Owner would always be there, sometimes with a stakeholder if it made sense.
BDD
We used the acceptance criteria as a guide to writing the scenarios, as Stuart demonstrated in his talk. We talked through each user-driven acceptance criterion and created all the behaviour scenarios that supported that confirmation.
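As a hypothetical illustration (the feature, names, and threshold are invented, and this uses plain Python rather than a BDD tool such as Cucumber or behave), a behaviour scenario can be written as Given/When/Then comments over an executable test:

```python
class Basket:
    """Toy shopping basket used only to illustrate a BDD-style scenario."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_orders_over_fifty_get_free_delivery():
    # Scenario: orders over 50 qualify for free delivery
    # Given a basket containing goods worth more than 50
    basket = Basket()
    basket.add("book", 30)
    basket.add("lamp", 25)
    # When the customer checks out
    qualifies_for_free_delivery = basket.total() > 50
    # Then delivery is free
    assert qualifies_for_free_delivery

test_orders_over_fifty_get_free_delivery()
```

Keeping the Given/When/Then language in the test itself is what lets the Three Amigos debate the behaviour without debating implementation.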
Results
1 – Focus our communications on the value to a user of a feature and the behaviours that would help us ultimately achieve that value
Changing how we wrote the stories brought the focus back to WHY we were developing the story in the first place and the unnecessary words were removed
Our Product Owner has felt that these discussions help to marry the user need and the technical depth behind the behaviours
Originally this took up more time for her: instead of just writing her story and presenting it to the team, she had to spend time creating it with the team
But previously a change to a story was very time-consuming, which made it tempting to resist the change. Now change happened naturally
Overall, time was saved
2 – Spread the understanding of the story through the team and enable empowerment
Collaboration with people of all different disciplines shone a light on different options and things that previously may not have been thought of
For example, a QA amigo may think about broader scenarios like ‘what should the behaviour be if…’
A developer amigo might be able to see that a solution is going to be slow and take a lot of power to achieve
Existing behaviour was talked through to share knowledge of the product in general
When we started using BDD we only talked through the behaviour that would change with this story
We learned that omitting existing behaviour from our discussions was not the best approach as the team members who hadn’t touched that part of the product before didn’t know how this new story would impact what was already there
If we felt that the existing behaviour was something we needed to consider as part of the story then we created scenarios
Talking through the language in the scenarios all together boosted our shared understanding
We had plenty of conversations about what certain words meant to make sure we were all using common language
We used our user roles to make the scenarios relatable, defined terms, and debated grammar
Admittedly some of this went too far, and one of the things we learnt is not to waste time straying from the goal of Three Amigos on a quest for perfection
3 – In-depth behaviour discussions would enable a pragmatic approach to the MVP
By splitting stories into scenarios we could see a bit clearer on their size
For example, if we found one had a lot of scenarios we could group them together, and our story was split by functionality, just as simple as that.
Or we could cut some out that maybe weren’t vital to the user. These scenarios could come later.
We also learned that BDD scenarios don’t work for all types of features, for example one with a specific, non-negotiable set of rules set by a regulator. Scenarios are good for what happens in general when a rule is followed or broken, but not needed for the actual rules.
Close
All in all, using BDD and Three Amigos achieved the three hypotheses that we set out to achieve. There are many more benefits cited from using this technique, including improvements to quality and documentation, but as we weren’t measuring ourselves against them I haven’t included them in this article.
It also goes to prove that Agile community events are wonderful places to learn and I am extremely grateful for them (hence the cheesy slide of my thanks in the header picture).
Extensions
We will keep working with and improving this technique, and I will update this article with any new challenges or tips. Let me know how you have found using BDD and Three Amigos in the comments below.
To encourage creativity and ownership from the team in the product
Research
Articles on benefits
Rob van Lanen’s paper on unlocking intrinsic motivation through FedEx Days
Johan den Haan’s 10 reasons why you should organise a FedEx Day
Facebook’s features that have resulted from Hackathons
Method
Preparation for the day
Agree a time frame with the team and management
Agreed a time frame in between two sprints (we timed this with dates that were awkward for holding a planning session, I think because a bank holiday had offset us)
Organise a review session to show each other our innovations
Ensure someone senior was present to see the innovations
Structured similar to the first half of the Sprint Review
Establish rules
It couldn’t be something we were already planning to do and had to be new
There had to be something to show at the end (i.e. the FedEx delivery must arrive on time)
On the FedEx Day
Meet at the start of the day to refresh the purpose
Participants self-organised into teams
Teams each agreed a direction
Results
Creativity and Innovation Boost
Enjoyment from the team members on getting to work on something they either enjoyed or were passionate about
The business appreciated the ideas that had been created and put them on the backlog for further investment, or for further investigation where a new product was involved
After this event there were more creative questions on the solutions and the features and more suggestions
Team Cohesion
The team separated into a few smaller teams to work on their projects – mostly by time zone for ease
The team learned that it may be harder for people with specialist skill sets to join in, so this needs special attention
Delivering Incrementally for Fast Feedback
Everyone had something to show for the review
The business and the team asked for this to become regular
We have multiple teams delivering towards the same business goals. They often weren’t fully aware of what the other teams were achieving and missed the opportunity to ask questions.
Research
Articles
There is a similar idea to this, but demonstrating by feature developed rather than by team, in the Nexus Framework (1, page 67)
Each team doing their own end of iteration/ release reviews.
Teams were unsure about the approach, especially those who had worked on features they didn’t feel had a ‘wow’ factor
Some team members said they felt they had already completed a review of their work for the iteration or release and were not confident any further review would add value
Trial Method
Each team having a ‘stall’ at the Science Fair (we had 6 teams in total holding stalls)
Stalls set out around the edges of the room with enough space for people to wander about and stand around each stall
Inviting all stakeholders from all teams and some in-business users
A number of rounds of ten minutes each were set up to allow the teams to also rotate and see each other’s stalls
An introduction was created to explain the format and summarise the features delivered during the agreed interval
Results
Notes from setup
We decided to schedule these at regular intervals that suited the iteration and release cycles of all the teams involved, as they work to different cadences
Setup time in the room was important so that when people entered we were ready to go
Talking through the features delivered at the start felt like a waste of time, as everyone could then talk through them with each team. This took time away from actual conversations, so we decided not to keep it for the next one and instead to bring our objectives wall into the room for people to refer to if they wished.
Stall Activity
Each team had someone viewing their demo at almost every ‘round’, as we called them
Each team felt they got value out of it as they were able to have more in depth conversations and ask more questions about the features from other teams than they usually feel able to in the team specific reviews.
The teams who were concerned about repetition of reviews, and that their stall might not have exciting enough features, had ample interest, questions, and feedback for us to repeat this Science Fair format again
Stakeholders and in-business users who would usually only attend the review sessions for specific features broadened their knowledge to the work from other teams
Lessons Learned
Introduction to the features at the start is unnecessary – this has now been replaced with bringing the feature board into the room
Worthwhile start to improving the cross-team knowledge sharing and communication. It did highlight how difficult it is for each team to keep up with and understand the work of 5 other teams whilst also maintaining their own work.
Requests were made from all to make the event more ‘jazzy’ and exciting to attend, with an extension to include more people. Biscuits have been suggested
This review format allowed a different type of conversation to a solo team review, which I believe was because there were fewer people at one time, so questions of more personal interest seemed appropriate. This is why I don’t believe it felt repetitive
There are more people in the business interested in the features than you might immediately think
Extensions to try
Invite more people and advertise as an event around the business for whoever wants to attend.
Consider whether making it a competition for the best stall would create a brighter atmosphere.
The team that I trialled this with were previously forecasting by taking the average velocity of the last 3 sprints, catering for anticipated holidays.
Method
Take it to the team and talk through research
Input velocity data from previous sprints
Use the predictions it comes out with to change how we forecast our sprints
Use the calculations to set a strong goal (one we could forecast with better confidence) and a stretch goal which we would get to if we could
Take it to stakeholders and talk through the trial
Results
Communication on our forecasts within the team
It enabled conversation over the forecast velocity as we could adjust and see the simulation run rather than be presented with a single number
A forecast is just that, a forecast, and using this tool brought us back to metrics being a tool rather than a target number to hit
The use of strong goals and stretch goals helped focus on something more realistic to achieve and therefore built confidence within the team
Transparency with Stakeholders
As we’d had more conversations as a team we could justify our forecasts when asked and had more confidence
The use of strong and stretch goals also helped manage expectations with stakeholders
Extension to try
Planning for a goal first and then checking how that goal would look in the simulation and what our chances were of achieving the goal
Then we could adapt our approach to the goal if it seemed unlikely to be achieved within a sprint
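A sketch of the underlying idea behind strong and stretch goals (the velocity numbers are invented, and this is not the tool we used, just the core calculation): resample historical velocities and read off how often a simulated sprint reaches a given goal:

```python
import random

def goal_confidence(velocities, goal, trials=10_000, seed=1):
    """Estimate the probability that a sprint, simulated by resampling
    historical velocity, delivers at least `goal` points."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.choice(velocities) >= goal)
    return hits / trials

# Invented velocities from the previous six sprints
past = [21, 34, 27, 25, 30, 19]

goal_confidence(past, goal=20)  # strong goal: high confidence
goal_confidence(past, goal=32)  # stretch goal: reached only sometimes
```

Running the simulation against a candidate Sprint Goal first, as the extension suggests, just means checking that its confidence is high enough before committing.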