
Minimum Viable Product (MVP)

MVP Definition

MVP Tests (2, pg 89)

  • MVP tests are anything used to test the hypothesis
  • The term MVP itself is reserved for the actual product

MVP Prerequisites

Steps to take before building an MVP (2, pg 87)

  1. Formed hypotheses about your target customers
  2. Formed hypotheses about their underserved needs
  3. Articulated the value proposition you plan to pursue, so that your product is better than and different from the competition
  4. Identified the top feature ideas you believe will address those needs and broken them down into smaller chunks
  5. Prioritised those feature chunks based on ROI 
  6. Selected a set of those feature chunks for your MVP candidate, which you hypothesize customers will find valuable

MVP Guidance

MVP Questions (1, pg 77)

  • The most important question is ‘What’s the most important thing we need to learn next about this hypothesis?’
  • The next question is ‘What is the least amount of work we can do to learn that?’

MVP Functionality (2, pg 89)

An MVP should address all of the needs in Olsen’s hierarchy – it should not be merely functional; it should also be usable and delight the user


References

  1. Lean UX by Jeff Gothelf
  2. Lean Product Playbook by Dan Olsen

Prioritising Problems

Guidance on Prioritising Problems

By Risk (1, pg 45)

  • The problem with the highest risk-to-value ratio gets top priority
  • Work on the riskiest problems first
  • Keep all of the problems on the backlog

Prioritisation Techniques

Importance vs Satisfaction Framework (2, pg 47)

This technique is used for evaluating potential product opportunities

Importance vs Satisfaction (2, pg 47)
  • Competitive example: Excel. It is highly important to users and satisfaction with it is already high, so a product competing in this space needs a different angle, such as online spreadsheet tools for collaboration
  • Opportunity example: Uber. User satisfaction with taxi services was low, but users’ need for a solution was very important to them
Measuring Customer Value
  • When importance and satisfaction are both expressed as percentages:
Customer value delivered = importance x satisfaction

Opportunity to add value = importance x (1-satisfaction)
Customer Value Example
  • The example compares two improvements (labelled orange and blue on the original chart); the value each creates is importance x (satisfaction after - satisfaction before):
    • Orange: customer value created = 0.9 x (0.35 - 0.2) = 0.135
    • Blue: customer value created = 0.6 x (0.9 - 0.6) = 0.18
  • So blue created somewhat more value than orange (a quick sketch of this arithmetic follows below)
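
A minimal Python sketch of these calculations, reproducing the orange and blue arithmetic above (the importance and satisfaction figures are read off the doodle, so treat them as illustrative):

```python
def value_created(importance, sat_before, sat_after):
    """Value created by an improvement: importance x (gain in satisfaction)."""
    return importance * (sat_after - sat_before)

def opportunity(importance, satisfaction):
    """Opportunity to add value = importance x (1 - satisfaction)."""
    return importance * (1 - satisfaction)

# Figures from the orange/blue example above
orange = value_created(0.9, 0.20, 0.35)  # 0.9 x 0.15 = 0.135
blue = value_created(0.6, 0.60, 0.90)    # 0.6 x 0.30 = 0.18
print(f"orange: {orange:.3f}, blue: {blue:.3f}")

# Remaining opportunity after the orange improvement: 0.9 x (1 - 0.35)
print(f"orange opportunity remaining: {opportunity(0.9, 0.35):.3f}")
```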

The Kano Model (2, pg 64)

  • Performance needs – adding more will increase satisfaction. For example fuel efficiency on a car. Increasing this will always increase satisfaction.
  • Must-Have needs – they never increase satisfaction. They only decrease it if they are missing 
  • Delighter needs – the customer is not dissatisfied at all if they are missing as they are unexpected and result in very high satisfaction if they are included
Using the Kano model to compare with competitors

Feature               | Competitor A   | Competitor B   | Your Product
----------------------|----------------|----------------|----------------
Must-Have Feature 1   | Yes/ No        | Yes/ No        | Yes/ No
Must-Have Feature 2   | Yes/ No        | Yes/ No        | Yes/ No
Must-Have Feature 3   | Yes/ No        | Yes/ No        | Yes/ No
Performance Feature 1 | High/ Med/ Low | High/ Med/ Low | High/ Med/ Low
Performance Feature 2 | High/ Med/ Low | High/ Med/ Low | High/ Med/ Low
Performance Feature 3 | High/ Med/ Low | High/ Med/ Low | High/ Med/ Low
Delighter Feature 1   | Yes/ No        | Yes/ No        | Yes/ No
Delighter Feature 2   | Yes/ No        | Yes/ No        | Yes/ No

Kano Table
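
As a rough illustration of putting this table to work, here is a small Python sketch that scans such a comparison for competitive gaps; the feature names and ratings are invented for illustration:

```python
# Hypothetical data in the shape of the Kano comparison table above;
# all feature names and ratings are invented.
comparison = {
    "must-have": {
        "secure login": {"Competitor A": True, "Competitor B": True, "Your Product": False},
        "data export":  {"Competitor A": True, "Competitor B": False, "Your Product": True},
    },
    "delighter": {
        "dark mode":    {"Competitor A": False, "Competitor B": False, "Your Product": True},
    },
}

# Missing must-haves cause dissatisfaction, so flag those gaps first.
for feature, ratings in comparison["must-have"].items():
    for product, present in ratings.items():
        if not present:
            print(f"Gap: {product} is missing must-have feature '{feature}'")
```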

Return on Investment

Approximating ROI (2, pg 84)

References

  1. Lean UX by Jeff Gothelf
  2. Lean Product Playbook by Dan Olsen

Value Proposition/ Hypothesis

Assumptions

Definition: assumptions are our best guess based on what we know today. They are also filled with risk; our goal as Lean UX practitioners is to reduce that risk. (1, pg 23)

Four Big Assumptions (1, pg 24)

  • Business outcomes
  • Users
  • User outcomes 
  • Features 

Assumptions Workshop (1, pg 24-25)

Who

Whole Team

Preparation

Find out the following in reference to the problem statement

  • Analytics reports that show how the current product is being used 
  • Usability reports that illustrate why customers are taking certain actions in your product 
  • Information about past attempts to fix this issue and their successes and failures 
  • Justification from the business as to how solving this problem will affect the company’s performance 
  • Competitive analysis that show how your competition is tackling the same issue 
Worksheet Questions

Business Assumptions

  1. I believe my customers have a need to: 
  2. These needs can be solved with: 
  3. My initial customers are (or will be):
  4. The #1 value a customer wants to get out of my service is:
  5. They can also get these additional benefits:
  6. I will acquire the majority of my customers through:
  7. I will make money by:
  8. My primary competition in the market will be:
  9. We will beat them due to:
  10. My biggest product risk is:
  11. We will solve this through:
  12. We will know we are successful when we see the following changes in customer behaviour:
  13. What other assumptions do we have that, if proven false, will cause our business/ project to fail:

User Assumptions

  1. Who is the user?
  2. Where does our product fit in their work or life?
  3. What problems does our product solve?
  4. When and how is our product used?
  5. What features are important?
  6. How should our product look and behave?
Process
  1. Give everyone the worksheet and ask them to answer the assumption questions individually about the problem statement
  2. Collect all the assumptions together
  3. Use these assumptions to form hypotheses

Hypotheses Creation

Hypothesis Statement Format (1, pg 30)

We believe [this statement is true]. We will know we’re [right/wrong] when we see the following feedback from the market: [qualitative feedback] and/or [quantitative feedback] and/or [key performance indicator change]


Hypothesis Driven Development (2, pg 87)

  • helps Scrum Teams frame hypotheses and experiments by making them think about what they are trying to achieve and how they will measure it
  • helps teams to be mindful of assumptions
We believe [doing this feature] for [these personas] will achieve [this outcome]. We will know that this is true when we see [this measurement] change.
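
As a trivial sketch of using this template, the bracketed parts can be parameterised; the example values below are invented for illustration:

```python
def hypothesis(feature, personas, outcome, measurement):
    """Fill in the hypothesis-driven development template above."""
    return (f"We believe {feature} for {personas} will achieve {outcome}. "
            f"We will know that this is true when we see {measurement} change.")

# Invented example values, purely for illustration
print(hypothesis("doing one-click reorder", "returning shoppers",
                 "more repeat purchases", "the repeat-purchase rate"))
```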

Value and Growth Hypotheses (3, pg 61)

  • Value hypothesis: tests whether product delivers value to a customer (experiment, not survey) 
  • Growth hypothesis: tests how new customers will discover product/ service (find early adopters – people who need the product the most – they will be eager to feedback)

Experiment Stories (1, pg 127)

Contains
  • Hypothesis you’re testing or the thing you’re trying to learn
  • Tactic(s) for learning (e.g. interviews/ A/B testing)
  • Who will do the work 
  • A level of effort estimate (e.g. story points) 
Example
We believe that asking new users MORE questions during registration will increase complete profiles. 
Tactic: landing page test & customer interviews 
Assigned to: UX, PDM
2pts

Hypothesis Creation Workshop (1, pg 32 – 43)

Who

Whole Team

Preparation

Must have a problem statement before this

Process
  1. Brainstorm small outcomes that will lead to the big outcome(s) in the problem statement (e.g. what behaviours will predict more downloads?)
    • Vote all together on priority (have a decider in the room if necessary) 
  2. Create personas
    • Validate these personas
  3. Create user outcome (assumption of what the user is trying to do) 
    • E.g. How does our product or service get the user closer to a life goal or dream?
  4. Brainstorm features to meet these user outcomes
  5. Assemble what you have created in steps 1-4 into the hypothesis table (below)
    • Fill in the gaps in the table as you find them
    • Between 7 and 10 rows on the chart is a good starting point
    • Ensure that the hypothesis you create out of this focuses on 1 feature (multi-feature hypotheses are hard to test)
We will achieve…    | …if this user… | …can achieve…  | …with this feature
[business outcome]  | [persona]      | [user outcome] | [feature]

References

  1. Lean UX by Jeff Gothelf
  2. Mastering Professional Scrum by Stephanie Ockerman and Simon Reindl
  3. The Lean Startup by Eric Ries

What is the Problem?

Problem Statement Formats

Existing product problem statement elements (1, pg 25)

  • The current goals of the product or system 
  • The problem the business wants addressed (i.e. where the goals aren’t being met) 
  • An explicit request for improvement that doesn’t dictate a specific solution 
[Our service/ product] is intended to achieve [these goals].
We have observed that the product/ service isn't meeting [these goals] which is causing [this adverse effect] to our business. 
How might we improve [service/ product] so that our customers are more successful based on [these measurable criteria]?

Template (1, pg 26) 

New product problem statement elements (1, pg 27) 

  • The current state of the market
  • The opportunity the business wants to exploit  (i.e. where the current solutions are failing)
  • A strategic vision for a product or service to address the market gap
The current state of the [domain] has focused primarily on [customer segments, pain points, etc] 
Our product/ service will address this gap by [vision/ strategy] 
Our initial focus will be [this segment]

Template (1, pg 27)

References

  1. Lean UX by Jeff Gothelf

User Stories

Stories express something that will be valuable to a user (2)

Benefits of User Stories

  • They act as a placeholder to remind us to have conversations rather than follow written requirements contracts
  • They encourage interaction through the conversations
  • They are comprehensible to everyone, from developers to the wider business, and contain no technical jargon
  • They are the right size to be able to plan and estimate
  • When working iteratively we don’t have to write all the stories upfront and can refine them as the project goes along

Splitting Stories

Vertical Slicing

  • Always slice stories from the point of view of our customer
  • All stories must have value, which can be to our customer directly or indirectly
  • The technical layers that make up the value are to be kept together (like the layers of a birthday cake) as one layer on its own does not provide value
  • Splitting stories into the technical layers can lead to defining the solution and restricting abilities to creatively iterate

Using Scenarios to Split Stories

  • See BDD and Three Amigos for writing stories with scenarios
  • Use these scenarios to break the story down into smaller pieces of value

Cake Metaphor (3)

[Image: Cake Metaphor Doodle]
  • To split the story you could break it down horizontally and take each ingredient in turn
  • But if you slice horizontally you will just get egg, and you won’t know whether the ingredients all work together to provide value (it could taste awful all together!)
  • Instead, bake a cupcake!
  • You get all the ingredients but a smaller size to check the recipe works
[Image: A Cupcake Doodle!]

User Story Construction

The Three C’s

CARD
  • Description that defers the details
As a [user type]
I want [functionality]
So that [benefit]
CONVERSATION
  • Verbal conversations are best
  • Highlights that we don’t know all the detail
  • Reminder of conversations we have had, work that has been done, any wider context
CONFIRMATION
  • Acceptance Criteria
  • Must be matched for a story to be considered done
  • User point of view only
  • No design specifications unless the user wants them

Good User Story Guidance

INVEST Criteria

INDEPENDENT
  • Stories shouldn’t have to be completed in a specific order and should not rely on another story to be started
  • This is not always achievable
  • Common examples are when you need something to exist before you can build upon it or when one story reduces the complexity of another making it seem logical to do them in a specific order
  • How do we respond to this?
    • Don’t put dependent stories in the same sprint to keep the flow
    • Join together dependent stories
NEGOTIABLE
  • Stories are not contracts – they are short descriptions of functionality
  • Open to discussion with the team
  • Simplify, alter, add to in whatever way is best for the goal and the product
VALUABLE
  • Valuable to the user/ customer
  • Technology assumptions have no value to the customer!
ESTIMABLE
  • Good size
  • Just enough information, but not too much to become confusing
  • If an investigation is needed before estimation, use a spike
SMALL
  • Lower Limit is coffee break size, i.e. big enough that you deserve a coffee break for finishing
  • Scrum team will highlight size issues in refinement
TESTABLE
  • Language is important
  • It must be specific, unlike “A user never has to wait for it to load”
  • “It takes two seconds max to load in 95% of cases” is much better

Acceptance Test Writing Guidance (2)

  • Add tests whenever they add value
  • Capture assumptions
  • Provide basic criteria for when story is Done
  • Questions to ask
    • What else do devs want to know?
    • What am I assuming?
    • What can go wrong?
    • Circumstances where story might behave differently
    • Usability
    • Stress
    • Performance

Stories should be closed, not open ended

  • Use language to ensure that the stories have a definite closure
  • Continuous jobs are bad (e.g. managing an ad)
    • Instead use closed actions like “review responses” and “edit response”

User Story Card Example (3)

[Image: User Story Card Doodle]
  1. Story reference number
  2. Author of the story
  3. Value/ importance
  4. Status/ release
  5. Size/ estimate
  6. Date created
  7. Dependencies
  8. Metrics (if relevant)
  9. Description
  10. Story title (this is the only mandatory field)

User Story Smells

Too Small

EVIDENCE: Frequent need to revise estimates depending on order

FIX: Combine the stories together

Not Independent

EVIDENCE: Difficulty in planning the iteration and lots of swapping stories in and out

FIX: Re-split the stories

Gold Plating

EVIDENCE: Extra wow factor being added that isn’t in the story

FIX: Increase the visibility in the Daily Scrum

Too Much Detail

EVIDENCE: Too long discussing the story

FIX: Strip the story back to its basics and focus on the user point of view

Trouble Prioritising

EVIDENCE: Stories cannot be prioritised logically

FIX: Stories may need to be broken down smaller or re-written from a value point of view


References

  1. The Scrum Guide
  2. User Stories Applied by Mike Cohn
  3. User Story Mapping by Jeff Patton

Hypothesis/ Solution Exploration Testing

"More than half of our ideas will deliver no value, we just don't know which half" - John Wanamaker (3, pg 78)

Product Testing Statistics

Experiment Statistics (3, pg 79)

  • 65% of features are rarely or never used
  • At Google and Bing, only 10 – 20% of experiments generate positive results (Harvard Business Review)
  • At Microsoft 1/3 have positive results, 1/3 have neutral results, 1/3 have negative results

Creating a Product Test

Focus on value (5, pg 78)

  • Get to the point – ignore the navigation/ log in etc if it does not help you determine the value 
  • Use a clear call to action – give the user a clear way to show that they value your solution like signing up for it 
  • Prioritise ruthlessly – don’t hold on to invalidated solutions
  • Stay agile – work in a medium that allows you to make updates easily as feedback comes in fast 
  • Don’t reinvent the wheel – use existing systems like email, forums etc to save work 
  • Measure behaviour – observe and measure what people do (behaviour trumps opinion here) 
  • Talk to users – understand why they are behaving this way

Truth Curve (5, pg 80)

The amount of effort you put into your MVP should be proportional to the amount of evidence you have that your idea is a good one

The x axis shows the level of investment you should put into your MVP

The y axis shows the amount of market-based evidence you have about your idea

[Image: Truth Curve (5, pg 80)]

Product Testing Grids

                | Qualitative Tests                                                                                  | Quantitative Tests
Marketing Tests | Marketing Materials                                                                                | Landing Page/ smoke test; Explainer Video; Ad Campaign; Marketing A/B tests; Crowdfunding
Product Tests   | Paper Prototype; Wireframes; Mockups; Interactive Prototype; Wizard of Oz; Concierge; Live Product | Fake door/ 404 page; Product analytics and A/B tests
            | Qualitative Tests | Quantitative Tests
Behavioural | Usability testing | A/B testing; Analytics
Attitudinal | User Interviews   | Surveys

Research Methods Framework (1, pg 230)

Qualitative Marketing Tests

Marketing Materials (1, pg 93)

  • To understand which benefits resonate with customers
  • To understand how they react to different ways of showing the benefits
  • Aim is to understand how they find the marketing material and why
  • Marketing material can be landing page, video, advert, email

Quantitative Marketing Tests

Landing page/ smoke test (1, pg 94)

  • Traffic is directed to a landing page
  • On this page they are asked to show interest (e.g. a sign up button, or a plans and pricing page)
  • There is no product yet
  • A ‘coming soon’ message is often displayed to those who show interest

Explainer video (1, pg 94)

  • Same as landing page
  • For products that are harder to explain on a landing page (e.g. Dropbox)

Ad campaign (1, pg 94)

  • As adverts don’t allow you to display a lot, this is more appropriate for optimising customer acquisition and not product-market fit
  • Can advertise to different demographics to check hypothesis about target market
  • Measure clickthrough rate to see which ads (and which demographics) prove more successful

Marketing A/B testing (1, pg 94)

  • Test two alternative designs to compare how they perform
  • Run the tests in parallel with 50% of the traffic to each for simplicity

Crowdfunding (1, pg 94)

  • Advertising your product on a site like Kickstarter and asking people to pay for the product in advance of it being made
  • Set a minimum funding threshold: you do not build the product until you have raised £X
  • Backers then receive the product once it has been built (i.e. they pre-order the product, usually at a discount)

Qualitative Product Tests

Paper Prototype (5, pg 89)

Pros
  • Can be created quickly
  • Easily arranged and rearranged 
  • Cheap and easy to throw away if you are wrong 
  • Can be assembled with materials already found in the office 
  • Fun activity many people enjoy
Cons 
  • Rapid iteration and duplication of the prototype can become time-consuming and tedious 
  • The simulation is very artificial, because you’re not using the actual input mechanisms (mouse/ keyboard/ touch screen etc) 
  • Feedback is limited to the high-level structure, information architecture, and flow of the product
  • Only useful with a limited audience

Wireframes/ Mockups/ Interactive Prototypes (1, pg 100) (2, pg 124) (5, pg 89)

  • Demonstrate or show concepts to user to gauge their feedback (e.g. wireframes)
  • Have an ‘ask’ as a definitive pass-or-fail criterion
    • Commitment, monetary value, time, or another investment to show that they are interested
  • E.g. dropbox did a video of their concept (advert as if they had built it) to convince investors
  • Variations in interactivity and fidelity
  • (fidelity refers to how closely the artifact resembles the final product)
Pros
  • Provide a good sense of the length of the workflow
  • Reveal major obstacles to primary task completion 
  • Allow assessment of findability of core elements
  • Can be used relatively quickly 
Cons
  • Most people will recognise that it is an unfinished product 
  • More attention than normal is paid to labelling and copy
[Image: Example of a Wireframe Sketch]

Low fidelity prototype (4, page 49)

  • Start with a persona
  • Draw the homepage and ask what actions the user wants to do from there
  • For each action draw a box (each box is a story)
  • Continue until the persona has completed their actions (including exploring edge cases) and then start with another persona
[Image: Example of a Low Fidelity Prototype]

Concierge (2, pg 122)

  • Deliver the end result to your customer manually
  • Customer understands that it is being done manually and there is no appearance of a final solution to them
  • Conduct with just enough users as this is labour intensive

Wizard of Oz (2, pg 123)

  • Deliver the end result to your customer manually
  • Customer is not aware that it is manual behind the scenes and thinks they are using the end product
  • Tempting to leave it running if successful, as you will get value from it, but it is expensive to run
  • Can be combined with A/B testing

Live Product (5, pg 89)

Pros
  • Potential to reuse code for production 
  • The most realistic simulation to create 
  • Can be generated from existing code assets
Cons
  • The team can become bogged down in debating the finer points of the prototype
  • Time-consuming to create working code that delivers the desired experience
  • It’s tempting to perfect the code before releasing to customers
  • Updating and iterating can take a lot of time 

Quantitative Product Tests

Fake Door/ 404 page (1, page 100)

  • Good to test demand for a new feature
  • Include a link or button on the product to direct customers to a new feature
  • The link leads to a page saying the feature hasn’t been built yet and asking why they would find it valuable
  • Overuse will make customers unhappy

Product A/B tests (1, page 100)

  • Used to compare performance of two alternative user experiences in your product

Qualitative Behavioural Tests

Usability testing

  • Online tools can be used to give a user a task and record them completing the task
  • Users are asked to talk through how easy it is to complete a task

Quantitative Behavioural Tests

A/B Testing

  • Two different versions of the product are shown to different groups of users
  • Differences in behaviour are tracked (e.g. conversion percentage); a minimal significance check is sketched below
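
To judge whether the difference between the two versions is more than noise, a common approach is a two-proportion z-test. A minimal, self-contained Python sketch (the traffic and conversion numbers are invented):

```python
from math import erf, sqrt

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test (normal approximation) for an A/B test.
    Returns both conversion rates and a two-sided p-value."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return rate_a, rate_b, p

# Invented numbers: 2,400 visitors per variant
rate_a, rate_b, p = ab_test(120, 2400, 156, 2400)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p:.3f}")
```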

Analytics

  • Tracking of user behaviour within the product
  • The data can then be analysed to see if the hypothesis was achieved

Qualitative Attitudinal Tests

User Interviews

  • One-on-one interview with a user
  • Coming soon: Tips on User interviews

Quantitative Attitudinal Tests

User Surveys

  • Coming soon: Tips on User Surveys

References

  1. Lean Product Playbook by Dan Olsen
  2. Escaping the Build Trap by Melissa Perri
  3. Mastering Professional Scrum by Stephanie Ockerman and Simon Reindl
  4. User Stories Applied by Mike Cohn
  5. Lean UX by Jeff Gothelf

Product Metrics

Criteria for Good Metrics

Actionable (2, pg 143)

  • Demonstrate clear cause and effect
  • Understand how value was achieved (e.g. was it engineering or marketing)
  • Blame culture when metrics go down is avoided

Accessible (2, pg 143)

  • Everyone can get them
  • Allows metrics to guide as they are single source of truth
  • “Metrics are people too”
    • E.g. a ‘website hit’ is less tangible than ‘a customer visiting the site’

Auditable (2, pg 143)

  • Data is credible to employees
  • Can spot-check the data with real people to verify it

Iterating Metrics

[Image: The Lean Product Analytics Process (1, page 260)]

Product Metrics Structures

Pirate or AARRR Metrics

Originally by Dave McClure

[Image: AARRR Metrics Framework (1, page 239)]

Metrics

  1. Acquisition (prospects visit from various channels/ users find your product)
  2. Activation (prospects convert to customers/ users have their first great experience)
  3. Retention (customers remain active/ users return to your product)
  4. Referral (Customers refer prospects/ users recommend)
  5. Revenue (customers make your business money/ users pay for your product)

Benefits

  • Can calculate conversion through each step of the funnel (3, page 106) – see the sketch below
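
A minimal Python sketch of that funnel arithmetic, using invented monthly counts:

```python
# Hypothetical monthly funnel counts (invented numbers for illustration)
funnel = [
    ("Acquisition", 10000),
    ("Activation", 2500),
    ("Retention", 1200),
    ("Referral", 300),
    ("Revenue", 150),
]

# Conversion rate from each AARRR step to the next
for (stage, n), (next_stage, n_next) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {n_next / n:.1%}")
```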

Shortfalls

  • Does not consider user satisfaction (3, page 106)

HEART Framework

  • This framework is for a specific product or feature
  • Happiness (how satisfied the user is with the product)
  • Engagement (how the user interacts with the product)
  • Adoption (same as activation in Pirate Metrics)
  • Retention (same as Pirate Metrics)
  • Task Success (how easy is it for the user to complete the task)

Specific Metric Details

Retention Parameters

Retention Curves (1, page 243)

  • The x axis (days since first use) usually starts at Day 1 rather than Day 0, because Day 0 retention is 100% and would distort the scale of the graph
  • Can use cohort analysis (i.e. plotting the retention rates of different user cohorts (groups) on the same axes) to see the difference in the retention parameters for the separate groups
[Image: Retention Curve (1, page 243)]
  • Parameter 1 to notice: The percentage where the graph starts on Day 1 shows the initial drop off rate
  • Parameter 2: Rate that the retention curve decreases from Day 1 value
  • Parameter 3: Terminal value for retention curves is where the retention flattens out. If it is 0% then your product will ultimately lose all of its customers
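
A toy reading of these three parameters from a single cohort, with invented retention numbers:

```python
# Hypothetical cohort retention: fraction of the cohort still active
# N days after first use (all numbers invented for illustration).
retention = {1: 0.55, 7: 0.38, 14: 0.31, 30: 0.27, 60: 0.25, 90: 0.25}

day1 = retention[1]
print(f"Parameter 1 - initial drop-off: {1 - day1:.0%} of users lost before Day 1")
print(f"Parameter 2 - decay after Day 1: {(day1 - retention[30]) * 100:.0f} points lost by Day 30")
print(f"Parameter 3 - terminal value: ~{retention[90]:.0%}, i.e. the curve flattens above zero")
```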

References

  1. Lean Product Playbook by Dan Olsen
  2. The Lean Startup by Eric Ries
  3. Escaping the Build Trap by Melissa Perri

BDD and Three Amigos

Overview

Towards the end of 2018 I went to a workshop at Agile Leicester on Behaviour Driven Development (BDD) and Three Amigos. This article gives some references about where you can learn more about these techniques and then continues through the introduction of this technique to a team and the results that were achieved. I gave a talk back to the Agile Leicester community on this subject in early 2019 and the picture above is of that talk.


Motivation

The stories we were using were closer to contracts than stories

Research

I went to a workshop about BDD and Three Amigos at Agile Leicester given by Stuart Day.

Articles that summarise the practices:

Refresh on what stories actually are:

  • User Story Mapping – Jeff Patton
  • User Stories Applied – Mike Cohn

[Image: Jeff Patton slide]

Starting Point

  • We were keen on using the brilliant minds of all the team members to create the vision and to be a part of creating the solution rather than developing exactly what was written.
  • Our stories had lengthy acceptance criteria that weren’t focused on the user needs and stipulated exactly what the solution should be, leaving no space for creativity.
  • There was not enough room to question, influence, and negotiate in the stories.

Experiment Hypotheses

By using BDD and three amigos we would:

  1. Focus our communications on the value to a user of a feature and the behaviours that would help us ultimately achieve that value
    1. This would support us in negotiating and sharing solution ideas, moving away from it being pre-decided
  2. Spread the understanding of the story through the team and enable empowerment
    1. This would allow the team to make decisions on the solution with the Product Owner
  3. In-depth behaviour discussions would enable a pragmatic approach to MVP
    1. This would allow us to deliver smaller increments of working software to allow feedback

Method

To achieve our hypotheses we had to change how we worked.

[Image: BDD and Three Amigos method]

Story Writing

  • First thing to change was how we wrote stories. I worked with our Product Owner to refocus the stories back on what the User wanted and only that. No more design specifications, no more interaction specifications that a user doesn’t need to gain the value. We stripped them right back to CARD and CONFIRMATION all from the user’s point of view. (see Ron Jeffries explain CARD, CONFIRMATION, and CONVERSATION here)

Three Amigos

  • We changed how we talked through stories as a team. We previously had backlog refinement where the Product Owner would present to the team the stories and then we would move on when everyone understood.
  • We started with four hour-long Three Amigos sessions per sprint to refine the stories ready for the next sprint.
  • The team would decide who turned up from each specialty (i.e. who for the QA amigo and who for the Developer amigo) and the Product Owner would always be there, sometimes with a stakeholder if it made sense.

BDD

  • We used the acceptance criteria as a guide to writing the scenarios, as Stuart demonstrated in his talk. We talked through each user-driven acceptance criterion and created all the behaviour scenarios that supported that confirmation (a toy example follows below).
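
As an illustration of how a user-driven acceptance criterion becomes executable Given/When/Then scenarios, here is a toy pytest sketch; the discount-code domain, names, and values are all invented, not from our actual product:

```python
# test_discount.py - a toy example; run with: pytest test_discount.py
import pytest

def apply_discount(basket_total, code):
    """Toy implementation so the scenarios below actually run."""
    return basket_total * 0.9 if code == "SAVE10" else basket_total

def test_valid_discount_code_reduces_total():
    # Given a shopper with a £50.00 basket
    basket_total = 50.00
    # When they apply the valid code SAVE10
    new_total = apply_discount(basket_total, "SAVE10")
    # Then the total is reduced by 10%
    assert new_total == pytest.approx(45.00)

def test_unknown_code_leaves_total_unchanged():
    # Given a shopper with a £50.00 basket
    # When they apply an unknown code
    # Then the total is unchanged
    assert apply_discount(50.00, "NOPE") == pytest.approx(50.00)
```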

Results

1 – Focus our communications on the value to a user of a feature and the behaviours that would help us ultimately achieve that value

  • Changing how we wrote the stories brought the focus back to WHY we were developing the story in the first place and the unnecessary words were removed
  • Our Product Owner has felt that these discussions help to marry the user need and the technical depth behind the behaviours
    • Originally this did take up more time for her: instead of just writing her story and presenting it to the team, she had to spend time creating it with the team
    • But previously a change to a story would be very time-consuming and this made it more tempting to resist the change. Now change happened naturally
    • Overall, time was saved

2 – Spread the understanding of the story through the team and enable empowerment

  • Collaboration with people of all different disciplines shone a light on different options and things that previously may not have been thought of
    • For example a QA amigo may think about broader scenarios like ‘what should the behaviour be if…’
    • A developer amigo might be able to see that a solution is going to be slow and take a lot of power to achieve
  • Existing behaviour was talked through to share knowledge of the product in general
    • When we started using BDD we only talked through the behaviour that would change with this story
    • We learned that omitting existing behaviour from our discussions was not the best approach as the team members who hadn’t touched that part of the product before didn’t know how this new story would impact what was already there
    • If we felt that the existing behaviour was something we needed to consider as part of the story then we created scenarios
  • Talking through the language in the scenarios all together boosted our shared understanding
    • We had plenty of conversations about what certain words meant to make sure we were all using common language
    • We used our user roles to make the scenarios relatable, defined terms, and debated grammar
    • Admittedly, some of this went too far; one of the things we learnt is not to waste time straying from the goal of Three Amigos on a quest for perfection

3 – In-depth behaviour discussions would enable a pragmatic approach to MVP

  • By splitting stories into scenarios we could see their size a bit more clearly
    • For example, if we found one had a lot of scenarios, we could group them together, and just like that the story was split by functionality.
    • Or we could cut out some scenarios that weren’t vital to the user. These scenarios could come later.
  • We also learned that BDD scenarios don’t work for all types of features, for example one with a specific, non-negotiable set of rules set by a regulator. Scenarios are good for describing in general terms what happens when a rule is followed or broken, but they are not needed to capture the actual rules.

Close

All in all, using BDD and Three Amigos achieved the three hypotheses that we set out to achieve. There are many more benefits cited from using this technique, including improvements to quality and documentation, but as we weren’t measuring ourselves against them I haven’t included them in this article.

It also goes to prove that Agile community events are wonderful places to learn and I am extremely grateful for them (hence the cheesy slide of my thanks in the header picture).

Extensions

We will keep working with and improving this practice; I will update this post with any new challenges or tips. Let me know how you have found using BDD and Three Amigos in the comments below.


FedEx Day

Motivation

To encourage creativity and ownership from the team in the product


Research

Articles on benefits

  • Rob van Lanen’s paper on unlocking intrinsic motivation through FedEx Days
  • Johan den Haan’s 10 reasons why you should organise a FedEx Day
  • Facebook’s features that have resulted from Hackathons

Method

Preparation for the day

  1. Agree a time frame with the team and management
    1. Agreed a time frame between two sprints (a bank holiday had offset our schedule, leaving awkward dates for a planning session)
  2. Organise a review session to show each other our innovations
    1. Ensure someone senior was present to see the innovations
    2. Structured similar to the first half of the Sprint Review
  3. Establish rules
    1. It couldn’t be something we were already planning to do; it had to be new
    2. There had to be something to show at the end (i.e. the FedEx delivery must arrive on time)

On the FedEx Day

  1. Meet at the start of the day to refresh the purpose
  2. Participants self-organised into teams
  3. Teams each agreed a direction

Results

Creativity and Innovation Boost

  • Enjoyment from the team members on getting to work on something they either enjoyed or were passionate about
  • The business appreciated the ideas that had been created and put them on the backlog for further investment, or for further investigation where a new product was involved
  • After this event there were more creative questions and suggestions about the solutions and features

Team Cohesion

  • The team separated into a few smaller teams to work on their projects – mostly by time zone for ease
  • The team learned that for people who have specialist skill sets in the team it may be harder to join in, so this needs special attention

Delivering Incrementally for Fast Feedback

  • Everyone had something to show for the review
  • The business and the team asked for this to become regular

Sprint Review ‘Science Fair’ style

Motivation

We have multiple teams delivering towards the same business goals. They often weren’t fully aware of what the other teams were achieving and missed the opportunity to ask questions.

Research

Articles

  • There is a similar idea in the Nexus Framework, demonstrating by feature developed rather than by team (1, pg 67)
  • Quick scrum.org summary is here

Starting Point

  • Each team doing their own end of iteration/ release reviews.
  • Teams were unsure of the approach – especially those who had worked on features they didn’t feel had a ‘wow’ factor
  • Some team members said they felt they had already completed a review of their work for the iteration or release and were not confident any further review would be valuable

Trial Method

  • Each team having a ‘stall’ at the Science Fair (we had 6 teams in total holding stalls)
  • Stalls set out around the edges of the room with enough space for people to wander about and stand around each stall
  • Inviting all stakeholders from all teams and some in-business users
  • A number of rounds of ten minutes each were set up to allow the teams to also rotate and see each other’s stalls
  • An introduction was created to explain the format and summarise the features delivered in the agreed interval

Results

Notes from Setup

  • We decided to schedule these at regular intervals that suited the iteration or release cycles of all the teams involved, as they work to different cadences
  • Setup time in the room was important so that when people entered we were ready to go
  • Talking through all the delivered features at the start felt like a waste of time, as everyone could then discuss them with each team at the stalls. It took time away from actual conversations, so we decided to drop it from the next event and instead bring our objectives wall into the room for people to refer to if they wished.

Stall Activity

  • Each team had someone viewing their demo at almost every ’round’ as we called them
  • Each team felt they got value out of it as they were able to have more in depth conversations and ask more questions about the features from other teams than they usually feel able to in the team specific reviews.
  • The teams who were concerned about repetition of reviews, or that their stall lacked exciting enough features, had ample interest, questions, and feedback – enough for us to repeat this Science Fair format again
  • Stakeholders and in-business users who would usually only attend the review sessions for specific features broadened their knowledge to the work from other teams

Lessons Learned

  • Introduction to the features at the start is unnecessary – this has now been replaced with bringing the feature board into the room
  • Worthwhile start to improving the cross-team knowledge sharing and communication. It did highlight how difficult it is for each team to keep up with and understand the work of 5 other teams whilst also maintaining their own work.
  • Requests were made to make the event more ‘jazzy’ and exciting to attend, and to extend the invitation to more people. Biscuits have been suggested
  • This review format allowed a different type of conversation from a solo team review, I believe because there were fewer people at any one time, so questions of more personal interest seemed appropriate. This is why I don’t believe it felt repetitive
  • There are more people within the business interested in the features than you might immediately think

Extensions to try

  • Invite more people and advertise as an event around the business for whoever wants to attend.
  • Consider whether making it a competition for best stall would create a brighter atmosphere.

References

  1. Nexus Framework by Kurt Bittner