
Product Testing

“More than half of our ideas deliver no value, we just don’t know which half” – John Wanamaker (3, pg 78)


Product Testing Statistics

Experiment Statistics (3, pg 79)

  • 65% of features are rarely or never used
  • At Google and Bing, only 10% to 20% of experiments generate positive results (Harvard Business Review)
  • At Microsoft, 1/3 of experiments have positive results, 1/3 have neutral results, and 1/3 have negative results

Product Testing Grids

MVP Test Grid (1, pg 93)

  • Marketing Tests
    • Qualitative: Marketing materials
    • Quantitative: Landing page/ smoke test, explainer video, ad campaign, marketing A/B tests, crowdfunding
  • Product Tests
    • Qualitative: Wireframes, mockups, interactive prototype, Wizard of Oz & concierge, live product
    • Quantitative: Fake door/ 404 page, product analytics & A/B tests

Research Methods Framework (1, pg 230)

  • Behavioural
    • Qualitative: Usability testing
    • Quantitative: A/B testing, analytics
  • Attitudinal
    • Qualitative: User interviews
    • Quantitative: Surveys

Qualitative Marketing Tests

Marketing Materials (1, pg 93)

  • To understand which benefits resonate with customers
  • To understand how they react to different ways of showing the benefits
  • The aim is to understand what they make of the marketing material and why
  • The marketing material can be a landing page, video, advert, or email

Quantitative Marketing Tests

Landing page/ smoke test (1, pg 94)

  • Traffic is directed to a landing page
  • On this page visitors are asked to show interest (e.g. via a sign-up button or a plans and pricing page)
  • There is no product yet
  • A ‘coming soon’ message is often displayed to those who show interest

Explainer video (1, pg 94)

  • Same as landing page
  • For products that are harder to explain on a landing page (e.g. Dropbox)

Ad campaign (1, pg 94)

  • As adverts don’t allow you to display a lot, this is more appropriate for optimising customer acquisition and not product-market fit
  • Can advertise to different demographics to check hypothesis about target market
  • Measure the click-through rate to see which ads (and which demographics) prove more successful
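As a small illustration of that measurement (all impression and click numbers below are invented), comparing click-through rate across the demographics you advertised to is a short calculation:

```python
# Hypothetical ad results per demographic, used to test target-market hypotheses.
ad_results = {
    "students":            {"impressions": 40_000, "clicks": 320},
    "young professionals": {"impressions": 35_000, "clicks": 700},
    "parents":             {"impressions": 25_000, "clicks": 150},
}

for demographic, counts in ad_results.items():
    ctr = counts["clicks"] / counts["impressions"]  # click-through rate
    print(f"{demographic:<20} CTR: {ctr:.2%}")
```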

Marketing A/B testing (1, pg 94)

  • Test two alternative designs to compare how they perform
  • Run the tests in parallel with 50% of the traffic to each for simplicity
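The references don't prescribe any particular way of judging the result of the 50/50 split, but as a minimal sketch (the visitor and sign-up numbers are invented), a simple two-proportion z-test can indicate whether the difference between the two variants is likely to be more than noise:

```python
from statistics import NormalDist

# Hypothetical results from a 50/50 marketing A/B test of two landing pages.
visitors_a, signups_a = 1_000, 48   # variant A: 4.8% conversion
visitors_b, signups_b = 1_000, 62   # variant B: 6.2% conversion

p_a = signups_a / visitors_a
p_b = signups_b / visitors_b

# Two-proportion z-test: is the difference bigger than random variation?
p_pooled = (signups_a + signups_b) / (visitors_a + visitors_b)
std_err = (p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
z = (p_b - p_a) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p-value = {p_value:.3f}")
# A small p-value (commonly < 0.05) suggests the winner is unlikely to be chance.
```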

Crowdfunding (1, pg 94)

  • Advertising your product on a site like Kickstarter and asking people to pay for the product in advance of it being made
  • Set a minimum funding threshold so that you do not build the product until you have raised £X
  • Backers then receive your product once it has been built (i.e. they pre-order the product, often at a discount)

Qualitative Product Tests

Wireframes/ Mockups/ Interactive Prototypes (1, pg 100) (2, pg 124)

  • Demonstrate or show concepts to users to gauge their feedback (e.g. wireframes)
  • Have an ‘ask’ as a definitive pass-or-fail criterion
    • Commitment, monetary value, time, or another investment to show that they are interested
  • E.g. Dropbox made a video of their concept (an advert as if they had already built it) to convince investors
  • Variations in interactivity and fidelity (fidelity refers to how closely the artifact resembles the final product)
Example of a Wireframe Sketch

Low fidelity prototype (4, page 49)

  • Start with a persona
  • Draw the homepage and ask what actions the user wants to do from there
  • For each action draw a box (each box is a story)
  • Continue until the persona has completed their actions (including exploring edge cases) and then start with another persona
Example of Low Fidelity Prototype

Concierge (2, pg 122)

  • Deliver the end result to your customer manually
  • Customer understands that it is being done manually and there is no appearance of a final solution to them
  • Conduct this with only a handful of users, as it is labour intensive

Wizard of Oz (2, pg 123)

  • Deliver the end result to your customer manually
  • Customer is not aware that it is manual behind the scenes and thinks they are using the end product
  • It is tempting to leave these running because, if successful, you get value from them, but they are expensive to run
  • Can be combined with A/B testing

Quantitative Product Tests

Fake Door/ 404 page (1, page 100)

  • Good to test demand for a new feature
  • Include a link or button on the product to direct customers to a new feature
  • The link leads to a page explaining that the feature hasn’t been built yet and asking why they would find it valuable
  • Overuse will make customers unhappy

Product A/B tests (1, page 100)

  • Used to compare performance of two alternative user experiences in your product

Qualitative Behavioural Tests

Usability testing

  • Online tools can be used to give a user a task and record them completing the task
  • Users are asked to talk through how easy it is to complete a task

Quantitative Behavioural Tests

A/B Testing

  • Two different versions of the product are shown to different groups of users
  • Differences in behaviour are tracked (e.g. conversion percentage)

Analytics

  • Tracking of user behaviour within the product
  • The data can then be analysed to see whether the hypothesis was met

Qualitative Attitudinal Tests

User Interviews

  • One-on-one interview with a user
  • Coming soon: Tips on User interviews

Quantitative Attitudinal Tests

User Surveys

  • Coming soon: Tips on User Surveys

References

  1. Lean Product Playbook by Dan Olsen
  2. Escaping the Build Trap by Melissa Perri
  3. Mastering Professional Scrum by Stephanie Ockerman and Simon Reindl
  4. User Stories Applied by Mike Cohn

Product Metrics

Criteria for Good Metrics

Actionable (2, pg 143)

  • Demonstrate clear cause and effect
  • Understand how value was achieved (e.g. was it engineering or marketing)
  • Avoids a blame culture when metrics go down

Accessible (2, pg 143)

  • Everyone can access them
  • Allows the metrics to guide decisions, as they are a single source of truth
  • “Metrics are people too”
    • E.g. a ‘website hit’ is not as accessible a concept as ‘a customer visited the site’

Auditable (2, pg 143)

  • The data is credible to employees
  • Can spot check the data with real people to verify it

Iterating Metrics

The Lean Product Analytics Process (1, page 260)

Product Metrics Structures

Pirate or AARRR Metrics

Originally by David McClure

AARRR Metrics Framework (1, page 239)

Metrics

  1. Acquisition (prospects visit from various channels/ users find your product)
  2. Activation (prospects convert to customers/ users have their first great experience)
  3. Retention (customers remain active/ users return to your product)
  4. Referral (customers refer prospects/ users recommend your product)
  5. Revenue (customers make your business money/ users pay for your product)

Benefits

  • Can calculate conversion through each step of the funnel (3, page 106)
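As a small illustration of that funnel calculation (the stage counts below are made up), step-to-step and overall conversion can be computed like this:

```python
# Hypothetical number of users reaching each AARRR stage over one period.
funnel = [
    ("Acquisition", 20_000),
    ("Activation",   5_000),
    ("Retention",    2_000),
    ("Referral",       400),
    ("Revenue",        300),
]

for (stage, count), (_, previous) in zip(funnel[1:], funnel[:-1]):
    step_conversion = count / previous          # conversion from the prior stage
    overall_conversion = count / funnel[0][1]   # conversion from the top of the funnel
    print(f"{stage:<11} step: {step_conversion:6.1%}   overall: {overall_conversion:6.1%}")
```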

Shortfalls

  • Does not consider user satisfaction (3, page 106)

HEART Framework

  • This framework is for a specific product or feature
  • Happiness (how satisfied the user is with the product)
  • Engagement (how the user interacts with the product)
  • Adoption (same as activation in Pirate Metrics)
  • Retention (same as Pirate Metrics)
  • Task Success (how easy it is for the user to complete the task)

Specific Metric Details

Retention Parameters

Retention Curves (1, page 243)

  • The days-since-first-use axis usually does not start at day 0, as retention there would be 100% and would distort the scale of the graph
  • Can use cohort analysis (i.e. plotting the retention rates of different user cohorts (groups) onto the same axes to see the difference in the retention parameters for the separate groups)
Retention Curve (1, page 243)
  • Parameter 1 to notice: the percentage where the graph starts on Day 1, which shows the initial drop-off rate
  • Parameter 2: the rate at which the retention curve decreases from the Day 1 value
  • Parameter 3: the terminal value, where the retention flattens out. If it is 0% then your product will ultimately lose all of its customers
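As a rough sketch of the cohort analysis mentioned above (the usage log is invented), retention curves per cohort can be built by counting, for each group of users who started in the same week, how many were active on each day since their first use:

```python
from collections import defaultdict

# Hypothetical usage log: (user_id, signup_day, active_day).
events = [
    ("u1", 0, 0), ("u1", 0, 1), ("u1", 0, 7),
    ("u2", 0, 0), ("u2", 0, 1),
    ("u3", 0, 0),
    ("u4", 7, 7), ("u4", 7, 8), ("u4", 7, 14),
    ("u5", 7, 7),
]

cohort_users = defaultdict(set)                        # cohort -> users in it
cohort_active = defaultdict(lambda: defaultdict(set))  # cohort -> day since first use -> users
for user, signup_day, active_day in events:
    cohort = signup_day // 7                           # group users by signup week
    cohort_users[cohort].add(user)
    cohort_active[cohort][active_day - signup_day].add(user)

for cohort in sorted(cohort_users):
    size = len(cohort_users[cohort])
    # Retention = share of the cohort active on each day after day 0.
    curve = {day: len(users) / size
             for day, users in sorted(cohort_active[cohort].items()) if day > 0}
    print(f"week {cohort} cohort: " +
          ", ".join(f"day {day}: {rate:.0%}" for day, rate in curve.items()))
    # Parameter 1: the day 1 value (initial drop-off).
    # Parameter 2: how quickly the curve falls after day 1.
    # Parameter 3: the level where the curve flattens out (the terminal value).
```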

References

  1. Lean Product Playbook by Dan Olsen
  2. Lean Startup by Eric Ries
  3. Escaping the Build Trap by Melissa Perri

BDD and Three Amigos

Overview

Towards the end of 2018 I went to a workshop at Agile Leicester on Behaviour Driven Development (BDD) and Three Amigos. This article gives some references on where you can learn more about these techniques, then walks through introducing them to a team and the results that were achieved. I gave a talk back to the Agile Leicester community on this subject in early 2019 and the picture above is of that talk.


Motivation

The stories we were using were closer to contracts than stories

Research

I went to a workshop about BDD and Three Amigos at Agile Leicester given by Stuart Day.

Articles that summarise the practices:

Refresh on what stories actually are:

  • User Story Mapping – Jeff Patton
  • User Stories Applied – Mike Cohn


Starting Point

  • We were keen on using the brilliant minds of all the team members to create the vision and to be a part of creating the solution rather than developing exactly what was written.
  • Our stories had lengthy acceptance criteria that weren’t focused on the user needs and stipulated exactly what the solution should be, leaving no space for creativity.
  • There was not enough room to question, influence, and negotiate in the stories.

Experiment Hypotheses

By using BDD and three amigos we would:

  1. Focus our communications on the value to a user of a feature and the behaviours that would help us ultimately achieve that value
    1. This would support us in negotiating and sharing solution ideas, moving away from it being pre-decided
  2. Spread the understanding of the story through the team and enable empowerment
    1. This would allow the team to make decisions on the solution with the Product Owner
  3. In-depth behaviour discussions would enable a pragmatic approach to MVPs
    1. This would allow us to deliver smaller increments of working software to allow feedback

Method

To achieve our hypotheses we had to change how we worked.


Story Writing

  • The first thing to change was how we wrote stories. I worked with our Product Owner to refocus the stories back on what the user wanted and only that. No more design specifications, no more interaction specifications that a user doesn’t need to gain the value. We stripped them right back to CARD and CONFIRMATION, all from the user’s point of view. (See Ron Jeffries explain CARD, CONFIRMATION, and CONVERSATION here)

Three Amigos

  • We changed how we talked through stories as a team. Previously we had backlog refinement where the Product Owner would present the stories to the team, and we would move on when everyone understood.
  • We started with four hour-long Three Amigos sessions per sprint to refine the stories ready for the next sprint.
  • The team would decide who turned up from each specialty (i.e. who would be the QA amigo and who the Developer amigo), and the Product Owner would always be there, sometimes with a stakeholder if it made sense.

BDD

  • We used the acceptance criteria as a guide to writing the scenarios, as Stuart demonstrated in his talk. We talked through each user-driven acceptance criterion and created all the behaviour scenarios that supported that confirmation.

Results

1 – Focus our communications on the value to a user of a feature and the behaviours that would help us ultimately achieve that value

  • Changing how we wrote the stories brought the focus back to WHY we were developing the story in the first place and the unnecessary words were removed
  • Our Product Owner has felt that these discussions help to marry the user need and the technical depth behind the behaviours
    • Originally this did take up more of her time, as instead of just writing her story and presenting it to the team she had to spend time creating it with the team
    • Previously, though, a change to a story was very time-consuming, which made it tempting to resist the change; now change happened naturally
    • Overall, time was saved

2 – Spread the understanding of the story through the team and enable empowerment

  • Collaboration with people of all different disciplines shone a light on different options and things that previously may not have been thought of
    • For example, a QA amigo may think about broader scenarios like ‘what should the behaviour be if…’
    • A developer amigo might be able to see that a solution is going to be slow and take a lot of power to achieve
  • Existing behaviour was talked through to share knowledge of the product in general
    • When we started using BDD we only talked through the behaviour that would change with this story
    • We learned that omitting existing behaviour from our discussions was not the best approach as the team members who hadn’t touched that part of the product before didn’t know how this new story would impact what was already there
    • If we felt that the existing behaviour was something we needed to consider as part of the story then we created scenarios
  • Talking through the language in the scenarios all together boosted our shared understanding
    • We had plenty of conversations about what certain words meant to make sure we were all using common language
    • We used our user roles to make the scenarios relatable, defined terms, and debated grammar
    • Admittedly some of this went too far, and one of the things we learnt is not to waste time straying from the goal of Three Amigos on a quest for perfection

3 – In-depth behaviour discussions would enable a pragmatic approach to MVPs

  • By splitting stories into scenarios we could see their size a bit more clearly
    • For example, if we found one had a lot of scenarios we could group them together, and there our story was split by functionality, just as simple as that.
    • Or we could cut some out that maybe weren’t vital to the user. These scenarios could come later.
  • We also learned that BDD scenarios don’t work for all types of features, for example one with a specific, non-negotiable set of rules set by a regulator. Scenarios are good for describing in general what happens when a rule is followed or broken, but are not needed for the actual rules.

Close

All in all, using BDD and Three Amigos achieved the three hypotheses that we set out to achieve. There are many more benefits cited from using this technique, including improvements to quality and documentation, but as we weren’t measuring ourselves against them I haven’t included them in this article.

It also goes to prove that Agile community events are wonderful places to learn and I am extremely grateful for them (hence the cheesy slide of my thanks in the header picture).

Extensions

We will keep working with and improving this technique, and I will update this article with any new challenges or tips. Let me know how you have found using BDD and Three Amigos in the comments below.


FedEx Day

Motivation

To encourage creativity from the team and a sense of ownership of the product


Research

Articles on benefits

  • Rob van Lanen’s paper on unlocking intrinsic motivation through FedEx Days
  • Johan den Haan’s 10 reasons why you should organise a FedEx Day
  • Facebook’s features that have resulted from Hackathons

Method

Preparation for the day

  1. Agree a time frame with the team and management
    1. We agreed a time frame between two sprints (we timed this to coincide with awkward dates for a planning session, I think because a bank holiday had offset us)
  2. Organise a review session to show each other our innovations
    1. Ensured someone senior was present to see the innovations
    2. Structured similarly to the first half of the Sprint Review
  3. Establish rules
    1. It couldn’t be something we were already planning to do; it had to be new
    2. There had to be something to show at the end (i.e. the FedEx delivery must arrive on time)

On the FedEx Day

  1. Meet at the start of the day to refresh the purpose
  2. Participants self-organised into teams
  3. Teams each agreed a direction

Results

Creativity and Innovation Boost

  • Team members enjoyed getting to work on something they either found interesting or were passionate about
  • The business appreciated the ideas that had been created and put them on the backlog for further investment, or for further investigation where a new product was involved
  • After this event there were more creative questions on the solutions and the features and more suggestions

Team Cohesion

  • The team separated into a few smaller teams to work on their projects – mostly by time zone for ease
  • The team learned that it may be harder for people with specialist skill sets to join in, so this needs special attention

Delivering Incrementally for Fast Feedback

  • Everyone had something to show for the review
  • The business and the team asked for this to become regular

Sprint Review ‘Science Fair’ style

Motivation

We have multiple teams delivering towards the same business goals. They often weren’t fully aware of what the other teams were achieving and missed the opportunity to ask questions.

Research

Articles

  • The Nexus Framework describes a similar idea, but demonstrating by feature developed rather than by team (1, pg 67)
  • A quick scrum.org summary is here

Starting Point

  • Each team was doing their own end of iteration/ release reviews.
  • There was an unsure approach from teams – especially those who had worked on features they didn’t feel had a ‘wow’ factor
  • Some team members said they felt they had already completed a review of their work for the iteration or release and were not confident any further review would be worthwhile

Trial Method

  • Each team had a ‘stall’ at the Science Fair (we had 6 teams in total holding stalls)
  • Stalls were set out around the edges of the room with enough space for people to wander about and stand around each stall
  • All stakeholders from all teams and some in-business users were invited
  • A number of rounds of ten minutes each were set up to allow the teams to also rotate and see each other’s stalls
  • An introduction was created to explain the format and summarise the features delivered in the agreed interval

Results

Notes from Setup

  • We decided to schedule these at regular intervals that suited the iteration or release cycles of all of the teams involved, as they work to different cadences
  • Setup time in the room was important so that when people entered we were ready to go
  • Talking through the features that were delivered at the start felt like a waste of time, as everyone could then talk through them with each team. This took time away from actual conversations, so we decided not to keep it in the next one and to bring our objectives wall into the room instead for people to refer to if they wished.

Stall Activity

  • Each team had someone viewing their demo at almost every ‘round’, as we called them
  • Each team felt they got value out of it, as they were able to have more in-depth conversations and ask more questions about the features from other teams than they usually feel able to in the team-specific reviews.
  • The teams who were concerned about repetition of reviews, and about their stall not having exciting enough features, had ample interest, questions, and feedback for us to repeat this Science Fair format again
  • Stakeholders and in-business users who would usually only attend the review sessions for specific features broadened their knowledge of the work from other teams

Lessons Learned

  • The introduction to the features at the start is unnecessary – this has now been replaced with bringing the feature board into the room
  • It was a worthwhile start to improving cross-team knowledge sharing and communication. It did highlight how difficult it is for each team to keep up with and understand the work of 5 other teams whilst also maintaining their own work.
  • Requests were made from all to make the event more ‘jazzy’ and exciting to attend, with an extension to include more people. Biscuits have been suggested
  • This review format allowed a different type of conversation to a solo team review, which I believe was because there were fewer people at one time and so questions of more personal interest seemed appropriate. This is why I don’t believe it felt repetitive
  • There are more people within the business interested in the features than you would immediately think

Extensions to try

  • Invite more people and advertise as an event around the business for whoever wants to attend.
  • Consider whether making it a competition for best stall would create a brighter atmosphere.

References

  1. Nexus Framework by Kurt Bittner

Velocity Forecasting with Monte Carlo

Motivation

Communication with stakeholders and confidence in how we forecast were proving difficult

Research

Articles

  • I found this article on Scrum.org that introduced me to Monte Carlo forecasting.
  • Further research, and a spreadsheet to use, can be found here

Starting Point

The team I trialled this with had previously been forecasting by taking the average velocity of the last 3 sprints, adjusting for anticipated holidays.

Method

  • Take it to the team and talk through the research
  • Input velocity data from previous sprints
  • Use the predictions it produces to change how we forecast our sprints
  • Use the calculations to set a strong goal (one we could forecast with better confidence) and a stretch goal that we would get to if we could
  • Take it to stakeholders and talk through the trial
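The spreadsheet linked above does the heavy lifting, but as a minimal sketch of the underlying idea (the velocities are invented, and the 85% and 50% confidence levels are my own illustrative choices), a Monte Carlo forecast resamples past sprint velocities many times and reads the strong and stretch goals off the resulting distribution:

```python
import random

# Hypothetical velocities (story points) from the team's recent sprints.
past_velocities = [21, 34, 25, 30, 18, 27, 24, 29]

def simulate(history, sprints=1, runs=10_000):
    """Monte Carlo: resample past velocities to forecast the next `sprints` sprints."""
    return sorted(
        sum(random.choice(history) for _ in range(sprints)) for _ in range(runs)
    )

def at_confidence(outcomes, confidence):
    """Points completed or exceeded in `confidence` of the simulated runs."""
    return outcomes[int((1 - confidence) * len(outcomes))]

next_sprint = simulate(past_velocities)
strong_goal = at_confidence(next_sprint, 0.85)   # high confidence: commit to this
stretch_goal = at_confidence(next_sprint, 0.50)  # 50/50: reach for this if we can
print(f"Strong goal: {strong_goal} points, stretch goal: {stretch_goal} points")

# The extension discussed at the end of this post: given a goal,
# how likely are we to achieve it inside a sprint?
goal = 28
chance = sum(total >= goal for total in next_sprint) / len(next_sprint)
print(f"Chance of completing {goal} points next sprint: {chance:.0%}")
```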

Results

Communication on our forecasts within the team

  • It enabled conversation over the forecast velocity, as we could adjust inputs and see the simulation run rather than be presented with a single number
  • A forecast is just that, a forecast, and using this tool brought us back to metrics being a tool rather than a target number to hit
  • The use of strong goals and stretch goals helped us focus on something more realistic to achieve and therefore built confidence within the team

Transparency with Stakeholders

  • As we’d had more conversations as a team we could justify our forecasts when asked and had more confidence
  • The use of strong and stretch goals also helped manage expectations with stakeholders

Extension to try

  • Planning for a goal first and then checking how that goal would look in the simulation and what our chances were of achieving the goal
  • Then we could adapt our approach to the goal if it seemed unlikely that we could achieve it inside a sprint