While working with various Business Analysts, Product Owners, Developers and Quality Analysts, I have observed a number of anti-patterns in how user stories are written and split. Anti-patterns are common but defective processes and implementations within organizations or teams, and teams new to Agile tend to exhibit these behaviours the most. Here are my top six user story anti-patterns, along with their mitigations and reminders of the underlying principles.
User stories should follow the INVEST principle (Independent, Negotiable, Valuable, Estimable, Small, Testable). In particular, user stories should be small: the aim is to complete them in less than a week, with the sweet spot being one to three days. This enables the team to estimate accurately, track scope, get early feedback and understand the story better.
An example of a big story would be: “As a user I would like to manage my user access”.
Instead, it could be broken down into smaller, independently valuable stories - for example, creating a new user, editing a user's permissions and deactivating a user.
Above we talked about big stories that need to be split. The opposite - splitting too aggressively - is not always better. For example, take this user story:
“As a user I want to log in to an application so that I am authenticated”.
You would find the team breaking the story down into an excessive number of tech tasks and smaller stories that do not necessarily add value once the exercise is completed.
With the above example of story-splitting, the developers would assign each other these tasks/user stories. The danger here is that there would be so much disconnect and churn in trying to complete each task/user story. At the end of the day, when any of the tasks/user stories are complete, there won't be a clear understanding of what value was delivered. None of the tasks are independent and none can be shipped as working software that is usable.
Check out this article for more ideas on user story slicing techniques.
Some Agile projects start with an already defined UI screen design or, in some cases, an expectation that all the UI designs are created and signed off before the development team starts. I have seen this happen in one of the projects I’ve been a part of. What I observed was that the Business Analyst would write acceptance criteria purely based on the screen designs and not the functionality.
The designer would create a login screen containing all the necessary UI elements: username and password fields, a "forgot password" link, a header, a footer, a section for advertising on the side, and so on.
The Analyst would then create a story covering all of these features on the screen, and the Developer would write the code for it. The problem with this story is that the functionality doesn't work end-to-end: the "forgot password" link wouldn't do anything because the page is static, and filling in your login details and clicking "Login" wouldn't take you to a success page. So no incremental value was created for the customer once the Login story was completed.
User stories are not just elegant screens; they should represent a piece of functionality that customers can test and use.
Check out vertical slices for more ideas around ensuring your stories talk to the functionality and not just the front-end view.
I have observed teams where Business Analysts work away in the dark, creating an entire backlog of stories and acceptance criteria all on their own. As a result, when the Developers pick up a story it is the first time they have any context on it - and the same is true for the rest of the team. An unintended consequence is that stories keep jumping back and forth between development and analysis, often because of missed features or scenarios, and the Quality Analysts have a field day raising bugs as a result.
A user story belongs to the whole team; it isn't a task that is the sole responsibility of the Business Analyst. What I've seen work is for the Analyst to gather the main ideas and do much of the groundwork to understand the business rules and requirements, then collaborate with all roles in the team to ensure that the story is complete, well understood and testable.
By collaborating with the whole team to get insights on your user story, you as the Analyst increase its quality and ensure it is better understood by all team members.
In this particular example, we noticed that the Quality Analysts were idle until the last day of the sprint - even though the Developers kept moving their cards to Testing every few days.
Let’s use the user story example in point two to illustrate this point: “As a user I want to log in to an application so that I am authenticated”.
What happened is that the team took this story and split it into layer-based tasks along these lines: (1) build the login UI, (2) build the authentication API, (3) set up the user database, and (4) integrate the UI with the API.
Now, because the story is not split correctly, the QA has to wait until all tasks (1 to 4) are complete before they can test the story end-to-end. This creates a bottleneck that blocks the story from progressing to Done.
Observing conversations between development teams and Product Owners, there is a notion that tech debt has zero value because it doesn't feel like a new piece of functionality. Arguably, refactoring existing (and working) code provides no business value because the customer doesn't see the benefit of a new feature. However, if we don't refactor "dirty" code, the tech debt we acquire slows down every new feature we later try to add for customers. For example, substandard code that needs refactoring can cause slow load times and performance issues that affect the customer's experience. This is why I believe planning for and attending to this type of tech debt does provide business value - even though that value is realised in the future.
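As a toy illustration (my own, not from any particular codebase), consider a membership check left inside a loop - a small piece of tech debt that changes nothing the customer can see, yet silently degrades performance as data grows:

```python
# Tech-debt version: "x in active_ids" scans the whole list each time,
# so the overall cost is O(n * m) and grows painfully with data volume.
def active_customer_ids(orders, active_ids):
    return [o["customer_id"] for o in orders if o["customer_id"] in active_ids]

# Refactored version: build a set once, so each lookup is O(1) on average.
# Behaviour is identical - only the future cost of the code has changed.
def active_customer_ids_fast(orders, active_ids):
    active = set(active_ids)
    return [o["customer_id"] for o in orders if o["customer_id"] in active]

orders = [{"customer_id": i % 500} for i in range(2000)]
ids = list(range(250))
assert active_customer_ids(orders, ids) == active_customer_ids_fast(orders, ids)
```

The refactor delivers no new feature, which is exactly why it is easy to dismiss as zero value - yet every feature built on top of it benefits, which is the future value the team needs to articulate to the business.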
Other examples of necessary tech debt are sometimes unfairly and incorrectly thrown into the zero-value dumpster.
We need to properly analyse the tech debt we have and articulate its value (or lack thereof). This enables the business to prioritise and see the potential monetary value of delivering the work, instead of blindly categorising all tech debt as zero value.
I hope you find this useful. Happy story writing!