Scrum and Quality Assurance

A recent email:

[x] and I are over quality and we put folks from our teams on to the scrum development teams to help ensure quality. Some of the key questions we have are regarding how we ensure quality. How do we know with the scrum process that the quality is adequate? What are key ways we can report and track the quality during a scrum development cycle?

Remember how Tito Puente moved the drummer from the back of the band to the front of the band? That’s what Scrum requires you to do with QA.

The Scrum framework itself is silent on engineering practices. Scrum does require you to build a potentially-shippable product increment every Sprint. “Potentially shippable” generally means you could confidently get it out the door within one stabilization Sprint, or less. A stabilization Sprint is not a testing Sprint. Every Sprint is a testing Sprint. That means the team gets zero credit for work that isn’t tested.

No one said Scrum was easy. If your testing is any good (more than a bunch of unit tests), you may find it difficult to get this done every Sprint. It’s a lot of extra work. This responsibility is owned by the whole team, because the whole team won’t get credit if the work isn’t done/done/done.

To make this explicit, our training encourages your teams to negotiate a robust definition of “done” for every Product Backlog Item. Write this right on the card until the whole team internalizes the new habits. This means taking on fewer Product Backlog Items during each Sprint Planning Meeting. Welcome to real life. A smaller amount of thoroughly-tested work is worth more to us than a larger amount of low quality work. Instead of reporting and tracking regression failures, we fix everything we broke in the same Sprint we broke it, or the item’s not demonstrated at the Sprint Review Meeting. Sometimes it’s slow going, but this way we always know where we stand (unlike the FBI).

If your team gets sick of all the extra work (which increases every Sprint as your codebase grows), and is willing to learn new skills, it will automate as much as possible: end to end (system) testing, load testing, “negative testing”, security testing…. When anyone can reach the “push to test” button and get rapid feedback whether it’s broken, they can make more radical design changes than they would otherwise because they’re not flying blind. Another useful engineering practice we borrow from the eXtreme Programming folks: continuous integration.
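To make “push to test” concrete, here’s a minimal sketch in Python. A hypothetical `transfer` function stands in for the system under test, with a happy-path check plus the kind of “negative testing” mentioned above, where invalid input must be rejected. Everything here is invented for illustration; a real suite would also cover end-to-end paths, load, and security.

```python
# Sketch of the kinds of checks a "push to test" suite bundles together,
# using a hypothetical funds-transfer function as the system under test.

def transfer(balance, amount):
    """Move `amount` out of an account; reject invalid requests."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_happy_path():
    assert transfer(100, 30) == 70

def test_negative_amount_rejected():
    # "Negative testing": deliberately feed invalid input, expect rejection.
    try:
        transfer(100, -5)
    except ValueError:
        return
    raise AssertionError("expected ValueError for negative amount")

def test_overdraft_rejected():
    try:
        transfer(50, 80)
    except ValueError:
        return
    raise AssertionError("expected ValueError for overdraft")

if __name__ == "__main__":
    test_happy_path()
    test_negative_amount_rejected()
    test_overdraft_rejected()
    print("all checks passed")
```

The point isn’t this particular function; it’s that anyone on the team can run the whole thing in seconds and know immediately whether a change broke something.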

Of course you will still need some manual testing.

Remember, the principle behind the practice of combining QA skills with design/coding skills on one team is to tighten the feedback loop. Don’t track bugs; detect them when they’re created and fix them! (Of course if any slip through, you can create Product Backlog Items for them to be prioritized like all other work.) The traditional practice of waiting until the end to test plays havoc with our release planning. Maybe we can predict how long it takes to test, but how long will it take to fix the things we find during testing? And then how long will it take to fix the things we broke while fixing those things? With the long feedback loop of the waterfall process we can’t predict how long it will take for the ball to stop bouncing. A flimsy definition of “done” (in Scrum, or any other approach) leads to an unbounded amount of work before we can ship.

The software industry has an imbalance of skills, personnel, and clout. There hasn’t been much career incentive for our best and brightest to get good at QA. I visited one company that gave their “developers” the desks near the window while the “testers” were clumped toward the center. (They nearly threw me out of that window when I suggested grouping by cross-functional teams instead.) Scrum can change that when combined with the Agile engineering practices and a robust definition of “done” for Product Backlog Items.

Software Process Mentor
(former embedded systems design engineer and embedded systems verification engineer)

The integration of QA and coding is explained in greater depth in the Scrum Training Series, the Scrum Reference Card, and the Scrum Master Checklist.

Michael James

Michael James is a software process mentor, team coach, and Scrum Trainer with a focus on the engineering practices (TDD, refactoring, continuous integration, pair programming) that enable Agile project management. He is also a software developer (a recovering "software architect" who still loves good design).

Posted in Agile
7 comments on “Scrum and Quality Assurance”
  1. Anonymous says:

    Are you assuming QAs are all white-box QAs or programming QAs? Because black-box QA won’t be able to help much during the development phase.

  2. Michael James says:

    You don’t seem to have a very high opinion of your people!

    Becoming more agile than you are now entails learning skills you don’t have now. This could include your Scrum team members who used to call themselves “testers” learning how they can help on the first day of the Sprint, your “coders” learning to collaborate, and everyone learning there’s a fruitful gray area between black box testing and white box testing.

    Scrum’s not for everyone, and some people may leave. But most people I’ve met in this business are interested in learning new ways of working.


  3. Michael James says:

    A New York Times article about Google culture describes a novel approach to promoting modern engineering practices.

    In the Testing grouplet, our idea was to have developers start writing their own tests. But no matter how hard we tried, we weren’t reaching engineers fast enough in our growing organization. One day, toward the end of a long brainstorming meeting, we came up with the idea of putting up little one-page stories, called episodes, in bathroom stalls discussing new and interesting testing techniques. Somebody immediately called it “Testing on the Toilet,” and the idea stuck.

    We formed a team of editors, encouraged authors to write lots of episodes and then bribed Nooglers with books and T-shirts to put up episodes every week. The first few episodes touched off a flurry of feedback from all corners of the campus. We received praise and flames, but mostly what we heard was that people were bored and wanted us to hurry and publish the next episode.

    Eventually, the idea became part of the company culture and even a company joke, as in, “Excuse me, I need to go read about testing.” That’s when we realized that we had what we needed: a way to get our message out.

  4. Anonymous says:

    Any thoughts on how system-level tests like performance, scalability, deployment, compatibility, etc., can fit into Scrum?
    As we understand it, such tests can happen only after the components have been integrated.

    Also, an isolated/separate test team performing tests on a product adds more value compared to testers who are part of the development team… this would provide separate views/thoughts/independent evaluation of the product quality, almost like a customer. So, does anyone feel it’s not the right idea to follow in Scrum?

  5. Michael James says:

    You probably won’t be able to do all those things every Sprint in the beginning. The ScrumMaster’s job includes pushing the edges of the definition of “done” each Sprint, to eventually include all development needed for shippable product. Each Product Backlog Item should have a set of acceptance criteria.
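    One way to make acceptance criteria concrete is to write each criterion as an automated check the team runs every Sprint. Here’s a minimal sketch, with an invented Product Backlog Item (“user can reset a forgotten password”) and a stand-in `request_reset` function; only the contract matters, not this toy implementation.

    ```python
    # Sketch: acceptance criteria for one (invented) Product Backlog Item,
    # "user can reset a forgotten password," written as executable checks.
    # `request_reset` is a stand-in for the real system under test.

    def request_reset(email, known_accounts):
        """Return a one-time token for known accounts, None otherwise."""
        if email in known_accounts:
            return f"token-for-{email}"
        return None

    ACCOUNTS = {"alice@example.com"}

    # Criterion 1: a known account receives a reset token.
    assert request_reset("alice@example.com", ACCOUNTS) is not None

    # Criterion 2: an unknown address is refused.
    assert request_reset("mallory@example.com", ACCOUNTS) is None

    print("acceptance criteria pass")
    ```

    When the criteria live in the test suite, “done” stops being a matter of opinion: either the checks pass this Sprint or the item isn’t demonstrated.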

    So yes, I mean performance testing, scalability testing, deployment, compatibility testing…. In the meanwhile your product is also growing in size, and must be regression tested every Sprint. So the amount of testing must increase each Sprint.

    Your only hope is automating this system testing, which I talk about here:

    Some projects have contractual requirements for Independent Verification & Validation. Others find value in external User Acceptance Testing (UAT). These can also be worked into the definition of “done.” Other times it’s best to have your customers and end users in the Sprint Review Meetings.

    Michael James
    Software Process Mentor

  6. Anonymous-2 says:

    I am a developer on a team using Scrum to develop a brand new software project. We have 5 developers, 1 automation tester and 1 QA (manual) tester on the team. We typically run 3-week sprints. We are struggling with the fact that our developers typically don’t finish designing, implementing, and unit testing the new system components until near the last day of the sprint. We are fairly early in the project, and the pieces we design depend on each other for correctness. Thus, we don’t have anything to give to either tester until near the last day of the sprint. This is not enough time for manual testing and not nearly enough time to write automation scripts. So, our non-unit testing lags behind by one sprint. How can we pull all testing into the original sprint?
