QA Managers get asked all the time, “How long will it take to test it?” Scoping testing is one of the most challenging tasks we face in QA. To understand why, we first need to look at the question itself. What the questioner really means is, “How long will it take for you to tell us it’s okay to ship it?” This is a different question, and a much harder one to answer.
Let’s start with the first question, “How long will it take to test it?” It’s the easier one to answer because the testing task is contained within a single group: the QA group. For the most part we understand what’s involved and can estimate how long it will take. Here’s an exercise we might do to scope the testing work: We start by considering all the areas we have to test, all the types of testing we need to do in each of those areas (functional, performance, negative, exploratory, etc.), and the matrix of platforms on which we’ll need to do them (browsers, OS versions, etc.). Then we factor in any automated tests we have to help accelerate the testing, and the time needed to evaluate their results. Finally, we come up with a reasonable estimate of how long it will take to run all those tests. That wasn’t so difficult, was it? But that doesn’t get us to release. We’ve answered the question, “How long will it take to test it?” But that wasn’t the real question at all. Remember, the real question was, “How long will it take for you to tell us it’s okay to ship it?”
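The matrix arithmetic behind that estimate can be sketched in a few lines. Everything here (the areas, test types, platforms, hours-per-cell figure, and automation savings) is invented purely for illustration:

```python
# A minimal sketch of the scoping arithmetic described above.
# All names and numbers are hypothetical examples, not real data.

areas = ["login", "checkout", "search"]                    # areas to test
test_types = ["functional", "performance", "negative", "exploratory"]
platforms = ["Chrome/Win", "Safari/macOS", "Firefox/Linux"]

hours_per_cell = 2           # assumed average manual effort per matrix cell
automated_fraction = 0.30    # assumed share of the matrix covered by automation
automation_review_hours = 8  # assumed time to evaluate automated results

# Every (area, test type, platform) combination is one cell in the matrix.
cells = len(areas) * len(test_types) * len(platforms)

# Manual effort covers whatever automation does not, plus review time.
manual_hours = cells * hours_per_cell * (1 - automated_fraction)
estimate = manual_hours + automation_review_hours

print(f"{cells} matrix cells, roughly {estimate:.0f} hours for one test pass")
```

The point of the sketch is that a single test pass really is estimable: it is a bounded product of known factors, which is exactly why the easy question feels answerable.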
Our little scoping exercise tells us how long it takes to test through the product once. Running that test cycle merely begins the longer “test phase” of the overall project, by generating the first list of what engineering needs to do in order to get the product closer to release. There are other time-consuming components to “testing time” missing from our initial scoping exercise as well. These are activities not reflected in test plans, such as repeating test scenarios multiple times to isolate exact steps to reproduce, setting up test environments, meeting with engineers or designers to make sure we understand expected behavior, taking notes as we test, testing differently than we’d planned because of a bug we found, tweaking automation code to accommodate functional or UI changes, and logging bugs. These activities are largely dependent on the type of bugs, feature changes, and test environment snafus we encounter, making the time they’ll consume pretty much impossible to scope at the outset.
So far, everything we’ve considered in our effort to scope testing is (mostly) measurable, and (mostly) within QA’s control. We’ve already seen that even some of QA’s tasks are pretty much impossible to scope in advance. And looking only at QA’s tasks was the easy part. A large part of “testing time” doesn’t even involve QA. What about the time it takes engineering to fix the bugs found after all that testing? The scoping exercise above identified the time it takes to test just once. After that, engineering needs to fix all the important bugs. And then the test-fix cycle repeats — as many times as needed to find and eliminate all the show-stopper bugs. To scope the time it will take from the start of testing to a production-ready state, we must factor in multiple bug-fix cycles. And how is that possible, when we don’t yet know what the bugs are?
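One way to see why those cycles defy up-front scoping is a toy model: the total time is a straightforward sum once the bug counts are known, but the bug counts only become known after each cycle runs. The function and every number below are invented for illustration:

```python
# Toy model of the test-fix loop. The per-cycle bug counts are the
# unknowable input; the cycle count (and total time) falls out of them.
# All figures are hypothetical assumptions, not measurements.

def release_time(showstoppers_per_cycle, test_pass_hours=58, fix_hours_per_bug=3):
    """Sum testing and fixing over however many cycles the bug counts
    dictate. The number of cycles is an output of the data, not an input."""
    total = 0
    for showstoppers in showstoppers_per_cycle:
        total += test_pass_hours                    # one full test pass
        total += showstoppers * fix_hours_per_bug   # engineering fix time
        if showstoppers == 0:                       # a clean pass: ready to ship
            break
    return total

# Two projects with identical plans but different (unknowable) bug histories:
print(release_time([12, 3, 0]))          # three cycles
print(release_time([40, 25, 9, 2, 0]))   # five cycles
```

The same plan, the same team, and the same test matrix can produce totals that differ by more than a factor of two, which is why an honest answer keeps changing as the bug lists come in.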
Another obstacle to accurate scoping is our inability to predict the quality of the build. Given a fixed amount of testing time, test coverage is a function of the quality of the build. If the build is of low quality, QA will spend a lot of time processing and reporting bugs, leaving less time for test coverage across the entire product. With a higher quality build, fewer bugs are found, allowing for more test coverage because less time is spent processing bugs. (Michael Bolton does a nice job of illustrating this here [see slide 22].) Furthermore, the lower quality build can have a serious ripple effect on the schedule. Not only is less testing accomplished than desired in the first test cycle, but more bug fixes mean more code changes introduced into the next test build. The more code changes, the more risk of instability, increasing the need for broad test coverage.
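That trade-off can be expressed as a simple (and entirely invented) function: with testing time fixed, coverage falls as the bug count rises, because every bug found consumes time that would otherwise go to coverage. The parameter values are assumptions for illustration, not measurements:

```python
# Sketch of coverage as a function of build quality, under a fixed
# time budget. All parameters are hypothetical assumptions.

def coverage_units(total_hours, bugs_found, hours_per_bug=1.5,
                   hours_per_coverage_unit=2):
    """Coverage achieved after bug investigation and reporting take
    their cut of a fixed testing-time budget."""
    bug_hours = bugs_found * hours_per_bug        # time consumed by bugs
    remaining = max(total_hours - bug_hours, 0)   # time left for coverage
    return remaining / hours_per_coverage_unit

good_build = coverage_units(total_hours=40, bugs_found=5)
bad_build = coverage_units(total_hours=40, bugs_found=20)
print(good_build, bad_build)  # the low-quality build gets far less coverage
```

Same budget, same testers: the buggier build ends its cycle both less covered and with more code churn headed into the next build, which is the ripple effect described above.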
The question, “How long will it take for you to tell us it’s okay to ship it?” is legitimate, even if it’s impossible to answer precisely, and it deserves a thoughtful answer. So how do we answer it? We can carefully make our “best guess” based on what we know: the complexities and risk areas of our product, the skill set of our team, our preparedness, and our other resources. And we can hope we don’t incur the questioner’s wrath when we change our answer on a weekly basis (which we will invariably do, if we are honest people). And we can hope that our team members outside of QA will show an interest in, and take the time to learn about, all the intricacies involved in answering this most unanswerable question.