Testing

Test Plan? We don’t need no stinkin’ test plan!

It’s been a long time! I shouldn’t have left you without a dope blog to read (to). Kudos to whoever picks up on that reference. Moving right along…

Life happens and sometimes you miss opportunities to write another blog post, but I felt compelled to add another chapter to my chronicles of testing. I’m feeling a little feisty so I thought I’d cover a “controversial” topic that ties into my last entry (context-driven testing). With that introduction, here we are, talking about test plans. Or, the lack thereof. Hear me out.

When I first started my QA journey, I wanted to replicate as much formal QA procedure as I could. Among those procedures was the test plan – what it is, how to make one, what needs to be included, what results you need to glean from it, etc. In the beginning this made total sense, because how else are you supposed to know exactly what you need to be checking? With design comps and functional requirements to reference, you knew each feature of your product and how it behaved. Simply make a list of each feature and its behavior, add some additional test cases for browsers and devices, and you’re gold! However, I soon discovered that it was an exercise in futility.


I used to spend lots of time creating test plans and populating them with test cases, dreaming of how complete they were and how I was going to catch every damn bug that crept around. But I found out that when it came time to actually apply the test plan, two things became evident:

  1. You’ll never create a test plan that covers everything
  2. You spend a lot of time keeping your test plan up to date

Now keep in mind that most of the projects I work on are web development projects, mostly CMS websites, so it’s not like product development, where you’re testing the same thing over and over with new features added. Sure, there are considerations that carry over from project to project, but those are few compared to the testing needed for the rest of the site. So I spent a lot of time creating a framework that ultimately wasn’t as useful as intended. Not only was its usefulness limited, but creating and maintaining test plans took up a lot of time that could otherwise have been spent testing. As testers, we all know we’d like as much time as we can get to test without shortchanging the results. And to top it off, since the completed test plan wasn’t a client deliverable, we didn’t have a use for the results beyond chucking them in the trash. Every reported issue (bug, improvement, new feature, etc.) was in Jira, and whether or not an issue was addressed before launch, everything lived in Jira and the test plan was never referenced again.

So, I gave up the test plan. I gave it up not only because of its lack of utility, but also because of the limitations it placed on my approach to testing. When I latched onto the idea of context-driven testing, I embraced the idea that you’re approaching a project with fresh eyes. With that comes the idea that you shouldn’t limit yourself to confirmatory testing; instead, you’re testing the limits of the product. (Writer’s note: I know confirmatory testing isn’t exclusive to approaches other than context-driven testing, and it’s not as though doing context-driven testing means you never do some form of confirmation testing, but I’m simplifying for brevity.) Each product we made was different, and each one had considerations I needed to make that I couldn’t have conceived of during design and functional requirement reviews. As a result, I freed myself from the shackles of test plans and instead relied on my own wits and experience with testing CMS websites.

A couple related side notes:

Side note 1:  I’ll still make a test plan if the SOW calls for it. But if it’s not a deliverable, I’m not going to waste the project’s time.

Side note 2: I want to briefly talk about how I approach each new project. I have my Jira tickets that address each of the main sections, functions and features of the website, and that’s what helps keep me on the path to reviewing each aspect of the product. As I go through each ticket, I approach it within one of four contexts: Design, Functional, Browser, Device. By taking each of these four considerations into account, I felt I was able to do a comprehensive review of the website, CMS, and usability that remained within the time and budget allocated to testing.
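
For the curious, here’s a minimal sketch of what that cross-referencing could look like if you bothered to write it down as a tiny script. The ticket names and the script itself are made up purely for illustration (in practice the tickets live in Jira and the matrix lives in my head), but it shows the idea: pair every ticket with every context so any gap in coverage is obvious.

    # Hypothetical sketch: pair every Jira ticket with each of the four contexts,
    # so the coverage matrix is explicit instead of living only in my head.
    from itertools import product

    CONTEXTS = ["Design", "Functional", "Browser", "Device"]

    # Example ticket keys -- stand-ins, not real issues from any project.
    tickets = ["SITE-101 Homepage hero", "SITE-102 Contact form", "SITE-103 Blog listing"]

    # One entry per (ticket, context) pair, all unchecked to start.
    checklist = {pair: "not started" for pair in product(tickets, CONTEXTS)}

    # Mark a pair as reviewed once that pass is done.
    checklist[("SITE-102 Contact form", "Functional")] = "reviewed"

    # Anything still "not started" is a gap in coverage.
    for (ticket, context), status in sorted(checklist.items()):
        print(f"{ticket:25} | {context:10} | {status}")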

Now the part where I address “known unknowns” – my own limited knowledge and possible misapplication of QA process. Among the many articles and blog posts I’ve come across, there are plenty of discussions about test plans and how to apply them, but I haven’t found any benefits convincing enough to justify the time and effort of creating and maintaining one. Am I missing something? My first objective for each project I work on is ensuring functional and design integrity, but a close second is doing so within time and budget. Coming from a PM’s perspective, I think I take a little more care to observe that second objective than a typical tester would (a subjective opinion, I know). But from a purely testing point of view, I will do my damnedest to cover as much as I can with the time I have. With that, I don’t want to waste time on plans; instead I focus on my four contexts and then cross-reference them. I like structure (which my contexts provide) and I like flexibility (which not following a test plan provides). But as I continue to apply QA methodology, I’m willing to listen to more experienced testers and managers to make sure I’m not missing a point, and to get insight into anything I may have overlooked or misapplied.

And then finally, the part where I want to hear from you. What’s your experience working with test plans? I understand the need from a product point of view, where the test cases are pretty similar and serve as a platform for regression testing. But what about from a digital agency point of view?  With each website, campaign and mobile app having different standards, can you still make an argument supporting the creation and maintenance of a test plan? If a completed test plan doesn’t serve an actual purpose other than as a test map (that you could keep track of in your head), is it still worth it? Talking ROI here, folks. Let me know!

Thanks for reading, fine folks of finding fixable functions! Until next time, happy testing!

8 thoughts on “Test Plan? We don’t need no stinkin’ test plan!”

  1. Sherry…

    It seems to me that you are confusing a (written) test plan with (written) test results and a (written) test report.

    It’s important, I’d say, to remember that documents represent ideas, plans, and activities. When someone says “that should be documented”, it’s a good idea to change that to “that should be documented if and how and when and to the degree that it suits our purposes.”

    As for your suspicion that something didn’t get tested appropriately: if you don’t believe the person, why would you believe a document written by that person?

    —Michael B.

  2. A test plan, in the past, was your proof of what you planned to test and why. Without it, who knows what was really tested? Maybe everyone just went for coffee, ran a few automated tests, poked at a few buttons, and just like magic we have 100% coverage. Of course, no one actually read them until something blew up.

  3. Thanks for the feedback, Kobi. You’re right, there’s a difference between approaching a test procedure and a test plan. Do you feel that for test procedures/test cases, gaining a deeper understanding of the system would force you into more of a confirmatory testing approach? It’s my (very humble) opinion that the less we know, the more we’re able to test the limits of the product, because we’re trying to feel out what it’s capable of. It could just be my own novice level of experience and not being able to perfectly silo an exploratory approach from a confirmatory approach. I also feel that for my own particular use case (digital agency work producing CMS websites vs. product development), I have a good idea of what some of the procedures and test cases are, since a lot of them are consistent across projects (how are input fields reflected on the front end? how does a page respond in a mobile view? etc.). I do like the idea of reflecting on system requirements early on as a form of QC, but I run into the same exploratory vs. confirmatory question there. I’d love to hear your take on that.

    I don’t appear in the STC Blogs RSS feeds, but I’ll look into it. Thanks for the heads up!

  4. Thanks for the support! I feel like some of the terms and approaches surrounding testing can be nebulous at times, and adding a “formal” modifier to “test plan” would certainly help put things into a clearer context. If I did that for the title, though, it wouldn’t sound as catchy!

  5. I think before replying to your question we need to clarify the terms – many would consider what you described a Test Procedure, while by Test Plan we mostly refer to the higher-level definition of the testing project – i.e., defining the Test Strategy, Risks, Resources, Timelines… These days those are mostly managed using tools other than the MS-Word test plan format of the past (well… except for the most important part, which is normally neglected these days – the Test Strategy).
    As for Test Procedures / Detailed Test Cases – I believe their benefit comes mainly when we elaborate just a sample of them to better understand what we are up against:
    1. Elaborating just one out of a group of similar test cases allows us to better define what resources we need, whether there are any Design-for-Testability items we lack, etc.
    2. That same elaboration causes us to reflect more carefully on the System Requirements – so writing them earlier on improves the depth and quality of the feedback we can give on those, thus reducing the number of issues raised later (or too late, should I say?).

    (And one last word of advice – do you appear in the STC Blogs RSS feeds?)

    @halperinko – Kobi Halperin

  6. A test plan is the set of ideas that guide your test project, so if you’re even thinking about testing on a project, you already have a test plan. You have not given up a test plan. What you have given up is a *formal* test plan or a *calcified* test plan, which seems to me to be a fine thing to do.

  7. Great insight. I try to insert myself into the project early on, participating in design reviews and dev kickoff to get some QC in before QA. I think there’s some value in approaching the project with a fresh, unbiased perspective, though. Perhaps that’s where a two-person QA approach could come in handy – one person who knows the intimate details and requirements and can do confirmation testing for the most part, while a second person comes in for a sort of usability testing. But back to what you were talking about: planning giving you an opportunity to think about your approach is certainly a benefit. It sounds like you’re of a similar mindset in minimizing “wasted” time, and, like you said, trying to reduce the process to what it was really trying to accomplish. Thanks for the feedback!

  8. To my way of thinking, Agile has mostly done away with the test plan as it traditionally existed. For me, the core benefit of the test plan wasn’t so much that it gave you a map of the types of testing you were going to do, the resources you would put into play, and the test cases you would perform (when you’re planning all your activity a year out, there are certainly benefits there). For me, the one important part of a test plan was that it gave you dedicated time to sit down and think about what you were about to test, the impact the changes being made were going to have, and how you could best test that to try to guarantee quality. It also gave you a chance to figure out the touch points where you needed to be working with development to make things more testable, or areas to watch out for.

    But with Agile you’re replacing some of these activities. As long as you’re getting involved in sprint planning, you’re having the discussions and going through the thought processes to understand where your risks are. You’re picking the types of testing you should focus on and understanding the areas that are at greater risk. You’re still doing test planning; you’re just doing it document-light. In a similar way, when you use a context-based approach you’re still giving yourself a chance to think about what you’re looking at and testing, and to figure out the best approach and coverage. You’re still doing a form of risk analysis even as you’re testing. The one thing I think you need to watch out for, from what you wrote, is that if there is extra risk in a released set, i.e. it warrants some specialized testing or extra regression, you have a way of capturing that.

    Otherwise, the core goal of the test plan is reducing your risk, and I think you’re still figuring out efficient ways to cover that. As with all processes, you’re going to analyze and refactor, try to reduce the process to what it was really trying to accomplish and what risk it was really trying to mitigate, and ensure that you’re still covering that in some way.

    Sorry, I got long-winded there.
