Test Plan? We don’t need no stinkin’ test plan!

It’s been a long time! I shouldn’t have left you without a dope blog to read (to). Kudos to whoever picks up on that reference. Moving right along…

Life happens and sometimes you miss opportunities to write another blog post, but I felt compelled to add another chapter to my chronicles of testing. I’m feeling a little feisty, so I thought I’d cover a “controversial” topic that ties into my last entry (context-driven testing). With that introduction, here we are, talking about test plans. Or rather, the lack thereof. Hear me out.

When I first started my QA journey, I wanted to replicate as much formal QA procedure as I could. Among those procedures was the test plan – what it is, how to make one, what needs to be included, what results you need to glean from it, etc. In the beginning this made total sense, because how else are you supposed to know exactly what you need to be checking? By referencing design comps and functional requirements, you knew each feature of your product and how it should behave. Simply make a list of each feature and its expected behavior, add some test cases for browsers and devices, and you’re golden! However, I soon discovered that it was an exercise in futility.


I used to spend lots of time creating test plans and populating them with test cases, dreaming of how complete they were and how I was going to catch every damn bug that crept around. But when it came time to actually apply the test plan, two things became evident:

  1. You’ll never create a test plan that covers everything
  2. You’ll spend a lot of time keeping your test plan up to date

Now keep in mind that most of the projects I work on are web development projects, mostly CMS websites, so it’s not like product development, where you’re testing the same product release after release with new features layered on. Sure, there are considerations that carry over from project to project, but those are few compared to the testing needed for the rest of the site. So, I spent a lot of time creating a framework that ultimately wasn’t as useful as intended. Not only was its usefulness limited, but creating and maintaining test plans took up a lot of time that could have otherwise been used for testing. As testers, we all know we’d like every hour we can get to test without shortchanging the results. And to top it off, since the completed test plan wasn’t a client deliverable, we didn’t have a use for the results beyond chucking them in the trash. Every reported issue (bug, improvement, new feature, etc.) was in Jira, and whether or not the issue was addressed before launch, everything lived in Jira and the test plan was never referenced again.

So, I gave up the test plan. I gave it up not only because of its lack of utility, but also because of the limitations it placed on my approach to testing. When I latched onto the idea of context-driven testing, I embraced the idea of approaching each project with fresh eyes. With that comes the idea that you shouldn’t limit yourself to confirmatory testing; instead, you’re testing the limits of the product. (Writer’s note: I know confirmatory testing isn’t exclusive to non-context-driven approaches, and that doing context-driven testing doesn’t mean you never do some form of confirmatory testing – I’m simplifying for brevity.) Each product we made was different, and there were considerations I needed to make for each one that I couldn’t have conceived of during design and functional requirement reviews. As a result, I freed myself from the shackles of test plans and instead relied on my own wits and experience with testing CMS websites.

A couple related side notes:

Side note 1: I’ll still make a test plan if the SOW calls for it. But if it’s not a deliverable, I’m not going to waste the project’s time.

Side note 2: I want to briefly talk about how I approach each new project. I have my Jira tickets that address each of the main sections, functions and features of the website, and that’s what keeps me on the path to reviewing each aspect of the product. As I go through each ticket, I approach it within one of four contexts: Design, Functional, Browser, Device. By taking each of these four considerations into account, I’m able to do a comprehensive review of the website, CMS, and usability that stays within the time and budget allocated to testing.
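Yes, I see the irony in sketching out the thing I just swore off, but purely to illustrate the cross-referencing idea: here’s a minimal sketch in Python of what crossing tickets against the four contexts looks like. The ticket names and checks are all made up for illustration – this isn’t a tool I actually use.

    # A toy sketch of the "four contexts" approach: cross-reference each Jira
    # ticket against Design / Functional / Browser / Device checks to get a
    # lightweight coverage checklist. All names here are hypothetical.
    from itertools import product

    CONTEXTS = {
        "Design": ["matches comps", "responsive breakpoints hold up"],
        "Functional": ["happy path works", "validation and edge cases"],
        "Browser": ["Chrome", "Firefox", "Safari"],
        "Device": ["desktop", "tablet", "phone"],
    }

    # Hypothetical tickets covering the site's main sections and features.
    tickets = ["HOME-12: hero carousel", "FORM-07: contact form"]

    def checklist(tickets, contexts):
        """Yield (ticket, context, check) triples - the entire 'plan'."""
        for ticket, (context, checks) in product(tickets, contexts.items()):
            for check in checks:
                yield ticket, context, check

    for ticket, context, check in checklist(tickets, CONTEXTS):
        print(f"[{context:>10}] {ticket} -> {check}")

The point isn’t the code; it’s that the whole “plan” is a cross-product small enough to keep in your head, so there’s nothing to maintain when the site changes – you just work the tickets.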

Now for the part where I address “known unknowns” – my own limited knowledge and possible misapplication of QA process. From the many articles and blog posts I’ve come across, there are plenty of discussions about test plans and how to apply them, but I haven’t found any benefits convincing enough to justify the time and effort of creating and maintaining one. Am I missing something? My first objective on each project is ensuring functional and design integrity, but a close second is doing so within time and budget. Coming from a PM’s perspective, I think I take a little more care with that second objective than a typical tester might (a subjective opinion, I know). But from a purely testing point of view, I will do my damnedest to cover as much as I can with the time I have. With that, I don’t want to waste time on plans; instead, I focus on my four contexts and cross-reference them. I like structure (which my contexts provide) and I like flexibility (which not following a test plan provides). But as I continue to apply QA methodology, I’m willing to listen to more experienced testers and managers to make sure I’m not missing a point, and to get insight into anything I may have overlooked or misapplied.

And finally, the part where I want to hear from you. What’s your experience working with test plans? I understand the need from a product point of view, where the test cases stay fairly similar and serve as a platform for regression testing. But what about from a digital agency point of view? With each website, campaign and mobile app having different standards, can you still make an argument for creating and maintaining a test plan? If a completed test plan doesn’t serve any purpose beyond being a test map (one you could keep track of in your head), is it still worth it? Talking ROI here, folks. Let me know!

Thanks for reading, fine folks of finding fixable functions! Until next time, happy testing!
