Test Plan? We don’t need no stinkin’ test plan!

It’s been a long time! I shouldn’t have left you without a dope blog to read (to). Kudos to whoever picks up on that reference. Moving right along…

Life happens and sometimes you miss opportunities to write another blog post, but I felt compelled to add another chapter to my chronicles of testing. I’m feeling a little feisty so I thought I’d cover a “controversial” topic that ties into my last entry (context-driven testing). With that introduction, here we are, talking about test plans. Or, the lack thereof. Hear me out.

When I first started my QA journey, I wanted to replicate as much formal QA procedure as I could. Among those procedures was the test plan – what it is, how to make one, what needs to be included, what results you need to glean from it, etc. In the beginning this made total sense, because how else are you supposed to know exactly what you need to be checking? By referencing design comps and functional requirements, you knew each feature of your product and how it should behave. Simply list each feature and its behavior, add some test cases for browsers and devices, and you’re gold! However, I soon discovered that it was an exercise in futility.

I used to spend lots of time creating test plans and populating them with test cases, dreaming of how complete they were and how I was going to catch every damn bug that crept around. But I found out that when it came time to actually apply the test plan, two things became evident:

  1. You’ll never create a test plan that covers everything
  2. You’ll spend a lot of time keeping your test plan up to date

Now keep in mind that most of the projects I work on are web development projects, mostly CMS websites, so it’s not like product development where you’re testing the same product with new features each release. Sure, some considerations carry over from project to project, but those are few compared to the testing needed for the rest of the site. So, I spent a lot of time creating a framework that ultimately wasn’t as useful as intended. Not only was its usefulness limited, but creating and maintaining test plans took up a lot of time that could have otherwise been spent testing. As testers, we all know we’d like as much time as we can get to test without shortchanging the results. And to top it off, since the completed test plan wasn’t a client deliverable, we had no use for the results beyond chucking them in the trash. Every reported issue (bug, improvement, new feature, etc.) lived in Jira, and whether or not it was addressed before launch, the test plan was never referenced again.

So, I gave up the test plan. I gave it up not only because of its lack of utility, but also because of the limitations it placed on my approach to testing. When I latched onto the idea of context-driven testing, I embraced the idea of approaching each project with fresh eyes. With that comes the idea that you shouldn’t limit yourself to confirmatory testing; instead, you’re testing the limits of the product. (Writer’s note: I know confirmatory testing isn’t exclusive to non-context-driven approaches, and that practicing context-driven testing doesn’t mean you never do some form of confirmation testing; I’m just making a point for brevity.) Each product we made was different, and there were considerations I needed to make for each one that I couldn’t conceive of during design and functional requirement reviews. As a result, I freed myself from the shackles of test plans and instead relied on my own wits and experience with testing CMS websites.

A couple related side notes:

Side note 1: I’ll still make a test plan if the SOW calls for it. But if it’s not a deliverable, I’m not going to waste the project’s time.

Side note 2: I want to briefly talk about how I approach each new project. I have Jira tickets that address each of the main sections, functions, and features of the website, and that’s what keeps me on the path to reviewing each aspect of the product. As I go through each ticket, I approach it within one of four contexts: Design, Functional, Browser, Device. By taking each of these four considerations into account, I feel I’m able to do a comprehensive review of the website, CMS, and usability that remains within the time and budget allocated to testing.
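For the curious, here’s a rough sketch of what that cross-referencing can look like written out. To be clear, I don’t actually generate this with code, and the ticket names are made-up examples; it just illustrates crossing every ticket against the four contexts:

```typescript
// Cross each Jira ticket (a section or feature of the site) with the four
// review contexts to enumerate every review pass. Ticket names are
// hypothetical examples, not from a real project.

type Context = "Design" | "Functional" | "Browser" | "Device";

const contexts: Context[] = ["Design", "Functional", "Browser", "Device"];
const tickets = ["WEB-101: Homepage", "WEB-102: Contact form"];

// Every ticket gets one review pass under each context.
const reviewPasses = tickets.flatMap((ticket) =>
  contexts.map((context) => ({ ticket, context, done: false }))
);

reviewPasses.forEach(({ ticket, context }) =>
  console.log(`${ticket} – review under the ${context} context`)
);
```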

Now the part where I address “known unknowns” – my own limited knowledge and possible misapplication of QA process. From the many articles and blog posts I’ve come across, there are plenty of discussions about test plans and how to apply them, but I haven’t found benefits convincing enough to justify the time and effort of creating and maintaining one. Am I missing something? My first objective on each project is ensuring functional and design integrity, but a close second is doing so within time and budget. Coming from a PM’s perspective, I think I take a little more care with the second objective than a typical tester would (a subjective opinion, I know). But from a purely testing point of view, I will do my damnedest to cover as much as I can with the time I have. With that, I don’t want to waste time on plans; I’d rather focus on my four contexts and on cross-referencing them. I like structure (which my contexts provide) and I like flexibility (which not following a test plan provides). But as I continue to apply QA methodology, I’m willing to listen to more experienced testers and managers for insight I may have missed or even misapplied.

And then finally, the part where I want to hear from you. What’s your experience working with test plans? I understand the need from a product point of view, where the test cases are pretty similar and serve as a platform for regression testing. But what about from a digital agency point of view? With each website, campaign and mobile app having different standards, can you still make an argument supporting the creation and maintenance of a test plan? If a completed test plan doesn’t serve an actual purpose other than as a test map (that you could keep track of in your head), is it still worth it? Talking ROI here, folks. Let me know!

Thanks for reading, fine folks of finding fixable functions! Until next time, happy testing!

Wax Philosophical with Context (-Driven Testing)

It’s a snow day in Portland, OR. Like, a legitimate snow day. We’ve had about three of these so far this season, but this is the only one of substance – 8 inches of snow in most areas. This is problematic because people of the Pacific Northwest don’t typically know how to drive in the snow for some reason, roads aren’t salted, and snow plows are few and far between. Because of Snowpocalypse Episode IV, I’m unable to be in the office and work on testing a location-based technology project. Very unfortunate, since it’s a pretty neat learning experience, but the upside is that you guys get to hear about the next part of my testing story – developing a QA philosophy and strategy. Please, try to keep your excitement contained.

One of the many gaps I found when researching QA practices was guidance on how a digital agency should approach testing. As I alluded to in my previous post, I had an idea of what to test (Does it look right in all browsers? Does it respond on mobile? Does it work on older browser/mobile versions?) but didn’t know what standards to establish as the bedrock of testing. From this, I figured that instead of focusing narrowly on standards, I should take a step back and look for a general approach. I searched high and low and found many approaches that would work for product-focused development, but not for our team. Since we develop a variety of web- and app-based projects, a single rigid approach is too limiting. What I found most applicable to my company’s development cycle is Context-Driven Testing.

When I came across Context-Driven Testing (CDT), I saw the clouds part, the sun emerge, and angels singing from the heavens. While it’s not a defined approach that provides standards and such, it’s flexible enough to be applied to all of my company’s projects. Per CDT’s home page: “Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply ‘best practices,’ we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.” I found this very important because of the different types of products we produce. For example, campaign, static HTML, and CMS websites all require different considerations. With new languages, technologies, and use cases being developed every day, the context will evolve. CDT allows our testing approach to remain responsive to a project’s needs while still applying knowledge of past experiences and techniques learned along the way.

For your own edification, here are the seven basic principles of the Context-Driven school:

  1. The value of any practice depends on its context.
  2. There are good practices in context, but there are no best practices.
  3. People, working together, are the most important part of any project’s context.
  4. Projects unfold over time in ways that are often not predictable.
  5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
  6. Good software testing is a challenging intellectual process.
  7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

One of the biggest takeaways for me regarding Context-Driven Testing is its rejection of the notion of best practices, and its position that good testing is a matter of skill, not procedure. By rejecting best practices, a tester isn’t limited to the narrow focus of a procedure and can start with a broader focus before narrowing the scope to what works for a given test case. Essentially, a skilled tester knows what to look for based on experience and intuition. CDT says there are good practices, but no best practices. Best practices may work for a product whose test cases remain relatively consistent over time, but they’re not as applicable to the varied projects and implementations of a digital agency.

I’m also a big fan of the idea that a tester’s skillset and interaction are more important than any process or the tools by which feedback is transmitted. Because a tester is aware of the context in which they’re testing, their feedback trumps more process-based feedback because it paints a better picture of why the feedback matters. I see it as the difference between saying “this is what is happening” and “this is what is happening, and it’s important because…” So: more details, more context.

Another thing I like about CDT is that it values working software over comprehensive documentation. This relates to a few different things: the need to produce (or not produce) testing documentation, framing what’s really important (a working website), and, in the case of my company, the idea that a one-and-done project doesn’t require a test plan to reference in the future. All three are essentially woven together because a) there are a finite number of hours available for testing, b) I’d rather spend time testing than creating test plans and adjusting them as new considerations are discovered, and c) since the majority of projects won’t require much more documentation than a style guide (created by the development team), a completed test plan won’t provide any additional value to the project down the road. When I first started formally doing QA, I created test plans and cases because I thought that’s what you do. But as time went on, I determined that the real value is a product free of defects. In my opinion, the time is best spent testing, though I’m open to generating test plans for larger projects that will remain in-house on a retainer, will be iterated on and require test cases for regression testing, and, more or less, are requested from the beginning.

So, Context-Driven Testing is what anchors my philosophy of testing. Of course, this may change over time, but it’s currently what works best for the company. I should also add that CDT is only one part of the overall philosophy – the part that covers testing itself. I also firmly believe in prevention before things reach development (QC!), like being active in design reviews and development kickoffs, but perhaps that can be explored in a future edition.

Now it’s time for me to turn things over to you guys. What do you think of Context-Driven Testing? Is it something you know much about? Is it something you’ve practiced? Are there any processes you feel are better suited for a web development/digital agency environment? I’d love to hear from others so I don’t limit myself to one ideology just because it seems to make sense.

That’s it for now, folks. In the meantime, I feel obligated to teach some of the Portlanders how to drive in the snow, but that’s none of my business…

Project Manager QA, Or: How I Learned to Stop Worrying and Become a Formal Tester

Hello again, everyone! I’m starting the new year by getting some actual substance into this blog beyond the “getting to know you” stuff from the initial post, and from this point on I hope to be posting somewhat regularly (as in, perhaps every other week). I have a lot of topics I’d like to cover, but fleshing them out with articulated ideas and a clear goal for each post is something I’m still working on. I’m still getting the hang of this, so please bear with me.

So, the lead-in… As stated in my previous post, I’m a Project Manager by trade. Before my company had a formal QA department, testing was something a PM did for their own projects. There was no formal training on how to properly test, so each PM more or less did what they thought was necessary to perform a thorough test based on their previous experience. Sometimes this was just eyeballing comps, with a bit of time spent in the CMS if there was one. Though each PM had prior experience with testing, Emerge lacked a standard, cohesive testing plan. That’s where I stepped in.

After working as a PM for more than six years, and with my growing interest in testing, I approached my boss to see if it was possible for me to transition into QA. This was scary and exciting at the same time. I’d had no formal experience as a tester, and all I’d really known for the past handful of years was being a PM, but it was worth a shot. It turned out the company was ready to further normalize the development process and establish a formal QA department, so I had the good fortune of being in the right place at the right time. I’m forever grateful for the opportunity.

So here I am, tasked with developing a formal QA department. Without any formal training, I didn’t quite know where to begin. I had a notion of what it took to test, but I didn’t know what “standard practice” should look like. The most obvious starting point was a simple Google search for “how do I establish a QA department,” which I hoped would provide insight into which procedures to implement. However, it returned mixed results that ultimately didn’t help. There were quite a few results geared toward product development companies, as well as general ideas about how to approach setting up a department, but not much valuable information about what I needed in particular. I tried searching different terms to yield more relatable results, but I ultimately ended up taking pieces of different ideas and putting them together into what I thought would work for the company as a digital agency. By that, I mean I avoided articles stating no-brainer ideas like “you must have a kickoff meeting at the start of the development phase,” and gained insight from articles that said things like “use these strategies when testing” or “these are things that functional requirements should include.” It was great to find articles that not only talked about ideas I already had about testing, but also covered related topics I was unaware of.

Quite a few of the resources I came across ended up being informative without being extremely pigeonholed in content. The ones I took the most ideas from were Ministry of Testing, Satisfice, and the Association for Software Testing. Once I was able to identify specific things to explore within the general ideas, I felt I was on the right path to getting the depth I needed.

To keep this from getting too long, and leaving some things for me to write about in the future, I’ll give a short list of things I started to round out:

  • Developing a QA handbook – knowing what to standardize in the company
  • Testing philosophy – what testing is, and what it isn’t
  • Developing test plans – knowing when to use and not use them
  • Test approach – exploratory vs scripted testing (see the sketch just after this list)
  • Areas of testing – function, design, browser, device
  • Types of tests to perform – unit, integration, performance, etc.
  • Why confirmatory testing is the worst type of testing
  • Establishing procedure – knowing when to take part in the development process

There are quite a few other topics that shoot off from here, but the list gives you a general idea of what I set out to explore.
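Since exploratory vs scripted testing is on that list, here’s a minimal sketch of what the scripted end looks like. The validateEmail function is a hypothetical stand-in, not code from one of our projects; the point is just that a scripted check pins down its inputs and expected results ahead of time:

```typescript
// A scripted check: inputs and expected results are decided before the test
// runs. validateEmail is a hypothetical stand-in for real site code.
import assert from "node:assert";

function validateEmail(input: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);
}

assert.strictEqual(validateEmail("user@example.com"), true);
assert.strictEqual(validateEmail("not-an-email"), false);
console.log("Scripted checks passed");

// Exploratory testing, by contrast, has no predetermined script: you poke at
// the form interactively and let each result suggest the next input to try.
```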

As it stands now, I still feel like there’s much to learn. I still have this lingering idea that I’m missing something in my overall approach and process, but I haven’t been able to put my finger on it just yet. I’m not sure if it’s me simply accounting for the fact that humans are fallible, or if my own naïveté has caused me to miss a big idea. The good news is that after two years of running QA, neither I, the production team, nor our clients have identified any problems in the process, and all of our projects have launched without major bugs. Win! I know things aren’t perfect, though, so I’ll continually strive to learn more and increase the productivity and effectiveness of the QA process.

So that’s it for this edition, but now I turn things over to you, good people of the internet. What has your experience learning the QA process been like? Have you ever had to establish a QA department? What about establishing one from scratch (as in, when you didn’t even know about QA)? Please share your experiences and let me know if there’s something you think I should consider for my department.

In the next edition, we’ll talk about Context-Driven Testing, a favorite topic of mine.


Disclaimer:

I’m not a writer. Well, not a great one, at least. Let’s just say I’m not a storyteller. I can convey what I want to say, but it may not be all glitz and glam. Let’s just keep the bar low and make sure we’re on the same page here, m’kay?