How to plan and prepare for exploratory testing


Many questions arise whenever adopting exploratory testing (ET), primarily due to misunderstandings about what it means.

The fact that exploratory testing is mainly unstructured, especially compared to the traditional scripted approach, which is highly detailed and restrictive, makes it harder to connect it with the word “plan.” Sometimes it is even dismissed as ad hoc testing.

Does all this mean that we can’t prepare ourselves for exploratory testing and establish some level of planning for it? Not really. Let’s find out.


Preparing for testing

Let's say we have a new system to test, a new feature, or an evolution of an existing one. What can we do to prepare ourselves to test better? Do we need to read all the current documentation extensively? Do we need to look at every previous ticket?

To recap one possible definition: testing is about uncovering relevant information about different aspects of quality, as seen by the various stakeholders (internal and external) who matter in our context.

Let's start by understanding which stakeholders have a significant say in quality and try to answer some questions.

  • Business
    • Which features bring in the most revenue?
    • What are the major selling points of our product?
    • Do we have different tiers? What is the difference between them in terms of the value we're delivering?
    • Which features are we discontinuing? Are we offering workarounds?
    • Do we want to evolve the product? How frequently?

  • Product development and operation
    • Is it a SaaS product or on-premises?
    • Do we have monitoring in place?
    • Are we tracking errors? And performance? And security?
    • Is the product easy to maintain? And operate?
    • How skilled is our team? How is it structured? What are the current concerns?

  • Customers & end users
    • Who are our customers/end users? What's their business?
    • What are the most used features?
    • What are the most critical features for them?
    • How acceptable are errors?
    • How acceptable are interruptions of service?

In general, we can prepare ourselves for testing in different ways:

  • Conversations
    • Start with conversations. Most information comes from people rather than from documents, which can easily be outdated. Use conversations to understand the context of your product and your team, and how the team works, to uncover stakeholders, expected users, and unexpected users.

  • Risks
    • Always have a risk mindset. Try to uncover risks in the conversations mentioned earlier, and try to understand how the team manages risks, if it does at all.

  • Look at existing information
    • Look at the product brochure and/or product website;
    • Historical builds;
    • Test automation (historical results, what type of tests we have, their nature);
    • Monitoring information.


What about exploratory testing specifics? Are there any special preparations at all?

To perform testing that gives us insights into quality, we need skills, experience, and experimentation (learning by continuously playing with the product).

As exploratory testing requires exploring the product in many different ways and learning while we do so, we need, among many others:

  • High level of curiosity;
  • Extensive knowledge about testing, techniques, heuristics, tools;
  • Courage to try the things others won't, even though they can take you down roads leading nowhere or reveal important risks;
  • Knowledge about software development, architecture patterns, and libraries to understand problems that can arise connected to them;
  • Broad knowledge about risks, especially risks in software development.

All these are essential for successful exploratory testing or, if you want, for testing in general.


Planning our exploratory testing

There are several ways of exploring our product; we can use different tours as a high-level guide for our testing journey. We can use them as a plan for what we aim to test, but there are other alternatives.

Let's take a step back and reflect.

What is our mission? Do we aim to get a bird's-eye view of the product and use testing mainly to learn? Do we seek to push for critical issues? How much time do we have? How mature is our product? And our testing? Do we have any level of test automation in place?

We'll discuss four simplified ideas to help you plan your testing without using the tour concept. Your overall plan will have a mix of these ideas and others. In the end, our plans will be materialized using charters.
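To make the idea of a charter concrete, here is a minimal sketch of how one could be captured as structured data. The fields are purely illustrative assumptions, not an Xray format or any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A lightweight exploratory-testing charter (illustrative fields only)."""
    mission: str                                   # what we aim to learn or uncover
    areas: list = field(default_factory=list)      # features/components in scope
    timebox_minutes: int = 60                      # typical session length
    risks: list = field(default_factory=list)      # risks guiding the exploration

# A hypothetical charter for one exploratory session
charter = Charter(
    mission="Explore the new checkout flow to uncover payment-related issues",
    areas=["cart", "payment gateway integration"],
    timebox_minutes=90,
    risks=["double charging", "timeout during payment confirmation"],
)
print(charter.mission)
```

The essential ingredients are a clear mission, a scope, and a timebox; whatever tool or notation you use, a charter that states those three things is enough to guide a session.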


1. Plan testing focusing on the new features

If we're iterating on our product, adding new features, and have good test automation in place, we can focus on testing the new features.

The new features may or may not have unit and integration tests, or even tests that cover the acceptance criteria.

We'll have a set of user stories with different priorities; we can use those priorities to define the order in which we test. A good exercise is to assess what a successful sprint would look like, and what an unsuccessful one would.


2. Plan testing focusing on a specific feature

For each feature, we need to understand what testing has, or hasn't, been done. Do we need to focus on what we already know, or can we look beyond it?

It's like looking at risk but more "low-level" (i.e., focused on the feature).

But then we also need to look at the risk this feature brings to the "outside world," the existing features, and the "non-functional requirements". Are we impacting performance now, for example? Or are we opening a door to affect performance in the future?


3. Plan testing focusing on risks overall

Try to make a collection of risks, discuss them with the team, and prioritize them.

Risks must be clear and detailed. Understand what's critical, acceptable, and unacceptable; don't assume; discuss with the team. Remember that testing involves finding a fair compromise.
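One simple way to make such a prioritization concrete, purely as an illustration (the risk entries and the 1–5 scales below are made up, and a team would agree on its own), is a likelihood × impact score:

```python
# Illustrative only: rank collected risks by a simple likelihood x impact score.
# Likelihood and impact use a hypothetical 1-5 scale agreed on with the team.
risks = [
    {"name": "data loss on upgrade",          "likelihood": 2, "impact": 5},
    {"name": "slow search on large projects", "likelihood": 4, "impact": 3},
    {"name": "broken export to PDF",          "likelihood": 3, "impact": 2},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest score first: these risks drive the first exploratory sessions.
ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in ranked:
    print(f'{r["name"]}: {r["score"]}')
```

The score itself matters less than the conversation it forces: the team has to make its assumptions about likelihood and impact explicit before ranking anything.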


4. Plan testing focusing on a specific type of risk

Sometimes, we want to focus on a specific risk or quality aspect. Let's use performance issues as an example of a risk.

We could discuss together:

    • What known issues related to performance do we have?
    • Are we tracking this risk somehow? Do we have performance metrics at different levels and on different components?
    • Which user flows have a higher probability of triggering this risk?
    • Which users would be most impacted by this risk?
    • Which stakeholders would be affected?
    • How would this impact the business?
    • Do we have any way of being alerted when this risk is about to materialize, or when it does?
    • Are there similar risks that we may be forgetting about?

The previous questions can support us in building our testing plan.


Elevate your testing excellence: strategies for informed exploratory testing preparation

To deliver testing that provides valuable insights to relevant stakeholders and helps teams make decisions, prepare by gaining a deeper understanding of the product, the team, the stakeholders, and the business context.

Great testers continuously improve their skills and core testing knowledge, learning new techniques and new ways to expose potential problems. They can also deepen their understanding of the product's internal architecture, how it interacts with the outside world, and how it is built, and they can learn about new tools that augment their testing efforts.

There is always some level of planning in testing. The fact that we’ll be performing testing in a more exploratory way doesn’t change much about our plan.

However, exploratory testing usually complements scripted testing (e.g., manual or automated test cases); therefore, our testing plan will be somewhat shaped by it. Suppose we have a batch of manual/automated test scripts covering sanity and regression testing. In that case, we can focus our exploratory testing elsewhere, thinking about other things that can go wrong.



