Quality narratives are a better form of reporting 

If you’re asked, “How’s dinner, is it good?”, have you ever responded, “I am 14% complete with dinner”? Of course not, because the natural way to respond is with a quality narrative. So why do we do this with test reporting?

The state of test reporting

Google “software test reporting” and what do you find? Images of spreadsheets and pie charts showing metrics-based test reporting. These spreadsheets usually show some form of test progress: the number of tests run or the number of tests passed/failed.

Fig 1: Searching Google Images for “test reporting” shows metrics-based reporting.

But how useful are these types of reports? They show that testing activity is happening, but they don’t tell us anything about quality at all. If I say that 2 tests have failed, that might look bad, but what if those failures are “it doesn’t work when I turn off my computer” or “the performance of the endpoint isn’t 0.00003 ms” (i.e. things that we don’t care about)?

Historically, testing has used quantitative reporting, data expressed as numbers or graphs, to say “when we hit 100% of these planned tests we have achieved quality”. The reporting has been about how quickly we can reach that planned end goal, rather than about discussing and making decisions on quality; these reports are a project management planning tool, not a testing tool.

Fig 2: Our test reports really just give a view on “when can we release” not “should we release”.

We can fall into the trap of feeling that numbers are more scientific than words. If we express our testing as a pie chart or as a percentage it feels all sciency and therefore must be listened to, right? RIGHT??? We (and our teams) can take comfort in reporting that looks meaningful at a superficial level rather than actually engaging with the content.

Fig 3: Some charts, they don’t mean anything but they sure look like they do.

In my experience those reports get glanced at as evidence that “oh, testing did something”, but not engaged with for decision making. Maybe someone will look to confirm that there’s some green on the chart, or follow up for an estimate on when testing will be finished… but nobody asks “okay, but how good is this?”

As we shift away from waterfall and towards embedding testers into teams, test reporting moves away from “have you done something / when will it be done?” given to a PM and towards “what have we done and what have we seen?” given to the team.

Quality narratives

Let’s go back to the question I asked at the beginning of the post, “How’s dinner, is it good?” If I were asked this question I’d probably say “It’s really good, the chicken is cooked well and I love how crispy the potatoes are… but the gravy is a little thin for me”.

Notice how I give feedback on what I’ve seen, both good and bad? That’s a quality narrative!

Fig 4: A roast chicken dinner.

A quality narrative is a qualitative report, expressed in words, on how good something is. It allows us to communicate a more in-depth and thorough understanding of a topic. Think of it as a statement or conversation that really focuses on the what:

  • What have I tested? I looked at the /puppies endpoint to pull back and edit the list of puppies in the database, using GET, POST, PUT and DELETE requests with valid JSON and different data.
  • What did I see? The different request types all looked good and worked with expected payloads; however, when I used special characters in any field I got a 500 error response (there’s a sketch of this check after the list).
  • What does this mean? If we have any users with double-barrelled puppy names, or honourifics in names like Mr. Fluffy-Bottom, then they’ll get errors back from the system.
  • What should we do? Is this something we want to fix? I can raise a bug for that, or if we want to know more about it I can test around this some more today.
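
To make that concrete, here’s a minimal sketch of the kind of check described in the narrative above, written with pytest and requests. The /puppies endpoint comes from the example; the base URL, field name and test data are assumptions for illustration.

```python
# A minimal sketch of the check described above, using pytest and requests.
# The /puppies endpoint comes from the post; the base URL and "name" field
# are illustrative assumptions.
import pytest
import requests

BASE_URL = "http://localhost:8080"  # assumed local test instance

@pytest.mark.parametrize("name", [
    "Rex",                 # plain name, expected to work
    "Fluffy-Bottom",       # double-barrelled name
    "Mr. Fluffy-Bottom",   # honourific with a full stop
])
def test_create_puppy_handles_special_characters(name):
    response = requests.post(f"{BASE_URL}/puppies", json={"name": name})
    # The narrative reported a 500 for special characters; this assertion
    # fails (and so flags the bug) whenever the server errors out.
    assert response.status_code < 500, f"Server error for name {name!r}"
```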

That’s a lower-level report of specific testing, but we can also bring this higher, to something like a smoke test of a system:

The system overall looks stable and is responding well with the only error being that users cannot use integrated DogeCoin payments. We could demo this to the customers and talk through the payment limitations.

  • Managing a list of puppies (Create / Read / Update / Delete) – Looks good ✅
  • Reviewing puppy backstories – Looks good ✅
  • Puppy shop interface – Looks good ✅
  • Puppy shop checkout – ⚠️ Issues seen ⚠️ (Checkout fails when paying with DogeCoin)
  • Emailing service – Looks good ✅
  • Look & Feel – Looks Good ✅ (All UI pages comply with AA accessibility standards)
  • Performance – Looks Good ✅ (All responses at 4G network seen to respond <4 seconds)

Because I’ve given feedback on what I’ve tested and what I’ve seen, the team around me can understand the quality of what we’re working on. More importantly, they can then make decisions based on what they now understand about quality (do we fix things? do we release it? do we do more testing?).

When can we use quality narratives?

We can use quality narratives as a way of talking about testing and quality whenever we’re asked for a report.

We can use narratives alongside metrics or coverage reports, keeping the comfort of the old ways of reporting whilst giving additional context. I favour sending Slack messages that give a narrative to the team, then linking to additional test notes, bug lists and metrics reports for people to drill into if they need them.

Fig 5: Example narrative-based test report in Slack.
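
If you want to automate that step, here’s a minimal sketch of posting a narrative to a Slack incoming webhook; the webhook URL is a placeholder and the message text reuses the example narrative from earlier. Slack incoming webhooks accept a JSON payload with a "text" field.

```python
# A minimal sketch of posting a quality narrative to Slack via an incoming
# webhook. The webhook URL below is a placeholder, not a real endpoint.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

narrative = (
    "*Testing update: /puppies API*\n"
    "All CRUD requests work with expected payloads, but special characters "
    "in any field return a 500 error. Test notes, bug list and metrics "
    "reports are linked in the thread."
)

# Slack incoming webhooks accept a JSON payload with a "text" field.
response = requests.post(WEBHOOK_URL, json={"text": narrative})
response.raise_for_status()  # surface any HTTP error from the webhook
```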

Let’s move away from giving updates that say “I’m doing something” towards reporting on quality. Let’s use quality narratives!