A project manager test report


A PM test report structure

When you prepare a test report, think a bit about why you're doing it. Is this test report for your eyes only? If so, you probably don't need to read the rest of this article. Otherwise, ask yourself who the "end-user" is.

In my case, the main consumers of test reports were project managers (PMs), and they complained about the following:

  • Lack of a unified template approved within the organization.
    As a result, testers had to reinvent the structure each time, and PMs had to get used to different styles, formatting and content structure from project to project.

  • The test reports didn't describe what had actually been tested.
    There was no information about the solution's components and their versions, API versions, configuration settings, etc.

  • Listings of failed tests contained no explanation of why the tests failed.
    Or the explanations were unclear to people outside the testing context.

  • All test artefacts, such as test case results and automation tool output, were part of the test report itself.
    Because of that, some reports were extremely bloated and hard to get through.

  • Missing information about how automated tests had been run.
    So successors might not have the information they need to re-run the same tests.

  • It was unclear at first sight whether a solution was ready for release.
    PMs didn't want to study the whole test report to assess the readiness of the software under test.

After several iterations of refining the test report template, we came up with the following content structure.

The test report begins with a conclusion: a one-sentence statement saying whether the product is ready for release or not.
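
For illustration (the product name and version here are made up), such a statement can be as short as: "Build 2.4.1 of the ordering service is ready for release; the remaining open defects are minor and do not block the main user scenarios."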

I know some software testing experts, such as James Bach, suggest that testers should not sign off to approve the release of a product. According to them, a project manager decides when to release, while the job of a tester is to provide the best data for making that decision. I rather agree with that approach, but our PMs requested that the test leader also give his or her opinion about readiness for release.

Then we list the solution's unfixed defects and issues found during the testing phase. Explanations are given to support the conclusion: whether these issues are showstoppers, or whether they don't significantly affect the work of the system.
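
A hypothetical entry (the defect ID and description are invented) might read: "BUG-214: PDF export fails for reports longer than 500 pages. Not a showstopper: the affected reports can still be exported to CSV, and a fix is planned for the next maintenance release."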

Next, we describe the software under test so that readers can understand what was tested: which components and their versions, deployment details, and any third-party software integrated with the solution.
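
For example (all names and versions here are invented): web client 3.2.0, REST API v2, PostgreSQL 14, deployed on two application nodes behind a load balancer, integrated with a third-party payment gateway in sandbox mode.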

PMs aren't usually interested in comprehensive test result details, so we start here with numerical stats: how many tests passed, how many failed or were skipped. After that comes a list of bug IDs for the failed tests, along with an explanation for the skipped ones. This section is repeated for each type of testing conducted (e.g. smoke, functional, or performance).
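
With made-up numbers and bug IDs, the summary for one testing type could look like this:

  Functional testing: 124 passed, 3 failed, 2 skipped
  Failed: BUG-211, BUG-214, BUG-219
  Skipped: 2 tests require the payment sandbox, which was unavailable during the run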

Although the test environment and test tools may not be that important to a manager, they are still worth including in the test report. Here we capture essential information about the test environment (both its software and hardware parts), as well as the test framework and tools used during testing. In addition, it is useful to note how each tool was used and with which configuration. Later, this information becomes important for people who want to reproduce the testing in another environment.
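
As a hypothetical example of the level of detail that helps with reproduction (the tool, versions and paths are invented):

  Environment: Ubuntu 22.04 VM, 4 CPU / 8 GB RAM
  Test framework: pytest 7.x
  Command used: pytest tests/functional --junitxml=reports/functional.xml
  Configuration: pytest.ini from the project repository, default settings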

Finally, the appendixes section contains links to external documents the testing team has used. These may include the test plan, traceability matrix, test tool configurations, test listings, etc.

Do you find this structure meaningful? What does your test report usually look like?

