Tuesday, January 20, 2009

Testing in the open source world. Final Part

In the last two instalments I discussed the collection of requirements, the creation of test cases based on those requirements, and the general setup of a test suite. In this, the final post in this series, I will cover execution of the cases, defect entry, and overall project quality.

Getting Started
Now that we have requirements collected and numbered, and cases written and mapped to those requirements, we are ready for execution. The first and most important point to make about executing written cases is to view them as a guide, and not as an exact representation of the tests to be performed. In other words, be creative! You must test the cases as written, but you should also experiment with other tests around each case.


The best way to make this point is to provide an example.

1.1 Select "Featured Content" to see content we highlight that changes on a regular basis
1) Launch Vuze
2) Open Featured Content area
3) Confirm Featured Content area is populated with links to various resources by default

So we have a case to check that the Content area is populated with default links. This seems fairly straightforward and easy to verify. However, this is not the only testing that can, or should, be done here. The basis of great software testing is creativity. For instance, this test checks the Content area for links, but some other possible tests are:

  • Right click options for links
  • Deleting links
  • Creating links
  • Editing of links
  • Errors provided when invalid links are clicked
  • Special character support in links
  • Whether a clicked link opens a new dialog (usability)
So, from a single written case I have created seven additional functional tests. These additional cases are tightly related to the written case, but exercise features not directly referenced by it. This is an important point: the written cases ensure that testers cover the requirements for the code, but user experience and usability are often forgotten during requirement creation, so testing around the requirements becomes all the more important.

Keep the user in mind

As a tester you always need to be mindful of the specifications of the software being tested, but if you were to test only the requirements you could still end up with a bad product. Testing has an additional role, and that is user advocate.

In the end, people will be using the software, so their experience is paramount to the success of the software. Keep the user in mind as you navigate menus, receive feedback from the system, and access the various features. Do features operate in a logical flow? Is the information provided in error and warning dialogs clear to beginners?

This type of testing may require new users to actually handle the software. Beta testing is an excellent way to get a fresh set of eyes on your product. This method often uncovers usability problems that a tester, who has been on the project since the beginning, may not see.

Submitting bugs
As you test, defects will be uncovered, whether in the code itself or in the overall design of the software. Recording these errors with as much useful information as possible will ensure an efficient process and result in higher quality releases.

Open source projects use all manner of defect tracking, but all of them operate under a certain set of rules.

1) Bugs enter the system under a "new" state
2) Bugs are confirmed by a team and assigned priority and ownership
3) Bugs are fixed and placed in a queue for testing
4) Bugs are tested and marked either fixed or not
5) Bugs that are fixed make their way to release
6) Bugs that regress re-enter the system at step 2
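The six steps above amount to a small state machine. Here is a hedged sketch of that lifecycle in Python; the state names and transitions are illustrative, since real trackers each define their own:

```python
# A minimal sketch of the defect lifecycle as a state machine.
# State names and transitions are illustrative; real trackers vary.

TRANSITIONS = {
    "new":       ["confirmed"],              # step 2: triaged, owned
    "confirmed": ["fixed"],                  # step 3: fix queued for testing
    "fixed":     ["verified", "confirmed"],  # step 4: passes, or bounces back
    "verified":  ["released"],               # step 5: ships in a release
    "released":  ["confirmed"],              # step 6: regression re-enters
}

def advance(state, next_state):
    """Move a bug to next_state, rejecting any transition not in the table."""
    if next_state not in TRANSITIONS.get(state, []):
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state

state = "new"
for step in ["confirmed", "fixed", "verified", "released"]:
    state = advance(state, step)
print(state)
```

Encoding the rules this way makes the "limbo state" problem concrete: a bug can only sit in a state the table knows about, and every exit from that state is explicit.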

This is the basis for any defect tracking system. It is critical that this process be followed; otherwise bugs are missed, dropped, or forgotten. There is nothing worse from a QA standpoint than a bug sitting in your tracker in a limbo state, only to be reported to support by customers. In my opinion this is worse than a bug never found, because you actually found it first, but a breakdown in *your* process caused it to slip out to the customer.

How do we prevent this? The first step is good information gathering and recording. Always have a notebook by your side to record steps or odd behavior. Next, have a template for entering defects. I like the following one.

Build: Version/Date
Environment: Hardware Version, Model, BIOS version, Last Update


Steps to Reproduce:

Expected result:

Actual result:

Whether your tracker includes a template similar to this, or you need to paste it in every time, it is important to have a guideline so that no detail is forgotten and the engineer has the information needed to reproduce the problem.
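One way to enforce such a guideline is to treat the template as a record and refuse to file a bug with empty fields. Here is a small sketch, assuming Python; the field names mirror the template above, and the sample bug contents are entirely hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch: the bug template as a record, so missing fields are
# caught before the bug is filed. Field names mirror the template above.

@dataclass
class BugReport:
    build: str                 # Version/Date
    environment: str           # Hardware, model, BIOS version, last update
    steps_to_reproduce: list
    expected_result: str
    actual_result: str

    def missing_fields(self):
        """Return the names of any fields left empty."""
        return [name for name, value in vars(self).items() if not value]

bug = BugReport(
    build="4.0.0.4 / 2009-01-15",                       # hypothetical build
    environment="Ubuntu 8.10, x86_64",                  # hypothetical box
    steps_to_reproduce=["Launch Vuze",
                        "Open Featured Content area",
                        "Click the first link"],
    expected_result="Link opens in the embedded browser",
    actual_result="Application hangs",
)
print(bug.missing_fields())
```

A tracker hook (or just a pre-submit script) could call `missing_fields()` and bounce the report back before it ever reaches an engineer.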

Be sure to also attach any test files, changes, or additional assumptions to the bug as needed. A bug can fall through the cracks if it becomes the ball in a tennis match between test and development in a quest for enough information to reproduce the issue. Gather as much as you can ahead of time.

Quality is not just testing
As you may have already guessed, there is more to QA than just exercising a piece of code. QA process should be applied to all aspects of software development, from requirements gathering through to the distribution method. When you are involved in software QA, you need to be involved in every stage of development.

Here is a brief outline of other areas where QA process should be applied to ensure a high quality release:

Requirements gathering
Work closely with technical leads, and project management to ensure all possible hurdles are addressed before requirements are finalized.
  • How will the software be used?
  • Does this change the way the user is accustomed to using this type of software?
  • What steps can be taken to more easily acquaint the user with the software's features?
  • Are we creating or changing core technologies (impact on test effort)?
Build process
Often the process of building a binary or image can introduce errors not seen during development unit testing. Develop a repeatable process for build and release.

  • Checklist of libraries, repos, etc. required for a functional build
  • Create a knowledge base of issues encountered by the build engineer for quick reference later
  • Develop a "smoke test" script to certify each build before functional testing begins
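A smoke-test script can be as simple as a list of named checks where any failure blocks the build from entering functional testing. Here is a minimal sketch in Python; the individual checks are hypothetical placeholders, not tied to any particular project:

```python
# A minimal sketch of a smoke-test runner: each check is a named callable,
# and any failure blocks the build from entering functional testing.
# The checks themselves are hypothetical placeholders.

def smoke_test(checks):
    """Run each (name, fn) check; return the names of failed checks."""
    return [name for name, fn in checks if not fn()]

checks = [
    ("version string present", lambda: bool("4.0.0.4")),        # hypothetical
    ("release notes finalized", lambda: "TODO" not in "done"),  # hypothetical
]

failures = smoke_test(checks)
if failures:
    print("BUILD REJECTED:", ", ".join(failures))
else:
    print("Build certified for functional testing")
```

In practice the checks would launch the build, verify the installer, and so on; the point is that the script's pass/fail verdict is recorded for every build, which feeds the build engineer's knowledge base mentioned above.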
Customer expectations
Manage your customer's expectations throughout the development process. Keep them involved.

  • Easy method for submitting feature requests (ex. Brainstorm)
  • Integrate customer test cases where possible
  • Work with customer testers to focus your testing on the areas the customer deems important
  • Triage all customer bugs and provide feedback ASAP
Release process
When an internal release passes all tests and is approved for general release, you would be excused for thinking your job was complete, but it is not. Once a final image is approved and CDs/images are being burned, there are still a ton of finalization tasks to be performed.

  • Check final media/release (Install tests, version checking)
  • Final triage of bugs left in system for update scheduling
  • Creation of final test report
  • "Post Mortem" review of project quality and process
  • Archival of project materials and output
  • Ship Party planning!!
When the customer has reviewed the final release, you have reviewed your process, the project is archived, and you have a frosty ship party pint in your hand, then, and only then, has your job on the project concluded. Or has it??

In reality, even as you celebrate a successful release with your team, undiscovered bugs lurk in the code, and eventually someone will find them. Keep a good interface open with customer support and help out as much as you can in post ship testing.

Go back and review your cases so successful tests can be migrated to the next version/release. Implement some of the lessons learned from the last project. Software will always have bugs, and a process for finding them can always be improved.

This concludes the review of test process. Throughout this series we have gone over many different aspects of the test process and overall software project quality management. I hope it has been helpful to you, and leads to higher quality software.

There are many methods for managing the quality of a software development process, but the thing they all have in common is repeatability. No matter how robust your test cases may be, if they cannot be reliably reproduced then they provide little value.

Documentation is a QA department's core function. Document everything. If there is a process for something, then there should also be a process document explaining it step by step. Keep templates for every document you produce and keep them current.

Over the next months I will be expanding on some of the topics discussed in brief during this series. If there is a particular topic you would like to see covered in-depth, please feel free to drop me a note to that effect. Thanks for reading and happy hunting!