Friday, September 5, 2008

Testing in the open source world. Part 2

Intro
In my last post I discussed gathering requirements and organizing them. In this post I will move on to the next step: creating cases based on those requirements.

Getting Started
Now that we have requirements collected and numbered, we can begin creating cases to exercise the features described. So where to begin? Let's take a simple requirement from the example we used in the last post, found here:

1.1 Select "Featured Content" to see content we highlight that changes on a regular basis

To best exercise this feature there are several things to consider. The first is the name of the section where the feature lives; in this example, that is "Featured Content". So our first test case is to confirm that this area is named correctly:

1) Launch Vuze
2) Confirm main application window contains "Featured Content" area
3) Confirm spelling and grammar of "Featured Content" area is correct

The next section of the requirement is "to see content we highlight". This means there should be something there by default. So, next case:

1) Launch Vuze
2) Open Featured Content area
3) Confirm Featured Content area is populated with links to various resources by default

Along with having content available, we also need to check that the content is functioning as expected. To do this, we need to explore each of the links provided in the content area.

1) Launch Vuze
2) Open Featured Content area
3) Click each content link
4) Confirm that each link launches a service
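
If the featured items turn out to be plain HTTP links, a small script can take some of the tedium out of checking them. This is only a sketch under that assumption; the URLs are placeholders, not actual Vuze content.

```python
# Minimal sketch: confirm each featured-content link responds.
# Assumes the links are plain HTTP URLs copied out of the Featured Content
# area; the URLs below are placeholders, not actual Vuze content.
from urllib.request import urlopen
from urllib.error import URLError

featured_links = [
    "http://example.com/featured/one",
    "http://example.com/featured/two",
]

for url in featured_links:
    try:
        with urlopen(url, timeout=10) as response:
            status = response.getcode()
        result = "PASS" if status == 200 else f"FAIL (HTTP {status})"
    except URLError as err:
        result = f"FAIL ({err.reason})"
    print(f"{url}: {result}")
```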

Lastly, there is the closing part of the requirement, "that changes on a regular basis". For this one we may need to contact the developer to find out what "regular basis" means, or whether he/she can trigger a change so we can test the feature. Either way, our case will be the same.

1) Launch Vuze
2) Open Featured Content area
3) Click each content link
4) Confirm that each link launches a service
5) Close Vuze
6) Trigger content shift or wait X hours for normal system cycle
7) Launch Vuze
8) Open Featured Content area
9) Click each content link
10) Confirm that each link launches a service

Now, I did something a little different here. In the previous cases I exercised one feature at a time. That gets you the best results and makes tracking cases much easier, but it is also very time consuming. To save a bit of time, you can often combine cases in a logical way. For instance, in this one I combined the cycling of content with the previous case where we checked the links.

Be careful here. You really only want to combine a new case with ones you have already run. If you start combining too many unique steps, you risk losing focus on what you are actually testing, and tracking which cases have passed and which have failed becomes a nasty situation. Any time you saved writing the cases quickly evaporates when you try to collect statistics on your testing results.
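
One way to get the savings without the mess is to keep each case as a small, reusable list of steps and compose longer cases from those building blocks. A minimal sketch, with the step text taken from the cases above:

```python
# Sketch: build the combined case from the smaller cases we already wrote,
# instead of inventing new unique steps. Step text mirrors the cases above.
launch_and_open = [
    "Launch Vuze",
    "Open Featured Content area",
]

check_links = launch_and_open + [
    "Click each content link",
    "Confirm that each link launches a service",
]

content_cycle = check_links + [
    "Close Vuze",
    "Trigger content shift or wait X hours for normal system cycle",
] + check_links

# Print the composed case as a numbered step list.
for number, step in enumerate(content_cycle, start=1):
    print(f"{number}) {step}")
```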

Mapping
OK, so we have some cases now. The next step is to ensure that all requirements are covered by cases. We do this by mapping each case to the requirement it was written to explore, simply by referencing the requirement number next to the case. For the cases we wrote above, each would reference requirement 1.1. Having multiple cases for each requirement is very common. If you have several requirements with only a single case, spend a little more time thinking about how the user might interact with the feature, and I'm sure more cases will come to mind.
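
For a small plan the wiki table is enough, but the same mapping is easy to sanity-check with a few lines of script. A rough sketch, using made-up case IDs and a second, purely illustrative requirement:

```python
# Sketch: map each case to the requirement it exercises and flag any
# requirement that has no cases yet. IDs and the 1.2 entry are illustrative.
requirements = {
    "1.1": 'Select "Featured Content" to see highlighted content',
    "1.2": "A second, made-up requirement with no cases yet",
}

cases = {
    "TC-001": {"requirement": "1.1", "description": "Featured Content area is named correctly"},
    "TC-002": {"requirement": "1.1", "description": "Featured Content area is populated by default"},
    "TC-003": {"requirement": "1.1", "description": "Each content link launches a service"},
}

covered = {case["requirement"] for case in cases.values()}
for req_id, text in requirements.items():
    status = "covered" if req_id in covered else "NO CASES YET"
    print(f"{req_id} {text}: {status}")
```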

Pulling It All Together
We now have requirements, numbered and organized, and cases created for each of our requirements. The last stage is organizing these into a format that is easy to read, follow, and track. Most QA houses use a table to do this. An example of the cases created in this post can be found here.

You'll notice that there are several additional columns in the table to help with tracking. The case number, description, and date/build columns help the tester easily find the cases that need to be executed. The pass/fail, bug, and date/build columns help with tracking the state of each feature; they give the reader an idea of which areas of the program need the most work and what the major issues are in those areas.
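
If you keep an export of that table in a machine-readable form (a CSV dump of the wiki table, for example), the statistics mentioned earlier fall out almost for free. A sketch with made-up rows and the column names described above:

```python
# Sketch: pull pass/fail statistics out of a CSV export of the tracking table.
# The rows are made-up examples; the columns follow the table described above.
import csv
import io

table = """\
case,requirement,description,pass_fail,bug,date_build
TC-001,1.1,Featured Content area named correctly,pass,,2008-09-05 build-1
TC-002,1.1,Featured Content populated by default,fail,#1234,2008-09-05 build-1
TC-003,1.1,Each content link launches a service,pass,,2008-09-05 build-1
"""

rows = list(csv.DictReader(io.StringIO(table)))
passed = sum(1 for row in rows if row["pass_fail"] == "pass")
print(f"{passed}/{len(rows)} cases passing ({100 * passed / len(rows):.0f}%)")
for row in rows:
    if row["pass_fail"] == "fail":
        print(f"open issue: {row['case']} -> {row['bug']}")
```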

Put all of these together and you have a very efficient way of organizing your cases into a simple format that can be used by both technical and non-technical readers.

Next: Execution
Join me next time for the fun part! Executing the cases, finding bugs, and tracking the fixes through the software development process. Thanks for reading and Happy Hunting.

Thursday, July 3, 2008

Testing in the open source world. Part 1

Intro
Today I would like to begin a multi-part series about quality assurance testing and how it can work with open source software and projects. There are several stages to be aware of and many techniques for achieving a high level of quality and code coverage. Although I am familiar with many of these, in an effort to keep things as simple as possible, I will only talk about the most popular aspects of each stage of testing. Enjoy.

How To Begin - a.k.a Requirements
Software comes in many shapes and sizes, each with its own varying amount of documentation. I have seen notepad applications with each feature carefully explained, and huge supply chain strategy suites with no documentation at all. Our hope, as testers, is to land somewhere in the middle of these two. Some documentation is definitely preferred, but even when it is lacking, with some work we can get what we need to begin creating a structured test plan.

So, let's assume for now that we have some documentation. In the open source world this documentation usually takes the form of a FAQ or release notes from the developer. For our purposes here, let's say you are interested in contributing some testing to Vuze (Azureus). It was the number one download on SourceForge the day this article was written, and it has a rather large FAQ explaining basic functionality, found here.

Planning
We now have a requirements document (the FAQ) and can begin designing our test objectives. Depending on the scope of what we want to accomplish, we can plan to test all the functionality described, or focus on a single area of functionality. For our example, Vuze, let's pick a small subset of features to test. We will focus on the Content Search features described here.

The first document we will create is a test plan. This is a distillation of the project/application requirements. We want to create a document that we can easily reference to ensure that we have not missed a requirement during testing. There are several formats for this, and none is better or worse than another, so pick a format you are comfortable with. I prefer a collaborative testing effort and use wiki documents for this, so my examples will be based on that technology.

So let's pull some requirements out. Go here and copy the content into your requirements document. Now format it into numbered items under a heading like "1.0 - Discovering Content". You can see my final example requirement here.

It is also very important to reference the source material you used for each section of your plan. This allows the reader to move between the documents easily, and allows you to quickly track changes in requirements and make the necessary updates to all related documents.

Now just repeat these steps with as many sections of the FAQ as you would like to test. Be sure to keep incrementing the section numbers so you do not repeat a header. So "What's new about the search box" would become "2.0 - Search Box", and so on.
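
If the FAQ has a lot of sections, the numbering itself can be scripted so the headers never repeat. A small sketch, using stand-in section titles and items rather than the real Vuze FAQ text:

```python
# Sketch: turn FAQ sections into numbered requirement headings and items.
# Section titles and items are stand-ins, not the actual Vuze FAQ text.
faq = [
    ("Discovering Content", [
        'Select "Featured Content" to see highlighted content',
        "Browse content by category",
    ]),
    ("Search Box", [
        "Search across the network",
        "Filter results by type",
    ]),
]

for section_number, (title, items) in enumerate(faq, start=1):
    print(f"{section_number}.0 - {title}")
    for item_number, item in enumerate(items, start=1):
        print(f"  {section_number}.{item_number} {item}")
```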

Once we have created a large number of indexed requirements, we can begin testing each of the functions the developer has documented. Without a good set of requirements, the test effort can become sidetracked or unfocused. Having a clear path for the progression of the test effort will keep the testing manageable and, some even think... fun.

Closing
Next time we will move on to the next step: creating cases from your requirements. Until then, take a look at the following links for additional examples of test requirements. Happy hunting.

Chris

Additional Links

Requirements for UME/Intel

Template for Requirements, as found in the proprietary software world (Glad we are not there! :-P)

Tuesday, June 10, 2008

First Release of Ubuntu Mobile Edition

Hello all,
Thanks for checking out this première post of the Canonical Ubuntu Mobile Edition news blog. First, let me introduce myself. I am the lead Ubuntu Mobile QA Engineer for Canonical. I have been working feverishly over the first part of this year, with the help of Ubuntu community members, to ensure that this first release of Ubuntu Mobile Edition (UME) is as good as it can be.

For those of you not aware, what is Ubuntu Mobile Edition? In short, it is a version of the Hardy Heron release of Ubuntu that has been customized to run on small form factor devices such as the new MIDs and netbooks.

This first release of UME has been built specifically for the new Intel MID platform and is intended to be a developer reference build. We combined the best parts of Moblin, Hildon, and several independent open source projects to roll out a full-featured mobile device OS. This means that this first release image will only boot on a device using a specific Intel chipset. However, you can easily explore UME through a virtual machine image.

You can grab our first release as a bootable USB or VM image here:
http://cdimage.ubuntu.com/mobile/releases/hardy/

For testing purposes, we also released a version that runs on the Samsung Q1 Ultra. This small device is ideal for exploring UME. If you have one, download our "Mccaslin" build and flash your device over to UME. Warning: this will wipe out all of the existing data on the device.

Once you have an image, the Ubuntu Mobile QA Team would love to hear your comments, answer your questions, and review any of the issues you find in Ubuntu Mobile Edition!

Post your comments: https://lists.ubuntu.com/mailman/listinfo/mobile

Get your questions answered: https://answers.launchpad.net/ubuntu-mobile

Enter your issues: https://bugs.launchpad.net/ubuntu-mobile

In future posts I will cover the testing effort, day-to-day operations within Ubuntu Mobile QA and any requests for community testing we might need. Please stay tuned.

Sincerely,
Chris Gregan
Ubuntu Mobile QA Engineer