Friday, March 13, 2009

Desktop Automation

Greetings from the Canonical QA Desktop Automation Sprint in Oxford UK!

Opening
Various representatives of Canonical QA have gathered in Oxford to collaborate on automating the Ubuntu desktop testing effort. Some of the areas discussed were tools, team collaboration, best practices, and roadblocks. The following is a brief primer on the current desktop automation efforts as well as links to resources and ideas on how you can get involved.

What is Automation?
In general, automation is the process of using software to run a series of actions against another piece of code. This is most commonly done with simple shell scripts when developers want to check small bits of code against their intended feature set. However, in recent years the effort to automate testing has moved this practice into the mainstream. It is now common for QA departments to have several staff members dedicated to scripting automated tests, and to base the majority of their test effort on execution of these scripts.
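As a trivial illustration, here is the kind of check such a script might perform, written in Python for readability. The program name and expected output are hypothetical placeholders.

import subprocess

# Run the program under test and capture what it prints. "myprog" is a placeholder.
result = subprocess.run(['myprog', '--version'], capture_output=True, text=True, check=True)

# Check the output against the intended behaviour.
assert '1.0' in result.stdout, 'unexpected version string: %s' % result.stdout
print('version check passed')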

Today this practice is gaining a foothold in the open source world as well. We here at Canonical have begun a concerted effort to develop a framework of automated tests based on proven technologies. The idea is to leverage the coding potential of our internal and community QA teams to build a better product for all. This involves several steps.

The Effort
The first step in building an automation effort is to decide on a solid foundation. This was a fairly trivial decision. The most robust method for accessing the GNOME desktop is through the accessibility layer, the same layer that assistive technologies such as Orca use to provide assisted access to the desktop. This layer can easily be re-purposed to give automated scripts a flexible API for driving the desktop as well.
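To give a feel for what that layer exposes, a short Python snippet using the pyatspi bindings can list the applications currently registered with it. This is only a sketch; it assumes the pyatspi package is installed and uses its standard Registry interface.

import pyatspi

# Ask the accessibility registry for the default desktop, then list every
# application currently exposing itself over the accessibility layer.
desktop = pyatspi.Registry.getDesktop(0)
for app in desktop:
    if app is not None:
        print(app.name, app.childCount)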

Once we had our foundation, we could build on it. There are a couple of tools that take advantage of the accessibility layer for testing. The two best-known are Dogtail and LDTP. After much deliberation we decided on LDTP for several reasons, the most important being that it integrates more easily with our existing tool set and that it is already in use by several large companies which provide their scripts freely to the community. This meant we had a tool with a large user base, as well as a library of test scripts already built.

LDTP
LDTP is a test framework based on Python. It uses the accessibility libraries to make calls directly to applications. These actions can be scripted in Python and executed either from the LDTP Runner or directly from the command line. This flexibility allows us to build suites of scripts that can be run by simple shell commands. It also meant that we could use some of our existing tools to call these suites.
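To give a feel for what these scripts look like, here is a minimal sketch written against the classic LDTP Python API (launchapp, waittillguiexist, and friends). The window and object names are illustrative; in practice you look them up with a tool such as Accerciser.

from ldtp import *

# Launch gedit and wait for its main window to appear on the accessibility bus.
launchapp('gedit')
waittillguiexist('*-gedit')

# Drive the application through the accessibility layer: type into the text
# area, then read the value back to verify the action worked.
settextvalue('*-gedit', 'txt0', 'hello from an automated test')
assert gettextvalue('*-gedit', 'txt0') == 'hello from an automated test'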

Checkbox
Checkbox is a tool installed by default on every Ubuntu system. When run by the user, it collects information about the system for submission to Launchpad. This allows the user to provide more comprehensive information when logging bugs against Ubuntu.

Checkbox is also capable of performing actions based on the information it collects. For instance, if the user runs Checkbox and it detects a built-in camera, Checkbox can then be told to run a series of tests based on the type of camera. Checkbox can then prompt the user with any errors it detected, and even enter bugs in Launchpad on behalf of the user based on the failed tests. The tests it executes can be based on several technologies, but the current effort is for Checkbox to call LDTP test suites based on the information it collects.
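Conceptually, the glue between the two looks something like the Python sketch below. This is not Checkbox's actual job format, just an illustration of the detect-hardware, pick-a-suite, record-the-result flow; the device names and suite paths are made up.

import subprocess

# Hypothetical mapping from a detected device class to an LDTP test suite.
SUITES = {
    'camera': 'suites/camera_tests.py',
    'sound': 'suites/sound_tests.py',
}

def detected_devices():
    # Stand-in for Checkbox's real hardware probes.
    return ['camera', 'sound']

results = {}
for device in detected_devices():
    suite = SUITES.get(device)
    if suite:
        # Run the LDTP suite as a child process and record pass/fail.
        returncode = subprocess.call(['python', suite])
        results[device] = 'pass' if returncode == 0 else 'fail'

# A real run would attach logs and file bugs in Launchpad for any failures.
print(results)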

A feature-rich combination of these tools is the current focus of Canonical QA, with the hope of producing a comprehensive set of test suites that can be executed automatically based on the tester's hardware. The system would then report any issues to Launchpad with test files, logs, and hardware configuration information attached.

What is the future of the automation effort?
As this effort moves forward, there are several areas where we expect rapid growth and areas where we will need help. The framework itself is fairly sturdy, so the expansion of tests based on it will be rapid. There are already many cases built for LDTP that can be run against applications on the GNOME desktop today. Over the coming months we will be expanding this coverage.

There will also be greater integration with Checkbox. Once this is complete, individual tests can then be organized into suites, and the execution criteria can be developed based on hardware specification. Checkbox will then be the unifying application for Ubuntu desktop automation, and will hopefully lead to a healthy community of developers building scripts for inclusion in the test framework.

This seems like a task that is straightforward and trivial to accomplish; however, some roadblocks still exist. For instance, there are several areas of the GNOME standard application stack that are still not accessibility enabled. Some key areas include the panel notification area and some of the newer technologies such as Clutter and Compiz. Enablement of these applications would allow for more comprehensive tests and, in the end, better software for all.

How can I get involved?
As I said, there are several areas where you can join this effort. The first step is to visit the QA wiki to get acquainted with the status of the effort. Next, if you are a package maintainer, verify that your software provides the necessary hooks to the accessibility layer. You can do this by launching your application and scanning it with a program called Accerciser, which is also available in the Ubuntu Universe repository.

If you have Python programming skills, many more scripts are needed for every area of the desktop environment. Pick an app and start writing. Instructions on the specifics of writing scripts for LDTP can be found on the project website. There is also a blueprint to direct future coding work on Launchpad that you can use as a reference.

If you are a non-programmer and would still like to help, please install LDTP and run some of the existing scripts. You can get a primer on setting your system up for testing here. After that we would love to hear from you. How did they work? Enter a bug if you have problems. How could they be better? Enter a Blueprint with additional requirements. Have you found more useful scripts published on the Internet that we could use? Send us an email. All of this would be very helpful to the effort.

Thanks for reading, and we hope to hear from you.

Chris

Tuesday, January 20, 2009

Testing in the open source world. Final Part

Intro
In the last two instalments I discussed the collection of requirements, the creation of test cases based on those requirements, and the general setup of a test suite. In this, the final post in the series, I will cover execution of the cases, defect entry, and overall project quality.

Getting Started
Now that we have requirements collected and numbered, and cases written and mapped to those requirements, we are ready for execution. The first and most important point to make about the execution of written cases is to view them as a guide, and not as an exact representation of the tests to be performed. In other words, be creative! You must test the cases as written, but you should also experiment with other tests around each case.

Example

The best way to make this point is to provide an example.

1.1 Select "Featured Content" to see content we highlight that changes on a regular basis
1) Launch Vuze
2) Open Featured Content area
3) Confirm Featured Content area is populated with links to various resources by default

So we have a case to check that the Content area is populated with default links. This seems fairly straightforward and easy to verify. However, this is not the only testing that can, or should, be done here. The basis of great software testing is creativity. For instance, for this test we check the Content area for links, but some other possible tests are:

  • Right click options for links
  • Deleting links
  • Creating links
  • Editing of links
  • Errors provided when invalid links are clicked
  • Special character support in links
  • Whether a clicked link opens a new dialog (usability)
So, from a single written case I have created 7 additional functional tests. The additional cases are tightly related to the written case, but exercise features not directly referenced by the case. This is an important point. The written cases ensure that testers cover the requirements for the code, but often user experience and usability are forgotten during requirement creation, so testing around the requirements becomes increasingly important.

Keep the user in mind

As a tester you always need to be mindful of the specifications of the software being tested, but if you were to test only the requirements you could still end up with a bad product. Testing has an additional role, and that is user advocacy.

In the end, people will be using the software, so their experience is paramount to the success of the software. Keep the user in mind as you navigate menus, receive feedback from the system, and access the various features. Do features operate in a logical flow? Is the information provided in error and warning dialogs clear to beginners?

This type of testing may require new users to actually handle the software. Beta testing is an excellent way to get a fresh set of eyes on your product. This method often uncovers usability problems that a tester, who has been on the project since the beginning, may not see.

Submitting bugs
As you test, defects will be uncovered, whether in the code itself or in the overall design of the software. Recording these errors with as much useful information as possible will ensure an efficient process and result in higher quality releases.

Open source projects use all manner of defect tracking systems, but all of them operate under a certain basic set of rules (a small sketch of this lifecycle in code follows the list below).

1) Bugs enter the system under a "new" state
2) Bugs are confirmed by a team and assigned priority and ownership
3) Bugs are fixed and placed in a queue for testing
4) Bugs are tested and marked either fixed or not
5) Bugs that are fixed make their way to release
6) Bugs that regress re-enter the system at step 2
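Here is the lifecycle sketch mentioned above, expressed as a tiny Python state machine. The state names simply mirror the numbered steps; any real tracker (Launchpad, Bugzilla, and so on) has its own richer set of states.

# Allowed transitions in the simplified bug lifecycle described above.
TRANSITIONS = {
    'new': ['confirmed'],                              # step 1 -> 2
    'confirmed': ['in_progress'],                      # step 2 -> 3
    'in_progress': ['fix_committed'],                  # fix lands, queued for testing
    'fix_committed': ['fix_released', 'in_progress'],  # step 4: verified, or bounced back
    'fix_released': ['confirmed'],                     # step 6: a regression re-enters at step 2
}

def advance(state, new_state):
    # Refuse any transition the process does not allow.
    if new_state not in TRANSITIONS.get(state, []):
        raise ValueError('illegal transition: %s -> %s' % (state, new_state))
    return new_state

state = 'new'
for step in ('confirmed', 'in_progress', 'fix_committed', 'fix_released'):
    state = advance(state, step)
print(state)  # fix_released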

This is the basis for any defect tracking system. It is critical that this process be followed; otherwise bugs are missed, dropped, or forgotten. There is nothing worse from a QA standpoint than a bug sitting in your tracker in a limbo state, only to be reported to support by customers. In my opinion this is worse than a bug never found, because you actually found it first, but a breakdown in *your* process let it slip out to the customer.

How do we prevent this? The first step is good information gathering and recording. Always have a notebook by your side to record steps or odd behavior. Next, have a template for entering defects. I like the following one.


Build: Version/Date
Environment: Hardware Version, Model, Bios version, Last Update

Summary:

Steps to Reproduce:

Expected result:

Actual result:


Whether your tracker includes a template similar to this, or you need to paste it in every time, it is important to have a guideline to ensure no detail is forgotten and the engineer has the information needed to reproduce the problem.
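To make that concrete, a filled-in example report (entirely made up, borrowing the Vuze scenario used elsewhere in this series) might read:

Build: Vuze 4.x nightly / 2009-01-15
Environment: Dell Inspiron 1525, BIOS A15, fully updated

Summary: Featured Content area is empty on first launch

Steps to Reproduce:
1) Launch Vuze
2) Open the Featured Content area

Expected result: The area is populated with links to various resources by default

Actual result: The area is blank and no links are shown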

Be sure to also attach any test files, changes, or additional assumptions to the bug as needed. A bug can fall through the cracks if it becomes the ball in a tennis match between test and development in a quest for enough information to reproduce the issue. Gather as much as you can ahead of time.

Quality is not just testing
As you may have already guessed, there is more to QA than just exercising a piece of code. QA process should be applied to all aspects of software development, from requirements building through to the distribution method. When you are involved in software QA you need to be involved in every stage of development.

Here is a brief outline of other areas QA process should be applied to in order to ensure a high-quality release:

Requirements gathering
Work closely with technical leads and project management to ensure all possible hurdles are addressed before requirements are finalized.
  • How will the software be used
  • Does this change the way the user is accustomed to using this type of software
  • What steps can be made to more easily acquaint the user with the software features
  • Are we creating/changing core technologies (impact on test effort)
Build process
Often the process of building a binary or image can introduce errors not seen during development unit testing. Develop a repeatable process for build and release.

  • Checklist of libraries, repos, etc. required for a functional build
  • Create a knowledge base of issues encountered by the build engineer for quick reference later
  • Develop a "smoke test" script to certify each build before functional testing begins (a minimal sketch follows this list)
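As an illustration, such a smoke test can be as simple as the following Python sketch; the package, binary, and file names are placeholders for whatever your project actually ships.

#!/usr/bin/env python
# Hypothetical smoke test: a handful of cheap checks run against every new
# build before functional testing starts. Names below are placeholders.
import os
import subprocess
import sys

CHECKS = [
    ('expected binary present', lambda: os.path.exists('/usr/bin/myapp')),
    ('application starts and exits cleanly',
     lambda: subprocess.call(['myapp', '--version']) == 0),
    ('default configuration installed', lambda: os.path.exists('/etc/myapp/myapp.conf')),
]

failures = [name for name, check in CHECKS if not check()]
if failures:
    print('SMOKE TEST FAILED: ' + ', '.join(failures))
    sys.exit(1)
print('smoke test passed - build certified for functional testing')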
Customer expectations
Manage your customer's expectations throughout the development process. Keep them involved.

  • Easy method for submitting feature requests (ex. Brainstorm)
  • Integrate customer test cases where possible
  • Work with customer testers to focus your testing on the areas the customer deems important
  • Triage all customer bugs and provide feedback ASAP
Release process
When an internal release passes all tests and is approved for general release, you would be excused for thinking your job was complete, but it is not. Once a final image is approved and CDs/images are being burned, there are still a ton of finalization tasks to be performed.

  • Check final media/release (Install tests, version checking)
  • Final triage of bugs left in system for update scheduling
  • Creation of final test report
  • "Post Mortem" review of project quality and process
  • Archival of project materials and output
  • Ship Party planning!!
When the customer has reviewed the final release, you have reviewed your process, the project is archived, and you have a frosty ship party pint in your hand, then, and only then, has your job on the project concluded. Or has it??

In reality, even as you celebrate a successful release with your team, undiscovered bugs lurk in the code, and eventually someone will find them. Keep a good interface open with customer support and help out as much as you can in post ship testing.

Go back and review your cases so successful tests can be migrated to the next version/release. Implement some of the lessons learned from the last project. Software will always have bugs, and a process for finding them can always be improved.

Conclusion
This concludes the review of test process. Throughout this series we have gone over many different aspects of the test process and overall software project quality management. I hope it has been helpful to you, and leads to higher quality software.

There are many methods for managing the quality of a software development process, but the thing they all have in common is repeatability. No matter how robust your test cases may be, if they cannot be reliably reproduced then they provide little value.

Documentation is a QA department's core function. Document everything. If there is a process for something then there should also be a process document explaining it step-by-step. Keep templates for every document you produce and keep them current.

Over the next months I will be expanding on some of the topics discussed in brief during this series. If there is a particular topic you would like to see covered in depth, please feel free to drop me a note to that effect. Thanks for reading and happy hunting!

Friday, September 5, 2008

Testing in the open source world. Part 2

Intro
In my last post I discussed the gathering of requirements, and the organization of those requirements. In this post I will be moving to the next step, case creation based on those requirements.

Getting Started
Now that we have requirements collected and numbered, we can begin creating cases to exercise the features described. So where to begin? Let's take a simple requirement from the example we used in the last post here.

1.1 Select "Featured Content" to see content we highlight that changes on a regular basis

So to best exercise this feature there are several things to consider. The first is the name of the section where the feature exists. In this example that is "Featured Content". So our first test case would be to confirm that this area is named correctly:

1) Launch Vuze
2) Confirm main application window contains "Featured Content" area
3) Confirm spelling and grammar of "Featured Content" area is correct

The next section of the requirement is "to see content we highlight". This means there should be something there by default. So, next case:

1) Launch Vuze
2) Open Featured Content area
3) Confirm Featured Content area is populated with links to various resources by default

Along with having content available, we also need to check that the content is functioning as expected. To do this, we need to explore each of the links provided in the content area.

1) Launch Vuze
2) Open Featured Content area
3) Click each content link
4) Confirm that each link launches a service

Lastly, there is the closing part of the requirement, "that changes on a regular basis". For this one we may need to contact the developer to find out what "regular basis" means, or whether he/she can trigger a change so we can test that feature. Either way, our case will be the same.

1) Launch Vuze
2) Open Featured Content area
3) Click each content link
4) Confirm that each link launches a service
5) Close Vuze
6) Trigger content shift or wait X hours for normal system cycle
7) Launch Vuze
8) Open Featured Content area
9) Click each content link
10) Confirm that each link launches a service

Now, I did something a little different here. In the previous cases I tested one feature at a time. This gets you the best results and makes tracking cases much easier. However, it is also very time consuming. To save a bit of time, you can often combine cases in a logical way. For instance, in this one I combined the cycling of content with the previous case where we checked the links.

Be careful here. You really only want to combine a new case with ones already run. If you start combining too many unique steps, you risk losing focus on what you are actually testing, and tracking which cases have passed and which have failed becomes a nasty situation. Any time you saved writing the cases quickly evaporates when you try to collect statistics on your testing results.

Mapping
OK, so we have some cases now. The next step is to ensure that all requirements are covered by cases. We do this by mapping cases to the requirement they were written to explore. This is accomplished by simply referencing the requirement number next to the case. So for the cases we wrote above, each would reference requirement 1.1. Having multiple cases for each requirement is very common. If you have several requirements with only a single case, spend a little more time thinking about how the user might interact with the feature, and I'm sure more cases will come to mind.
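For a small project the mapping can be as simple as a dictionary or an extra column in a spreadsheet; a rough Python sketch (the case IDs are made up) might look like this.

# Hypothetical traceability map: requirement number -> test cases written for it.
COVERAGE = {
    '1.1': ['TC-001', 'TC-002', 'TC-003', 'TC-004'],
    '1.2': ['TC-005'],
    '2.1': [],  # no cases yet: a gap to close before execution starts
}

uncovered = [req for req, cases in COVERAGE.items() if not cases]
print('Requirements with no test cases:', uncovered)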

Pulling It All Together
We now have requirements, numbered and organized, and cases created for each of our requirements. The last stage is organizing these into a format that is easy to read, follow, and track. Most QA houses use a table to do this. An example of the cases created in this post can be found here.
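A simplified sketch of such a table, with illustrative values, might look like this:

Case #  Req.  Description                              Pass/Fail  Bug    Date/Build
TC-001  1.1   "Featured Content" area named correctly  Pass       -      b1234
TC-002  1.1   Featured Content populated by default    Fail       #4321  b1234
TC-003  1.1   Each content link launches a service     Pass       -      b1234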

You'll notice that there are several additional columns in the table to help with tracking. These include a case number, description, and date/build, which help the tester easily find the cases that need to be executed. The pass/fail, bug, and date/build columns help with tracking the state of each feature, giving the reader an idea of which areas of the program need the most work and what the major issues with those areas are.

Put all of these together and you have a very efficient way of organizing your cases into a simple format that can be used by both technical and non-technical readers.

Next: Execution
Join me next time for the fun part! Executing the cases, finding bugs, and tracking the fixes through the software development process. Thanks for reading and Happy Hunting.

Thursday, July 3, 2008

Testing in the open source world. Part 1

Intro
Today I would like to begin a multi-part series about quality assurance testing and how it can work with open source software and projects. There are several stages to be aware of and many techniques for achieving a high level of quality and code coverage. Although I am familiar with many of these, in an effort to keep things as simple as possible I will only talk about the most popular aspects of each stage of testing. Enjoy.

How To Begin - a.k.a Requirements
Software comes in many shapes and sizes, each with its own varying amount of documentation. I have seen notepad applications with each feature carefully explained, and huge supply chain strategy suites with no documentation at all. Our hope, as testers, is to always land somewhere in the middle of these two. Some documentation is definitely preferred, but even when it is lacking, with some work we can get what we need to begin creating a structured test plan.

So, let's assume for now that we have some documentation. In the open source world this documentation is usually in the form of a FAQ or release notes from the developer. For our purposes here, let's say you are interested in contributing some testing to Vuze (Azureus). It was the number one download on SourceForge the day this article was written, and it has a rather large FAQ explaining basic functionality, found here.

Planning
We now have a requirements document (the FAQ) and can begin designing our test objectives. Depending on the scope of what we want to accomplish, we can plan to test all the functionality described, or focus on a single area of functionality. For our example, Vuze, let's pick a small subset of features to test. We will focus on the Content Search features described here.

The first document we will create is a Test Plan. This is a distillation of the project/application requirements. We want to create a document that we can easily reference to ensure that we have not missed a requirement during testing. There are several formats for this; none is better or worse than another, so pick a format you are comfortable with. I prefer a collaborative testing effort and use wiki documents for this, so my examples will be based on that technology.

So let's pull some requirements out. Go here and copy the content to your requirements document. Now format it into numbered items under a heading like "1.0 - Discovering Content". You can see my final example requirement here.
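For illustration, the start of that section might look something like this (requirement 1.1 is the same one used as an example throughout this series):

1.0 - Discovering Content
1.1 Select "Featured Content" to see content we highlight that changes on a regular basis
1.2 ...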

It is also very important to reference the source material you used for each section of your plan. This allows the reader to move between the documents easily and for you to quickly track changes in requirements and make necessary updates to all related documents.

Now just repeat these steps with as many sections of the FAQ as you would like to test. Be sure to keep incrementing the sections so you do not repeat the header. So "What's new about the search box" would be "2.0 - Search box"...etc.

Once we have created a large number of indexed requirements, we can begin testing each of the functions the developer has documented. Without a good set of requirements the test effort can be sidetracked or unfocused. Having a clear path for the progression of the test effort will keep the testing manageable and, some even think, fun.

Closing
Next time we will move forward to the next step...creating cases from your requirements. Until then, take a look at the following links for additional examples of test requirements. Happy hunting.

Chris

Additional Links


Requirements for UME/Intel


Template for Requirements
as found in the proprietary software world (glad we are not there! :-P )

Tuesday, June 10, 2008

First Release of Ubuntu Mobile Edition

Hello all,
Thanks for checking out this premiere post of the Canonical Ubuntu Mobile Edition news blog. First, let me introduce myself. I am the lead Ubuntu Mobile QA Engineer for Canonical. I have been working feverishly over the first part of this year, with the help of Ubuntu community members, to ensure that this first release of Ubuntu Mobile Edition (UME) is as good as it can be.

For those of you not aware: what is Ubuntu Mobile Edition? In short, it is a version of the Hardy Heron release of Ubuntu that has been customized to run on small form factor devices such as the new MIDs and netbooks.

This first release of UME has been specifically designed for the new Intel MID platform and is intended as a developer reference build. We combined the best parts of Moblin, Hildon, and several independent open source projects to roll out a full-featured mobile device OS. This means that this first release image will only boot on a device using a specific Intel chipset. However, you can easily explore UME through a virtual machine image.

You can grab our first release as a bootable USB or VM image here:
http://cdimage.ubuntu.com/mobile/releases/hardy/

For testing purposes, we also released a version that runs on the Samsung Q1 Ultra. This small device is ideal for exploring UME. If you have one...download our "Mccaslin" build and flash your device over to UME. Warning: This will wipe out all of the existing data on the device.

Once you have an image, the Ubuntu Mobile QA Team would love to hear your comments, answer your questions, and review any of the issues you find in Ubuntu Mobile Edition!

Post your comments: https://lists.ubuntu.com/mailman/listinfo/mobile

Get your questions answered: https://answers.launchpad.net/ubuntu-mobile

Enter your issues: https://bugs.launchpad.net/ubuntu-mobile

In future posts I will cover the testing effort, day-to-day operations within Ubuntu Mobile QA and any requests for community testing we might need. Please stay tuned.

Sincerely,
Chris Gregan
Ubuntu Mobile QA Engineer