Greetings from the Canonical QA Desktop Automation Sprint in Oxford, UK!
Various representatives of Canonical QA have gathered in Oxford to collaborate on automating the Ubuntu desktop testing effort. Some of the areas discussed were tools, team collaboration, best practices, and roadblocks. The following is a brief primer on the current desktop automation efforts as well as links to resources and ideas on how you can get involved.
What is Automation?
In general, automation is the process of using software to run a series of actions against another piece of code. This is most commonly done with simple shell scripts, written by developers who want to check small bits of code against their intended feature set. In recent years, however, the effort to automate testing has moved this practice into the mainstream. It is now common for QA departments to have several staff members dedicated to scripting automated tests, and to base the majority of their test effort on the execution of these scripts.
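To make that concrete, here is a minimal sketch of the idea. The `slugify` function is made up purely for illustration; it stands in for whatever small bit of code a developer wants to exercise, and is not part of any Ubuntu tooling:

```python
# A minimal automated check: exercise a small piece of code against
# its intended feature set and report pass/fail.

def slugify(text):
    """The code under test: turn a title into a URL-friendly slug."""
    return "-".join(text.lower().split())

# Each case pairs an input with the output the feature should produce.
CASES = [
    ("Hello World", "hello-world"),
    ("Ubuntu Desktop Testing", "ubuntu-desktop-testing"),
]

def run_checks():
    """Run every case and return a list of (input, expected, actual) failures."""
    failures = []
    for given, expected in CASES:
        actual = slugify(given)
        if actual != expected:
            failures.append((given, expected, actual))
    return failures

if __name__ == "__main__":
    failed = run_checks()
    for given, expected, actual in failed:
        print("FAIL: slugify(%r) -> %r, expected %r" % (given, actual, expected))
    print("%d case(s) failed" % len(failed))
```

Scale this pattern up to hundreds of scripts driving whole applications, and you have the shape of a modern automated test effort.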
Today this practice is gaining a footing in the open source world as well. We here at Canonical have begun a concerted effort to develop a framework of automated tests based on proven technologies. The idea is to leverage the coding potential of our internal and community QA teams to build a better product for all. This involves several steps.
The first step in building an automation effort is to decide on a solid foundation. This was a fairly trivial decision. The most robust method for accessing the GNOME desktop is through the accessibility layer. This layer is more commonly used by assistive technologies, such as the Orca screen reader, to provide assisted access to the desktop. However, it can easily be re-purposed to provide a flexible API for automated scripts to access the desktop as well.
Once we had our foundation, we could build on it. There are a couple of tools that take advantage of the accessibility layer for testing. The two best known are Dogtail and LDTP. After much deliberation, we decided on LDTP for several reasons, the most important being that it integrates more easily with our existing tool set and that it is already in use by several large companies which provide their scripts freely to the community. This meant we had a tool with a large user base, as well as a library of test scripts already built.
LDTP is a test framework based on Python. It uses the accessibility libraries to make calls directly to applications. These actions can be scripted in Python and executed either from the LDTP runner or directly from the command line. This flexibility allows us to build suites of scripts that can be run by simple shell commands. It also means that we can use some of our existing tools to call these suites.
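As a rough sketch of what "suites of scripts run by simple shell commands" can look like, here is a hypothetical runner that executes every Python script in a directory as its own process and tallies results by exit status. This is an illustration of the general pattern, not LDTP's actual runner:

```python
# Hypothetical suite runner: treat each script in a directory as one
# test, and its exit status as the pass/fail verdict.
import glob
import os
import subprocess
import sys

def run_suite(suite_dir):
    """Run every *.py script in suite_dir as a separate process and
    return (passed, failed) lists of script names by exit status."""
    passed, failed = [], []
    for script in sorted(glob.glob(os.path.join(suite_dir, "*.py"))):
        result = subprocess.call([sys.executable, script])
        (passed if result == 0 else failed).append(os.path.basename(script))
    return passed, failed

if __name__ == "__main__":
    passed, failed = run_suite(sys.argv[1])
    print("passed: %s" % ", ".join(passed))
    print("failed: %s" % ", ".join(failed))
    sys.exit(1 if failed else 0)
```

Because the runner itself is just a command with an exit code, it slots neatly under any higher-level tool that knows how to invoke shell commands.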
Checkbox is a tool installed by default on every Ubuntu system. When run by the user, it collects information about the system for submission to Launchpad. This allows the user to provide more comprehensive information when logging bugs against Ubuntu.
Checkbox is also capable of performing actions based on the information it collects. For instance, if the user runs Checkbox and it detects a built-in camera, Checkbox can be told to run a series of tests based on the type of camera. It can then prompt the user with the errors it detected, and even enter bugs in Launchpad on behalf of the user based on the failed tests. The tests it executes can be based on several technologies, but the current effort is for Checkbox to call LDTP test suites based on the information it collects.
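A toy sketch of that selection step: map detected hardware categories to the suites that should run for them. The category and suite names below are invented for illustration and are not Checkbox's real data model:

```python
# Illustrative only: pick which test suites to run based on the
# hardware categories a Checkbox-like tool detected on the system.

SUITES_BY_CATEGORY = {
    "camera": ["camera-capture", "camera-hotplug"],
    "sound": ["volume-control", "playback"],
    "network": ["wired-dhcp", "wireless-scan"],
}

def select_suites(detected_categories):
    """Return the list of suites to run for the hardware that was found.

    Categories with no registered suites are simply skipped."""
    suites = []
    for category in detected_categories:
        suites.extend(SUITES_BY_CATEGORY.get(category, []))
    return suites
```

The real system layers more on top of this (per-device test parameters, result collection, Launchpad submission), but the core idea is exactly this kind of hardware-to-suite mapping.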
A feature-rich combination of these tools is the current focus of Canonical QA, with the hope of producing a comprehensive set of test suites that can be executed automatically based on the tester's hardware. The system would then report any issues to Launchpad with test files, logs, and hardware configuration information attached.
What is the future of the automation effort?
As this effort moves forward, there are several areas where we expect rapid growth, and areas where we will need help. The framework itself is fairly sturdy, so the expansion of tests based on it will be rapid. There are already many cases built for LDTP that can be run against applications on the GNOME desktop today. Over the coming months we will be expanding this coverage.
There will also be greater integration with Checkbox. Once this is complete, individual tests can then be organized into suites, and the execution criteria can be developed based on hardware specification. Checkbox will then be the unifying application for Ubuntu desktop automation, and will hopefully lead to a healthy community of developers building scripts for inclusion in the test framework.
This may seem like a straightforward and trivial task to accomplish; however, some roadblocks still exist. For instance, there are several areas of the standard GNOME application stack that are still not accessibility-enabled. Some key areas include the panel notification area and some of the newer technologies such as Clutter and Compiz. Enabling accessibility in these areas would allow for more comprehensive tests and, in the end, better software for all.
How can I get involved?
As I said, there are several areas where you can join this effort. The first step is to visit the QA wiki to get acquainted with the status of the effort. Next, if you are a package maintainer, verify that your software provides the necessary hooks to the accessibility layer. You can do this by launching your application and scanning it with a program called Accerciser, which can be found in the Ubuntu universe repository.
If you have Python programming skills, many more scripts are needed for every area of the desktop environment. Pick an app and start writing. Instructions on the specifics of writing scripts for LDTP can be found on the project website. There is also a blueprint to direct future coding work on Launchpad that you can use as a reference.
If you are a non-programmer and would still like to help, please install LDTP and run some of the existing scripts. You can get a primer on setting your system up for testing here. After that, we would love to hear from you. How did the scripts work? Enter a bug if you have problems. How could they be better? Enter a blueprint with additional requirements. Have you found more useful scripts published on the Internet that we could use? Send us an email. All of this would be very helpful to the effort.
Thanks for reading, and we hope to hear from you.