With Xcode 5 and OS X Mavericks Server, Apple has made a play to support enterprise continuous integration practices and live-device test automation for iOS. What system does Google provide for the Android platform to support scalable, live-device test automation?
It turns out that Google doesn’t provide one themselves. Android, building as it does on Java, includes multiple test automation APIs, including JUnit and UI Automator, but these are just frameworks, not a full CI system. Even looking around the market, there isn’t a turn-key system connecting code changes to application builds to test passes, although tools exist to support every phase. With the right combination of tools and a little homework, it is something you can build yourself with almost no custom code required. In the following guide, I’ll lay out the pattern I’ve developed along with my fellow engineers at Deloitte Digital to create a robust, scalable, live-device Android test automation system.
First, let’s lay out the goals we started with when we began building the system. It must:
- Support continuous integration from source control (SCM) to live-device test automation
- Support parallelization as a means of scaling to maximize our use of shared resources
- Be inexpensive and straightforward to deploy and maintain ourselves
- Be accessible to multiple simultaneous teams of internal developers and testers via a convenient interface
Since we already had a space to use, a handful of devices and hubs, some spare hardware, and some experience experimenting with various continuous integration servers, the CI server was the natural place to start for this project. Myriad solutions are available, covering a huge spectrum of features and support. Specifically, we looked for something that:
- Provided robust API and plugin support
- Was simple to deploy and configure
- Was free
- Had a large user base
- Supported both Android and iOS build and test automation
- Had excellent, extensive documentation
In our deployment, the master node builds the applications and manages queuing for the slave nodes. Each slave node represents an individual device in our build lab and handles installing and testing the applications on a real phone or tablet. Because individual jobs can line up at their required nodes independently, executing as soon as a node becomes available, the shared resources are load-balanced and work is parallelized across as many devices as we’d like to add. For a deeper dive on how we did this, please see this blog post.
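To make the device-node work concrete, here is roughly what each slave node executes per queued job; a sketch using standard `adb` install and instrumentation commands, where the device serial, APK names, package, and runner class are all illustrative placeholders rather than our actual configuration.

```shell
# Sketch of a device-bound slave node's job script.
# DEVICE_SERIAL, APK filenames, package, and runner are placeholders.
DEVICE_SERIAL="0123456789ABCDEF"

# Install the freshly built app and its instrumentation test APK
# (-r reinstalls, keeping existing app data between runs)
adb -s "$DEVICE_SERIAL" install -r app-debug.apk
adb -s "$DEVICE_SERIAL" install -r app-debug-androidTest.apk

# Run the instrumented test suite on this node's live device;
# -w keeps the command attached until the run finishes, so the
# CI job's pass/fail status reflects the test results
adb -s "$DEVICE_SERIAL" shell am instrument -w \
  com.example.app.test/androidx.test.runner.AndroidJUnitRunner
```

Because each script pins a single device by serial number, any number of these jobs can run concurrently without contending for the same hardware.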
Because of our studio model, the system we chose needed to support any number of projects of varying sizes and platforms, all potentially lining up builds and tests throughout the day and night in a shared environment. This is a tricky problem to solve. Any automation system that funnels pools of shared resources through a single pipeline creates a bottleneck you could call “automated serialization.” Let’s briefly compare serialization to parallelization to illustrate the advantage of the latter.
Job A has 300 tests which need to run on three devices and each device will take 15 minutes to complete the suite. In a system using automated serialization, that means it will take 45 minutes to finish with three unique devices.
In a system using automated parallelization, it will still only take 15 minutes. Now imagine there are 15 unique devices, and there are 5 teams with jobs A, B, C, D, and E respectively. Now imagine each team has an average of 10 code commits a day, averaged across 8 hours. These are all reasonable assumptions at a studio our size when we’re busy.
Under automated serialization, that works out this way:
- 15 minutes per suite per device x 15 devices = 3 hours and 45 minutes per run
- 3 hours and 45 minutes per run x 5 teams = 18 hours and 45 minutes per code change cycle
- 18 hours and 45 minutes per code change x 10 code changes per day = 187.5 hours of tests per day
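The serialized arithmetic above is easy to verify; a quick sketch using the same assumptions (15 minutes per suite per device, 15 devices, 5 teams, 10 commits per team per day):

```python
# Serialized model: devices run one at a time, and teams queue
# behind one another for the shared pipeline.
minutes_per_device = 15
devices = 15
teams = 5
commits_per_day = 10

run_minutes = minutes_per_device * devices      # 225 min = 3 h 45 m per run
cycle_minutes = run_minutes * teams             # 1125 min = 18 h 45 m per cycle
daily_hours = cycle_minutes * commits_per_day / 60

print(daily_hours)  # 187.5 hours of tests per day
```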
Under automated parallelization, that works out this way:
- 15 minutes per suite per device x 15 devices = 15 minutes per run
- 15 minutes per run x 5 teams = 1 hour and 15 minutes per code change cycle
- 1 hour and 15 minutes per code change x 10 code changes per day = 12 hours and 30 minutes of tests per day
The math is clear:
- Automated Serialization: Tests/hour = 1,200
- Automated Parallelization: Tests/hour = 18,000
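Both tests-per-hour figures fall out of the same assumptions; a small sketch deriving them, noting that each run executes the 300-test suite on all 15 devices (4,500 test executions per run) and that 5 teams at 10 commits apiece yield 50 runs per day:

```python
# Throughput comparison under the article's assumptions.
tests_per_suite = 300
devices = 15
teams = 5
commits_per_day = 10
minutes_per_device = 15

executions_per_run = tests_per_suite * devices        # 4,500 per run
runs_per_day = teams * commits_per_day                # 50 runs
total_executions = executions_per_run * runs_per_day  # 225,000 per day

# Serialized: every device-suite happens back to back
serial_hours = minutes_per_device * devices * teams * commits_per_day / 60
# Parallelized: all 15 devices run at once, so a run takes 15 minutes
parallel_hours = minutes_per_device * teams * commits_per_day / 60

print(total_executions / serial_hours)    # 1200.0 tests/hour
print(total_executions / parallel_hours)  # 18000.0 tests/hour
```

The same 225,000 daily test executions compressed into 12.5 hours instead of 187.5 is exactly the fifteenfold gain the headline numbers show.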
Testing at this scale is increasingly important to the enterprise as testers and developers start to recognize the kinds of gains available. With millions of apps and fierce competition for consumers and enterprise users, you need to be able to deliver quickly and maintain a high quality bar through constant testing.
Our teams share the continuous integration system to run tests ranging from simple blind UI stress tests to unit tests and even full integration tests. Importantly, the majority of these run on live devices in parallel in our device lab. Apple and Google, the developers of the two major mobile platforms, scale primarily through virtual devices, which works for them because they can efficiently leverage their already massive server farms. We prefer live devices in a lab, but the argument for that is a post for another day.
Russell Collins is a QA engineer at Deloitte Digital’s Seattle studio.