The Fallacy of Automated Testing, and an Original Solution
February 19, 2008 Alex Woodie
The state of automated testing in the IT industry is so laughably bad, it almost makes you want to cry. According to statistics provided by testing tool specialist Original Software, organizations that bought automated testing tools have only managed to automate, at most, the testing of 20 percent of their applications. The other 80 percent is done the old-fashioned way: by hand. While Original doesn’t have a silver bullet for this problem, it claims its blended approach to testing could provide a solution.
In its new whitepaper, “The Great Software Testing Swindle,” Original Software makes a compelling case that all is not right in the $1 billion market for automated testing tools. In fact, the state of affairs is probably more akin to Shakespeare’s line about Denmark and foul odors.
In “Swindle,” the English software company summarizes that, based on research and its own experience, it is very rare to see an organization automate more than 20 percent of its testing chores. In fact, the percentage is usually much lower than that. As a result, not only have these organizations wasted time and money, but they’ve created another bottleneck in their application development cycles. This, from something that’s supposed to speed up the development cycle and lead to cleaner code.
Colin Armitage, Original’s CEO, puts his money where his company’s mouth is. “I’d happily put $50 on the table and say, ‘Find an account that has successfully managed to automate more than 20 percent of their testing,'” he says. (Original customers being an exception.)
So, how did we get to this fine state? According to Original, the answer is one of those ironic little things that you, at first, might find amusing. The problem, Original says, stems from the use of software scripts–the most widespread technique to achieve test automation (but which Original eschews in favor of its “self-healing” script technology).
The scripts work well, as long as the application doesn’t change. But when the application has undergone some level of transformation (such as what might lead you to test it in the first place), those scripts are practically worthless and must be adapted to work with the application changes.
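The brittleness described above can be sketched in a few lines. This is a hypothetical illustration, not the behavior of any particular testing tool: a recorded script hard-codes the element names it captured at record time, so the moment the application renames or removes a field, replay fails.

```python
# Hypothetical sketch: why recorded test scripts break when the app changes.
# A "screen" is modeled as a dict of element name -> field value, and a
# recorded script as a list of (element, value) steps captured at record time.

def run_recorded_script(screen, steps):
    """Replay recorded steps; fail as soon as a referenced element is gone."""
    for element, value in steps:
        if element not in screen:
            raise LookupError(f"script broken: element '{element}' no longer exists")
        screen[element] = value
    return screen

# Steps recorded against version 1 of a login screen...
recorded_steps = [("username", "alice"), ("password", "s3cret")]

v1_screen = {"username": "", "password": ""}
run_recorded_script(v1_screen, recorded_steps)   # replays cleanly

# ...but version 2 renames one field, and the same script is now worthless.
v2_screen = {"user_id": "", "password": ""}
# run_recorded_script(v2_screen, recorded_steps) # raises LookupError
```

The script didn't get worse; the application moved out from under it, which is exactly the Catch-22 the whitepaper describes.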
Vendors of automated testing tools must be aware of this Catch-22–that you can easily check your applications for potentially damaging changes, as long as you don’t change your application. Original is aware of the fundamental flaw, and has been talking about the failures of script-based tools for years.
But the question remains: When does the swindle become fraud?
Armitage responds. “I don’t think there was any deliberate attempt to deceive, but the reality of implementing test automation is far more complicated and far more demanding than even the vendors or many customers realize,” he says. “There are good benefits, if you can do automation at an acceptable price, in terms of timescales, and if you can practically keep it up to date so it continues to work with your application. I haven’t met a client yet who didn’t want to build a decent regression pack, so anytime the application changed, they could run it through the regression test pack and be reasonably confident they haven’t done any damage. But you can’t afford for that process to take months or be an additional burden on your development timescales.”
Original says it avoids this problem in its flagship TestDrive suite through its self-healing script, or SHS, technology. SHS contains an algorithm, Original says, that can spot when elements on the screen have been added, removed, or changed. The SHS scripts then are automatically updated with the changed elements, allowing the SHS script to be reused even if the application has changed.
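Original's actual SHS algorithm is proprietary, but the general idea of re-matching a script to a changed screen can be approximated with fuzzy matching. The sketch below is an illustrative analogue only, assuming the same dict-of-elements model of a screen: steps whose element was renamed are re-pointed at the closest surviving name, and steps whose element was removed are dropped.

```python
# Illustrative analogue of a "self-healing" script update (NOT Original's
# proprietary SHS algorithm): re-match recorded steps to a changed screen.
import difflib

def heal_script(steps, screen):
    """Return a copy of the recorded steps updated for the new screen.

    Steps whose element still exists are kept; steps whose element was
    renamed are re-pointed at the closest match; unmatchable steps
    (the element was removed) are dropped.
    """
    healed = []
    for element, value in steps:
        if element in screen:
            healed.append((element, value))       # unchanged element
            continue
        match = difflib.get_close_matches(element, screen.keys(), n=1)
        if match:
            healed.append((match[0], value))      # element was renamed
        # else: element was removed entirely -> drop the step

    return healed

steps = [("username", "alice"), ("password", "s3cret")]
new_screen = {"user_name": "", "password": "", "remember_me": False}

print(heal_script(steps, new_screen))
# [('user_name', 'alice'), ('password', 's3cret')]
```

A real tool would need far more context than string similarity (element type, position, labels), which is presumably where the engineering effort, and Armitage's "still takes work" caveat, comes in.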
Armitage doesn’t claim that TestDrive eliminates the work entirely, but it takes less time–weeks to set up, instead of months with other tools–and has an overall success rate closer to 50 percent.
“We’ve made some radical steps adding prudence in terms of that whole process. But at the end of the day, creating that automation set still takes work,” he says. “Doing test automation in a world where applications don’t stay still is a complex and demanding business. And if you’re not careful you can burn an awful lot of man hours and people’s time trying to make that happen.”
This approach is starting to bear fruit for Original in the market, Armitage says. The company recently announced that revenue for its third quarter (ended December 31) grew 144 percent compared to 3Q06. While the privately held company doesn’t disclose financial details, it is tracking above the $10 million per year mark, Armitage says.
And Original is finally starting to get some traction among the IT analysts, who are starting to take a closer look at the market for automated testing tools, Armitage says. Gartner declared Original a “technology leader” in a recent report (alas the Magic Quadrant is dead), according to Armitage, and the company will also be featured in an upcoming Forrester Wave. “We now seem to be firmly established on their radar,” Armitage says.
Meanwhile, Original hopes giving away limited trials of its TestDrive-Assist will further seed the market, which IDC says is in the midst of growing from $950 million in 2005 to $1.8 billion in 2010. First introduced more than a year ago, TestDrive-Assist helps testers by maintaining an audit trail of all testing activities, whether they’re conducted manually or automatically, which enables users to reproduce and then fix bugs or other errors. The tool is also a key element in Original’s “crawl, walk, run” approach to implementing automated testing.
Original is allowing users to download a trial of TestDrive-Assist and use it free of charge for five consecutive days before the trial expires.
The company has also launched a new Web site at www.origsoft.com. In addition to containing flashy new graphics, the new Web site contains a “reading room” where site visitors can access customer case studies (called “original thinkers”) or read about well-publicized consequences of poor testing (called “software nightmares”).