Newsflash: Developers Hate to Test Their Software
Published: June 21, 2010
by Timothy Prickett Morgan
There are some things that transcend platform differences. All computers wait at the same speed. All projects come in over budget and behind their projected delivery windows. Sometimes people change things for the sake of change and for no other damned good reason. And, according to a recent survey, application software developers hate to test their code.
So why not do what the computers do best and automate the testing? Well, because programmers are artists as much as they are techies, and they all have their own ways of writing code and therefore their own methods for testing code. What's an IT organization to do? First, they have to admit there is a problem and then seek help.
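To make the idea of automated testing concrete: instead of poking at an application by hand before every release, a developer writes the checks once and lets the machine rerun them forever. Here is a minimal sketch using Python's standard `unittest` module; the function under test, `add_line_item`, is a hypothetical example and not anything from the survey.

```python
# A minimal sketch of automating a check that would otherwise be run by hand.
# add_line_item is a hypothetical function invented for illustration.
import unittest

def add_line_item(subtotal, price, quantity):
    """Return a new subtotal after adding quantity units at the given price."""
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    return subtotal + price * quantity

class AddLineItemTest(unittest.TestCase):
    def test_adds_items(self):
        # 4 units at $2.50 each should add $10.00 to the subtotal.
        self.assertEqual(add_line_item(100.0, 2.5, 4), 110.0)

    def test_rejects_negative_quantity(self):
        # Bad input should fail loudly rather than corrupt the subtotal.
        with self.assertRaises(ValueError):
            add_line_item(0.0, 2.5, -1)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Once checks like these live in a test suite, a build server can run them on every code change, which is exactly the kind of drudgery computers are better suited to than artists.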
A Silicon Valley firm called Electric Cloud sells cloud-based tools to help developers test their wares, and obviously needed a better sense of what companies were doing, or not doing, when it comes to testing applications. And so it commissioned Osterman Research to survey some big IT shops in North America that have at least 1,000 employees and at least 50 developers to get some insight.
The researcher was able to get developers, testers, managers, and executives at 144 companies to spill the testing beans, and found that only 12 percent of those polled had completely automated their application testing regimens. Another 10 percent said that all of their testing was done manually. Here. In 2010. Some 46 percent of the software developers polled admitted they do not have as much time to test their code as they know they should (and no one in the news business has time to read what they write, or to think before they write, or to have someone else carefully edit and polish what they write, so I am not throwing stones here), and 36 percent reported that they don't think their companies do enough pre-release testing of applications. Of those polled, 56 percent said that bugs found late in the development cycle almost always messed with product release dates; 44 percent of those talking to Osterman on behalf of Electric Cloud said their last big bug cost an average of $250,000 in lost revenue and took 20 developer-hours to correct.
Now here's the kicker: The developers who say they have enough time to test their applications before release report spending half as much time--an average of 12 developer-hours--fixing the average bug, compared to those who feel they are coding by the seat of their pants, who report spending 25 developer-hours fixing the average bug.
Looks like either way, you pay. Time does indeed equal money, as Einstein proved in his as-yet unpublished grand unified theory.
RELATED STORIES

ARCAD Opens ALM Suite a Little More

MKS Adds Test Management to ALM Suite

The Fallacy of Automated Testing, and an Original Solution

Original Teams with Green Hat for SOA App Testing