iSam Blue Predicts Falling HA Costs
February 14, 2011 Dan Burger
Robert Seal is convinced the cost of high availability can be reduced. And he thinks that cloud computing is just the thing to make that happen. The costs of HA, both software- and hardware-related, have already been squeezed in the past several years, but even the most optimistic estimates have fewer than 10 percent of IBM i, System i, iSeries, and AS/400 shops implementing this level of business resiliency. Seal says HA is ready for a growth spurt.
Technically speaking, the debate comes down to a difference of opinion on remote journaling. An IT Jungle article last week, Remote Journaling: Friend or Foe in HA?, can fill you in on that debate. In short, it is a question of latency: the time it takes to move data and objects from a primary server to a secondary server in an off-site location. In high availability, speed and the ability to replicate right up to the moment a system goes down, minimizing data loss, are the greatest product differentiators.
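A back-of-the-envelope illustration of why latency is the differentiator: the replication lag bounds how many transactions can be lost when the primary fails. The figures below are hypothetical, not from the article.

```python
# Hedged illustration (hypothetical figures): replication latency bounds
# how many transactions are at risk if the primary server goes down.
txn_per_second = 50      # hypothetical transaction rate on the primary
latency_seconds = 3.0    # hypothetical time for data/objects to reach the target

# Any transaction committed on the primary but not yet on the target is exposed.
transactions_at_risk = txn_per_second * latency_seconds
print(f"transactions at risk if primary fails: {transactions_at_risk:.0f}")
```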
But back to the cloud and reducing the costs of this business continuity service, which has become mandatory for companies operating in a 24x7x365 world where downtime is hugely expensive. And maybe even illegal.
As companies turn to cloud computing to handle major application needs, they are dependent on those applications being available 100 percent of the time. That makes cloud computing and high availability pretty much inseparable. Software companies that are hosting applications for their customers have to provide assurance of high availability.
When companies figure the costs of high availability, they take into account what they stand to lose if their systems go down. They also count not only the price of the software licensing and the cost of the backup (target machine) server hardware, but also the cost of any additional infrastructure improvements that allow speedy data transfer over a distance that keeps the target server safe from whatever disaster strikes the primary box. Then there's the cost of managing and maintaining the system with in-house staff.
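The cost components above can be tallied into a simple comparison. All figures below are hypothetical placeholders, used only to show how the math of on-premise versus hosted HA might stack up.

```python
# Hedged sketch with entirely hypothetical figures: summing the HA cost
# components the article lists for an on-premise setup vs. a hosted one.
on_prem = {
    "software_licensing": 30_000,       # hypothetical annual figure
    "target_server_hardware": 25_000,   # backup (target) machine
    "infrastructure_upgrades": 10_000,  # bandwidth/links to a distant target
    "in_house_staff": 40_000,           # managing and maintaining the system
}
cloud = {
    # provider bundles hardware, HA, and support into one subscription
    "hosted_ha_subscription": 48_000,
}

on_prem_total = sum(on_prem.values())
cloud_total = sum(cloud.values())
print(f"on-premise HA: ${on_prem_total:,}/yr vs. hosted: ${cloud_total:,}/yr")
```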
A big chunk of this can be reduced by setting up HA in a cloud, Seal says.
Two independent software vendors (ISVs) that are partnering with iSam Blue serve as examples in Seal’s explanation of how HA in the cloud will work and how it will make HA more affordable.
The two ISVs are Soft-Pak, a developer of ERP software for the waste management industry, and Intelek Technologies, an EDI software company. Both vendors specialize in applications that run on the IBM i platform. They offer their customers licensed software that runs on the customers' systems and they offer it in the cloud, which is another way of saying they will host and manage the applications for customers, relieving them of the associated responsibilities. The ISVs will provide the secondary (target) servers if customers don't already have them or don't want to take on that responsibility.
For 25 years before developing a Web-based product, Soft-Pak created and maintained licensed software covering billing, service, and routing. Two Soft-Pak customers have added Quick-EDD high availability to their own IBM i systems.
Soft-Pak president Brian Porter predicts his company's Web-based solution will become increasingly popular as customers realize they get more features, more security, no hardware or maintenance expense, and lower operating costs.
The trend toward Web-based service is an “easy button solution,” according to Terry Wood, vice president of technology at Intelek, because companies can buy a little bit or all of it.
Intelek's 350-plus customers are in the transportation-shipping industry. When the software company began offering a Web-based alternative to its licensed software, protecting that offering with high availability became mandatory.
When companies buy software that will run in their own data centers, these ISVs recommend iSam Blue and Quick-EDD high availability. When companies inquire about obtaining software in a cloud environment, the ISVs explain that their offerings and their systems are highly available because they are backed up by Quick-EDD.
“In cloud computing,” Seal explains, “you have to have 24×7 capabilities or something very close to it. HA is expected. Customers are paying for a provider to take care of everything, so the customer has nothing to worry about. That includes backups, HA, upgrades, and all problems are covered by support.”
With regard to pricing, Seal says the cost of high availability software will continue to decline. So will infrastructure costs for companies with bandwidth concerns, because HA technology can reduce existing bandwidth requirements. Logical partitioning has a cost-saving effect as well: as more companies and service providers take advantage of it, a backup target can be carved out of a box with everyday uses rather than one reserved only for emergencies. Then there are the labor savings companies can realize by offloading applications and high availability to hosting service providers, a.k.a. the cloud.
Have you read any forecasts for cloud computing that predict anything other than a huge increase in this software as a service model?
One other aspect of high availability that is widely accepted, yet almost unbelievable, is the testing of HA systems. Or, more precisely put . . . the lack of testing. Companies frequently deploy high availability with the expectation that they can switch over to a backup server and eliminate downtime, but they don't test to make sure.
“The biggest reason why people don’t want to do a switch, a test, is because they are scared to death their data isn’t right, they don’t have licensing right, that they haven’t done the communications correctly,” Seal says. “I don’t see the point of having HA if you don’t run periodic tests.”
His advice is to switch from the primary server to the target server within two to four weeks after installing the software. To prepare for this, Seal says he would do incremental pre-testing before pulling the switch. The first step is to prove that the expected replication is taking place. The second is to verify that communications are working and that the data on the target system is there and as expected. The final test is to prove that everything can be replicated back to the primary server.
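The three pre-test steps can be sketched as a simple checklist. Quick-EDD's actual verification tooling is proprietary and not described in the article, so everything below, the two simulated server catalogs and every function name, is a hypothetical illustration of the idea, not the product's interface.

```python
# Hedged sketch of Seal's three pre-switchover checks, run against two
# simulated server catalogs. All names and structures here are hypothetical.
import hashlib
import time

def table_checksum(rows):
    """Order-independent checksum over a table's rows."""
    digest = hashlib.sha256()
    for row in sorted(repr(r) for r in rows):
        digest.update(row.encode())
    return digest.hexdigest()

def replication_lag(primary, target):
    """Step 1: prove replication is taking place by comparing heartbeats."""
    return primary["heartbeat"] - target["heartbeat"]

def data_matches(primary, target):
    """Step 2: confirm the target holds the same data as the primary."""
    return all(
        table_checksum(primary["tables"][name])
        == table_checksum(target["tables"].get(name, []))
        for name in primary["tables"]
    )

def can_replicate_back(primary, target):
    """Step 3 (simulated): the target has everything needed to replay
    its tables back to the primary after a switchover test."""
    return set(target["tables"]) >= set(primary["tables"])

now = time.time()
primary = {"heartbeat": now,
           "tables": {"ORDERS": [(1, "acme")], "ROUTES": [(7, "east")]}}
target = {"heartbeat": now - 2.0,  # target trails the primary slightly
          "tables": {"ORDERS": [(1, "acme")], "ROUTES": [(7, "east")]}}

lag = replication_lag(primary, target)
print(f"step 1 -- replication lag: {lag:.1f}s")
print("step 2 -- target data verified:", data_matches(primary, target))
print("step 3 -- switchback possible:", can_replicate_back(primary, target))
```

Only after all three checks pass would the actual switch to the target machine be attempted.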
“We encourage customers to switch (from primary to target machine) every quarter (three months) for a minimum one-day test to make sure everything is as expected,” Seal says. “It’s amazing how much better maintained systems are when it is known that a test is coming each quarter. Everybody does their jobs better. In my opinion, it’s nice to have your own backup, but cost has been an impediment to implementing HA. The software has been too expensive, but that has come down. The hardware costs have dropped dramatically, too. But the price can drop further. We need to drop the price for the customers who need HA, but can’t afford it.”