Volume 1, Number 36 -- October 7, 2004

Rotten to the Core: Chips, Lies, and Software Licenses


by Timothy Prickett Morgan


Software licensing no longer makes sense. The anonymity of the Internet, the prevalence of distributed computing, and clever multi-core, multi-threaded processors have all screwed up software licensing in their own ways. To put it bluntly, the prices most companies pay for software licenses have little correlation to how they actually use software. This is a big problem, and it is going to take some innovative pricing and industry consensus to fix it.

While the looming crisis in software pricing applies equally to desktop, server, and other computing platforms, this discussion is limited to the server market. And this discussion assumes that software makers have a right to create code and to charge for it. Open source software with paid support and services is a great model (and is maybe even preferable), but that does not make closed source software with licensing fees and support fees inherently wrong or evil, as open source zealots would have us believe. The issue is not whether you will pay for software (you most certainly will, one way or the other, either with cash or with your own time), but whether software pricing makes sense.

In the distant past, when servers were monolithic machines that ran an application or set of applications in a host-based environment, with the data processing supporting those applications more or less confined to a single box, it was relatively easy to come up with a price for software. You could scale the price of the software, whether it was an operating system, a relational database, or a module of an ERP suite, according to the processing capacity of the machine: the more powerful the machine, the more you paid for the software.

Some platforms still use such tiered pricing for their software. But the advent of client/server computing in the late 1980s and early 1990s complicated matters. In client/server architectures, which distribute software functionality across both smart clients and intelligent servers, rather than concentrating computing on central hosts with dumb terminals, vendors had to charge for software on both clients and servers. Some software companies just counted seats and charged a per-seat rate. Other software makers counted concurrent users (not total potential seats) and added a fee for the servers, too. This was tolerable and made some sense.

Then n-tier architectures bloomed in the late 1990s, and databases were separated from applications, which often ran in a distributed fashion on much less expensive application servers. Over time, as the Internet became the dominant client infrastructure, the heavy client was replaced with a Web client, which by its very nature can be anonymous. Once companies exposed their ERP systems to the Internet, to do direct sales, to work with suppliers, and to lash together their employees around the world, it got much trickier to figure out where software was really running and what a company should be charged to use it. How do you count Web server workloads or tire-kickers coming in over the Internet who tickle the back-end ERP systems as they browse?

In many cases, you can't, so software vendors have stopped trying. Server operating systems, for instance, use a mix of tiered server pricing or tiered server plus client access pricing, and they simply ignore the fact that these systems are probably being hit from the Internet. What this means, of course, is that companies that use software in a closed fashion on their internal networks are subsidizing the many companies that expose that software, in one of a million ways, to the Internet. This is obviously not fair.

To get around this problem, many operating systems, middleware programs, and application programs are available on per-CPU licenses. This seemed more or less fair, until you realize that not all CPUs are created equal. Per-CPU pricing has rewarded the server engineers who created very fast processors, so that the fewest processors support a given workload, and it has severely hurt server makers that went the other route, using a larger number of slower processors to build a machine that can do just as much work.

Nothing illustrates the bookends of this problem better than comparing the cost of the Oracle 10g database on an IBM eServer p5 Unix server and a Sun Microsystems Sun Fire UltraSparc-IV server. If you assume a processor core is a processor (and all of the software companies I have ever talked to do), IBM's Power5 "Squadron" systems have a huge advantage over Sun's "Serengeti" Sun Fire machines, even with the advent of the dual-core UltraSparc-IV processors this year. IBM just demonstrated a 16-way p5 570 (that's eight chips and 16 cores) with cores running at 1.9 GHz that can churn through 809,144 transactions per minute on the TPC-C online transaction processing benchmark test. My best guess is that it would take a 72-way Enterprise 25000 from Sun using the 1.2 GHz dual-core UltraSparc-IV chips (meaning there are only 36 physical chips in the box) to match that performance. Sun could demonstrate equal price/performance on server hardware, operating system, and middleware. But when Oracle 10g costs $40,000 per processor (at list price) on these machines, IBM has a 4.5 to 1 advantage over Sun on database pricing. This is huge. To be fair, Oracle offers a concurrent user license for its database software, which would push things back toward something akin to fair, but what about ERP software that is priced on a per-CPU basis?
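The per-core arithmetic behind that 4.5 to 1 figure can be sketched in a few lines. The $40,000 list price and the core counts come from the comparison above; the helper function is purely for illustration:

```python
# Sketch of the per-core license arithmetic described above.
# Figures come from the article: Oracle 10g at $40,000 list per processor
# core, a 16-core IBM p5 570 versus an estimated 72-core Sun configuration.
PRICE_PER_CORE = 40_000  # Oracle 10g list price when a core counts as a CPU

def license_cost(cores: int, price_per_core: int = PRICE_PER_CORE) -> int:
    """Total database license cost when every core counts as a processor."""
    return cores * price_per_core

ibm_cost = license_cost(16)   # 8 dual-core Power5 chips, 16 cores
sun_cost = license_cost(72)   # 36 dual-core UltraSparc-IV chips, 72 cores

print(f"IBM p5 570: ${ibm_cost:,}")                  # $640,000
print(f"Sun Enterprise 25000: ${sun_cost:,}")        # $2,880,000
print(f"Sun pays {sun_cost / ibm_cost:.1f}x more")   # 4.5x more
```

Identical hardware price/performance, yet a 4.5X gap in database licensing, purely because the same work is spread across more, slower cores.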

This is such an intractable problem that when Sun announced the dual-core UltraSparc-IVs earlier this year, it tried to obfuscate, claiming that the UltraSparc-IV was a single processor, just like the single-core UltraSparc-III. If IBM, which had launched the first dual-core processors at the end of 2001 with the Unix-based Power4 "Regatta" servers, had primed the industry for that expectation (that a chip is a processor and that the core count inside that chip is irrelevant), Sun's cheeky marketing might not have met with such a dull thud. And at the Intel Developer Forum in September, Intel, which is getting ready to launch its first dual-core chips next year, tried to pull the same trick, arguing that since software makers already ignore virtual processors (enabled through simultaneous multithreading wizardry, which Intel calls HyperThreading in its Xeon and Itanium chips) when pricing software, it seemed only logical that CPU-based software prices should count only physical CPU sockets. There was much laughter at this suggestion, however cleverly Intel argued it. But for three years now, the industry has been conditioned to count processor cores, and unless something drastic happens, this will not change.

A fairer approach might be to count the number of threads, physical and virtual, in a system, and base software pricing on that. The Power5 chips have SMT as well as dual cores, which means each physical chip presents four separate threads to applications running on the system. But if the industry moved to thread counts (not to be confused with good sheets), Sun would get burned again. With its future "Niagara" processors, Sun is putting eight processor cores on a single chip, each core with four threads. Those 32 threads will scream on Web infrastructure workloads that like many threads, but at eight cores and 32 threads per chip (apparently without SMT, so these are 32 real threads), buying a database or other software priced on CPU or thread count will be very expensive compared with the two-way servers of that time that will offer similar performance. Sun's engineering is very smart with Niagara, but it goes against the trend in software pricing.
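A rough sketch of how thread-count pricing would treat these chips, using the thread math above. The per-thread price here is a purely hypothetical figure, not anything a vendor has published:

```python
# Sketch of per-thread license counting, using the thread math from the
# article: Power5 = 2 cores x 2 SMT threads per chip; Niagara = 8 cores
# x 4 threads per chip. The per-thread price is a hypothetical assumption.
HYPOTHETICAL_PRICE_PER_THREAD = 10_000

def threads_per_chip(cores: int, threads_per_core: int) -> int:
    """Total hardware threads a single chip presents to software."""
    return cores * threads_per_core

power5_threads = threads_per_chip(cores=2, threads_per_core=2)   # 4
niagara_threads = threads_per_chip(cores=8, threads_per_core=4)  # 32

# Under thread-count pricing, one Niagara chip would be licensed like
# eight Power5 chips, even if it were marketed as "one processor":
print(power5_threads * HYPOTHETICAL_PRICE_PER_THREAD)   # 40000
print(niagara_threads * HYPOTHETICAL_PRICE_PER_THREAD)  # 320000
```

Whatever the actual dollar figure, the 8X ratio between the two chips is what punishes the many-thread design.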


You can see now why Sun has launched per-user annual subscription pricing for its Java Enterprise System operating system and middleware stack. For $100 per employee per year, you get the whole shebang and use what you want. Sun's own software will not discriminate against its own hardware the way non-Sun software with CPU-based pricing does today. Neither, by the way, do the support services from middleware upstart JBoss, which gives its open source, J2EE-compliant Web application server away for free and then charges $8,000 per application hitting that server for support. Both Sun's and JBoss' pricing models are simpler and seem to be more fair, in that they charge based on what companies actually use. But don't expect the players that are benefiting from current pricing practices to change their ways.
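As a rough sketch, the $100 per employee figure is Sun's published subscription price; the employee count, core count, and per-core price below are illustrative assumptions, not quotes:

```python
# Hypothetical comparison of Sun's $100 per employee per year subscription
# model against per-core licensing. The $100 figure is from the article;
# the 1,000-employee shop, 16-core server, and $40,000 per-core price are
# illustrative assumptions for the sake of the comparison.
def subscription_cost(employees: int, per_user_per_year: int = 100) -> int:
    """Annual cost under a per-user subscription model like Sun's JES."""
    return employees * per_user_per_year

def per_core_cost(cores: int, price_per_core: int) -> int:
    """Up-front license cost when every core counts as a processor."""
    return cores * price_per_core

# A hypothetical 1,000-employee shop running a 16-core server:
print(subscription_cost(1_000))      # 100000 per year, hardware-independent
print(per_core_cost(16, 40_000))     # 640000 up front, grows with core count
```

The point of the subscription model is the first number: it stays the same no matter how many cores or threads the hardware engineers pack into the box.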

Even so-called value-based pricing has some serious issues. Some software makers use complex metrics, including employee count, annual revenue, and profitability compared with competitors, to figure out a price for their software. In effect, the price is what they think you can pay, and what they think you ought to pay. If the software is going to help you compete better, presumably under these scenarios you have to pay more for it. While this is logical, it makes it very difficult to comparison shop. You never know what the list price is for a component, so you never know what the floor or the ceiling is for the price of a piece of software. This is unacceptable.

Any free market, including the slippery software market, is founded on the idea that a fair price cannot be reckoned through the bargaining of a single buyer with a single seller. A fair price for any product can only emerge through the free exchange of information across many buyers and sellers who have done similar deals. Unpublished value-based pricing therefore inherently undermines the free market in software licensing.



Editor: Timothy Prickett Morgan
Managing Editor: Shannon Pastore
Contributing Editors: Dan Burger, Joe Hertvik, Kevin Vandever,
Shannon O'Donnell, Victor Rozek, Hesh Wiener, Alex Woodie
Publisher and Advertising Director: Jenny Thomas
Advertising Sales Representative: Kim Reed




Copyright © 1996-2008 Guild Companies, Inc. All Rights Reserved.
Guild Companies, Inc. (formerly Midrange Server), 50 Park Terrace East, Suite 8F, New York, NY 10034