The X Factor: Is Memory-Based Software Pricing the Answer?
Published: July 25, 2006
by Timothy Prickett Morgan
A year and a half ago, the IT industry was just beginning to come to the realization that the advent of multiple cores on processor chips was going to wreak havoc on software pricing. Since that time, vendors have been wrestling with how to price software on products where more and more processing elements, virtual threads, and even whole virtualized machines are zooming around dynamically inside of a single physical machine. But the answer might have been there all along: The 1s and 0s of main memory.
Dual-core processors and hyperthreads (virtual threads that make a single processor core look like two cores) were invented only because, even though Moore's Law still allows the number of transistors on a chip to double roughly every 24 months (it used to be 18 months), that transistor density no longer lets processors ramp up their clock speeds. For four decades, we could always crank the clock to get more computing, but clock speeds around 4 GHz are hitting a practical thermal limit with current chip-making and system cooling technologies, and 10 GHz speeds would have hit a theoretical one. Now, instead of cranking clocks to get more performance--and not worrying about the efficiency of a processor and its server at all--everyone is running around the data center trying to virtualize machines to make workloads portable and malleable.
A year and a half ago, Sun Microsystems had just launched its dual-core UltraSparc-IV processors, and it was still calling a dual-core chip a single processor because, to put it bluntly, Sun's processor cores were under-powered and Sun was getting killed on comparisons to other chips when it came to software pricing. So Sun said the industry should look at the socket level and forget what is going on inside the chip. Of course, when it came to its own middleware stack, Sun opted for subscription-based pricing tied to the number of employees a company had and Sun didn't care how many employees used its middleware or how many servers it ran on.
Since that time, Microsoft and VMware have opted for socket-based pricing, and Oracle and IBM have come up with slightly more complex software pricing schemes that, in the end, charge companies more if they use RISC or proprietary processor cores than if they use X64 cores. This strikes no one as being particularly fair, but that's the computer business for you.
Still, in one way or another, these vendors are giving a slight or a substantial price break because processor makers shifted to two-core chips. But there is no way they can continue to be generous going forward, no way they can put software on the same trend line as Moore's Law. Software has always gotten more expensive each year, with very few exceptions. (The notable exception is Windows. Microsoft was able to create and maintain a desktop operating system monopoly by making Windows cheaper each year throughout the 1990s while at the same time making it better--a brilliant strategy, as it turns out.)
If cores and threads were a problem for software pricing, virtualization is going to create a bigger mess. In virtualized server environments, where the amount of processor, memory, and I/O capacity can be dialed up or down on a whim, it is going to be increasingly difficult to tie the value of software to usage.
Some have suggested that what we need is to establish a quantum that describes a unit of computer processing power: rate machines with a benchmark test, then price software based on the theoretical processing capacity of each machine. This is messy, and politically impossible to achieve.
But there is, quite possibly, one way to price software that spans architectures: Price it based on how much main memory it uses when it is running. A startup systems management vendor called 3tera, which has created a virtualization environment for Web applications called AppLogic, is the first vendor to propose the idea, and as far as I know, is the only one to do so as yet. AppLogic is, in essence, a container that wraps around a stack of software so it can be moved around a cluster of machines, and because that container can be moved around and sized as needed, 3tera's engineers and marketeers realized that the only constant they had was memory. And so the company charges 4 cents per GB of main memory per hour to use AppLogic.
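The arithmetic behind 3tera's scheme can be sketched in a few lines. The $0.04 per GB of main memory per hour rate comes from the article; the workload sizes and the 720-hour month are made-up illustrations, not 3tera's figures.

```python
# Back-of-the-envelope sketch of memory-based pricing at 3tera's published
# AppLogic rate. The rate is from the article; the workload sizes below are
# invented examples.

RATE_PER_GB_HOUR = 0.04  # dollars per GB of main memory per hour

def monthly_cost(memory_gb: float, hours: float = 24 * 30) -> float:
    """Cost of a workload with a given memory footprint over a 720-hour month."""
    return memory_gb * hours * RATE_PER_GB_HOUR

for gb in (1, 8, 32):
    print(f"{gb:3d} GB for a 720-hour month: ${monthly_cost(gb):,.2f}")
```

At that rate, an application holding 8 GB of memory around the clock would run about $230 a month, whatever hardware it happens to land on.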
This could be a breakthrough idea. Think about it. No matter what system or chip architecture, no matter what data type an application uses, no matter what programming language it is created in, no matter what operating system it runs on, the price is the same. The one thing that all programs, large and small, be they systems programs or application programs, have in common is that they consist of millions or billions of bits--flipped to either a 1 or a 0 state--that are dancing around inside of main memory. Rather than worry about how many threads, or cores, or chips a piece of software is using, just track how much main memory it is using and price it accordingly. This approach works whether a server is a single physical machine or a virtualized one, since a megabyte of memory is a megabyte of memory. It works if you are pricing a perpetual license for a piece of software or if you are charging under a usage-based, software as a service, or utility computing delivery model. Memory-based software pricing is elegant and simple.
The other benefit that this pricing model has, of course, is that it fits with creeping featurism, which the software industry often calls an upgrade. If software makers create new code with more functionality, the odds are that it is going to use more main memory, and that means they are going to be able to charge more money--provided customers use the features. This will make them happy, and, more importantly, will get software pricing back off the Moore's Law curve and back on something that is more or less linear and pointing up and to the right.
The main issue with memory-based pricing would be coming up with ways of keeping track of memory usage for particular pieces of software. Operating system and application software vendors would have to create a generic, and very likely open source, tool for monitoring memory usage over time for applications on servers and desktops. They would probably also have to tie license keys to running processes, so memory usage could be attributed directly to the multiple components that make up an instance of a program within a single machine, and then build reporting mechanisms that aggregate that usage data over time across multiple machines in a network. Vendors may have to come up with a standard way of tagging software so usage fees can be audited by people who do not speak binary or hex. This all seems very reasonable, doable, and obvious, now that 3tera has shown the way.
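The metering tool described above could work roughly like this: each machine periodically samples the resident memory of processes tagged with a license key, and a central aggregator rolls the samples up into billable GB-hours per license across the network. This is a hypothetical sketch--the license keys, sample data, and 15-minute sampling interval are all invented for illustration, and no such standard tool exists.

```python
# Hypothetical sketch of a cross-machine memory metering tool. Each sample
# records (machine, license_key, resident_memory_gb); the aggregator converts
# samples into GB-hours per license key for billing. All names and numbers
# here are illustrative assumptions.

from collections import defaultdict

SAMPLE_INTERVAL_HOURS = 0.25  # one sample every 15 minutes

# Samples as a per-machine monitor might emit them.
samples = [
    ("web01", "LIC-DB-001", 2.0),
    ("web01", "LIC-DB-001", 2.5),
    ("web02", "LIC-DB-001", 1.5),
    ("web02", "LIC-APP-007", 4.0),
]

def aggregate_gb_hours(samples):
    """Sum memory usage per license key across all machines in the network."""
    usage = defaultdict(float)
    for machine, license_key, gb in samples:
        # Each sample stands in for one interval's worth of residency.
        usage[license_key] += gb * SAMPLE_INTERVAL_HOURS
    return dict(usage)

print(aggregate_gb_hours(samples))
# e.g. {'LIC-DB-001': 1.5, 'LIC-APP-007': 1.0}
```

The design point worth noting is that the aggregator never needs to know what architecture, operating system, or virtualization layer produced the samples--memory is the common denominator, which is the whole appeal of the scheme.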
RELATED STORIES

VMware Goes for Per-Socket Pricing, But Can It Hold?
Oracle's Multicore Pricing: Right Direction, Not Far Enough
Microsoft Backs Intel, AMD on Dual-Core Licensing
Rotten to the Core: Chips, Lies, and Software Licenses