Fifty Years Of Operating IBM Systems
March 11, 2019 Bill Hansen
The world is celebrating some important 50th anniversaries this year. My interests in aerospace and music led me to recall four events from 1969. The most famous event was the first manned moon landing in July, which occurred the same week that I turned 21. Two months before that was the first flight of the Concorde supersonic transport. I mark the beginning of the “summer of love” with the Woodstock concert, and its end with the tragic concert at Altamont Speedway. (Who knew that Hell’s Angels would not make great security guards?)
For me, all of this pales in comparison to two events that shaped my life forever. In late August, my wife Sandy and I will be celebrating our 50th wedding anniversary with our four kids, their spouses, and five grandchildren. Roughly three months before the wedding, I started my first job in IT. Of course, the field would not be called Information Technology for many years. To be precise, I began my career as a summer intern in the data processing department of a large Chicago bank. While I cannot avoid talking about my personal history, my goal here is to describe the relationships I have observed between today’s most modern operating system (IBM i) and those that I encountered during my early career.
I thought that working as a computer operator would be a temporary thing to get me through college. The summer job turned into a part-time job as I finished my senior year and started grad school. It evolved into a full-time job when our first child came along. Unfortunately, this coincided with the hard part of graduate school: coming up with a math idea that no one had thought of before and documenting it in a thesis. Somehow, A Computer Calculation of the Homology of the Lambda Algebra got produced and I was on my way. Today, the only part of it that I remember well is the primal fear of dropping my 2,000-card Fortran program.
From the very beginning, my “real world” experience gave me an advantage over my peers. As a new PhD graduate in math, I was lost in the crowd. But as someone who could also teach Fortran and COBOL, I had multiple college teaching offers before my classmates got interviews.
I have discovered that most old-timers in this industry like to reminisce about their first computers. I don’t remember mine because we never met. As a student, I passed card decks containing my programs through a window and returned two hours later to pick up the green-bar paper that mockingly shouted “Compiler error” at me. As a new operator, however, things got much more personal – but not necessarily in a good way. I still sport a scar I received from a sharp corner of a 1403 printer train.
The System/360 Lobster Trap
The first computer I got to operate was an IBM System/360 Model 30. It took up the same space as a two-car garage. The generous 32K of memory held the complete operating system (IBM DOS) and let me run two partitions: a foreground partition that ran the important jobs and a background partition that got any remaining CPU cycles. With two active partitions, there were two printers, a card reader, and a card reader/punch. Four tape drives and four disk drives rounded out the configuration.
Much has been written praising the innovative design of the System/360 series. It was the first family of computers built around a common system architecture. Theoretically, organizations could start small – with something like the Model 30 – and move to larger systems in the family as their needs evolved. The shared system architecture meant that migration was easy. A COBOL program written for a Model 30, for example, could run on a Model 50. (Ironically, I never heard the word “scalability” used until I saw it in a Microsoft ad 30 years later.)
What IBM never mentioned, however, was that an organization’s journey through the System/360 was one-way. While it was always possible to move to a larger system, going the other way was very difficult, if not impossible. I described it to my non-computer-literate friends as walking into a lobster trap. (Younger readers may prefer the Hotel California analogy.) With each upgrade, new features became available that tied the organization to that level and higher. You could go in as deep as you liked, but you could never leave.
This “feature” eventually became a problem for IBM and its customers, but nobody saw it coming. Over the years, as Moore’s law drove prices down and performance up, many companies that used mainframes could now run their entire workloads on smaller systems. But moving downward in the System/360 family was no easier than moving to a Unix box.
Babel Had It Easy
By the end of my first summer as an operator, I got promoted to the “big boy” side of the computer room. (The bank took up an entire downtown block, and the operations department took up the entire 8th floor.) This meant moving from a System/360 Model 30 to a System/360 Model 50. Having been briefed on the brilliance of the “computer family” idea, I thought this would be a trivial move. But the Model 30 used the IBM Disk Operating System (DOS), while the Model 50 ran OS/MFT (Operating System/Multiprogramming with a Fixed number of Tasks). This had two major impacts on operations.
First, rather than balancing two partitions, I could create as many partitions as I liked within the then-massive 512K of memory available to me. (IBM i users can think of a partition as an old-school subsystem.) While IBM recommended that installations set and then rarely change their memory configurations under MFT, I became a superstar by moving memory on the fly. By using anywhere from two to ten partitions as my workload demanded, I was able to finish twice as many batch jobs in a day as anyone else. I think the multi-dimensional topology problems came in handy.
Second, while a COBOL programmer could make the jump to a larger system and still speak COBOL, an operator had to learn a completely new command language. In fact, I had to learn four. While the Model 30 had one card reader and one printer for each partition, the Model 50 had a single reader and a single printer. This was possible due to a new concept (spooling) that was implemented by an IBM customer (NASA) and included with the system as an optional add-on (the Houston Automatic Spooling Priority program or HASP). HASP had its own set of commands, which we used more than the IBM system commands, since they controlled jobs, initiators, queues, readers, and printers. (An initiator selects jobs for execution just like an IBM i subsystem monitor.) In addition to the two operator control languages, I had to learn JCL, the Job Control Language used to execute jobs and describe the resources they use. Submitted with the JCL cards were optional HASP control statements. Four languages with four completely different formats, but apparently that was not enough for IBM.
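To give a flavor of how these languages mixed on a single card deck, here is a sketch of a tiny batch job. All of the job, program, and dataset names are invented for illustration, and the exact operand syntax varied by installation and release:

```jcl
//* HYPOTHETICAL NIGHTLY BATCH JOB -- NAMES INVENTED FOR ILLUSTRATION
//PAYROLL  JOB  (D58),'NIGHT BATCH',CLASS=A
/*PRIORITY 8
//STEP1    EXEC PGM=PAYCALC
//MASTER   DD   DSN=PAY.MASTER,DISP=SHR
//REPORT   DD   SYSOUT=A
```

The cards beginning with // are JCL, read by the operating system; the /*PRIORITY card is a HASP control statement, recognized only by the spooler, which here would bump the job up in the input queue. An operator could then steer the same job from the console with HASP commands rather than anything in the deck itself.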
During the System/360 and early System/370 eras, the operator’s console was a modified IBM Selectric typewriter. CRTs first appeared in the programming department with TSO, the IBM Time Sharing Option. Along with the CRTs came one more IBM manual, covering the new TSO command language. There were now five ways to communicate with the operating system and its components: system commands, HASP commands, TSO commands, JCL statements, and HASP control statements.
These five languages were duplicated with OS/MFT’s sister operating system, OS/MVT (Operating System/Multiprogramming with a Variable number of Tasks), although the system commands controlling partitions and memory allocation had different formats. Eventually, DOS also got a spooling subsystem (called POWER) and a second spooling option (the Attached Support Processor, or ASP) became available for connected OS/MVT systems.
As the System/360 evolved into the System/370, virtual storage versions of DOS, OS/MFT, and OS/MVT became available as DOS/VS, OS/VS1, and OS/VS2. The latter evolved into various MVS versions and is now known as z/OS. Moreover, a new operating system was introduced (VM), which let an installation run more than one operating system (very slowly) on a single physical box. A special-purpose operating system (CMS) was also developed to let individual users work interactively in their own virtual machines on a VM system.
While even IT professionals may consider this list to be stale alphabet soup, it played a huge role in my life. After teaching college for two years, I became a developer for Deltak, one of the first companies to offer multi-media, self-study training for IBM operating systems and programming languages. One of my jobs was to give a two-hour “IBM acronym dump” to new sales reps, focusing on mapping each acronym to one of our courses.
Ya Want SNA with That?
All of the original System/360 systems were batch oriented. As technology evolved, new capabilities were offered to IBM customers as extra-price add-ons. As I mentioned, interactive programming was available at the cost of buying TSO. If you wanted to give users interactive access to their data, you bought CICS. If you wanted a database, IMS was the thing. Were you upset that absolutely nothing was secure? You bought and installed RACF.
I can go on and on, but you get the idea. Each new development led to another program product. Some came free and some had associated costs, but all had their own command sets and a steep learning curve. To a trainer, this was a lifetime of job security!
Not My Future System
By the 1970s, several external developments had changed the mainframe landscape. Due to government pressure, IBM could no longer bundle the operating system, training, and maintenance with the hardware. New vendors – many of whom started at IBM – took advantage of this. Of particular note was Gene Amdahl, whose company was able to produce high-end System/370-compatible machines at a price that IBM could not match. In reaction, IBM created the Future Systems (FS) project, designed to reconfigure the industry in IBM’s favor as thoroughly as the earlier System/360 project had. The primary goal was to develop a completely new system architecture based on IBM technology, making every existing computer system obsolete. Lowering the cost of program development and operations was another goal. The third major goal was to find a technical (and legally justifiable) reason to re-bundle as much as possible in order to slow, if not stop, the “death by a thousand cuts” caused by the flood of third-party vendors.
The output of the FS project was a technical tour de force that will sound familiar to IBM i professionals. One new idea was the extension and replacement of virtual storage with a single-level store. This would be the most efficient memory management scheme when the main memory/DASD storage model got replaced by new technologies such as bubble memory. Although that never happened, the single-level store provided many benefits and remains the best memory management technique for all future 100 percent flash memory computers. Another design change was the addition of I/O processors to manage the flow of data and reduce the workload of the main processor. (To be fair, this idea evolved from System/360 channel processors.)
The “ease of use” and “bundling” goals were achieved by integrating features into the operating system and hardware. Rather than having separate products for database management, interactive processing, and security, all of these would be available via operating system commands and APIs. (I was struck in later years by how Microsoft used the same strategy with Windows and Internet Explorer to put Netscape out of business.) In turn, the operating system passed as many of these functions as possible into calls to microcode routines or specially built circuitry. In effect, the proposed Future System would be the ultimate complex instruction set computer (CISC). Every function needed by an application program would be available directly as an operating system API.
Implementation plans were drawn for three families of systems covering everything from small business systems through the largest mainframes. Nevertheless, the project was killed in 1975. Numerous reasons have been cited, including the large number of new technologies that still needed to be invented, the impossible complexity of implementing everything over IBM’s entire product line, and the debate within IBM as to whether CISC or RISC systems were best for future products. The overriding cause that I heard at the time – and what still remains as the most likely factor – was the lack of a reasonable migration path for System/360 customers. Even IBM could not get away with telling its best customers that they had to scrap everything and start over.
The Future Is Now
While the Future Systems project was canceled, various organizations within IBM were free to take advantage of its suggestions. Foremost among those doing so was the group in Rochester, Minnesota, which would have been responsible for the “small business system” part of the project. As you may have recognized, the key ideas of the FS project first showed up in the System/38 and were later incorporated into the AS/400. As mentioned, these ideas included a single-level store and the use of microcode to present a consistent, powerful system image over all family members. Add to that an object-oriented operating system design, independent subsystems, automatic tuning, and a single command language for use by users, operators, programmers, and system administrators, and you will understand why I became an immediate fan. One story from the early 1990s illustrates why I went from fan to advocate.
Between 1982 and 1994, my company developed over 150 courses for Science Research Associates, a textbook publisher that turned into a business training vendor after it was purchased by IBM. Our contribution over the years included roughly a third of their total output and encompassed most of their mainframe operations and system administration courses. After winning the contract to develop a new course, my biggest problem was always getting a live system on which to develop programs, test commands, and capture screens. In 1991, I was developing a self-study course on logical partitioning, which was a new feature of IBM’s largest systems. The course was to include an emulator that students could use to practice reconfiguring the system complex, so getting my hands on a live system was crucial. In return for free copies of my eventual course, SRA talked a major corporation into letting me play with their $12 million system and capture screen shots (onto green-bar, continuous-form paper). The only conditions were that I had to do everything between midnight and 7 a.m. on Labor Day morning and I could never, ever tell anyone that I had been there.
A few months later, SRA asked us to take over the AS/400 project that had been started by another subcontractor. Although IBM sales reps claimed that all necessary training was built into the AS/400, customers had been asking SRA for more, claiming that the built-in training was incomplete and growing obsolete with every new release. At my insistence, SRA bought us an entry-level AS/400 system as part of the contract. As a result, we were able to develop operations, programming, and system administration courses that applied to every size of AS/400, from our small F2 to the largest available. Ironically, the only part of the system I did not like was the built-in training. The general design was based on my first computer-based courses (on VM/CMS), which we did for SRA in 1985 when it was still owned by IBM. In contrast, the AS/400 contract led us to build a 5250 simulator into our proprietary courseware so that we could simulate AS/400 green screens. (A GUI simulator came later.) I was so in love with the platform that within three years, we bought the AS/400 courses back from SRA, updated and improved them, and launched Manta Technologies to finally bypass the middlemen between us and our students. Manta celebrates its 25th anniversary this year; but that’s a story for another day.