
IBM Introduces System 360 - History




International Business Machines (IBM) introduced the System/360 computer. The machine family, built with IBM's Solid Logic Technology hybrid circuits rather than the discrete transistors of second-generation machines, was a huge success and became the mainstay computer of many businesses for many years.

In 1911, Charles F. Flint, a trust organizer, oversaw the merger of Herman Hollerith's Tabulating Machine Company with two others: the Computing Scale Company of America and the International Time Recording Company. The three companies merged into a single company called the Computing-Tabulating-Recording Company, or C-T-R. C-T-R sold many different products, including cheese slicers; however, it soon concentrated on manufacturing and marketing accounting machines such as time recorders, dial recorders, tabulators, and automatic scales.

In 1914, Thomas J. Watson Sr., a former executive at the National Cash Register Company, became the general manager of C-T-R. According to IBM's historians, "Watson implemented a series of effective business tactics. He preached a positive outlook, and his favorite slogan, "THINK," became a mantra for C-T-R's employees. Within 11 months of joining C-T-R, Watson became its president. The company focused on providing large-scale, custom-built tabulating solutions for businesses, leaving the market for small office products to others. During Watson's first four years, revenues more than doubled to $9 million. He also expanded the company's operations to Europe, South America, Asia and Australia."



The earliest computers were mainframes that lacked any form of operating system. Each user had sole use of the machine for a scheduled period of time and would arrive at the computer with program and data, often on punched paper cards and magnetic or paper tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a control panel using dials, toggle switches and panel lights.

Symbolic languages, assemblers, [1] [2] [3] and compilers were developed for programmers to translate symbolic program code into machine code that previously would have been hand-encoded. Later machines came with libraries of support code on punched cards or magnetic tape, which would be linked to the user's program to assist in operations such as input and output. This was the genesis of the modern-day operating system; however, machines still ran a single job at a time. At Cambridge University in England the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job priority.

As machines became more powerful the time to run programs diminished, and the time to hand off the equipment to the next user became large by comparison. Accounting for and paying for machine usage moved on from checking the wall clock to automatic logging by the computer. Run queues evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches of punch-cards stacked one on top of the other in the reader, until the machine itself was able to select and sequence which magnetic tape drives processed which tapes. Where program developers had originally had access to run their own jobs on the machine, they were supplanted by dedicated machine operators who looked after the machine and were less and less concerned with implementing tasks manually. When commercially available computer centers were faced with the implications of data lost through tampering or operational errors, equipment vendors were put under pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring was needed not just for CPU usage but for counting pages printed, cards punched, cards read, disk storage used and for signaling when operator intervention was required by jobs such as changing magnetic tapes and paper forms. Security features were added to operating systems to record audit trails of which programs were accessing which files and to prevent access to a production payroll file by an engineering program, for example.

All these features were building up towards the repertoire of a fully capable operating system. Eventually the runtime libraries became an amalgamated program that was started before the first customer job and could read in the customer job, control its execution, record its usage, reassign hardware resources after the job ended, and immediately go on to process the next job. These resident background programs, capable of managing multi step processes, were often called monitors or monitor-programs before the term "operating system" established itself.
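To make that loop concrete, here is a minimal sketch of such a resident monitor in Python. It is a loose, hypothetical illustration (the job records, field names, and "abended" status are invented for the example), not a reconstruction of any particular vendor's monitor:

```python
import time

def run_monitor(job_queue):
    """Run each queued job in turn and record its usage, monitor-style."""
    accounting_log = []
    for job in job_queue:                         # media stacked in the reader
        start = time.time()
        try:
            job["program"](*job.get("args", ()))  # control the job's execution
            status = "completed"
        except Exception:
            status = "abended"                    # a crash must not stop the monitor
        accounting_log.append({"job": job["name"],
                               "seconds": round(time.time() - start, 6),
                               "status": status})
        # hardware resources would be reassigned here before the next job
    return accounting_log

# Two toy "jobs"; the second one crashes, and the monitor carries on.
jobs = [
    {"name": "PAYROLL", "program": print, "args": ("running payroll",)},
    {"name": "BADJOB",  "program": lambda: 1 / 0},
]
for entry in run_monitor(jobs):
    print(entry)
```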

An underlying program offering basic hardware-management, software-scheduling and resource-monitoring may seem a remote ancestor to the user-oriented OSes of the personal computing era. But there has been a shift in the meaning of OS. Just as early automobiles lacked speedometers, radios, and air-conditioners, which later became standard, more and more optional software features became standard features in every OS package, although some applications such as database management systems and spreadsheets remain optional and separately priced. This has led to the perception of an OS as a complete user system with an integrated graphical user interface, utilities, some applications such as text editors and file managers, and configuration tools.

The true descendant of the early operating systems is what is now called the "kernel". In technical and development circles the old restricted sense of an OS persists because of the continued active development of embedded operating systems for all kinds of devices with a data-processing component, from hand-held gadgets up to industrial robots and real-time control-systems, which do not run user applications at the front-end. An embedded OS in a device today is not so far removed as one might think from its ancestor of the 1950s.

The broader categories of systems and application software are discussed in the computer software article.

The first operating system used for real work was GM-NAA I/O, produced in 1956 by General Motors' Research division [4] for its IBM 704. [5] Most other early operating systems for IBM mainframes were also produced by customers. [6]

Early operating systems were very diverse, with each vendor or customer producing one or more operating systems specific to their particular mainframe computer. Every operating system, even from the same vendor, could have radically different models of commands, operating procedures, and such facilities as debugging aids. Typically, each time the manufacturer brought out a new machine, there would be a new operating system, and most applications would have to be manually adjusted, recompiled, and retested.

Systems on IBM hardware

This state of affairs continued until the 1960s, when IBM, already a leading hardware vendor, stopped work on existing systems and put all its effort into developing the System/360 series of machines, all of which used the same instruction and input/output architecture. IBM intended to develop a single operating system for the new hardware, OS/360. The problems encountered in the development of OS/360 are legendary, and are described by Fred Brooks in The Mythical Man-Month, a book that has become a classic of software engineering. Because of performance differences across the hardware range and delays with software development, a whole family of operating systems was introduced instead of a single OS/360. [7] [8]

IBM wound up releasing a series of stop-gaps followed by two longer-lived operating systems:

    • OS/360 for mid-range and large systems. This was available in three system generation options:
      • PCP for early users and for those without the resources for multiprogramming.
      • MFT for mid-range systems, replaced by MFT-II in OS/360 Release 15/16. This had one successor, OS/VS1, which was discontinued in the 1980s.
      • MVT for large systems. This was similar in most ways to PCP and MFT (most programs could be ported among the three without being re-compiled), but had more sophisticated memory management and a time-sharing facility, TSO. MVT had several successors including the current z/OS.
    • DOS/360 for small System/360 models. It was significantly different from OS/360 and had several successors, including the current z/VSE.

    IBM maintained full compatibility with the past, so that programs developed in the sixties can still run under z/VSE (if developed for DOS/360) or z/OS (if developed for MFT or MVT) with no change.

    IBM also developed TSS/360, a time-sharing system for the System/360 Model 67. Overcompensating for the perceived importance of developing a time-sharing system, it set hundreds of developers to work on the project. Early releases of TSS were slow and unreliable; by the time TSS had acceptable performance and reliability, IBM wanted its TSS users to migrate to OS/360 and OS/VS2. Although IBM offered a TSS/370 PRPQ, it was dropped after three releases. [9]

    Several operating systems for the IBM S/360 and S/370 architectures were developed by third parties, including the Michigan Terminal System (MTS) and MUSIC/SP.

    Other mainframe operating systems

    Control Data Corporation developed the SCOPE operating systems [NB 1] in the 1960s for batch processing, and later developed the MACE operating system for time sharing, which was the basis for the later Kronos. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s; they supported simultaneous batch and time-sharing use. Like many commercial time-sharing systems, its interface was an extension of the DTSS time-sharing system, one of the pioneering efforts in timesharing and programming languages.

    In the late 1970s, Control Data and the University of Illinois developed the PLATO system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time; the shared memory model of PLATO's TUTOR programming language allowed applications such as real-time chat and multi-user graphical games.

    For the UNIVAC 1107, UNIVAC, the first commercial computer manufacturer, produced the EXEC I operating system, and Computer Sciences Corporation developed the EXEC II operating system and delivered it to UNIVAC. EXEC II was ported to the UNIVAC 1108. Later, UNIVAC developed the EXEC 8 operating system for the 1108; it was the basis for operating systems for later members of the family. Like all early mainframe systems, EXEC I and EXEC II were batch-oriented systems that managed magnetic drums, disks, card readers and line printers; EXEC 8 supported both batch processing and on-line transaction processing. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.

    Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no software, not even at the lowest level of the operating system, being written directly in machine language or assembly language; the MCP was the first OS to be written entirely in a high-level language, ESPOL, a dialect of ALGOL 60, although ESPOL had specialized statements for each "syllable" [NB 2] in the B5000 instruction set. MCP also introduced many other ground-breaking innovations, such as being one of [NB 3] the first commercial implementations of virtual memory. The rewrite of MCP for the B6500 is still in use today in the Unisys ClearPath/MCP line of computers.

    GE introduced the GE-600 series with the General Electric Comprehensive Operating Supervisor (GECOS) operating system in 1962. After Honeywell acquired GE's computer business, it was renamed to General Comprehensive Operating System (GCOS). Honeywell expanded the use of the GCOS name to cover all its operating systems in the 1970s, though many of its computers had nothing in common with the earlier GE 600 series and their operating systems were not derived from the original GECOS.

    Project MAC at MIT, working with GE and Bell Labs, developed Multics, which introduced the concept of ringed security privilege levels.

    Digital Equipment Corporation developed TOPS-10 for its PDP-10 line of 36-bit computers in 1967. Before the widespread use of Unix, TOPS-10 was a particularly popular system in universities and in the early ARPANET community. Bolt Beranek and Newman developed TENEX for a modified PDP-10 that supported demand paging; this was another popular system in the research and ARPANET communities, and was later developed by DEC into TOPS-20.

    Scientific Data Systems/Xerox Data Systems developed several operating systems for the Sigma series of computers, such as the Basic Control Monitor (BCM), Batch Processing Monitor (BPM), and Basic Time-Sharing Monitor (BTM). Later, BPM and BTM were succeeded by the Universal Time-Sharing System (UTS); it was designed to provide multiprogramming services for online (interactive) user programs in addition to batch-mode production jobs. It was succeeded by the CP-V operating system, which combined UTS with the heavily batch-oriented Xerox Operating System.

    Digital Equipment Corporation created several operating systems for its 16-bit PDP-11 machines, including the simple RT-11 system, the time-sharing RSTS operating systems, and the RSX-11 family of real-time operating systems, as well as the VMS system for the 32-bit VAX machines.

    Several competitors of Digital Equipment Corporation, such as Data General, Hewlett-Packard, and Computer Automation, created their own operating systems. One such, "MAX III", was developed for Modular Computer Systems' Modcomp II and Modcomp III computers; it was notable for targeting the industrial control market. Its Fortran libraries included one that enabled access to measurement and control devices.

    IBM's key innovation in operating systems in this class (which it calls "mid-range") was CPF for the System/38. This had capability-based addressing, used a machine interface architecture to isolate the application software and most of the operating system from hardware dependencies (including even such details as address size and register size), and included an integrated RDBMS. The succeeding OS/400 for the AS/400 has no files, only objects of different types, and these objects persist in a very large, flat virtual memory called a single-level store. i5/OS and later IBM i for the iSeries continue this line of operating system.

    The Unix operating system was developed at AT&T Bell Laboratories in the late 1960s, originally for the PDP-7, and later for the PDP-11. Because it was essentially free in early editions, easily obtainable, and easily modified, it achieved wide acceptance. It also became a requirement within the Bell System operating companies. Since it was written in the C language, when that language was ported to a new machine architecture, Unix could be ported along with it. This portability permitted it to become the choice for a second generation of minicomputers and the first generation of workstations. By widespread use it exemplified the idea of an operating system that was conceptually the same across various hardware platforms, and it later became one of the roots of the free software and open-source operating system projects, including GNU, Linux, and the Berkeley Software Distribution. Apple's macOS is also based on Unix via NeXTSTEP [10] and FreeBSD. [11]

    The Pick operating system was another operating system available on a wide variety of hardware brands. Commercially released in 1973, its core was a BASIC-like language called Data/BASIC and a SQL-style database manipulation language called ENGLISH. Licensed to a large variety of manufacturers and vendors, by the early 1980s observers saw the Pick operating system as a strong competitor to Unix. [12]

    Beginning in the mid-1970s, a new class of small computers came onto the marketplace. Featuring 8-bit processors, typically the MOS Technology 6502, Intel 8080, Motorola 6800 or the Zilog Z80, along with rudimentary input and output interfaces and as much RAM as practical, these systems started out as kit-based hobbyist computers but soon evolved into an essential business tool.

    Home computers

    While many eight-bit home computers of the 1980s, such as the BBC Micro, Commodore 64, Apple II series, the Atari 8-bit family, the Amstrad CPC, ZX Spectrum series and others could load a third-party disk-loading operating system, such as CP/M or GEOS, they were generally used without one. Their built-in operating systems were designed in an era when floppy disk drives were very expensive and not expected to be used by most users, so the standard storage device on most was a tape drive using standard compact cassettes. Most, if not all, of these computers shipped with a built-in BASIC interpreter on ROM, which also served as a crude command line interface, allowing the user to load a separate disk operating system to perform file management commands and load and save to disk. The most popular home computer, the Commodore 64, was a notable exception, as its DOS was on ROM in the disk drive hardware, and the drive was addressed identically to printers, modems, and other external devices.

    Furthermore, those systems shipped with minimal amounts of computer memory—4-8 kilobytes was standard on early home computers—as well as 8-bit processors without specialized support circuitry like an MMU or even a dedicated real-time clock. On this hardware, a complex operating system's overhead supporting multiple tasks and users would likely compromise the performance of the machine without really being needed. As those systems were largely sold complete, with a fixed hardware configuration, there was also no need for an operating system to provide drivers for a wide range of hardware to abstract away differences.

    Video games and even the available spreadsheet, database and word processors for home computers were mostly self-contained programs that took over the machine completely. Although integrated software existed for these computers, they usually lacked features compared to their standalone equivalents, largely due to memory limitations. Data exchange was mostly performed through standard formats like ASCII text or CSV, or through specialized file conversion programs.

    Operating systems in video games and consoles

    Since virtually all video game consoles and arcade cabinets designed and built after 1980 were true digital machines based on microprocessors (unlike the earlier Pong clones and derivatives), some of them carried a minimal form of BIOS or built-in game, such as the ColecoVision, the Sega Master System and the SNK Neo Geo.

    Modern-day game consoles and videogames, starting with the PC-Engine, all have a minimal BIOS that also provides some interactive utilities such as memory card management, audio or video CD playback and copy protection, and some carry libraries for developers to use. Few of these cases, however, would qualify as a true operating system.

    The most notable exceptions are probably the Dreamcast game console, which includes a minimal BIOS like the PlayStation but can load the Windows CE operating system from the game disk, allowing easy porting of games from the PC world, and the Xbox game console, which is little more than a disguised Intel-based PC running a hidden, modified version of Microsoft Windows in the background. Furthermore, there are Linux versions that will run on a Dreamcast and later game consoles as well.

    Long before that, Sony had released a kind of development kit called the Net Yaroze for its first PlayStation platform, which provided a series of programming and developing tools to be used with a normal PC and a specially modified "Black PlayStation" that could be interfaced with a PC and download programs from it. In general, these operations require a functional OS on both platforms involved.

    In general, it can be said that videogame consoles and arcade coin-operated machines used at most a built-in BIOS during the 1970s, 1980s and most of the 1990s, while from the PlayStation era and beyond they started getting more and more sophisticated, to the point of requiring a generic or custom-built OS for aiding in development and expandability.

    Personal computer era

    The development of microprocessors made inexpensive computing available for the small business and hobbyist, which in turn led to the widespread use of interchangeable hardware components using a common interconnection (such as the S-100, SS-50, Apple II, ISA, and PCI buses), and an increasing need for "standard" operating systems to control them. The most important of the early OSes on these machines was Digital Research's CP/M-80 for the 8080/8085/Z-80 CPUs. It was based on several Digital Equipment Corporation operating systems, mostly for the PDP-11 architecture. Microsoft's first operating system, MDOS/MIDAS, was modeled on many of the PDP-11's features, but was designed for microprocessor-based systems. MS-DOS, or PC DOS when supplied by IBM, was designed to be similar to CP/M-80. [13] Each of these machines had a small boot program in ROM which loaded the OS itself from disk. The BIOS on the IBM-PC class machines was an extension of this idea and has accreted more features and functions in the 20 years since the first IBM-PC was introduced in 1981.

    The decreasing cost of display equipment and processors made it practical to provide graphical user interfaces for many operating systems, such as the generic X Window System that is provided with many Unix systems, or other graphical systems such as Apple's classic Mac OS and macOS, the Radio Shack Color Computer's OS-9 Level II/MultiVue, Commodore's AmigaOS, Atari TOS, IBM's OS/2, and Microsoft Windows. The original GUI was developed on the Xerox Alto computer system at Xerox Palo Alto Research Center in the early 1970s and commercialized by many vendors throughout the 1980s and 1990s.

    Since the late 1990s, there have been three operating systems in widespread use on personal computers: Apple Inc.'s macOS, the open source Linux, and Microsoft Windows. Since 2005 and the Mac transition to Intel processors, all have been developed mainly on the x86 platform, although macOS retained PowerPC support until 2009 and Linux remains ported to a multitude of architectures including ones such as 68k, PA-RISC, and DEC Alpha, which have been long superseded and out of production, and SPARC and MIPS, which are used in servers or embedded systems but no longer for desktop computers. Other operating systems such as AmigaOS and OS/2 remain in use, if at all, mainly by retrocomputing enthusiasts or for specialized embedded applications.

    Mobile operating systems

    In the early 1990s, Psion released the Psion Series 3 PDA, a small mobile computing device. It supported user-written applications running on an operating system called EPOC. Later versions of EPOC became Symbian, an operating system used for mobile phones from Nokia, Ericsson, Sony Ericsson, Motorola, Samsung and phones developed for NTT Docomo by Sharp, Fujitsu & Mitsubishi. Symbian was the world's most widely used smartphone operating system until 2010 with a peak market share of 74% in 2006. In 1996, Palm Computing released the Pilot 1000 and Pilot 5000, running Palm OS. Microsoft Windows CE was the base for Pocket PC 2000, renamed Windows Mobile in 2003, which at its peak in 2007 was the most common operating system for smartphones in the U.S.

    In 2007, Apple introduced the iPhone and its operating system, known simply as iPhone OS (until the release of iOS 4), which, like Mac OS X, is based on the Unix-like Darwin. In addition to these underpinnings, it also introduced a powerful and innovative graphical user interface that was later also used on the iPad tablet computer. A year later, Android, with its own graphical user interface, was introduced, based on a modified Linux kernel, and Microsoft re-entered the mobile operating system market with Windows Phone in 2010, which was replaced by Windows 10 Mobile in 2015.

    In addition to these, a wide range of other mobile operating systems are contending in this area.

    Operating systems originally ran directly on the hardware itself and provided services to applications, but with virtualization, the operating system itself runs under the control of a hypervisor, instead of being in direct control of the hardware.

    On mainframes IBM introduced the notion of a virtual machine in 1968 with CP/CMS on the IBM System/360 Model 67, and extended this later in 1972 with Virtual Machine Facility/370 (VM/370) on System/370.

    On x86-based personal computers, VMware popularized this technology with their 1999 product, VMware Workstation, [14] and their 2001 VMware GSX Server and VMware ESX Server products. [15] Later, a wide range of products from others, including Xen, KVM and Hyper-V meant that by 2010 it was reported that more than 80 percent of enterprises had a virtualization program or project in place, and that 25 percent of all server workloads would be in a virtual machine. [16]

    Over time, the line between virtual machines, monitors, and operating systems was blurred:

    • Hypervisors grew more complex, gaining their own application programming interface, [17] memory management or file system. [18]
    • Virtualization became a key feature of operating systems, as exemplified by KVM and LXC in Linux, Hyper-V in Windows Server 2008 or HP Integrity Virtual Machines in HP-UX.
    • In some systems, such as POWER5 and POWER6-based servers from IBM, the hypervisor is no longer optional. [19]
    • Radically simplified operating systems, such as CoreOS have been designed to run only on virtual systems. [20]
    • Applications have been re-designed to run directly on a virtual machine monitor. [21]

    In many ways, virtual machine software today plays the role formerly held by the operating system, including managing the hardware resources (processor, memory, I/O devices), applying scheduling policies, or allowing system administrators to manage the system.


    IBM Introduces 1400 series

    The 1401 mainframe, the first in the series, replaces earlier vacuum tube technology with smaller, more reliable transistors. Demand called for more than 12,000 of the 1401 computers, and the machine's success made a strong case for using general-purpose computers rather than specialized systems. By the mid-1960s, nearly half of all computers in the world were IBM 1401s.



    IBM 2321 Data Cell Drive

    Seven years in the making, IBM’s 2321 Data Cell Drive stored up to 400 MB. The Data Cell Drive was announced with the System/360 mainframe computer. Wide magnetic strips were plucked from bins and wrapped around a rotating cylinder for reading and writing. Reliability problems plagued the initial models, but after improvements were made it became relatively reliable and sold until 1976.

    IBM Pavilion, 1964 World's Fair



    IBM 350

    The IBM 350 disk storage unit, the first disk drive, was announced by IBM as a component of the IBM 305 RAMAC computer system on September 14, 1956. [8] [9] [10] [11] Simultaneously a very similar product, the IBM 355, was announced for the IBM 650 RAMAC computer system. RAMAC stood for "Random Access Method of Accounting and Control". The first engineering prototype 350 disk storage shipped to Zellerbach Paper Company, San Francisco, in June 1956, [12] with production shipment beginning in November 1957 with the shipment of a unit to United Airlines in Denver, Colorado. [13]

    Its design was motivated by the need for real-time accounting in business. [14] The 350 stores 5 million 6-bit characters (3.75 MB). [15] It has fifty-two 24-inch (610 mm) diameter disks, of which 100 recording surfaces are used, omitting the top surface of the top disk and the bottom surface of the bottom disk. Each surface has 100 tracks. The disks spin at 1200 rpm. The data transfer rate is 8,800 characters per second. An access mechanism moves a pair of heads up and down to select a disk pair (one down surface and one up surface) and in and out to select a recording track of a surface pair. Several improved models were added in the 1950s. The IBM RAMAC 305 system with 350 disk storage leased for $3,200 per month. The 350 was officially withdrawn in 1969.
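    These published figures are internally consistent, as a quick back-of-the-envelope check shows. The following is a hypothetical Python sketch, assuming the decimal megabytes conventional for disk capacities:

    ```python
    # Checking the IBM 350 figures quoted above.
    characters = 5_000_000                     # 6-bit characters
    bits = characters * 6
    print(bits / 8 / 1_000_000)                # -> 3.75 (MB), as stated

    surfaces, tracks_per_surface = 100, 100
    print(characters // (surfaces * tracks_per_surface))  # -> 500 characters per track

    # At 8,800 characters per second, reading the full unit sequentially
    # would take about 9.5 minutes:
    print(characters / 8_800 / 60)
    ```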

    U.S. Patent 3,503,060 from the RAMAC program is generally considered to be the fundamental patent for disk drives. [16] This first-ever disk drive was initially cancelled by the IBM Board of Directors because of its threat to the IBM punch card business but the IBM San Jose laboratory continued development until the project was approved by IBM's president. [17]

    The 350's cabinet is 60 inches (152 cm) long, 68 inches (172 cm) high and 29 inches (74 cm) wide.

    The RAMAC unit weighed about one ton, had to be moved around with forklifts, and was frequently transported via large cargo airplanes. [18] According to Currie Munce, research vice president for Hitachi Global Storage Technologies (which acquired IBM's storage business), the storage capacity of the drive could have been increased beyond five million characters, but IBM's marketing department at that time was against a larger capacity drive, because it didn't know how to sell a product with more storage. Nonetheless, double capacity versions of the 350 were announced [8] in January 1959 and shipped later the same year.

    In 1984, the RAMAC 350 Disk File was designated an International Historic Landmark by The American Society of Mechanical Engineers. [19] In 2002, the Magnetic Disk Heritage Center began restoration of an IBM 350 RAMAC in collaboration with Santa Clara University. [20] In 2005, the RAMAC restoration project relocated to the Computer History Museum in Mountain View, California and is now demonstrated to the public in the museum's Revolution exhibition. [21]

    IBM 353

    The IBM 353, used on the IBM 7030, was similar to the IBM 1301, but with a faster transfer rate. It has a capacity of 2,097,152 (2^21) 64-bit words, or 134,217,728 (2^27) bits, and transferred 125,000 words per second. [22] A prototype unit shipped in late 1960 was the first disk drive to use one head per surface, flying on a layer of compressed air as in the older head design of the IBM 350 disk storage (RAMAC). Production 353s used self-flying heads essentially the same as those of the 1301.
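    A similar hypothetical check in Python confirms that the 353's word, bit, and transfer-rate figures agree with one another:

    ```python
    # The IBM 353 capacity and transfer figures quoted above are consistent.
    words = 2**21                  # 2,097,152 words of 64 bits each
    print(words * 64 == 2**27)     # -> True: 134,217,728 bits
    print(words / 125_000)         # -> ~16.8 s to transfer the full capacity
    ```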

    IBM 355

    The IBM 355 was announced on September 14, 1956 as an addition to the popular IBM 650. [23] It used the mechanism of the IBM 350 with up to three access arms [b] and stored 6 million decimal digits and 600,000 signs. [23] It transferred a full track to and from the IBM 653 magnetic core memory, an IBM 650 option that stored just sixty signed 10-digit words, enough for a single track of disk or a tape record.

    IBM 1405

    The IBM 1405 Disk Storage Unit was announced in 1961 and was designed for use with the IBM 1400 series of medium-scale business computers. [24] The 1405 Model 1 has a storage capacity of 10 million alphanumeric characters (60,000,000 bits) on 25 disks. The Model 2 has a storage capacity of 20 million alphanumeric characters (120,000,000 bits) on 50 disks. In both models the disks are stacked vertically on a shaft rotating at 1200 rpm.

    Each side of each disk has 200 tracks divided into 5 sectors. Sectors 0–4 are on the top surface and 5–9 are on the bottom surface. Each sector holds either 178 or 200 characters. One to three fork-shaped access arms each contain two read/write heads, one for the top of the disk and the other for the bottom of the same disk. The access arms are mounted on a carriage alongside the disk array. During a seek operation an access arm moves, under electronic control, vertically to select a disk (0–49) and then horizontally to select a track (0–199). Ten sectors are available at each track. It takes about 10 ms to read or write a sector.

    The access time ranges from 100 ms up to a maximum of 800 ms for the Model 2 and 700 ms for the Model 1. The 1405 Model 2 disk storage unit has 100,000 sectors, each containing either 200 characters in move mode or 178 characters in load mode, which adds a word mark bit to each character. The Model 1 contains 50,000 sectors. [25]
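    The quoted sector counts and capacities follow directly from this geometry; a hypothetical Python check:

    ```python
    # Reproducing the IBM 1405 Model 2 figures from its geometry.
    disks = 50
    tracks_per_side = 200
    sectors_per_track_side = 5            # sectors 0-4 on top, 5-9 on bottom
    sectors = disks * 2 * tracks_per_side * sectors_per_track_side
    print(sectors)                        # -> 100000, as stated for Model 2
    print(sectors * 200)                  # -> 20,000,000 characters (move mode)
    print(sectors * 178)                  # -> 17,800,000 characters (load mode)
    ```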

    IBM 7300

    The IBM 7300 Disk Storage Unit was designed for use with the IBM 7070. IBM announced a Model 2 in 1959, but when IBM announced the 1301 on June 2, 1961, 7070 and 7074 customers found it to be more attractive than the 7300. The 7300 uses the same technology as the IBM 350, IBM 355 and IBM 1405.

    IBM 1301

    The IBM 1301 Disk Storage Unit was announced on June 2, 1961 [26] [27] with two models. It was designed for use with the IBM 7000 series mainframe computers and the IBM 1410. The 1301 stores 28 million characters (168,000,000 bits) per module (25 million characters with the 1410). Each module has 25 large disks and 40 [c] user recording surfaces, with 250 tracks per surface. The 1301 Model 1 has one module, the Model 2 has two modules, stacked vertically. The disks spin at 1800 rpm. Data is transferred at 90,000 characters per second.

    A major advance over the IBM 350 and IBM 1405 is the use of a separate arm and head for each recording surface, with all the arms moving in and out together like a big comb. This eliminates the time needed for the arm to pull the head out of one disk and move up or down to a new disk. Seeking the desired track is also faster since, with the new design, the head will usually be somewhere in the middle of the disk, not starting on the outer edge. Maximum access time is reduced to 180 milliseconds.

    The 1301 is the first disk drive to use heads that are aerodynamically designed to fly over the surface of the disk on a thin layer of air. [3] This allows them to be much closer to the recording surface, which greatly improves performance.

    The 1301 connects to the computer via the IBM 7631 File Control. Different models of the 7631 allow the 1301 to be used with a 1410 or 7000 series computer, or shared between two such computers. [28]

    The IBM 1301 Model 1 leased for $2,100 per month or could be purchased for $115,500. Prices for the Model 2 were $3,500 per month or $185,000 to purchase. The IBM 7631 controller cost an additional $1,185 per month or $56,000 to purchase. All models were withdrawn in 1970. [26]

    IBM 1302

    The IBM 1302 Disk Storage Unit was introduced in September 1963. [29] Improved recording quadrupled its capacity over that of the 1301, to 117 million 6-bit characters per module. Average access time is 165 ms and data can be transferred at 180 K characters/second, more than double the speed of the 1301. There are two access mechanisms per module, one for the inner 250 cylinders and the other for the outer 250 cylinders. [30] As with the 1301, there is a Model 2 which doubles the capacity by stacking two modules. The IBM 1302 Model 1 leased for $5,600 per month or could be purchased for $252,000. Prices for the Model 2 were $7,900 per month or $355,500 to purchase. The IBM 7631 controller cost an additional $1,185 per month or $56,000 to purchase. The 1302 was withdrawn in February 1965.
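    Set against the 1301 figures given earlier, the 1302's improvements can be checked arithmetically (a hypothetical Python comparison):

    ```python
    # Comparing the 1301 and 1302 figures quoted above.
    chars_1301, rate_1301 = 28_000_000, 90_000      # characters, chars/second
    chars_1302, rate_1302 = 117_000_000, 180_000
    print(chars_1302 / chars_1301)    # -> ~4.2x capacity ("quadrupled")
    print(rate_1302 / rate_1301)      # -> 2.0x transfer rate
    # Sequential time to read one full 1302 module, ignoring seeks:
    print(chars_1302 / rate_1302)     # -> 650 seconds
    ```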


    The IBM System/360 Model 91

    Photo: Steve Bellovin. Columbia 360/91 console and 2250 Display Unit.
    Photo: Steve Bellovin. CU 360/91 Hazeltine 2000 ASP control terminal, 1972 (ASP = Attached Support Processor).

    From the IBM Photo Archive: "This wide-angle view of the multiple control consoles of the IBM System/360 Model 91 shows the nerve center of the fastest, most powerful computer in operation in January 1968. It was located at NASA's Space Flight Center in Greenbelt, Md."
    The IBM System/360 Model 91 was introduced in 1966 as the fastest, most powerful computer then in use. It was specifically designed to handle high-speed data processing for scientific applications such as space exploration, theoretical astronomy, subatomic physics and global weather forecasting. IBM estimated that each day in use, the Model 91 would solve more than 1,000 problems involving about 200 billion calculations.

    The system's immense computing power resulted from a combination of several key factors, including advanced circuits that switched in billionths of a second, high-density circuit packaging techniques and a high degree of "concurrency," or parallel operations.

    To users of the time, the Model 91 was functionally the same as other large-scale System/360s. It ran under Operating System/360 -- a powerful programming package of approximately 1.5 million instructions that enabled the system to operate with virtually no manual intervention. However, the internal organization of the Model 91 was the most advanced of any System/360.

    Within the central processing unit (CPU), there were five highly autonomous execution units which allowed the machine to overlap operations and process many instructions simultaneously. The five units were processor storage, storage bus control, instruction processor, fixed-point processor and floating-point processor. Not only could these units operate concurrently, they could also perform several functions at the same time.

    Because of this concurrency, the effective time to execute instructions and process information was reduced significantly.

    The Model 91 CPU cycle time (the time it takes to perform a basic processing instruction) was 60 nanoseconds. Its memory cycle time (the time it takes to fetch and store eight bytes of data in parallel) was 780 nanoseconds. A Model 91 installed at the U.S. National Aeronautics & Space Administration (NASA) operated with 2,097,152 bytes of main memory interleaved 16 ways. Model 91s could accommodate up to 6,291,456 bytes of main storage.

    With a maximum rate of 16.6-million additions a second, NASA's machine had up to 50 times the arithmetic capability of the IBM 7090.
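    These performance figures fit together arithmetically. The sketch below is hypothetical Python; the peak-bandwidth line assumes fully overlapped 16-way interleaving, which the text above does not state:

    ```python
    # Rough arithmetic on the Model 91 timings quoted above.
    cycle_ns = 60
    print(1e9 / cycle_ns)             # -> ~16.7 million cycles per second,
                                      #    matching 16.6 million additions/s
    memory_cycle_ns = 780             # per 8-byte parallel fetch
    # Assumption: 16-way interleaving lets 16 fetches proceed in parallel.
    print(16 * 8 / (memory_cycle_ns * 1e-9) / 1e6)   # -> ~164 MB/s peak
    ```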

    In addition to main memory, NASA's Model 91 could store over 300 million characters in two IBM 2301 drum and IBM 2314 direct access storage units. It also had 12 IBM 2402 magnetic tape units for data analysis applications, such as the processing of meteorological information relayed from satellites. Three IBM 1403 printers gave the system a 3,300-line-per-minute printing capability. Punched card input/output was provided through an IBM 2540 card read punch.

    The console from a Model 91 has been preserved in the IBM Collection of Historical Computers, and is exhibited today in the IBM Technology Gallery in the company's corporate headquarters in Armonk, N.Y.

    The console of Columbia University's 360/91 is in storage at the Computer History Museum, 1401 N. Shoreline Blvd, Mountain View, California.

    Here's an excellent photo of the 360/91 console and 2250 display, just like ours at Columbia, but this is not Columbia (I believe it is NASA because I found a thumbnail of the same picture HERE). See how the console dwarfs the puny humans.

    Here's a May 2003 shot of the last remnants of our 360/91: the console nameplate (visible in the Luis Ortega photo above), the console power switch, and assorted lamps, shown just before they were sent to the new Computer History Museum to be reunited with the rest of our 360/91 console.

    Semifinally, here's a shot of Columbia's 360/91 control panel in "deep storage" in the Computer History Museum's Moffett Field facility, before relocating to Mountain View in June 2003:

    And finally, look what I found on Mayday 2015 at Paul Allen's Living Computer Museum (formerly PDP Planet):

    Amazing. Look: lights! It was referenced from this page (don't count on the link lasting for any amount of time).


    IBM System 360 Changes the Industry Forever

    April 7, 1964

    IBM launches the System 360 mainframe architecture, comprising six compatible models complete with 40 peripherals. The line, dubbed the "360" because it addressed all types and sizes of customer, cost IBM over five billion dollars to develop, and it is widely considered one of the riskiest business gambles of all time.

    Up until this time, computer systems, even from the same manufacturer, were generally incompatible with each other. Software and peripherals from old systems would not work with new systems. This stifled acceptance and deployments of new systems as business customers were hesitant to lose their investments in their current systems. By developing a mutually compatible series of mainframes, customers were assured that their investments would not be lost if they purchased further System 360 models.

    IBM's gamble paid off handsomely: in just the first three months after its release, IBM received US$1.2 billion in orders. Within five years, over thirty-three thousand units were sold, popularizing the concept of a computer "upgrade" around the world. The 360 family was the most successful IBM system of all time, generating over US$100 billion in revenue through the mid-1980s. It became the basis for all subsequent IBM mainframe architectures, which held a 65% market share in the 1990s.

    The 360 architecture also introduced a number of industry standards to the marketplace, such as the worldwide standard of the 8-bit byte. Its enormous popularity catapulted the business world into the technology age and transformed the computer industry. Not bad for a bunch of suits.




    IBM’s Century of Innovation

    A merger of three 19th-century companies gives rise to the Computing-Tabulating-Recording Company in 1911. The company’s name is changed to International Business Machines Corporation in 1924, and under the leadership of Thomas J. Watson Sr. becomes a leader in innovation and technology. Early machines, like the dial recorder above, set the stage for further mechanization of data handling.

    IBM begins a corporate design program and hires Eliot Noyes, a distinguished architect and industrial designer, to guide the effort. Noyes, in turn, taps Paul Rand, Charles Eames and Eero Saarinen to help design everything from corporate buildings to the eight-bar corporate logo to the IBM Selectric typewriter with its golf-ball shaped head.

    IBM introduces the IBM System/360 compatible family of computers. The company calls it the most important product announcement in its history.

    A new era of computing begins, and IBM’s entry into the personal computer market in 1981 is an endorsement of the new technology. IBM makes the PC a mainstream product, used in businesses, schools and homes. Its choice of Microsoft and Intel as key suppliers propels upstarts into corporate giants.

    IBM shows computing’s potential with Deep Blue, a computer programmed to play chess like a grandmaster. In 1997, Deep Blue defeats the world chess champion Garry Kasparov, a historic win for machine intelligence.

    IBM makes Watson, the artificial-intelligence technology that famously beat humans in the quiz show “Jeopardy!” in 2011, into a stand-alone business. The company hopes Watson will be an engine of growth. It is investing heavily in data assets, from medical images to weather data, to help make Watson smarter and useful across many industries.



