Consolidation of Cost and Functionality


Increasing demand for processing power and resources, including memory capacity, storage volume, and processor speed, has profoundly changed what defines a computer. From a precious few extremely expensive devices to a commodity present in nearly every home and office, the computer is well on its way to becoming just another building block for electronic devices, as well as a dedicated machine designed for desktop, workstation, or server roles. All these machines are conveniently serviced by a supporting infrastructure that enables small, inexpensive servers to coexist smoothly with ultra-expensive (but far more powerful) mainframes and supercomputers.

In 1970, Centronics introduced the first dot-matrix printer to the marketplace. A year later came the first 8-inch floppy drive, the first laser printer (developed at Xerox PARC), and a new file exchange service for networks: the File Transfer Protocol (FTP). In 1972, interactive laser discs made their debut; the compact disc (CD) would grow out of the same optical technology a decade later. Although Intel was first to market a microprocessor (the 4004) in 1971, it wasn't until 1974 that the chip-manufacturing giant produced the 8080, which defined a de facto standard that many later microprocessor designs would follow.

In the wake of all these advancements, organizations began to see an upswing in the consolidation of computing resources as the cost of ownership reached more tolerable levels. Small and medium-sized businesses found these emerging technologies more affordable, and their entry into the market further stimulated the growth and development of better, faster, and more capable computing technologies of all kinds.

Introduction of High-Capacity RAM

In 1970, Intel produced the first high-capacity (1Kb) dynamic random access memory (DRAM) chip, the 1103. Hewlett-Packard introduced the HP 9800 series shortly thereafter, and these became the first commercially available computers to use this memory technology. Two short years later, Intel's PMOS DRAM design became the best-selling semiconductor chip on the market, displacing magnetic core memory (the de facto standard of that period). Also in 1970, Fairchild introduced the first 256-bit static random access memory (SRAM) chip.

These and other advancements started a sustained boom in the computing market and enabled the development of products better designed for small- to medium-sized business operations (and paralleled a change in equipment investments from "calculators" to "computers"). This in turn translated into vastly lower costs of ownership for computers and enhanced the development of all kinds of services and applications that promoted wider usability, greater flexibility, and rapid deployment. Scalability became the key to defining a central role in the workplace for emerging computing technologies, as the industry began to experience productivity growth and improving returns on technology investments.

The Importance of SCSI Technology

Hard disk drive improvements came by way of the small computer system interface (SCSI), originally proposed in 1979. Standardization, however, was incredibly slow: the SCSI standard earned the approval of the American National Standards Institute (ANSI) only in June 1986, five years after being introduced to the X3T9.2 committee. By that time, the technology was already in widespread use.

Disk drive interfaces prior to SCSI were specialized components unique to particular systems, which did not lend themselves especially well to adaptability and scalability. In the world of big business, limited-capability controllers and devices were squeezed into an increasingly narrow installation base. As an interface with embedded device recognition and some level of data handling intelligence, SCSI was inherently aware of the devices attached to it, and it quickly supplanted its proprietary predecessors.
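SCSI's device awareness comes from its command set: an initiator can interrogate any attached device with an INQUIRY command and learn its type, vendor, and model before ever reading a block. The following Python sketch builds the standard 6-byte INQUIRY command descriptor block (CDB) and decodes the well-known fields of the response; the transport that actually carries these bytes to a device is omitted, so treat this as an illustration only.

import struct

INQUIRY_OPCODE = 0x12  # standard SCSI INQUIRY operation code

def build_inquiry_cdb(allocation_length: int = 96) -> bytes:
    """Build a 6-byte INQUIRY CDB requesting standard inquiry data."""
    return struct.pack(
        ">BBBHB",
        INQUIRY_OPCODE,     # byte 0: operation code
        0x00,               # byte 1: EVPD bit clear -> standard inquiry data
        0x00,               # byte 2: page code (unused when EVPD is clear)
        allocation_length,  # bytes 3-4: maximum response length
        0x00,               # byte 5: control
    )

def parse_inquiry_response(data: bytes) -> dict:
    """Pull the device type, vendor, and product from an INQUIRY response."""
    return {
        "device_type": data[0] & 0x1F,  # 0x00 means direct-access (disk)
        "vendor": data[8:16].decode("ascii", "replace").strip(),
        "product": data[16:32].decode("ascii", "replace").strip(),
        "revision": data[32:36].decode("ascii", "replace").strip(),
    }

This self-describing handshake is precisely what the proprietary, system-specific controllers that preceded SCSI lacked.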

Since then, the SCSI specification has gone through several major overhauls and seen three major standards revisions. Even as SCSI-1 was being actively developed, for example, the engineering groundwork for the SCSI-2 standard was already well under way. With the addition of new command sets, wider data lines, smaller physical connectors, faster (synchronous) transfer rates, three distinct types of voltage signaling, and higher pin counts, SCSI quickly became a popular enterprise-level storage solution. You can learn more about the development and design of SCSI in Chapter 7, "The SCSI Bus."

For a long time, in fact, SCSI was the input/output (I/O) interface for storage devices of all kinds on network servers. Although personal computers went through several generations of Integrated Drive Electronics (IDE) interfaces, each faster and more capable than the last, it wasn't until the mid-1990s that any other standard drive interface could compete with SCSI in terms of bandwidth, speed, or performance. That explains why SCSI dominated the server marketplace until 2000 or so. SCSI has also always been considerably more expensive than consumer-oriented, lower-end interfaces, but it continues to be used on servers, where performance and reliability matter more than price.

Also noteworthy is the official Internet SCSI (iSCSI) protocol specification, which defines a method of transacting block-level disk transfers over a TCP/IP network. This effectively enables an endpoint to contact a dedicated storage volume remotely and perform I/O transfers much as it would with a local disk. The iSCSI protocol allows more cost-effective centralization of storage without the added expense and cumbersome compatibility issues involved in deploying Fibre Channel equipment (typical of storage area networks).
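To make the idea concrete, the following Python sketch mimics block-level access over TCP/IP using a deliberately simplified, hypothetical wire format (an 8-byte logical block address plus a 4-byte block count). Real iSCSI encapsulates SCSI commands in structured PDUs and begins with a login and negotiation phase, so this shows only the core concept; the host name is made up, and 3260 is simply iSCSI's registered TCP port.

import socket
import struct

BLOCK_SIZE = 512  # bytes per logical block, as on a typical disk

def read_blocks(host: str, port: int, lba: int, count: int) -> bytes:
    """Request `count` blocks starting at logical block address `lba`."""
    with socket.create_connection((host, port)) as sock:
        # Hypothetical wire format: big-endian LBA and block count.
        sock.sendall(struct.pack(">QI", lba, count))
        expected = count * BLOCK_SIZE
        chunks = []
        while expected > 0:
            chunk = sock.recv(min(65536, expected))
            if not chunk:
                raise ConnectionError("server closed mid-transfer")
            chunks.append(chunk)
            expected -= len(chunk)
        return b"".join(chunks)

# data = read_blocks("storage.example.net", 3260, lba=0, count=8)

To the caller, the remote volume behaves like a local disk: reads and writes are addressed by block, not by file, which is what distinguishes iSCSI from file-sharing protocols such as NFS or SMB.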

The Development of RAID Technology

As processor power increased dramatically, the I/O subsystem had to be beefed up to keep pace. Compelled by this and a growing need for high data reliability and availability, David A. Patterson led a team of academic researchers at the University of California, Berkeley, to devise what would arguably become the most valuable storage-preservation technique to date. Patterson's idea was to take a series of small, inexpensive desktop disk drives and combine them into an array that provided higher I/O performance and redundancy. The concept became known as a redundant array of inexpensive disks (RAID); it is covered in full detail in Chapter 11, "Disk Subsystems."

Coincidentally, IBM implemented much the same concept in its AS/400 series of minicomputers, except that IBM's approach was to mirror given data sets to provide fault-tolerant reliability. To minimize downtime and the inconvenience of recovering lost data, IBM established a specification for duplicating data so that fewer operational interruptions would occur when a hardware failure was imminent.

Various types of RAID configurations continue to see widespread usage in today's server world, and the technology has even become popular on the consumer front, as the latest motherboards now often support on-board ATA RAID solutions. Depending on what flavor of RAID is desired, the benefits can include improved data integrity, fault tolerance, data throughput, or storage capacity.
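The core redundancy trick behind parity-based RAID levels such as RAID 5 can be shown in a few lines: the parity block in each stripe is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The Python sketch below demonstrates only the arithmetic; a real array also rotates parity across drives and operates below the filesystem.

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    result = bytes(len(blocks[0]))  # zero-filled starting block
    for block in blocks:
        result = bytes(a ^ b for a, b in zip(result, block))
    return result

def make_parity(data_blocks):
    # Parity for a stripe is simply the XOR of all its data blocks.
    return xor_blocks(data_blocks)

def rebuild(surviving_blocks, parity):
    # XORing the survivors with parity recovers the one missing block.
    return xor_blocks(list(surviving_blocks) + [parity])

stripe = [b"block on disk 0.", b"block on disk 1.", b"block on disk 2."]
parity = make_parity(stripe)  # stored on a fourth disk
assert rebuild([stripe[0], stripe[2]], parity) == stripe[1]  # disk 1 lost

Mirroring (RAID 1) trades capacity for simplicity by writing full copies instead of parity; striping without parity (RAID 0) improves throughput but provides no redundancy at all.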

The Development of Standard Server Platforms

Rack-mounted server design addresses the issue of footprint. A rack-mounted case carries a designation such as 1U, 2U, or 4U, signifying its class and dimensions for a given application; one rack unit (1U) represents 1.75 inches of vertical space. A full-height rack holds 42 units, and each internal component must be carefully designed to fit into the tight clearances. (A 1U rack-mount server stands only 1.75 inches tall.)
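The arithmetic behind these designations is simple, as the short Python sketch below shows; it assumes only the standard definitions of 1U = 1.75 inches and a 42U full-height rack.

RACK_UNIT_INCHES = 1.75  # height of one rack unit (1U)

def rack_height_inches(units: int) -> float:
    """Vertical space occupied by a case of the given U designation."""
    return units * RACK_UNIT_INCHES

print(rack_height_inches(1))   # 1.75  -> height of a 1U server
print(rack_height_inches(42))  # 73.5  -> usable height of a 42U rack
print(42 // 2)                 # 21    -> 2U servers that fit in a 42U rack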

Compaq started this phenomenon in 1994 with the introduction of rack-mountable servers in its ProLiant product family. Other vendors quickly followed suit because the rack-mounted model worked so well for companies that typically deployed servers in equipment rooms or closets, where floor space is at a premium and machine density is therefore quite valuable. By 1998, most major computer vendors (including Dell, IBM, and Hewlett-Packard) had rack-mounted server offerings on the market. Ultimately, these would lead to the development of the blade server architecture, in which servers on a card might maintain independent (but compact) power supplies and I/O ports, or might instead plug into a backplane for power, data communications, and peripheral access, thereby increasing server density further.

You can learn more about rack-mounted servers in Chapter 16, "Server Racks and Blades."



