Author: Kyoshiro Mibu
Published: 2005/2/21

Well, since they're redesigning PC architecture to take advantage of serial I/O, what's going to happen to Ethernet? I mean, if they can make PCI jump from 33 MHz to 2.5 GHz, what's in store for the Internet?!

Here's your answer:




REINVENTING CONNECTIVITY

The InfiniBand Trade Association's member companies, now at 100 and counting, run the gamut of high-performance computing, data center and storage implementations. The association is led by a steering committee of eight elected member companies; the current steering committee includes Dell, Hewlett-Packard, IBM, Intel, Lane15 Software, Mellanox, Network Appliance and Sun Microsystems. The first version of the specification for the technology was completed in October 2000, and the InfiniBand Trade Association is well on its way to establishing a new signaling rate specification beyond 100 Gb/s. More than 30 companies have introduced InfiniBand-based products into the marketplace, with more products announced on a weekly basis. InfiniBand implementations today are prominent in server clusters, where high bandwidth and low latency are key requirements. In addition to server clusters, InfiniBand is the interconnect that unifies the compute, communications and storage fabric in the data center. Several InfiniBand blade server designs have been announced by leading server vendors, which is accelerating the proliferation of dense computing. InfiniBand draws on existing technologies to create a flexible, scalable, reliable I/O architecture that interoperates with any server technology on the market. With industry-wide adoption, InfiniBand continues to transform the entire computing market.
A SINGLE, UNIFIED I/O FABRIC

Ethernet. Fibre Channel. Ultra SCSI. Proprietary interconnects. Given that these and other I/O methods address similar needs, and are being implemented in data centers worldwide, it's easy to wonder why so much I/O technology innovation continues in this already-crowded arena. To understand, one needs only to look at the complexity of interconnect configurations in today's Internet data centers. Servers are often connected to three or four different networks redundantly, with enough wires and cables spilling out to give them the look of an overflowing I/O pasta machine. By creating a unified fabric, InfiniBand takes I/O outside of the box and provides a mechanism to share I/O interconnects among many servers. InfiniBand does not eliminate the need for other interconnect technologies. Instead, it creates a more efficient way to connect storage and communications networks and server clusters together, while delivering an I/O infrastructure that will produce the efficiency, reliability and scalability that data centers demand.
IDENTIFYING THE NEED

Before the emergence of personal computers (PCs), mainframes featured scalable performance and a "channel-based" model that delivered a balance between processing power and I/O throughput. Data centers provided reliable data processing in a world of predictable workloads. The primary concern of the data center manager was system uptime, as failures led to loss of productivity. The industry then transitioned from the model of mainframes and terminals to the client/server age, where processing is shared between intelligent PCs and racks of powerful servers. With this transition came the advent of the "PC server," a concept that started with a network-connected PC turned on its side. This has evolved into an ever-increasing specialization of "N-tier" server implementations, architectures that distribute applications across a range of systems. The heart of the data center, where mission-critical applications live, still relies on servers featuring the proprietary interconnects first seen in early mainframe systems. Today, data center managers are looking for more functionality from standards-based server interconnect models.
NETWORK VOLUME EXPANDS

The Internet's impact on the industry has been as big as the PC's, fundamentally changing the way CIOs manage their compute complexes. In a world where eighty percent of computing historically resided locally on a PC, Internet traffic and the rise of applications driven by Internet connectivity have created a model where more than eighty percent of computing is done over the network. This has created a wave of innovation in Ethernet local area network (LAN) connectivity, moving 10 Mbps LAN infrastructure to speeds of up to 100 Mbps, and now, 1 Gbps. The first wave of Internet connectivity also led to investment of trillions of dollars in communications infrastructure, greatly expanding the ability to transfer large amounts of data anywhere in the world. This has created the foundation for an explosion in applications addressing virtually every aspect of human interaction. It also creates unique challenges for the data center, the "mission control" of information processing. The world of predictable workloads has now been turned into an increasingly unpredictable environment. Once, downtime meant only a loss in productivity. Now an array of other factors, such as decreased consumer confidence and lost sales, complicate the mix. Business success depends on data center performance and flexibility today, and this reliance will only increase as firms escalate their dependence on connectivity for business results.
THE TREND TO SERIAL I/O

Traditionally, servers have relied on a shared bus architecture for I/O connectivity, starting with the Industry Standard Architecture (ISA) bus. For the past decade, servers have also utilized myriad iterations of the Peripheral Component Interconnect (PCI) bus. Bus architectures have proven to be an efficient transport for traffic in and out of a server chassis, but as the cost of silicon has decreased, serial I/O alternatives have become more attractive. Serial I/O provides point-to-point connectivity, a "siliconization" of I/O resources, and increased reliability and performance. As serial I/O has become a financially viable alternative, new opportunities have been created for the industry to address the reliability and scalability needs of the data center.
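To put rough numbers on that shift, here is a back-of-the-envelope sketch (peak signaling rates only, not a benchmark) comparing a classic shared 32-bit/33 MHz PCI bus with InfiniBand's point-to-point serial links at their nominal 2.5 Gb/s per-lane rate and 8b/10b encoding:

```python
# Back-of-the-envelope peak-rate comparison: a shared parallel bus versus
# point-to-point serial links. Nominal signaling figures only; real-world
# throughput is lower.

def pci_bus_mb_per_s(width_bits=32, clock_mhz=33):
    """Peak throughput of a classic PCI bus, shared by every device on it."""
    return width_bits / 8 * clock_mhz  # MB/s

def infiniband_link_mb_per_s(lanes=1, signal_gbps=2.5, encoding=8 / 10):
    """Peak data throughput of one InfiniBand link per direction,
    after 8b/10b encoding overhead."""
    return lanes * signal_gbps * encoding * 1000 / 8  # MB/s

print(f"PCI 32-bit/33 MHz (shared bus): {pci_bus_mb_per_s():.0f} MB/s")      # ~132
print(f"InfiniBand 1x (per link):       {infiniband_link_mb_per_s():.0f} MB/s")          # ~250
print(f"InfiniBand 4x (per link):       {infiniband_link_mb_per_s(lanes=4):.0f} MB/s")   # ~1000
```

The point is not just the bigger number: the PCI figure is divided among every device sharing the bus, while the serial figure is available on each point-to-point link in the fabric.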

To meet the demands of changing data center environments, companies have new server platform requirements.

* Increased platform density for scaling more performance in defined physical space
* Servers that can scale I/O and processing power independently
* Racks of servers that can be managed as one autonomous unit
* Servers that can share I/O resources
* True "plug-and-play" I/O connectivity

INFINIBAND: ADRENALINE FOR DATA CENTERS

InfiniBand answers these needs and meets the increasing demands of the enterprise data center. The intense development collaboration marks an unprecedented effort in the computing industry, underscoring the importance of InfiniBand to future server platform design. The architecture is grounded in the fundamental principles of channel-based I/O, the very I/O model favored by mainframe computers. InfiniBand channels are created by attaching host channel adapters and target channel adapters through InfiniBand switches. Host channel adapters are I/O engines located within a server. Target channel adapters enable remote storage and network connectivity into the InfiniBand fabric. This interconnect infrastructure is called a "fabric," based on the way input and output connections are constructed between hosts and targets. All InfiniBand connections are created with InfiniBand links, utilizing both copper wire and fiber optics for transmission. Seemingly simple, this design creates a new way of connecting servers together in a data center. With InfiniBand, new server deployment strategies become possible.
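As a way to visualize that structure, the minimal sketch below (illustrative Python, not the InfiniBand verbs API; all names are invented) models a fabric as a graph of host channel adapters, target channel adapters and a switch, where every connection is a point-to-point link:

```python
# Illustrative model of an InfiniBand fabric (names are made up): HCAs live
# inside servers, TCAs front storage and network connectivity, and switches
# tie the point-to-point links together into a fabric.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # "HCA", "TCA", or "switch"
    links: set = field(default_factory=set)

class Fabric:
    def __init__(self):
        self.nodes = {}

    def add(self, name, kind):
        self.nodes[name] = Node(name, kind)

    def link(self, a, b):
        # Every InfiniBand connection is a point-to-point link.
        self.nodes[a].links.add(b)
        self.nodes[b].links.add(a)

fabric = Fabric()
for name, kind in [("server1-hca", "HCA"), ("server2-hca", "HCA"),
                   ("storage-tca", "TCA"), ("lan-tca", "TCA"),
                   ("switch0", "switch")]:
    fabric.add(name, kind)

# Channels run host -> switch -> target across the fabric.
for name in ("server1-hca", "server2-hca", "storage-tca", "lan-tca"):
    fabric.link(name, "switch0")
```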
INDEPENDENT SCALING OF PROCESSING AND SHARED I/O

One example of InfiniBand's impact on server design is the ability to design a server with I/O removed from the server chassis. This enables independent scaling of processing and I/O capacity, creating more flexibility for data center managers. Unlike today's servers, which contain a defined number of I/O connections per box, InfiniBand servers can share I/O resources across the fabric. This method allows a data center manager to add processing performance when required, without the need to add more I/O capacity (the converse is also true). Shared I/O delivers other benefits as well. As data center managers upgrade and add storage and networking connectivity to keep up with traffic demand, there's no need to open every server box to add network interface cards (NICs) or Fibre Channel host bus adapters (HBAs). Instead, I/O connectivity can be added to the remote side of the fabric through target channel adapters and shared among many servers. This preserves uptime, decreases technician time for data center upgrades and expansion, and provides a new model for managing interconnects. The ability to share I/O resources also has an impact on balancing the performance requirements for I/O connectivity into servers. As networking connections become increasingly powerful, data pipes that could saturate one server can be shared among many servers to effectively balance server requirements. The result is a more efficient use of computing infrastructure and a decrease in the cost of deploying fast interconnects to servers.
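A small, purely illustrative calculation makes the shared-I/O point concrete. The server and adapter counts below are assumptions chosen for the example, not vendor figures: with per-box I/O, every server carries its own NICs and HBAs, while with fabric-attached TCAs a handful of adapters can serve the whole group of servers.

```python
# Purely illustrative adapter count: the server count, adapters per box,
# and sharing ratio are assumptions for the example, not vendor data.

servers = 16
nics_per_server = 2                 # per-box LAN connectivity
hbas_per_server = 2                 # per-box Fibre Channel connectivity
servers_per_shared_tca = 8          # servers sharing one fabric-attached TCA

per_box_adapters = servers * (nics_per_server + hbas_per_server)

# With shared I/O, one set of LAN TCAs and one set of storage TCAs
# serve the whole group of servers through the fabric.
shared_lan_tcas = -(-servers // servers_per_shared_tca)      # ceiling division
shared_storage_tcas = -(-servers // servers_per_shared_tca)

print(f"Per-box adapters for {servers} servers: {per_box_adapters}")                 # 64
print(f"Fabric-shared TCAs for {servers} servers: {shared_lan_tcas + shared_storage_tcas}")  # 4
```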
RAISING SERVER DENSITY, REDUCING SIZE

Removal of I/O from the server chassis also has a profound impact on server density (the amount of processing power delivered in a defined physical space). As servers transition into rack-mounted configurations for easy deployment and management, floor space is at a premium. Internet service providers (ISPs) and application service providers (ASPs) were among the first companies faced with the problem of finding enough room to house server racks to keep up with processing demand. "Internet hotels"– buildings that house little more than racks of servers – have become commonplace. As the impact of Internet computing grows, the density requirements of servers will become more widespread.

By removing I/O from the server chassis, server designers can fit more processing power into the same physical space. Server manufacturers have already productized sub "1U" (a U is a measurement of rack height equating to 1.75 inches) server designs. More importantly, compute density–the amount of processing power per U–will increase through the expansion of available space for processors inside a server. Additionally, the new modular designs will improve serviceability and provide for faster provisioning of incremental resources like CPU modules or I/O expansion.
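For reference, the rack arithmetic is straightforward; the 42U rack and the two-servers-per-U blade packing below are assumptions for illustration only.

```python
# Quick rack arithmetic: a "U" is 1.75 inches of rack height.
# The 42U rack and two-servers-per-U blade packing are assumptions.

u_height_inches = 1.75
rack_units = 42

print(f"Rack height: {rack_units * u_height_inches:.1f} inches")            # 73.5
print(f"Classic 1U servers per rack: {rack_units * 1}")                     # 42
print(f"Sub-1U blade servers per rack (2 per U): {rack_units * 2}")         # 84
```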
CLUSTERING AND INCREASED PERFORMANCE

Data center performance is now measured by the performance of individual servers. With InfiniBand, this model will shift from individual server capability to the aggregate performance of the fabric. InfiniBand will ultimately enable the clustering and management of multiple servers as one entity. Performance will scale by adding additional boxes, without many of the complexities of traditional clustering. Even though more systems can be added, the complex can be managed as one unit. As processing requirements increase, additional power can be added to the cluster in the form of another server or "blade." Today's server clusters rely on proprietary interconnects to effectively manage the complex nature of clustering traffic. With InfiniBand, server clusters can be configured for the first time with an industry-standard I/O interconnect, creating an opportunity for clustered servers to become ubiquitous in data center deployments. With the ability to effectively balance processing and I/O performance through connectivity to the InfiniBand fabric, data center managers will be able to react more quickly to fluctuations in traffic patterns, upswings in data center processing demand, and the need to retool to meet changing business needs. The net result is a more agile data center with the inherent flexibility to tune performance to an ever-changing landscape.
ENHANCED RELIABILITY

The returns on investment associated with InfiniBand go beyond enhanced performance, shared I/O and server density improvements. Since the advent of mainframe computing, the most important data center requirement has been the resiliency of the compute complex. As this requirement grows with the rise of Internet communications, a more reliable server platform design is required. InfiniBand increases server reliability in a number of ways.
Channel-Based Architecture: Because InfiniBand is grounded in a channel-based I/O model, connections between fabric nodes are inherently more reliable than in today's I/O paradigm.

Message-Passing Structure: The InfiniBand protocol utilizes an efficient message-passing structure to transfer data. This moves away from the traditional "load/store" model used by the majority of today's systems and creates a more efficient and reliable transfer of data.

Natural Redundancy: InfiniBand fabrics are constructed with multiple levels of redundancy in mind. Nodes can be attached to the fabric with redundant links; if one link goes down, the fault is limited to that link and the remaining link keeps the node connected to the fabric. By creating multiple paths through the fabric, intra-fabric redundancy results: if one path fails, traffic can be rerouted to its final endpoint destination. InfiniBand also supports redundant fabrics for the ultimate in fabric reliability; with multiple redundant fabrics, an entire fabric can fail without creating data center downtime.
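As a toy illustration of that rerouting behavior (this is not how the InfiniBand subnet manager actually computes routes; the names and topology are invented), the sketch below models a redundantly connected HCA and TCA and shows traffic finding a surviving path when a link fails.

```python
# Toy rerouting model (not the InfiniBand subnet manager): an HCA and a
# TCA are attached to two switches, giving two independent paths.
# If a link fails, a breadth-first search still finds a surviving path.

from collections import deque

links = {
    ("hca", "switch-a"), ("hca", "switch-b"),   # redundant links at the node
    ("switch-a", "tca"), ("switch-b", "tca"),   # two paths through the fabric
}

def route(src, dst, failed=frozenset()):
    """Return one usable path over the healthy links, or None."""
    graph = {}
    for a, b in links - set(failed):
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("hca", "tca"))                                   # a healthy path
print(route("hca", "tca", failed={("hca", "switch-a")}))     # rerouted via switch-b
```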
THE INDUSTRY-WIDE EFFORT

Building a successful computer industry standard takes collaboration and cooperation. Through the InfiniBand Trade Association, some of the industry's most competitive companies have made InfiniBand a reality. They have the vision to see the shift to a fabric-based I/O architecture, and know it benefits everyone in the server world. OEMs and hardware and software manufacturers who want to keep pace with innovation and rapid changes should definitely support InfiniBand.

-Kyoshiro Mibu
