Transforming the Network in Federal Research and Development

DATA CENTER and CAMPUS NETWORK

Transforming the Network in Federal Research and Development
A Technical Brief on Network Technology for Federal Research and Development

CONTENTS
Accelerating Federal R&D
Introduction
Brocade: SDN and Open Standards Drive Accessibility
IT Virtualization & Convergence in R&D
Transformation to IPv6
Putting the Economics in Eco-Friendly
High-Performance R&D Data Transport
Enabling a Global Research Network
SUMMARY: What Differentiates Brocade?
News on Brocade SDN for Federal IT


ACCELERATING FEDERAL R&D

To capitalize on the changes in the storage and networking world and to accelerate discovery, R&D agencies must prepare for Software-Defined Networking (SDN) and future-proof their architectures for the requirements of high-performance networking.

Introduction

Finding a cure for cancer, developing new energy sources, tracking planes in the sky, monitoring interstate bridge conditions, and forecasting the next hurricane: these are only a few of the significant challenges that government agencies and institutions face on a daily basis. To meet these challenges, drive research and development (R&D), and lead the innovation charge, researchers require massive compute power. They must collaborate effectively with one another and move immense quantities of raw data to and from multiple sources around the world. Performance matters: accuracy and speed accelerate discovery. Scale matters: these problems affect everyone. Collaboration matters: these problems will not be solved by any one entity.

The federal government has always played a leading role in funding R&D and sponsoring organizations and institutions. Industry follows that funding in the development of technologies, which leads to further innovation and job creation. Recent budgets reflect a renewed federal focus on these R&D disciplines, and that is good news for all of us. Figure 1, from the Office of Science and Technology Policy (OSTP) 2013 budget report, shows that most R&D agencies have received increases in funding to keep America ahead of the world when it comes to technology investment.

Note: For a copy of the OSTP 2013 budget report, refer to: www.whitehouse.gov/administration/eop/ostp/rdbudgets/2013

[Figure 1 chart: Federal research by agency, FY 1995–2013, in billions of constant FY 2012 dollars. Agencies shown: NIH, NSF, DOD, DOE, NASA, USDA, and all other.]

Figure 1. Office of Science and Technology Policy 2013 budget.

Across industry and in those funded agencies, the same questions are asked: In which innovations do we invest? Where do we allocate funding to sustain current R&D efforts? How can we consolidate what we have to capitalize on new technologies? How do we enable those programs and give researchers access to the tools to make the breakthroughs the world needs? These are just some of the questions that drive IT organizations within the R&D community.



IT strategies and budgets must accommodate the scientific challenges that researchers face as they push the envelope of discovery. Some of these challenges, as identified by the National Energy Research Scientific Computing Center (NERSC), include:

• Science at Scale: Enabling petaflops to exaflops of supercomputing power
• Science Through Volume: Increasing from thousands to millions of simulations
• Science in Data: Scaling up from petabytes to exabytes of stored research, true "big data"

What new internet technologies and focus areas can be employed in the data center and across the enterprise network to overcome these challenges? Several IT efforts and breakthroughs are accelerating new waves of discovery:

• Software-Defined Networking (SDN)
• Standardization for integration
• Virtualization and convergence
• Transformation to the IPv6 protocol
• High-performance networking at 40/100 GbE and beyond
• The economics of "going green"

SDN is currently the biggest buzzword in enterprise IT. SDN technologies enable provisioning of both the network transport and data center "cloud-based" systems. OpenFlow is the leading SDN protocol for the network transport layer. OpenFlow separates the data plane and control plane functions that are normally bound together in hardware-based routers. You can run the control plane functions, the routing and switching "smarts," within a server-based OpenFlow controller, and then define transport paths independent of the hardware using route determinations called "flows." These flows are pushed out to OpenFlow-capable network hardware across the WAN, and traffic that matches a flow follows its predetermined path. Further information on OpenFlow can be found here: www.opennetworking.org/

OpenStack technology was originally developed by NASA as part of its Nebula computing platform to streamline dynamic provisioning of compute resources and storage for users who must run applications using a cloud-based model. Hardware and software vendors write Application Programming Interface (API) plug-ins that integrate with a central OpenStack controller, which reaches out to the active components in the data center. Examples of provisioning targets include VMware vCenter for creating virtual machines and operating systems, storage management tools from vendors such as NetApp or EMC for storage space, and a network fabric such as Brocade® VDX® Switches. The OpenStack controller provisions compute and data path resources on the fly: it allocates a new server, creates the SAN zoning, allocates disk space, and provides VLAN access through these API plug-ins. A user is given permission to access the servers and load the applications needed to fulfill the mission. As soon as the user is finished, these compute resources are decommissioned, and the disk space, CPUs, and VLAN network connectivity are returned to the "cloud pool." Refer to www.openstack.org/ for details on the OpenStack community.
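To make that provisioning flow concrete, here is a minimal sketch using the open-source openstacksdk Python library. The cloud name, image, flavor, network, and volume names are hypothetical placeholders; an actual deployment would use whatever values the OpenStack controller administrator publishes.

```python
# Minimal OpenStack provisioning sketch (assumes openstacksdk is installed and
# a cloud named "research-cloud" is defined in clouds.yaml -- both hypothetical).
import openstack

conn = openstack.connect(cloud="research-cloud")

# Allocate block storage for the researcher's data set (size in GB).
volume = conn.block_storage.create_volume(name="sim-results", size=500)

# Look up the image, flavor, and COI network to boot against (names are placeholders).
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.xlarge")
network = conn.network.find_network("coi-vlan-100")

# Boot the compute instance on the community-of-interest network.
server = conn.compute.create_server(
    name="hpc-staging-node",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# Attach the volume so the application can stage its data.
conn.compute.create_volume_attachment(server, volume_id=volume.id)
print(f"Provisioned {server.name} at {server.access_ipv4}")
```

Decommissioning is the reverse of this sketch: delete the server and volume, and the CPU, disk, and VLAN capacity return to the shared pool.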



BROCADE: SDN AND OPEN STANDARDS DRIVE ACCESSIBILITY

As an early adopter of OpenFlow, Brocade takes an SDN approach that brings programmatic control to network infrastructure. This approach provides massive scale and intelligent service delivery, and it stays ahead of the bandwidth requirements needed to access R&D big data repositories. OpenFlow in true hybrid mode provides the ability to run SDN and traditional routing protocols over the same ports. Industry-leading 10 Gigabit Ethernet (GbE), 40 GbE, and 100 GbE Brocade MLXe Series solutions with support for OpenFlow in hybrid mode integrate seamlessly with existing networks, enabling SDN alongside traditional networking capabilities. This dual functionality delivers the flexible flow control needed to respond to dynamic traffic patterns and address application needs while still providing seamless services to the existing infrastructure. Organizations can maintain their traditional forwarding policies and all the established operational processes that keep that traffic flowing, and then selectively enable OpenFlow in their environment to create communities of interest (COIs) and keep their R&D transport networks segregated from production enterprise data. Brocade OpenFlow-enabled hardware supports two modes: hybrid switch mode, which allows each port to be set either for traditional WAN routing protocols or for forwarding OpenFlow traffic, and hybrid port mode, which allows all protocols, traditional and OpenFlow, to be forwarded on any Brocade port.

In the current economic environment, IT organizations should focus on how to take advantage of SDN to enable cloud-based services and integrate siloed virtualized infrastructure in the data center and across the enterprise. Brocade has a long track record of supporting existing standards and is a strong supporter of new and emerging standards. Brocade has integrated its data center network offerings with cloud provisioning tools such as OpenStack, an SDN cloud provisioning tool that allows systems administrators to quickly provision servers, storage, and network access within the hosted data center. This ease of provisioning enables users to access data stores and run applications or simulations locally, on the fly, with little to no disruption to existing production or operations staff. When these systems are no longer needed, they can easily be "spun down" and reallocated with only a few clicks by an OpenStack controller administrator. Brocade data center network devices support this and other cloud provisioning tools, such as VMware vCloud, out of the box.

This wide and differentiated SDN adoption, coupled with the ability to run in hybrid modes, provides the simultaneous interconnectivity as well as the higher bandwidths that are needed between multiple R&D entities. Science teams at NERSC can provision and create access to their Hopper supercomputer system for researchers at the National Science Foundation, NOAA, or analysts at the Department of Energy. Science teams at CERN can provide access and create a bridge to researchers at any university or agency across the country who may be interested in collecting Large Hadron Collider data to discern patterns and piece together the puzzles of the universe, such as the recent discovery of the Higgs boson (http://home.web.cern.ch/about/physics/search-higgs-boson).

Figure 2. Transporting massive amounts of data at the CERN Data Center.
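To illustrate what "selectively enabling OpenFlow" for a community of interest can look like from the controller side, below is a minimal sketch using the open-source Ryu OpenFlow 1.3 controller. The VLAN ID and output port are hypothetical, and the single flow shown is deliberately simple; a production deployment against hybrid-mode hardware would carry considerably more policy.

```python
# Minimal Ryu app: steer a hypothetical research COI (VLAN 100) out a dedicated
# uplink (port 2) while all other traffic is left to the switch's normal pipeline.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class CoiSteering(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser

        # Match frames tagged with the COI VLAN (OFPVID_PRESENT | 100).
        match = parser.OFPMatch(vlan_vid=(ofp.OFPVID_PRESENT | 100))
        actions = [parser.OFPActionOutput(2)]  # dedicated research uplink
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        # Higher priority than the default flow, so COI traffic takes this path.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Because hybrid port mode lets OpenFlow and traditional protocols share the same ports, a flow like this can be layered onto an existing routed network without re-cabling or re-addressing anything.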



Adherence to standards in SDN technology is not only beneficial for entities that provide access to large data repositories or supercomputing power. It can also be instrumental in allowing disparate federal and state agencies that need integrated networks to share time-sensitive data for projects such as the interstate bridge monitoring system currently funded by the National Institute of Standards and Technology (NIST). The ability to provision services using open cloud data center enablers such as OpenStack, and then create dynamic WAN connectivity using SDN routing protocols such as OpenFlow, introduces a new paradigm in research data accessibility and user access control that was never before available. To support these innovations, Brocade routers and the converged Brocade Virtual Cluster Switching (VCS®) data center fabric technology ship with these cloud provisioning and SDN protocol capabilities built in, which future-proofs the solution sets.

It is becoming more important for worldwide maintenance and operations teams to adhere to open standards. It is no longer feasible to hire or contract subject matter experts, or to have multiple engineering teams maintain multiple vendor-specific certifications; the cost of hiring personnel and keeping them current on numerous proprietary technologies is unsustainable. Being "open" does not mean building your own proprietary "open" controller. Being "open" means providing input to, and conforming to, the standards that are being created and adopted. Being "open" means working within the community to provide the tools and access needed to drive research in the simplest, most seamless way possible. Technology designs based on industry standards drive integration and ease of use, which lets users focus on the mission and accelerate program results. Open standards also drive interoperability, which in turn drives collaboration. Brocade VCS Fabric technology enables any I/O interface to connect into an Ethernet fabric for transport across the data center, through the core, and across the WAN over IP, the de facto standard for data transport across computer networks. Brocade achieves this interoperability through its relationship with the standards bodies in the development of the TRILL protocol; Brocade was one of the first hardware providers to join the IETF TRILL Working Group to develop this new Ethernet-based data center bridging technology.

IT VIRTUALIZATION & CONVERGENCE IN R&D

Administrators who run these massive environments face numerous challenges around "science at scale" and how the community is evolving to enable discoveries that require "crunching the numbers." Brocade OpenFlow-enabled hardware with a 100 GbE SDN implementation provides persistent and dynamic VLAN tagging, VLAN translation, and hybrid-mode support for traditional networks and OpenFlow simultaneously. Brocade draws on vast expertise as an innovator in data transport fabrics with its SAN Fibre Channel switch platforms (SilkWorm and Brocade DCX at 2, 4, and 8 Gbps, and now the Gen 5 16 Gbps backbone). Brocade collaborated with the Internet Engineering Task Force (IETF) Working Group to help develop the TRILL protocol, and it created the industry's first data center fabrics with Brocade VCS Fabric technology, available in the rack-based Brocade VDX 6700 Switches and the new chassis-based Brocade VDX 8770 Switch. These technologies are evidence of Brocade's position as a market leader in next-generation, software-centric data center networking. Brocade has further pushed the boundaries of SDN and virtualization in the network with the recent acquisition of virtual routing pioneer Vyatta. (See the last page of this brief for more information.)

TRANSFORMATION TO IPV6

In September 2010, OMB issued a memorandum requiring federal agencies to transition to Internet Protocol version 6 (IPv6) for public internet servers and applications used by the government. With the September 2014 deadline approaching, the CIO Council is releasing an updated version of the "Planning Guide/Roadmap toward IPv6 Adoption within the U.S. Government." The roadmap was jointly developed by the federal government and the American Council for Technology—Industry Advisory Council (ACT-IAC), and it gives federal agency leaders practical, actionable best-practice guidance on how industry and federal agencies can successfully move together to the next-generation internet.

Note: For more information on the "Planning Guide/Roadmap toward IPv6 Adoption within the U.S. Government," refer to: www.cio.gov/resources/document-library/

Brocade was one of the first hardware providers to embrace IPv6 within its routing hardware and code.



Brocade has implemented IPv6 based on industry open standards to guarantee interoperability. In addition to offering feature parity with IPv4, Brocade continues to develop new features, technologies, and capabilities that bring innovation to the IPv6 world. Hurricane Electric, one of the largest IPv6 internet service providers in the world, uses Brocade routers in its core ISP network for dual IPv4 and IPv6 interconnectivity and to deliver highly reliable IPv6 transport to all of its customers.

"Advanced network infrastructures have become complex extensions of business operations that must adapt to rapid growth, data-intensive applications, security threats, and new technologies that can stress the entire network." —Ken Cheng, CTO & VP, Corporate Development, Brocade

To address these issues, Brocade continues to build one of the industry's most complete sets of IPv6 unicast, multicast, and transition protocols. Brocade supports enterprise and service provider IPv6 dual-stack environments and will continue to be a leading player in helping governments and organizations around the world transition to IPv6. Brocade application delivery switches (the Brocade ADX® Series) handle load balancing within the data center and are coded to perform IPv6 gateway and Network Address Translation (NAT) functions. These functions can translate between IPv4 and IPv6 without forklift changes to legacy IPv4 application environments. The Brocade ADX Series allows an agency to keep running existing IPv4 resources while deploying new IPv6 resources concurrently, which preserves any unrealized return on investment (ROI). Brocade innovation and leadership mean that Brocade solutions are future-proofed, with built-in capabilities to transport data globally over these new internet protocols.

Additional Brocade IPv6 information can be found at the Brocade IPv6 information website (www.brocade.com/solutions-technology/technology/ipv6/index.page). Martin Levy, director of IPv6 strategy for Hurricane Electric, discusses the importance of IPv6 in modern networks: www.youtube.com/watch?v=h4-CjG1Y9jY
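As a small illustration of what dual-stack readiness means in practice, the sketch below checks whether a hostname resolves and accepts connections over both IPv4 and IPv6. It uses only the Python standard library; the hostname shown is a placeholder, and a real readiness assessment would also cover applications, DNS, and management tooling.

```python
# Dual-stack reachability sketch: try both IPv4 and IPv6 paths to one service.
import socket

def check_dual_stack(host: str, port: int = 443, timeout: float = 3.0) -> dict:
    """Return which address families succeed for host:port."""
    results = {"IPv4": False, "IPv6": False}
    families = {"IPv4": socket.AF_INET, "IPv6": socket.AF_INET6}
    for label, family in families.items():
        try:
            # getaddrinfo returns A records for AF_INET, AAAA records for AF_INET6.
            for af, socktype, proto, _, sockaddr in socket.getaddrinfo(
                    host, port, family, socket.SOCK_STREAM):
                with socket.socket(af, socktype, proto) as s:
                    s.settimeout(timeout)
                    s.connect(sockaddr)
                    results[label] = True
                    break
        except OSError:
            pass  # no record of this family, or the path is unreachable
    return results

if __name__ == "__main__":
    # "www.example.gov" is a placeholder; substitute an agency's public service.
    print(check_dual_stack("www.example.gov"))
```

Note that a service front-ended by an IPv6 gateway or NAT function, as described above, would show IPv6 reachability here even while the back-end application remains IPv4-only.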

PUTTING THE ECONOMICS IN ECO-FRIENDLY

Brocade focuses uncompromisingly on efficiency, for our customers and in our own IT environments. Efficiency means less management and lower operational costs, so that you can better align resources to the mission. To show our commitment to this effort, we started with our own world headquarters: the Brocade campus in San Jose meets the LEED Gold certification standard.

Figure 3. Brocade facilities and data centers in San Jose lead California in LEED standards.



The Brocade campus exceeds California's Energy Code requirements by 16 percent. The flat-floor design in our data center provides 12 percent more space in the same footprint. A 550 kW rooftop photovoltaic system maximizes solar energy usage, and water consumption at the campus is 40 percent less than at a traditional campus. Being eco-friendly makes environmental sense and economic sense: we reinvest the realized savings back into innovation for our customers. According to The Federal Times, the number of LEED-certified federal building projects in the United States increased by more than 50 percent from 2011 to 2012. As this trend continues, research agencies can look to Brocade for best practices. In a recent technology evaluation at a federal intelligence agency where power utilization, space, and cooling were key criteria, Brocade handily beat our competitors in these categories.

Note: For more information on LEED-certified federal building projects, refer to: www.usgbc.org/home

HIGH-PERFORMANCE R&D DATA TRANSPORT

Research networks are not typical networks: the data flows are long-lived and far larger, and traffic bursts require massive line-rate performance. With supercomputers such as the Hopper Cray XE6 at NERSC and the IBM Blue Gene/Q Sequoia at Lawrence Livermore National Laboratory, uplinks into the data center core need to perform as a "super transport" conduit.

Note: To learn more about the IBM Blue Gene/Q Sequoia supercomputer, refer to: https://asc.llnl.gov/computing_resources/sequoia/index.html

Figure 4. The NERSC "Hopper" Cray XE6, with 1.28 petaflops of performance.

To move data at these speeds, legacy "oversubscription" switching architectures are no longer adequate; a completely nonblocking architecture with line-rate performance is now an absolute requirement. Brocade Layer 2 and Layer 3 switches meet these requirements and lead the industry in high-performance transport. These switches provide true 100 GbE for lossless, wire-speed transport, at scale and at distance, of mission-critical big data, which drives more efficient collaboration among researchers and faculty. This high-performance transport solution supports a wide range of science data flows, from sensor networks and real-time data sources ("mouse" flows) to large files and distributed data ("elephant" flows).

R&D high-performance computing (HPC), and the network architectures necessary to support it, have received more attention recently. With funding from the Department of Energy (DOE), due in large part to the American Recovery and Reinvestment Act, the Energy Sciences Network (ESnet) has developed a high-performance access architecture called the "Science-DMZ." The Science-DMZ benefits the Research and Education Network (REN) community, which consists of dozens of higher education institutions, science communities, and universities from across the United States. It was developed to address the problems that arise when research communities share data, run HPC workloads, or reach supercomputing resources over legacy or low-performance enterprise LAN/WAN connectivity at the campus or institution edge.

Note: To learn more about the Science-DMZ, refer to: http://fasterdata.es.net/science-dmz/
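To make the oversubscription point concrete, the short sketch below computes the edge-to-uplink oversubscription ratio for a hypothetical leaf switch; the port counts and speeds are illustrative only, not a description of any specific Brocade platform.

```python
# Oversubscription ratio sketch: aggregate edge bandwidth vs. uplink bandwidth.
# All figures below are hypothetical, chosen only to illustrate the arithmetic.

def oversubscription_ratio(edge_ports: int, edge_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of offered edge bandwidth to available uplink bandwidth (1.0 or less = nonblocking)."""
    return (edge_ports * edge_gbps) / (uplink_ports * uplink_gbps)

# Legacy design: 48 x 10 GbE edge ports funneled into 4 x 40 GbE uplinks.
legacy = oversubscription_ratio(48, 10, 4, 40)        # 480 / 160 = 3.0:1

# Nonblocking design: 48 x 10 GbE edge ports backed by 5 x 100 GbE of uplink capacity.
nonblocking = oversubscription_ratio(48, 10, 5, 100)  # 480 / 500 = 0.96:1

print(f"legacy: {legacy:.2f}:1, nonblocking: {nonblocking:.2f}:1")
```

A long-lived "elephant" flow is exactly the traffic pattern that exposes a 3:1 design: statistical multiplexing stops helping once a handful of flows can each fill an uplink.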



Figure 5. Science-DMZ topology diagram.

This concept emerged as connectivity to the HPC environment was moved into its own DMZ: the environment is no longer connected behind the campus DMZ and firewalls. Instead, it sits on a network that is purpose-engineered to support HPC requirements and that segregates any dedicated high-speed remote access from the rest of the campus enterprise. IT managers no longer need to create special paths from the WAN edge through access lists and firewalls. As Figure 5 shows, the science and research environment connects to a Science-DMZ switch, which in turn connects to a Science-DMZ border router attached to an internet WAN transport such as the NSF-funded 100 GbE Internet2 backbone (www.internet2.edu). Access control lists in the border router maintain the security of the HPC environment. In addition to these simpler access control mechanisms, if a researcher needs a logical connection to another scientist or researcher to share data, the HPC network can be given that connectivity directly by provisioning it in the border router.

The Brocade NetIron CER 2000 and MLXe Series of advanced routers are deployed in these HPC environments and are highly optimized for these high-speed Ethernet/IP architectures. These routers provide symmetric scaling with large chassis options, including 4-, 8-, 16-, and 32-slot systems, and offer industry-leading wire-speed port capacity up to 100 GbE without compromising the performance of advanced capabilities such as IPv6, MPLS, and MPLS virtual private networks. For example, the Brocade MLXe-32 delivers data forwarding performance in excess of 6 Tbps today and scales to 15.36 Tbps, enough capacity to future-proof networks for years to come. One notable production implementation that already utilizes Brocade high-performance, high-bandwidth network technology for HPC in the R&D community is the CERN distributed computing architecture.

Note: For more information on this implementation, refer to: www.zdnet.com/cern-prepares-for-shift-to-100gbe-with-brocade-gear-3040091043/

ENABLING A GLOBAL RESEARCH NETWORK

In 2009, under the American Recovery and Reinvestment Act, the Federal Communications Commission released a National Broadband Plan that includes support for a Unified Community Anchor Network (UCAN). As a result, funding was provided to enhance an initiative known as Internet2. Internet2 is an advanced networking consortium comprising (as of June 2011) 221 U.S. universities; 45 leading corporations; 66 government agencies, laboratories, and other institutions of higher learning; 35 regional and state research and education networks; and more than 100 national research and education networking organizations representing over 50 countries.

Note: To learn more about the FCC National Broadband Plan and Internet2, refer to: www.internet2.edu/government/



Figure 6. Internet2 national 100 GbE SDN across the U.S.

The program actively engages the community to develop new technologies, including middleware, security, network research, and performance measurement capabilities, which drive the development of new applications. Internet2 is the 100 GbE OpenFlow WAN that feeds the Science-DMZ constructs mentioned earlier. As an important partner of Internet2, Brocade provides a national and global routing platform for these entities to exchange data and interact as a larger community worldwide. The Brocade NetIron CER 2000 and MLXe Series of routers provide the best combination of 10 GbE and 100 GbE density and flexibility for many of the Internet2 WAN edges. The hybrid-mode capability of Brocade hardware also provides a transport medium for IPv6 traffic and OpenFlow protocol data that can use the open exchange point architecture of the National Research and Education Network (NREN) to link Internet2 sites worldwide over the Brocade MPLS implementation.

The data centers that provide compute power for R&D institutions rely on backup and disaster recovery data centers, or COOP sites, which are often idle. These sites are part of a legacy "active-passive" dual-site design, and they become an expensive "black hole" used only during outages or disaster scenarios. Brocade solutions provide the ability to relocate virtual server hotspots to underutilized data centers at regionally dispersed locations. This technology, called Global Server Load Balancing (GSLB), provides a regional load balancing capability that can reallocate storage and networking assets and use them as regionally dispersed user access sites (a simplified site-selection sketch follows the list below). This active-active data center architecture from Brocade is enhanced with VCS Fabric technology and SAN extensions. You can relocate data center workloads between sites worldwide, without disrupting applications, to avoid disasters, load balance and consolidate, leverage cloud infrastructure, optimize power consumption, and perform maintenance. You can use current MPLS/VPLS WAN protocol technology and Brocade storage array partners to create a full disaster avoidance strategy. The Brocade global cloud service data center architecture can now provide full disaster avoidance: it fully utilizes resources when and where they are needed and dynamically reallocates them when they are not. Brocade does all of this in minutes instead of the days or weeks required by older Spanning Tree Protocol (STP)-based data center architectures. Brocade enables this cloud services model by employing several underlying technical innovations:

• Proven Brocade DCX SAN directors with Fibre Channel over IP (FCIP) to extend FC storage networks across the WAN
• A Layer 2 virtual machine mobility network that uses Brocade VCS Fabric technology, which is based on the IETF TRILL standard
• Core Brocade MPLS routers with Virtual Private LAN Service (VPLS) for VLAN extension to multiple sites
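Below is a highly simplified sketch of the kind of decision GSLB makes: answering a DNS-style query with the address of the healthiest, closest active site. The site names, addresses, and health-check results are hypothetical placeholders; a production GSLB implementation on an application delivery controller would also weigh load, persistence, and geographic policy.

```python
# Simplified GSLB site-selection sketch: pick an active, healthy, nearby site.
# All sites, addresses, and distances below are hypothetical.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    vip: str            # virtual IP that the site answers on
    healthy: bool       # result of a health probe (hypothetical)
    distance_ms: float  # measured round-trip time from the requesting region

def resolve(sites: list[Site]) -> str:
    """Return the VIP of the closest healthy site, emulating a GSLB DNS answer."""
    candidates = [s for s in sites if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy sites -- disaster recovery plan applies")
    best = min(candidates, key=lambda s: s.distance_ms)
    return best.vip

sites = [
    Site("east-coop", "192.0.2.10", healthy=True, distance_ms=12.0),
    Site("west-primary", "198.51.100.10", healthy=False, distance_ms=48.0),  # down for maintenance
]
print(resolve(sites))  # answers with the formerly idle COOP site: 192.0.2.10
```

The point of the active-active design is that the "backup" site in this example does useful work every day rather than only absorbing traffic during an outage.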



SUMMARY: WHAT DIFFERENTIATES BROCADE?

Brocade networking solutions help research organizations transition smoothly to a world where applications and information can reside anywhere. Brocade leads the industry with pioneering solutions for research and HPC clouds using Software-Defined Networking (SDN) and flexible Ethernet fabrics. Brocade VDX Switches and VCS Fabric technology are the foundation for building large-scale Ethernet fabrics that address the unprecedented challenges of big data and Web 2.0 applications. In addition, OpenFlow-enabled Brocade MLXe routers pave the way for the improved network analytics and optimization required for SDN. Currently, more than 250 research networks around the world run Brocade products and solutions, ranging from the U.S. Navy SPAWAR labs to supercomputing initiatives at the CERN Large Hadron Collider, Sandia's Red Storm environment, and the European Centre for Medium-Range Weather Forecasts. The world's leading research institutions rely on Brocade to drive the innovation and high performance needed to solve some of the world's biggest problems.

NEWS ON BROCADE SDN FOR FEDERAL IT

On November 5, 2012, Brocade announced the acquisition of Vyatta, a pioneer in software networking. Vyatta (Sanskrit for "open") solutions deliver routing, firewall, and security functions in software that can be embedded into any server or virtual machine, letting researchers work faster and collaborate in a more secure, more virtualized environment than ever before.


© 2013 Brocade Communications Systems, Inc. All Rights Reserved. 10/13 GA-TB-486-00 ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX, MyBrocade, OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The Effortless Network, and The On-Demand Data Center are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners. Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.
