Building Cloud Foundry PaaS with PLUMgrid ONS

White Paper

Introduction

PLUMgrid provides a truly scalable, secure, multi-tenant virtual network infrastructure for Cloud Foundry deployments on OpenStack. The deployment is made secure and scalable through Virtual Domains, which provide built-in secure tenant isolation for each application developer and scale automatically as the number of application developers grows. This document describes a blueprint that captures the key aspects of deploying and growing PLUMgrid ONS with an OpenStack Distro. The blueprint also references generic physical infrastructure design elements that, when combined with the key PLUMgrid ONS features, provide a highly available and scalable solution supporting cloud-scale workloads.

Cloud Foundry VLAN Networking Overview

Cloud Foundry is the leading enterprise PaaS (Platform-as-a-Service) and delivers an always-available, turnkey experience for scaling and updating PaaS on the private cloud. The following figure shows a typical networking layout for a Cloud Foundry deployment:

Figure 1: Networking Layout for Cloud Foundry

As shown in the figure, VLAN-based networking has to support the various communications between the key categories of infrastructure required to deliver services on Cloud Foundry.

These include:
• Elastic Runtime
• Application Services
• OpenStack Distro
• Management Infrastructure

Drawbacks of VLAN-based Networking

The VLAN-based networking currently recommended for Cloud Foundry deployments is not easy to secure or scale. VLANs were not built for a multi-tenant cloud environment, where you don't know in advance which VLANs will live where or which tenants will be assigned to which VLANs; it is a highly dynamic environment. This means that, as part of the network provisioning process, you would have to reconfigure your switch fabric every time you place a new VM. This is exceptionally difficult to scale as the application developers and users of your Cloud Foundry platform grow over time. Further, if you simply tag all VLANs down to all hypervisors, you risk introducing a core security problem: an attacker who breaches a hypervisor gains access to every tenant network. On top of the operational churn, the VLAN ID is a 12-bit field, capping the fabric at 4094 usable segments. To manage this complexity with zero-time, zero-touch provisioning, VLAN-based networking is insufficient.
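To make the operational drawback concrete, here is a minimal, purely illustrative Python sketch of the switch churn that VLAN-based networking implies. The IOS-style commands, port name, and VLAN ID are hypothetical examples, not taken from any specific deployment:

    def tor_commands_for_vm_placement(tor_port, tenant_vlan):
        """Config lines that would have to be pushed to a ToR for one VM placement."""
        return [
            "configure terminal",
            f"vlan {tenant_vlan}",
            f"interface {tor_port}",
            f"switchport trunk allowed vlan add {tenant_vlan}",
            "end",
        ]

    # One new VM on one hypervisor means one more touch of the physical fabric;
    # multiply by every placement, migration, and teardown across the cloud.
    for line in tor_commands_for_vm_placement("TenGigE1/0/12", 1337):
        print(line)

Every placement, migration, and teardown triggers another reconfiguration of the physical fabric, which is exactly the provisioning burden the approach described next removes.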

Virtual Network Infrastructure for Cloud Foundry Deployment

To run a successful, scalable, and secure Cloud Foundry deployment, it is important to have a Virtual Network Infrastructure (VNI), or Software Defined Network (SDN), that scales dynamically, assists automated deployment of the application environment, and secures traffic from application endpoints to the corresponding VM endpoints. It eliminates the worry of reaching VLAN ID limits and making the corresponding physical configuration changes on switches. Further, the VNI ensures separation of, and secure access between, the various components of the Cloud Foundry platform in a multi-tenant cloud infrastructure.

PLUMgrid ONS, via Virtual Domains and its scalable platform, provides secure tenant isolation for a multi-tenant environment. Virtual Domains can be used to isolate different networks based on application requirements and the timing of resource consumption. They also seamlessly provide interconnectivity where required, e.g. Cloud Foundry to OpenStack Distro to application, and can be automated as part of the resource provisioning framework for application developers.

Key Considerations for Deploying PLUMgrid ONS with Cloud Foundry in an OpenStack Environment

The following sections provide an overview of the key considerations for the various pieces of a Cloud Foundry design with PLUMgrid ONS and an OpenStack Distro.

Physical Infrastructure

The physical infrastructure consists of the physical server and network infrastructure.

Figure 2: Physical Infrastructure Overview

Server Infrastructure

A minimum of five physical servers is required: three host the management infrastructure components and two perform the PLUMgrid Gateway function to support connectivity between the physical and virtual network infrastructure. It is highly recommended that all five servers be configured homogeneously in terms of CPU and memory to ensure optimal resource management, with at minimum dual Intel Sandy Bridge generation CPUs and 256GB+ of memory.

Management

The three infrastructure servers are configured as a management cluster, ideally with a shared storage system, e.g. NFS, iSCSI, or FC, to ensure high availability of at-rest data between infrastructure hosts. These servers require dual 10GigE physical connections for the PLUMgrid Fabric as well as two GigE connections for management traffic. The pair of 10GigE connections is bonded together to form a single highly available connection to the physical network. All the management servers are typically located in a management rack, but for increased availability it is recommended to physically separate them across two or more physical racks, ideally not in close proximity, e.g. in different rows of the data center. All management servers run at minimum an Ubuntu 12.04 LTS Server x64 base install with the KVM and SSH packages added.

Gateway

The two PLUMgrid Gateway servers are configured as a highly available Active/Active gateway function, with traffic processed on both depending on the specific platform configuration. These servers require four 10GigE physical connections: two for the PLUMgrid Fabric and two for physical network infrastructure connectivity, supporting high-speed redundant connectivity between the physical and virtual network infrastructure. Both pairs of 10GigE connections are bonded together to form a single highly available connection to the physical network.

Compute

Compute servers, also known as edges, host customer workloads and are required to be homogeneously configured in all aspects of CPU, memory, internal hard drives, and connectivity. The ideal minimum configuration is dual Intel Sandy Bridge generation CPUs and 256GB+ of memory, with two SSD drives, four SAS drives, and the remaining bays populated with SATA drives (at least four) for an optimal distributed I/O subsystem. Connectivity requires two 10GigE connections to the PLUMgrid Fabric and a single GigE connection to the management network. IPMI capability is required on these servers, and hence a second GigE connection runs to each compute server's IPMI port, which is usually a dedicated out-of-band port in addition to the regular connectivity ports. To support high availability of customer workloads, it is imperative that the servers be physically placed across two or more physical racks, ideally not in close proximity, e.g. in different rows of the data center. Compute servers also run the OpenStack Distro and are provisioned during OpenStack Distro deployment, requiring no prior configuration.
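Since both the Management and Gateway roles depend on bonded 10GigE pairs, it can be useful to verify bond health from the host. Below is a minimal sketch that parses the Linux bonding driver's status file; the bond name bond0 is an assumption and should match the actual interface naming on your hosts:

    def bond_is_healthy(bond="bond0"):
        """Return True if the bond and every slave NIC report 'MII Status: up'."""
        try:
            with open(f"/proc/net/bonding/{bond}") as f:
                status = [ln.strip() for ln in f if ln.strip().startswith("MII Status:")]
        except IOError:
            return False  # bonding driver not loaded or bond not configured
        # The first entry is the bond itself; the rest are the slave interfaces.
        return bool(status) and all(ln.endswith("up") for ln in status)

    print("bond0 healthy:", bond_is_healthy())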

Physical Network Infrastructure

Management

A single management switch in each rack carries all management traffic via dedicated interfaces from the physical servers, including the IPMI interface that is used to power cycle physical servers for network booting as well as for remote access for troubleshooting and support purposes.
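As a sketch of how that IPMI interface might be driven during provisioning, the snippet below shells out to the standard ipmitool CLI to request a PXE boot and power cycle a server. The host address and credentials are placeholders, and your provisioning framework may wrap this differently:

    import subprocess

    def ipmi(host, user, password, *args):
        """Run one ipmitool command against a server's out-of-band IPMI port."""
        subprocess.check_call(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password]
            + list(args)
        )

    def network_boot(host, user, password):
        # Request PXE boot on next startup, then power cycle the chassis.
        ipmi(host, user, password, "chassis", "bootdev", "pxe")
        ipmi(host, user, password, "chassis", "power", "cycle")

    # network_boot("10.0.0.21", "admin", "secret")  # placeholder host/credentials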

PLUMgrid Fabric

An L3 leaf-spine 10GigE fabric is the optimal physical network infrastructure to ensure the highest-performing, highly available fabric for transporting all virtual network infrastructure communications. The fabric is implemented with two ToR (Top of Rack) switches per rack, configured as a single logical switch to maximize the availability of in-rack server connectivity. The ToRs are then connected to a spine switch infrastructure via either 10GigE or 40GigE for maximum performance.

PLUMgrid Gateway

The PLUMgrid Hardware Gateways are similar in capacity to the ToRs but differ in that they are managed and operated as PLUMgrid components, e.g. for software updates, and they utilize L2 tagged traffic for all external networks that have to connect to the virtual infrastructure networks. They are typically connected directly to the data center router and bypass the L3 leaf-spine fabric, as their purpose is to provide external connectivity.

Virtual Network Infrastructure (VNI)

The VNI for a Cloud Foundry deployment consists of a PLUMgrid Zone, Virtual Domains, Virtual Network Functions, and 3rd-party services such as FWaaS and LBaaS.

PLUMgrid Zone

A single PLUMgrid Zone is deployed to provide all virtual network infrastructure services to the OpenStack infrastructure as well as to network boot physical servers. Below is an overview of the PLUMgrid Zone:

Figure 3: PLUMgrid Platform Zone Details

OpenStack Virtual Domain

A single Virtual Domain (VD) is created to support all network infrastructure for OpenStack Distro operation, from network booting to storage and OpenStack services. The key objective of having a VD for the OpenStack Distro is to support the L2 requirements of the Host and Services networks. On the Host network, a software-defined storage system such as Ceph requires a single L2 segment for replication and operation. On the Services network, the virtual IPs for the various OpenStack APIs require a single L2 segment for heartbeat availability, ensuring the APIs are consistently reached at the same predictable IP.
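The reason the API virtual IPs need a single L2 segment is that VIP failover is announced with a gratuitous ARP, which only propagates within one broadcast domain. A minimal sketch of such an announcement using scapy is shown below; the VIP, MAC, and interface values are placeholders:

    from scapy.all import ARP, Ether, sendp

    def announce_vip(vip, mac, iface):
        """Broadcast a gratuitous ARP claiming `vip` for `mac`."""
        garp = Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") / ARP(
            op=2, hwsrc=mac, psrc=vip, hwdst="ff:ff:ff:ff:ff:ff", pdst=vip
        )
        sendp(garp, iface=iface, verbose=False)

    # announce_vip("10.0.10.5", "52:54:00:ab:cd:ef", "eth0")  # placeholder values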

Virtual Domains

A key requirement of Cloud Foundry is access to the OpenStack APIs for various provisioning workflows, e.g. Nova, Neutron, Swift, etc. To support secure access to these core infrastructure APIs while still allowing workloads access to the internet, several Virtual Domains are deployed to provide absolute, secure isolation. Additionally, the standard network topologies created via Neutron are extended to include an additional external network supporting secure access to the OpenStack APIs, as sketched after Figure 4.

Figure 4: Virtual Domains (A)
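One plausible way to model this extension with stock python-neutronclient calls is sketched below: an extra external network fronting the OpenStack APIs, plus a dedicated router that gives a tenant subnet access to it. All names, CIDRs, credentials, endpoints, and the tenant subnet ID are placeholders, and the exact PLUMgrid ONS / Virtual Domain workflow may differ from this plain-Neutron rendering:

    from neutronclient.v2_0 import client

    neutron = client.Client(
        username="admin", password="secret", tenant_name="admin",
        auth_url="http://controller:5000/v2.0",  # placeholder Keystone endpoint
    )

    # External network representing the secured path to the OpenStack APIs.
    api_net = neutron.create_network(
        {"network": {"name": "openstack-api-ext", "router:external": True}}
    )["network"]
    neutron.create_subnet({"subnet": {
        "network_id": api_net["id"],
        "ip_version": 4,
        "cidr": "192.0.2.0/24",   # placeholder CIDR
        "enable_dhcp": False,
    }})

    # Dedicated router: gateway on the API network, interface on the tenant
    # subnet, so workloads reach the APIs without using the internet path.
    router = neutron.create_router({"router": {"name": "api-access"}})["router"]
    neutron.add_gateway_router(router["id"], {"network_id": api_net["id"]})
    neutron.add_interface_router(router["id"], {"subnet_id": "TENANT_SUBNET_ID"})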

Finally, a Service Virtual Domain is created to host all the external network configuration items, isolating the external network configuration from the various tenant Virtual Domains. You should also refer to the specific OpenStack Distro Cloud Foundry deployment guide when deploying Cloud Foundry.

Figure 5: Virtual Domains (B)

Virtual Domains address the scalability, instant provisioning, and security requirements of a Cloud Foundry deployment.

PLUMgrid and 3rd-Party (FWaaS and LBaaS) Virtual Network Functions

VNFs are used to build a virtual network topology for each Virtual Domain in the PLUMgrid console, without interacting with physical network hardware, using a simple click-and-drop method in the GUI. Automatic IP assignment and real-time network build-out happen as your PaaS deployment grows. You can also build the network via the RESTful APIs provided in PLUMgrid ONS and make it part of an automated script that deploys resources for applications, as sketched below. With your physical and virtual network infrastructure up and running, you should be able to provide your application developers instant, automatic access to infrastructure resources.
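As a minimal sketch of such an automated script, the snippet below drives the standard Neutron REST endpoints (which PLUMgrid ONS backs) with the requests library to provision an isolated network, subnet, and router for a new application developer. The endpoint URL, token, names, and CIDR are placeholders, and the PLUMgrid-native REST API is not shown here:

    import requests

    NEUTRON = "http://controller:9696/v2.0"          # placeholder API endpoint
    HEADERS = {"X-Auth-Token": "KEYSTONE_TOKEN",     # placeholder token
               "Content-Type": "application/json"}

    def post(path, body):
        """POST one resource to the Neutron API and return the parsed reply."""
        resp = requests.post(f"{NEUTRON}/{path}", json=body, headers=HEADERS)
        resp.raise_for_status()
        return resp.json()

    def provision_developer(name, cidr):
        """Create an isolated network, subnet, and router for one developer."""
        net = post("networks", {"network": {"name": f"{name}-net"}})["network"]
        subnet = post("subnets", {"subnet": {
            "network_id": net["id"], "ip_version": 4, "cidr": cidr,
        }})["subnet"]
        router = post("routers", {"router": {"name": f"{name}-rtr"}})["router"]
        # Attach the developer's subnet to the router for connectivity.
        resp = requests.put(
            f"{NEUTRON}/routers/{router['id']}/add_router_interface",
            json={"subnet_id": subnet["id"]}, headers=HEADERS,
        )
        resp.raise_for_status()
        return {"network": net, "subnet": subnet, "router": router}

    # provision_developer("dev-alice", "10.10.1.0/24")  # placeholder values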

Conclusion

This blueprint provides the key building blocks, based on PLUMgrid ONS, for building a scalable, secure, and always-available Virtual Network Infrastructure for a Cloud Foundry based Platform-as-a-Service environment.

PLUMgrid is a leader in secure and scalable software-defined networking (SDN) solutions for OpenStack® clouds. To learn more about PLUMgrid, visit: http://www.plumgrid.com/contact-us/

©2015 PLUMgrid, Inc. All rights reserved.