Configuration Guide

Cisco UCS Mini, Nimble Storage, and Citrix XenDesktop 7.6 500-Seat, Mixed Workload on Cisco UCS B200 M3 Blade Servers

February 2015

© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.


Contents

Executive Summary
Cisco UCS Mini: Edge-Scale Solution
  Cisco UCS B200 M3 Blade Server
  Cisco UCS 6324 Fabric Interconnect
  Cisco UCS Manager
  Nimble Storage Adaptive Flash Platform
  Nimble Storage CS300 Array
  Nimble Storage CS300 Volume Monitoring
  Nimble Storage CS300, Cisco UCS, and Host Connectivity
VMware vSphere 5.5
  VMware ESXi 5.5 Hypervisor
Citrix XenApp and XenDesktop 7.6
  Citrix Provisioning Services 7.6
  Benefits for Citrix XenApp and Other Server Farm Administrators
  Benefits for Desktop Administrators
  Citrix Provisioning Services Solution
  Citrix Provisioning Services Infrastructure
Test Configuration
  Hardware Components
  Software Components
  Cisco UCS Mini Service Profile Configuration
Building the Virtual Machines and Environment
  Software Infrastructure Configuration
  Citrix XenApp and XenDesktop Virtual Desktop Configuration
  Provisioning Citrix XenApp and XenDesktop Virtual Desktop Machines
  Citrix XenDesktop Policies and Profile Management
Test Methodology
  User Workload Simulation: Login Virtual Session Indexer
  Test Procedure
Solution Validation
  Single-Server Citrix XenApp (RDS) Testing, 190 Users
  Single-Server Citrix XenDesktop (VDI) Testing, 150 Users
  Full-Scale Mixed-Workload Testing, 500 Users
Conclusion
For More Information


Executive Summary

Enterprises are seeking to balance the need for large, centralized data centers and the need for excellent user experiences in remote and branch offices with larger user communities. Small and medium-sized businesses are seeking ways to run a compact, self-contained computing infrastructure that is economical and efficient and that offers the potential for growth. Desktop virtualization can help meet these challenges. However, for midsize customers, one of the main barriers to entry is the capital expense required to deploy proof-of-concept (PoC), pilot, and development environments. For smaller customers, deployment of a desktop virtualization system for fewer than 300 users is cost prohibitive.

To overcome these entry-point barriers, Cisco has developed a self-contained desktop virtualization solution that can host 500 Citrix XenDesktop-based virtual desktops. This architecture uses nonpersistent virtual desktop infrastructure (VDI) desktops and Remote Desktop Services (RDS) server desktops on a four-blade Cisco UCS® Mini platform using Cisco UCS B200 M3 Blade Servers with a Nimble Storage system. The complete system hosts the following required infrastructure as well:

● VMware vSphere 5.5 Update 1
● VMware vCenter 5.5
● Microsoft Active Directory domain controllers
● Microsoft Windows Server 2012
● Microsoft SQL Server 2012
● Microsoft file server for user data and user profiles
● Citrix XenDesktop 7.6
● Citrix Provisioning Services 7.6

The Cisco UCS Mini configuration used for this validation is:

● 1 Cisco UCS 5108 Blade Server Chassis
● 2 Cisco UCS 6324 Fabric Interconnects
● 4 Cisco UCS B200 M3 Blade Servers
  ◦ 2 Intel® Xeon® processor E5-2660 v2 10-core 2.2-GHz CPUs per blade
  ◦ 128 GB of 1866-MHz memory (8 x 16-GB DIMMs) per infrastructure blade
  ◦ 256 GB of 1866-MHz memory (16 x 16-GB DIMMs) per VDI blade
  ◦ 1 Cisco UCS Virtual Interface Card (VIC) 1240 converged network adapter (CNA) per blade

The Nimble Storage array used for this validation is:

● Nimble Storage CS300 with 10.92 terabytes (TB) of raw capacity (12 x 1-TB 7200-rpm SATA hard-disk drives [HDDs]), 1.09 TB of solid-state drive (SSD)–based flash memory, and dual 10 Gigabit Ethernet data network interface card (NIC) connections per controller



Note: Two controllers operate in an active-standby redundant configuration on the Nimble Storage CS300.

This reference architecture places all required virtual machine–based infrastructure components, such as Microsoft Active Directory (AD) domain controllers and Microsoft SQL Server, within a self-contained design alongside a shared desktop workload. These components can optionally be hosted elsewhere, with scalability tested and validated accordingly. The configuration used in these tests provides an excellent virtual desktop end-user experience for 500 mixed-use-case sessions, as measured by the test tool Login Virtual Session Indexer (VSI), at a highly competitive price. As with any solution deployed to users with data storage requirements, a data protection solution must be deployed to help ensure the continuity of the user data. Such a solution can be deployed on the host or on storage and is outside the scope of this document.

Scalability guidance: The testing described in this document shows that the four-blade Cisco UCS Mini with Nimble Storage CS300 easily supported a total of 500 desktop sessions in a mixed VDI and RDS configuration. You can double the user density by adding up to four more blades to the Cisco UCS Mini chassis. The Nimble Storage CS300 can easily support this workload in both capacity and performance, and the software infrastructure components that comprise Citrix XenDesktop can likewise support the increased load.
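For rough planning, the per-blade densities validated later in this guide (190 RDS sessions and 150 VDI sessions per blade; see Solution Validation) can be combined into a quick estimate. The Python sketch below is illustrative only: the blade splits are assumptions, and real sizing must also account for infrastructure overhead and failover headroom.

```python
# Back-of-envelope sizing sketch using the single-server densities validated
# later in this document. The blade splits passed in are illustrative
# assumptions, not configuration data.

RDS_SESSIONS_PER_BLADE = 190   # single-server XenApp (RDS) result
VDI_SESSIONS_PER_BLADE = 150   # single-server XenDesktop (VDI) result

def mixed_session_estimate(rds_blades, vdi_blades):
    """Theoretical upper-bound session count for a given blade split."""
    return (rds_blades * RDS_SESSIONS_PER_BLADE
            + vdi_blades * VDI_SESSIONS_PER_BLADE)

# Tested chassis (2 infrastructure/RDS blades + 2 VDI blades): the 500-user
# mix (200 RDS + 300 VDI) sits below this theoretical peak.
print(mixed_session_estimate(rds_blades=2, vdi_blades=2))   # 680

# Adding four more blades roughly doubles the estimate, consistent with the
# scalability guidance above.
print(mixed_session_estimate(rds_blades=4, vdi_blades=4))   # 1360
```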

Cisco UCS Mini: Edge-Scale Solution

With Cisco UCS Mini, the Cisco Unified Computing System™ (Cisco UCS), originally designed for the data center, is now optimized for branch and remote offices, point-of-sale (PoS) sites, and smaller IT environments. Cisco UCS Mini is designed for customers who need fewer servers but still want the robust management capabilities provided by Cisco UCS Manager. This solution delivers servers, storage, and 10 Gigabit networking in an easy-to-deploy, compact form factor. The solution includes these components (Figure 1):

● Cisco UCS B200 M3 Blade Server: Delivering performance, versatility, and density without compromise, the Cisco UCS B200 M3 Blade Server addresses a broad set of workloads.
● Cisco UCS 5108 Blade Server Chassis: The chassis can accommodate up to eight half-width Cisco UCS B200 M3 Blade Servers.
● Cisco UCS 6324 Fabric Interconnect: Embedded in the Cisco UCS 5108 Blade Server Chassis, the Cisco UCS 6324 provides the same unified server and networking capabilities as the top-of-rack (ToR) Cisco UCS 6200 Series Fabric Interconnects.
● Cisco UCS Manager: Cisco UCS Manager provides unified, embedded management of all software and hardware components in a Cisco UCS Mini solution.


Figure 1. Cisco UCS Mini Components

Cisco UCS B200 M3 Blade Server

Delivering performance, versatility, and density without compromise, the Cisco UCS B200 M3 Blade Server (Figure 2) addresses a broad set of workloads, from IT and web infrastructure to distributed databases. View the video data sheet to see how to boost density and performance without compromise using Cisco's blade server. The enterprise-class Cisco UCS B200 M3 further extends the capabilities of the Cisco UCS portfolio in a half-blade form factor. The Cisco UCS B200 M3 harnesses the power of the Intel Xeon processor E5-2600 and E5-2600 v2 product families and offers up to 768 GB of RAM, two hard drives, and up to eight 10 Gigabit Ethernet ports to deliver exceptional levels of performance, memory expandability, and I/O throughput for nearly all applications.

In addition, Cisco UCS has the architectural advantage of not having to power and cool switches in each blade chassis. Having a larger power budget available for blades enables Cisco to design uncompromised expandability and capabilities into its blade servers, as evidenced by the Cisco UCS B200 M3 and its leading memory and drive capacities, resulting in outstanding performance.

The Cisco UCS 5108 Blade Server Chassis can house up to eight Cisco UCS B200 M3 Blade Servers or a combination of Cisco UCS B200 M3 servers and other Cisco UCS blade servers.

The Cisco UCS B200 M3 offers these features and capabilities:

● Suitable for a wide range of applications and workload requirements
● Exceptional building block for the Cisco Unified Computing System
● Half-width form factor offering industry-leading benefits and features without compromise
● Cisco UCS VIC 1240 designed for the M3 generation of Cisco UCS B-Series Blade Servers


Figure 2. Cisco UCS B200 M3 Blade Server

Cisco UCS 6324 Fabric Interconnect

The Cisco UCS 6324 Fabric Interconnect (Figure 3) provides the management, LAN, and storage connectivity for the Cisco UCS 5108 Blade Server Chassis and direct-connect rack-mount servers. It provides the same full-featured Cisco UCS management capabilities and XML API as the full-scale Cisco UCS solution, in addition to integrating with Cisco UCS Central Software and Cisco UCS Director.

From a networking perspective, the Cisco UCS 6324 Fabric Interconnect uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, switching capacity of up to 500 Gbps, and 80-Gbps uplink bandwidth for each chassis, independent of packet size and enabled services. Sixteen 10-Gbps links connect to the servers, providing a 20-Gbps link from each Cisco UCS 6324 Fabric Interconnect to each server. The product family supports Cisco® low-latency, lossless 10 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnect supports multiple traffic classes over a lossless Ethernet fabric, from the blade through the fabric interconnect. Significant savings in total cost of ownership (TCO) can be achieved from the Fibre Channel over Ethernet (FCoE)–optimized server design, in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.

The Cisco UCS 6324 Fabric Interconnect is built to consolidate LAN and storage traffic onto a single unified fabric, eliminating the capital expenditures (CapEx) and operating expenses (OpEx) associated with multiple parallel networks, different types of adapter cards, switching infrastructure, and cabling within racks. The unified ports allow the fabric interconnect to support direct connections from Cisco UCS to Fibre Channel, FCoE, and Small Computer System Interface over IP (iSCSI) storage devices.

For virtualized environments, the Cisco UCS 6324 Fabric Interconnect supports Cisco virtualization-aware networking and Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) architecture. Cisco Data Center VM-FEX allows the interconnects to provide policy-based virtual machine connectivity, with network properties moving with the virtual machine and a consistent operational model for both physical and virtual environments.

The Cisco UCS 6324 Fabric Interconnect is a 10 Gigabit Ethernet, FCoE, and Fibre Channel switch offering up to 500-Gbps throughput and up to four unified ports and one scalability port.


Figure 3. Cisco UCS 6324 Fabric Interconnect

Cisco UCS Manager

The Cisco UCS 6324 Fabric Interconnect hosts and runs Cisco UCS Manager in a highly available configuration, enabling the fabric interconnects to fully manage all Cisco UCS elements. The Cisco UCS 6324 Fabric Interconnect supports out-of-band management through a dedicated 10/100/1000-Mbps Ethernet management port. Cisco UCS Manager typically is deployed in a clustered active-passive configuration on two Cisco UCS 6324 Fabric Interconnects connected through the cluster interconnect built into the chassis.

Nimble Storage Adaptive Flash Platform

The Nimble Storage Adaptive Flash platform dynamically and intelligently deploys storage resources to meet the growing demands of business-critical applications. It is the first storage solution to eliminate the flash-memory performance and capacity trade-off. Adaptive Flash combines the Nimble Storage Cache Accelerated Sequential Layout (CASL) architecture and Nimble Storage InfoSight, the company's innovative data sciences–based approach to the storage lifecycle. Nimble Storage CASL scales performance and capacity transparently and independently. Nimble Storage InfoSight uses the power of deep data analytics to provide customers with precise guidance on the optimal approach to scaling flash memory, CPU, and capacity to meet changing application needs, while helping ensure peak storage health.

Nimble Storage Adaptive Flash offers these main benefits:

● Scale storage performance and capacity independently and nondisruptively.
● Achieve enterprise-class flash storage performance and capacity in a small footprint.
● Protect your IT investment by eliminating the need for major system upgrades.
● Sustain peak health for your storage infrastructure with integrated protection, deep-data analytics, and efficient resiliency.


Nimble Storage CS300 Array

As part of the Nimble Storage Adaptive Flash platform, the Nimble Storage CS300 (Figure 4) is well suited for distributed sites of larger organizations and for midsize IT departments. It offers exceptional performance and capacity per dollar for workloads such as VDI, Microsoft applications, and virtual server consolidation. The Nimble Storage CS300 array offers the following benefits:

● Adaptive performance: Performance adapts to boot storms and I/O spikes because the flash cache is populated dynamically.
● Cost-effective capacity: Inline compression, high-capacity disk, and zero-copy cloning deliver capacity reductions of up to 75 percent.
● Business continuity: High availability and integrated data protection reduce downtime from local failures and larger sitewide disasters.
● Transparent scaling: Easily scale performance and capacity independently and without downtime.

Figure 4. Nimble Storage CS300 Array

The Nimble Storage CS300 was configured with multiple volumes presented to the Cisco UCS VDI cluster, as shown in Figure 5.

Figure 5. Nimble Storage Array Configuration


The Nimble Storage volumes were configured as follows (Figure 6):

● 4 x 10-GB volumes for VMware ESXi 5.5 iSCSI boot: Eliminates the need for local disks in the Cisco UCS B200 M3 blades
● 1.25-TB infrastructure volume for the VDI environment: Used by infrastructure virtual machines such as Microsoft Active Directory domain controllers, Microsoft SQL Servers, and Citrix XenDesktop desktop controllers
● 2-TB VDI machine volume for the Citrix Provisioning Services (PVS) write cache storage
● 2-TB RDS machine volume for the PVS write cache
● 500-GB volume for the PVS virtual disks (vDisks): Presented as a share through the Microsoft Windows file server virtual machine
● 500-GB volume for the user profiles: Presented as a share through the Windows file server virtual machine

Figure 6. Nimble Storage Volume Management
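As a quick cross-check of this layout, the sketch below tallies the provisioned capacity against the array's raw capacity. All volumes are thin-provisioned, so this is provisioned rather than consumed space; the data-reduction figure is an assumption for illustration, based on the up-to-75-percent reduction cited for the CS300 earlier in this document.

```python
# Tally of the Nimble Storage volume layout described above.

volumes_gb = {
    "ESXi boot (4 x 10 GB)":      4 * 10,
    "Infra":                      1.25 * 1024,
    "VDI write cache":            2 * 1024,
    "RDS write cache":            2 * 1024,
    "PVS vDisks (CIFS share)":    500,
    "User profiles (CIFS share)": 500,
}

provisioned_tb = sum(volumes_gb.values()) / 1024
print(f"Total provisioned: {provisioned_tb:.2f} TB of 10.92 TB raw")  # ~6.27 TB

# Illustrative assumption only: if every volume were fully written and the
# cited best-case 75% reduction held, consumed space would be much lower.
assumed_reduction = 0.75
print(f"Fully written at 75% reduction: "
      f"{provisioned_tb * (1 - assumed_reduction):.2f} TB")
```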

Each volume is thin-provisioned and attached from the Nimble Storage iSCSI SAN as a VMware Virtual Machine File System (VMFS) datastore. A performance policy can be set during or after volume creation. A variety of prebuilt policies can be used, or a custom policy can be created. Performance policies specify the configuration of the storage block size, compression, and caching. In Figure 7, the default performance policy is set for the Infra volume.

Figure 7. Nimble Storage Volume Management


Backup and recovery are integral components of any production environment. Nimble Storage protects data by supporting instant snapshots for easy backup and restoration, along with efficient replication for disaster recovery. Volumes can be placed on a protection schedule during their creation or at any time afterward. Volume collections offer an efficient way to group volume backup operations, allowing multiple volumes to share the same snapshot schedule. A volume collection can consist of one or many volumes. In Figure 8, the volume Infra, which houses the VDI infrastructure components, is part of a volume collection called INFRA-DS. This collection contains multiple hourly snapshots. Only the compressed delta between snapshots is written to disk, further reducing the space required to house the snapshots.

Figure 8. Nimble Storage GUI: Snapshots
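Because only compressed deltas are retained, snapshot overhead can be estimated from change rate and retention. The sketch below illustrates the arithmetic; the change-rate and compression values are assumptions for illustration, not measured results from this validation.

```python
# Rough snapshot-space estimator for a volume collection like INFRA-DS.
# Retained space is approximately: change rate x retention, after compression.

volume_gb = 1.25 * 1024          # Infra volume size from the layout above
hourly_change_rate = 0.005       # assumption: 0.5% of the volume changes per hour
retained_snapshots = 24          # assumption: hourly snapshots kept for one day
compression = 0.50               # assumption: 2:1 compression on the deltas

snapshot_space_gb = (volume_gb * hourly_change_rate
                     * retained_snapshots * compression)
print(f"Estimated snapshot overhead: {snapshot_space_gb:.1f} GB")  # ~76.8 GB
```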

Nimble Storage CS300 Volume Monitoring

The capability to easily monitor volume activity is crucial to assessing current storage use, compression rates, connectivity, and performance. Nimble Storage offers administrators many ways to view, modify, and monitor volume performance and use. The Nimble Storage GUI and command-line interface (CLI) give administrators the tools and views they need to quickly and accurately perform volume tasks. Volume size, maintenance processes, snapshots, compression, and connectivity can all be viewed and controlled through both the GUI and the CLI, as shown in Figures 9 and 10.


Figure 9. Nimble Storage GUI: Volume Overview

Figure 10. Nimble Storage CLI: List of Volumes


Nimble Storage InfoSight takes a new approach to the storage lifecycle, using the power of deep-data analytics and cloud-based management to deliver true operational efficiency across all storage activities. InfoSight, an integral part of the Nimble Storage Adaptive Flash platform, helps ensure the peak health of storage infrastructure by identifying problems and offering solutions in real time. InfoSight provides expert guidance to help organizations deploy the right balance of storage resources, dynamically and intelligently, to meet the changing demands of business-critical applications. Figure 11 shows a volume view for the Nimble Storage CS300 in InfoSight.

Figure 11. Nimble Storage GUI: List of Volumes


Nimble Storage CS300, Cisco UCS, and Host Connectivity

The iSCSI initiators of the four Cisco UCS blades were defined in four initiator groups, each containing both the A-path and B-path iSCSI qualified names (IQNs). Nimble Storage allows a variety of initiator group formats to best meet the needs of an organization's environment and requirements. Figure 12 shows the initiator groups for all VMware ESXi hosts in the VDI environment. Each initiator group is constructed by adding the IQN information for each blade's A and B paths.

Figure 12. Nimble Storage GUI: Initiator Group
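The sketch below shows the shape of this mapping: one initiator group per ESXi host, with that host's fabric A and fabric B IQNs as members. The IQN strings are hypothetical placeholders in the style of Cisco UCS IQN pools; the real values come from each service profile's iSCSI vNICs.

```python
# Sketch of how the four initiator groups map blade IQNs to both fabrics.
# All names and IQNs below are hypothetical placeholders.

blades = ["vdi-1", "vdi-2", "vdi-3", "vdi-4"]

def initiator_group(blade):
    """One initiator group per ESXi host, holding its A- and B-path IQNs."""
    return {
        "name": f"esxi-{blade}",
        "initiators": [
            f"iqn.1992-08.com.cisco:ucs-{blade}-a",  # fabric A iSCSI vNIC
            f"iqn.1992-08.com.cisco:ucs-{blade}-b",  # fabric B iSCSI vNIC
        ],
    }

for group in map(initiator_group, blades):
    print(group["name"], "->", ", ".join(group["initiators"]))
```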

VMware vSphere 5.5

VMware's enterprise software hypervisors for servers (VMware ESX, ESXi, and vSphere) are bare-metal hypervisors that run directly on server hardware without requiring an additional underlying operating system. VMware vCenter Server for vSphere provides central management, with complete control and visibility into clusters, hosts, virtual machines, storage, networking, and other critical elements of the virtual infrastructure.

VMware ESXi 5.5 Hypervisor

VMware vSphere is the industry-leading virtualization platform for building private cloud infrastructure. It enables IT to meet service-level agreements (SLAs) for the most demanding business-critical applications with lower TCO. VMware vSphere accelerates the shift to cloud computing for existing data centers and also supports compatible public cloud offerings, forming the foundation for the industry's only hybrid cloud model and making VMware vSphere a trusted platform for any application.

VMware ESXi 5.5 is a bare-metal hypervisor: it installs directly on the physical server and partitions it into multiple virtual machines that run simultaneously, sharing the physical resources of the underlying server. VMware ESXi, introduced in 2007, delivers industry-leading performance and scalability while setting a new standard for reliability, security, and hypervisor management efficiency.

The latest release, VMware vSphere 5.5, introduces many new features and enhancements that extend the core capabilities of the VMware vSphere platform, including:

● VMware ESXi hypervisor enhancements: Hot-pluggable SSD PCI Express (PCIe) devices, support for VMware Reliable Memory Technology, and enhancements for CPU C-states
● Virtual machine enhancements: Virtual machine compatibility with VMware ESXi 5.5, extended graphics processing unit (GPU) support, and graphics acceleration for Linux guests
● VMware vCenter Server enhancements: VMware vCenter single sign-on (SSO), vCenter Server Appliance and vSphere Web Client enhancements, High Availability (HA), High Availability with application-level monitoring (App HA), Distributed Resource Scheduler (DRS), and Big Data Extensions
● VMware vSphere storage enhancements: Support for 62-TB VMware VMDK files, Microsoft Cluster Server (MSCS) updates, 16-Gbps end-to-end Fibre Channel support, vSphere flash-memory read cache, and replication multiple-point-in-time snapshot retention
● VMware vSphere networking enhancements: Link Aggregation Control Protocol (LACP) enhancements, traffic filtering, quality-of-service (QoS) tagging, single root I/O virtualization (SR-IOV) enhancements, enhanced host-level packet capture, and 40-Gbps NIC support
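For validations like this one, the vCenter inventory (hosts, clusters, datastores) is routinely verified programmatically. The sketch below is a minimal read-only example using the pyVmomi library, assuming it is installed and the vCenter instance is reachable; the host name and credentials are placeholders.

```python
# Minimal pyVmomi sketch: list the ESXi hosts known to vCenter and their
# datastores. Host name and credentials are placeholders; certificate
# validation is disabled only because this is a lab-style example.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",        # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder login
                  pwd="password",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        stores = ", ".join(ds.name for ds in host.datastore)
        print(f"{host.name}: {stores}")
    view.Destroy()
finally:
    Disconnect(si)
```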

Citrix XenApp and XenDesktop 7.6

Citrix XenApp and XenDesktop are application and desktop virtualization solutions built on a unified architecture, so they're simple to manage and flexible enough to meet the needs of all your organization's users. XenApp and XenDesktop have a common set of management tools that simplify and automate IT tasks. You use the same architecture and management tools to manage public, private, and hybrid cloud deployments as you do for on-premises deployments.

Citrix XenApp delivers:

● XenApp published apps, also known as server-based hosted applications: These are applications hosted from Microsoft Windows servers to any type of device, including Windows PCs, Macs, smartphones, and tablets. Some XenApp editions include technologies that further optimize the experience of using Windows applications on a mobile device by automatically translating native mobile-device display, navigation, and controls to Windows applications; enhancing performance over mobile networks; and enabling developers to optimize any custom Windows application for any mobile environment.
● XenApp published desktops, also known as server-hosted desktops: These are inexpensive, locked-down Windows virtual desktops hosted from Windows server operating systems. They are well suited for users, such as call center employees, who perform a standard set of tasks.
● Virtual machine–hosted apps: These are applications hosted from machines running Windows desktop operating systems for applications that can't be hosted in a server environment.
● Windows applications delivered with Microsoft App-V: These applications use the same management tools that you use for the rest of your XenApp deployment.


Citrix XenDesktop delivers:

● VDI desktops: These virtual desktops each run a Microsoft Windows desktop operating system rather than running in a shared, server-based environment. They can provide users with their own desktops that they can fully personalize.
● Hosted physical desktops: This solution is well suited for providing secure access to powerful physical machines, such as blade servers, from within your data center.
● Remote PC access: This solution allows users to log in to their physical Windows PCs from anywhere over a secure XenDesktop connection.
● Server VDI: This solution is designed to provide hosted desktops in multitenant, cloud environments.
● Capabilities that allow users to continue to use their virtual desktops: These capabilities let users continue to work while not connected to your network.

Some XenDesktop editions include the features available in XenApp. Release 7.6 of XenDesktop includes new features that make it easier for users to access applications and desktops and for Citrix administrators to manage applications:

● The session prelaunch and session linger features help users quickly access server-based hosted applications by starting sessions before they are requested (session prelaunch) and keeping application sessions active after a user closes all applications (session linger).
● Support for unauthenticated (anonymous) users means that users can access server-based hosted applications and server-hosted desktops without presenting credentials to Citrix StoreFront or Receiver.
● Connection leasing makes recently used applications and desktops available even when the site database is unavailable.
● Application folders in Citrix Studio make it easier to administer large numbers of applications.

Other new features in this release allow you to improve performance by specifying the number of actions that can occur on a site's host connection; display enhanced data when you manage and monitor your site; and anonymously and automatically contribute data that Citrix can use to improve product quality, reliability, and performance. For more information about the features new in this release, see the Citrix XenDesktop 7.6 release documentation.


Figure 13 presents a logical overview of XenDesktop.

Figure 13. Logical Architecture of Citrix XenDesktop

Citrix Provisioning Services 7.6

Most enterprises struggle to keep up with the proliferation and management of computers in their environments. Each computer, whether it is a desktop PC, a server in a data center, or a kiosk-type device, must be managed as an individual entity. The benefits of distributed processing come at the cost of distributed management. It costs time and money to set up, update, support, and ultimately decommission each computer. The initial cost of the machine is often dwarfed by operating costs.

Citrix PVS takes a very different approach from traditional imaging solutions by fundamentally changing the relationship between hardware and the software that runs on it. By streaming a single shared disk image (vDisk) rather than copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiency of centralized management and the benefits of distributed processing. In addition, because machines stream disk data dynamically and in real time from a single shared image, machine image consistency is essentially ensured. At the same time, the configuration, applications, and even the OS of large pools of machines can be completely changed in the time it takes the machines to reboot.

Using PVS, any vDisk can be configured in standard-image mode. A vDisk in standard-image mode allows many computers to boot from it simultaneously, greatly reducing the number of images that must be maintained and the amount of storage that is required. The vDisk is in read-only format, and the image cannot be changed by target devices.

Benefits for Citrix XenApp and Other Server Farm Administrators

If you manage a pool of servers that work as a farm, such as Citrix XenApp servers or web servers, maintaining a uniform patch level on your servers can be difficult and time consuming. With traditional imaging solutions, you start with a clean golden master image, but as soon as a server is built with the master image, you must patch that individual server along with all the other individual servers. Rolling out patches to individual servers in your farm is not only inefficient, but the results can also be unreliable. Patches often fail on an individual server, and you may not realize you have a problem until users start complaining or the server has an outage. After that happens, getting the server resynchronized with the rest of the farm can be challenging, and sometimes a full reimaging of the machine is required.

With Citrix PVS, patch management for server farms is simple and reliable. You start by managing your golden image, and you continue to manage that single golden image. All patching is performed in one place and then streamed to your servers when they boot. Server build consistency is assured because all your servers use a single shared copy of the disk image. If a server becomes corrupted, simply reboot it, and it is instantly back to the known good state of your master image. Upgrades are extremely fast to implement. After you have your updated image ready for production, you simply assign the new image version to the servers and reboot them. You can deploy the new image to any number of servers in the time it takes them to reboot. Just as important, rollback can be performed in the same way, so problems with new images do not need to take your servers or your users out of commission for an extended period of time.

Benefits for Desktop Administrators

Because Citrix PVS is part of Citrix XenDesktop, desktop administrators can use PVS's streaming technology to simplify, consolidate, and reduce the costs of both physical and virtual desktop delivery. Many organizations are beginning to explore desktop virtualization. Although virtualization addresses many of IT's needs for consolidation and simplified management, deploying it also requires deployment of supporting infrastructure. Without PVS, storage costs can make desktop virtualization too costly for the IT budget. However, with PVS, IT can reduce the amount of storage required for VDI by as much as 90 percent. And with a single image to manage instead of hundreds or thousands of desktops, PVS significantly reduces the cost, effort, and complexity of desktop administration.

Different types of workers across the enterprise need different types of desktops. Some require simplicity and standardization, and others require high performance and personalization. XenDesktop can meet these requirements in a single solution using Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user.

Not all desktop applications can be supported by virtual desktops. For these scenarios, IT can still reap the benefits of consolidation and single-image management. Desktop images are stored and managed centrally in the data center and streamed to physical desktops on demand. This model works particularly well for standardized desktops, such as those in lab and training environments and call centers, and for thin-client devices used to access virtual desktops.


Citrix Provisioning Services Solution

Citrix PVS streaming technology allows computers to be provisioned and reprovisioned in real time from a single shared disk image. With this approach, administrators can completely eliminate the need to manage and patch individual systems. Instead, all image management is performed on the master image. The local hard drive of each system can be used for runtime data caching or, in some scenarios, removed from the system entirely, which reduces power use, system failure rate, and security risk.

The PVS solution's infrastructure is based on software-streaming technology. After PVS components are installed and configured, a vDisk is created from a device's hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. A device used for this process is referred to as a master target device. The devices that use the vDisks are called target devices. vDisks can exist on a PVS server, on a file share, or, in larger deployments, on a storage system with which PVS can communicate (iSCSI, SAN, network-attached storage [NAS], and Common Internet File System [CIFS]). vDisks can be assigned to a single target device in private-image mode or to multiple target devices in standard-image mode.

When a target device is turned on, it is set to boot from the network and to communicate with a PVS system. Unlike with thin-client devices, processing takes place on the target device (step 1 in Figure 14).

Figure 14. Citrix Provisioning Services Solution

The target device downloads the boot file from a PVS system (step 2), and then the target device boots. On the basis of the device boot configuration settings, the appropriate vDisk is located and then mounted on the PVS system (step 3). The software on that vDisk is streamed to the target device as needed. To the target device, the vDisk appears like a regular hard drive in the system.

Instead of immediately pulling all the vDisk contents to the target device (as occurs with traditional imaging deployment solutions), the data is brought across the network in real time, as needed. This approach allows a target device to get a completely new operating system and set of software in the time it takes to reboot, without requiring a visit to a workstation. It also dramatically decreases the amount of network bandwidth required compared to traditional disk imaging tools, making it possible to support a larger number of target devices on your network without affecting overall network performance, as the sketch below illustrates.
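The bandwidth difference comes from each device touching only a fraction of the image. The toy comparison below makes the point; the working-set figure is an assumed value for illustration, not a measurement from this validation.

```python
# Toy comparison of full-image copying versus PVS-style on-demand streaming.

IMAGE_GB = 40          # size of a shared vDisk (matches the RDS vDisk in Table 2)
WORKING_SET_GB = 2.5   # assumption: blocks a device actually reads per boot/session

def copy_deployment_gb(devices):
    """Traditional imaging: every device pulls the full image."""
    return devices * IMAGE_GB

def pvs_streaming_gb(devices):
    """PVS standard image: each device streams only the blocks it uses."""
    return devices * WORKING_SET_GB

for n in (8, 300):
    print(f"{n:>3} devices: copy {copy_deployment_gb(n):>6.0f} GB "
          f"vs stream ~{pvs_streaming_gb(n):>5.0f} GB")
```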


Citrix Provisioning Services Infrastructure

The Citrix PVS infrastructure design directly relates to administrative roles within a PVS farm. The PVS administrator role determines which components that administrator can manage or view in the console. A PVS farm contains several components. Figure 15 provides a high-level view of a basic PVS infrastructure and shows how PVS components might appear within that implementation.

Figure 15. Logical Architecture of Citrix Provisioning Services

Test Configuration

The hybrid logical and physical diagrams in Figures 16 and 17 show the test configuration.

Figure 16. Nimble Storage SmartStack Physical Reference Architecture (Front View)


Figure 17. Nimble Storage SmartStack Physical Reference Architecture (Connectivity View)

Figure 18 shows the Nimble Storage SmartStack components.

Figure 18. Nimble Storage SmartStack Software Components


The objective of the project was to demonstrate how the Cisco UCS Mini SmartPlay bundle performs using a Nimble Storage CS300 array running Citrix XenDesktop 7.6. This design supports the required infrastructure components for a self-contained running environment, including the required Citrix infrastructure virtual machines, and up to 500 mixed-use-case virtual desktop users at a highly competitive price. Both Microsoft Windows 7 VDI desktops (300 users) and Microsoft Windows Server 2012 RDS servers (200 users) were deployed, demonstrating the flexibility of the solution.

Hardware Components

● 1 Cisco UCS 5108 Blade Server Chassis (UCSB-5108-AC2)
● 2 Cisco UCS 6324 Fabric Interconnects
● 2 Cisco UCS B200 M3 Blade Servers (2 Intel Xeon processor E5-2660 v2 CPUs at 2.2 GHz, with 128 GB of memory per blade server [8 x 16-GB DIMMs at 1866 MHz]) for infrastructure and XenApp RDS virtual machines
● 2 Cisco UCS B200 M3 Blade Servers (2 Intel Xeon processor E5-2660 v2 CPUs at 2.2 GHz, with 256 GB of memory per blade server [16 x 16-GB DIMMs at 1866 MHz]) for Citrix XenDesktop VDI virtual machines
● Cisco VIC 1240 CNA (1 per blade)
● Cisco Nexus® Family switch (Layer 3 through 7 connectivity)

Software Components

● Cisco UCS Firmware Release 3.0(0.191)
● VMware ESXi 5.5 Update 1 for host blades
● Citrix XenDesktop 7.6 and XenApp 7.6
● Citrix Provisioning Services 7.6
● Microsoft Windows 7 SP1 32-bit
● Microsoft Windows Server 2012
● Microsoft SQL Server 2012
● Microsoft Office 2010
● Login VSI 3.7


Cisco UCS Mini Service Profile Configuration

This section presents the steps for configuring the Cisco UCS Mini service profile.

1. Configure the fabric interconnects. Go to Equipment > Fabric Interconnects > Fabric Interconnect A (Figure 19). The configuration for Fabric Interconnect B is similar.

Figure 19. Configuration of Fabric Interconnect

2. Configure the Fabric Interconnect A Ethernet ports. In the Equipment list, under Fabric Interconnect A, choose Ethernet Ports (Figure 20).

Figure 20. Configuration of Ethernet Ports (Fabric Interconnect A Ports)


3. Configure the Fabric Interconnect B Ethernet ports. In the Equipment list, under Fabric Interconnect B, choose Ethernet Ports (Figure 21).

Figure 21. Configuration of Ethernet Ports (Fabric Interconnect B Ports)

4. Configure the appliance port as shown in Figure 22 (using port 3 for the storage appliance). The configuration for port 4 on Fabric Interconnect B is similar, with a different VLAN.

Figure 22. Configuration of Appliance Port (Fabric Interconnect A Ports)


5. Set up a total of four servers (Figure 23).

Figure 23. Total Number of Servers Used in This Configuration

6. Configure virtual network interface cards (vNICs). Choose Servers > Service Profiles > root > Sub-Organizations > UCS-MINI, choose a server, choose iSCSI vNICs, and then choose a vNIC (Figure 24).

Figure 24. Configuration of vNIC Used in This Configuration


7. Configure the vNIC VLANs for the Infra, Mgmt, VDI, and vMotion networks (Figure 25).

Figure 25. Configuration of VLANs for vNICs

8. Configure the iSCSI vNIC VLAN for the A-side storage (Figure 26).

Figure 26. Configuration of VLAN for vNICs for A-Side Storage


9. Configure the iSCSI vNIC VLAN for the B-side storage (Figure 27).

Figure 27. Configuration of VLAN for vNICs for B-Side Storage

10. Configure the iSCSI boot order for one of the servers. Choose Servers, select a service profile, and open the Boot Order tab (Figure 28).

Figure 28. iSCSI Boot Configuration of One of the Servers with vNIC A as Primary

Note: The iSCSI boot parameters for the B side are similar to those for the A side, with a different IP address configured.


Figure 29 shows the iSCSI boot parameters configured for one of the servers for the A-side vNIC.

Figure 29. iSCSI Boot Configuration for One of the Servers for vNIC A

Figure 30 shows the iSCSI boot parameters configured for one of the servers for the B-side vNIC.

Figure 30. iSCSI Boot Configuration for One of the Servers for vNIC B
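Configurations like the one built in these steps can be verified programmatically with the Cisco UCS Manager Python SDK (ucsmsdk). The sketch below is a minimal read-only example, assuming the SDK is installed and the Cisco UCS Mini management address is reachable; the address and credentials are placeholders.

```python
# Read-only ucsmsdk sketch: list the service profiles and their vNICs to
# confirm the configuration created in the steps above. The management
# address, user, and password are placeholders.

from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucs-mini-mgmt.example.local", "admin", "password")
handle.login()
try:
    for sp in handle.query_classid("lsServer"):     # service profiles
        print(sp.dn)
        for vnic in handle.query_children(in_mo=sp, class_id="vnicEther"):
            print("  vNIC:", vnic.name)
finally:
    handle.logout()
```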


11. Configure the vNIC template for one of the Infra servers. Choose LAN > Policies > root > vNIC Templates (Figure 31).

Figure 31. vNIC Template Configuration for One of the Servers for vNIC A

12. Create the network control policy (Figure 32).

Figure 32. Creation of Network Control Policy


13. Configure a vNIC template for one of the servers (Figure 33).

Figure 33. Configuration of vNIC Template for One of the Servers


14. Configure the BIOS policy. Choose Servers > Service Profiles > root > Sub-Organizations > UCS-Mini and select a server. Then configure the settings.

a. Choose Advanced > Processor (Figure 34).

Figure 34. Configuration of BIOS Policy Advanced Processor Settings

b. Choose Advanced > Intel Directed IO (Figure 35).

Figure 35. Configuration of BIOS Policy Intel Directed IO Settings


c. Choose Advanced > RAS Memory (Figure 36).

Figure 36. Configuration of BIOS Policy RAS Memory Settings

Building the Virtual Machines and Environment

Software Infrastructure Configuration

This section presents the configuration for the software infrastructure components that comprise the solution. Install and configure the infrastructure virtual machines following the guidance provided in Table 1.

1. Create a Microsoft Active Directory domain along with Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) services.

2. On the file server virtual machine, add the File and Storage Services role. Attach the User-Profiles and PVS-vDisk volumes located on the Nimble Storage CS300. Create CIFS shares for the user profiles and Citrix PVS vDisks.

3. Install Citrix PVS 7.6 on the two PVS virtual machines and configure the store path to use the CIFS share (for example, \\file1\vdisk). Follow the PVS best-practices guidance provided here.

4. Install the Citrix XenDesktop 7.6 delivery controller component on the two XenDesktop virtual machines. Install Citrix License Server, StoreFront, Studio, and Director on at least one of the two virtual machines. See the handbook provided here for XenDesktop 7.0 and the guidance provided here for new XenDesktop deployments.

5. On the first delivery controller virtual machine, create and configure the XenDesktop site. Add the second delivery controller to the newly configured site. Refer to the guidance provided here for site creation and configuration.


Table 1. Test Infrastructure Virtual Machine Configuration

Citrix XenDesktop Controller Virtual Machine
● Operating system: Microsoft Windows Server 2012 R2
● Virtual CPU amount: 4
● Memory amount: 8 GB
● Network: VMXNET3, VM Network vLAN
● Disk-1 (OS) size and location: 40 GB, Infra-DS volume

Provisioning Server Virtual Machine
● Operating system: Microsoft Windows Server 2012 R2
● Virtual CPU amount: 4
● Memory amount: 8 GB
● Network: VMXNET3, VM Network vLAN
● Disk-1 (OS) size and location: 40 GB, Infra-DS volume
● Disk-2 size and location: 500 GB, PVS-vDisk volume using CIFS

Microsoft Active Directory Domain Controller Virtual Machine
● Operating system: Microsoft Windows Server 2012 R2
● Virtual CPU amount: 4
● Memory amount: 4 GB
● Network: VMXNET3, VM Network vLAN
● Disk size and location: 40 GB, Infra-DS volume

VMware vCenter Virtual Machine
● Operating system: Microsoft Windows Server 2012 R2
● Virtual CPU amount: 4
● Memory amount: 16 GB
● Network: VMXNET3, VM Network vLAN
● Disk size and location: 70 GB, Infra-DS volume

Microsoft SQL Server Virtual Machine
● Operating system: Microsoft Windows Server 2012 R2 with Microsoft SQL Server 2012 SP1
● Virtual CPU amount: 4
● Memory amount: 12 GB
● Network: VMXNET3, VM Network vLAN
● Disk-1 (OS) size and location: 60 GB, Infra-DS volume

File Server Virtual Machine
● Operating system: Microsoft Windows Server 2012 R2
● Virtual CPU amount: 2
● Memory amount: 4 GB
● Network: VMXNET3, VM Network vLAN
● Disk-1 (OS) size and location: 40 GB, Infra-DS volume
● Disk-2 size and location: 500 GB, PVS-vDisk volume
● Disk-3 size and location: 500 GB, User-Profiles volume

Citrix XenApp and XenDesktop Virtual Desktop Configuration

This section presents the configuration for the Citrix Virtual Desktop Agent machines. Follow the guidance provided in Table 2.

1. Build the RDS golden image. Create a Microsoft Windows Server 2012 virtual machine for testing Citrix XenApp.

2. Build the VDI golden image. Create a Microsoft Windows 7 virtual machine for testing Citrix XenDesktop.

3. Configure the RDS and VDI virtual machines to connect to the VDI VLAN and add them to the domain.

4. Install the Citrix Virtual Desktop Agent on both virtual machines.

Note: Make sure that Optimize Performance is selected.

5. Install the Citrix PVS target device agent on both virtual machines.

Note: Microsoft Windows 7 requires Microsoft hotfix KB 2550978 before the PVS agent can be installed.

6. Install the user applications on both virtual machines. These are the applications that the users can access from within the base vDisk images.

7. Run the PVS Imaging Wizard on both virtual machines. Select Dynamic as the vDisk type, and select Optimize for Provisioning Services. After the vDisk has been created and the files are copied to it, change the vDisk mode from Private Image to Standard Image (multidevice, with read-only access) and select Cache in Device RAM with Overflow on Hard Disk. Specify 64 MB of RAM cache for Microsoft Windows 7 (VDI) and 1024 MB for Microsoft Windows Server 2012 (RDS). The sketch below illustrates how this cache mode behaves.
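In the Cache in Device RAM with Overflow on Hard Disk mode selected in step 7, writes land in the RAM buffer first and spill to the local write-cache disk once the buffer fills. The sketch below models that behavior with the cache sizes used here; the per-session write volumes are assumptions for illustration, not measured values.

```python
# Sketch of "Cache in Device RAM with Overflow on Hard Disk" sizing, using
# the RAM cache sizes from step 7 and the write-cache disk sizes in Table 2.

def overflow_to_disk_mb(ram_cache_mb, writes_mb):
    """Writes that spill past the RAM cache onto the write-cache disk."""
    return max(0.0, writes_mb - ram_cache_mb)

# (name, RAM cache MB, write-cache disk GB, assumed writes MB)
workloads = [
    ("VDI", 64, 4, 900),      # assumption: ~900 MB written per desktop session
    ("RDS", 1024, 30, 6000),  # assumption: ~6 GB written per server between reboots
]

for name, ram_mb, disk_gb, assumed_writes_mb in workloads:
    spill = overflow_to_disk_mb(ram_mb, assumed_writes_mb)
    print(f"{name}: {spill:.0f} MB overflow of {disk_gb * 1024} MB disk")
```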

Table 2. Tested Resource Virtual Machine Configuration

VDI Virtual Machine
● Operating system: Microsoft Windows 7 SP1 32-bit
● Virtual CPU amount: 1
● Memory amount: 1.5 GB, reserved for all guest memory
● Network: VMXNET3, VDI vLAN
● Citrix PVS vDisk size and location: 24 GB (dynamic), PVS-vDisk volume
● Citrix PVS write cache disk size: 4 GB
● Citrix PVS write cache RAM cache size: 64 MB

RDS Virtual Machine
● Operating system: Microsoft Windows Server 2012
● Virtual CPU amount: 5
● Memory amount: 16 GB, reserved for all guest memory
● Network: VMXNET3, VDI vLAN
● Citrix PVS vDisk size and location: 40 GB (dynamic), PVS-vDisk volume
● Citrix PVS write cache disk size: 30 GB
● Citrix PVS write cache RAM cache size: 1024 MB

Note: See the document here for Citrix XenDesktop system requirements.

Provisioning Citrix XenApp and XenDesktop Virtual Desktop Machines

This section presents the process for provisioning the VDI and RDS machines.

1. Create virtual machine templates for the virtual desktop machines. Clone the VDI and RDS golden image virtual machines. Make the following changes to the cloned virtual machines: remove Hard Disk 1 (virtual disk), verify that the network adapter is configured for the VDI vLAN, and reserve all guest memory on the Resources tab.

2. Prepare the virtual machine templates for provisioning. Move the cloned golden image virtual machines to the VDI cluster and the VDI-WC (RDS-WC for RDS machines) datastores. Convert the virtual machines to templates.

3. Confirm the clusters and datastores before provisioning.

4. Provision the RDS machines. Open the Citrix XenDesktop Setup Wizard located on the Citrix PVS administrator console. Specify the following:

● Microsoft Windows Server operating system
● 8 virtual machines
● 5 virtual CPUs (vCPUs)
● 16,384 MB of memory
● 30-GB local write cache disk (thick)

Refer to the guidance here for help with the XenDesktop Setup Wizard.

5. Provision the VDI machines. Using the XenDesktop Setup Wizard, select the following:

● Microsoft Windows Desktop operating system
● A fresh new (random) desktop each time a user logs in
● 300 virtual machines
● 1 vCPU
● 1536 MB of memory
● 4-GB local write cache disk (thick)

Note: Increasing the vCPU count from one to two can improve the user experience for certain workload scenarios; however, doing so may decrease the scalability of the solution. The sketch below checks these provisioning parameters against the blade memory.
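The quick fit check below compares the memory reservations from steps 4 and 5 with the blade specifications in the Hardware Components list. Hypervisor and infrastructure virtual machine overhead are ignored, so treat this as a rough sanity check rather than a capacity plan.

```python
# Memory fit check for the provisioning parameters above, using the blade
# specifications from the Hardware Components list. Overheads are ignored.

VDI_BLADE_GB = 256      # 2 VDI blades at 256 GB each
INFRA_BLADE_GB = 128    # 2 infrastructure/RDS blades at 128 GB each

vdi_vms, vdi_gb_each = 300, 1.5   # 300 desktops, 1.5 GB reserved each
rds_vms, rds_gb_each = 8, 16      # 8 XenApp servers, 16 GB reserved each

print(f"VDI reserve: {vdi_vms * vdi_gb_each:.0f} GB "
      f"of {2 * VDI_BLADE_GB} GB")      # 450 GB of 512 GB
print(f"RDS reserve: {rds_vms * rds_gb_each:.0f} GB "
      f"of {2 * INFRA_BLADE_GB} GB")    # 128 GB of 256 GB
```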

6. After the RDS and VDI machines have been provisioned, verify that:

● All virtual machines have been created, placed in the correct cluster and datastore, and powered on and off as part of the initial setup process
● Target devices have been successfully created in the device collection groups
● Desktop machines have been successfully created in the machine catalogs
● Computer objects have been successfully created in Microsoft Active Directory

7. Create the desktop delivery groups. Using Citrix Studio, create two delivery groups that correspond to the VDI and RDS machines.

Citrix XenDesktop Policies and Profile Management

Policies and profiles allow the Citrix XenDesktop environment to be easily and efficiently customized. Citrix XenDesktop policies control user access and session environments and provide the most efficient means of controlling connection, security, and bandwidth settings. You can create policies for specific groups of users, devices, or connection types, and each policy can contain multiple settings. Policies typically are defined through Citrix Studio. (The Microsoft Windows Group Policy Management Console [GPMC] can also be used if the network environment includes Microsoft Active Directory and permissions are set for managing group policy objects.) The screenshot of Citrix Studio in Figure 37 shows the Citrix User Profile Management (UPM) policies for VDI and RDS.


Figure 37. Citrix User Profile Management Policies for VDI (Left) and RDS (Right) Set in Citrix Studio

The screenshot of Citrix Studio in Figure 38 shows the policies pertaining to Login VSI testing for the solution.


Figure 38. Policies Pertaining to Login VSI Testing for the Solution Set in Citrix Studio

Test Methodology

User Workload Simulation: Login Virtual Session Indexer

A critical factor in validating a desktop virtualization deployment is identifying a real-world user workload that is easy for customers to replicate and standardized across platforms, so that customers can realistically test the impact of a variety of worker tasks. To accurately represent a real-world user workload, a tool is needed to measure in-session response time and provide an objective way to measure the expected user experience for individual desktops throughout large-scale test workloads, including login storms.

The Login VSI 3.7 methodology, designed for benchmarking server-based computing (SBC) and VDI environments, is completely platform and protocol independent and hence allows customers to easily replicate the testing results in their own environments. Login VSI calculates an index based on the number of simultaneous sessions that can be run on a single machine. Login VSI simulates a medium-sized-workload user (also known as a knowledge worker) running applications such as Microsoft Office 2007 or 2010, Internet Explorer 8 including an Adobe Flash video applet, and Adobe Acrobat Reader.

As in real user sessions, the scripted Login VSI session leaves multiple applications open at the same time. The medium-sized workload is the default workload in Login VSI and was used for this testing. This workload emulates a medium-level knowledge worker using Microsoft Office and Internet Explorer, printing, and viewing PDF files (an illustrative timing sketch follows this list):

● After a session is started, the medium workload repeats every 12 minutes.

● During each loop, the response time is measured every 2 minutes.

● The medium workload opens up to five applications simultaneously.

● The typing rate is 160 milliseconds (ms) for each character.

● Approximately 2 minutes of idle time is included to simulate real-world users.

● Each loop opens and uses:

◦ Microsoft Outlook 2007 or 2010: 10 messages are browsed.

◦ Microsoft Internet Explorer: One instance is left open (BBC.co.uk); one instance is browsed to Wired.com, Lonelyplanet.com, and a processor-intensive 480p Adobe Flash application (gettheglass.com).

◦ Microsoft Word 2007 or 2010: One instance is used to measure response time, and one instance is used to review and edit a document.

◦ Bullzip PDF Printer and Adobe Acrobat Reader: The Microsoft Word document is printed and reviewed as a PDF file.

◦ Microsoft Excel 2007 or 2010: A very large randomized spreadsheet is opened.

◦ Microsoft PowerPoint 2007 or 2010: A presentation is reviewed and edited.

◦ 7-Zip: Using the command-line version, the output of the session is zipped.

You can obtain additional information about Login VSI at http://www.loginvsi.com.
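The following Python sketch is illustrative only, not Login VSI code: it restates the timing skeleton of the medium workload loop described above (a 12-minute loop, 2-minute response-time sampling, 160 ms-per-character typing, and roughly 2 minutes of idle time). The segment names and placeholder actions are assumptions standing in for the real scripted application steps.

```python
import time

LOOP_MINUTES = 12          # the medium workload repeats every 12 minutes
SAMPLE_INTERVAL_S = 120    # response time is measured every 2 minutes
CHAR_DELAY_S = 0.160       # typing rate: 160 ms per character
IDLE_S = 120               # ~2 minutes of idle time per loop

# Placeholder segments mirroring the application list above.
SEGMENTS = [
    ("Outlook", "browse 10 messages"),
    ("Internet Explorer", "BBC.co.uk; Wired.com, Lonelyplanet.com, Flash app"),
    ("Word", "measure response time; review and edit a document"),
    ("Bullzip PDF / Acrobat", "print the Word document and review the PDF"),
    ("Excel", "open a very large randomized spreadsheet"),
    ("PowerPoint", "review and edit a presentation"),
    ("7-Zip", "zip the session output from the command line"),
]

def type_text(text):
    """Simulate keystrokes at the fixed 160 ms/character rate."""
    for _ in text:
        time.sleep(CHAR_DELAY_S)

def run_loop(start):
    next_sample = time.monotonic()
    for app, action in SEGMENTS:
        now = time.monotonic()
        if now >= next_sample:  # 2-minute sampling points
            print(f"[{now - start:7.1f}s] measure in-session response time")
            next_sample = now + SAMPLE_INTERVAL_S
        print(f"run {app}: {action}")
        type_text("placeholder input")  # stands in for real scripted typing
    time.sleep(IDLE_S)  # simulated user think time

if __name__ == "__main__":
    start = time.monotonic()
    while True:  # each pass is one 12-minute workload loop
        loop_start = time.monotonic()
        run_loop(start)
        remaining = LOOP_MINUTES * 60 - (time.monotonic() - loop_start)
        if remaining > 0:
            time.sleep(remaining)
```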

Test Procedure

The test procedure described here was used for each test cycle in this study to help ensure consistent results. To validate the solution, comprehensive metrics were captured for the entire virtual desktop lifecycle: desktop bootup, user login to virtual desktops (ramp-up), user workload simulation (steady state), and user logoff.

To generate load within the environment, Login VSI software was used to initiate desktop connections, simulate application workloads, and track application responsiveness. The default medium workload for Login VSI 3.7 was used, representing office productivity tasks for a typical knowledge worker. For each test run, performance monitoring scripts were started to track resource consumption for infrastructure components (Cisco UCS Mini hosts and Nimble Storage I/O controllers).

To begin testing, all desktops were taken out of maintenance mode, the virtual machines were started, and the system waited for them to register. The Login VSI launchers initiated the desktop sessions and began user logins, constituting the ramp-up phase. After all users were logged in, the steady-state portion of the test began, in which Login VSI runs an application workload that includes Microsoft Office, Internet Explorer with Adobe Flash, file compression, printing, and PDF viewing. Login VSI loops through specific operations and measures response times at regular intervals.

The application response times reported by the Login VSI software determine the maximum number of users that the test environment can support before performance degrades consistently. Because baseline response times can vary depending on the virtualization technology used, the use of a dynamically calculated threshold provides greater accuracy for cross-vendor comparisons. For this reason, the Login VSI software also reports VSImax Dynamic.
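To make the dynamically calculated threshold concrete, here is a minimal Python sketch of a VSImax Dynamic-style calculation. The rule used (threshold = average baseline response time x 125 percent + 3,000 ms) reflects the VSImax Dynamic description published for Login VSI 3.x, but treat the exact constants as an assumption here; the sample numbers are invented for illustration and are not results from this study.

```python
from statistics import mean

def dynamic_threshold(baseline_ms):
    # Assumed rule: threshold = baseline x 125% + 3000 ms (Login VSI 3.x style).
    return baseline_ms * 1.25 + 3000.0

def vsimax(samples, baseline_ms):
    """samples maps active session count -> average response time (ms).
    Return the highest session count still at or under the threshold."""
    threshold = dynamic_threshold(baseline_ms)
    supported = None
    for sessions in sorted(samples):
        if samples[sessions] > threshold:
            break
        supported = sessions
    return supported

# Invented numbers purely for illustration.
samples = {50: 1400.0, 100: 1650.0, 150: 2100.0, 190: 2800.0, 200: 5400.0}
baseline = mean([samples[50], samples[100]])  # early, lightly loaded samples
print(dynamic_threshold(baseline))            # 4906.25 ms with these inputs
print(vsimax(samples, baseline))              # 190
```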


Testing consisted of single-server and multiple-server scalability tests, following a three-phase process (single-server RDS, single-server VDI, and full-scale mixed workload) targeted at the following goals:

● Single-server testing: This test validates single-server scalability under a maximum recommended load for both RDS and VDI workload scenarios. This phase validated a given density level for a single Cisco UCS blade. The maximum recommended density level is that at which CPU utilization reaches a maximum of 90 to 95 percent. (A sketch for checking this criterion against captured esxtop data follows this list.)

● Full-scale testing: This test determines the workload mix and validates multiple-server scalability. First, a ratio of RDS to VDI workloads was defined based on the earlier single-server scalability results. In subsequent testing, the solution was examined using that mixed workload on multiple blades for a user density of 500 sessions. By configuring a mixed workload across all blades, the solution was validated in a diverse, efficient manner. This testing revealed the most cost-effective overall configuration, especially for sites with a smaller number of users requiring different use cases.
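As a hedged sketch of how the 90 to 95 percent CPU criterion could be checked mechanically, the Python snippet below parses an esxtop batch-mode capture (for example, "esxtop -b -d 15 -n 240 > host.csv" run against the ESXi host) and reports average and peak core utilization. The counter name and capture file name are assumptions; match them to the header row of your own CSV.

```python
import csv

# Assumed esxtop counter; batch-mode headers look like
# "\\<host>\Physical Cpu(_Total)\% Core Util Time", so we match by suffix.
COUNTER = "Physical Cpu(_Total)\\% Core Util Time"

def core_util_series(path):
    """Return the % Core Util Time samples from an esxtop batch CSV."""
    with open(path, newline="") as f:
        rows = csv.reader(f)
        header = next(rows)
        col = next(i for i, name in enumerate(header) if name.endswith(COUNTER))
        return [float(row[col]) for row in rows if len(row) > col and row[col]]

def check_density(path, ceiling=95.0):
    """Report whether peak core utilization stays within the target ceiling."""
    util = core_util_series(path)
    peak, avg = max(util), sum(util) / len(util)
    verdict = "within" if peak <= ceiling else "above"
    print(f"avg {avg:.1f}%, peak {peak:.1f}% -> {verdict} the {ceiling}% ceiling")

if __name__ == "__main__":
    check_density("host.csv")  # hypothetical capture file name
```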

Solution Validation

The Cisco Test Protocol for Virtual Desktops was employed to help ensure that the solution delivers interoperability, reliability, and an outstanding end-user experience.

Single-Server Citrix XenApp (RDS) Testing, 190 Users

Single-server Citrix XenApp (RDS) testing was performed with 190 users (Figure 39).

Figure 39. Single-Server Citrix XenApp (RDS) Testing with 190 Users


Figures 40 through 47 show the results of the tests.

Figure 40. Login VSI Test Result for 190 Virtual Desktop Sessions Running on Eight Microsoft Windows Server 2012 RDS Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server

Figure 41. ESXTOP CPU Total Core Utilization Time Chart for 190 Virtual Desktop Sessions Running on Eight Microsoft Windows Server 2012 RDS Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server


Figure 42. ESXTOP Nonkernel MBytes Chart for 190 Virtual Desktop Sessions Running on Eight Microsoft Windows Server 2012 RDS Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server

Figure 43. ESXTOP Disk Adapter VMHBA Chart for 190 Virtual Desktop Sessions Running on Eight Microsoft Windows Server 2012 RDS Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server


Figure 44. ESXTOP Network Chart for 190 Virtual Desktop Sessions Running on Eight Microsoft Windows Server 2012 RDS Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server

Figure 45. Nimble Storage I/O Operations per Second (IOPS) for All Volumes with 190 RDS Users


Figure 46. Nimble Storage Average Response-Time Latency for All Volumes with 190 RDS Users

Figure 47. Nimble Storage Megabytes-per-Second Throughput for All Volumes with 190 RDS Users


Single-Server Citrix XenDesktop (VDI) Testing, 150 Users

Single-server Citrix XenDesktop (VDI) testing was performed with 150 users (Figure 48).

Figure 48. Single-Server Citrix XenDesktop (VDI) Testing with 150 Users

Figures 49 through 56 show the results of the tests.

Figure 49. Login VSI Test Results for 150 Microsoft Windows 7 Virtual Desktop Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server


Figure 50. ESXTOP CPU Total Core Utilization Time Chart for 150 Windows 7 Virtual Desktop Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server

Figure 51. ESXTOP Nonkernel MBytes Chart for 150 Windows 7 Virtual Desktop Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server


Figure 52. ESXTOP Disk Adapter VMHBA Chart for 150 Windows 7 Virtual Desktop Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server

Figure 53. ESXTOP Network Chart for 150 Windows 7 Virtual Desktop Virtual Machines Hosted by a Single, Dedicated Cisco UCS B200 M3 Blade Server


Figure 54. Nimble Storage IOPS for All Volumes with 150 VDI Users

Figure 55. Nimble Storage Average Response-Time Latency for All Volumes with 150 VDI Users


Figure 56. Nimble Storage Megabytes-per-Second Throughput for All Volumes with 150 VDI Users

Full-Scale Mixed-Workload Testing, 500 Users

Full-scale mixed-workload testing was performed with 500 users (Figure 57).

Figure 57. Full-Scale Mixed-Workload Testing with 500 Users


Figures 58 through 69 show the results of the tests.

Figure 58. Login VSI Test Results for 500 Mixed Virtual Desktop Sessions (200 RDS and 300 VDI) Hosted by Four Cisco UCS B200 M3 Blade Servers

Figure 59. ESXTOP CPU Total Core Utilization Time Charts for 200 RDS Sessions Hosted by Two Cisco UCS B200 M3 Blade Servers

Figure 60. ESXTOP Nonkernel MBytes Charts for 200 RDS Sessions Hosted by Two Cisco UCS B200 M3 Blade Servers


Figure 61. ESXTOP Disk Adapter VMHBA Charts for 200 RDS Sessions Hosted by Two Cisco UCS B200 M3 Blade Servers

Figure 62. ESXTOP Network Charts for 200 RDS Sessions Hosted by Two Cisco UCS B200 M3 Blade Servers

Figure 63. ESXTOP CPU Total Core Utilization Time Charts for 300 VDI Sessions Hosted by Two Cisco UCS B200 M3 Blade Servers


Figure 64. ESXTOP Nonkernel MBytes Charts for 300 VDI Sessions Hosted by Two Cisco UCS B200 M3 Blade Servers

Figure 65. ESXTOP Disk Adapter VMHBA Charts for 300 VDI Sessions Hosted by Two Cisco UCS B200 M3 Blade Servers

Figure 66. ESXTOP Network Charts for 300 VDI Sessions Hosted by Two Cisco UCS B200 M3 Blade Servers


Figure 67. Nimble Storage IOPS for All Volumes with 500 Mixed Virtual Desktops

Figure 68. Nimble Storage Average Response-Time Latency for All Volumes with 500 Mixed Virtual Desktops


Figure 69. Nimble Storage Megabytes-per-Second Throughput for All Volumes with 500 Mixed Virtual Desktops

Conclusion

Cisco UCS Mini and Nimble Storage together are an excellent solution for remote and branch offices hosting Citrix virtual desktops. The Cisco UCS B200 M3 Blade Servers add the flexibility needed to run both infrastructure services and Microsoft RDS server virtual machine workloads on the same blade. Customers have the freedom to mix and match the types of virtual desktop sessions that best fit their organizations at the highest level of efficiency. The Nimble Storage CS300 array provides an agile storage platform that meets the requirements of both Microsoft RDS servers and traditional VDI users in addition to the requirements for persistent user data. When you use Cisco UCS Mini plus the onboard Cisco UCS Manager software in combination with Cisco UCS Central Software, you can manage remote and branch offices as if they were part of the corporate data center. This approach gives you outstanding manageability over the entire Cisco UCS platform throughout the enterprise.

For More Information

● Cisco UCS B-Series Servers

◦ http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-b-series-blade-servers/index.html

● Cisco UCS Manager configuration

◦ http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/gui/config/guide/30/b_UCSM_GUI_User_Guide_3_0.pdf

● Cisco UCS Mini

◦ http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-mini/index.html

● Citrix XenDesktop and XenApp 7.6 reference documentation

◦ http://support.citrix.com/proddocs/topic/xenapp-xendesktop-76/xad-whats-new.html

◦ http://support.citrix.com/proddocs/topic/xenapp-xendesktop/xad-xenapp-xendesktop-76-landing.html

◦ http://support.citrix.com/proddocs/topic/xenapp-xendesktop-76/xad-architecture-article.html

◦ XenDesktop 7.0 Handbook: http://support.citrix.com/article/CTX139331

◦ XenDesktop 7.0 Blueprint: http://support.citrix.com/article/CTX138981

● Microsoft Windows and Citrix optimization guides for virtual desktops

◦ http://support.citrix.com/article/CTX125874

◦ http://support.citrix.com/article/CTX140375

◦ http://support.citrix.com/article/CTX117374

● Nimble Storage products and specifications

◦ http://www.nimblestorage.com/products/specifications.php

● Nimble Storage InfoSight

◦ http://www.nimblestorage.com/infosight/overview.php

● Nimble Storage and desktop virtualization

◦ http://www.nimblestorage.com/solutions/vdi.php

● VMware vSphere ESXi and vCenter Server 5.5 documentation

◦ http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf

Printed in USA

C11-733903-00 2/15
