EMC VSPEX END-USER COMPUTING

Author: Rudolph Craig
DESIGN GUIDE

EMC VSPEX END-USER COMPUTING VMware Horizon View 6.0 and VMware vSphere with EMC XtremIO Enabled by EMC Isilon, EMC VNX, and EMC Data Protection

EMC VSPEX

Abstract

This Design Guide describes how to design an EMC® VSPEX® End-User Computing solution for VMware Horizon View. EMC XtremIO™, EMC Isilon®, EMC VNX®, and VMware vSphere provide the storage and virtualization platforms.

July 2015

Copyright © 2014, 2015 EMC Corporation. All rights reserved. Published in the USA. Published July 2015.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX End-User Computing: VMware Horizon View 6.0 and VMware vSphere with EMC XtremIO, Enabled by EMC Isilon, EMC VNX, and EMC Data Protection Design Guide

Part Number H13275.2


EMC VSPEX End-User Computing: VMware Horizon View 6.0 and VMware vSphere with EMC XtremIO Design Guide

Contents

Chapter 1: Introduction
  Purpose of this guide
  Business value
  Scope
  Audience
  Terminology

Chapter 2: Before You Start
  Deployment workflow
  Essential reading
    VSPEX Solution Overview
    VSPEX Implementation Guide
    VSPEX Proven Infrastructure Guide
    EMC Data Protection for VSPEX guide
    RSA SecurID for VSPEX end-user computing guide

Chapter 3: Solution Overview
  Overview
  VSPEX Proven Infrastructures
  Solution architecture
    High-level architecture
    Logical architecture
  Key components
  Desktop virtualization broker
    VMware Horizon View 6.0
    VMware View Composer
    VMware View Persona Management
    VMware View Storage Accelerator
    VMware vRealize Operations Manager for Horizon View
  Virtualization layer
    VMware vSphere
    VMware vCenter Server
    VMware vSphere High Availability
    VMware vShield Endpoint
  Compute layer
  Network layer
  Storage layer
    EMC XtremIO
    EMC Isilon
    EMC VNX
    Virtualization management
  Data protection layer
  Security layer
  VMware Workspace solution

Chapter 4: Sizing the Solution
  Overview
  Reference workload
  VSPEX Private Cloud requirements
    Private Cloud storage layout
  XtremIO array configurations
    Validated XtremIO configurations
    XtremIO storage layout
    Expanding existing VSPEX end-user computing environments
  Isilon configuration
  VNX configurations
    VNX FAST VP
    VNX shared file systems
  Choosing the appropriate reference architecture
    Using the Customer Sizing Worksheet
    Selecting a reference architecture
    Fine-tuning hardware resources
    Summary

Chapter 5: Solution Design Considerations and Best Practices
  Overview
  Server design considerations
    Server best practices
    Validated server hardware
    vSphere memory virtualization
    Memory configuration guidelines
  Network design considerations
    Validated network hardware
    Network configuration guidelines
  Storage design considerations
    Overview
    Validated storage hardware and configuration
    vSphere storage virtualization
  High availability and failover
    Virtualization layer
    Compute layer
    Network layer
    Storage layer
  Validation test profile
    Profile characteristics
  Antivirus and antimalware platform profile
    Platform characteristics
    vShield architecture
  VMware vRealize Operations Manager for Horizon View platform profile
    Platform characteristics
    vRealize Operations Manager for Horizon View architecture
  VSPEX for VMware Workspace solution
    Key VMware Workspace components
    VSPEX for VMware Workspace architecture

Chapter 6: Reference Documentation
  EMC documentation
  Other documentation

Appendix A: Customer Sizing Worksheet
  Customer Sizing Worksheet for end-user computing

Figures
  Figure 1. VSPEX Proven Infrastructures
  Figure 2. Architecture of the validated solution
  Figure 3. Logical architecture
  Figure 4. Isilon cluster components
  Figure 5. EMC Isilon OneFS operating system functionality
  Figure 6. Isilon node classes
  Figure 7. New Unisphere Management Suite
  Figure 8. Compute layer flexibility
  Figure 9. Hypervisor memory consumption
  Figure 10. Virtual machine memory settings
  Figure 11. Highly available XtremIO FC network design example
  Figure 12. Highly available VNX Ethernet network design example
  Figure 13. Required networks
  Figure 14. VMware virtual disk types
  Figure 15. High availability at the virtualization layer
  Figure 16. Redundant power supplies
  Figure 17. VNX Ethernet network layer high availability
  Figure 18. XtremIO series high availability
  Figure 19. VMware Workspace architecture layout
  Figure 20. VSPEX for VMware Workspace solution: logical architecture
  Figure 21. Printable customer sizing worksheet

Tables
  Table 1. Terminology
  Table 2. Deployment workflow
  Table 3. Solution components
  Table 4. VSPEX end-user computing: design process
  Table 5. Reference virtual desktop characteristics
  Table 6. Infrastructure server minimum requirements
  Table 7. XtremIO X-Brick configurations
  Table 8. User data resource requirement on Isilon
  Table 9. User data resource requirement on VNX
  Table 10. Sample Customer Sizing Worksheet
  Table 11. Reference virtual desktop resources
  Table 12. Server resource component totals
  Table 13. Server hardware
  Table 14. Minimum switching capacity
  Table 15. Storage hardware
  Table 16. Validated environment profile
  Table 17. Antivirus platform characteristics
  Table 18. Horizon View platform characteristics
  Table 19. OVA virtual appliances
  Table 20. Minimum hardware resources for VMware Workspace
  Table 21. Customer sizing worksheet

Chapter 1: Introduction

This chapter presents the following topics:
- Purpose of this guide
- Business value
- Scope
- Audience
- Terminology

Purpose of this guide

The EMC® VSPEX® End-User Computing Proven Infrastructure provides customers with a modern system capable of hosting a large number of virtual desktops at a consistent level of performance. This VSPEX End-User Computing solution for VMware Horizon View 6.0 runs on a VMware vSphere virtualization layer backed by the highly available EMC XtremIO™ family, which provides the storage.

In this solution, the desktop virtualization infrastructure is layered on a VSPEX Private Cloud for VMware vSphere Proven Infrastructure, and the desktops are hosted on dedicated resources. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual machine environment. XtremIO solutions provide storage for the virtual desktops, EMC VNX® solutions provide storage for user data, EMC Avamar® backup and recovery solutions provide data protection for user data, and RSA® SecurID® provides optional secure user authentication.

This VSPEX End-User Computing solution is validated for up to 2,500 full-clone or 3,500 linked-clone virtual desktops on an X-Brick, and for up to 1,250 full-clone or 1,750 linked-clone virtual desktops on a Starter X-Brick. These validated configurations are based on a reference desktop workload and form the basis for cost-effective, custom solutions for customers.

XtremIO supports scale-out clusters of up to six X-Bricks, and each additional X-Brick linearly increases performance and virtual desktop capacity. XtremIO X-Bricks are validated to support a higher number of desktops (both full-clone and linked-clone); the VSPEX validated numbers apply only to the solution described in this guide.

An end-user computing or virtual desktop infrastructure is a complex system. This Design Guide describes how to design an end-user computing solution according to best practices for VMware Horizon View on VMware vSphere, enabled by XtremIO, EMC VNX or EMC Isilon, and EMC Data Protection.
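The validated scale points above lend themselves to a simple capacity check. The following sketch is a hypothetical helper, not part of the VSPEX Sizing Tool; it only encodes the single-X-Brick desktop limits stated in this guide and picks the smallest validated configuration that fits a requested desktop count:

```python
# Validated VSPEX scale points for this solution, smallest first.
# Values: (max full-clone desktops, max linked-clone desktops).
VALIDATED_LIMITS = {
    "Starter X-Brick": (1250, 1750),
    "X-Brick": (2500, 3500),
}

def choose_xbrick(desktops: int, clone_type: str = "full") -> str:
    """Return the smallest validated configuration that fits the request.

    clone_type is "full" or "linked". Raises ValueError when the count
    exceeds the single-X-Brick scale points validated for this solution.
    """
    index = 0 if clone_type == "full" else 1
    for name, limits in VALIDATED_LIMITS.items():
        if desktops <= limits[index]:
            return name
    raise ValueError(
        f"{desktops} {clone_type}-clone desktops exceeds the validated "
        "single-X-Brick scale points; consider a multi-X-Brick cluster."
    )

# Examples: 1,500 linked clones fit a Starter X-Brick; 2,000 full clones
# need a full X-Brick.
print(choose_xbrick(1500, "linked"))  # Starter X-Brick
print(choose_xbrick(2000, "full"))    # X-Brick
```

Because the dictionary lists configurations smallest first, the helper always returns the most economical validated option; use the EMC VSPEX Sizing Tool for actual sizing decisions.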

Business value

Business applications are moving into a consolidated compute, network, and storage environment. This VSPEX End-User Computing solution with VMware reduces the complexity of configuring every component of a traditional deployment model. The solution reduces the complexity of integration management while maintaining application design and implementation options. It also provides unified administration, while enabling process control and monitoring.

The business benefits of the VSPEX End-User Computing solution for VMware Horizon View include:
- An end-to-end virtualization solution that uses the capabilities of the unified infrastructure components
- Efficient virtualization for varied customer use cases of up to 2,500 full-clone or 3,500 linked-clone virtual desktops for an X-Brick, and up to 1,250 full-clone or 1,750 linked-clone virtual desktops for a Starter X-Brick

- Reliable, flexible, and scalable reference architectures

Scope

This Design Guide describes how to plan a simple, effective, and flexible EMC VSPEX End-User Computing solution for VMware Horizon View 6.0. It provides deployment examples for virtual desktop storage on XtremIO and user data storage on VNX storage arrays. The same principles and guidelines apply to all VNX models that are validated as part of the EMC VSPEX program.

This guide illustrates how to size Horizon View on the VSPEX infrastructure, allocate resources following best practices, and take advantage of the benefits that VSPEX offers.

EMC Data Protection solutions for VMware Horizon View are described in a separate document, EMC Backup and Recovery for VSPEX for End-User Computing with VMware Horizon View Design and Implementation Guide. The optional RSA SecurID secure user authentication solution for VMware Horizon View is also described in a separate document, Securing EMC VSPEX End-User Computing with RSA SecurID: VMware Horizon View 5.2 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops Design Guide.

Audience

This guide is for internal EMC personnel and qualified EMC VSPEX Partners. The guide assumes that VSPEX partners who deploy this VSPEX Proven Infrastructure for VMware Horizon View have the necessary training and background to install and configure an end-user computing solution based on Horizon View with vSphere as the hypervisor; XtremIO, VNX, or Isilon storage systems; and the associated infrastructure. Readers should also be familiar with the infrastructure and database security policies of the customer installation.

This guide provides external references where applicable. EMC recommends that partners implementing this solution be familiar with these documents. For details, see Essential reading and Chapter 6: Reference Documentation.


Terminology

Table 1 lists the terminology used in this guide.

Table 1. Terminology

Data deduplication: A feature of the XtremIO array that reduces physical storage utilization by eliminating redundant blocks of data.

Full clones: Desktops that are deployed from a vSphere template.

Linked clones: Desktops that share a common base image within a desktop pool and have a minimal storage footprint.

Reference architecture: The validated architecture that supports this VSPEX end-user computing solution at four points of scale: an X-Brick capable of hosting up to 2,500 full-clone or 3,500 linked-clone virtual desktops, and a Starter X-Brick capable of hosting up to 1,250 full-clone or 1,750 linked-clone virtual desktops.

Reference workload: For VSPEX end-user computing solutions, a single virtual desktop (the reference virtual desktop) with the workload characteristics indicated in Table 5. By comparing the customer's actual usage to this reference workload, you can determine which reference architecture to choose as the basis for the customer's VSPEX deployment. Refer to Reference workload for details.

Storage processor (SP): The compute component of the VNX storage array. SPs move data into, out of, and between arrays.

Storage controller (SC): The compute component of the XtremIO storage array. SCs move data into, out of, and between XtremIO arrays.

Virtual desktop infrastructure (VDI): A model that decouples the desktop from the physical machine. In a VDI environment, the desktop operating system (OS) and applications reside inside a virtual machine running on a host computer, with data residing on shared storage. Users access their virtual desktops from any computer or mobile device over a private network or internet connection.

XtremIO Management Server (XMS): A virtual machine used to manage an XtremIO array. The XMS is deployed from an Open Virtual Appliance (OVA) package.

XtremIO Starter X-Brick: A configuration of the XtremIO All-Flash Array that includes 13 SSDs in this solution.

XtremIO X-Brick: A configuration of the XtremIO All-Flash Array that includes 25 SSDs in this solution.


Chapter 2: Before You Start

This chapter presents the following topics:
- Deployment workflow
- Essential reading

Deployment workflow

Table 2 shows the process flow required to design and implement your end-user computing solution.

Table 2. Deployment workflow

Step 1: Use the Customer Sizing Worksheet to collect customer requirements. Refer to Appendix A of this Design Guide.

Step 2: Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX reference architecture for your end-user computing solution, based on the customer requirements collected in Step 1. For more information about the Sizing Tool, refer to the EMC VSPEX Sizing Tool portal. Note: If the Sizing Tool is not available, you can manually size the application using the guidelines in Chapter 4 of this Design Guide.

Step 3: Use this Design Guide to determine the final design for your VSPEX solution. Note: Ensure that all resource requirements are considered, not only the requirements for end-user computing.

Step 4: Order the correct VSPEX reference architecture and Proven Infrastructure. Refer to the VSPEX Proven Infrastructure Guide in Essential reading for guidance on selecting a Private Cloud Proven Infrastructure.

Step 5: Deploy and test your VSPEX solution. Refer to the VSPEX Implementation Guide in Essential reading for guidance. Note: EMC validated the solution using the Login VSI tool, as described in Chapter 4. See www.loginvsi.com for further information.

Essential reading

EMC recommends that you read the following documents, available from the VSPEX space in the EMC Community Network, from EMC.com, or from the VSPEX Proven Infrastructure partner portal.

VSPEX Solution Overview: Refer to EMC VSPEX End User Computing Solutions with VMware vSphere and VMware View.

VSPEX Implementation Guide: Refer to EMC VSPEX End-User Computing: VMware Horizon View 6.0 and VMware vSphere with EMC XtremIO. If your solution includes backup and recovery components, refer to the EMC Backup and Recovery for VSPEX for End-User Computing with VMware View Design and Implementation Guide for backup and recovery sizing and implementation guidelines.

VSPEX Proven Infrastructure Guide: Refer to EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 1,000 Virtual Machines.

EMC Data Protection for VSPEX guide: Refer to EMC Backup and Recovery for VSPEX for End-User Computing with VMware View Design and Implementation Guide.

RSA SecurID for VSPEX end-user computing guide: Refer to Securing EMC VSPEX End-User Computing with RSA SecurID: VMware Horizon View 5.2 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops Design Guide.

Chapter 3: Solution Overview

This chapter presents the following topics:
- Overview
- VSPEX Proven Infrastructures
- Solution architecture
- Key components
- Desktop virtualization broker
- Virtualization layer
- Compute layer
- Network layer
- Storage layer
- Data protection layer
- Security layer
- VMware Workspace solution

Overview

This chapter provides an overview of the VSPEX End-User Computing for VMware Horizon View on VMware vSphere solution and the key solution technologies. The solution is designed and proven by EMC to provide the virtualization, server, network, storage, and backup resources to support reference architectures for the following validated X-Brick configurations of the XtremIO All-Flash Array:

- XtremIO Starter X-Brick: Includes 13 SSDs and supports up to 1,250 full-clone or 1,750 linked-clone virtual desktops.
- XtremIO X-Brick: Includes 25 SSDs and supports up to 2,500 full-clone or 3,500 linked-clone virtual desktops.

XtremIO X-Bricks are validated to support a higher number of desktops (both full-clone and linked-clone); the VSPEX validated numbers apply only to this solution.

Although the desktop virtualization infrastructure components of the solution shown in Figure 3 are designed to be layered on a VSPEX Private Cloud, the reference architectures do not include configuration details for the underlying Proven Infrastructure. Refer to the VSPEX Proven Infrastructure Guide in Essential reading for information on configuring the required infrastructure.

VSPEX Proven Infrastructures

EMC has joined forces with industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of the private cloud and VMware Horizon View virtual desktops. VSPEX enables customers to accelerate IT transformations with faster deployment, greater simplicity and choice, higher efficiency, and lower risk compared to the challenges and complexity of building an IT infrastructure themselves.

VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure, while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers who want the simplicity of truly converged infrastructures with more choice in individual components.

VSPEX Proven Infrastructures, as shown in Figure 1, are modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. They include virtualization, server, network, storage, and backup layers. Partners can choose the virtualization, server, and network technologies that best fit a customer's environment, while the highly available XtremIO and VNX families of storage systems and EMC Data Protection technologies provide the storage and backup layers.


EMC VSPEX End-User Computing: VMware Horizon View 6.0 and VMware vSphere with EMC XtremIO Design Guide

Chapter 3: Solution Overview

Figure 1. VSPEX Proven Infrastructures

Solution architecture

High-level architecture

The EMC VSPEX End-User Computing for VMware Horizon View solution provides a complete system architecture with two XtremIO X-Brick configurations. An X-Brick is capable of supporting up to 2,500 full-clone or 3,500 linked-clone virtual desktops, and a Starter X-Brick is capable of supporting up to 1,250 full-clone or 1,750 linked-clone virtual desktops. The solution supports block storage on XtremIO for virtual desktops and optional file storage on Isilon or VNX for user data.

XtremIO X-Bricks are validated to support a higher number of both full-clone and linked-clone desktops; the VSPEX validated numbers are relevant only to this solution.
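As a quick illustration of these validated limits, the following Python sketch selects the smallest validated XtremIO configuration for a requested desktop count. It is not an official EMC sizing tool; the function name and structure are ours, and only the desktop counts stated in this guide are used.

```python
# Illustrative sizing helper based on the validated desktop counts in this
# Design Guide. Not an official EMC sizing tool.

# Validated maximum desktops per configuration (from this guide).
VALIDATED_LIMITS = {
    "Starter X-Brick": {"full-clone": 1250, "linked-clone": 1750},
    "X-Brick":         {"full-clone": 2500, "linked-clone": 3500},
}

def recommend_configuration(desktops, clone_type):
    """Return the smallest validated configuration that fits the request."""
    for config in ("Starter X-Brick", "X-Brick"):
        if desktops <= VALIDATED_LIMITS[config][clone_type]:
            return config
    raise ValueError(
        "Requested count exceeds the configurations validated in this guide"
    )

print(recommend_configuration(1000, "full-clone"))    # Starter X-Brick
print(recommend_configuration(3000, "linked-clone"))  # X-Brick
```

For desktop counts beyond a single X-Brick, refer to the scale-out cluster guidance later in this chapter rather than extrapolating from these numbers.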


Figure 2 shows the high-level architecture of the validated solution.

Figure 2. Architecture of the validated solution

The solution uses XtremIO, Isilon or VNX, and VMware vSphere to provide the storage and virtualization platforms for Microsoft Windows 7 or Windows 8.1 virtual desktops provisioned by VMware Horizon View Composer.

For the solution, we deployed the XtremIO array in multiple configurations to support up to 3,500 virtual desktops. We tested two XtremIO arrays: a Starter X-Brick capable of hosting up to 1,250 full-clone or 1,750 linked-clone virtual desktops, and an X-Brick capable of hosting up to 2,500 full-clone or 3,500 linked-clone virtual desktops. We also deployed Isilon and VNX arrays for hosting user data. The highly available XtremIO array provides the storage for the virtual desktops.

The infrastructure services for the solution, as shown in Figure 3, can be provided by existing infrastructure at the customer site, by the VSPEX Private Cloud, or by deploying them as dedicated resources as part of the solution. The virtual desktops, as shown in Figure 3, require dedicated end-user computing resources and are not intended to be layered on a VSPEX Private Cloud.

Planning and designing the storage infrastructure for a Horizon View environment is critical because the shared storage must be able to absorb large bursts of I/O. These bursts can lead to periods of erratic and unpredictable virtual desktop performance.

Note: In this guide, "we" refers to the EMC Solutions engineering team that validated the solution.


Users can adapt to slow performance, but unpredictable performance frustrates them and reduces efficiency. To provide predictable performance for end-user computing solutions, the storage system must be able to handle the peak I/O load from the clients while keeping response time to a minimum. This solution uses the XtremIO array to provide the submillisecond response times the clients require, while the real-time, inline deduplication and inline compression features of the platform reduce the amount of physical storage needed.

EMC Data Protection solutions enable user data protection and end-user recoverability. This Horizon View solution uses Avamar and its desktop client to achieve this.

Logical architecture

The EMC VSPEX End-User Computing for VMware Horizon View solution supports block storage on XtremIO for the virtual desktops. Figure 3 shows the logical architecture of the solution, including the desktop users (PCoIP clients), the vSphere clusters for virtual desktops and infrastructure virtual servers (View Manager Servers, SQL Server, Active Directory/DNS/DHCP, and vCenter Server), the 10 GbE IP network, the 8 Gb FC/10 Gb iSCSI storage network, and the EMC XtremIO, Isilon, VNX, and Avamar systems.

Figure 3. Logical architecture

This solution uses two networks: a storage network, which uses 8 Gb Fibre Channel (FC) or 10 Gb Ethernet with the iSCSI protocol to carry virtual desktop and virtual server OS data, and a 10 Gb Ethernet network that carries all other traffic.

Note: The solution also supports 1 Gb Ethernet if the bandwidth requirements are met.


Key components

This section provides an overview of the key technologies used in this solution, as outlined in Table 3.

Table 3. Solution components

 Desktop virtualization broker—Manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images provided to users. This software enables on-demand creation of desktop images, allows image maintenance without affecting user productivity, and prevents unconstrained growth in the environment. VMware Horizon View 6.0 is the desktop broker in this solution.

 Virtualization layer—Decouples physical resources from the applications that use them. The application’s view of resource availability is not tied to the hardware. This enables many key features in the end-user computing concept. This solution uses VMware vSphere for the virtualization layer.

 Compute layer—Provides CPU and memory resources for the virtualization layer and for the applications running in the infrastructure. The VSPEX program defines the minimum amount of required compute layer resources but allows the customer to implement the solution using any server hardware that meets the requirements.

 Network layer—Connects users to the resources they need and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but allows the customer to implement the solution using any network hardware that meets the requirements.

 Storage layer—A critical resource for the implementation of the end-user computing environment, the storage layer must be able to absorb large bursts of activity without affecting the user experience. This solution uses XtremIO and Isilon or VNX series arrays to handle the workload.

 Data protection—An optional component that provides data protection in the event that data in the primary system is deleted, damaged, or unusable. This solution uses Avamar for data protection.

 Security layer—An optional component that provides consumers with options to control environment access and ensure that only authorized users access the system. This solution uses RSA SecurID to provide user authentication.

 VMware Workspace—Optional support for VMware Workspace deployments.


Desktop virtualization broker

Desktop virtualization encapsulates and hosts desktop services on centralized computing resources at remote data centers. This enables end users to connect to their virtual desktops from different types of devices, including desktops, laptops, thin clients, zero clients, smartphones, and tablets. In this solution, we used VMware Horizon View to provision, manage, broker, and monitor the desktop virtualization environment.

VMware Horizon View 6.0

VMware Horizon View is a leading desktop virtualization solution that delivers desktop services from the cloud to end users. VMware Horizon View 6.0 integrates with vSphere to provide:

 Storage resource optimization—View Composer optimizes storage utilization and performance by reducing the footprint of virtual desktops.

 Thin provisioning support—Horizon View 6.0 enables storage resource allocation when virtual desktops are provisioned. This results in better utilization of the storage infrastructure and reduced capital expenditure (CAPEX) and operating expenditure (OPEX).

 Desktop virtual machine space reclamation—Horizon View 6.0 can reclaim free disk space in Windows 8.1 virtual desktops. This keeps the storage space required for linked-clone desktops at a minimum throughout the desktop lifecycle.

The Horizon View 6.0 release includes the following user experience enhancements:

 Ability to stream applications directly to View clients using Microsoft Windows RDS servers

 Ability to create multi-site, federated View pods to support disaster recovery or load-balancing initiatives

 A virtualized graphics processing unit (GPU) to support hardware-accelerated 3D graphics

 Desktop access through HTML5 as well as iOS and Android applications

Refer to VMware Horizon View 6.0 Release Notes for more details. The VMware Horizon View editions are bundled solutions that include vSphere Desktop Edition and vCenter Desktop Server. For solution validation, we deployed VMware Horizon Enterprise Edition, which includes vCenter Desktop, vSphere Desktop, View Manager, View Composer, View Persona Management, Workspace, and VMware ThinApp.


VMware View Composer

VMware View Composer works directly with vCenter Server to deploy, customize, and maintain the state of the virtual desktops when using linked clones. View Composer also enables the following capabilities:

 Tiered storage support to enable the use of dedicated storage resources for the placement of both the read-only replica and linked-clone disk images

 An optional stand-alone View Composer server to minimize the impact of virtual desktop provisioning and maintenance operations on the vCenter server

This solution uses View Composer to deploy dedicated virtual desktops running Windows 7 or Windows 8.1 as linked clones.

VMware View Persona Management

VMware View Persona Management preserves user profiles and synchronizes them with a remote profile repository. View Persona Management does not require the configuration of Windows roaming profiles, eliminating the need to use Active Directory to manage Horizon View user profiles. View Persona Management provides the following benefits over traditional Windows roaming profiles:

 Horizon View dynamically downloads a user’s remote profile when the user logs in to a Horizon View desktop.

 During login, Horizon View downloads only the files that Windows requires, such as user registry files. It then copies other files to the local desktop when the user or an application opens them from the local profile folder.

 Horizon View copies recent changes in the local profile to the remote repository at a configurable interval.

 During logout, Horizon View copies to the remote repository only the files that the user updated since the last replication.

 You can configure View Persona Management to store user profiles in a secure, centralized repository.

VMware View Storage Accelerator

VMware View Storage Accelerator reduces the storage load associated with virtual desktops by caching the common blocks of desktop images in local vSphere host memory. To do this, Storage Accelerator uses Content Based Read Cache (CBRC), which is implemented inside the vSphere hypervisor. When CBRC is enabled for Horizon View virtual desktop pools, the host hypervisor scans the storage disk blocks to generate digests of the block contents. The blocks are cached in the host-based CBRC as the hypervisor reads them, and subsequent reads of blocks with the same digest are served directly from the in-memory cache. This significantly improves the performance of the virtual desktops, especially during boot storms, user login storms, or antivirus scanning storms, when a large number of blocks with identical content are read.
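The digest-based caching behavior described above can be modeled in a few lines. The following Python sketch is a simplified illustration of the idea only, not VMware's implementation (the class and its bookkeeping are ours): blocks are fingerprinted, and reads of blocks whose digest is already cached are served from memory instead of storage.

```python
import hashlib

class ContentBasedReadCache:
    """Toy model of a content-based read cache (CBRC-style). Illustration
    only; the real cache lives inside the vSphere hypervisor."""

    def __init__(self):
        self.cache = {}          # digest -> block contents
        self.storage_reads = 0   # reads that had to go to the array
        self.cache_hits = 0      # reads served from host memory

    def read(self, digest, backing_store):
        if digest in self.cache:
            self.cache_hits += 1
            return self.cache[digest]
        self.storage_reads += 1
        block = backing_store[digest]
        self.cache[digest] = block
        return block

# Ten desktops booting from the same base image read identical blocks.
base_blocks = [b"boot-block-%d" % i for i in range(4)]
store = {hashlib.sha1(b).hexdigest(): b for b in base_blocks}

cbrc = ContentBasedReadCache()
for _desktop in range(10):
    for block in base_blocks:
        cbrc.read(hashlib.sha1(block).hexdigest(), store)

print(cbrc.storage_reads)  # 4  (each unique block fetched once)
print(cbrc.cache_hits)     # 36 (all other reads served from memory)
```

This is why boot storms benefit most: nearly all desktops read the same base-image blocks, so almost every read after the first is a cache hit.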


VMware vRealize Operations Manager for Horizon View

VMware vRealize Operations Manager for Horizon View provides end-to-end visibility into the health, performance, and efficiency of virtual desktop infrastructure (VDI) environments. It enables desktop administrators to proactively ensure the best end-user experience, avert incidents, and eliminate bottlenecks. Designed for VMware Horizon View, this version of vRealize Operations Manager improves IT productivity and lowers the cost of owning and operating VDI environments. Key features include:

 Patented self-learning analytics that adapt to your environment and continuously analyze thousands of metrics for server, storage, networking, and end-user performance

 Comprehensive dashboards that simplify performance and health monitoring, identify bottlenecks, and improve Horizon View infrastructure efficiency

 Dynamic thresholds and smart alerts that warn administrators and provide more specific information about impending performance issues

 Automated root-cause analysis, session lookup, and event correlation for faster troubleshooting of end-user problems

 An integrated approach to performance, capacity, and configuration that supports holistic VDI operations management

 A design optimized specifically for VMware Horizon View


Virtualization layer

VMware vSphere

VMware vSphere is the industry-leading virtualization platform. It provides flexibility and cost savings by enabling the consolidation of large, inefficient server farms into nimble, reliable infrastructures. The core VMware vSphere components are the VMware vSphere hypervisor and VMware vCenter Server. This solution uses VMware vSphere Desktop Edition, which is for customers who want to purchase vSphere licenses only for desktop virtualization. vSphere Desktop provides the full range of features and functionality of the vSphere Enterprise Plus edition and comes with unlimited vRAM entitlement.

VMware vCenter Server

VMware vCenter Server is a centralized platform for managing vSphere environments. It provides administrators with a single interface for all aspects of monitoring, managing, and maintaining the virtual infrastructure. vCenter is also responsible for managing advanced features such as vSphere High Availability (HA), vSphere Distributed Resource Scheduler (DRS), vSphere vMotion, and vSphere Update Manager.

VMware vSphere High Availability

VMware vSphere HA provides uniform, cost-effective failover protection against hardware and OS outages:

 If the virtual machine OS has an error, the virtual machine can be automatically restarted on the same hardware.

 If the physical hardware has an error, the affected virtual machines can be automatically restarted on other servers in the cluster.

With vSphere HA, you can configure policies to determine which machines are restarted automatically and under what conditions these operations should be performed.

VMware vShield Endpoint

VMware vShield Endpoint offloads virtual desktop antivirus and antimalware scanning operations to a dedicated secure virtual appliance delivered by VMware partners. Offloading scanning operations improves desktop consolidation ratios and performance by eliminating antivirus storms, streamlines antivirus and antimalware deployment, and helps satisfy compliance and audit requirements through detailed logging of antivirus and antimalware activities.

Compute layer

VSPEX defines the minimum amount of compute layer resources required, but allows the customer to implement the solution using any server hardware that meets these requirements. For details, refer to Chapter 5 of this guide.


Network layer

VSPEX defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but allows the customer to use any network hardware that meets these requirements. For details, refer to Chapter 5 of this guide.

Storage layer

The storage layer is a key component of any cloud infrastructure solution, serving the data generated by applications and operating systems in the data center. This VSPEX solution uses XtremIO storage arrays to provide virtualization at the storage layer. The XtremIO platform provides the required storage performance, increases storage efficiency and management flexibility, and reduces the total cost of ownership. This solution also uses Isilon or VNX arrays to provide user data storage.

EMC XtremIO

The XtremIO All-Flash Array is an all-new design with a revolutionary architecture. It brings together all the necessary requirements to enable the agile data center: linear scale-out, inline all-the-time data services, and rich data center services for the workloads.

The basic hardware building block for these scale-out arrays is the X-Brick. Each X-Brick has two active-active controller nodes and a disk array enclosure packaged together. The Starter X-Brick with 13 SSDs can be expanded to a full X-Brick with 25 SSDs without any downtime. Up to six X-Bricks can be combined in a single scale-out cluster to increase performance and capacity in a linear fashion.

The XtremIO platform maximizes the use of flash storage media. Key attributes of this platform are:

 Incredibly high levels of I/O performance, particularly for random I/O workloads that are typical in virtualized environments

 Consistently low (sub-millisecond) latency

 True inline data reduction—removes redundant information in the data path and writes only unique data to the array, which lowers the required storage capacity

 A full suite of enterprise array capabilities, such as integration with VMware through VAAI, N-way active controllers, high availability, strong data protection, and thin provisioning

XtremIO storage systems include the following components:

 Host adapter ports—Provide host connectivity through the fabric into the array

 Storage controllers (SCs)—The compute component of the storage array; SCs handle all aspects of data moving into, out of, and between arrays

 Disk drives—Solid state drives that contain the host and application data

 InfiniBand switches—A switched, high-throughput, low-latency, scalable, quality-of-service-capable, and failover-capable computer network communications link used in multi-X-Brick configurations

XtremIO Operating System (XIOS)

XtremIO Operating System (XIOS) manages the XtremIO storage cluster. XIOS ensures the system remains balanced and delivers the highest performance levels without any administrator intervention by:

 Ensuring all SSDs are evenly loaded, providing the highest possible performance and endurance for demanding workloads

 Eliminating the complex configuration needed for traditional arrays—there is no need to set RAID levels, determine drive group sizes, set stripe widths, set caching policies, or build aggregates

 Automatically and optimally configuring every volume at all times—I/O performance on existing volumes and data sets automatically increases with larger cluster sizes, and every volume is capable of receiving the full performance potential of the entire XtremIO system

Standards-based enterprise storage system

The XtremIO system interfaces with vSphere hosts using standard FC and iSCSI block interfaces. The system supports complete high-availability features, including support for native VMware multipath I/O, protection against failed SSDs, nondisruptive software and firmware upgrades, no single point of failure (SPOF), and hot-swappable components.

Real-time, inline data reduction

The XtremIO storage system deduplicates and compresses data, including desktop images, in real time, allowing a massive number of virtual desktops to reside in a small and economical amount of flash storage. Data reduction on the XtremIO array does not adversely affect IOPS or latency; rather, it enhances the performance of the end-user computing environment.

Scale-out design

The X-Brick is the fundamental building block of a scaled-out XtremIO clustered system. Using a Starter X-Brick, virtual desktop deployments can start small and grow to nearly any scale required by upgrading the Starter X-Brick to an X-Brick, and then configuring a larger XtremIO cluster if required. The system expands capacity and performance linearly as building blocks are added, making EUC sizing and management of future growth extremely simple.

VAAI integration

The XtremIO array is fully integrated with vSphere through vStorage APIs for Array Integration (VAAI). All API commands are supported, including ATS, Clone Blocks/Full Copy/XCOPY, Zero Blocks/Write Same, Thin Provisioning, and Block Delete. This, in combination with the array’s inline data reduction and in-memory metadata management, enables nearly instantaneous virtual machine provisioning and cloning and makes it possible to use large volume sizes for management simplicity.
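To make the inline data reduction concept concrete, the following Python sketch models the idea in simplified form. It is an illustration of the general technique (fingerprint, deduplicate, compress), not XtremIO's actual data path; the function and its bookkeeping are ours.

```python
import hashlib
import zlib

def ingest(blocks):
    """Toy model of inline data reduction: fingerprint each incoming block,
    keep only unique blocks, and compress them before 'writing'. This is a
    conceptual illustration, not XtremIO's implementation."""
    unique = {}
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in unique:                   # deduplication
            unique[digest] = zlib.compress(block)  # compression
    logical = sum(len(b) for b in blocks)
    physical = sum(len(c) for c in unique.values())
    return logical, physical

# 100 desktops writing the same highly compressible 4 KiB OS block:
blocks = [b"A" * 4096] * 100
logical, physical = ingest(blocks)
print(logical)             # 409600 bytes of logical writes
print(physical < logical)  # True: only one compressed copy is stored
```

Desktop images are highly redundant across a pool, which is why this combination of deduplication and compression lets a small amount of flash back a very large number of desktops.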


Massive performance

The XtremIO array handles very high, sustained levels of small, random, mixed read and write I/O, as is typical of virtual desktops, and does so with consistent, extraordinarily low latency.

Fast provisioning

XtremIO arrays use writeable snapshot technology that is space-efficient for both data and metadata. XtremIO snapshots are free from limitations of performance, features, topology, or capacity reservations. With their unique in-memory metadata architecture, XtremIO arrays can instantly clone desktop environments of any size.

Ease of use

The XtremIO storage system requires only a few basic setup steps and no tuning or ongoing administration to achieve and maintain high performance levels. The XtremIO system can be deployed in less than an hour after delivery.

Security with Data at Rest Encryption (D@RE)

XtremIO arrays securely encrypt all data stored on the all-flash array, delivering protection, especially for full-clone desktops, for regulated use cases in sensitive industries such as healthcare, finance, and government.

Data center economics

A single X-Brick can host thousands of desktops in just 6U of rack space, while requiring approximately 750 W of power.

EMC Isilon

Isilon scale-out network attached storage (NAS) is ideal for storing large amounts of user data and Windows profiles in a Horizon View infrastructure. It provides a simple, scalable, and efficient platform to store massive amounts of unstructured data and enables applications to create a scalable and accessible data repository without the overhead associated with traditional storage systems. Key attributes of the Isilon platform are:

 Multi-protocol support for NFS, CIFS, HTTP, FTP, HDFS for Hadoop and data analytics, and REST for object and cloud computing

 At the client/application layer, support for a wide range of operating system environments

 At the Ethernet level, a 10 GbE network

 Isilon’s OneFS operating system, a single file system/single volume architecture that makes the cluster extremely easy to manage, regardless of the number of nodes

 Scaling from a minimum of three nodes up to 144 nodes, all connected by an InfiniBand communications layer


Figure 4. Isilon cluster components

Isilon OneFS

The Isilon OneFS operating system provides the intelligence behind all Isilon scale-out storage systems. It combines the three layers of traditional storage architectures—file system, volume manager, and data protection—into one unified software layer, creating a single intelligent file system that spans all nodes within an Isilon cluster.

Figure 5. Isilon OneFS operating system functionality

OneFS provides a number of important advantages:

 Simple to manage, as a result of Isilon’s single file system, single volume, global namespace architecture

 Massive scalability, with the ability to scale to 20 PB in a single volume

 Unmatched efficiency, with over 80 percent storage utilization, automated storage tiering, and Isilon SmartDedupe

 Enterprise data protection, including efficient backup and disaster recovery, and N+1 through N+4 redundancy

 Robust security and compliance options, including role-based access control, Secure Access Zones, SEC 17a-4 compliant WORM data security, Data At Rest Encryption (DARE) with self-encrypting drives (SEDs), and integrated file system auditing support

 Operational flexibility, with multi-protocol support including native HDFS support; Syncplicity® support for secure mobile computing; and support for object and cloud computing, including OpenStack Swift

Isilon offers a full suite of data protection and management software to help you protect your data assets, control costs, and optimize storage resources and system performance for your Big Data environment.

Data protection

 SnapshotIQ—Protects data efficiently and reliably with secure, near-instantaneous snapshots while incurring little to no performance overhead, and speeds recovery of critical data with near-immediate, on-demand snapshot restores

 SyncIQ—Replicates and distributes large, mission-critical data sets to multiple shared storage systems in multiple sites for reliable disaster recovery capability

 SmartConnect—Enables client connection load balancing and dynamic NFS failover and failback of client connections across storage nodes to optimize use of cluster resources

 SmartLock—Protects your critical data against accidental, premature, or malicious alteration or deletion with Isilon’s software-based approach to write once-read many (WORM) storage, and meets stringent compliance and governance needs such as SEC 17a-4 requirements

Data management

 SmartPools—Implements a highly efficient, automated tiered storage strategy to optimize storage performance and cost

 SmartDedupe—Deduplicates data to reduce storage capacity requirements and associated costs without impacting performance

 SmartQuotas—Assigns and manages quotas that seamlessly partition and thin provision storage into easily managed segments at the cluster, directory, subdirectory, user, and group levels

 InsightIQ—Provides innovative performance monitoring and reporting tools that can help you maximize performance of your Isilon scale-out storage system

 Isilon for vCenter—Manages Isilon storage functions from vCenter

Isilon Scale-out NAS Product Family

The available Isilon nodes are broken into several classes, according to their functionality:

 S-Series: IOPS-intensive applications

 X-Series: High-concurrency and throughput-driven workflows

 NL-Series: Near-primary accessibility, with near-tape value

 Performance Accelerator: Independent scaling for ultimate performance

 Backup Accelerator: High-speed and scalable backup and restore solution

Figure 6. Isilon node classes

EMC VNX

The VNX flash-optimized unified storage platform is ideal for storing user data and Windows profiles in a VMware Horizon View infrastructure. It delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s virtualized application environments.

VNX storage includes the following components:

 Host adapter ports (for block)—Provide host connectivity through the fabric into the array.

 Data Movers (for file)—Front-end appliances that provide file services to hosts (optional if providing CIFS/SMB or NFS services).

 Storage processors (SPs)—The compute component of the storage array. SPs handle all aspects of data moving into, out of, and between arrays.

 Disk drives—Disk spindles and solid state drives (SSDs) that contain the host/application data, and their enclosures.

Note: The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and input/output (I/O) ports. It enables the CIFS (SMB) and NFS protocols on the VNX array.


EMC VNX series

VNX includes many features and enhancements designed and built upon the first generation’s success, including:

 More capacity and better optimization with VNX MCx™ technology components—Multicore Cache, Multicore RAID, and Multicore FAST Cache

 Greater efficiency with a flash-optimized hybrid array

 Better protection by increasing application availability with active/active storage processors

 Easier administration and deployment with the new Unisphere® Management Suite

VSPEX is built with VNX to deliver even greater efficiency, performance, and scale than ever before.

Flash-optimized hybrid array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance for your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. Flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high performance scalability.

The EMC Fully Automated Storage Tiering (FAST) Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance. Data is generally accessed most frequently at the time it is created; therefore, new data is first stored on flash drives to provide the best performance. As the data ages and becomes less active over time, FAST VP tiers the data from high-performance to high-capacity drives automatically, based on customer-defined policies. This functionality has been enhanced with four times better granularity and with new FAST VP solid-state disks (SSDs) based on enterprise multilevel cell (eMLC) technology to lower the cost per gigabyte.

FAST Cache uses flash drives as an expanded cache layer for the array to dynamically absorb unpredicted spikes in system workloads. Frequently accessed data is copied to the FAST Cache in 64 KB increments, and subsequent reads and writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives, which in turn dramatically improves the response times for the active data and reduces data hot spots that can occur within the LUN.

All VSPEX use cases benefit from the increased efficiency provided by the FAST Suite. Furthermore, VNX provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.
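The FAST Cache promotion behavior can be sketched as a simple access-count policy over 64 KB chunks. The following Python model is conceptual only; the promotion threshold and bookkeeping are illustrative assumptions, not VNX internals.

```python
CHUNK_SIZE = 64 * 1024  # FAST Cache works in 64 KB increments
PROMOTE_AFTER = 3       # illustrative threshold, not the VNX internal value

class FastCacheModel:
    """Toy model of FAST Cache promotion, for illustration only."""

    def __init__(self):
        self.access_counts = {}
        self.flash_chunks = set()

    def io(self, offset):
        chunk = offset // CHUNK_SIZE
        if chunk in self.flash_chunks:
            return "flash"                # already promoted: served from flash
        self.access_counts[chunk] = self.access_counts.get(chunk, 0) + 1
        if self.access_counts[chunk] >= PROMOTE_AFTER:
            self.flash_chunks.add(chunk)  # promote the hot 64 KB chunk
        return "disk"

cache = FastCacheModel()
results = [cache.io(0) for _ in range(5)]  # repeated I/O to one hot chunk
print(results)  # ['disk', 'disk', 'disk', 'flash', 'flash']
```

The point of the sketch is the shape of the behavior: only repeatedly accessed chunks earn a place on flash, so a burst of activity against a hot region is absorbed by the cache while cold data stays on spinning disks.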

EMC VSPEX End-User Computing: VMware Horizon View 6.0 and VMware vSphere with EMC XtremIO Design Guide


Chapter 3: Solution Overview

Unisphere Management Suite

EMC Unisphere® is the central management platform for the VNX series, providing a single, combined view of file and block systems, with all features and functions available through a common interface. Unisphere is optimized for virtual applications and provides industry-leading VMware integration, automatically discovering virtual machines and ESX servers and providing end-to-end, virtual-to-physical mapping. Unisphere also simplifies the configuration of FAST Cache and FAST VP on VNX platforms.

The Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 7, the suite also includes Unisphere Remote for centrally managing thousands of VNX and VNXe systems, with new support for XtremSW Cache.

Figure 7. New Unisphere Management Suite

VMware Storage APIs for Storage Awareness

VMware vSphere Storage APIs for Storage Awareness (VASA) is a VMware-defined API that displays storage information through vCenter. Integration between VASA technology and VNX makes storage management in a virtualized environment a seamless experience.

EMC VNX Virtual Provisioning

EMC VNX Virtual Provisioning™ enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide high and predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning.


Pools and pool LUNs are the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and user capacity threshold setting.

VNX file shares

In many environments, it is important to have a common location in which to store files accessed by many users. CIFS or NFS file shares, available from a file server, provide this ability. VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features. For more information about VNX file shares, refer to EMC VNX Series: Configuring and Managing CIFS on VNX on EMC Online Support.

EMC SnapSure

EMC SnapSure™ is a VNX File software feature that enables you to create and manage checkpoints, which are point-in-time logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks; when a block within the PFS is modified, a copy containing the original contents of the block is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks from the PFS in the SavVol, and the unchanged PFS blocks remaining in the PFS, according to a bitmap and blockmap data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint.

A checkpoint reflects the state of a PFS at the time the checkpoint is created. SnapSure supports the following checkpoint types:

• Read-only checkpoints—Read-only file systems created from a PFS
• Writeable checkpoints—Read/write file systems created from a read-only checkpoint

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.

Using VNX SnapSure provides more details.
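The copy-on-first-modify mechanism is easy to see in miniature. The following Python sketch is a toy model (not EMC code) showing how a checkpoint read combines blocks saved to the SavVol with unchanged blocks still in the PFS:

```python
class Checkpoint:
    """Toy model of SnapSure's copy-on-first-modify checkpoint.

    On the first write to a PFS block after the checkpoint is taken,
    the original contents are copied to the SavVol; later writes to
    the same block are not copied again.
    """
    def __init__(self, pfs):
        self.pfs = pfs          # live file system: block index -> contents
        self.savvol = {}        # saved original blocks
        self.copied = set()     # bitmap: blocks already saved to the SavVol

    def write(self, block, data):
        if block not in self.copied:     # copy on FIRST modify only
            self.savvol[block] = self.pfs[block]
            self.copied.add(block)
        self.pfs[block] = data

    def read_checkpoint(self, block):
        # saved original if the block changed, else the live PFS block
        return self.savvol.get(block, self.pfs[block])

pfs = {0: "a", 1: "b"}
ckpt = Checkpoint(pfs)
ckpt.write(0, "x")
ckpt.write(0, "y")                # second write: no additional copy
print(ckpt.read_checkpoint(0))    # "a" -- point-in-time image preserved
print(ckpt.read_checkpoint(1))    # "b" -- unchanged block read from PFS
```

Note how only one SavVol copy exists for block 0 even after two writes, which is what keeps SnapSure's space overhead proportional to the changed data rather than the write count.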


Virtualization management

EMC Virtual Storage Integrator (VSI) for VMware vSphere Web Client

The EMC Virtual Storage Integrator (VSI) for VMware vSphere Web Client is a plug-in for VMware vCenter. It enables administrators to view, manage, and optimize storage for VMware ESX/ESXi servers and hosts, and then map that storage to the hosts. VSI consists of a graphical user interface and the EMC Solutions Integration Service (SIS), which provides communication and access to the storage systems. Depending on the platform, tasks that you can perform with VSI include:

• Storage provisioning
• Cloning
• Block deduplication
• Compression
• Storage mapping
• Capacity monitoring
• Virtual desktop infrastructure (VDI) integration

Using the Storage Access feature, a storage administrator can enable virtual machine administrators to perform management tasks on a set of storage pools. The current version of VSI supports the following EMC storage systems and features:

EMC ViPR™ software-defined storage:
• View properties of NFS and VMFS datastores and RDM volumes
• Provision NFS and VMFS datastores and RDM volumes

VNX storage for ESX/ESXi hosts:
• View properties of NFS and VMFS datastores and RDM volumes
• Provision NFS and VMFS datastores and RDM volumes
• Compress and decompress storage system objects on NFS and VMFS datastores
• Enable and disable block deduplication on VMFS datastores
• Create fast clones and full clones of virtual machines on NFS datastores

EMC Symmetrix® VMAX® storage systems:
• View properties of VMFS datastores and RDM volumes
• Provision VMFS datastores and RDM volumes

XtremIO storage systems:
• View properties of ESX/ESXi datastores and RDM disks
• Provision VMFS datastores and RDM volumes
• Create full clones using XtremIO native snapshots
• Integrate with VMware Horizon View and Citrix XenDesktop


Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for more information.

Data protection layer

Avamar delivers the protection confidence needed to accelerate deployment of VSPEX end-user computing solutions. Avamar empowers administrators to centrally back up and manage policies and end-user computing infrastructure components, while allowing end users to efficiently recover their own files from a simple and intuitive web-based interface. By moving only new, unique sub-file data segments, Avamar delivers fast daily full backups, with up to 90 percent reduction in backup times, while reducing the required daily network bandwidth by up to 99 percent. All Avamar recoveries are single-step for simplicity.

The EMC Backup and Recovery for VSPEX for End User Computing with VMware Horizon View Design and Implementation Guide provides more information.

Security layer

RSA SecurID two-factor authentication can provide enhanced security for the VSPEX end-user computing environment by requiring the user to authenticate with two pieces of information, collectively called a passphrase. SecurID functionality is managed through RSA Authentication Manager, which also controls administrative functions such as token assignment to users, user management, and high availability.

The Securing EMC VSPEX End-User Computing with RSA SecurID: VMware Horizon View 5.2 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops Design Guide provides details for planning the security layer.

VMware Workspace solution

VMware Workspace combines applications into a single, aggregated workspace and provides the flexibility for employees to access the workspace on any device, no matter where they are based. VMware Workspace reduces the complexity of administration by enabling IT to centrally deliver, manage, and secure these assets across devices. With some added infrastructure, the VSPEX End-User Computing for VMware Horizon View solution supports VMware Workspace deployments.


Chapter 4

Sizing the Solution

This chapter presents the following topics:

• Overview
• Reference workload
• VSPEX Private Cloud requirements
• XtremIO array configurations
• Isilon configuration
• VNX configurations
• Choosing the appropriate reference architecture


Overview

This chapter describes how to design a VSPEX End-User Computing for VMware Horizon View solution and size it to fit the customer's needs. It introduces the concepts of a reference workload, building blocks, and the validated end-user computing maximums, and describes how to use these to design your solution.

Table 4 outlines the high-level steps you need to complete when sizing the solution.

Table 4. VSPEX end-user computing: Design process

Step   Action
1      Use the Customer Sizing Worksheet in Appendix A to collect the customer requirements for the end-user computing environment.
2      Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX reference architecture for your end-user computing solution, based on the customer requirements collected in Step 1.
       Note: If the Sizing Tool is not available, you can manually size the end-user computing solution using the guidelines in this chapter.

Reference workload

VSPEX defines a reference workload to represent a unit of measure for quantifying the resources in the solution reference architectures. By comparing the customer's actual usage to this reference workload, you can extrapolate which reference architecture to choose as the basis for the customer's VSPEX deployment.

For VSPEX end-user computing solutions, the reference workload is a single virtual desktop—the reference virtual desktop—with the workload characteristics indicated in Table 5. To determine the equivalent number of reference virtual desktops for a particular resource requirement, use the VSPEX Customer Sizing Worksheet to convert the total actual resources required for all desktops into the reference virtual desktop format.

Table 5. Reference virtual desktop characteristics

Characteristic                                      Value
Virtual desktop OS                                  Microsoft Windows 7 Enterprise Edition (32-bit) or Microsoft Windows 8.1 Enterprise Edition (32-bit)
Virtual processors per virtual desktop              1
RAM per virtual desktop                             2 GB
Average IOPS per virtual desktop at steady state    10
Internet Explorer                                   10 for Windows 7 or 11 for Windows 8.1
Office                                              2010
Adobe Reader                                        XI
Adobe Flash Player                                  11 ActiveX
Doro PDF printer                                    1.8
Workload generator                                  Login VSI 4.1.2
Workload type                                       Office worker

This desktop definition is based on user data residing on shared storage. The I/O profile is defined using a test framework that runs all desktops concurrently with a steady load generated by the constant use of office-based applications such as browsers and office productivity suites.

This solution was verified with performance testing conducted using Login VSI (www.loginvsi.com), the industry-standard load testing solution for virtualized desktop environments. Login VSI provides proactive performance management solutions for virtualized desktop and server environments. Enterprise IT departments use Login VSI products in all phases of their virtual desktop deployment—from planning to deployment to change management—for more predictable performance, higher availability, and a more consistent end-user experience. The world's leading virtualization vendors use the flagship product, Login VSI, to benchmark performance. With minimal configuration, Login VSI products work with VMware Horizon View, Citrix XenDesktop and XenApp, Microsoft Remote Desktop Services (Terminal Services), and any other Windows-based virtual desktop solution. For more information, download a trial at www.loginvsi.com.

VSPEX Private Cloud requirements

This VSPEX End-User Computing Proven Infrastructure requires multiple application servers. Unless otherwise specified, all servers use Microsoft Windows Server 2012 R2 as the base operating system. Table 6 lists the minimum requirements for each infrastructure server.

Table 6. Infrastructure server minimum requirements

Server                      CPU       RAM     IOPS   Storage capacity
Domain controllers (each)   2 vCPUs   4 GB    25     32 GB
SQL Server                  2 vCPUs   6 GB    100    200 GB
vCenter Server              4 vCPUs   8 GB    100    80 GB
View controllers (each)     4 vCPUs   12 GB   50     32 GB


The requirements for the optional VMware vRealize Operations Manager for Horizon View and VMware Workspace components are available in the following sections of this document:


• VMware vRealize Operations Manager for Horizon View platform profile
• VSPEX for VMware Workspace solution

Private Cloud Storage Layout

This solution requires the following volumes of the indicated sizes for storing the indicated virtual machines:

• A 1 TB volume to host the infrastructure virtual machines, which can include the VMware vCenter Server, View Connection Servers, Microsoft Active Directory Server, and Microsoft SQL Server
• For configurations of up to 1,750 desktops, a 1.8 TB volume to host the vRealize Operations Manager for Horizon View virtual machines and databases
• For configurations of up to 3,500 desktops, a 3.6 TB volume to host the vRealize Operations Manager for Horizon View virtual machines and databases

Talk to your EMC sales representative for more information about larger configurations.

XtremIO array configurations

We validated the VSPEX XtremIO end-user computing configurations on the Starter X-Brick and X-Brick platforms, which vary according to the number of SSDs they include and their total available capacity. For each array, EMC recommends a maximum VSPEX end-user computing configuration, as outlined in this section.

Validated XtremIO configurations

The following validated XtremIO disk layouts were created to support a specified number of virtual desktops at a defined performance level. This VSPEX solution supports two XtremIO X-Brick configurations, which are selected based on the number of desktops being deployed:

• XtremIO Starter X-Brick—Supports up to 1,250 full-clone or 1,750 linked-clone virtual desktops
• XtremIO X-Brick—Supports up to 2,500 full-clone or 3,500 linked-clone virtual desktops

The XtremIO storage configuration required for this solution is in addition to the storage required by the VSPEX private cloud that supports the solution's infrastructure services. For more information about the VSPEX Private Cloud storage pool, refer to the VSPEX Proven Infrastructure Guide in Essential reading.

XtremIO Storage Layout

Table 7 shows the number of XtremIO volumes the solution presents to the vSphere servers as VMFS datastores for virtual desktop storage.

Table 7. XtremIO X-Brick configurations

XtremIO configuration   Number of desktops   Type of desktop   Number of volumes   Volume size
Starter X-Brick         1,250                Full-clone        10                  5 TB
                        1,750                Linked-clone      14                  1 TB
X-Brick                 2,500                Full-clone        20                  5 TB
                        3,500                Linked-clone      28                  1 TB

Expanding existing VSPEX end-user computing environments

The EMC VSPEX End-User Computing solution supports a flexible implementation model that makes it easy to expand your environment as the needs of the business change.

To support future expansion, the XtremIO Starter X-Brick can be non-disruptively upgraded to an X-Brick by installing the XtremIO expansion kit, which adds an additional twelve 400 GB SSDs. A Starter X-Brick upgraded in this way supports up to 2,500 full-clone or 3,500 linked-clone virtual desktops.

To support more than 2,500 full-clone or 3,500 linked-clone virtual desktops, XtremIO supports scaling out online by adding more X-Bricks. Each additional X-Brick increases performance and virtual desktop capacity linearly. Two X-Brick, four X-Brick, and six X-Brick XtremIO clusters are all valid configurations.

Isilon configuration

This solution uses the Isilon system for storing user data, home directories, and profiles. A three-node Isilon cluster supports 2,500 users' data with the reference workload validated in this solution. Each node has 36 drives (2 EFD and 34 SATA) and two 10 GbE ports. Table 8 provides detailed information.

Table 8. User data resource requirements on Isilon

Number of reference virtual desktops   Number of nodes   Node type   Max capacity/user (GB)
1–2,500                                3                 X410        36
2,501–3,500                            4                 X410        35
3,501–5,000                            5                 X410        30

Table 8 shows the recommended Isilon configurations, using total CIFS calls as the sizing baseline. Each X410 node used in this solution can provide 30 TB of capacity; add more nodes if more capacity per user is needed. This solution can also support other Isilon node types. Refer to the VSPEX Sizing Tool or check with your EMC sales representative for more information.
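As a rough illustration of that capacity arithmetic, the sketch below (an assumption-laden estimate, not the VSPEX Sizing Tool: it uses the 30 TB-per-node figure above and ignores protection overhead) reproduces the node counts in Table 8:

```python
import math

NODE_CAPACITY_TB = 30   # usable capacity per X410 node, as stated above

def isilon_nodes_for(users, gb_per_user, min_nodes=3):
    # Total user-data capacity in TB (treating 1 TB as 1,024 GB)
    total_tb = users * gb_per_user / 1024
    # A cluster needs at least three nodes; add nodes for extra capacity
    return max(min_nodes, math.ceil(total_tb / NODE_CAPACITY_TB))

print(isilon_nodes_for(2500, 36))   # 3 nodes, matching Table 8
print(isilon_nodes_for(5000, 30))   # 5 nodes, matching Table 8
```

For real sizing, use the VSPEX Sizing Tool, which accounts for OneFS protection levels and node mixes that this sketch omits.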

VNX configurations

This solution also supports using VNX series storage arrays for user data storage, with FAST Cache enabled for the related storage pools. The VNX5400™ can support up to 1,750 users with the reference workload validated in this solution. The VNX5600™ can support up to 3,500 users with the same reference workload. Table 9 shows the detailed requirements for 1,250 to 3,500 users.

Table 9. User data resource requirements on VNX

Number of users   VNX model   SSDs for FAST Cache   Number of 2 TB NL-SAS drives   Max capacity/user (GB)
1,250             VNX5400     2                     16                             15
1,750             VNX5400     2                     32                             22
2,500             VNX5600     4                     40                             19
3,500             VNX5600     4                     48                             17

Table 9 shows the recommended VNX configurations, using total CIFS calls as the sizing baseline. Each 6+2 RAID 6 group of 2 TB NL-SAS drives used in this solution can provide 10 TB of capacity. Add more 6+2 RAID 6 groups if more capacity per user is needed. Refer to the VSPEX Sizing Tool or check with your EMC sales representative for more information about larger-scale requirements.

VNX FAST VP

If multiple drive types are implemented, enable FAST VP to automatically tier data to balance differences in performance and capacity. Note: FAST VP can provide performance improvements when implemented for user data and roaming profiles. Do not use FAST VP for virtual desktop datastores.

VNX shared file systems

In this validated solution, virtual desktops use four shared file systems—two for the VMware Horizon View Persona Management repositories and two to redirect user storage that resides in home directories. In general, redirecting users' data out of the base image to Isilon or VNX enables centralized administration and data protection, and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. Each Persona Management repository share and home directory share serves an equal number of users.

Choosing the appropriate reference architecture

To choose the appropriate reference architecture for a customer environment, you need to determine the resource requirements of the environment and then convert these requirements to an equivalent number of reference virtual desktops that have the characteristics defined in Table 5. This section describes how to use the Customer Sizing Worksheet to simplify the sizing calculations, and additional factors you should take into consideration when deciding which architecture to deploy.


Using the Customer Sizing Worksheet

The Customer Sizing Worksheet helps you to assess the customer environment and calculate the sizing requirements of the environment. Table 10 shows a completed worksheet for an example customer environment. Appendix A provides a blank Customer Sizing Worksheet that you can print out and use to help size the solution for a customer.

Table 10. Sample Customer Sizing Worksheet

User type        Row                                     vCPUs   Memory   IOPS   Equivalent reference   No. of    Total reference
                                                                                 virtual desktops       users     desktops
Heavy users      Resource requirements                   2       8 GB     12     ---                    ---       ---
                 Equivalent reference virtual desktops   2       4        2      4                      200       800
Moderate users   Resource requirements                   2       4 GB     8      ---                    ---       ---
                 Equivalent reference virtual desktops   2       2        1      2                      200       400
Typical users    Resource requirements                   1       2 GB     8      ---                    ---       ---
                 Equivalent reference virtual desktops   1       1        1      1                      1,200     1,200
Total                                                                                                             2,400

To complete the Customer Sizing Worksheet, follow these steps:

1. Identify the user types planned for migration into the VSPEX end-user computing environment and the number of users of each type.

2. For each user type, determine the compute resource requirements in terms of vCPUs, memory (GB), storage performance (IOPS), and storage capacity.

3. For each resource type and user type, determine the equivalent reference virtual desktop requirements—that is, the number of reference virtual desktops required to meet the specified resource requirements.

4. Determine the total number of reference desktops needed from the resource pool for the customer environment.

Determining the resource requirements

CPU

The reference virtual desktop outlined in Table 5 assumes that most desktop applications are optimized for a single CPU. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to accommodate the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, your pool needs to provide the capability of 120 virtual desktops.


Memory

Memory plays a key role in ensuring application functionality and performance. Each group of desktops will have different targets for the available memory. Like the CPU calculation, if a group of users requires additional memory resources, simply adjust the number of planned desktops to accommodate the additional resource requirements. For example, if there are 200 desktops to be virtualized, but each one needs 4 GB of memory instead of the 2 GB that the reference virtual desktop provides, plan for 400 virtual desktops.

IOPS

The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations.

Storage capacity

The storage capacity requirement for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops in this solution rely on additional shared storage for user profile data and user documents. This requirement is optional and is met by adding storage hardware defined in the solution. It can also be met by using existing file shares.

Determining the equivalent reference virtual desktops

With all of the resources defined, determine the number of equivalent reference virtual desktops by using the relationships indicated in Table 11. Round all values up to the closest whole number.

Table 11. Reference virtual desktop resources

Resource   Value for reference virtual desktop   Relationship between requirements and equivalent reference virtual desktops
CPU        1                                     Equivalent reference virtual desktops = resource requirements
Memory     2                                     Equivalent reference virtual desktops = (resource requirements)/2
IOPS       10                                    Equivalent reference virtual desktops = (resource requirements)/10

For example, the heavy user type in Table 10 requires two virtual CPUs, 12 IOPS, and 8 GB of memory for each desktop. This converts to two reference virtual desktops of CPU, four reference virtual desktops of memory, and two reference virtual desktops of IOPS. The number of reference virtual desktops required for each user type then equals the maximum required for an individual resource. For example, the number of equivalent reference virtual desktops for the heavy user type in Table 10 is four, as this number meets all resource requirements—IOPS, vCPU, and memory.
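The conversion just described can be sketched in a few lines of Python (a hypothetical helper, not part of any EMC tooling); the per-resource divisors come from Table 11 and the user types from Table 10:

```python
import math

# Per-resource values of one reference virtual desktop (Table 11)
REFERENCE = {"vcpus": 1, "memory_gb": 2, "iops": 10}

def equivalent_reference_desktops(vcpus, memory_gb, iops):
    # Convert each resource separately, rounding up, then take the
    # maximum: that many reference desktops meets every requirement.
    return max(
        math.ceil(vcpus / REFERENCE["vcpus"]),
        math.ceil(memory_gb / REFERENCE["memory_gb"]),
        math.ceil(iops / REFERENCE["iops"]),
    )

# User types from Table 10: (vCPUs, memory GB, IOPS, number of users)
user_types = {
    "heavy": (2, 8, 12, 200),
    "moderate": (2, 4, 8, 200),
    "typical": (1, 2, 8, 1200),
}

total = sum(equivalent_reference_desktops(v, m, i) * n
            for v, m, i, n in user_types.values())
print(total)  # 2400, matching the worksheet total in Table 10
```

Taking the maximum across resources, rather than the sum, is the key point: four reference desktops already supply the heavy user's CPU and IOPS as a side effect of meeting its memory requirement.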


To calculate the total number of reference desktops for a user type, multiply the number of equivalent reference virtual desktops for that user type by the number of users.

Determining the total reference virtual desktops

After the worksheet is completed for each user type that the customer wants to migrate into the virtual infrastructure, compute the total number of reference virtual desktops required in the resource pool by calculating the sum of the total reference virtual desktops for all user types. In the example in Table 10, the total is 2,400 virtual desktops.

Selecting a reference architecture

This VSPEX end-user computing reference architecture supports two separate points of scale with two XtremIO X-Brick configurations:

• A Starter X-Brick, which was used to host 1,250 full-clone or 1,750 linked-clone virtual desktops
• A full X-Brick, which was used to host 2,500 full-clone or 3,500 linked-clone virtual desktops

The total value for reference virtual desktops from the completed Customer Sizing Worksheet can be used to verify that this reference architecture is adequate for the customer requirements. In the example in Table 10, the customer requires 2,400 virtual desktops from the pool. Therefore, this reference architecture provides sufficient resources for current needs as well as some room for growth. However, there might be other factors to consider when verifying that this reference architecture will perform as intended. For example:

• Concurrency—The reference workload used to validate this solution assumes that all desktop users will be active at all times. We tested the 2,500-desktop reference architecture with 2,500 desktops, all generating workloads in parallel, all booted at the same time, and so on. If the customer expects to have 2,500 users but only 50 percent of them are logged on at any given time, due to time zone differences or alternate shifts, the reference architecture might be able to support additional desktops.

• Heavier desktop workloads—The reference workload is considered a typical office worker load. However, some customers' users might have a more active profile. If a company has 2,500 users and, due to custom corporate applications, each user generates 50 predominantly write IOPS, as compared to the 10 IOPS used in the reference workload, the customer needs 125,000 IOPS (2,500 users x 50 IOPS per desktop). This configuration would be underpowered because the proposed I/O load is greater than the array maximum of 100,000 write IOPS. The company would need to deploy an additional X-Brick, or reduce the I/O load or the total number of desktops, to ensure that the storage array performs as required.
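The headroom check in that example is simple arithmetic. A minimal sketch (the 100,000 write-IOPS ceiling is the figure quoted above for this configuration, not a general XtremIO limit, and linear scale-out is the behavior stated earlier in this chapter):

```python
import math

ARRAY_WRITE_IOPS = 100_000  # per X-Brick, as quoted above

def xbricks_needed(users, write_iops_per_desktop):
    # Total concurrent write IOPS, assuming all users active at once,
    # divided across X-Bricks that scale performance linearly
    total = users * write_iops_per_desktop
    return total, math.ceil(total / ARRAY_WRITE_IOPS)

total, bricks = xbricks_needed(2500, 50)
print(total, bricks)  # 125000 IOPS -> 2 X-Bricks
```

The same function confirms that the reference workload of 10 IOPS per desktop fits comfortably within a single X-Brick.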


Fine tuning hardware resources

In most cases, the Customer Sizing Worksheet will suggest a reference architecture adequate for the customer's needs. However, in some cases you might want to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this document, but the following sections can help you further customize your solution at this point.

Storage resources

This solution has validated two different XtremIO X-Brick configurations: a Starter X-Brick to support either 1,250 full-clone or 1,750 linked-clone virtual desktops, and an X-Brick to support either 2,500 full-clone or 3,500 linked-clone virtual desktops. XtremIO X-Bricks are validated to support a higher number of desktops (both full-clone and linked-clone); the VSPEX validated numbers apply only to the solution described here. The XtremIO array requires no tuning, and the number of SSDs available in the array is fixed. Use the VSPEX Sizing Tool or Customer Sizing Worksheet to verify that the XtremIO array can provide the necessary levels of capacity and performance.

Server resources

For the server resources in the solution, it is possible to customize the hardware resources more effectively. To do this, first add the resource requirements for the server components, as shown in Table 12. Note the addition of the Total CPU resources and Total memory resources columns to the worksheet.

Table 12. Server resource component totals

User type        vCPUs   Memory (GB)   Number of users   Total CPU resources   Total memory resources
Heavy users      2       8             200               400                   1,600
Moderate users   2       4             200               400                   800
Typical users    1       2             1,200             1,200                 2,400
Total                                                    2,000                 4,800

The example in Table 12 requires 2,000 vCPUs and 4,800 GB of memory. The reference architectures assume five desktops per physical processor core and no memory over-provisioning, which, in this example, converts to 500 physical processor cores and 4,800 GB of memory. Use these calculations to more accurately determine the total server resources required.

Note: Keep high availability requirements in mind when customizing the resource pool hardware.
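The column totals in Table 12 are straightforward to reproduce; the sketch below computes them (the desktops-per-core ratio is the reference architecture assumption stated above, applied separately):

```python
user_groups = [
    # (user type, vCPUs per desktop, memory GB per desktop, users)
    ("heavy", 2, 8, 200),
    ("moderate", 2, 4, 200),
    ("typical", 1, 2, 1200),
]

total_vcpus = sum(v * n for _, v, _, n in user_groups)
total_memory_gb = sum(m * n for _, _, m, n in user_groups)

print(total_vcpus, total_memory_gb)  # 2000 4800, as in Table 12

# The guide then applies its five-desktops-per-core assumption and no
# memory over-provisioning to translate these totals into physical
# processor cores and RAM for the server layer.
```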

Summary

The requirements stated in the solution are what EMC considers the minimum set of resources to handle the workloads based on the stated definition of a reference virtual desktop. In any customer implementation, the system load will vary over time as users interact with the system. If the customer virtual desktops differ significantly from the reference definition and vary in the same resource group, you might need to add more of that resource to the system.

Chapter 5

Solution Design Considerations and Best Practices

This chapter presents the following topics:

 Overview
 Server design considerations
 Network design considerations
 Storage design considerations
 High availability and failover
 Validation test profile
 Antivirus and antimalware platform profile
 VMware vRealize Operations Manager for Horizon View platform profile
 VSPEX for VMware Workspace solution


Overview

This chapter describes best practices and considerations for designing the VSPEX end-user computing solution. For more information on deployment best practices for various components of the solution, refer to the vendor-specific documentation.

Server design considerations

EMC designs VSPEX solutions to run on a wide variety of server platforms. VSPEX defines the minimum CPU and memory resources required, but not a specific server type or configuration. The customer can use any server platform and configuration that meets or exceeds the minimum requirements. For example, Figure 8 shows how a customer could implement the same server requirements by using either white-box servers or high-end servers. Both implementations achieve the required number of processor cores and amount of RAM, but the number and type of servers differ.

Figure 8.  Compute layer flexibility

The choice of a server platform is not only based on the environment's technical requirements, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and other factors. For example:

 From a virtualization perspective, if a system's workload is understood, features like memory ballooning and transparent page sharing can reduce the aggregate memory requirement.

 If the virtual machine pool does not have a high level of peak or concurrent usage, you can reduce the number of vCPUs. Conversely, if the deployed applications are highly computational in nature, you can increase the number of vCPUs and amount of memory.

The server infrastructure must meet the following minimum requirements:

 Sufficient CPU cores and memory to support the required number and types of virtual machines

 Sufficient network connections to enable redundant connectivity to the system switches

 Sufficient excess capacity to enable the environment to withstand a server failure and failover

Server best practices

For this solution, EMC recommends that you consider the following best practices for the server layer:

 Use identical server units. Use identical or at least compatible servers, which will ensure that they share similar hardware configurations. VSPEX implements hypervisor-level high-availability technologies that might require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

 Use recent processor technologies. For new deployments, use recent revisions of common processor technologies. It is assumed that these will perform as well as, or better than, the systems used to validate the solution.

 Implement high availability to accommodate single server failures. Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate single server failures. This will also allow you to implement minimal-downtime upgrades. High availability and failover provides further details.

  Note: When implementing hypervisor-layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

 Monitor resource utilization and adapt as needed. For example, the reference virtual desktop and required hardware resources in this solution assume that there are no more than five virtual CPUs for each physical processor core (5:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual desktops; however, this ratio might not be appropriate in all cases. EMC recommends monitoring CPU utilization at the hypervisor layer to determine if more resources are required, and then adding them as needed.

Validated server hardware

Table 13 identifies the server hardware and the configurations validated in this solution.

Table 13.  Server hardware

Servers for virtual desktops   Configuration

CPU        1 vCPU per desktop (5 desktops per core)
           250 cores across all servers for 1,250 virtual desktops
           350 cores across all servers for 1,750 virtual desktops
           500 cores across all servers for 2,500 virtual desktops
           700 cores across all servers for 3,500 virtual desktops

Memory     2 GB RAM per virtual machine
           2.5 TB RAM across all servers for 1,250 virtual desktops
           3.5 TB RAM across all servers for 1,750 virtual desktops
           5 TB RAM across all servers for 2,500 virtual desktops
           7 TB RAM across all servers for 3,500 virtual desktops
           2 GB RAM reservation per vSphere host

Network    3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server

Notes:

 The 5:1 vCPU to physical core ratio applies to the reference workload defined in this Design Guide. When deploying VMware vShield Endpoint or Avamar, add CPU and RAM as needed for components that are CPU or RAM intensive. Refer to the relevant product documentation for information on vShield Endpoint and Avamar resource requirements.

 In addition to the servers you deploy to meet the minimum requirements in Table 13, the infrastructure requires one more server to support VMware vSphere HA.
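The note above (one extra server for vSphere HA) can be sketched as a quick capacity check. The required totals below come from Table 13; the per-server core and RAM figures are hypothetical examples, not part of the validated configuration.

```python
import math

# N+1 host-count sketch for the 1,250-desktop configuration in Table 13.
# Required totals come from Table 13; the per-server specs are assumed examples.
required_cores = 250    # 250 cores across all servers for 1,250 desktops
required_ram_gb = 2500  # 2.5 TB RAM across all servers

cores_per_server = 24   # hypothetical 2-socket, 12-core server
ram_gb_per_server = 256 # hypothetical per-server RAM

servers_for_cpu = math.ceil(required_cores / cores_per_server)
servers_for_ram = math.ceil(required_ram_gb / ram_gb_per_server)

# Size for the larger of the two constraints, then add one server for vSphere HA.
servers_needed = max(servers_for_cpu, servers_for_ram) + 1
print(servers_for_cpu, servers_for_ram, servers_needed)  # 11 10 12
```

With these assumed server specs, CPU is the binding constraint; a different server model simply changes which constraint dominates.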


vSphere memory virtualization

vSphere has a number of advanced features that help optimize performance and overall use of resources. This section describes the key features for memory management and considerations for using them with your VSPEX solution. Figure 9 illustrates how a single hypervisor consumes memory from a pool of resources. vSphere memory management features such as memory overcommitment, transparent page sharing, and memory ballooning can reduce total memory usage and increase consolidation ratios in the hypervisor.

Figure 9.  Hypervisor memory consumption

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources such as memory to provide resource isolation across multiple virtual machines, while avoiding resource exhaustion. In cases where advanced processors (such as Intel processors with EPT support) are deployed, memory abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor using shadow page tables.


vSphere provides the following memory management techniques:

 Memory over-commitment. Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vSphere host. Using sophisticated techniques such as ballooning and transparent page sharing, vSphere is able to handle memory over-commitment without any performance degradation. However, if more memory is used than is present on the server, vSphere might resort to swapping portions of a virtual machine's memory.

 Non-Uniform Memory Access (NUMA). vSphere uses a NUMA load-balancer to assign a home node to a virtual machine. Because memory for the virtual machine is allocated from the home node, memory access is local and provides the best possible performance. Applications that do not directly support NUMA also benefit from this feature.

 Transparent page sharing. Virtual machines running similar operating systems and applications typically have identical sets of memory content. Page sharing allows the hypervisor to reclaim the redundant copies and return them to the host's free memory pool for reuse.

 Memory compression. vSphere uses memory compression to store pages that would otherwise be swapped out to disk through host swapping in a compression cache located in main memory.

 Memory ballooning. This relieves host resource exhaustion by allocating free pages from the virtual machine to the host for reuse, with little to no impact on the application's performance.

 Hypervisor swapping. This causes the host to force arbitrary virtual machine pages out to disk.

For further information, refer to the VMware white paper Understanding Memory Resource Management in VMware vSphere 5.0.

Memory configuration guidelines

Proper sizing and configuration of the solution requires care when configuring server memory. This section provides guidelines for allocating memory to virtual machines and takes into account vSphere overhead and the virtual machine memory settings.

vSphere memory overhead

There is some memory space overhead associated with virtualizing memory resources. This overhead has two components:

 The system overhead for the VMkernel

 Additional overhead for each virtual machine


The overhead for the VMkernel is fixed, whereas the amount of additional memory for each virtual machine depends on the number of virtual CPUs and the amount of memory configured for the guest OS.

Virtual machine memory settings

Figure 10 shows the memory settings parameters in a virtual machine, including:

 Configured memory—Physical memory allocated to the virtual machine at the time of creation

 Reserved memory—Memory that is guaranteed to the virtual machine

 Touched memory—Memory that is active or in use by the virtual machine

 Swappable—Memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines, using ballooning, compression, or swapping

Figure 10. Virtual machine memory settings

EMC recommends that you follow these best practices for virtual machine memory settings:

 Do not disable the default memory reclamation techniques. These lightweight processes have minimal impact on workloads.

 Intelligently size memory allocation for virtual machines. Over-allocation wastes resources, while under-allocation impacts performance and can affect other virtual machines sharing resources. Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases, when hypervisor swapping occurs, virtual machine performance is adversely affected. Having performance baselines of your virtual machine workloads assists in this process.


Allocating memory to virtual machines

Server capacity is required for two purposes in the solution:

 To support the required infrastructure services such as authentication/authorization, DNS, and database. For further details on the hosting requirements for these infrastructure services, refer to the VSPEX Private Cloud Proven Infrastructure Guide listed in Essential Reading.

 To support the virtualized desktop infrastructure. In this solution, each virtual desktop has 2 GB of memory, as defined in Table 5 on page 38. The solution is validated with statically assigned memory and no over-commitment of memory resources. If memory over-commitment is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.
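Because desktop memory is statically assigned with no over-commitment, the desktop memory total can be checked directly. The per-desktop allocation and per-host reservation below come from this guide; the host count is a hypothetical example for illustration.

```python
# Desktop memory total for the 1,250-desktop configuration: static 2 GB per
# desktop with no over-commitment, plus a 2 GB reservation per vSphere host.
desktops = 1250
ram_per_desktop_gb = 2       # per-desktop allocation (this guide)
hosts = 11                   # hypothetical host count, for illustration only
reservation_per_host_gb = 2  # per-host reservation (Table 13)

desktop_ram_gb = desktops * ram_per_desktop_gb
total_ram_gb = desktop_ram_gb + hosts * reservation_per_host_gb
print(desktop_ram_gb, total_ram_gb)  # 2500 2522
```

The 2,500 GB desktop total matches the 2.5 TB figure for 1,250 desktops in Table 13; the host reservations sit on top of that.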

Network design considerations

VSPEX solutions define minimum network requirements and provide general guidance on network architecture while allowing the customer to choose any network hardware that meets the requirements. If additional bandwidth is needed, add capability to both the storage array and the hypervisor host to meet the requirements. The options for network connectivity to the server depend on the type of server.

For reference purposes in the validated environment, EMC assumes that each virtual desktop generates 10 IOPS with an average size of 4 KB. This means that each virtual desktop is generating at least 40 KB/s of traffic on the storage network. For an environment rated for 1,250 virtual desktops, this means a minimum of approximately 50 MB/s, which is well within the bounds of modern networks. However, this does not consider other operations. Additional bandwidth is needed for:

 User network traffic

 Virtual desktop migration

 Administrative and management operations

The requirements for each of these operations depend on how the environment is used. It is not practical to provide concrete numbers in this context. However, the networks described for the reference architectures in this solution should be sufficient to handle average workloads for these operations.

Regardless of the network traffic requirements, always have at least two physical network connections that are shared by a logical network to ensure that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth is sufficient to accommodate the full workload in the event of a failure.
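The steady-state storage-traffic estimate above works out as follows; the sketch simply restates the arithmetic from this section using its stated figures (10 IOPS per desktop, 4 KB average I/O size, 1,250 desktops).

```python
# Minimum steady-state storage-network traffic estimate from this section.
iops_per_desktop = 10
io_size_kb = 4
desktops = 1250

per_desktop_kbps = iops_per_desktop * io_size_kb  # 40 KB/s per desktop
total_mbps = per_desktop_kbps * desktops / 1000   # aggregate in MB/s
print(per_desktop_kbps, total_mbps)  # 40 50.0
```

Scaling `desktops` to 3,500 gives about 140 MB/s, still comfortably within a redundant 10 GbE design, before accounting for user traffic, migration, and management operations.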


The network infrastructure must meet the following minimum requirements:

 Redundant network links for the hosts, switches, and storage

 Support for link aggregation

 Traffic isolation based on industry best practices

Validated network hardware

Table 14 identifies the hardware resources for the network infrastructure validated in this solution.

Table 14.  Minimum switching capacity

Storage type                           Configuration

XtremIO for virtual desktop storage     2 physical switches
                                        2 x FC/FCoE or 2 x 10 GbE ports per VMware vSphere server, for storage network
                                        2 x FC or 2 x 10 GbE ports per SC, for desktop data

VNX for optional user data storage      2 physical switches
                                        2 x 10 GbE ports per vSphere server
                                        1 x 1 GbE port per Control Station for management
                                        2 x 10 GbE ports per Data Mover for data

Isilon for optional user data storage   2 physical switches
                                        2 x 10 GbE ports per vSphere server
                                        1 x 1 GbE port per node for management
                                        2 x 10 GbE ports per node for data

Notes:

 The solution can use 1 GbE network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

 This configuration assumes that the VSPEX implementation is using rack-mounted servers. For implementations based on blade servers, ensure that similar bandwidth and high-availability capabilities are available.

Network configuration guidelines

This section provides guidelines for configuring a redundant, highly available network. The guidelines take into account network redundancy, link aggregation, traffic isolation, and jumbo frames. The configuration examples are for IP-based networks, but similar best practices and design principles apply to FC storage networks.

Network redundancy

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and is required regardless of whether the network infrastructure for the solution already exists or is deployed with other solution components.


Figure 11 provides an example of a highly available storage network topology.

Figure 11. Highly-available XtremIO FC network design example

Figure 12 shows a highly available network setup example for user data with a VNX family storage array. The same high-availability principle applies to an Isilon configuration as well. In either case, each node has two links to switches.


Figure 12. Highly-available VNX Ethernet network design example

Link aggregation

VNX and Isilon provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses.³ In this solution, we configured the Link Aggregation Control Protocol (LACP) on the VNX or Isilon array to combine multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. We distributed all network traffic across the active links.

Traffic isolation

This solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

³ A link aggregation resembles an Ethernet channel but uses the LACP IEEE 802.3ad standard. This standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex.


VLANs segregate network traffic to enable traffic of different types to move over isolated networks. In some cases, physical isolation might be required for regulatory or policy compliance reasons; in many cases, logical isolation using VLANs is sufficient. This solution requires a minimum of two VLANs: client access and management. Figure 13 shows the design of these VLANs with VNX. An Isilon configuration shares the same design principles.

Figure 13. Required networks

The client access network is for users of the system, or clients, to communicate with the infrastructure, including the virtual machines and the CIFS shares hosted by the VNX or Isilon array. The management network provides administrators with dedicated access to the management connections on the storage array, network switches, and hosts. Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. You can implement additional networks but they are not required. Note: The figure demonstrates the network connectivity requirements for a VNX array using 10 GbE network connections. Create a similar topology when using 1 GbE network connections.


Storage design considerations

Overview

XtremIO offers inline deduplication, inline compression, data-at-rest security features, and native thin provisioning. Storage planning requires that you determine the:

 Volume size

 Number of volumes

 Performance requirements

Each volume must be greater than the logical space required by the server. An XtremIO cluster can fulfill the solution's performance requirements.

Validated storage hardware and configuration

vSphere supports more than one method of using storage when hosting virtual machines. We tested the configurations described in Figure 3 using FC, and the storage layouts described adhere to all current best practices. A customer or architect with the necessary training and background can make modifications based on their understanding of the system's usage and load if required.

Table 15.  Storage hardware

Purpose                         Configuration

XtremIO shared storage          Common:
                                 2 x FC and 2 x 10 GbE interfaces per storage controller
                                 1 x 1 GbE interface per storage controller for management
                                For 1,250 full-clone or 1,750 linked-clone virtual desktops:
                                 Starter X-Brick configuration with 13 x 400 GB flash drives
                                For 2,500 full-clone or 3,500 linked-clone virtual desktops:
                                 X-Brick configuration with 25 x 400 GB flash drives

Optional: Isilon shared         Only required if deploying an Isilon cluster to host user data.
storage disk capacity            3 x X410 nodes
                                 2 x 800 GB EFD
                                 34 x 1 TB SATA

Optional: VNX shared            For 1,250 full-clone virtual desktops:
storage disk capacity            2 x 200 GB EFD
                                 16 x 2 TB NL-SAS
                                For 1,750 linked-clone virtual desktops:
                                 2 x 200 GB EFD
                                 32 x 2 TB NL-SAS
                                For 2,500 full-clone virtual desktops:
                                 4 x 200 GB EFD
                                 40 x 2 TB NL-SAS
                                For 3,500 linked-clone virtual desktops:
                                 4 x 200 GB EFD
                                 48 x 2 TB NL-SAS

vSphere storage virtualization

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.

VMware vSphere provides host-level storage virtualization. It virtualizes the physical storage and presents the virtualized storage to the virtual machine. A virtual machine stores its OS and all other files related to the virtual machine activities in a virtual disk. The virtual disk can be one or multiple files. VMware uses a virtual SCSI controller to present the virtual disk to the guest OS running inside the virtual machine.

The virtual disk resides in either a VMware Virtual Machine File System (VMFS) datastore or an NFS datastore. An additional option, raw device mapping (RDM), allows the virtual infrastructure to connect a physical device directly to a virtual machine.

Figure 14 shows the various VMware virtual disk types, including:

 VMFS—A cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or network storage.

 Raw device mapping—Allows a virtual machine direct access to a volume on the physical storage and uses either FC or iSCSI.


Figure 14. VMware virtual disk types

High availability and failover

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with minimal impact to business operations. This section describes the high availability features of the solution.

Virtualization layer

EMC recommends configuring high availability in the virtualization layer and automatically allowing the hypervisor to restart virtual machines that fail. Figure 15 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 15. High availability at the virtualization layer

By implementing high availability at the virtualization layer, the infrastructure attempts to keep as many services running as possible, even in the event of a hardware failure.

Compute layer

While the choice of servers to implement in the compute layer is flexible, it is best to use enterprise-class servers designed for data centers. This type of server has redundant power supplies, as shown in Figure 16. Connect these to separate power distribution units (PDUs) in accordance with your server vendor's best practices.


Figure 16. Redundant power supplies

We also recommend that you configure high availability in the virtualization layer. This means that you must configure the compute layer with enough resources to ensure that the total number of available resources meets the needs of the environment, even with a server failure. Figure 15 demonstrates this recommendation.

Network layer

Both Isilon and VNX family storage arrays provide protection against network connection failures at the array. Each vSphere host has multiple connections to user and storage Ethernet networks to guard against link failures, as illustrated in the VNX-based example shown in Figure 17. Spread these connections across multiple Ethernet switches. This principle of network high availability also applies to Isilon.

Figure 17. VNX Ethernet network layer high availability

Having no single points of failure in the network layer ensures that the compute layer will be able to access storage and communicate with users even if a component fails.


Storage layer

The XtremIO array is designed for five-nines availability by using redundant components throughout the array, as shown in Figure 18. All of the array components are capable of continued operation in case of hardware failure. XtremIO Data Protection (XDP) delivers the protection of RAID 6 while exceeding the performance of RAID 1 and the capacity utilization of RAID 5, protecting against data loss due to drive failures.

Figure 18. XtremIO series high availability

EMC storage arrays are highly available by default. Use the installation guides to ensure that there are no single points of failure.


Validation test profile

Profile characteristics

Table 16 shows the desktop definition and storage configuration parameters that we validated with the environment profile.

Table 16.  Validated environment profile

Profile characteristic                            Value
XtremIO                                           3.0.2
Hypervisor                                        vSphere 5.5 Update 2
Virtual desktop OS                                Windows 7 Enterprise (32-bit) or Windows 8.1 Enterprise (32-bit)
vCPU per virtual desktop                          1
Number of virtual desktops per CPU core           5
RAM per virtual desktop                           2 GB
Desktop provisioning method                       Full clones or linked clones
Average IOPS per virtual desktop at steady state  10 IOPS
Internet Explorer                                 10 for Windows 7 or 11 for Windows 8.1
Office                                            2010
Adobe Reader                                      XI
Adobe Flash Player                                11 ActiveX
Doro PDF printer                                  1.8
Workload generator                                Login VSI
Workload type                                     Office worker
Number of datastores to store virtual desktops    10 for 1,250 virtual desktops
                                                  14 for 1,750 virtual desktops
                                                  20 for 2,500 virtual desktops
                                                  28 for 3,500 virtual desktops
Number of virtual desktops per datastore          125
Disk and RAID type for XtremIO virtual            400 GB eMLC SSD drives with XtremIO proprietary
desktop datastores                                data protection (XDP), which delivers RAID 6-like
                                                  data protection with performance better than RAID 10
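The datastore counts in the validated profile follow directly from the 125-desktops-per-datastore figure; a minimal sketch:

```python
import math

# Datastore count per the validated profile: 125 virtual desktops per datastore.
DESKTOPS_PER_DATASTORE = 125

def datastores_needed(desktops: int) -> int:
    """Return the number of datastores required for a given desktop count."""
    return math.ceil(desktops / DESKTOPS_PER_DATASTORE)

counts = {n: datastores_needed(n) for n in (1250, 1750, 2500, 3500)}
print(counts)  # {1250: 10, 1750: 14, 2500: 20, 3500: 28}
```

The same helper gives the datastore count for intermediate desktop populations, rounding up so no datastore exceeds the validated density.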


Antivirus and antimalware platform profile

Platform characteristics

Table 17 shows how we sized the solution based on the VMware vShield Endpoint platform requirements.

Table 17.  Antivirus platform characteristics

Platform component                 Technical information

VMware vShield Manager appliance   Manages the vShield Endpoint service installed on each vSphere host; 1 vCPU, 3 GB RAM, and 8 GB hard disk space

VMware vShield Endpoint service    This service is installed on each vSphere host that hosts desktops and uses up to 512 MB of RAM on the host.

VMware Tools vShield Endpoint      A component of the VMware Tools suite that enables integration with the vSphere host vShield Endpoint
component                          service. The vShield Endpoint component is installed as an optional component of the VMware Tools software package and should be installed on the master virtual desktop image.

vShield Endpoint third-party       A third-party plug-in and associated components are required to complete the vShield Endpoint solution.
security plug-in                   Requirements vary based on individual vendor specifications. Refer to the vendor documentation for specific details.

vShield architecture

The individual components of the VMware vShield Endpoint platform and the vShield third-party security plug-in each have specific CPU, RAM, and disk space requirements. The resource requirements vary based on factors such as the number of events being logged, log retention needs, the number of desktops being monitored, and the number of desktops present on each vSphere host.


VMware vRealize Operations Manager for Horizon View platform profile

Platform characteristics

Table 18 shows how we sized the solution stack based on the VMware vRealize Operations Manager for Horizon View platform requirements.

Table 18.  Horizon View platform characteristics

Platform component                 Technical information

vRealize Operations Manager vApp   The vApp consists of a UI virtual appliance and an Analytics virtual appliance.
                                   For up to 1,750 virtual desktops:
                                    UI appliance requirements: 4 vCPU, 11 GB RAM, 200 GB hard disk space
                                    Analytics appliance requirements: 4 vCPU, 14 GB RAM, 1.6 TB hard disk space, and 3,000 IOPS
                                   For up to 3,500 virtual desktops:
                                    UI appliance requirements: 8 vCPU, 13 GB RAM, 400 GB hard disk space
                                    Analytics appliance requirements: 8 vCPU, 21 GB RAM, 3.2 TB hard disk space, and 6,000 IOPS

vRealize Operations Manager for Horizon View architecture

The individual components of vRealize Operations Manager for Horizon View have specific CPU, RAM, and disk space requirements. The resource requirements vary based on the number of desktops being monitored. The numbers provided in Table 18 assume that a maximum of 1,750 or 3,500 desktops will be monitored.


VSPEX for VMware Workspace solution With some added infrastructure, the VSPEX End-User Computing for Horizon View solution supports VMware Workspace deployments. It requires Active Directory and Domain Name System (DNS). Key VMware Workspace components

VMware Workspace is a vApp, distributed as an Open Virtual Appliance (.OVA) file, which can be deployed through vCenter. The OVA file contains the virtual appliances (VAs) shown in the basic VMware Workspace architecture in Figure 19.

Figure 19. VMware Workspace architecture layout

Table 19 describes the function of each virtual appliance.


Table 19. OVA virtual appliances

Configurator (configurator-va): The Configurator appliance provides the central wizard UI and distributes settings across all other appliances in the vApp. It provides central control of network, gateway, vCenter, and SMTP settings.

Connector (connector-va): The Connector appliance provides user authentication services; it can also bind with an Active Directory and synchronize according to a defined schedule.

Manager (service-va): The Manager appliance provides the web-based VMware Workspace administrator user interface, which controls the application catalog, user entitlements, workspace groups, and reporting service.

Gateway (gateway-va): The Gateway appliance enables single user-facing domain access to VMware Workspace. As the central aggregation point for all user connections, the Gateway routes requests to the appropriate destination and proxies requests on behalf of user connections.

VSPEX for VMware Workspace architecture

Figure 20 shows the logical architecture of the VSPEX for VMware Workspace solution.


Figure 20. VSPEX for VMware Workspace solution: logical architecture

The figure shows desktop users (PCoIP clients) connecting to the VMware vSphere virtual desktop cluster; a vSphere infrastructure cluster hosting View Manager Servers 1 through 3, SQL Server, Active Directory/DNS/DHCP, and vCenter Server; the VMware Workspace virtual appliances (configurator-va, connector-va, service-va, data-va, and gateway-va); EMC Isilon and EMC VNX storage; EMC Avamar data protection; and a 10 GbE IP network with 8 Gb FC and 10 Gb iSCSI storage connectivity.

The customer is free to select any server and networking hardware that meets or exceeds the minimum requirements, while the recommended storage delivers a highly available architecture for a VMware Workspace deployment.

Server requirements

Table 20 details the minimum supported hardware requirements for each virtual appliance in the VMware Workspace vApp.

Table 20. Minimum hardware resources for VMware Workspace

vApp              vCPU    Memory (GB)    Disk space (GB)
Configurator-va    1       1              5
Service-va         6       8              36
Connector-va       2       4              12
Gateway-va         6       32             9

Note: For high availability during failure scenarios, it might be necessary to restart virtual machines on different hardware, so those physical servers must have sufficient resources available. Follow the specific recommendations in Server design considerations to enable this functionality.
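When reserving spare capacity for such restarts, the per-appliance minimums above can simply be totaled. The following sketch reproduces the Table 20 values and sums them; the function name is illustrative, not part of any VMware tooling.

```python
# Illustrative sketch: total the Table 20 minimums so you can verify that a
# vSphere host (or the spare capacity kept for HA restarts) can hold the
# entire VMware Workspace vApp.
WORKSPACE_VAPP = {
    # appliance: (vCPU, memory GB, disk space GB) -- values from Table 20
    "configurator-va": (1, 1, 5),
    "service-va": (6, 8, 36),
    "connector-va": (2, 4, 12),
    "gateway-va": (6, 32, 9),
}

def vapp_totals(vapp):
    """Return the aggregate (vCPU, memory GB, disk GB) across all appliances."""
    vcpu = sum(r[0] for r in vapp.values())
    mem = sum(r[1] for r in vapp.values())
    disk = sum(r[2] for r in vapp.values())
    return vcpu, mem, disk

vcpu, mem_gb, disk_gb = vapp_totals(WORKSPACE_VAPP)
# -> 15 vCPU, 45 GB memory, 62 GB disk across the four appliances
```

A host kept free for failover would therefore need at least 15 vCPU, 45 GB of memory, and 62 GB of disk to restart the whole vApp.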


Networking requirements

The networking components can be implemented using 1 GbE or 10 GbE IP networks, provided that bandwidth and redundancy are sufficient to meet the minimum requirements of the solution.


Chapter 6

Reference Documentation

This chapter presents the following topics:

EMC documentation ................................................................................................ 74
Other documentation .............................................................................................. 74


EMC documentation

The following documents, available on the EMC Online Support or EMC.com websites, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.

• EMC XtremIO Storage Array User Guide
• EMC XtremIO Storage Array Operations Guide
• EMC XtremIO Storage Array Software Installation and Upgrade Guide
• EMC XtremIO Storage Array Hardware Installation and Upgrade Guide
• EMC XtremIO Storage Array Security Configuration Guide
• EMC XtremIO Storage Array Pre-Installation Checklist
• EMC XtremIO Storage Array Site Preparation Guide
• EMC VNX5400 Unified Installation Guide
• EMC VNX5600 Unified Installation Guide
• EMC VSI for VMware vSphere Web Client Product Guide
• EMC VNX Installation Assistant for File/Unified Worksheet
• EMC VNX FAST Cache: A Detailed Review White Paper
• Deploying Microsoft Windows 8 Virtual Desktops Applied Best Practices Guide
• EMC PowerPath/VE for VMware vSphere Installation and Administration Guide
• EMC PowerPath Viewer Installation and Administration Guide
• EMC VNX Unified Best Practices for Performance Applied Best Practices Guide

Other documentation

The following documents, available on the VMware website, provide additional and relevant information:

• VMware vSphere Installation and Setup Guide
• VMware vSphere Networking Guide
• VMware vSphere Resource Management Guide
• VMware vSphere Storage Guide
• VMware vSphere Virtual Machine Administration Guide
• VMware vCenter Server and Host Management Guide
• Installing and Administering VMware vSphere Update Manager
• Preparing the Update Manager Database
• Preparing vCenter Server Databases
• Understanding Memory Resource Management in VMware vSphere 5


• VMware Horizon View Administration Guide
• VMware Horizon View Architecture Planning Guide
• VMware Horizon View Installation Guide
• VMware Horizon View Integration Guide
• VMware Horizon View Profile Migration Guide
• VMware Horizon View Security Guide
• VMware Horizon View Upgrades Guide
• VMware Horizon View 6.0 Release Notes
• VMware Horizon View Optimization Guide for Windows 7 and Windows 8 White Paper
• Installing and Configuring VMware Workspace Portal
• Upgrading VMware Workspace Portal
• VMware Workspace Portal Administrator’s Guide
• VMware Workspace Portal User Guide
• VMware vRealize Operations Manager Administration Guide
• VMware vRealize Operations Manager for View Installation Guide
• VMware vRealize Operations Manager Installation Guide
• VMware vRealize Operations Installation and Configuration Guide for Windows and Linux
• VMware vShield Administration Guide
• VMware vShield Quick Start Guide


Appendix A

Customer Sizing Worksheet

This appendix presents the following topic:

Customer Sizing Worksheet for end-user computing ............................................... 77


Customer Sizing Worksheet for end-user computing

Before selecting a reference architecture on which to base a customer solution, use the Customer Sizing Worksheet to gather information about the customer’s business requirements and to calculate the required resources. Table 21 shows a blank worksheet. A standalone copy of the worksheet is attached to this Design Guide in Microsoft Word format to enable you to easily print a copy.

Table 21. Customer sizing worksheet

User type   Requirement                              vCPUs   Memory (GB)   IOPS   Equivalent reference virtual desktops   No. of users   Total reference desktops
            Resource requirements                     ---        ---        ---
            Equivalent reference virtual desktops                                  ---                                     ---            ---
            Resource requirements                     ---        ---        ---
            Equivalent reference virtual desktops                                  ---                                     ---            ---
            (repeat the two rows for each additional user type)
Total                                                                                                                                     ---
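The worksheet arithmetic can be sketched in a few lines: each user type is converted to equivalent reference virtual desktops by rounding each resource up to reference-desktop units and taking the largest ratio, then multiplying by the number of users. The reference profile below (1 vCPU, 2 GB RAM, 10 IOPS) is a placeholder assumption for illustration only; substitute the reference virtual desktop defined earlier in this guide.

```python
import math

# Assumed reference virtual desktop profile (placeholder values; use the
# profile defined in this guide's sizing chapter).
REF = {"vcpus": 1, "memory_gb": 2, "iops": 10}

def equivalent_desktops(vcpus, memory_gb, iops, ref=REF):
    """Round each resource up to reference units and take the largest ratio."""
    return max(math.ceil(vcpus / ref["vcpus"]),
               math.ceil(memory_gb / ref["memory_gb"]),
               math.ceil(iops / ref["iops"]))

def total_reference_desktops(user_types):
    """user_types: list of (vcpus, memory_gb, iops, number_of_users)."""
    return sum(equivalent_desktops(v, m, i) * users
               for v, m, i, users in user_types)

# Example: a user type needing 2 vCPU, 8 GB RAM, and 20 IOPS counts as
# 4 reference desktops each (memory is the limiting resource), so 100
# such users require 400 reference desktops in total.
total = total_reference_desktops([(2, 8, 20, 100)])
```

The "Total reference desktops" value is then compared against the reference architectures to pick the smallest one that fits.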


To view and print the worksheet:

1. In Adobe Reader, open the Attachments panel in one of the following ways:
   • Select View > Show/Hide > Navigation Panes > Attachments.
   • Click the Attachments icon, as shown in Figure 21.

Figure 21. Printable customer sizing worksheet

2. Under Attachments, double-click the attached file to open and print the worksheet.

