Hitachi USP-V/VSP Best Practice Guide for HNAS Solutions

By Francisco Salinas (Global Services Engineering)

MK-92HNAS025-00

© 2011-2013 Hitachi, Ltd. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi, Ltd. Hitachi, Ltd., reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. This document contains the most current information available at the time of publication. When new or revised information becomes available, this entire document will be updated and distributed to all registered users. Some of the features described in this document might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Data Systems Corporation at https://portal.hds.com.

Notice: Hitachi, Ltd., products and services can be ordered only under the terms and conditions of the applicable Hitachi Data Systems Corporation agreements. The use of Hitachi, Ltd., products is governed by the terms of your agreements with Hitachi Data Systems Corporation.

Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries. Archivas, BlueArc, Dynamic Provisioning, Essential NAS Platform, HiCommand, HiTrack, ShadowImage, Tagmaserve, Tagmasoft, Tagmasolve, Tagmastore, TrueCopy, Universal Star Network, and Universal Storage Platform are registered trademarks of Hitachi Data Systems Corporation. AIX, AS/400, DB2, Domino, DS8000, Enterprise Storage Server, ESCON, FICON, FlashCopy, IBM, Lotus, OS/390, RS6000, S/390, System z9, System z10, Tivoli, VM/ESA, z/OS, z9, zSeries, z/VM, z/VSE are registered trademarks and DS6000, MVS, and z10 are trademarks of International Business Machines Corporation. All other trademarks, service marks, and company names in this document or website are properties of their respective owners. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.


Notice Hitachi Data Systems products and services can be ordered only under the terms and conditions of Hitachi Data Systems’ applicable agreements. The use of Hitachi Data Systems products is governed by the terms of your agreements with Hitachi Data Systems. This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/). Some parts of ADC use open source code from Network Appliance, Inc. and Traakan, Inc. Part of the software embedded in this product is gSOAP software. Portions created by gSOAP are copyright 2001-2009 Robert A. Van Engelen, Genivia Inc. All rights reserved. The software in this product was in part provided by Genivia Inc. and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the author be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage. The product described in this guide may be protected by one or more U.S. patents, foreign patents, or pending applications.

Notices and Disclaimer The performance data contained herein was obtained in a controlled isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While Hitachi Data Systems Corporation has reviewed each item for accuracy in a specific situation, there is no guarantee that the same results can be obtained elsewhere. All designs, specifications, statements, information and recommendations (collectively, "designs") in this manual are presented "AS IS," with all faults. Hitachi Data Systems Corporation and its suppliers disclaim all warranties, including without limitation, the warranty of merchantability, fitness for a particular purpose and non-infringement or arising from a course of dealing, usage or trade practice. In no event shall Hitachi Data Systems Corporation or its suppliers be liable for any indirect, special, consequential or incidental damages, including without limitation, lost profit or loss or damage to data arising out of the use or inability to use the designs, even if Hitachi Data Systems Corporation or its suppliers have been advised of the possibility of such damages. This document has been reviewed for accuracy as of the date of initial publication. Hitachi Data Systems Corporation may make improvements and/or changes in product and/or programs at any time without notice. No part of this document may be reproduced or transmitted without written approval from Hitachi Data Systems Corporation.


Notice of Export Controls Export of technical data contained in this document may require an export license from the United States government and/or the government of Japan. Contact the Hitachi Data Systems Legal Department for any export compliance questions.

Document Revision Level

Revision           Date          Description
MK-92HNAS025-00    March 2013    First publication

Contact Hitachi Data Systems
2845 Lafayette Street
Santa Clara, California 95050-2627
https://portal.hds.com
North America: 1-800-446-0744

Contributors

The information included in this document represents the expertise, feedback, and suggestions of a number of skilled practitioners. The author would like to recognize and sincerely thank the following contributors and reviewers of this document (listed alphabetically):

•  Bent Knudsen
•  Nathan King
•  Gary Mirfield
•  Gokula Rangarajan

Reference

Hitachi VSP Architecture Guide


Table of Contents

Intended audience
Overview
USP-V and VSP series configuration best practices
    Port I/O request limit (queue depth)
    General recommendations for USP-V/VSP systems
    Other recommendations
        Multi-LUN RAID groups
Universal Volume Manager best practices
    General recommendations
    Virtualizing AMS/HUS storage behind USP-V and VSP systems
    Recommendations for configuring virtualization settings for AMS/HUS storage
    VSP system option mode to improve sequential write I/O performance
    Avoiding LDEV carving of UVM external LUNs
    Preventing oversubscription of SATA on an AMS 2000 system
    Microprocessor sharing on a USP-V system
    External LUN size restrictions
    Hitachi Dynamic Provisioning and UVM
    Recommendations for creating multiple LUs per AMS/HUS RAID group
Direct and switch attach connectivity best practices
    Single node, direct-attached
    Two node cluster, direct-attached
    Two node cluster, USP-V switch-attached
    Two node cluster, VSP switch-attached
    Two node cluster, VSP performance configuration, switch-attached
Virtualizing an AMS 2500 behind a USP-V system

Intended audience

The intended audience for this guide is customers, authorized service providers, and Hitachi Data Systems (HDS) personnel.

Overview

The Hitachi Virtual Storage Platform (VSP) and Hitachi Universal Storage Platform V (USP-V) storage systems can be configured in numerous ways. When attaching Hitachi Network Attached Storage (HNAS) to these systems, you cannot treat the HNAS system like a typical SAN client: HNAS is capable of driving the storage array and disks heavily. The practices outlined in this document describe how to configure the HNAS system to achieve the best results. Consult the HNAS Configuration Guidelines document for additional information about HNAS practices.

USP-V and VSP series configuration best practices

Port I/O request limit (queue depth)
Each Fibre Channel (FC) port on a VSP system has an I/O request limit (queue depth) of 2048 requests. The USP-V system has an I/O request limit of 4096 requests when not sharing microprocessors (MPs) with another FC port; see the section called Microprocessor sharing on a USP-V system for more information. HDS recommends that you treat each USP-V port as if it has an I/O request limit of 2048.

You can map up to 64 active LUNs on an FC port while maintaining a queue depth of 32 per LUN. HDS recommends that you configure one additional path per LUN for redundancy; however, no more than two paths are required. Dedicate the FC ports to HNAS.
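The arithmetic behind the 64-LUN figure is simple integer division. The following is a minimal illustrative sketch (our own helper, not an HDS tool); the constants are the per-port and per-LUN limits quoted above.

    # Queue depth budget: how many active LUNs fit on one FC port without
    # exceeding the port's I/O request limit.
    PORT_REQUEST_LIMIT = 2048   # VSP FC port (also the recommended USP-V budget)
    PER_LUN_QUEUE_DEPTH = 32

    def max_active_luns(port_request_limit: int, per_lun_queue_depth: int) -> int:
        """Number of active LUNs whose combined queue depth fits the port limit."""
        return port_request_limit // per_lun_queue_depth

    print(max_active_luns(PORT_REQUEST_LIMIT, PER_LUN_QUEUE_DEPTH))  # -> 64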

General recommendations for USP-V/VSP systems

•  Use an HNAS superflush setting of 3x128 for each USP-V/VSP system drive, regardless of whether the system drive is internal or external. HNAS supports larger superflush settings; however, use them only when instructed by HDS Global Solutions and Services (GSS).
•  For RAID groups presented to HNAS on USP-V/VSP systems, use 7D+1P for SAS with sequential workloads and 3D+1P for random workloads when not using HDP.
•  Configure only one LDEV/LU per RAID group when possible.
•  Configure any SATA or 7.2K RPM SAS drives in RAID-6 (6D+2P) to achieve the best random performance.
•  Map the paths for each LUN from separate front-end director (FED) boards (for example, 1A and 2A).
•  Dedicate the RAID and parity groups to HNAS for optimal performance.
•  Use the standard (default) host mode for USP-V and VSP systems with HNAS.
•  Do not share the following resources with any non-HNAS clients:
   -  Hitachi Adaptable Modular Storage (AMS) FC ports (when connected through a Universal Volume Manager (UVM) system).
   -  USP-V/VSP FC or UVM ports.
   -  RAID groups or Hitachi Dynamic Provisioning (HDP) pools.
•  Set the preferred path when connecting to USP-V or VSP; however, this setting is not required.
•  When mapping logical devices (LDEVs) as LUNs to HNAS, do not reuse LUN numbers across the host groups; this makes it easier to identify the LDEV to system drive (SD) mappings. For example, when mapping 128 LDEVs, map the first 64 LDEVs as LUNs 0-63 on ports 1A and 2A, and the remaining LDEVs as LUNs 64-127 on ports 1B and 2B (a sketch of this numbering scheme follows this list).
•  HNAS supports 512 LUNs per HNAS cluster. To optimize scalability, create larger LUNs rather than many smaller LUNs.
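A hypothetical planning helper (the function name is ours, not an HDS API) that reproduces the numbering scheme above: LUN numbers are never reused, and each block of 64 LDEVs lands on its own FED port pair.

    def lun_mapping_plan(ldev_count, port_pairs, luns_per_pair=64):
        """Map each LDEV index to (LUN number, port pair); LUN numbers never repeat."""
        plan = {}
        for ldev in range(ldev_count):
            pair = port_pairs[ldev // luns_per_pair]
            plan[ldev] = (ldev, pair)   # LUN number == LDEV index, unique across ports
        return plan

    plan = lun_mapping_plan(128, [("1A", "2A"), ("1B", "2B")])
    print(plan[0])    # -> (0, ('1A', '2A'))
    print(plan[64])   # -> (64, ('1B', '2B'))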

Other recommendations

Multi-LUN RAID groups
With drive capacities increasing, multiple LDEVs often must be created from a single RAID group to address all of the available capacity within that RAID group. In this case, create as few LDEVs as possible, and map the LDEVs on HNAS to the same system drive group (SDG).

Figure 1 - Multi-LUN RAID groups configuration

Universal Volume Manager best practices

General recommendations
Warning: Use caution when configuring volumes to avoid negative performance impacts. HDS recommends that you do not use Universal Volume Manager (UVM) for workloads that have high performance requirements.

For the best performance, HDS strongly recommends that you enable cache mode on the USP-V or VSP system for external storage presented to or used by HNAS. You can allocate cache for HNAS within a cache partition.

Virtualizing AMS/HUS storage behind USP-V and VSP systems
Each Adaptable Modular Storage/Hitachi Unified Storage (AMS/HUS) FC port has an I/O request limit (queue depth) of 512 requests. Note that the newer HUS 0935A microcode can be configured to support 1024 requests when using HNAS code 11.2.33.xx or higher. This limit allows for 16 active LUNs on an FC port, with each LUN having a queue depth of 32. In UVM mode, the maximum per-port USP-V/VSP I/O request limit is 384. In that scenario, HDS recommends that you use 24 LUNs per AMS port, with 12 being active LUNs and 12 being failover LUNs. Use two paths, one from each AMS controller. For the best performance, do not configure more than two paths per LUN. See the configuration example in Figure 2 - External LUN port mapping.
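A minimal sketch of the per-port budget just described, under the assumption that outstanding requests are simply the active LUN count times the per-LUN queue depth; the constants come from this section.

    AMS_PORT_REQUEST_LIMIT = 512   # 1024 with HUS 0935A microcode and HNAS 11.2.33.xx+
    UVM_PORT_REQUEST_LIMIT = 384   # USP-V/VSP per-port limit in UVM mode
    PER_LUN_QUEUE_DEPTH = 32

    def uvm_port_fits(active_luns: int) -> bool:
        """True if the active LUNs' combined queue depth fits both limits."""
        outstanding = active_luns * PER_LUN_QUEUE_DEPTH
        return outstanding <= min(AMS_PORT_REQUEST_LIMIT, UVM_PORT_REQUEST_LIMIT)

    print(uvm_port_fits(12))  # -> True: 12 active (+12 failover) LUNs per AMS port
    print(uvm_port_fits(16))  # -> False: 512 tags exceed the 384 UVM limit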


Recommendations for configuring virtualization settings for AMS/HUS storage

•  Important: The Universal Volume Manager User's Guide recommends that you limit the maximum number of command tags going to an individual AMS 2000 system to 500 for the best performance.
•  The AMS 2500 Rev. 01 controllers require a special setting when attached as external storage to a USP-V system. See the section called Virtualizing an AMS 2500 behind a USP-V system for more details.
•  The USP-V/VSP system assigns a queue limit value of eight to each external LUN. On the VSP, you can modify the external LUN queue limit to a value from 2 to 128. HDS recommends that you set the queue limit to 32.

In an environment that has no strict performance requirements, you can use more than 16 active LUNs from an external AMS/HUS system. In that scenario, reduce the queue limit to 16 or fewer for each of the external LUNs.

Figure 2 - External LUN port mapping

The USP-V system also has an external port queue depth of 256; however, this depth is reduced to 128 when the system is in microprocessor (MP) sharing mode. When virtualizing storage for use by HNAS, HDS recommends that you do not share MPs. Be aware that avoiding MP sharing can reduce the number of FC ports that can be used on a feature.

VSP system option mode to improve sequential write I/O performance
For VSP storage systems with cache mode set to ON, you can turn on System Option Mode 872 to ensure that the order of data transferred from the VSP to external storage is handled in a manner more conducive to sequential detection by the external storage controller. For heavy sequential write workloads, this results in an overall improvement in system I/O performance. This option can only be set by a Hitachi Data Systems GSS engineer; contact HDS GSS to schedule an appointment.

Avoiding LDEV carving of UVM external LUNs
To maintain optimal performance and avoid oversubscribing an external storage array, HDS recommends that you do not carve external LUNs (eLUNs) into multiple USP-V or VSP system LDEVs. Carving eLUNs in this way can allow HNAS to submit more command tags than the eLUN can handle.

Preventing oversubscription of SATA on an AMS 2000 system
SATA storage can easily be oversubscribed in an AMS system due to per-drive queue limits. When the AMS is attached behind a USP-V or VSP system, this can lead to high response times and high AMS cache utilization (write pending), which reduces performance. To maintain consistent performance without oversubscribing the disks, follow these recommendations:

•  Avoid creating multiple logical units (LUs) per RAID group; create only one LU per RAID group. If this is not possible, follow the next recommendation.
•  When attaching SATA LUs through UVM to a USP-V/VSP system, reduce the queue depth on the USP-V/VSP so that a RAID group receives no more than 32 command tags (see the sketch after this list). See the Virtual Storage Platform or Universal Storage Platform User's Guides for more information about command queue settings, specifically the section about editing external WWN settings.
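The queue depth reduction can be sized with a one-line calculation. This sketch assumes command tags divide evenly across a RAID group's LUs; it is illustrative only.

    MAX_TAGS_PER_RAID_GROUP = 32

    def per_elun_queue_limit(lus_per_raid_group: int) -> int:
        """Largest per-eLUN queue limit keeping the whole RAID group within 32 tags."""
        return MAX_TAGS_PER_RAID_GROUP // lus_per_raid_group

    print(per_elun_queue_limit(1))  # -> 32 (one LU per group, the preferred layout)
    print(per_elun_queue_limit(4))  # -> 8  (four LUs carved from one group)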

Microprocessor sharing on a USP-V system
The FED boards have microprocessors (MPs) that control the FC ports. Each MP owns two ports on the 16-port feature (2x 8-port boards). In Figure 3 – Microprocessor sharing on USP-V system FEDs, you can see that MP00 controls ports 1A and 5A, and MP01 controls ports 3A and 7A. When both ports owned by an MP are active, the port I/O request limit (queue depth) is reduced by one half for each port (a sketch follows the figure). Take this into account when you direct- or switch-attach an HNAS system to the USP-V. This restriction does not apply to 4-port boards (8-port feature).

Figure 3 – Microprocessor sharing on USP-V system FEDs
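A minimal sketch of the halving described above; the 4096 base limit is the USP-V figure quoted earlier, and the function is ours, not an HDS utility.

    USPV_PORT_REQUEST_LIMIT = 4096

    def effective_port_limit(active_ports_on_mp: int) -> int:
        """Per-port request limit given how many of the MP's two ports are active."""
        return USPV_PORT_REQUEST_LIMIT // max(1, active_ports_on_mp)

    print(effective_port_limit(1))  # -> 4096: only one of the MP's ports is in use
    print(effective_port_limit(2))  # -> 2048: both ports active, limit halves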

External LUN size restrictions
The USP-V system supports a maximum external LUN size of 3.99 TB. This limit means that multiple LUNs must be created for higher-capacity drives (greater than 1 TB) on virtualized AMS/HUS storage systems. In this scenario, HDS recommends that SD groups be used on HNAS to prevent head thrashing of the AMS external disks. See Figure 4 - Virtualizing multi-LUN RAID groups for an example.

Figure 4 - Virtualizing multi-LUN RAID groups

The VSP system can recognize a maximum external LUN size of 59.99 TB; however, the largest Open-V LDEV is 3.99 TB. This means that a large external LUN (greater than 4 TB) must be carved into multiple 3.99 TB Open-V LDEVs so that they can be assigned to the HNAS system or used as dynamically provisioned pool volumes.
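How many Open-V LDEVs a given external LUN requires is a ceiling division; this illustrative sketch uses the 3.99 TB Open-V maximum from this section.

    import math

    OPEN_V_MAX_TB = 3.99

    def open_v_ldevs_needed(external_lun_tb: float) -> int:
        """Number of 3.99 TB Open-V LDEVs required to cover an external LUN."""
        return math.ceil(external_lun_tb / OPEN_V_MAX_TB)

    print(open_v_ldevs_needed(16.0))   # -> 5
    print(open_v_ldevs_needed(59.99))  # -> 16 (the VSP external LUN maximum)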

Hitachi Dynamic Provisioning and UVM
HNAS systems support Hitachi Dynamic Provisioning (HDP); however, thin-provisioned dynamically provisioned volumes (DP-Vols) are not supported with HNAS systems. When placing AMS/HUS LUs into a USP-V/VSP HDP pool, use a one-to-one correlation between back-end AMS/HUS LUs and front-end VSP DP-Vols. For example, if 10 AMS LUs are used in an HDP pool, create 10 VSP DP-Vols from that pool. Note: This correlation rule only applies when there is a single LU per AMS RAID group.

Recommendations for creating multiple LUs per AMS/HUS RAID group

•  For high-performance sequential read workloads, the use of HDP or virtualization (UVM) may reduce performance.
•  All LUs created from a RAID group must be dedicated to the same HDP pool, and all DP-Vols created from that HDP pool must be allocated to the HNAS system. This means that you cannot share any resources that are associated with the HNAS.
•  When creating DP-Vols, use a ratio of one DP-Vol to one AMS external RAID group to prevent oversubscription of the AMS 2000 system.

Example: There are 10x 8D+2P 2 TB SATA RAID groups on an AMS 2500. To use UVM, three LUs are carved from each RAID group, and all of the eLUNs are put into an HDP pool on a VSP system. Because there are 10 RAID groups, HDS recommends that you create 10 DP-Vols (see the sketch after this example).
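The example works out as follows; this is an illustrative sketch of the one-DP-Vol-per-backend-RAID-group rule, not an HDS tool.

    def dp_vol_plan(backend_raid_groups: int, lus_per_raid_group: int) -> dict:
        """eLUN count follows the carving; DP-Vol count follows the RAID groups."""
        return {
            "eluns_in_pool": backend_raid_groups * lus_per_raid_group,
            "dp_vols": backend_raid_groups,   # one DP-Vol per backend RAID group
        }

    print(dp_vol_plan(10, 3))  # -> {'eluns_in_pool': 30, 'dp_vols': 10}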

Be aware of the following LUN size limits:

•  The USP-V system maximum internal LUN size is 2.99 TB, or 4 TB using HDP.
•  The VSP system maximum internal LUN size is 3.99 TB; 60 TB using HDP; and 4 TB if using HDP with any Hitachi replication product, including ShadowImage, TrueCopy, and Hitachi Universal Replicator.

Note: These LUN size limits also apply to SyncDR (Metro Cluster) implementations because TrueCopy Synchronous is used.

Direct and switch attach connectivity best practices

HDS recommends that you use the connectivity shown in the figures in this section when you connect HNAS systems to Hitachi USP-V and VSP storage systems.

Single node, direct-attached

Figure 5 - Connectivity for a single node, direct-attached configuration


Two node cluster, direct-attached

Figure 6 - Connectivity for a two node cluster, direct-attached configuration

Table 1 – Connectivity

Node ports        USP-V ports
Node 1 hport 1    1A
Node 2 hport 1    1C
Node 1 hport 2    1B
Node 2 hport 2    1D
Node 1 hport 3    2A
Node 2 hport 3    2C
Node 1 hport 4    2B
Node 2 hport 4    2D

Notes:

•  To correctly fail over between nodes, the HNAS system must see the same view of the storage from both nodes. The system interprets the first digit of the USP-V/VSP port number as the controller, and both nodes must see LUNs from the same controller on the same hport. For example, LUN 0 on USP-V/VSP port 1A connects to node 1 hport 1, and LUN 0 on USP-V/VSP port 1C connects to node 2 hport 1 (a sketch of this check follows these notes).
•  The HNAS 3100 and 3200 servers support direct-attached storage only in single node configurations.
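Both notes reduce to a mechanical check. The following illustrative helper (our own, not HNAS software) compares the controller digit (the first character of the array port name, such as '1' in '1A') seen on each hport of the two nodes.

    def same_storage_view(node1: dict, node2: dict) -> bool:
        """node maps: hport number -> array port name seen on that hport."""
        return (node1.keys() == node2.keys() and
                all(node1[h][0] == node2[h][0] for h in node1))

    node1 = {1: "1A", 2: "1B", 3: "2A", 4: "2B"}   # ports seen by node 1
    node2 = {1: "1C", 2: "1D", 3: "2C", 4: "2D"}   # ports seen by node 2
    print(same_storage_view(node1, node2))  # -> True: controller digits match per hport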


Two node cluster, USP-V switch-attached

Figure 7 - Connectivity for a two node cluster, USP-V switch-attached configuration

For correct failover, each node must recognize the exact same view of the storage; for example, node 1 recognizes port 1A from hport 1 and node 2 recognizes port 1A from hport 1. Certain FC ports are not used because their use reduces the I/O request limit for the other FC port owned by the same MP. See the section called Microprocessor sharing on a USP-V system for more information.

Table 2 – Zoning (both nodes must see the exact same storage ports)

Zone      Node ports        USP-V ports
Zone 1    Node 1 hport 1    1A, 2B
Zone 2    Node 1 hport 3    2A, 1B
Zone 3    Node 2 hport 1    1A, 2B
Zone 4    Node 2 hport 3    2A, 1B

It is not necessary to create multiple host storage domains for each HNAS hport on a specific USP-V FC port; a single host storage domain is sufficient. Figure 7 - Connectivity for a two node cluster, USP-V switch-attached configuration shows connectivity using two HNAS hports. To achieve the highest performance, HDS recommends that you use all four hports on an HNAS system.
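Table 2 can also be expressed as data and checked for symmetry; this is an illustrative sketch under the assumption that a zone is just a (node, hport) pair mapped to a set of storage ports.

    ZONES = {
        ("Node 1", "hport 1"): {"1A", "2B"},
        ("Node 1", "hport 3"): {"2A", "1B"},
        ("Node 2", "hport 1"): {"1A", "2B"},
        ("Node 2", "hport 3"): {"2A", "1B"},
    }

    def nodes_see_same_ports(zones: dict) -> bool:
        """Each hport must reach the same storage ports from both nodes."""
        hports = {hp for (_, hp) in zones}
        return all(zones[("Node 1", hp)] == zones[("Node 2", hp)] for hp in hports)

    print(nodes_see_same_ports(ZONES))  # -> True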

Two node cluster, VSP switch-attached

Figure 8 - Connectivity for a two node cluster, VSP switch-attached configuration

Table 3 – Zoning (both nodes must see the exact same storage ports)

Zone      Node ports        VSP ports
Zone 1    Node 1 hport 1    1A, 2B
Zone 2    Node 1 hport 3    2A, 1B
Zone 3    Node 2 hport 1    1A, 2B
Zone 4    Node 2 hport 3    2A, 1B

Note: It is not necessary to create multiple host storage domains for each HNAS hport on a specific VSP FC port. A single host storage domain is sufficient.


Two node cluster, VSP performance configuration, switch-attached

Figure 9 - Connectivity for a two node cluster, VSP performance configuration, switch-attached

Table 4 – Zoning (both nodes must see the exact same storage ports)

Zone      Node ports        VSP ports
Zone 1    Node 1 hport 1    1A
Zone 2    Node 1 hport 3    2A
Zone 3    Node 2 hport 1    1A
Zone 4    Node 2 hport 3    2A
Zone 5    Node 1 hport 2    2B
Zone 6    Node 1 hport 4    1B
Zone 7    Node 2 hport 2    2B
Zone 8    Node 2 hport 4    1B

Note: It is not necessary to create multiple host storage domains for each HNAS hport on a specific VSP FC port. A single host storage domain is sufficient.

Virtualizing an AMS 2500 behind a USP-V system

The following information is taken from the Hitachi Universal Volume Manager User's Guide; see the full guide for complete details. If an AMS 2500 system is externally connected to a USP-V/VM system, you must specify the CPU load reduction for the Cross-CTL I/O Mode port option.

•  The mode applies when using the UVM feature with an AMS 2500 system with firmware 0890H or later.
•  When using UVM with an AMS 2500 system, the USP-V/VM system uses a round-robin approach for I/Os to the AMS 2500. In a dual-core AMS 2500 system, this results in half the target LUs being handled by a core that does not own the LU, which causes higher CPU utilization. When you enable the CPU load reduction for Cross-CTL I/O Mode, the processing order between the cores is tuned, resulting in a lower CPU load.
•  The Cross-CTL I/O Mode port option applies to the Rev. 01 controllers and not to the Rev. 02 controllers because of the hardware architecture. As of firmware level 0893/B, the CPU load reduction for Cross-CTL I/O Mode is guarded from Rev. 02 controllers; therefore, with Rev. 02 controllers, this selection is not visible in the port options. This is supported by the following statement from the 0893/B ECN: "12) CPU Load Reduction for Cross-CTL I/O Mode guard for DF800EH – Severity Low." Prior to level 0893/B and starting at 0890/B, the CPU load reduction option was selectable for Rev. 02 controllers, but if the option was turned on, it could block the controller.
•  If the AMS 2500 system is already connected and does not have this option set, setting the option is a disruptive procedure. If multiple paths are connected, it is a minor disruption because you can remove one path and then add the path again with the correct option selected while the remaining paths stay connected.


Hitachi Data Systems Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639 U.S.A.
www.hds.com

Regional Contact Information
Americas: +1 408 970 1000, [email protected]
Europe, Middle East, and Africa: +44 (0)1753 618000, [email protected]
Asia Pacific: +852 3189 7900, [email protected]

MK-92HNAS025-00