FusionSphere Performance: Best Practices for Database Applications

white paper

Intel® Cloud Builders

1 Overview

FusionSphere is a key technical platform in Huawei's cloud computing-based data center solutions. FusionSphere virtualizes physical resources, such as CPUs, memory, and storage, into a group of logical resources that can be centrally managed, flexibly scheduled, and dynamically allocated. Multiple isolated VMs that run simultaneously can be created on a single physical server from these logical resources. Database application software, such as Oracle Real Application Clusters (RAC) and SQL Server, deployed on the FusionSphere virtualization platform offers high performance, security, and reliability, and is easy to expand. This document describes how to optimize the performance of databases such as Oracle RAC and SQL Server deployed on the FusionSphere platform, and provides related configuration suggestions.

White Paper: FusionSphere Performance: Best Practices for Database Applications


Table of Contents

1 Overview
2 General Configurations
  2.1 BIOS
  2.2 Hardware-Assisted Virtualization
  2.3 VM
3 Computing Resource Configuration
  3.1 Introduction
  3.2 Hyper-Threading
  3.3 NUMA
  3.4 x2APIC
  3.5 Transparent Huge Memory Page
  3.6 CPU QoS
  3.7 Memory QoS
4 Storage Configuration
  4.1 Introduction
  4.2 RDM
5 Network Configuration

2 General Configurations

2.1 BIOS

The FusionSphere platform supports multiple hardware platforms. The basic input/output system (BIOS) configurations may vary depending on the hardware in use. Table 2-1 lists the recommended values for the related BIOS configuration items.

2.2 Hardware-Assisted Virtualization

2.2.1 Configuration Suggestion

CPUs of the latest generation are recommended because they support CPU and memory management unit (MMU) virtualization, such as Intel® Virtualization Technology (Intel VT-x). FusionSphere supports hardware-assisted virtualization by default. Using CPUs that support hardware-assisted virtualization achieves optimal performance in the FusionSphere system.

2.2.2 Configuration Method

Most Intel CPUs provide the hardware-assisted virtualization function, Intel VT-x. The BIOS management interface of the server displays the configuration items shown in Figure 2-1. (The configuration method varies depending on the hardware device. The ATAE R3 board is used as an example in Figure 2-1.) Perform the following steps to check whether the in-use CPUs support hardware-assisted virtualization:

Table 2-1 BIOS configuration item list

Item | Value | Description
NUMA Support | Enabled | Enables the non-uniform memory access (NUMA) feature.
Hardware Prefetcher | Enabled | Enables the hardware prefetch function (the CPU prefetches upcoming data to improve system performance).
Adjacent Cache Line Prefetch | Enabled | Prefetches the adjacent cache line.
Intel® HT Technology | Enabled | Enables Intel hyper-threading technology.
Intel® Virtualization Technology | Enabled | Enables Intel virtualization technology.
Intel® SpeedStep® Technology | Enabled | Enables Intel SpeedStep technology for dynamic CPU frequency scaling.
Intel® TurboMode Technology | Enabled | Enables the Intel turbo acceleration mode.
Intel C-STATE Technology | Disabled | Disables the Intel C-State power-saving function to avoid latency when cores wake from deep sleep states.
Intel® Virtualization Technology (Intel VT-d) | Enabled | Enables Intel VT-d to support device passthrough.
VT Support | Enabled | Enables the Intel VT technology.
Local x2APIC | Enabled | Enables Local x2APIC, provided that the virtualization layer and the guest operating system (OS) support it. Local x2APIC requires ACPI 4.0 and does not support remapping.
ACPI Selection | ACPI4.0 | Required by the Local x2APIC technology.

White Paper: FusionSphere Performance: Best Practices for Database Applications

Step 1 Check on the CPU vendor's website whether the CPUs support hardware-assisted virtualization. For Intel CPUs, see: http://ark.intel.com/zh-cn/Products/VirtualizationTechnology

Step 2 Run the following command on the server to view the CPU information and check whether flags contains vmx (Intel):

cat /proc/cpuinfo | grep flags | head -n1

If flags contains vmx (as shown in Figure 2-2), the CPUs support hardware-assisted virtualization. (The presence of vmx is independent of whether the hardware-assisted virtualization function is enabled or disabled in the BIOS. Switch to the BIOS management interface to check whether the function is enabled.)

Note: The hardware-assisted virtualization function is enabled by default. However, you are still advised to verify it in the BIOS.
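The check in Step 2 can be wrapped in a small helper; the sketch below is illustrative (the function name and sample flag strings are not part of FusionSphere):

```shell
# has_vt FLAGS: print "yes" if the flag list contains vmx (Intel VT-x),
# mirroring the /proc/cpuinfo check in Step 2.
has_vt() {
  case " $1 " in
    *" vmx "*) echo yes ;;
    *)         echo no ;;
  esac
}

# On a live host you would feed it the real flags line:
#   has_vt "$(grep -m1 '^flags' /proc/cpuinfo)"
has_vt "fpu vme de pse vmx sse2"   # -> yes
has_vt "fpu vme de pse sse2"       # -> no
```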


2.3 VM

2.3.1 Configuration Suggestions

• Install the PV driver on VMs. The paravirtualized (PV) driver contains the VM disk driver, network interface card (NIC) driver, and balloon driver. After the PV driver is installed, the system provides optimal disk and network performance in non-passthrough mode.
• Disable infrequently used service processes, such as anacron, apmd, atd, autofs, cups, cups-config, gpm, isdn, iptables, kudzu, netfs, and portmap, based on site requirements to save VM CPU and memory resources.
• Perform operations such as scheduled tasks, backup, and anti-virus scans during off-peak hours to avoid excessive CPU usage; otherwise, database performance deteriorates.

2.3.2 Configuration Method

Disable irrelevant processes running on the VM:


Figure 2-1 BIOS configuration

Figure 2-2 CPU Information

1. Run the following command to list the services that are enabled:

chkconfig --list | grep 3:on

2. Run the following commands to disable unwanted services and prevent them from starting by default:

chkconfig atd off
chkconfig autofs off
service atd stop
service autofs stop

The same configuration approach is also applicable to Windows VMs.
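The per-service commands above can be generated in a loop. The sketch below only prints the commands (a dry run) so that they can be reviewed before being piped to sh; the function name and service list are illustrative:

```shell
# disable_cmds SERVICE... : print, for each service, the chkconfig and
# service commands used above, without executing anything.
disable_cmds() {
  for svc in "$@"; do
    echo "chkconfig $svc off"
    echo "service $svc stop"
  done
}

disable_cmds atd autofs
# Review the output, then apply it with:  disable_cmds atd autofs | sh
```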


3 Computing Resource Configuration

3.1 Introduction

Table 3-1 lists the CPU configurations for optimal performance in the FusionSphere system. For details, see the FusionSphere Performance Optimization Guide.

3.2 Hyper-Threading

3.2.1 Configuration Suggestion

Use hardware that supports the hyper-threading (HT) technology and enable HT in the BIOS.

Table 3-1 Computing resource configurations

Item | Value
Hyper-threading (HT) | Enabled
Transparent huge page (THP) | Enabled
Host NUMA | Enabled
Guest NUMA | Enabled
x2APIC | Enabled
CPU QoS | Disabled
Memory overcommitment | Disabled
Memory reservation | All reserved

3.2.2 Configuration Method

Figure 3-1 Hyper-threading configuration


Note: The hyper-threading technology allows two threads to run on one physical CPU core. In this case, one physical CPU core functions as two logical cores, improving the CPU efficiency.
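The state described in the note can be verified from /proc/cpuinfo: hyper-threading is active when the number of logical CPUs per package ("siblings") exceeds the number of physical cores per package ("cpu cores"). A minimal sketch (the function name is illustrative):

```shell
# ht_state SIBLINGS CORES: "enabled" when logical CPUs per package
# exceed physical cores per package, "disabled" otherwise.
ht_state() {
  if [ "$1" -gt "$2" ]; then echo enabled; else echo disabled; fi
}

# On a live host (awk extracts the first matching value):
#   siblings=$(awk -F': ' '/^siblings/ {print $2; exit}' /proc/cpuinfo)
#   cores=$(awk -F': ' '/^cpu cores/ {print $2; exit}' /proc/cpuinfo)
#   ht_state "$siblings" "$cores"
ht_state 16 8   # -> enabled
ht_state 8 8    # -> disabled
```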


3.3 NUMA

3.3.1 Configuration Suggestions

NUMA is a memory management technology designed for symmetric multiprocessing (SMP) systems. Memory access latency depends on the location of the memory relative to the accessing CPU: with this feature enabled, a CPU accesses its local memory faster than memory attached to another CPU or shared memory. The prerequisites for enabling the NUMA function in the FusionSphere system are that the physical memory modules are symmetrically distributed (the memory module model, size, and count are distributed symmetrically across nodes, based on the suggestions provided by hardware vendors) and that the NUMA function has been enabled in the BIOS. The NUMA architecture in the FusionSphere system comprises host NUMA and guest NUMA.

Host NUMA automatically allocates a VM's CPU and memory resources to the same NUMA node and balances the CPU workload among NUMA nodes. If the number of vCPUs on a VM is greater than the number of CPU cores of a NUMA node, the host NUMA function does not take effect. Guest NUMA presents the memory and vCPU resources to the VM and exposes the NUMA topology inside the VM, so that VM application processes preferentially use memory resources on one NUMA node, thereby improving memory access efficiency. If the number of vCPUs on a VM is less than the number of CPU cores of a single node, guest NUMA does not take effect. If the number of vCPUs on a VM is a multiple of the number of CPU cores per node, guest NUMA evenly allocates the vCPUs across N nodes. If the number of vCPUs is greater than the number of CPU cores per node but is not a multiple of it, guest NUMA evenly allocates the vCPUs to each node of the physical server.

Figure 3-2 Host NUMA configuration


Host NUMA is enabled by default. Guest NUMA is disabled by default.

• If the number of VM CPU cores is less than the number of cores of a single server node and the VM memory size is smaller than the memory size of a single node, enable only host NUMA.
• If the number of VM CPU cores is greater than the number of cores of a single server node and the VM memory size is larger than the memory size of a single node, enable both host NUMA and guest NUMA.

3.3.2 Configuration Method

Host NUMA takes effect only when NUMA Support is enabled in the BIOS. For details, see Figure 3-2. Guest NUMA is disabled by default in the FusionSphere system. To enable it, log in to FusionCompute and select Guest NUMA on the Basic Configuration page shown in Figure 3-3.
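The two sizing rules above can be expressed as a small decision helper; this is a sketch under the stated rules (the function name and the example node sizes are illustrative):

```shell
# numa_advice VCPUS VMEM_GB NODE_CORES NODE_MEM_GB
# Applies the guidance above: enable only host NUMA when the VM fits
# inside one NUMA node; enable host and guest NUMA when it spans nodes.
numa_advice() {
  if [ "$1" -le "$3" ] && [ "$2" -le "$4" ]; then
    echo "host NUMA only"
  else
    echo "host NUMA + guest NUMA"
  fi
}

numa_advice 8 64 16 128    # VM fits in one node -> host NUMA only
numa_advice 24 256 16 128  # VM spans nodes -> host NUMA + guest NUMA
```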



Figure 3-3 Guest NUMA configuration

3.4 x2APIC

3.4.1 Configuration Suggestion

The x2APIC feature can prevent the performance deterioration caused by process scheduling in the hypervisor, improving computing virtualization performance. The virtualization platform and the VM OS must both support the x2APIC feature.

The x2APIC feature is enabled by default in the FusionSphere virtualization platform.

Table 3-2 VM OSs that support the x2APIC feature

VM OS | FusionSphere virtualization platform
Windows 7 and later | Supported
SUSE Linux Enterprise 11 Service Pack 2 (SP2) and later | Supported


3.4.2 Configuration Method

For a Linux VM

1. Run the following command on the VM host to check whether flags in the CPU information contains x2apic:

# cat /proc/cpuinfo | grep flags | head -n1

If flags contains x2apic, the CPU supports the x2APIC function.

2. Run the following command on the VM to check whether the configuration has taken effect:

# dmesg | grep -i x2apic

If "Enabling x2apic" or "Enabled x2apic" is displayed in the command output, the x2APIC feature is successfully enabled.

For a Windows VM

The x2APIC feature is enabled in a Windows VM by default. Perform the following steps to check whether the x2APIC feature is enabled:

1. Log in to the host accommodating the Windows VM that runs SQL Server.
2. Run the following command to view the VM configuration file:

virsh dumpxml VM name

3. Check whether the VM configuration file contains viridian and whether viridian is set to 1.

3.5 Transparent Huge Memory Page

3.5.1 Configuration Suggestion

You are advised to increase the memory page size to reduce the number of entries in the page mapping table, improving the CPU's address lookup efficiency.

Figure 3-4 Checking whether the CPU supports the x2APIC function

Figure 3-5 Verifying whether the x2APIC function takes effect


The transparent huge page feature is enabled in the FusionSphere system by default to improve memory access performance.

3.5.2 Configuration Method

For a Linux VM

An Oracle database is used as an example. The parameter values are provided only for reference; in practice, configure them based on specific service requirements.

1. Run the following command to check the size of a huge memory page:

# grep Hugepagesize /proc/meminfo

Information similar to the following is displayed:

Hugepagesize: 2048 kB

The preceding output shows that the huge memory page size is 2048 KB.

2. Run the following command to check whether huge memory pages have been assigned:

# cat /proc/meminfo | grep HugePages_Total

Information similar to the following is displayed:

HugePages_Total: 0

The preceding output shows that no huge memory pages are assigned.

3. Run the following commands to configure the number (nr_hugepages) of huge memory pages to assign. The nr_hugepages value can be calculated based on the following formula:

nr_hugepages ≥ System global area size (MB)/Huge memory page size (MB)

Note: System global area (SGA) is a common Oracle data buffer area. For details, see the Oracle database guide.

# echo "vm.nr_hugepages=8192" >> /etc/sysctl.conf
# sysctl -p

Restart the VM and run the following command to check whether the huge memory page configuration takes effect:

grep HugePages_Total /proc/meminfo

Information similar to the following is displayed:

HugePages_Total: 8192

The preceding output shows that the system has been assigned 8192 huge memory pages (8192 x 2048 KB = 16384 MB).

4. Open the /etc/security/limits.conf file using the vi editor and add the following two lines to configure the memlock parameter for the Oracle user:

oracle soft memlock 16777216
oracle hard memlock 16777216

The memlock size can be calculated based on the following formula: memlock size ≥ number of huge memory pages x 1024. Note: In this example, the memlock size is set to twice the number of huge memory pages multiplied by 1024, namely 16777216 (2 x 8192 x 1024).

Switch to the Oracle user and run the following command to check the memlock value:

ora_test@oracle[/home/oracle]> ulimit -l

Information similar to the following is displayed:

16777216

Run the following command to start the database:

ora_test@oracle[/home/oracle]> sqlplus / as sysdba


SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jan 25 09:50:33 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
idle> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1218292 bytes
Variable Size 67111180 bytes
Database Buffers 92274688 bytes
Redo Buffers 7168000 bytes
Database mounted.
Database opened.
idle> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Run the following command to check whether the huge memory pages are being used:

# grep HugePages_Free /proc/meminfo

Information similar to the following is displayed:

HugePages_Free: 5589

The number of free huge memory pages displayed is smaller than the total number of huge memory pages, which indicates that the Oracle database is using the huge memory page function.

To disable the AMM feature, set the following parameters, in which sga_target, memory_target, and memory_max_target must be set to 0.



CAUTION: For Oracle Database 11g, the automatic memory management (AMM) feature must be disabled. Otherwise, huge memory pages cannot be used. Configurations for the other parameters are the same as those for Oracle Database 10g. After the huge memory page is configured, restart the database to make the huge memory page feature take effect.

ALTER SYSTEM SET sga_max_size=16g SCOPE=SPFILE;
ALTER SYSTEM SET sga_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=8g SCOPE=SPFILE;
ALTER SYSTEM SET memory_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET memory_max_target=0 SCOPE=SPFILE;
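The sizing arithmetic used in section 3.5.2 can be checked with two small helpers; this is a sketch (the function names are illustrative, and the doubling of memlock follows the example above):

```shell
# hugepages_needed SGA_MB PAGE_KB: smallest page count whose total
# size covers the SGA (ceiling division).
hugepages_needed() {
  echo $(( ($1 * 1024 + $2 - 1) / $2 ))
}

# memlock_kb PAGES: twice the page count multiplied by 1024,
# as in the limits.conf example above.
memlock_kb() {
  echo $(( 2 * $1 * 1024 ))
}

hugepages_needed 16384 2048   # 16 GB SGA, 2 MB pages -> 8192
memlock_kb 8192               # -> 16777216
```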

For a Windows VM

Perform the following operations to enable the huge memory page function on a Windows VM (only Windows Server 2003 and later are applicable):

1. Choose Control Panel > Administrative Tools > Local Security Policy.
2. On the Local Security Policy page, choose Local Policies > User Rights Assignment.

Figure 3-6 CPU QoS configuration

3. Double-click Lock pages in memory and add the user and group.
4. Restart the server. On the Lock pages in memory page, check whether the added user is included in the user group.


3.6 CPU QoS

3.6.1 Configuration Suggestion

Disable the CPU QoS feature for database applications. In general, CPU QoS ensures optimal allocation of computing resources among VMs and prevents resource contention between VMs with different service requirements, so it can effectively increase resource utilization and reduce costs. For performance-sensitive database VMs, however, reserve the full CPU capacity and do not limit it, as described below.

Set CPU QoS values during VM creation based on the planned VM services. The computing capabilities of VMs with different CPU QoS settings vary. The system ensures VM CPU QoS by setting the minimum computing capability and the resource allocation priorities.

3.6.2 Configuration Method

Perform the following operations when setting Properties during VM creation: In the CPU Resource Control area, drag the Reserved slider fully to the right and select No limit.

Figure 3-7 Memory QoS configuration


3.7 Memory QoS

3.7.1 Configuration Suggestion

To achieve optimal performance, you are advised to disable the memory overcommitment policy and set the reserved memory of the VM to the actually assigned memory size.

3.7.2 Configuration Method

Perform the following operations when setting Properties during VM creation: In the Memory Resource Control area, drag the Reserved slider fully to the right.


4 Storage Configuration

4.1 Introduction

• Storage hardware is configured based on the applicable storage performance best practice documents provided by vendors.
• The Oracle database log disk and data disk are deployed on different redundant array of independent disks (RAID) groups. Multiple data disks are used for a large amount of data and are distributed across different RAID groups.
• Raw device mapping (RDM) is used as the VM storage. I/O queue depths are adjusted to prevent I/O from being blocked under heavy traffic.
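The queue-depth adjustment mentioned in the last bullet is typically done through sysfs on Linux; the sketch below is hedged (the device name, depth value, and attribute path are examples and vary by driver and kernel):

```shell
# qdepth_path DEV: sysfs attribute holding the SCSI queue depth for DEV.
qdepth_path() {
  echo "/sys/block/$1/device/queue_depth"
}

# On a live host you might inspect and raise the depth, for example:
#   cat "$(qdepth_path sdb)"
#   echo 64 > "$(qdepth_path sdb)"
qdepth_path sdb   # -> /sys/block/sdb/device/queue_depth
```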


4.2 RDM 4.2.1 Configuration Suggestion The RDM feature allows VMs to identify Small Computer System Interface (SCSI) disks and issue SCSI commands to hosts, which pass through the commands to storage devices. Therefore, VMs can provide high performance for I/O-sensitive services, such as Oracle RAC and MSCS. Figure 4-1 shows the RDM I/O path.


Figure 4-1 PVSCSI I/O path



4.2.2 Configuration Method

The RDM feature is supported by default in the FusionSphere system. This feature is used by mounting a single logical unit number (LUN) in domain 0 to the VM. Perform the following operations to configure the data store:

1. Create a single LUN on the IP storage area network (SAN) storage device and map the LUN to the host.
2. Add data stores to the host using RDM.
3. Create a disk on the data store and mount the disk to the VM.

Note: Only one disk can be created on the data store, and it can be shared by multiple VMs.

4. Run the following command on the CNA node to view the VM disk configuration:

# virsh dumpxml [vmid]

Information similar to the following is displayed:

Figure 4-2 Selecting storage device

5 Network Configuration

Configuration Suggestions

• The network interface card (NIC) performance must match the switch performance, and the network bandwidth must be sufficient. For example, if you use 10GE NICs, the switch must support 10GE bandwidth.
• The VM management network, service network, and heartbeat network of a database cluster are deployed in different network segments. The service network segment and heartbeat network segment should be separated from other services to avoid interference.
• Heartbeat network delay adversely affects cluster performance. Therefore, deploy the NIC for the heartbeat network in passthrough mode to reduce network delay.
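The matching rule in the first bullet can be sketched as a simple check (the function name and speed values are illustrative; on a live host the NIC speed can be read with ethtool):

```shell
# link_match NIC_MBPS SWITCH_MBPS: "ok" when the switch port is at
# least as fast as the NIC, per the first bullet above.
link_match() {
  if [ "$2" -ge "$1" ]; then echo ok; else echo "switch too slow"; fi
}

# On a live host the NIC speed can be read with:  ethtool eth0 | grep Speed
link_match 10000 10000   # 10GE NIC on a 10GE switch port -> ok
link_match 10000 1000    # 10GE NIC on a GE port -> switch too slow
```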


Disclaimers Copyright © Huawei Technologies Co., Ltd. 2016. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd. Trademarks and Permissions: Huawei logo and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders. Notice: The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided “AS IS” without warranties, guarantees or representations of any kind, either express or implied. The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied. INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL’S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. 
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR. Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked “reserved” or “undefined.” Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information. The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel’s Web site at www.intel.com. Copyright © 2016 Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

* Other names and brands may be claimed as the property of others.

Printed in USA

0316/HDW/MM/PDF

Please Recycle