
Nutanix Tech Note Virtualizing Oracle Databases on Converged Infrastructure with WebScale Technology

This tech note reviews the performance advantages of and best practices for Nutanix converged infrastructure for Oracle databases, including Oracle RAC.

©2014 All Rights Reserved, Nutanix Corporation

 

Table of Contents

Executive Summary
Audience and Purpose
Oracle Database on Converged Infrastructure
Enhancing Availability
Guidelines for Platform Selection
Performance Testing of Oracle
Appendix: Oracle on Nutanix Best Practice Checklist


Executive Summary

The Nutanix Virtual Computing Platform is a highly resilient converged compute and storage platform that brings the benefits of web-scale infrastructure to the enterprise. Designed to support virtual environments, including VMware vSphere and Hyper-V, the Nutanix architecture employs a storage controller in a VM, called the Nutanix Controller VM (CVM). This VM runs on every Nutanix server node in a cluster to form a highly distributed, shared-nothing, web-scale infrastructure.

This document highlights why the software-defined storage architecture used by the Nutanix Virtual Computing Platform is the ideal platform for virtualized instances of Oracle databases and Oracle Real Application Clusters (RAC). For business-critical transactional and analytical Oracle databases, Nutanix delivers the performance, scalability, and availability desired by IT, including database administrators. Features include:

• Localized I/O and use of flash for index and key database files for low-latency operations
• A highly distributed approach to handle both random and sequential workloads
• Non-disruptive scalability by adding new nodes without system downtime
• Nutanix VMCaliber data protection and disaster recovery to automate backups

Testing of simulated real-world workloads and conditions for Oracle demonstrated that the mid-range, four-node Nutanix Virtual Computing Platform (NX-3450) was able to deliver the following results using just 2U of rack space:

• 100% random: 100,000 read operations and 50,000 write operations per second using 4KB blocks across four nodes
• 100% random: 4,000 IOPS per node with 50% reads using 16KB blocks
• 100% sequential: 1.4GB/s write and 3GB/s read throughput across four nodes
• Average response times of less than 0.5 seconds for a TPC-C-like test while VMs were migrated using vMotion

Audience and Purpose

This technical note is intended for datacenter and database administrators and architects responsible for architecting, designing, building, and maintaining infrastructure for Oracle databases and Oracle RAC. Some familiarity with virtualization (e.g., VMware vSphere), Oracle, and Oracle RAC is assumed. This document provides:

• An overview of the Nutanix Virtual Computing Platform and the benefits of using Nutanix infrastructure for Oracle databases and Oracle RAC
• The storage performance delivered by Nutanix under a transactional benchmark, including scenarios involving multi-NIC vMotion, VMware vSphere Distributed Switch, and Network I/O Control (NIOC)
• Guidelines for selecting the right Nutanix platform
• Recommended best practices for configuring Oracle databases, the virtualization stack, and the Nutanix Virtual Computing Platform


Oracle Database on Converged Infrastructure

This tech note reviews key aspects of Oracle database performance and the benefits of using the Nutanix Virtual Computing Platform for critical Oracle databases, including Oracle RAC workloads. The Nutanix platform offers the ability to run both Oracle and other VM workloads simultaneously on the same platform, while isolating Oracle on dedicated hosts for licensing purposes. Density for Oracle deployments is primarily driven by the computing and storage requirements of the database. Test validation has shown it is preferable to increase the number of Oracle DB VMs on the Nutanix platform to take full advantage of its performance and capabilities, rather than scaling large numbers of DB instances or schemas within a single VM. From an I/O standpoint, the Nutanix platform handles the throughput and transaction requirements of demanding Oracle transactional and analytical databases through its patented Nutanix Distributed File System (NDFS).

The Nutanix Virtual Computing Platform is a purpose-built infrastructure solution for virtualization and cloud environments. It brings the many benefits and economics of the web-scale architectures used by companies such as Microsoft (Azure), Google, and Facebook to the enterprise. The Nutanix solution includes highly dense storage and server compute (CPU and memory) in a single platform building block. Each building block is based on industry-standard, high-performing Intel processor technology, and delivers a unified, scale-out architecture with no single point of failure (SPOF). The Nutanix platform doesn't rely on traditional SAN or NAS storage, or on expensive storage network interconnects. Because of its scale-out nature, the Nutanix platform is ideally suited to scale-out database implementations, such as Oracle RAC.

What sets Nutanix apart from other storage solutions is its simplicity. This is demonstrated not only in its ease and speed of deployment, but also in its consumer-grade, simple operations. There are no LUNs to manage, no RAID configurations, no Fibre Channel switches to manage, no zoning or masking to configure, no registered state change notifications (RSCN), and no complicated storage multipathing to set up. All storage management is VM-centric, dealing with virtual disks. Storage I/O from a virtual disk is seen for what it is (sequential or random) and is optimally handled by NDFS without any I/O blender effect.

There is one shared pool of storage across the cluster, which includes flash-based SSDs for high performance and low latency, and high-capacity HDDs for affordable capacity. The different types of storage devices in the storage pool are automatically tiered using intelligent algorithms to ensure the most frequently used data is available in memory or in flash.

Figure 1 shows an overview of the Nutanix Virtual Computing Platform architecture, including each hypervisor host (VMware ESXi or Microsoft Hyper-V), Oracle VMs (User VMs), the Storage Controller VM (Nutanix Controller VM), and its local disks. Each Controller VM is directly connected to the local storage controller and its associated disks, bypassing the normal hypervisor I/O path and ensuring optimal performance.


By using local storage controllers on each ESXi host and Nutanix's unique data locality functionality, access to data through NDFS is local, which avoids constantly transferring data over the network. This improves latency and reduces network congestion, preserving more network bandwidth for real application traffic. NDFS ensures that writes are replicated and data is distributed within the platform for redundancy. The degree of data redundancy is determined by a user-defined Redundancy Factor. The local storage controller on each host ensures that storage performance, as well as storage capacity, increases when additional nodes are added to the Nutanix Virtual Computing Platform.

Figure 1: Nutanix scales without the bottlenecks of traditional storage architectures.

The simplified storage layout and the localization of data to the node where each VM executes provide a number of performance benefits for each type of Oracle database.

Nutanix Features and Benefits

Oracle OLTP (transactional) database:
• Localized I/O for low-latency operations
• Flash for index and key database files
• Handles both random and sequential workloads with ease

Oracle OLAP (analytical) database:
• High-performance queries and reports with localized I/O
• Abundant sequential read and write throughput
• Scales with ease to accommodate growth

Table 1. Nutanix performance benefits for OLTP and OLAP Oracle databases.

While the storage is local to each node in the distributed scale-out architecture, it appears to the hypervisor as shared storage and therefore integrates seamlessly with the virtualization layer. In the case of VMware vSphere, this includes support for features such as VMware DRS, VMware High Availability, and VMware Fault Tolerance. Because the storage appears shared across all hosts, it can support the multi-writer disk sharing techniques required to run Oracle RAC.

The combination of SSD and HDD local storage, together with intelligent automated tiering, balances cost and performance. NDFS resiliency techniques also eliminate the performance penalties associated with legacy RAID solutions. Data localization allows performance and QoS to be provided per host, so noisy VMs do not significantly impact the performance of their neighbors. This allows for large, mixed-workload vSphere clusters that are efficient from a capacity and performance standpoint, and also resilient to failure.

The typical performance of the mid-range Nutanix model (NX-3450) is a combined 100,000 4KB random read operations per second, 50,000 4KB random write operations per second, 1.4GB/s sequential write, and 3GB/s sequential read throughput. Mixed application workloads and I/O patterns may have unique performance profiles. Nutanix recommends following VMware and Oracle best practices for mixing different application workloads on the respective hypervisors.

Figure 2: Oracle can run alongside different workloads on Nutanix.

The nature of the Nutanix Virtual Computing Platform architecture and NDFS simplifies the storage layout. Figure 3 illustrates an example layout, which is standard in a Virtual Computing Platform environment. It consists of a single NFS datastore (on ESXi) or SMB 3.0 share (on Hyper-V). There is no need to worry about multiple LUNs or their associated queue depths. While the figure shows the layout for VMware vSphere, the equivalent applies to Microsoft Hyper-V.

Figure 3: Oracle VM disk layout on Nutanix Virtual Computing Platform.

Enhancing Availability

To ensure IT organizations deliver on their promise of protecting data and keeping critical Oracle VMs available, database administrators should leverage Oracle RMAN, irrespective of whether they use ASM or host file systems such as VxFS. The Nutanix platform can supplement Oracle RMAN and Data Guard with VMCaliber policies for snapshot-based backups. For backup and archiving, months' and years' worth of space-efficient snapshots can be stored locally or on a secondary Virtual Computing Platform, eliminating the need for separate backup storage. Policies can be set to efficiently replicate Oracle VMs over the WAN to another Nutanix system to protect against more catastrophic disasters. Custom runbooks using VMware SRM, Nutanix Prism APIs, and Nutanix Windows PowerShell cmdlets can be created for automated failover of complex applications.

DBAs can also deploy fully functioning copies of Oracle environments, including the database, in minutes, using VM-level cloning or replication on the same or a separate system. This gives individuals their own high-performance environments for testing, development, quality assurance, reporting, or training without compromising the availability or performance of production databases.

Guidelines for Platform Selection

For the most efficient Oracle database licensing, Nutanix recommends either the NX-3461 or NX-6280 platform. The specific model should be chosen based on the compute, storage, and licensing requirements of the Oracle database under consideration.

• Nutanix NX-3061 nodes are recommended for running small- to medium-sized Oracle databases that fit within the storage capacity provided by the node. (See the Nutanix web site for platform specifications, including storage capacity information.)
• For larger Oracle databases (i.e., those exceeding the storage capacity of a mid-range NX-3061 node), higher-capacity NX-6080 nodes should be considered.

From a software perspective, Nutanix Pro Edition is recommended for deploying Oracle DB and Oracle RAC. The Pro Edition delivers capabilities that enhance the operating environment, including Time Stream and Nutanix SRA and VSS integration. (More information on Nutanix software features is available on the Nutanix web site.)

Deployment Recommendations

• Ideally, keep the working set in the SSD tier, and keep the database size within the capacity limits of the Nutanix node.
• Choose a Nutanix model that can fit the full database storage on a single node. (Note: for larger databases that cannot fit on a single node, ensure there is ample bandwidth between nodes.)
• For I/O-heavy ORADB workloads, utilize higher-memory node models and assign a larger PGA/SGA (a hedged sizing sketch follows this list).
• Utilize a node with at least 2x the memory of the largest single VM.
• Utilize a node that fits your organization's licensing constraints.
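As an illustration of the PGA/SGA guidance above, the following is a minimal sqlplus sketch for an OLTP VM with 112GB of allocated RAM. The 60%/15% splits and exact values are assumptions to adapt, not tested settings, and changing sga_max_size requires an instance restart.

    sqlplus / as sysdba <<'SQL'
    -- SGA at roughly 60% of the 112GB allocated to the VM (OLTP guidance: 50-75%)
    ALTER SYSTEM SET sga_max_size = 68G SCOPE=SPFILE;
    ALTER SYSTEM SET sga_target   = 68G SCOPE=SPFILE;
    -- PGA starting point at roughly 15% for OLTP
    ALTER SYSTEM SET pga_aggregate_target = 17G SCOPE=SPFILE;
    SQL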


Monitoring the redo logs, archive logs, and daily or weekly backups, and measuring how much data changes, is helpful in determining the active working set. However, that will not capture hot data blocks that are read frequently but change infrequently. Nutanix therefore recommends using AWR reports to determine the change rate and I/O patterns of Oracle databases between two statistics snapshot periods. AWR and Statspack reports provide data on both read and write I/O and on how many database blocks have changed. You can then use this data to more accurately determine the active working set size of your databases. It is recommended that the periods chosen include relevant cyclical business peaks.
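As a concrete illustration, the following hedged sketch brackets a peak business period with manual AWR snapshots and then pulls the text report between them. The snapshot IDs and instance number are illustrative, and AWR requires the Oracle Diagnostics Pack license.

    sqlplus / as sysdba <<'SQL'
    -- Take a manual snapshot before (and again after) the peak period
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
    -- Then generate the report between the two snapshot IDs:
    SELECT output FROM TABLE(
      DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
        (SELECT dbid FROM v$database),  -- this database
        1,                              -- instance number (assumed here)
        1201, 1212));                   -- illustrative begin/end snapshot IDs
    SQL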

Performance Testing of Oracle

A series of tests was run to demonstrate the capability of the Virtual Computing Platform for Oracle DB and Oracle RAC, and to validate the best practices provided in this document. The tests used Benchmark Factory for Databases version 6.9.3 and a TPC-C-like scenario with a scale factor of 5000 to generate load on the platform. During the TPC-C-like tests and load conditions, the Oracle databases were migrated using vMotion.

Testing was conducted on a single Nutanix NX-3450 configured with four nodes and 256GB RAM per node, using VMware vSphere 5.5 as the hypervisor. (Note: the NX-3461 recommended earlier in this document is the successor to the NX-3450 and should deliver better performance due to increased compute resources.) All network traffic was distributed over 2x 10GbE ports per host, and each host was redundantly connected to 2x 10GbE switches. Three of the four nodes ran Oracle RAC VMs, while the fourth node ran the test harness and load generator. Each VM running as an Oracle RAC node was configured with 112GB RAM (100GB reserved), 8 vCPUs, and Oracle Linux 6.5 64-bit with the Red Hat compatible kernel. The Nutanix Controller VM on each host was configured with 32GB RAM.

During testing, large sequential reads peaked at over 1GB/s combined, and mixed random read and write operations peaked at over 12,000 IOPS combined across the three active nodes, with an average I/O size of 16KB. This demonstrates that the typical performance (mixed 50% read, 100% random, 16KB I/O over large data sets) is 4,000 IOPS per Nutanix node for a mid-range system. The storage performance equates to 6,000 IOPS per rack unit (RU) and 12 IOPS per watt of power for mixed random 16KB I/O.


Figure 4: Oracle RAC - Multi-NIC vMotion on Virtual Computing Platform

State                                 Phase           Host B   Host C   Host D
Single vMotion                        Before (90s)    52.81    43.13    42.70
(Host B > Host C)                     During (63s)    58.06    55.94    42.66
                                      After (90s)     27.79    71.64    42.90
Double vMotion                        Before (90s)    51.48    43.31    43.29
(Host B > Host C, Host C > Host B)    During (92s)    71.82    77.42    34.15
                                      After (90s)     52.15    37.90    41.41
Triple vMotion                        Before (90s)    54.13    41.44    43.04
(Host B > Host C, Host C > Host D,    During (175s)   72.68    56.77    61.92
Host D > Host B)                      After (90s)     31.90    30.69    21.87

Table 2. CPU utilization (%) during vMotion.

As with any vMotion live migration event, performance was reduced and average response times increased during the migration. However, no user sessions were disconnected, and the average response time remained under 0.5 seconds for the entire test.

                    Host B                   Host C                   Host D
                    Transmit    Receive      Transmit    Receive      Transmit    Receive
Single vMotion      957.14      14,831.21    15,362.18   429.89       332.81      447.73
Double vMotion      12,640.37   13,225.61    12,929.66   12,264.02    266.41      254.83
Triple vMotion      16,737.57   14,395.43    13,973.31   13,250.91    13,360.47   17,363.56

Table 3. Multi-NIC throughput during vMotion operations (Mb/s).

The multi-NIC vMotion configuration was based on a vSphere Distributed Switch (VDS) with Network I/O Control (NIOC) enabled and vMotion traffic set to low priority. This ensures quality of service and priority for all other traffic types, including the Oracle RAC cluster interconnects and Nutanix CVMs.


The Nutanix Virtual Computing Platform can co-exist with existing storage investments and offload workloads from existing storage platforms, freeing up both capacity and performance until the legacy environment is due for refresh. It is easy to migrate into the Virtual Computing Platform using live migration of VMs and storage. The performance capability, linear scalability, and uncompromising simplicity of the Nutanix platform make it a very good option for database appliances and Oracle Database as a Service initiatives. Nutanix has created a series of documents on virtualizing Oracle and other critical applications. These reports can be found at www.nutanix.com under the resources section.


Appendix: Oracle on Nutanix Best Practice Checklist

The Oracle DB on Nutanix best practices can be summarized into the following high-level items. Note that the majority of best practice configurations and optimizations come at the Oracle DB and Linux level.

General Best Practices
1) Perform a current state analysis to identify workloads and sizing
2) Spend time up front to architect a solution that meets both current and future needs
3) Design to deliver consistent performance, reliability, and scale
4) Don't undersize, don't oversize, right size
5) Start with a PoC, test, optimize, iterate, scale
6) Use only certified operating systems for Oracle Database
7) When using Oracle Database with SAP, follow SAP guidelines in addition to these guidelines

Core Components: Oracle DB

1) Performance and Scalability
   a) Use the Oracle Validated Package for your database version and Oracle standard OS recommendations as the starting point for the installation
      i) oracle-rdbms-server-11gR2-preinstall; or
      ii) oracle-rdbms-server-12cR1-preinstall
   b) Configure the Oracle initialization parameters Parallel_Threads_per_CPU = 1 and DB_File_MultiBlock_Read_Count = 512
   c) SGA = 50%-75% of allocated RAM for OLTP, 30% for DSS/OLAP
   d) PGA size depends on the number of connections and the required sort area; as a starting point, use 15% of allocated RAM for OLTP, 50% for DSS/OLAP
   e) Utilize multiple disks for redo logs, archive logs, and database tablespaces
      i) For each storage area, start with a minimum of 2 disks for small environments, or 4 or more for larger environments (excludes OS and application binaries disks)
      ii) Look for I/O wait contention and scale the number of disks as necessary

   f) Use Oracle Automatic Storage Management (ASM) for database files, redo logs, and archive log storage, with each group of files in a different ASM disk group
      i) If you choose to use LVM instead of ASM, it is recommended to stripe volumes (do not concatenate) over multiple disks and use a 512KB stripe size; this reduces the chance of sequential I/O being seen as random, which can happen with smaller stripe sizes. Keep the logical volumes (LVs) and physical volumes (PVs) for data files separate from the LVs and PVs used for redo logs and archive logs. (A hedged lvcreate sketch follows this item.)
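A minimal sketch of such a striped layout, assuming four dedicated virtual disks for data files; the device names and volume group name are illustrative:

    pvcreate /dev/sd{c,d,e,f}
    vgcreate vg_oradata /dev/sd{c,d,e,f}
    # Stripe across all four PVs with a 512KB stripe size (do not concatenate)
    lvcreate --name lv_oradata --extents 100%FREE --stripes 4 --stripesize 512 vg_oradata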

   g) Utilize a 4MB ASM allocation unit (AU) size for ASM disk groups
   h) Configure ASM disk groups for external redundancy (a hedged creation sketch follows item i)
   i) Use disk mode Independent Persistent for all ASM disks
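A minimal sketch combining items g) and h) when creating a data disk group; the disk group name and disk paths are illustrative:

    sqlplus / as sysasm <<'SQL'
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/oracleasm/disks/DATA1', '/dev/oracleasm/disks/DATA2'
      ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '11.2';
    SQL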


   j) Split redo logs, archive logs, and database tablespaces over separate vSCSI controllers
   k) Set the Linux maximum I/O size to match the ASM AU size in rc.local, e.g.:
      i) for disk in sdk sdl sdn sdp sdq; do
           echo 4096 > /sys/block/$disk/queue/max_sectors_kb
           echo "$disk set max_sectors_kb to 4096"
         done
   l) Enable huge pages for the Oracle DB SGA; this requires modifications to sysctl.conf (vm.nr_hugepages and vm.hugetlb_shm_group) and limits.conf (oracle memlock), as sketched below
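A minimal sketch of those entries, assuming a 64GB SGA and 2MB huge pages; the group ID and all values are illustrative and must be sized to your own SGA:

    # /etc/sysctl.conf
    vm.nr_hugepages = 32768        # 32768 x 2MB pages = 64GB, sized >= SGA
    vm.hugetlb_shm_group = 501     # GID of the group allowed huge pages (e.g., dba)

    # /etc/security/limits.conf (values in KB, >= SGA size)
    oracle soft memlock 67108864
    oracle hard memlock 67108864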

   m) Use Automatic Shared Memory Management
   n) Add the following options to the boot loader (grub) configuration:
      i) iommu=soft elevator=noop apm=off transparent_hugepage=never numa=off powersaved=off
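For illustration only, a hedged example of where these options land on the active kernel line in /etc/grub.conf; the kernel version and root device are assumptions:

    kernel /vmlinuz-2.6.39-400.el6uek.x86_64 ro root=/dev/mapper/vg_root-lv_root \
        iommu=soft elevator=noop apm=off transparent_hugepage=never numa=off powersaved=off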

   o) For very high performance database systems, add the following options to the boot loader (grub) configuration in addition to those listed above (PVSCSI required):
      i) vmw_pvscsi.cmd_per_lun=256 vmw_pvscsi.ring_pages=32

   p) Add the following lines to sysctl.conf to reduce swapping (see the apply-and-verify sketch after item q):
      i) vm.overcommit_memory = 1
      ii) vm.dirty_background_ratio = 5
      iii) vm.dirty_ratio = 15
      iv) vm.dirty_expire_centisecs = 500
      v) vm.dirty_writeback_centisecs = 100
      vi) vm.swappiness = 0
   q) On high-performance networks, increase the network receive and transmit queues; these should be added to rc.local:
      i) /sbin/ethtool -G ethX rx 4096 tx 4096 (VMXNET3 required)
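A short sketch of applying the sysctl changes from item p) without a reboot and spot-checking the result:

    sysctl -p                            # reload /etc/sysctl.conf
    sysctl vm.swappiness vm.dirty_ratio  # verify a couple of the new values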

   r) Add options = -x to /etc/sysconfig/ntpd
   s) Oracle redo files and groups (a hedged sketch follows this item)
      i) Configure sufficient redo groups, number of redo files, and file sizes to meet RPO and RTO requirements and transaction throughput
      ii) Configure redo files on a separate vSCSI controller from other database files
      iii) Configure at least two disks in the ASM redo disk group initially and add disks as performance requirements dictate
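A minimal sketch of adding redo log groups in a dedicated ASM disk group; the disk group name, group numbers, and 1G size are illustrative and should be derived from transaction throughput and RPO/RTO targets:

    sqlplus / as sysdba <<'SQL'
    ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO01') SIZE 1G;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('+REDO01') SIZE 1G;
    SQL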

   t) Database data files
      i) Configure one or more database files per configured vCPU to maximize I/O parallelism
      ii) Pre-allocate and configure file sizes appropriately; adjust as necessary, making sure files are grown at an equal rate
      iii) Enable Auto Extend as a failsafe and set the next size to be a multiple of the ASM AU
      iv) At a maximum, keep below 80% of disk capacity utilization
      v) Use multiple data files and disks
         (1) Look for contention for in-memory allocation, such as buffer busy waits; if there is contention, increase the number of files
         (2) Look for I/O subsystem contention; if there is contention, spread the data files across more disks by adding additional disk(s) to the Oracle ASM disk group(s)
   u) Utilize Oracle Database AWR and ADDM reports to identify performance bottlenecks and tune accordingly
   v) Scale the number of ORADB VMs rather than running a large number of ORADB instances and schemas per VM
   w) More memory = higher performance and less read I/O; if seeing memory pressure, increase VM memory and avoid swapping
   x) Size the Linux swap partition to be big enough to handle an unexpected load; monitor swapping, and if it is consistently above 0 KB used, increase VM memory

2) Availability
   a) In most cases, vSphere HA will provide an adequate level of availability (99.9%+) and uptime for mission-critical/tier-1 applications
   b) For mission-critical/tier-1 applications that require higher levels of availability:

      i) Utilize Oracle Data Guard or archive log shipping; or
      ii) Utilize Oracle Data Guard with Fast-Start Failover; or
      iii) Utilize Oracle GoldenGate; or
      iv) Utilize Oracle RAC, which can be combined with the previous options
   c) Use Oracle RMAN and Oracle RMAN-integrated solutions for backup and recovery
   d) Take consistent database snapshots/backups leveraging Nutanix snapshots; frequency should be derived from required RPOs
   e) When using redo log multiplexing, ensure the same number of disks is provisioned for the primary and secondary redo log ASM disk groups and that the disk groups are on different virtual SCSI controllers

3) Manageability
   a) Standardize, monitor, and maintain
   b) Leverage ORADB application monitoring solutions (e.g., Enterprise Manager 12c) integrated with virtualization monitoring solutions such as vCenter Operations Manager
   c) Create standardized data file sizes and data file growth sizes
   d) Pre-allocate data files and manage size proactively
   e) Create standardized ORADB VM templates
   f) Utilize consistent disk quantities and layout schemes for ORADB VMs
   g) Leverage centralized database and OS authentication (LDAP)

Oracle RAC
1) Use Oracle 11g R2 11.2.0.2 or above (11.2.0.3 or above recommended)
2) Use the multi-writer flag for disk sharing; see VMware KB 1034165 (an illustrative .vmx example follows item 3)
3) Create all shared disks as Thick Provisioned Eager Zeroed
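Per VMware KB 1034165, the multi-writer flag can be enabled by adding a line per shared disk to each RAC VM's .vmx configuration; the controller and target numbers here are illustrative:

    scsi1:0.sharing = "multi-writer"
    scsi1:1.sharing = "multi-writer"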


4) Use disk mode Independent Persistent for all ASM disks
5) Use two ASM disk groups for OCR voting disks
6) Use three disks each for the OCR voting ASM disk groups and configure them with high redundancy
7) Ensure the Oracle and Grid users are part of the appropriate operating system groups, such as oinstall, dba, asmdba, and asmadmin, with oinstall as the primary group
8) Use jumbo frames for Oracle RAC interconnect networks
9) Use VLANs to separate Oracle RAC interconnect traffic
10) Configure two NICs for the RAC cluster private interconnect and one NIC for public communications
11) For each Oracle RAC private interconnect port group, set a different NIC as primary; for example, 10G1 primary on Port Group 1, 10G2 primary on Port Group 2
12) Add the following setting for each Oracle RAC private interconnect NIC to sysctl.conf, where X is the interface number: net.ipv4.conf.ethX.rp_filter = 2
13) If using a vSphere Distributed Switch, leverage Network I/O Control to ensure quality of service for different traffic types
14) Configure a Single Client Access Name (SCAN) and GNS, and use the SCAN name for all client connectivity
15) Use NTP and a consistent, reliable time source to ensure consistent time between RAC nodes
16) Use a clustered mount point, such as OCFS, or NFS as a database backup location for RMAN if using backup to disk as part of data protection

VMware vSphere
1) Follow VMware performance best practices
2) Size vSphere clusters for N+1 redundancy
3) Use a percentage value for HA admission control = 1/(hosts per cluster), e.g., 25% for a four-host cluster
4) Avoid vCPU core oversubscription initially (for tier-1 workloads)
5) For small ORADB VMs, keep vCPUs
