WHITE PAPER

Solaris 9 Powered by VERITAS

VERITAS Foundation Suite™ and Database Edition™ 3.5 on Solaris 9



TABLE OF CONTENTS

Introduction
Benchmarks used & Types of tests performed
    PostMark
    fsck
    SPEC-SFS
    OLTP
Results summary
    PostMark: Performance and CPU utilization of small-file updates
    Fsck - file system check and recovery
    SPEC-SFS - File system performance
    OLTP - File system performance in a database environment
Test configurations
    PostMark
        Machine configuration
        Software configurations
        Test details
    Fsck
        Machine configuration
        Software configurations
        Test details
    SPEC-SFS
        Machine configuration
        Software configurations
        Test details
    OLTP
        Machine configuration
        Software configuration
        Test details
Results
    PostMark
        RAID-0
        RAID 1+0
        RAID-5
    Fsck Benchmark
    SPEC-SFS
        Detailed Results
    OLTP tests
        Database performance
        Database throughput
        Snapshots
Conclusion

Copyright 2002 VERITAS Software Corporation. All rights reserved. VERITAS, VERITAS Software, the VERITAS logo, and all other VERITAS product names and slogans are trademarks or registered trademarks of VERITAS Software Corporation in the US and/or other countries. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies. Specifications and product offerings subject to change without notice. June 2002.


INTRODUCTION
This paper evaluates the performance of VERITAS Foundation Suite 3.5 (VERITAS File System (VxFS) version 3.5 on top of VERITAS Volume Manager (VxVM) version 3.5), henceforth referred to as the VERITAS stack, against the performance of the Sun UNIX File System (UFS) and Solaris Volume Manager (formerly known as Solstice DiskSuite or SDS), henceforth referred to as the Sun stack, running on the Solaris 9 GA operating system. The industry-standard benchmarks used to evaluate the performance of these technologies include the PostMark version 1.5 file server benchmark, file system check (fsck) tests, and the Standard Performance Evaluation Corporation (SPEC) System File Server (SFS) benchmark. The performance of the VxVM and VxFS technologies is also evaluated through the VERITAS Database Edition 3.5 [1] in a 64-bit Solaris 9 operating environment, as measured by an Online Transaction Processing (OLTP) workload.

BENCHMARKS USED & TYPES OF TESTS PERFORMED

PostMark
The PostMark version 1.5 file server benchmark was used to measure the performance of small-file updates, in an effort to model the disk performance of electronic mail, news, and web-based commerce. Further details on the benchmark can be found at http://www.netapp.com/tech_library/postmark.html.

fsck
The fsck tests examine the performance of full fsck for UFS, UFS with logging, and VxFS on two different volume configurations. The performance of file system checks (fsck) must be fast on high-availability servers, which cannot afford long periods of downtime. Journaling file systems, such as VxFS and UFS (with logging turned on), can perform a file system check faster than a more thorough "full fsck", needing only to replay the last few transactions that had not yet been committed to disk at the time of the system failure. Only if the log has become damaged is a full fsck required, where the entire file system's content is examined for consistency. In contrast, checking a file system that does not have the benefit of logging (such as UFS without logging enabled) always requires the more time-consuming and expensive full fsck.

SPEC-SFS
The Standard Performance Evaluation Corporation (SPEC) System File Server (SFS) benchmark sfs97_R1, also known as SFS 3.0, is used to evaluate file system performance. Information about this benchmark can be obtained at www.spec.org.

OLTP
The purpose of this test is to illustrate the impact of different I/O and memory configurations on database performance. The benchmark used for this performance comparison was derived from the commonly known TPC-C benchmark, which comprises a mixture of read-only and update-intensive transactions that simulate a warehouse supplier environment. (Details on this benchmark can be obtained from the Transaction Processing Performance Council's web page at http://www.tpc.org.)

RESULTS SUMMARY

PostMark: Performance and CPU utilization of small-file updates
Our performance studies show that, on average, the VERITAS stack is more than 14 times faster than UFS with logging, depending on the number of PostMark processes run concurrently and the underlying volume configuration.

[1] The Database Edition 3.5 comprises the following components: VERITAS Volume Manager™ (VxVM) 3.5, patch 1; VERITAS File System™ (VxFS) 3.5 (including Quick I/O, Cached Quick I/O, and Storage Checkpoints); and VERITAS Extension for Oracle Disk Manager™ 3.5.



VERITAS Foundation Suite with QuickLog [2] was found to be even faster, 15 times faster than the Sun stack with logging. Unlike VERITAS' performance, which scales well as concurrent PostMark processes are increased from 1 to 16, UFS with logging does not scale to a multiprocess small-file workload. CPU consumed per unit of transaction throughput by the VERITAS stack is always less than that consumed by the Sun stack.

Fsck - file system check and recovery
We found that, due to the high cost of downtime associated with a full fsck, the use of UFS without logging in a high-availability server environment is prohibitive. In such environments, the use of a journaling file system should be considered mandatory, not an option. This conclusion has an important implication for the performance studies: because UFS without logging is not a viable file system in a high-availability server due to fsck time, the primary "baseline" comparison to VxFS is UFS with logging. In the rare case when a journaling file system (such as UFS with logging or VxFS) is unable to replay its log during fsck, the more expensive full fsck is required. At such times, VxFS performs a full fsck between 5 and 15 times faster than UFS with logging.

SPEC-SFS - File system performance
The Standard Performance Evaluation Corporation (SPEC) System File Server (SFS) benchmark has been used to measure the performance of VxFS running on top of VxVM compared to Solaris 9 SVM and UFS. The VERITAS stack obtained peak throughput that was up to 196% greater than the SUN stack with logging, and provided significantly faster response times to client requests.

OLTP - File system performance in a database environment
The OLTP benchmark used in this study is commonly used to evaluate the database performance of specific hardware and software configurations. By normalizing the system configuration and varying the file system I/O configuration, it was possible to study the impact of various storage layouts on database performance with this benchmark. The OLTP performance measurements illustrate that the VERITAS Database Edition has performance equal to raw partition configurations. The results also show throughput for VxFS to be up to four times greater than UFS in the database environment. When comparing database throughput of file system cloning in a database environment, VERITAS is over 5 times faster than the Sun stack in an update-intensive OLTP database environment.

TEST CONFIGURATIONS

POSTMARK
Machine configuration
PostMark tests were run on a Sun Fire E6800 system with eight 750 MHz UltraSPARC-III™ CPUs, 8 GB of RAM, and four PCI controllers (the E6800 does not have SBus). Runs were performed on three types of volumes: a 20-column RAID 1+0 volume (40 disks), a 20-column RAID-0 volume (20 disks), and a 20+1-column RAID-5 volume (21 disks). For testing VERITAS File System (VxFS) and VxFS with QuickLog, volumes were created using VERITAS Volume Manager (VxVM), while volumes for the UFS and UFS with logging tests were created using Solaris Volume Manager. For the RAID-5 volume created using VxVM, a logging device was not used.

[2] VERITAS QuickLog™ is a feature that comes with VERITAS Foundation Suite™ and enhances VERITAS File System performance by eliminating the time that a disk spends seeking between the log and data areas of VxFS. Although QuickLog can improve file system performance, VxFS does not require QuickLog to operate effectively.



The 40 disks involved in the volume were spread across two Sun A5200 JBOD arrays in a split-loop configuration, each array containing a total of 22 disks, each an 18 GB 10,000 RPM Seagate disk.

Software configurations
Four file system configurations were used: UFS, UFS with logging, VxFS, and VxFS with QuickLog. Each file system was created using default options. Default mount options were used, with two exceptions: UFS with logging used the logging option, and VxFS with QuickLog used the –o qlog=vxlog1 option. The VERITAS Volume Manager command 'vxassist' was used as follows to create volumes for the VxFS and VxFS with QuickLog testing. For the testing of UFS and UFS with logging, the Solaris Volume Manager command 'metainit' was used as shown below.

RAID 0
VERITAS Volume Manager (VxVM):
$ vxassist –g testdg –o ordered make postmark_vol 335400000 layout=stripe ncol=20
Solaris Volume Manager:
$ metainit d0 1 20 <list of 20 disks>

RAID 1+0
VERITAS Volume Manager (VxVM):
$ vxassist –g testdg –o ordered make postmark_vol 335400000 layout=stripe-mirror ncol=20 init=active
Solaris Volume Manager:
$ metainit d10 1 20 <list of 20 disks>
$ metainit d20 1 20 <list of 20 disks>
$ metainit d0 –m d10 d20

RAID 5
VERITAS Volume Manager (VxVM):
$ vxassist –g testdg –o ordered make postmark_vol 335400000 layout=raid5 ncol=21 nlog=0
Solaris Volume Manager:
$ metainit d0 –r <list of 21 disks>

Test details
File system scalability was tested by varying the number of concurrent PostMark processes from 1 to 16. Regardless of the concurrency, each PostMark process operates on a distinct file set comprised of 20,000 files spread across 1,000 directories, and performs 20,000 transactions. In other words, as the number of concurrent processes is scaled up, the amount of work done by any one PostMark process is kept constant. We kept the PostMark default file sizes, which are linearly distributed across a range of 500 bytes to 9.77 KB. Aside from the number of directories and files, the only other PostMark option changed from the default was to bypass I/O buffering of the standard C library. Performance results are reported as PostMark throughput, in transactions per second. In concurrent multiprocess runs, the aggregate throughput (the sum of the throughputs of the individual processes) is reported. All numbers shown are the average of 3 runs, except for the UFS with logging runs, many of which were run only once. Reports also include the average CPU utilization during the transaction phase, normalized by the average transaction rate (transactions/second) reported by the PostMark benchmark.
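For reference, a PostMark process configured as described above would typically be driven by a small command file along the following lines; this is a sketch only, and the work-area path is illustrative rather than taken from the test runs:

set location /mnt/postmark_fs/proc1    # work area for this process (illustrative path)
set number 20000                       # 20,000 files in the initial file set
set subdirectories 1000                # spread the files across 1,000 directories
set transactions 20000                 # perform 20,000 create/delete/read/append transactions
set buffering false                    # bypass standard C library I/O buffering
run                                    # execute the benchmark and report transactions/sec
quit

Each concurrent process is given its own location so that the file sets remain distinct, and the per-process throughputs are then summed to give the aggregate figure.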

FSCK
Machine configuration
Full fsck tests were performed on a Sun Enterprise 4500 system running Solaris 9. The server had 8 GB of RAM and eight Fibre Channel HBA cards. Four Sun StorEdge A5200 JBOD arrays, each with 22 18 GB 10,000 RPM Seagate disks, were connected to the system in a split-loop configuration such that each HBA had 11 disks connected, for a total of 88 disks. VERITAS File System 3.5 and VERITAS Volume Manager 3.5 were installed on the system.

Software configurations
The same file set was used to populate the file systems before running fsck. This file set was a subset of the data produced by a run of the SPECsfs97 benchmark, with 2,870,713 files in 88,772 directories totaling 72,641,776 KB (about 69.3 GB). File sizes range from 0 bytes to 1.35 MB, with a heavy concentration at power-of-two file sizes. Table 1 shows the file size distribution used in the 100 GB fsck tests.

File Size Range               Number of Files
Up to 4K                      1,990,612
>4K to 16K                    473,044
>16K to 64K                   242,107
>64K to 256K                  135,901
>256K to 1MB                  28,063
>1MB to 1.35 MB               981
About 31 MB (.tar.gz files)   5

Table 1: File Size Distribution for 100 GB Volume Full Fsck Tests

To avoid re-running SPECsfs97 to produce the file set each time, the files were archived into 5 .tar files, which were then compressed using gzip (each .tar.gz file representing one of the five top-level directories produced by the prior run of SPECsfs97). These 5 .tar.gz files were each about 31 MB, and are included among the files on which fsck was run. For a large file system test, a larger (though similar) file set was used. First, the five top-level SPECsfs97 directories were brought into a single .tar file which, when compressed using gzip, was 156 MB. Then, 10 copies of this bigger .tar.gz file were created. When uncompressed and extracted, the file set totals about 692 GB, with a size distribution that is summarized in Table 2.

File Size Range               Number of Files
Up to 4K                      19,906,120
>4K to 16K                    4,730,440
>16K to 64K                   2,421,070
>64K to 256K                  1,359,010
>256K to 1MB                  280,630
>1MB to 1.35 MB               9,810
About 156 MB (.tar.gz files)  10



Table 2: File Size Distribution for 900 GB Volume Full Fsck Tests

Test details
For the small file system tests, a RAID 1+0 (striped-mirror) volume of size 120 GB was used. VxVM was used to create the volume for VxFS testing, whereas SVM was used to create the volume for the UFS and UFS with logging testing. The VxVM volume was created as follows:
# vxassist –g testdg –o ordered make testvol 120g init=active layout=stripe-mirror ncol=40
The format command was used to create a 3.00 GB partition on each of the 80 disks. Multiple SVM commands were then used to create a stripe-mirror volume as follows:
# metainit d10 1 40 <list of 40 disks – 10 per controller>
# metainit d20 1 40 <list of 40 disks – 10 per controller>
# metainit d0 –m d10 d20
For the larger file system tests, a 900 GB volume was used. The volume was a RAID-0 (striped) volume on 80 disks. As in the small file system tests, VxVM was used to create the volume for VxFS testing, whereas SVM was used to create the same volume for testing UFS and UFS with logging. The following command was used to create the large volume using VxVM:
# vxassist –g testdg make testvol 1887434008 layout=stripe ncol=80
The format command was used to create an 11.25 GB partition on each of the 80 disks. The following SVM command was then used to create the striped volume:
# metainit d0 1 80 <list of 80 disks>
For each volume configuration, mkfs was used to create either a UFS or a VxFS file system. For the large file system test configuration, a file system of size 899 GB was created. All default options were used to create the file systems, except that in the case of VxFS the –o largefiles option was used to allow for files larger than 2 GB. After creating the file system on the volume, it was mounted using the mount command. Default mount options were used, except that the VxFS file system was mounted with the –o largefiles option, and for the UFS with logging tests the -o logging option was used. After mounting the file system, a script copied the .tar.gz files to the file system under test. For the large file system tests, 10 directories were created under the file system and the 10 bigger .tar.gz files (156 MB each) were copied into those directories, such that each .tar.gz file was in its own directory. The smaller file system tests utilized only 5 directories, and the 5 smaller .tar.gz files (31 MB each) were copied into those directories so as to have only one .tar.gz file in each directory. After uncompressing and extracting the tar files, the file system was unmounted and full fsck was run:
/bin/time fsck –F <fstype>



Note that this command performs a full check of the integrity of the file system. The /bin/time command reports the elapsed and CPU time.
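The population and measurement procedure described above can be summarized by a short script along the following lines. This is a sketch only: the mount point, staging path, and archive name are illustrative, and the UFS variants would substitute mkfs –F ufs and, where applicable, the -o logging mount option.

# Sketch of the large file system fsck test flow (illustrative paths).
mkfs -F vxfs -o largefiles /dev/vx/rdsk/testdg/testvol          # create the 899 GB VxFS file system
mount -F vxfs -o largefiles /dev/vx/dsk/testdg/testvol /mnt/fscktest
for i in 1 2 3 4 5 6 7 8 9 10; do                               # 10 directories, one archive each
    mkdir /mnt/fscktest/dir$i
    cp /stage/specsfs_fileset.tar.gz /mnt/fscktest/dir$i        # 156 MB archive (illustrative name)
    ( cd /mnt/fscktest/dir$i && gunzip -c specsfs_fileset.tar.gz | tar xf - )
done
umount /mnt/fscktest
/bin/time fsck -F vxfs -o full -y /dev/vx/rdsk/testdg/testvol   # timed full check (requested explicitly for VxFS)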

SPEC-SFS
Machine configuration
A Sun Fire V880 system with eight 750 MHz UltraSPARC™ III CPUs and 32 GB of memory was utilized as an NFS version 3 server. The storage subsystem employed for all tests was built on four Sun A5200 disk arrays. Each Sun A5200 array was configured with 22 18 GB 10,000 RPM Seagate Cheetah ST318203FC disk drives, for a total of 88 drives. Each A5200 was attached to the V880 via one QLogic QLA2300 2 Gb Fibre Channel host bus adapter. The V880 was also configured with 12 internal 36 GB 10,000 RPM Fibre Channel drives utilizing a built-in FC-AL interface. These drives were used for various overhead requirements such as a VxVM rootdg disk group and SVM metadb devices. A total of sixteen clients were used to generate the NFS workload. Fourteen of the NFS clients were Sun Microsystems Netra T1 systems, each with one 440 MHz UltraSPARC IIi CPU and 256 MB of RAM. The two remaining clients were Sun Ultra 5 workstations with one 400 MHz UltraSPARC IIi CPU and 128 MB of RAM. One Cisco 100/1000BaseT network switch (Catalyst 3500XL) was employed to create two private client networks. Eight clients were attached to each private network interface. The NFS server was configured to a maximum of 1600 NFS threads, in line with a recommendation in Cockcroft and Pettit's book, Sun Performance and Tuning, Second Edition.

Software configurations
The VxFS SFS runs utilized VxVM and the UFS SFS runs utilized SVM. In each case the volume layouts were configured to be identical regardless of the volume manager being used. Performance tests were run for RAID 0 and RAID 1+0 volume configurations. In the tests utilizing RAID 0 volumes, each volume manager was configured with eleven volumes, eight disks per volume. In the RAID 1+0 configuration, each volume manager was configured with five volumes, eight mirrored disks per volume, for sixteen disks in each RAID 1+0 volume. In both cases the disks were configured as an 8-way stripe, with two disks from each controller comprising the striped volume. The VxFS mirrored volumes were configured as RAID 1+0 and were created using the vxassist command. The UFS mirrored volumes were created using the metainit command. For each configuration the same sets of disks were used. For example, the sets of disks used as mirrors in the VxVM configuration were also used to create mirrors in the SVM configuration.

Test details
The following operating system and VERITAS software releases were used in testing:

• Solaris 9 (64-bit) GA
• VxFS 3.5
• VxVM 3.5
• Cisco IOS Release 12.0(5)XW

The VERITAS stack tests used the following command lines for volume and file system creation. (The placeholders within angle brackets, < >, were replaced with specific values for each volume.)
To create a VxVM RAID 0 volume:
vxassist -g <disk group> make <volume name> 24g layout=stripe ncol=8 <disk names>
Below is a typical VxFS file system creation command line used for all tests:
/usr/sbin/mkfs –F vxfs <raw device>



To create an SVM RAID 0 volume:
/sbin/metainit <metadevice> 1 8 <disk names>
Below is the command line used for UFS file system creation for all tests:
/usr/sbin/mkfs –F ufs <raw device>
The VxVM mirrored volume creation scripts used the following command line:
vxassist -g <disk group> make <volume name> 48g layout=stripe-mirror ncol=8 <disk names>
The SVM mirrored tests used the following command lines to create the mirrored volumes:
/sbin/metainit <submirror 1> 1 8 <disk names>
/sbin/metainit <submirror 2> 1 8 <disk names>
/sbin/metainit <mirror> -m <submirror 1>
/usr/sbin/metattach <mirror> <submirror 2>

OLTP
Machine configuration
The OLTP benchmark tests were conducted on a Sun Microsystems Ultra Enterprise 10000 domain with 13 processors and 10 GB of memory. The UE 10000 system was attached to six 10-bay CLARiiON DAE (Disk Array Enclosure) JBOD racks via six Sun™ StorEdge™ SBus FC-100 host adapters. Each CLARiiON DAE contained ten 18 GB 10,000 RPM Seagate drives. The following software releases were used in the tests:
• VERITAS Database Edition 3.5 for Oracle
• Oracle 9iR2 (release 9.2.0.1, 64-bit)
• Solaris 9 (release 4/02, 64-bit)

Software configuration
The file systems under test were configured over one 58-way striped volume of 200 GB built on 58 disks on three controllers. All file systems were built on this striped volume. The Oracle redo logs were created on a single drive in one of the CLARiiON racks. For the raw I/O configuration, all the Oracle files except the redo logs were striped 58-way on the same set of drives to ensure equal drive usage. VERITAS Volume Manager was used to create the striped volumes for all configurations. The size of the database used for the test is 130 GB, with a total of 81 Oracle data files, including redo logs, indexes, rollback segments, and temporary and user tablespaces. The size of the database is that of a fully scaled TPC-C database with a scale factor of 1,000 warehouses.

Test details
The benchmark tests were conducted with 1 to 8 GB of Oracle buffer cache and in eight I/O configurations (representative commands for selecting several of these file system I/O modes are sketched after the list):
• RAW – uses the VERITAS Volume Manager raw volumes directly,
• QIO – uses the Quick I/O feature of VERITAS File System,
• CQIO – uses the Cached Quick I/O feature of VERITAS File System,
• ODM – uses the Oracle Disk Manager I/O feature of VERITAS File System [3],
• DIO – uses the VERITAS File System direct I/O mode [4],

[3] Oracle Disk Manager is a disk management interface that enhances file management and disk I/O throughput in a database environment. The ODM Application Programming Interface (API) is defined by Oracle and was first introduced in Oracle9i. VERITAS Extension for Oracle Disk Manager provides a dynamically loadable library and a kernel driver to support the ODM API in Oracle9i.



• BIO – uses the default VERITAS File System buffered I/O mode,
• UDIO – uses the optional UNIX File System direct I/O mode [5], and
• UBIO – uses the default UNIX File System buffered I/O mode.
The Oracle block size used for all these tests was 2K. During the tests, Oracle statistics, Volume Manager statistics, Quick I/O statistics, and ODM I/O statistics were gathered in addition to the benchmark throughput numbers.
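As a rough illustration of how the buffered and direct I/O modes above are selected, the following mount commands are a sketch based on the options cited in footnotes [4] and [5] below; the mount points and device names are illustrative only and are not the values used in these tests:

# VxFS buffered I/O (BIO): default VxFS mount
mount -F vxfs /dev/vx/dsk/oradg/oravol /oradata

# VxFS direct I/O (DIO): convosync=direct mount option (see footnote [4])
mount -F vxfs -o convosync=direct /dev/vx/dsk/oradg/oravol /oradata

# UFS buffered I/O (UBIO): default UFS mount
mount -F ufs /dev/md/dsk/d0 /oradata

# UFS direct I/O (UDIO): forcedirectio mount option (see footnote [5])
mount -F ufs -o forcedirectio /dev/md/dsk/d0 /oradata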

RESULTS

POSTMARK

RAID-0
The results of this series of benchmarks show that VERITAS Foundation Suite, with and without QuickLog, scales well to multiple processes and retains its margin over UFS and UFS with logging. UFS with logging does not scale, but rather consistently decreases in performance as concurrent PostMark processes are added. The PostMark performance of UFS, UFS with logging, VxFS, and VxFS with QuickLog is shown in Table 1.

Concurrent PostMark   UFS       UFS with logging   VxFS      VxFS with QuickLog   QuickLog Improvement
Processes             ops/sec   ops/sec            ops/sec   ops/sec              over UFS+logging
1                     78.4      108.5              788.0     764.1                604%
2                     140.6     98.8               946.2     989.1                901%
4                     235.9     86.5               1122.5    1237.8               1331%
6                     303.9     78.9               1152.4    1293.6               1540%
8                     346.1     73.3               1108.5    1212.7               1554%
10                    368.1     69.4               1048.8    1212.6               1648%
12                    388.8     66.9               1056.1    1179.5               1664%
14                    407.7     65.2               1066.0    1236.1               1795%
16                    423.6     63.1               1088.5    1138.4               1704%
Average               299.3     79.0               1041.9    1140.4               1416%

Table 1: PostMark Performance Improvements for VxFS, Compared to UFS and UFS with logging.
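For clarity, the improvement figures in the last column are simply the ratio of the QuickLog and UFS-with-logging throughputs, expressed as a percentage gain. For example, at a single PostMark process, 764.1 / 108.5 ≈ 7.04, an improvement of roughly 604%; at 14 processes, 1236.1 / 65.2 ≈ 18.96, roughly 1795%.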

[4] VxFS DIO mode uses the mount option "–o convosync=direct."
[5] UFS DIO mode uses the mount option "-o forcedirectio."



Figure 1 shows the PostMark results in chart form, illustrating the degree to which the file systems scale to multiple concurrent PostMark processes. Figure 2 shows the CPU utilization per transaction.

[Figure 1 chart: Concurrent PostMark average transactions per second on a RAID-0 volume on Solaris 9 GA; x-axis: number of processes (0 to 16), y-axis: average transactions/sec (0 to 1400); series: VxFS, UFS, UFS-logging, VxFS+QuickLog]

Figure 1: PostMark Performance with Increasing Concurrency. Note the performance of UFS with logging, where the aggregate throughput of all PostMark processes actually decreases as additional processes are added.



[Figure 2 chart: Concurrent PostMark percent CPU utilization per average throughput on a RAID-0 volume on Solaris 9 GA; x-axis: number of processes (0 to 16), y-axis: % CPU utilization per average throughput (0% to 9%); series: VxFS, UFS, UFS-logging, VxFS+QuickLog]

Figure 2: CPU utilization with Increasing Concurrency. Note the CPU consumption of VxFS and VxFS with QuickLog is comparable to UFS and much less than UFS with logging.

RAID 1+0
The results of this series of benchmarks show VxFS (with and without QuickLog) outperforming UFS and UFS with logging. VxFS takes better advantage of parallel reads from mirroring, while the performance of UFS with logging decreases with each additional concurrent PostMark process. VxFS with QuickLog outperforms UFS with logging by at least 762%, to a maximum of 1950%. VxFS also maintains a lead over UFS at all tested concurrencies. Table 2 shows the performance of UFS, UFS with logging, VxFS, and VxFS with QuickLog, as measured in aggregated PostMark transactions per second.



Figure 3 compares the VxFS with QuickLog, VxFS, UFS, and UFS with logging runs of the concurrent PostMark test. Figure 4 shows the CPU utilization per transaction for the RAID 1+0 runs.

[Figure 3 chart: Concurrent PostMark average transactions per second on a RAID 1+0 volume on Solaris 9 GA; x-axis: number of processes (0 to 16), y-axis: average transactions/sec (0 to 1400); series: VxFS, UFS, UFS-logging, VxFS+QuickLog]

Figure 3: PostMark Performance with Increasing Concurrency. The performance of UFS with logging actually decreases as additional processes are added.



[Figure 4 chart: Concurrent PostMark percent CPU utilization per average throughput on a RAID 1+0 volume on Solaris 9 GA; x-axis: number of processes (0 to 16), y-axis: % CPU utilization per average throughput (0% to 16%); series: VxFS, UFS, UFS-logging, VxFS+QuickLog]

Figure 4: CPU utilization with Increasing Concurrency. VxFS and VxFS with QuickLog use CPU more efficiently per transaction than UFS with logging.



RAID-5
Table 3 shows the performance comparison between VxFS, UFS, and UFS with logging. The volume used for this comparison is a RAID-5 volume with 21 columns (20 data plus 1 parity). As can be seen from the table, the VxVM/VxFS stack performs 798% better on average than the SVM/UFS with logging stack.

Concurrent PostMark   UFS       UFS with logging   VxFS      VxFS with QuickLog   QuickLog Improvement
Processes             ops/sec   ops/sec            ops/sec   ops/sec              over UFS+logging
1                     28.1      44.5               270.5     235.4                429%
2                     46.1      39.9               304.0     310.5                679%
4                     67.2      35.5               331.5     354.4                898%
6                     81.6      32.7               301.9     343.5                950%
8                     90.0      31.1               293.8     361.4                1061%
10                    94.3      29.7               307.7     315.0                959%
12                    100.0     28.5               266.9     236.8                731%
14                    105.1     27.8               254.6     243.4                775%
16                    110.1     27.0               246.5     216.2                701%
Average               80.3      33.0               286.4     290.7                798%

Table 3: PostMark Performance Improvements for VxFS with QuickLog and VxFS, Compared to UFS and UFS with logging. Figure 5 shows that VxVM/VxFS with QuickLog maintains the lead over SVM/UFS and SVM/UFS with logging in terms of PostMark average throughput per second. At its peak, VxVM/VxFS with QuickLog is performing at more than 11 times the performance of SVM/UFS with logging.



[Figure 5 chart: Concurrent PostMark average throughput per second on a RAID-5 volume on Solaris 9 GA; x-axis: number of processes (0 to 16), y-axis: average transactions/sec (0 to 400); series: VxFS, UFS, UFS-logging, VxFS+QuickLog]

Figure 5: PostMark Performance with Increasing Concurrency using RAID-5 volumes. VxFS and VxFS with QuickLog outperform UFS as well as UFS with logging by a wide margin.



[Figure 6 chart: Concurrent PostMark percent CPU utilization per average throughput on a RAID-5 volume on Solaris 9 GA; x-axis: number of processes (0 to 16), y-axis: % CPU utilization per average throughput (0% to 14%); series: VxFS, UFS, UFS-logging, VxFS+QuickLog]

Figure 6: CPU utilization with Increasing Concurrency using RAID-5 volumes. VxFS and VxFS with QuickLog consume less CPU per transaction than UFS with logging.

FSCK BENCHMARK
This section describes the results obtained in the benchmark comparing the VERITAS stack (VxFS on VxVM) and the Sun stack (UFS, with and without logging, on SVM). Figure 1 shows the full fsck times of the smaller file system test for both stacks. The graph shows the elapsed time (the actual time it took for the command to finish) as well as the amount of CPU time, divided into user and system components. Full fsck for VxFS finished in 7.3 minutes, compared to 110.1 minutes for UFS and 109.4 minutes for UFS with logging. VxFS is about 1400% faster than UFS and about 1396% faster than UFS with logging. In other words, VxFS will finish sanity checking a file system of similar size in about 1/15th of the time it takes the UFS file system to do the same. This is very significant since it directly translates into much less downtime after a system crash. As the graph in Figure 1 shows, VxFS consumes only 14% of the CPU cycles consumed by UFS or UFS with logging.
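To make the arithmetic explicit, the elapsed-time ratio is 110.1 / 7.3 ≈ 15.1, which is the roughly 1400% speed-up quoted above, and 109.4 / 7.3 ≈ 15.0 against UFS with logging.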



[Figure 1 chart: Full fsck time on the E4500 (120 GB RAID 1+0 volume, 69 GB used), in minutes, broken into real, user, and system components; UFS: 110.1 real, 7.3 user, 2.3 sys; UFS-logging: 109.4 real, 7.3 user, 2.3 sys; VxFS: 7.3 real, 1.2 user, 0.2 sys]

Figure 1: Graph showing superior VxFS performance over UFS and UFS with logging. Note that the full fsck on the VxFS file system ran about 1400% faster than the full fsck on UFS or UFS with logging.

The graph in Figure 2 shows the full fsck times for the larger file system of size 899 GB. The graph shows the performance edge that VxFS retains over UFS and UFS with logging, even on this large a file system. VxFS finishes the full fsck on this volume in 87.2 minutes, compared to the 436.3 and 451 minutes that full fsck takes for UFS and UFS with logging respectively. VERITAS is more than 400% faster than the UFS solution. In the process of the full fsck, VxFS consumes only 52% of the CPU time used by UFS or UFS with logging.

[Figure 2 chart: Full fsck time on the E4500 (900 GB, 80-column RAID 0 volume), in minutes, broken into real, user, and system components; UFS: 436.3 real, 18.5 user, 9.2 sys; UFS-logging: 451.0 real, 19.1 user, 9.2 sys; VxFS: 87.2 real, 13.1 user, 1.6 sys]

Figure 2: Full fsck performance of VxFS, UFS, and UFS with logging on the large file system.



SPEC-SFS
Table 1 shows that the VRTS stack provided 117% to 196% greater peak throughput than the SUN stack with logging.

Configuration   Protocol   VRTS stack improvement over SUN stack+logging
RAID 0          TCP V3     117%
RAID 0          UDP V3     129%
RAID 1+0        TCP V3     182%
RAID 1+0        UDP V3     196%

Table 1: Increase in SPECsfs97_R1 Peak Throughput Obtained by the VRTS Stack.

In the case of RAID 0, the SUN stack's Overall Response Time is 31% higher than the VRTS stack's, and the SUN RAID 1+0 stack's is 55% higher than the VRTS RAID 1+0 stack's.

Configuration            UDP V3 Max Throughput (NFS Ops/sec)   Overall Response Time
VRTS (RAID 0)            20,028                                4.7
SUN+logging (RAID 0)     8,732                                 6.1
VRTS (RAID 1+0)          14,999                                5.8
SUN+logging (RAID 1+0)   5,074                                 6.1

Table 2: UDP V3 Max Throughput and Overall Response Time.

The NFS UDP V3 results above show the VRTS stack again exceeding the SUN stack's throughput for both volume configurations. The Overall Response Times for the UDP V3 results show that the VRTS stack achieves better results than the SUN with logging stack: the SUN RAID 0 Overall Response Time is 32% greater than that of VRTS RAID 0, and SUN RAID 1+0 is 36% greater than VRTS RAID 1+0.



Detailed Results
The results show that VRTS performs significantly better with respect to throughput than the SUN with logging stack, for both the RAID 0 and RAID 1+0 configurations. Table 3 and Table 4 show detailed results of each benchmark run with different UFS and VxFS mount options and volume configurations. VRTS provides peak throughput that is 117% to 129% greater than SUN with logging in the RAID 0 volume configuration. The RAID 1+0 volume configuration shows the VRTS stack achieving a peak throughput that is 182% to 196% greater than SUN with logging. Despite enabling a higher load on the server, the VRTS stack provided an Overall Response Time (ORT) that is faster than the SUN with logging stack in both the RAID 0 and RAID 1+0 cases. (The ORT provides a measurement of how the system responds over the entire range of tested loads.) In other words, although the VRTS stack is able to take on a much greater workload (as measured by throughput), it still consistently provides a faster turnaround time for clients (as measured by ORT). A comparison of the RAID 0 and RAID 1+0 peak throughput and peak response time results shows that the SUN with logging stack incurs a dramatic decrease in throughput in the RAID 1+0 volume configuration. Peak throughput is essentially halved for both TCP V3 and UDP V3 when compared to the RAID 0 SVM/UFS with logging results. In the case of RAID 1+0 TCP V3 SVM/UFS with logging, the peak response time is three times greater than the RAID 0 result.

RAID 0 Volumes

TCP V3, VRTS          TCP V3, SUN+logging     UDP V3, VRTS          UDP V3, SUN+logging
Ops/Sec   Msec/Op     Ops/Sec   Msec/Op       Ops/Sec   Msec/Op     Ops/Sec   Msec/Op
1,797     2.2         688       3.3           1,796     3.8         688       4.0
3,622     2.5         1,383     3.1           3,628     3.3         1,384     3.7
5,480     2.8         2,094     3.3           5,483     3.8         2,103     4.5
7,244     3.2         2,814     3.2           7,248     3.9         2,817     4.0
9,029     3.4         3,501     3.9           9,016     4.0         3,508     5.5
10,952    3.8         4,237     4.5           10,948    4.1         4,235     5.4
12,778    4.3         4,976     5.2           12,791    4.5         4,992     5.7
14,558    4.9         5,691     5.4           14,573    5.0         5,678     5.9
16,357    5.9         6,369     6.4           16,417    5.7         6,377     7.0
18,219    7.9         7,034     7.8           18,259    6.7         7,060     8.0
18,453    10.2        7,735     12.6          20,028    9.1         7,757     11.5
-         -           8,456     20.7          -         -           8,427     16.3
-         -           8,512     -             -         -           8,732     -

Table 3: SPECsfs97_R1 Statistics for RAID 0 Volume Configuration. In the peak throughput (Ops/Sec) columns, VRTS achieved 117% greater throughput than SUN with logging for the TCP V3 results. The UDP V3 results show VRTS achieving 129% greater throughput than SUN with logging.
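These percentages follow directly from the tabulated peaks. For RAID 0: TCP V3, 18,453 / 8,512 ≈ 2.17, an improvement of about 117%; UDP V3, 20,028 / 8,732 ≈ 2.29, about 129%. For the RAID 1+0 results in Table 4: TCP V3, 14,743 / 5,232 ≈ 2.82, about 182%; UDP V3, 14,999 / 5,074 ≈ 2.96, about 196%.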



RAID 1+0 Volumes

TCP V3, VRTS          TCP V3, SUN+logging     UDP V3, VRTS          UDP V3, SUN+logging
Ops/Sec   Msec/Op     Ops/Sec   Msec/Op       Ops/Sec   Msec/Op     Ops/Sec   Msec/Op
1,481     2.5         487       2.6           1,483     3.3         478       8.1
3,008     2.8         1,022     3.2           3,010     3.5         1,022     4.5
4,572     3.6         1,544     3.8           4,572     4.6         1,481     4.6
6,075     4.2         2,073     4.3           6,085     5.0         2,005     4.9
7,498     4.9         2,568     4.8           7,518     5.5         2,515     5.3
9,089     5.4         3,109     5.5           9,086     5.8         3,001     6.7
10,612    6.2         3,627     6.4           10,615    6.6         3,514     6.5
12,178    7.2         4,170     6.6           12,181    7.4         4,046     7.6
13,611    8.7         4,702     14.4          13,653    8.8         4,577     8.6
14,743    11.2        5,232     -             14,999    10.9        5,074     -

Table 4: SPECsfs97_R1 Statistics for RAID 1+0 Volume Configuration. In the peak throughput (Ops/Sec) columns, VRTS achieved 182% greater throughput than SUN with logging under the TCP V3 protocol. Under the UDP V3 protocol, VRTS realized a 196% improvement over SUN with logging.



[Figure 1 chart: SPECsfs97_R1 TCP v3 performance, SVM/UFS vs. VRTS 3.5 stacks, 11 RAID 0 volumes, 8-way stripe, 8 disks/volume; x-axis: throughput (NFS Ops/sec, 0 to 20,000), y-axis: response time (millisec/op, 0 to 25); series: VRTS, SUN+logging]

Figure 1 and Figure 2 illustrate the throughput and response time limits for the file systems in the RAID0 volume configuration.

[Figure 2 chart: SPECsfs97_R1 UDP v3 performance, SVM/UFS vs. VRTS 3.5 stacks, 11 RAID 0 volumes, 8-way stripe, 8 disks/volume; x-axis: throughput (NFS Ops/sec, 0 to 25,000), y-axis: response time (millisec/op, 0 to 18); series: VRTS, SUN+logging]



[Figure 3 chart: SPECsfs97_R1 TCP v3 performance, SVM/UFS vs. VRTS 3.5 stacks, 5 RAID 1+0 volumes, 8-way stripe, 16 disks/volume; x-axis: throughput (NFS Ops/sec, 0 to 16,000), y-axis: response time (millisec/op, 0 to 16); series: VRTS, SUN+logging]

Figure 3 and Figure 4 illustrate the throughput and response time limits for the file systems in the RAID 1+0 volume configuration.

[Figure 4 chart: SPECsfs97_R1 UDP v3 performance, SVM/UFS vs. VRTS 3.5 stacks, 5 RAID 1+0 volumes, 8-way stripe, 16 disks/volume; x-axis: throughput (NFS Ops/sec, 0 to 16,000), y-axis: response time (millisec/op, 0 to 12); series: VRTS, SUN+logging]



The graphs show that the VRTS stack achieves much greater throughput than the SUN with logging stacks. In the case of RAID 0 utilizing the UDP protocol, a peak throughput of over 20,000 NFS Ops/sec is achieved with the VRTS stack. The SUN stack also achieves its highest peak throughput with the UDP protocol, but it is significantly lower, at 13,408 NFS Ops/sec. The slowest configuration is SUN with logging, which attains a little over 5,000 NFS Ops/sec.

OLTP TESTS
The OLTP benchmark used in this study is commonly used to evaluate the database performance of specific hardware and software configurations. By normalizing the system configuration and varying the file system I/O configuration, it was possible to study the impact of various storage layouts on database performance with this benchmark.

Database performance
The OLTP performance measurements illustrate that the Quick I/O (qio) and Oracle Disk Manager (odm) features enable the VERITAS Database Edition for Oracle to achieve performance equal to raw partition configurations. As previous studies reported, this performance superiority remains the same no matter which Oracle release (32-bit or 64-bit Oracle) or which Solaris 8/9 flavor (32-bit or 64-bit) is used. [6] Figure 1 shows the plot of database throughput relative to Raw I/O. Figure 1 shows that the database throughput with VERITAS Database Edition for Oracle's Quick I/O (qio) matches closely with that of Raw I/O. This reaffirms the excellence of VERITAS in achieving raw-I/O-equivalent performance while still providing file system manageability to Oracle databases. When 32-bit Oracle is used, only up to 4 GB of operating system memory can be allocated to the Oracle SGA. For large memory systems, VERITAS Cached Quick I/O is able to utilize the memory beyond the 4 GB Oracle SGA limit as a second-level cache for Oracle databases. The second-level cache improves Oracle read performance when data blocks are not cached in the Oracle buffer cache but are present in the file system page cache. The performance advantage of Cached Quick I/O from VERITAS (cqio) is shown in Figure 1.
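As background on how the Quick I/O and Cached Quick I/O configurations are typically prepared, the following is a rough sketch using the standard Database Edition utilities; the file name, size, and mount point are illustrative and are not the values used in these tests:

# Create a 2 GB Quick I/O file for an Oracle data file (qiomkfile builds the
# preallocated file plus the ::cdev:vxfs: symbolic link that Oracle opens).
/opt/VRTSvxfs/sbin/qiomkfile -s 2g /oradata/users01.dbf

# Enable Cached Quick I/O on the file system so Quick I/O reads can also be
# served from the file system page cache (the second-level cache behind the SGA).
vxtunefs -s -o qio_cache_enable=1 /oradata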

[Figure 1 chart: OLTP database throughput of the qio, cqio, and odm configurations as a percentage of Raw I/O performance (RAW = 100%), plotted against Oracle buffer cache (SGA) size from 1 GB to 8 GB; y-axis: percentage of RAW performance, 0% to 160%]

Figure 1: VERITAS Database Edition for Oracle achieves performance equal to raw partition configurations, and greater-than-raw performance with the Cached Quick I/O feature.

[6] See the Performance Briefs of VERITAS Database Edition 2.1.1, 2.2, and 3.0 for Oracle on veritas.com.



Database throughput
The primary performance metric used in this brief is a throughput metric that measures the number of transactions completed per minute (TPM). The transaction mix in this OLTP benchmark represents the processing of an order as it is entered, paid for, checked, and delivered, following the model of a complete business activity. The TPM metric is, therefore, considered a measure of business throughput. Table 1 lists the database throughput of the benchmark tests, driven by 50 batch users, in different I/O configurations and at various Oracle buffer cache sizes.

Database Throughput in Transactions per Minute (TPM)

                            Size of Oracle Buffer Cache (SGA)
I/O Configuration           1GB      2GB      3GB      4GB      5GB      6GB      7GB      8GB
Raw I/O                     8,283    10,424   11,825   12,777   13,434   13,880   14,181   14,405
Quick I/O                   8,229    10,398   11,784   12,742   13,413   13,884   14,204   14,339
Cached Quick I/O            11,109   12,292   12,789   13,131   13,003   13,309   13,480   13,370
Oracle Disk Manager I/O     8,028    10,019   11,201   12,163   12,841   13,223   13,572   13,726
VxFS buffered I/O           2,223    2,518    2,447    2,431    2,909    4,338    5,038    4,487
UFS buffered I/O            1,802    1,525    804      900      1,069    1,015    1,111    1,066

Table 1 - Database throughput of the benchmark tests.

Table 1 shows the throughput of VERITAS File System exceeding the throughput of UFS. Greater performance gains are seen when VxFS is used with the Quick I/O and Cached Quick I/O features of VERITAS Database Edition.

Snapshots
File system cloning technologies have become popular for backing up and restoring Oracle databases. VERITAS File System provides a unique facility for creating a persistent image of a file system (i.e., cloning a file system) at an exact point in time, called a Storage Checkpoint™. A VERITAS Storage Checkpoint™, a feature of the Database Edition, provides a low-overhead solution for block-level incremental storage backup and rollback. In Solaris 9, the UNIX File System provides a similar facility, FSSNAP, for creating a snapshot of a mounted file system. Unlike Storage Checkpoints, snapshots are read-only and are not persistent across system reboots. These limitations make UFS snapshots suitable only for file system backup operations. The OLTP benchmark tests show the performance trade-off associated with these snapshot technologies in an update-intensive OLTP database environment. To study the performance impact of file system cloning in a database environment, benchmark tests were conducted with a checkpoint or snapshot created right before the benchmark was started. The following table shows the variation in database throughput for the clone tests.

                        Database Throughput in TPM
I/O Configuration       Baseline   With VxFS Storage Checkpoint   With UFS Snapshot (FSSNAP)
VxFS Cached Quick I/O   7,979      6,285                          -
VxFS Quick I/O          7,592      5,977                          -
VxFS ODM                7,335      5,795                          -
UFS CDIO                7,536      -                              1,171

Table 2 – Throughput comparison of different file system cloning technologies.
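For context, the two cloning mechanisms compared in Table 2 are driven by different administrative commands; the following is a rough sketch with illustrative mount points and backing-store paths, not the exact commands used in the tests:

# VxFS Storage Checkpoint: a persistent point-in-time image of the file system,
# created through the VxFS checkpoint administration utility.
fsckptadm create ckpt_before_run /oradata

# Solaris 9 UFS snapshot (FSSNAP): a read-only, non-persistent snapshot backed
# by a separate backing-store file; the command prints the snapshot device.
fssnap -F ufs -o bs=/var/tmp/oradata_bs /oradata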



The TPM results in Table 2 show the significant performance advantage that VERITAS Database Edition with Storage Checkpoint has over the Sun stack with FSSNAP. VERITAS provides over 5 times the throughput of the Sun stack in this benchmark test. It is also notable that UFS snapshots cause a significant performance degradation, about 85%, in this benchmark.
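Both figures follow from Table 2: with a clone in place, VxFS Cached Quick I/O sustains 6,285 TPM against 1,171 TPM for UFS with FSSNAP (a factor of about 5.4), and the UFS throughput drops from a 7,536 TPM baseline to 1,171 TPM, a reduction of roughly 85%.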

CONCLUSION
VERITAS Foundation Suite™ and the VERITAS Database Edition™ solutions combine the industry-leading technologies of VERITAS Volume Manager™ and VERITAS File System™ to address the increasing costs of managing mission-critical data and disk resources in Direct Attached Storage (DAS) and Storage Area Network (SAN) environments. These storage management solutions deliver optimal performance tuning and sophisticated management capabilities to ensure continuous availability of your mission-critical data. The data from the industry-benchmark results in this paper prove the power of these solutions.

The PostMark tests show the performance and CPU utilization of small-file updates. These results indicate that VERITAS Foundation Suite with QuickLog [7] is 15 times faster than the Sun stack with logging. This means that an application that requires only 5 minutes of processing time with VERITAS can demand over 1 hour to process without VERITAS. In addition, the performance has been proven to scale as concurrent PostMark processes are increased.

The SPEC-SFS test results support the findings of the PostMark tests, confirming the robustness of the VERITAS solution. The SPEC-SFS file server benchmark, which evaluates file system performance, shows VERITAS throughput to be greater than that of the SUN stack, and shows the VERITAS stack's ability to provide significantly faster response times to client requests.

The availability of the VERITAS stack is demonstrated through the results of the fsck benchmark tests, which measure file system check and recovery times. After an unexpected system failure, the VERITAS File System recovers between 5 and 15 times faster than UFS with logging, making the VERITAS File System mandatory for any IT environment with high availability requirements. In addition to superior functionality, VERITAS Foundation Suite improves the availability of Solaris 9 by allowing online administration, without interrupting access to data while these tasks are performed.

Similar performance, availability, and scalability benefits can be derived in a database environment, as demonstrated by the results of the OLTP tests. The OLTP performance measurements illustrate that the VERITAS Database Edition has performance equal to raw partition configurations. The results also show throughput for VxFS to be up to four times greater than UFS in the database environment. When comparing database throughput of file system cloning in a database environment, VERITAS is over 5 times faster than the Sun stack in an update-intensive OLTP database environment. Organizations can enjoy the performance, availability, and, most importantly, manageability benefits of using VERITAS File System in their database environment while achieving robust performance comparable to that of raw partitions. VERITAS' ability to deliver a solution that offers the most robust performance and throughput, and that can scale as an organization grows, is unmatched.

[7] VERITAS QuickLog™ is a feature that comes with VERITAS Foundation Suite™ and enhances VERITAS File System performance by eliminating the time that a disk spends seeking between the log and data areas of VxFS. Although QuickLog can improve file system performance, VxFS does not require QuickLog to operate effectively.

