White Paper. Cloud Benchmarking

White Paper Cloud Benchmarking A Case Experience in Collaborative Solutioning This paper describes a leading IT services provider’s experience in clo...
Author: Edwina Daniel
1 downloads 1 Views 536KB Size
White Paper

Cloud Benchmarking A Case Experience in Collaborative Solutioning This paper describes a leading IT services provider’s experience in cloud benchmarking. The assessment was conducted, for a Latin America Telecom customer, in collaboration with a leading organization for benchmarking cloud infrastructure. The paper focuses on the cloud benchmarking strategy adopted for the customer’s different cloud configurations. Numerous benchmarking test suites were also executed. The customer’s cloud performance was measured against multiple metrics including analyzing comparatively their cloud performance against other cloud vendors. The paper concludes with a note on the importance of benchmarking cloud infrastructure and its relevance in enabling wider cloud adoption.

About the Authors Senthil Pandian Senthil Pandian is an IT Infrastructure consultant working with TCS'Global Consulting Practice - Infrastructure Solution Group (GCP - IS). He has extensive experience in IT Infrastructure consulting, Security consulting, Datacenter design, Solution Architecting, Project management and Test Management. Worked extensively in locations across the globe are United States, United Kingdom, Canada, Latin America, Singapore, UAE and Saudi Arabia with Telecom, Banking, Insurance, and Retail, Government and Transport business customers. He has a bachelor degree in Business Administration from a reputed institute in India. Dr. C. Parthiban Dr. C. Parthiban is a Solution Manager working with TCS’s Global Consulting Practice Infrastructure Solutions group. He has 15 years of experience covering a wide range of areas including IT strategy, Infrastructure Optimization, Architecture Solutions, Systems Integration and leading large consulting engagements. He holds a Doctorate in Applied Science from a premier university in Chennai, India. Kumar Anand Kumar Anand is a Business Analyst working with TCS’ Global Consulting Practice - IT Infrastructure Group. He has extensiveexperience in IT strategy, infrastructure optimization, architecture solutions, system integration in large consulting engagements. He has consulted with clients across United States, Switzerland, Poland and India and has a Master’s Degree in Business Administration from a premier business school in India.

Acknowledgment The authors are grateful to Mr. C.S.R. Krishnan, for encouraging us through the process of writing this paper and providing valuable suggestions that have helped us draft this paper. 2

Table of Contents 1.

Introduction

4

2.

Benchmarking Strategy

4

3.

Benchmark Metrics

7

4.

Observations

9

5.

Conclusion

12

3

Introduction This section gives a brief overview of our customer, their requirements, and our collaborative partner for this engagement. A. Customer’s Profile and Cloud Infrastructure The customer is a Latin America-based Telecom company that offers information technology solutions through network managed services. Their portfolio comprises value added services, which include data center, cloud and security applications, and industry-focused solutions for Healthcare, Education, Government and Finance. In addition to this, they also offer consulting services and manage complex technology intensive projects. Through these services they cater to ~7,500 business customers. The customer aspired to expand their Cloud Computing Enterprise C-Computing service portfolio. In order to do this, they wanted to launch the Infrastructure as a Service (IaaS) offering, based on VBlockTM Infrastructure Platforms, to their customers as a value added service. The cloud technology stack was made up of Cisco’s Unified Computing System to run virtual machines, EMC’s Storage and Backup software and hardware, and VMware’s virtualization software, integrated within the VBlockTM platform, and vCloud Director for provisioning of cloud applications. In this context, the customer wanted to conduct an assessment that would benchmark their cloud performance against other similar profile vendors in the market. B.Our Collaborative Partner Our partner is a leading organization in benchmarking well-known cloud providers across various cloud services including IaaS. Over the years, our partner has run these benchmarks on thousands of cloud server instances running on multiple cloud providers and has developed and evolved techniques for analyzing the results and providing comparative analysis between providers and instance types. We collaborated with them to conduct a private benchmarking assessment of the customer’s cloud.

Benchmarking Strategy This section describes the benchmarking strategy adopted for successful assessment of the customer’s cloud performance. A multi-pronged strategy was devised to look at multiple facets of benchmarking. They included the approach to be followed, cloud configurations to be considered, categories to be evaluated, metrics to be measured, tests to be executed and vendors against whose cloud architecture and performance the customer’s cloud architecture and performance would be compared. Subsequent sections detail the above-mentioned points. A.Benchmarking Strategy 1)Assessment Approach The approach that was followed for benchmarking is illustrated in Figure 1 below.

4

The testing methodology involved two iterations (loaded and unloaded testing) in two different environments (unutilized and utilized environments) to perform an extensive benchmarking exercise and provide enough data to compare it with other cloud vendors. The following points describe the seven steps involved in benchmarking the customer’s cloud performance.

5

3

1 Create 6 VM Instances for Testing Create 1 GB VM

Create 8 GB VM

Create 2 GB VM

Create 16 GB VM

Create 4 GB VM

Create 32 GB VM

2

Execute Iteration - 1 Benchmark Tests

Process Benchmark Results and Document

Run separate benchmark suites for each category 1 GB VM

Run separate benchmark suites for each category 8 GB VM

n

Run separate benchmark suites for each category 2 GB VM

Run separate benchmark suites for each category 16 GB VM

n

Run separate benchmark suites for each category 4 GB VM

Run separate benchmark suites for each category 32 GB VM

4

n n n

6 Compare Scores with Other Cloud Vendors n n

Install Benchmark Suites on the Individual VMs Install benchmark suites on 1 GB VM

Install benchmark suites on 8 GB VM

Install benchmark suites on 2 GB VM

Install benchmark suites on 16 GB VM

Install benchmark suites on 4 GB VM

Install benchmark suites on 32 GB VM

Execute Iteration - 2 Benchmark Tests Run separate benchmark suites for each category 1 GB VM

Run separate benchmark suites for each category 8 GB VM

Run separate benchmark suites for each category 2 GB VM

Run separate benchmark suites for each category 16 GB VM

Compare scores with other vendors Analyze the current gaps in Alestra

7 Provide Recommendations and Get Signoff n n

Run separate benchmark suites for each category 4 GB VM

CPU Benchmark Metric Disk IO Performance Metric Interpreted Pgm Perf. Metric Memory IO Perf. Metric Encoding & Encryption Metric

Determine OFIs Recommend measures

Run separate benchmark suites for each category 32 GB VM

Figure 1: Assessment Approach

a) Step 1: The assessment was initiated by building an environment to simulate cloud infrastructure through creation of one instance each of the six configurations under consideration b) Step 2: The next step involved installing benchmark suites on the individual virtual machines. The tests were carried out on each virtual machine instance sequentially. c) Step 3: The third step was to execute Iteration-1 benchmark tests by running separate benchmark suites for each category of virtual machines. Iteration-1 testing was performed with a homogeneous CPU allocation. d) Step 4: Step four was to execute Iteration-2 benchmark tests by running separate benchmark suites for each category of virtual machines. Iteration-2 testing was performed with a heterogeneous CPU allocation. e) Step 5: In step 5, the test results are collated and benchmarked against the five aggregate metrics (discussed in the Benchmark Metrics section). 5

f ) Step 6: The next step was to compare the benchmark report with other cloud vendors and analyze and understand the shortcomings and the advantages based on various performance metrics. g) Step 7: The final step of the assessment was to identify various opportunities for improvements. They were to be taken up for meeting or exceeding the performance benchmarks where shortcomings were observed. The customer was provided with recommendations on how to overcome these gaps. 2)Virtual Machine Instance Configurations The six virtual machine instances configurations considered for benchmarking, which were part of the customer’s standard IaaS cloud offering and other vendors are listed in Table 1.

Low

Mid

High

Customer Cloud Instances

AWS Cloud Instances

Rackspace Cloud Instances

IBM SmartCloud Enterprise

TataInsta Compute India

1 GB

m1.small

Nil

Nil

Tata 1GB

2 GB

m1.large

Nil

Copper (2GB)

Nil

4 GB

m2.2xlarge

4 GB

Nil

Nil

8 GB

m2.2xlarge

8 GB

Silver (8GB)

Tata 8GB

16 GB

m2.4xlarge

Nil

Gold(16GB)

Tata 16GB

32 GB

cc1.4xlarge

Nil

Platinum (32GB)

Nil

Table 1: Configuration of virtual machine instances

The six cloud virtual instances listed in Table 1 address the differing needs of various environments. Based on the needs, one can provision any instance from a lower (1GB) to a higher configuration (32GB). For the purpose of benchmarking the customer’s IaaS cloud platform, we segregated the instances into three categories of servers n

Low range server instances – (1GB and 2GB)

n

Mid range server instances – (4GB and 8GB)

n

High range server instances – (16GB and 32GB)

Categorization of virtual instances helps isolate and compare the performance of instances in each category across vendors. Besides, low range server instances can be utilized for lesser workloads, such as an application or web server. On the other hand, high range server instances can be utilized for higher workloads, such as a database server. As described in the assessment approach, the creation and testing of all the cloud instances were done in the same environment.

6

Benchmark Metrics There are multiple infrastructure elements that function together to form the IaaS cloud platform. Hence, it is necessary to identify the critical components of the underlying infrastructure and assess their performance against the benchmark. The following 5 key performance metrics were utilized to assess the performance of customer’s cloud environment, and are part of our partners’ computing benchmark test suite: n

CPU Performance Metric

n

Disk IO Performance Metric

n

Memory IO Performance Metric

n

Encoding and Encryption Performance Metric

n

Programming Language Performance Metric

Subsequent sections describe the significance of each of the metrics in detail. a) CPU Performance Metric The CPU Performance Metric is utilized to measure CPU performance and analyze characteristics of cloud server instances. Objective: IaaS service is based on hypervisor technology running on multi-tenant environments. This leads to different methods of allocating/sharing CPU processing power, for example, fixed allocation, weighted allocation, throttled, etc. The challenge customers face, in general, is in assessing the computing power of CPU allotted to a virtual machine due to multiple ways by which cloud vendors allocate CPU processing power. The problem is further magnified by varied definitions of CPU by different vendors such as ECU (by Amazon), VPU, GHZ, Cores, and many more. This makes it difficult to objectively compare different cloud vendors. A common scale is needed to measure the CPU performance. Our partner’s proprietary measurement technique addresses this issue and is based on approximation of Amazon’s AWS Elastic Compute Unit (ECU). The metric is an aggregate of 19 benchmark tests carried out on the virtual machine’s CPU. Some of the widely known benchmarks for CPU performance measurement are Geekbench, Unixbench and SPEC2000. Attention is also given to tests, which can assess performance of multi core CPU’s by including them in the benchmarking test suite. b) Disk IO Performance Metric The Disk IO Performance Metric is used to measure disk input/output performance. Objective: The aim of the test is to measure the speed of read and write performance of the storage allocated to a particular virtual instance. This metric measures overall disk IO performance using 14 disk IO benchmarks including bonnie++, dbench, fio, hdparm and iozone. Applications that utilize significant disk interaction like databases and web servers are highly sensitive to IO performance capabilities. Because of this, it is recommended that IO performance be likewise tiered between smaller- and larger-sized instances. 7

c) Memory IO Performance Metric This metric measures the performance of memory IO operations. Objective: The aim of the tests is to measure how fast the CPU is able to read and write data to the cloud instance memory. The metric is an aggregate of 7 benchmark tests. Some of the widely known benchmarks for performance measurement are Geekbench, CacheBench and Unixbench. d) Encoding and Encryption Performance Metric The metric measures the encoding and encryption power of the cloud server instance. Objective: The aim of the test is to measure how fast cloud server instances are able to perform encoding and encryption operations, such as the time required by the server to encode a WAV file to an MP3 format. The metric is an aggregate of 7 benchmark tests. Some of the widely known benchmarks for performance measurement are Monkey Audio Encoding, WAV to MP3 and GnuPG. e) Programming Language Performance Metric This metric measures the performance of four common interpreted programming languages including Java, Ruby, Python and PHP. These are the common languages used to build the server-side software applications. Objective: The objective of the test is to simulate a real life application workload and measure application response and the underlying infrastructure like CPU, memory ,etc., for both client and server systems. The metric is an aggregate of 4 benchmark tests. Some of the widely known benchmarks for performance measurement are SPECjvm, Ruby, Python and PHP.

8

Observations In this section the results obtained from the benchmark test results are for the virtual instances illustrated for each computing metric. A comparative analysis with other vendors was done on the basis of these test results. 1)Test Results a) CPU Performance Metric

CPU Performance

CCU - Cloud Computing Unit

45 40 35 30 25 20 15 10 5 0

Alestra AWS

1 GB Instance Score

2 GB Instance Score

4 GB Instance Score

8 GB Instance Score

16 GB Instance Score

32 GB Instance Score

5.36

7.25

26.81

31.41

34.95

40.19

1

4

13

13

26

33.5

4.88

4.92 16.52

28.64

29.87

27.42

29.75

Rackspace IBM SmartCloud Tata InstaCompute

5.89 6.9

9

b) Disk IO Performance Metric

Disk IO Performance 200 180 160

IOP

140 120 100 80 60 40 20 0

1 GB Instance Score

2 GB Instance Score

4 GB Instance Score

Alestra

19.79

64.83

85.15

139.51

187.83

189.35

AWS

28.81

59.13

78.94

78.94

82.79

96.11

142.12

151.66

147.76

90.66

89.11

Rackspace

60.05

IBM SmartCloud

98.35

Tata InstaCompute

38.03

8 GB Instance Score

16 GB Instance Score

32 GB Instance Score

76.87

c) Programming Language Performance Metric d) Memory IO Performance Metric

Programming Language Performance 250

200

Lang

150

100

50

0

1 GB Instance Score

Alestra AWS

23.11

2 GB Instance Score

4 GB Instance Score

8 GB Instance Score

115.23

126.89

72.54

113.58

Rackspace

77.06

IBM SmartCloud Tata InstaCompute

107.27 107.71

16 GB Instance Score

32 GB Instance Score

158.14

174.3

207.76

113.58

128.53

146.81

116.58

131.14

133.62

125.62

135.3

77.19

10

Memory IO Performance 160 140

MIOP

120 100 80 60 40 20 0

1 GB Instance Score

2 GB Instance Score

4 GB Instance Score

8 GB Instance Score

16 GB Instance Score

Alestra

84.13

AWS

22.24

105.39

116.32

135.34

136.66

145.05

57.47

105.78

105.78

100.4

143.44

66.75

69.08 92.67

99.07

103.68

118.34

120.16

Rackspace 84.9

IBM SmartCloud Tata InstaCompute

89.47

32 GB Instance Score

e) Encoding and Encryption Performance Metric

Encoding & Encryption Performance 200 180 160 140

Encode

120 100 80 60 40 20 0

Alestra AWS

1 GB Instance Score

2 GB Instance Score

4 GB Instance Score

109.46

112.8

131.44

163.7

172.5

173.31

43.61

103.81

135.6

135.6

135.79

147.42

125.54

124.82

125.29

158.09

157.23

111.22

Rackspace IBM SmartCloud Tata InstaCompute

125.16 144

8 GB Instance Score

16 GB Instance Score

32 GB Instance Score

111.01

11

Conclusion With cloud still in the early stages of adoption and the fact that it is evolving each day, it is imperative to assess cloud performance by conducting an extensive benchmarking exercise. This helps arrive at appropriate service levels and set realistic expectations for both the customer and the service provider. The service provider must take steps to instill confidence in cloud computing, which in turn will pave the way for increasing demand and a wider adoption of cloud services . Benchmarking also enables the cloud services provider to continuously reinvigorate its cloud offering by identifying and remediating performance bottlenecks and implementing strict cloud governance standards. In addition, it can help emerging as well as established cloud service providers to assess where they are in comparison to the top vendors and leverage this knowledge to re-strategize their cloud architecture. It is important to understand that benchmarking is one of the ways to continuously improve, strengthen, and sustain the established infrastructure cloud platform services.

References i

KUMAR ANAND, ‘CLOUD ADOPTION LIFECYCLE’, INTERSCIENCE OPEN ACCESS JOURNAL, ISBN – 978-93-81693-17-9.

12

About TCS’ Global Consulting Practice TCS’ Global Consulting Practice (GCP) is a key component in how TCS delivers additional value to clients. Using our collective industry insight, technology expertise, and consulting know-how, we partner with enterprises worldwide to deliver integrated end-to-end IT enabled business transformation services. By tapping our worldwide pool of resources - onsite, offshore and nearshore, our high caliber consultants leverage solution accelerators and practice capabilities, balanced with our knowledge of local market demands, to enable enterprises to effectively meet their business goals. GCP spearheads TCS' consulting capacity with consultants located in North America, UK, Europe, Asia Pacific, India, Ibero-America and Australia.

Contact For more information about TCS’ consulting services, contact [email protected] Subscribe to TCS White Papers TCS.com RSS: http://www.tcs.com/rss_feeds/Pages/feed.aspx?f=w Feedburner: http://feeds2.feedburner.com/tcswhitepapers

About Tata Consultancy Services (TCS) Tata Consultancy Services is an IT services, consulting and business solutions organization that delivers real results to global business, ensuring a level of certainty no other firm can match. TCS offers a consulting-led, integrated portfolio of IT and IT-enabled infrastructure, engineering and assurance services. This is delivered through its unique Global Network Delivery ModelTM, recognized as the benchmark of excellence in software development. A part of the Tata Group, India’s largest industrial conglomerate, TCS has a global footprint and is listed on the National Stock Exchange and Bombay Stock Exchange in India.

IT Services Business Solutions Outsourcing All content / information present here is the exclusive property of Tata Consultancy Services Limited (TCS). The content / information contained here is correct at the time of publishing. No material from here may be copied, modified, reproduced, republished, uploaded, transmitted, posted or distributed in any form without prior written permission from TCS. Unauthorized use of the content / information appearing here may violate copyright, trademark and other applicable laws, and could result in criminal or civil penalties. Copyright © 2012 Tata Consultancy Services Limited

TCS Design Services I M I 08 I 12

For more information, visit us at www.tcs.com