IDC HPC Market Update, June 2010

Earl Joseph – [email protected]
Jie Wu – [email protected]
Steve Conway – [email protected]
Lloyd Cohen – [email protected]
Charlie Hayes – [email protected]

Copyright 2010 IDC. Reproduction is forbidden unless authorized. All rights reserved.

Presentation Agenda

• HPC Technical Server Market Update
• Market Drivers and Barriers
• New Forecasts for 2010 and to 2014
• The HPC Market Beyond The Servers
• HPC User Forum Update
• European/EU HPC Research
• New IDC HPC Research Plans

Feel free to ask questions at any time!


IDC's HPC Team

Earl Joseph – IDC HPC research studies, HPC User Forum and strategic consulting
Steve Conway – HPC User Forum, consulting, primary user research and events
Jie Wu – HPC census and forecasts, China research, interconnects and grids
Lloyd Cohen – HPC market analysis, data analysis and workstations
Beth Throckmorton – Government account support and special projects
Charlie Hayes – Government HPC issues, DOE and special studies
Mary Rolph – HPC User Forum conference planning and logistics


HPC Technical Server Market Update


What Is HPC?

IDC uses these terms to cover all technical servers used by scientists, engineers, financial analysts and others:
• HPC
• HPTC
• Technical Servers
• Highly computational servers

HPC covers all servers that are used for computational or data-intensive tasks.
Now getting close to 20% of all servers.


IDC HPC Data Sources

Multiple worldwide HPC data structures:
• Quarterly OEM supplier shipments (HPC Qview)
  – A view of what systems look like when first shipped
  – 15 industry/application segments
• HPC ISV database
• HPC end-user database
  – A view of how systems are used
  – Looking at the whole ecosystem
• Many custom HPC studies each year, resulting in separate data structures:
  – Petascale research for DARPA, DOD and DOE
  – Supercomputer studies for IBM, Cray, SGI, Fujitsu, etc.
  – On many research teams from exascale to fingerprints
• HPC User Forum meetings (35 so far) to collect requirements, trends and issues


HPC Server Market Size By Competitive Segments (2009 Data)

HPC Servers: $8.6B total
• Supercomputers (over $500K): $3.4B
• Divisional ($250K - $500K): $1.1B
• Departmental ($100K - $250K): $2.5B
• Workgroup (under $100K): $1.7B


Top Trends in HPC

The global economy is still impacting HPC: HPC declined 11.6% for 2009 overall
• 2008 was down 3%
  – A major change from 19% yearly growth in prior years
  – But the high end grew 25% in 2009

Major challenges for datacenters:
• Power, cooling, real estate, system management
• Storage and data management continue to grow in importance

Software hurdles will rise to the top for most users
• Application scaling and performance is a problem

SSDs will gain momentum and could redefine storage.
GPGPUs are starting to gain ground.


HPC Market Results: Revenues and System Units

Worldwide Technical Computing Revenue ($K) by Competitive Segment, 2008-2009

Competitive Segment       2008         2009        08 vs 09 Growth
Supercomputer          2,686,128    3,369,410         25.4%
Divisional             1,395,817    1,070,764        -23.3%
Departmental           3,167,096    2,516,253        -20.6%
Workgroup              2,522,809    1,680,687        -33.4%
Total                  9,771,849    8,637,114        -11.6%
Source: IDC, March 2010

Worldwide Technical Computing Unit Shipments by Competitive Segment, 2008-2009

Competitive Segment       2008         2009        08 vs 09 Growth
Supercomputer              1,863        2,100         12.7%
Divisional                 4,054        3,574        -11.8%
Departmental              20,105       14,840        -26.2%
Workgroup                148,069       84,090        -43.2%
Total                    174,091      104,604        -39.9%
Source: IDC, March 2010
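The growth column is simple year-over-year percentage change. A minimal sketch in Python, using the total-revenue row copied from the table above, for readers who want to reproduce the figures:

```python
# Year-over-year growth = (current - prior) / prior.
# Values are total worldwide HPC server revenue in $K, from the table above.
rev_2008 = 9_771_849
rev_2009 = 8_637_114

growth = (rev_2009 - rev_2008) / rev_2008
print(f"{growth:.1%}")  # -11.6%, matching the "08 vs 09 Growth" column
```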


HPC Vendor Revenue Share, 2009

IBM        29.3%
HP         28.6%
Dell       12.7%
Sun         4.1%
Cray        4.0%
NEC         3.2%
Fujitsu     2.2%
SGI         1.7%
Hitachi     1.3%
Appro       0.8%
Bull        0.5%
Dawning     0.5%
Other      11.0%


Revenue Share by Vendor: Supercomputer Segment
Supercomputer Revenue Share by Vendor, Q409

IBM        46.1%
Cray       19.0%
HP         18.1%
SGI         5.0%
Fujitsu     3.2%
Dell        3.0%
Hitachi     2.6%
Sun         1.5%
Bull        0.7%
Appro       0.1%
NEC         0.0%
Other       0.5%


Revenue Share by Vendor: Divisional Segment
Divisional Revenue Share by Vendor, Q409

HP         39.0%
IBM        21.8%
Sun         8.3%
Dell        7.1%
Appro       4.8%
NEC         4.3%
Fujitsu     1.9%
Dawning     1.1%
Bull        1.1%
SGI         0.1%
Other      10.5%


Revenue Share by Vendor: Departmental Segment
Departmental Revenue Share by Vendor, Q409

HP         35.6%
Dell       29.8%
IBM        15.7%
Sun         5.3%
Hitachi     2.1%
Fujitsu     1.9%
Dawning     0.8%
SGI         0.5%
Appro       0.2%
Bull        0.2%
Other       7.8%


Revenue Share by Vendor: Workgroup Segment
Workgroup Revenue Share by Vendor, Q409

HP         30%
IBM        26%
Dell        6%
Sun         3%
SGI         2%
Fujitsu     1%
Dawning     1%
Bull        0%
Other      32%


HPC Q4 2009 By Regions

Worldwide Q4 2009 Revenue by Region

Region                   Q409 Revenue ($K)        %
Total N.A. Rev.              $1,327,824         51.3%
Total EMEA Rev.                $748,704         28.9%
Total Asia/Pac Rev.            $293,036         11.3%
Total Japan Rev.               $192,615          7.4%
Total ROW Rev.                  $24,074          0.9%
Total                        $2,586,253        100.0%


HPC Q4 2009 Nodes By Processor Type

CPU Type       WW Rev Q409 ($K)   WW Units Q409   System Node Volume Q409
x86-64              1,920,197          26,873            332,993
EPIC                  145,245             707              4,644
RISC                  368,233           2,882              4,592
Other                 143,000               1                  0
RISC-BG                 9,600               3              6,144
Grand Total         2,586,275          30,466            348,373


Total HPC Revenue Share by Processor Type

[Chart: HPC revenue share by processor type]
Source: IDC, 2010

Total HPC Revenue by OS

[Chart: HPC revenue share by operating system (Linux, Unix, Windows/NT), 2003 through 2009]
Source: IDC, 2010


Growth In HPC Clusters, Q103 – Q409

[Chart: cluster vs. non-cluster share of HPC revenue by quarter, Q103 through Q409]
Source: IDC, 2010


Why Is Commodity Hot? .. Price!

HPC Server Processor/Sockets Metrics, 2009

CPU Type    System ASP ($K)   Ave CPUs/System   $K/CPU   CPUs per $M
x86-64                 68.2                27      2.5          399
EPIC                  145.2                14     10.3           98
RISC                  148.4                22      6.7          149
Vector              2,785.7                29     96.2           10
Source: IDC, 2010


Why is Commodity Hot? (Cont'd)

HPC Server Core Metrics, 2009

CPU Type    System ASP ($K)   Ave Cores/System   $K/Core   Cores per $M
x86-64                 68.2                 89       0.8          1302
EPIC                  145.2                 25       5.8           174
RISC                  148.4                 49       3.1           327
Vector              2,785.7                 29      96.2            10
Source: IDC, 2010
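The derived columns in the two tables above follow directly from the system ASP and the average socket/core counts. A minimal sketch in Python, with the x86-64 row values copied from the tables (the function name is illustrative):

```python
# $K per CPU (or core) = system ASP ($K) / average CPUs (or cores) per system;
# CPUs (or cores) per $1M = 1,000 / ($K per unit).
def price_metrics(asp_k, units_per_system):
    k_per_unit = asp_k / units_per_system
    units_per_million_dollars = 1_000 / k_per_unit
    return round(k_per_unit, 1), round(units_per_million_dollars)

print(price_metrics(68.2, 27))   # -> (2.5, 396), close to the table's 2.5 $K/CPU and ~399 CPUs per $M
print(price_metrics(68.2, 89))   # -> (0.8, 1305), close to the table's 0.8 $K/core and ~1302 cores per $M
```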


HPC Market Drivers and Barriers


The Value of HPC Leadership

Computational modeling/simulation/design is now established as the third branch of scientific inquiry, complementing theory and experimentation.

HPC-driven innovation has become a prerequisite for:
• Scientific leadership
• Industrial leadership
• Economic advancement
• National/regional security

In this study, 97% of the businesses that had adopted HPC said they could no longer compete or survive without it.


HPC Can Change A Nation's Wealth


HPC Leadership Can Provide New Knowledge


Research Example:
We Have Entered the Petascale Era and Are Heading Toward Exascale Computing

• Petascale systems are in place or planned for Europe, the U.S. and Asia
• Multiple applications have been run at sustained trans-petaflop speeds
• Exascale planning initiatives are already under way


HPC Market Drivers and Barriers


Major Customer Pain Points

Software has become the #1 roadblock
• Better management software is needed
  – HPC clusters are still hard to set up and operate
  – New buyers require "ease-of-everything"
• Parallel software is lacking for most users
  – Many applications will need a major redesign
  – Multi-core will cause many issues to "hit the wall"

Clusters are still hard to use and manage
• System management and growing cluster complexity
• Power, cooling and floor space are major issues
• Third-party software costs
• Weak interconnect performance at all levels
• RAS is a growing issue
• Storage and data management are becoming new bottlenecks
• Lack of support for heterogeneous environments and accelerators


Software Scaling Limitations

TABLE 20: Typical Number of Processors the ISV Applications Use for Single Jobs

CPU Range     Number of Applications    Percent
1                       19               24.4%
2-8                     25               32.1%
9-32                    20               25.6%
33-128                   9               11.5%
129-1024                 4                5.1%
Unlimited                1                1.3%
Total                   78              100.0%


New Challenges Affecting HPC Datacenters

The increase in CPUs and server units is creating significant IT challenges in:
• Managing complexity
  – How to best manage a complex cluster
  – How to install/set up a new cluster without having to buy a large number of separate pieces
• Application scaling and hardware utilization
  – How to deliver strong performance to users on their applications
  – How to make optimal use of new processor and system designs
• Power/cooling and space


HPC Power And Cooling Issues


Research Example:
Power and Cooling As A Crucial HPC Issue

Powering and cooling large HPC systems has become one of the top issues in HPC.

This worldwide study found that:
• "Power and cooling infrastructure limitations were the biggest barriers to increasing HPC resources"


Power and Cooling Study: Background

• Power and cooling has become a top concern among HPC data centers
  – Increases in HPC system sizes have escalated energy requirements
• At the same time, energy prices have risen substantially above historic levels
• The third element in this "perfect storm" is the challenge of making HPC processors more energy-efficient without overly compromising performance (the holy grail of HPC)
• And these power and cooling developments are occurring at a time of increased sensitivity toward carbon footprints, global climate change and aging infrastructures


Power and Cooling Study: General Findings

• HPC data centers' averages per site:
  – Available floor space: over 26,000 sq. ft.
  – Used floor space: about 17,000 sq. ft. (63% of available space)
  – Cooling capacity: 22.7 million BTUs, or 1,839 tons
  – Annual power consumption: 6.356 MW
• HPC data center costs:
  – Annual power cost was $2.9 million, or $456 per kW (see the quick check below)
  – Ten sites provided the percentage of their budget spent on power; the average was 23%
  – Two-thirds of the sites had a budget for power and cooling upgrades; the average amount budgeted is $6.87 million
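As a rough consistency check, the per-kW figure follows from the average annual cost and average consumption reported above. A minimal sketch in Python, assuming the $456 is annual cost per kW of average power draw:

```python
# Average annual power cost and power consumption per site, from the findings above.
annual_power_cost_usd = 2_900_000   # $2.9 million
avg_consumption_mw = 6.356          # 6.356 MW average draw

cost_per_kw = annual_power_cost_usd / (avg_consumption_mw * 1_000)
print(f"${cost_per_kw:.0f} per kW")  # ~$456 per kW, matching the study's figure
```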


Power and Cooling Study: Key Finding #1

• Nearly all the HPC user sites surveyed (96%) consider "green" design criteria important to their HPC system and data center planning process
  – A substantial majority (71%) plan to include power and cooling efficiency goals and/or metrics in their future plans
  – Nearly all sites were able to describe steps they had taken to make their HPC resources and operations "greener", including data center flow analysis, hot-aisle/cold-aisle containment, moving to higher-voltage distribution systems, more regular maintenance schedules, the use of "free" cooling, and the purchase of liquid-cooled systems, among many other measures


Power and Cooling Study: Key Finding #2

• Approximately two-thirds of user sites plan to expand or build new HPC data centers
  – Nearly the same percentage (61%) also have budgets in place to upgrade their power and cooling capabilities during the next two to three years
  – The average amount budgeted for power and cooling upgrades is $6.87 million
  – The average data center space currently available is more than 26,000 square feet; on average the data centers are using just under 17,000 square feet, or 63% of the available space


Power and Cooling Study: Key Finding #5

• Liquid cooling is the alternative approach being considered most often by the user sites
  – For cooling HPC systems, products, and data centers, a majority of sites expect to maintain existing air-based cooling methods, but departures from the status quo tended toward increased adoption of water and other liquid cooling technologies
  – Liquid cooling was the top alternative approach being considered by both users and vendors


Power and Cooling Study: Key Finding #7

• Approximately half of the surveyed user sites (48%) pay for power and cooling costs out of their own budgets
  – Of the government sites, 75% pay for power and cooling directly out of their own budgets, whereas only 50% of academic sites and 14% of industry sites do this
  – Nearly all of the other sites said that despite having no direct budgetary responsibility for this, they work hard to control power and cooling costs


Power and Cooling Study: Key Finding #10

• HPC users and vendors differ sharply on the likelihood of game-changing cooling technologies
  – Just over one-third (36%) of the user sites expect game-changing cooling technologies to emerge in the next five years
  – The vendors were much more optimistic than the user sites, with 62% of them foreseeing the emergence of game-changing cooling technologies in this timeframe


Power and Cooling Study: Power and Cooling by Sub-systems

TABLE 20: Distribution of Power and Cooling Costs Among HPC Sub-Systems
How do your power and cooling costs divide among your HPC compute, storage, and visualization sub-systems? (Response percentages)

Response                   Government   Industry   Academia   All Sites
% Compute                      92.6%       81.7%      90.1%      89.7%
% Storage                       4.9%       18.3%       8.2%       8.6%
% Visualization                 2.1%        0.0%       1.1%       1.3%
Don't know or not sure          0.4%        0.0%       0.6%       0.4%


IDC Top 10 HPC Predictions for 2010

1. The HPC Market Will Resume Growth in Mid-2010
2. The Race For Global Leadership Will Turbo-Charge the Supercomputers Segment
3. In 2010, Evolutionary Change Will Trump Revolutionary Change
4. Commoditization Will Increasingly Level the Playing Field For HPC Competition
5. The Highly Parallel Programming Challenge Will Increase
6. x86 Processors Will Dominate, But GPGPUs Will Gain Traction As x86 Hits the Wall
7. InfiniBand Will Continue To Gain HPC Market Share
8. HPC Storage Will Outpace the HPC Server Market Recovery
9. Power and Cooling Efficiency Will Become More Important, But Is Not Far Along Today
10. Cloud Computing May Be Coming To A Neighborhood Near You


2. The Race For Global Leadership Will Turbo-Charge the Supercomputers Segment

• The $500K-and-up Supercomputers segment did very well in 2009, at about $2.6B
• IDC predicts it will grow 17% to $3.1B in 2010 and to $4.1B in 2013
• The $3M+ segment grew a whopping 65% in 2009 to reach $1.0B, and we expect it to grow to $1.4B in 2013
• The race for HPC leadership might turbo-charge the Supercomputers segment, possibly for a decade to come
• Although the Petascale Era is just dawning, governments around the world are already exploring exascale computing


6. x86 Processors Will Dominate, But GPGPUs Will Gain Traction as x86 Hits the Wall

• x86 processors went from near-zero to hero in HPC in the past decade, largely replacing RISC
• x86 will continue to dominate, but GPGPUs will start making their presence felt more in 2010
• Multiple large HPC procurements have substantial GPGPU content
• GPGPUs provide more peak/Linpack flops per dollar for politics, and will inevitably provide more sustained flops for suitable applications
• In 2010, some ISVs will announce plans to redesign their apps with GPGPUs in mind


7. InfiniBand Will Continue To Gain HPC Market Share

• InfiniBand's share grew substantially from 2005 to 2009, at the expense of proprietary interconnects, while Ethernet's share remained constant
• IDC forecasts that by 2013, the HPC interconnect market will grow to about $2.25B, from $2.0B in 2009
• It will take a while for 10 GigE to work its way through the HPC market

[Pie charts: HPC interconnect revenue share for Ethernet, InfiniBand, and proprietary interconnects in 2005, 2009, and 2013 (forecast)]


10. Cloud Computing May Be Coming To A Neighborhood Near You

• Clouds offer the ability to quickly add resources, and a way to try before buying
• CERN is developing what may be the world's biggest private cloud, to distribute data, applications and computing resources to scientists around the world
• NASA is building a private cloud to enable researchers to run climate models on remote systems provided by NASA. This saves NASA from having to help users build the complex models on their local systems
• NSF and Microsoft just announced a cloud arrangement
• NERSC and the Joint Genome Institute are collaborating on a cloud initiative related to gene sequencing data


HPC Market Forecasts


HPC Server Revenue ($K) Forecast, 2008 - 2014

WW HPC Server Forecast, 2009 - 2014

Segment           2009        2010        2011        2012        2013        2014        CAGR (09-14)
Supercomputer     3,369,410   3,624,352   3,879,294   4,134,237   4,389,179   4,617,522       6.5%
Divisional        1,070,764   1,129,694   1,188,625   1,247,555   1,306,485   1,366,596       5.0%
Departmental      2,516,253   2,698,090   2,879,928   3,061,765   3,243,602   3,479,978       6.7%
Workgroup         1,680,687   1,787,410   1,894,133   2,000,856   2,107,579   2,242,167       5.9%
Total             8,637,114   9,239,547   9,841,980   10,444,413  11,046,846  11,706,263      6.3%

Source: IDC, 2010
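The CAGR column is the standard compound annual growth rate over the five-year span 2009-2014. A minimal sketch in Python, using two rows copied from the table above:

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
def cagr(start, end, years=5):
    return (end / start) ** (1 / years) - 1

print(f"{cagr(3_369_410, 4_617_522):.1%}")    # Supercomputer segment: ~6.5%
print(f"{cagr(8_637_114, 11_706_263):.1%}")   # Total: ~6.3%
```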


Growth Areas: Industry/Application Segments

Worldwide HPC Revenues ($M)

Segment                                  2008      2009
Bio-Sciences                            $1,412    $1,120
CAE                                     $1,131      $874
Chemical Engineering                      $238      $179
DCC & Distribution                        $572      $460
Economics/Financial                       $281      $198
EDA / IT / ISV                            $751      $540
Geosciences and Geo-engineering           $570      $539
Mechanical Design and Drafting            $112       $73
Defense                                   $920      $849
Government Lab                          $1,460    $1,349
University/Academic                     $1,852    $1,641
Weather                                   $392      $353
Other                                      $80       $78
Total Revenue                           $9,772    $8,252
Source: IDC, 2010

2010 to 2014 Growth Sectors


HPC User Forum Update


Register at: www.hpcuserforum.com


HPC User Forum Mission

To improve the health of the high-performance computing industry through open discussions, information sharing and initiatives involving HPC users in industry, government and academia, along with HPC vendors and other interested parties.


HPC User Forum Goals

Assist HPC users in solving their ongoing computing, technical and business problems

Provide a forum for exchanging information, identifying areas of common interest, and developing unified positions on requirements
• By working with users in other sectors and vendors
• To help direct and push vendors to build better products
• Which should also help vendors become more successful

Provide members with a continual supply of information on:
• Uses of high-end computers, new technologies, high-end best practices, market dynamics, computer systems and tools, benchmark results, vendor activities and strategies

Provide members with a channel to present their achievements and requirements to interested parties


April 2010 HPC User Forum Meeting:
Design and Manufacturing: Tier 1

• Boeing: "If the solutions can't run during the day or overnight on