Supercomputer Education and Research Centre

SERC/HPCS/1/2014

Supercomputer Education and Research Centre Indian Institute of Science Bangalore 560012

January 2014

INTRODUCTION:

Supercomputer Education and Research Centre (SERC) at the Indian Institute of Science (IISc) is a high performance computing facility set up to cater to the computing needs of the faculty and students. The centre operates round the clock and is supported by environmental systems, including uninterrupted power supply (UPS) and diesel generator (DG) sets, to ensure a non-stop computing facility for the user community.

Through this tender, SERC is inviting proposals, in two-cover format, from bidders for enhancing the High Performance Computing and Storage (HPCS) system environment to address current and immediate future requirements. This document gives the details of the requirements of the HPCS system.

SCOPE OF WORK:

SERC is inviting proposals from bidders for an HPCS system with the following subcomponents:

Table 1: Components of the High Performance Computing and Storage (HPCS) System

HPC-CPU-Cluster: HPC system with CPU nodes (see detailed technical specification in Table 2) delivering 500 TFLOPS of HPL (High Performance Linpack) performance (with turbo mode off), with the associated software stack.

HPC-GPU-Cluster: HPC system delivering 100 TFLOPS of HPL performance (with turbo mode off) with GPU nodes consisting of a CPU-GPU combination (see detailed specification in Table 2), with the associated software stack.

HPC-Storage: A 2 PetaByte high performance storage system (see detailed specification in Table 2), with the associated software stack.

The proposed system will be housed on the ground floor of the existing SERC building. Bidders have to clearly understand the existing support infrastructure available at SERC and propose any additional infrastructure required for the data center to deliver the expected functionality and performance. Details regarding the existing support infrastructure are given in Annexure 1.

The solution proposed by the bidders is expected to be a total turnkey solution meeting all the stipulated requirements. Supply, installation, commissioning, and integration of the compute and storage systems, along with warranty services for a period of three years, as well as the supply and installation of any necessary support infrastructure, are the responsibilities of the successful bidder.

The bidders have to ensure that the personnel allocated to each of the above tasks are competent and capable of meeting all the technical requirements, so that the broad objective of delivering services as per expectations is fully met.

SUMMARY OF REQUIREMENTS:

The technical specifications of the proposed system have to meet the requirements stipulated in Table 2. It is to be noted that the solution architecture should meet user requirements in all areas of science and engineering, and in particular the following:



- Computational Fluid Dynamics
- Weather and Climate modeling
- Computational Physics
- Computational Chemistry
- Computational Biology
- Material science and engineering

It is desirable that the proposed solution architecture is robust and mature in supporting a large number of complex science and engineering applications in the above areas, and is currently in production at different customer installations. The proposed HPC system is expected to run scientific applications in areas of interest to IISc such as CFD, weather and climate modeling, physics, chemistry, and materials science.

The requirements in terms of the architecture, acceptable processor families, total addressable memory, storage, and network are clearly spelt out in Table 2.

All specified and required software products for the solution must be clearly listed, along with the licensing model, the number of licenses required, the period of validity, and any applicable maintenance or upgrade terms.

SERC has taken steps to build the support infrastructure and allocate the necessary space to house the proposed HPC system; the bidder has to ensure that the proposed solution meets the stated power and space requirements.

The solution quoted by the bidder may require best-in-class point products/systems from multiple OEMs in order to ensure that all the stipulated requirements are met and the solution is optimal and cost-effective. The integration and interfacing required across the proposed products is therefore the responsibility of the bidder and should be appropriately achieved to meet the overall requirement.

SPECIFICATIONS AND TECHNICAL DETAILS

The proposed system should have a total compute capability of 600 TFLOPS of HPL (High Performance Linpack) performance (with turbo mode off), with 500 TFLOPS coming from the CPU cluster and 100 TFLOPS from the GPU cluster (with a CPU-GPU combination). The performance specified throughout this document is always the sustained HPL performance, unless explicitly stated otherwise. Further, throughout this document, the HPL performance mentioned is with turbo mode off.
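As an illustration of how such a requirement translates into cluster size, the following sketch estimates a node count for the 500 TFLOPS CPU cluster. All constants (core count, clock, FLOPs per cycle, HPL efficiency, node count) are assumptions for illustration only, not tender figures; actual sizing depends entirely on the processor and configuration the bidder proposes.

```python
import math

# Illustrative sizing only; every constant below is an assumption, not a
# tender figure: a dual-socket Ivy Bridge node (12 cores/socket) at the
# 2.4 GHz minimum from Table 2, 8 double-precision FLOPs/cycle/core (AVX),
# and 90% HPL efficiency (typical of well-tuned CPU clusters).
CORES_PER_NODE = 2 * 12
CLOCK_GHZ = 2.4
FLOPS_PER_CYCLE = 8
HPL_EFFICIENCY = 0.90

peak_tf = CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0  # TFLOPS/node
sustained_tf = peak_tf * HPL_EFFICIENCY

TARGET_TF = 500.0              # sustained HPL required of the CPU cluster
nodes = math.ceil(TARGET_TF / sustained_tf)
print(f"{sustained_tf:.2f} TF/node sustained -> {nodes} nodes")  # ~1206 nodes

# Cross-check against the 100 TB aggregate memory requirement (Table 2):
per_node_gb = 100 * 1000 / nodes
print(f"minimum ~{per_node_gb:.0f} GB/node")                     # ~83 GB/node
```

Under these assumptions, a configuration of roughly 1,200 nodes at 128 GB per node would satisfy both the HPL and the aggregate memory requirements; a different processor choice changes both figures.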

Table 2: Technical Specifications of the HPC System and Storage

A - High Performance Computing CPU Cluster

1. System Architecture: Tightly integrated cluster with IB-FDR or proprietary interconnect.

2. Compute Power: A base system of 500 TFLOPS of sustained compute power on HPL (with turbo mode off).

3. Processor Architecture (64-bit): Intel Xeon E5 v2 (Ivy Bridge) at 2.4 GHz or higher, AMD Opteron 62xx at 2.4 GHz or higher, IBM Power7 at 3.8 GHz or higher, or IBM PowerPC at 1.6 GHz or higher.

4. Memory: Total of 100 TB or more.

5. Node-to-node interconnect speed, with non-blocking and all-to-all connections: Intra-rack: greater than 50 Gb/s for parallel jobs, with 1 Gb/s for monitoring and management. Inter-rack: greater than 35 Gb/s for parallel jobs, with 1 Gb/s for monitoring and management. Node-to-node MPI latency of 2 microseconds or less.

6. Optional local storage for OS & software environment: 2 x 500 GB SAS or SATA disks at 7,200 rpm or faster, for solutions supporting local storage on compute nodes.

7. Operating system: Linux-based 64-bit OS with support for NIS/LDAP, NFS v3/4, autoconf/automake, cmake, svn/mercurial/cvs/git, python, trace, vampir, emacs, vim, etc., and the HPC benchmark programs compiled with solution-specific configuration.

8. HPC parallel development environment: Architecture-specific compiler suite for C/C++/Fortran 90 and 77, with runtime libraries, profilers, and debuggers; communication libraries MPI/OpenMP/pthreads, with vendor-specific versions such as MPICH2/MVAPICH/OpenMPI (non-mpd based installation); gdb and PAPI; parallel runtime debuggers such as DDT/TotalView.

9. Parallel scientific and math libraries: Scientific and math libraries for BLAS, LAPACK, ScaLAPACK, FFTW, HDF5, NetCDF, PETSc, Trilinos, etc.

10. Support for parallel filesystem: The OS must support parallel file systems such as Lustre or GPFS.

11. Batch schedulers and workload management: Open-source or commercial workload managers for batch job scheduling, with policies allowing advance reservation and resource-usage-based controls, along with job accounting and reporting. Must be able to support interactive jobs with a GUI for debugging on a dedicated debug queue.

12. Master and compute nodes: The master and compute nodes must have identical hardware and configuration, where applicable.

13. Service nodes: 2 or more service nodes to be provided for management and monitoring of the server. The hardware configuration must be compliant to support the service management tools for the server.

14. System configuration and management: Unified system monitoring toolset for configuration, diagnosis, and management. Compute node load monitoring tools such as Ganglia/Nagios or equivalent must be part of the toolset.

15. RAS features: All compute units must have hot-pluggable, redundant power supply and fan units.

16. Master and login node redundancy: 8 management/service/login nodes must be made available for redundancy, in load balancing mode.

17. Node redundancy: 1% of the required compute capacity must be provided extra, as redundancy for compute node failures.

B - High Performance Computing GPU Cluster

1. System Architecture: Tightly integrated cluster with IB-FDR or proprietary interconnect.

2. Compute Power: A base system of 100 TFLOPS of sustained compute power on HPL (with turbo mode off).

3. Processor Architecture (64-bit): Intel Xeon E5 v2 (Ivy Bridge) at 2.4 GHz or higher, or AMD Opteron 62xx at 2.4 GHz or higher. (The processor architecture of the CPU nodes should match the nodes of the CPU cluster, if the bidder is proposing an x86-based solution.)

4. Memory: 4 GB (or more) per core.

5. Node-to-node interconnect speed, with non-blocking and all-to-all connections: The interconnect architecture of the GPU cluster should be identical to that of the CPU cluster, if the bidder is proposing an x86-based solution. Intra-rack: greater than 50 Gb/s for parallel jobs, with 1 Gb/s for monitoring and management. Inter-rack: greater than 35 Gb/s for parallel jobs, with 1 Gb/s for monitoring and management. Node-to-node MPI latency of 2 microseconds or less.

6. Optional local storage for OS & software environment: 2 x 500 GB SAS or SATA disks at 7,200 rpm or faster, for solutions supporting local storage on compute nodes.

7. GPU cards: Not more than 2 cards per node, of Nvidia K20 or K20X. The compute power provided by the GPU cards should be 100 TFLOPS of sustained HPL performance.

8. Operating system: Linux-based 64-bit OS with support for NIS/LDAP, NFS v3/4, autoconf/automake, cmake, svn/mercurial/cvs/git, python, trace, vampir, emacs, vim, etc., and the HPC benchmark programs compiled with solution-specific configuration.

9. GPU software environment: Appropriate latest software for development and execution of GPU-based codes to be included.

10. RAS features: All nodes must have hot-pluggable, redundant power supply and fan units.

11. Master and login node redundancy: 4 management/service/login nodes must be made available for redundancy, in load balancing mode.

12. Node redundancy: 1% of the required compute capacity must be provided extra, as redundancy for compute node failures.

C - High Performance Computing Storage

1. Storage - High Density: 2 PB usable capacity (with hardware RAID6) using NL-SAS/SATA disks (2 TB/3 TB). Demonstrable performance on the IOR/IOZONE benchmark with a parallel filesystem for large sequential read/write, with write throughput >25 GB/s.

2. Storage - High Speed: Additionally, quote for 1 PB usable capacity (with hardware RAID6) using SAS disks (1 TB or less) with the required RAID controllers. Demonstrable performance on the IOR/IOZONE benchmark with a parallel file system for large sequential read/write, with write throughput >30 GB/s.

3. File system: Parallel file systems such as Lustre or GPFS.
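To illustrate what "usable capacity with hardware RAID6" implies for raw disk counts, here is a minimal sizing sketch. The 8+2 RAID6 group geometry, decimal (base-10) units, and the omission of hot spares and filesystem overhead are illustrative assumptions, not tender requirements.

```python
import math

# Minimal sizing sketch for the high-density tier. The 8+2 group width,
# decimal TB, and absence of hot spares are assumptions for illustration.
USABLE_TB = 2000          # 2 PB usable capacity required
DISK_TB = 3               # tender allows 2 TB or 3 TB NL-SAS/SATA disks
DATA, PARITY = 8, 2       # assumed RAID6 group geometry (8 data + 2 parity)

usable_per_group = DATA * DISK_TB
groups = math.ceil(USABLE_TB / usable_per_group)
disks = groups * (DATA + PARITY)

print(f"RAID6 groups: {groups}, total disks: {disks}, "
      f"usable: {groups * usable_per_group} TB")
# -> 84 groups, 840 disks, 2016 TB usable under these assumptions
```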

The node-level redundancy requirements stipulated above are mandatory (for both the CPU nodes and the GPU nodes, including the GPU cards), and the bidders have to ensure that the configured solution takes the high-availability and redundancy requirements into account.

INFRASTRUCTURE REQUIREMENT:

The vendor must clearly indicate in their technical bid the power, cooling, and space requirements of the proposed solution. The infrastructure requirement of the proposed system should be as follows.

1. Power requirement: It is desirable to limit the power required for the system to less than 1000 KVA.

2. Cooling requirement: It is desirable to limit the cooling required for the system to less than 100 TR.

3. Space requirement: It is desirable to limit the real estate space for the complete system to less than 1200 sq. ft.

SERC currently has a spare capacity of 2 x 500 KVA UPS, 2 x 750 KVA DG, and 100 TR of chiller capacity, which the proposed solution could utilize. Any infrastructure requirement over and above this spare capacity must also be quoted for in the proposal. The details of the available infrastructure and the scope of the data centre implementation are given in Annexure 1. As far as possible, individual line items must be quoted (at the required level of detail) so that any price discovery required can be carried out.

BENCHMARK PERFORMANCE CRITERIA:

The solution proposed by the bidder (the party submitting a response to this RFP is henceforth referred to as the bidder) has to adhere to the following benchmarking requirements.

The following four benchmarks must be run on the proposed system (HPC-CPU-Cluster) configured at 500 TFLOPS (sustained HPL), and the best execution time (in seconds) must be reported. Results of successfully completed execution runs, along with the detailed job script, output files, job logs, and any other relevant files, must be included in the submission. All explanations, including optimizations, environment variables, and machine topology specification, must also be included with the submission.

1. CESM: CESM code execution with the following configuration: atm-land at 0.5 degree and ocean at 0.1 degree (grid name: ft05_12); details at http://www.cesm.ucar.edu/models/cesm1.1/cesm/doc/modelnl/grid.html. Details of CESM 1.0 are available at http://www.cesm.ucar.edu/models/cesm1.0/tags/cesm1_0_2/whatsnew.html. Length of simulation: 90 days, with outputs every 10 days. The number of processes could be scaled to the number of cores in the 500 TFLOPS system.

2. Zeus: For the given Zeus code and zmp_inp input file, use the following input parameters for the 300 TFLOPS run: problem size = 1664 x 1536 x 1536, with tiles = 32 x 32 x 24. The number of processes, specified in terms of the tile size, could be scaled to the number of cores in the 500 TFLOPS system.

3. HiFUN: HiFUN with the input file for 15,000 domains, for 100 iterations. Time per iteration, computed as an average over 10 representative iterations (iterations 51-60), should be reported. The number of processes is fixed as given in the program, and the bidder is expected to run the benchmark on a 500 TFLOPS system utilizing as many cores as appropriate.

4. ScaLAPACK: Matrix inversion of a 3M x 3M matrix. The number of processes could be scaled to the number of cores in the 500 TFLOPS system.
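To convey the scale of the ScaLAPACK benchmark, the following back-of-envelope estimate counts matrix storage alone; workspace for factors, pivots, and communication buffers (which depends on the routine and block sizes used) is deliberately ignored.

```python
# Memory footprint of the dense matrix in the ScaLAPACK benchmark alone;
# this is why the 100 TB aggregate memory requirement in Table 2 matters.
N = 3_000_000                  # matrix order (3M x 3M)
BYTES_PER_ELEMENT = 8          # double precision

matrix_tb = N * N * BYTES_PER_ELEMENT / 1e12
print(f"dense {N:,} x {N:,} matrix: {matrix_tb:.0f} TB")   # 72 TB
```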

1. The proposed solution architecture has to deliver a performance of at least 500 TFLOPS (sustained) on High Performance Linpack (HPL) with turbo mode off on the HPC-CPU-Cluster, and 100 TFLOPS (sustained) on HPL (with turbo mode off) on the HPC-GPU-Cluster. Documentary proof that the proposed solution will meet this criterion has to be submitted along with the bid.

2. In addition, performance for the four programs (mentioned in the section on "Benchmark Performance Criteria") has to be submitted as the best time achieved on a system (HPC-CPU-Cluster) with a sustained HPL performance of 500 TFLOPS (with turbo mode off). More specifically, all four benchmarks must be run on a system with 500 TFLOPS of sustained HPL performance having the same architecture (including the same software stack) as the solution proposed. The bidder is permitted to run the benchmark programs with turbo mode on. (The benchmark report should clearly indicate whether turbo mode was on or off during the benchmark execution.) In all cases, the execution times reported should be based on actual runs; no performance prediction of any kind will be allowed. Non-submission of benchmark results with the required details, or results on systems that are not the same as the proposed solution, will result in disqualification of the proposal.

3. If the bidder has access only to a system with higher HPL performance than 500 TFLOPS, the bidder is expected to run the benchmarks on a subset of the system whose HPL performance does not exceed 500 TFLOPS, taking into account all the nodes in the subset. More specifically, the benchmark has to use a machine topology that exercises all the cores in each of the nodes in the subset.

4. However, if a bidder does not have access to the exact system that is proposed, the bidder can run the benchmarks on a system with a processor one version older and/or an interconnect architecture of lower performance than the proposed solution. Similarly, if a bidder has access only to a system with lower HPL performance (below 500 TFLOPS), the bidder is permitted to run the benchmarks on such a system and report the execution times, provided the system is within 20% of the stipulated performance (i.e., has an HPL performance of 400 TFLOPS or higher). The achieved execution times on these machines (without any performance projection) will be taken and compared with the results submitted by other bidders. In other words, no compensation/allowance will be given for running the benchmarks on a lower-performing system.

5. For HiFUN, however, as the number of processes is fixed, the bidder is expected to run the benchmark on a system with 15,000 cores, utilizing a machine topology exercising all the cores in each of the nodes.

6. Results of successfully completed execution runs, along with the detailed job script, output files, job logs, the machine topology used for the run, and any other relevant files, must be included in the submission. All explanations, including optimizations, environment variables, and machine topology information for the run, must be included with the submission.

7. The benchmarks must be run on systems hosted in the bidder's labs or on systems delivered by the SI/OEM. Complete details of the system, including detailed system specifications (similar to the technical specifications indicated in this document), along with execution dates, userid details, and email IDs of the site/system administrators of the data center, are to be provided. IISc reserves the right to contact the corresponding personnel for additional information on the execution runs submitted as part of the response. Further, the technical committee may request benchmark reruns, wherever required (within a period of 7 days from the date of the request), under the supervision of nominated persons.

Satisfactory completion of benchmarks as specified above is a mandatory requirement.

BIDDER’S ELIGIBILITY CRITERIA

1. The bidder should have implemented a 500 TFLOPS or higher capacity HPC system at at least one customer site. If the bidder happens to be a system integrator, either the bidder or one of the OEMs should meet the above condition; the bid should include authorization letters from the OEMs. The technical bid should clearly demarcate the responsibilities between the bidder and the OEMs. Complete details of the same have to be submitted with the bid.

2. The bidder must have a proven record of maintaining and managing at least one system of 250 TFLOPS or higher for a period of 1 (one) year. Appropriate documentary evidence, with a letter from the customer reporting the details of the maintenance/management responsibilities and the performance of the bidder, should be included in the technical bid of the proposal.

3. The bidder is expected to be a company with an annual turnover of Rs. 100 Crores in each of the last 3 financial years.

4. The bidder (along with their OEM) should have a proven record of having demonstrated their competence and capability, as a team, to deliver all the services expected during the contract period.

Compliance with conditions 1 to 3 above is mandatory and not relaxable.

EVALUATION METHODOLOGY

1. The bids received from the bidders will be evaluated by the Technical Committee constituted by the Institute.

2. The evaluation process to identify the successful bidder is based on a combined techno-commercial evaluation, details of which are given subsequently in this document.

3. Evaluation of technical bids:

1. In the first stage, only the technical bids are evaluated. The mandatory conditions stipulated elsewhere in the document must be adhered to; failure to do so will result in disqualification of the bid.

2. The technical criteria set out for evaluation of the technical offer are given below.

Sl. No. | Description                                                      | Max. Score | Min. Score
1       | Bidder's Eligibility Criteria (of Bidder or OEM)                 | 20         | 15
2       | Benchmark Performance Score                                      | 30         | 18
3       | Solution Superiority                                             | 10         | 5
4       | Infrastructure Requirement & Operational Cost                    | 20         | 12
5       | No. of HPC Systems by the bidder/OEM with 500 TFLOPS or higher   | 20         | 12
        | Overall Score                                                    | 100        | 70

3. The technical scores for each of the above will be awarded based on the detailed explanation given below.

1. Bidder's Eligibility Criteria
Supporting Documents: For item (1), a copy of the P.O.; for item (2), a letter from the customer site stating clearly the duration of the contract, the scope, and the satisfactory delivery of services; for item (3), annual audited balance sheets for 3 years.
Scoring Scheme: Meeting items (1) to (3) of the Bidder's Eligibility Criteria will entail 15 marks. Additionally, a maximum of 5 marks will be given for item (4), based on the response to the RFP and the technical presentation/discussion.

2. Benchmark Performance Criteria
Supporting Documents: As specified in the Benchmark Performance Criteria section.
Scoring Scheme: The aggregate normalized benchmark execution time (calculated as described below) will be used for awarding scores in this category. A bidder with an aggregate normalized benchmark execution time of 2 will get 18 marks. A bidder with an aggregate normalized execution time higher than 2 will be disqualified. For a bidder having an aggregate normalized execution time X (between 1 and 2), the allocated marks will be calculated as 30 - (X-1)*12.
Calculation of Benchmark Performance Score: All benchmarks are given equal weightage. The performance (execution time) of a benchmark reported by a bidder is normalized with respect to the lowest execution time for that benchmark across all bidders. For HiFUN, as the number of processes in the benchmark is fixed, the execution time will be further normalized with respect to the peak performance of the subsystem used for the benchmark (taking into account the number of cores used and the clock rate of the processor). The aggregate normalized execution time of a bidder is the geometric mean of the normalized execution times of the four benchmarks.

3. Solution Superiority
Supporting Documents: The bidder has to clearly indicate the superiority of the proposed solution (hardware, software, integration, etc.).
Scoring Scheme: Solutions meeting the technical requirements of the tender will be given 5 marks. Additional marks will be given based on the superiority of the proposed solution as evaluated by the technical committee.

4. Infrastructure Requirement Criteria
Supporting Documents: Necessary documents to support the data centre requirement (UPS power and cooling).
Scoring Scheme: For the purpose of awarding the score in this category, we define the Rationalized Operational Cost (ROC) as PUE * UPS power required for the proposed system in KVA. A fixed PUE of 1.25 will be used across all bidders for the evaluation of this item. The bidder with the minimum ROC will be given 20 marks. Solutions having an ROC greater than 1.5x the minimum ROC will be disqualified. For a bidder having an ROC which is X times the minimum ROC (X between 1.0 and 1.5), the marks will be awarded as 20 - (X-1)*16.

5. No. of Top500 Systems
Supporting Documents: Top500 listing and/or supplementary documents (purchase order) for such installations.
Scoring Scheme: If the bidder/OEM has 1 (one) system with HPL performance of 500 TF or higher that is currently operational / in the Top500 list, 12 marks will be allocated. For each additional system, 0.4 marks will be given, subject to a maximum of 8 additional marks.
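The following is a minimal sketch of the two scoring formulas above, assuming the per-benchmark times supplied are the best reported values across all runs. The additional HiFUN normalization against subsystem peak performance is omitted for brevity, and all input figures are hypothetical.

```python
from math import prod

def benchmark_marks(times, best_times):
    """Marks out of 30 for the benchmark category.

    times: this bidder's execution times (s) for the four benchmarks;
    best_times: the lowest time for each benchmark across all bidders.
    (The extra HiFUN peak-normalization step is omitted here.)
    """
    normalized = [t / b for t, b in zip(times, best_times)]
    x = prod(normalized) ** (1 / len(normalized))   # geometric mean
    if x > 2.0:
        return None                                  # disqualified
    return 30 - (x - 1) * 12

def roc_marks(ups_kva, min_roc, pue=1.25):
    """Marks out of 20 for the infrastructure category (fixed PUE of 1.25)."""
    x = pue * ups_kva / min_roc
    if x > 1.5:
        return None                                  # disqualified
    return 20 - (x - 1) * 16

# Hypothetical example: a bidder 10% slower than the best on every benchmark
print(benchmark_marks([110, 220, 330, 440], [100, 200, 300, 400]))  # ~28.8
```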

4. It is to be noted that only those bidders who score an aggregate of 70 or higher will be considered technically qualified. Further, a bidder must score the minimum score indicated for each item in order to qualify. The decision of the technical committee is final and binding on all bidders.

5. The absolute technical scores (out of 100) of the qualified bidders will be used in the subsequent techno-commercial evaluation.

EVALUATION OF COMMERCIAL BIDS

1. Commercial bids shall be opened for the technically qualified bidders after the technical evaluation. The Institute will communicate the date and time of opening of the commercial bids to the qualified bidders.

2. Commercial bids will be opened on the said date and time, irrespective of the presence of the bidder / authorized representative.

3. Commercial bids which are not in compliance with the terms and conditions set out in the tender will be rejected.

4. A combined techno-commercial evaluation with 40:60 weightage will be followed for this tender.

a. To determine the commercial score, the total cost of ownership (TCO) will be calculated; it includes supply, installation, and commissioning of the total turnkey solution of the proposed HPC system (including the CPU-Cluster, GPU-Cluster, HPC-Storage, and the necessary data center/support infrastructure), warranty for 3 years, and 2 years of post-warranty AMC services.

b. The absolute technical score and the commercial offer (TCO) of the bidders qualified in the technical evaluation will be converted to weighted technical and commercial scores using the formulae:

Weighted technical score = (Absolute score of the bidder / Absolute score of the bidder with the highest score) * 40

Weighted commercial score = (Offer of the lowest bidder / Offer of the bidder) * 60

The bidder with the highest total score is declared the successful bidder. In case of a tie (up to the second decimal place), the bidder with the higher technical score will be declared the successful bidder.

An example table illustrating the techno-commercial evaluation is shown below.

Sl. No. | Bidder | Tech. Score | TCO | Weighted Tech. Score | Weighted Comm. Score | Total Score
1       | B1     | 90          | 34  | (90/90)*40 = 40.00   | (30/34)*60 = 52.94   | 92.94
2       | B2     | 80          | 31  | (80/90)*40 = 35.55   | (30/31)*60 = 58.06   | 93.61
3       | B3     | 72          | 30  | (72/90)*40 = 32.00   | (30/30)*60 = 60.00   | 92.00

Bidder B2 is declared as the successful bidder.
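The following minimal sketch reproduces the illustrative table above (40:60 techno-commercial weightage); the TCO figures are in the same arbitrary units as the example.

```python
# Reproduce the illustrative techno-commercial table: (tech score, TCO).
bidders = {"B1": (90, 34), "B2": (80, 31), "B3": (72, 30)}

best_tech = max(t for t, _ in bidders.values())     # 90
lowest_tco = min(c for _, c in bidders.values())    # 30

for name, (tech, tco) in bidders.items():
    weighted_tech = tech / best_tech * 40
    weighted_comm = lowest_tco / tco * 60
    total = weighted_tech + weighted_comm
    print(name, round(weighted_tech, 2), round(weighted_comm, 2), round(total, 2))
# B1: 40.0, 52.94, 92.94; B2: 35.56, 58.06, 93.61; B3: 32.0, 60.0, 92.0
# (the table truncates 35.555... to 35.55) -> B2 has the highest total.
```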

ACCEPTANCE CRITERIA:

The acceptance test criteria will include the following.

1. As part of the technical proposal, the bidder has to submit a comprehensive document giving complete details of the installation, commissioning, configuration, and testing of the proposed solution that would be carried out at the customer site.

2. The bidder has to demonstrate the performance of the system in meeting the specified HPL performance, as well as the performance of the benchmark programs, as specified in the bidder's technical bid.

3. During acceptance, the bidder has to demonstrate subsystem/component-wise performance, including that of the storage and interconnect architecture.

4. If the bidder fails to demonstrate any of the above requirements, the technical team of the Institute will determine the appropriate configuration that should have been quoted to meet the HPL ratings and benchmark performance, and the bidder has to supply, install, and maintain a solution of twice that capacity at no extra cost to the Institute.

5. It is to be noted that a maximum of two weeks will be available to the bidder to conform to the acceptance test criteria set out.

6. Any delay in commissioning or conformance to acceptance beyond the stipulated time will result in an extension of the warranty: each day of delay results in 3 additional days of warranty. This penalty clause is applicable only to solutions which are considered as technically meeting the requirements, as evaluated by the technical committee; the clause cannot therefore be used as an argument to qualify any solution which the technical committee considers as not meeting the requirements.

SERVICE LEVEL AGREEMENT

1. The bidder has to ensure that the proposed solution, as a total turnkey solution meeting the stated requirements, delivers an uptime of at least 95% for the entire system and, in addition, at least 99% uptime for 90% of the system (by compute and storage capability), measured on a monthly basis.

2. In the event of a failure of any of the subsystems or components of the proposed solution, the bidder has to ensure that the defects are rectified before the end of the next working day.

3. Failure to meet the above requirements will result in extension of the warranty services by 3 days for each day of delay during the warranty period.

4. The bidder, along with the OEMs, therefore has to put systems and processes in place to address the above during the period of the contract.
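As worked arithmetic for the uptime targets above, the sketch below converts them into the maximum permissible monthly downtime, assuming a 30-day month for illustration.

```python
# Allowed monthly downtime implied by the SLA targets (30-day month assumed).
HOURS_PER_MONTH = 30 * 24   # 720 hours

entire_system = (1 - 0.95) * HOURS_PER_MONTH   # 95% uptime, whole system
core_capacity = (1 - 0.99) * HOURS_PER_MONTH   # 99% uptime, 90% of capacity

print(f"entire system: at most {entire_system:.0f} h/month down")   # 36 h
print(f"90% of capacity: at most {core_capacity:.1f} h/month down") # 7.2 h
```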

WARRANTY:

1. Warranty services for the system supplied by the successful bidder should be valid for a period of 3 years from the date of acceptance of the equipment. Warranty service charges (in Indian rupees) have to be explicitly mentioned as a separate line item in the Commercial Bid.

2. During the warranty period, the bidder shall be fully responsible for the manufacturer's warranty in respect of proper design, quality, and workmanship of all the systems supplied.

3. During the warranty period, the bidder shall attend to all hardware problems on site and shall replace defective parts at no extra cost to the purchaser.

4. During the warranty period, the bidder shall attend to all failures relating to software installation, configuration, management, and performance. Periodic maintenance with respect to software upgrades, updates, and patches, as well as preventive maintenance, are the responsibilities of the bidder.

5. During the warranty period, the preventive maintenance and repair of the data center components supplied by the bidder are the responsibilities of the bidder.

6. The bidder should also clearly indicate, in the Commercial bid, the post-warranty comprehensive AMC cost for a period of 5 years, on an annual basis, as a percentage of the equipment cost.

GUIDELINES TO BIDDERS

A two-cover system is proposed for the submission of tenders, consisting of a Technical Bid and a Commercial Bid.

Technical Bid: The technical bid should contain:

I. Executive Summary of the proposal.

II. Technical details of the proposed subsystems in the prescribed format (see Annexure 2).

III. Benchmark results in the prescribed format (see Annexure 3). A softcopy of the benchmark output, logs, and other details as specified in the benchmark criteria section must also be submitted.

IV. Specification of the additional data centre/infrastructure requirement for the proposed configuration (taking into account the spare capacity available with SERC). This should also include the necessary heat exchangers/in-row cooling equipment/AHU, as well as the required piping, power distribution/control panels, and other requirements, specified in the prescribed format (see Annexure 4).

V. Overall Compliance Statement (see Annexure 5).

VI. Terms and conditions of the offer.

VII. Supporting technical material, including brochures.

VIII. Audited annual balance sheets of the company for the last 3 years.

IX. Supporting documents for the bidder's eligibility criteria.

X. Agreement to the terms and conditions of the tender; a copy of the tender document, duly signed on each page with seal, must be enclosed.

XI. A copy of the masked Commercial bid of the bill-of-materials.

XII. Detailed document on Installation, Commissioning, Configuration, and Testing.

XIII. Bidders proposing multiple options must submit items I-XII above for each option separately, as a self-contained bid; this is a mandatory requirement.

Commercial bid:

I. The Commercial bid should contain details of the prices for each one of the subsystems (as well as the data centre components) of the total offer, clearly giving the rate and the quantity. Bundling of prices is not acceptable.

II. For optional items, the prices have to be quoted separately, and the Institute reserves the right to decide about their procurement. However, the bidder must quote for all optional items.

III. Bidders proposing multiple options must quote for each of the configurations separately, including the necessary data-centre requirement, as self-contained bids; this is a mandatory requirement.

Covers containing the technical and commercial bids must be individually sealed and superscribed respectively as "SERC/HPCS/1/2014 - Technical Bid" and "SERC/HPCS/1/2014 - Commercial Bid". The two covers must be put in a larger envelope, sealed, and superscribed as "High Performance Computing System and Storage (HPCS) - SERC/HPCS/1/2014". Non-conformance with any of the above can result in disqualification.

Additional Guidelines:

1. The total solution as per the agreed bill of materials has to be supplied within 4-6 weeks after receipt of a firm PO from IISc, and the installation is to be completed within 2-3 weeks after supply of the equipment.

2. IISc is eligible for customs and excise duty exemption under notification 10/97-CE. Hence, please quote the ED component, if any, separately, so that the exemption can be availed on issue of the certificate by us. Bidders planning to quote any imported solution have to give the offer in the respective currency.

3. The offer has to clearly state the components of pricing separately. For example, the supply part, F & I, I & C, warranty services, and any other charges have to be quoted as separate line items.

4. A copy of the masked Commercial bid has to be given in the technical offer; the process followed by the Institute is a two-cover bid system.

5. No request for any further extension of the above deadlines shall be entertained. Delayed and/or incomplete tenders are liable to rejection.

6. All the covers should bear the name and address of the bidder.

7. The Technical Bid and the Commercial Bid should be duly signed by the authorized representative of the bidder.

8. The Technical Bid and the Commercial Bid should be bound separately as complete volumes.

9. Prices should not be mentioned in the Technical Bid.

10. A tender not complying with any of the above conditions is liable to rejection. Incomplete proposals are liable to be rejected.

11. The Director, IISc, Bangalore-12, reserves the right to modify the technical specifications or the required quantity at any time. In such a case, the bidders will be notified.

12. The Director, IISc, reserves the right to accept or reject any proposal, in full or in part, without assigning any reason.

13. The bidders are requested to go through the Terms and Conditions detailed in this document before filling out the tender. Agreeing to the terms and conditions of the tender document (by signing all pages of the copy of the tender document) is a mandatory requirement.

14. To understand the infrastructure set-up and spare capacity available at SERC, the bidders are requested to visit the site between Feb. 3 and 4, 2014 (during working hours). Bidders can send a single consolidated email (to [email protected]) comprising all their queries (technical and commercial) on or before Feb. 5, 12.00 noon. No further queries or clarifications will be entertained; hence the bidders have to ensure that all queries related to the tender, including those on the benchmark programs, are sent before the specified deadline.

15. A prebid-clarification meeting is scheduled on Feb. 6, 2014, as per the timeline given below. No further queries will be entertained after the prebid clarification meeting.

COMMERCIAL TERMS & CONDITIONS

1. The commercial bid should contain, among other things, payment terms, warranty, and installation and commissioning charges. These charges will be paid only after successful supply, installation, and acceptance. SERC will enter into a contract with the successful bidder detailing all contractual obligations during the warranty period. Bidders have to quote AMC charges for 5 years after the 3-year warranty period.

2. In the case of a rupee offer, the components of tax, E.D., and any other statutory levies should be shown separately and not included in the total amount, to enable us to avail exemption.

3. In the case of imports, the commercial bid should contain, among other things, the name and address of the Indian agent, if any, and the agency commission payable to him. The agency commission part will be deducted from the FOB value and will be paid by us separately in equivalent Indian rupees. Please quote prices for shipment on 'FOB' terms.

4. In respect of imported solutions, IISc will arrange for customs clearance at Bangalore airport, which will be the final destination airport. Hence, costs related to customs clearance need not be included in the offer.

5. In CIF offers of imported solutions, insurance should be on a "Warehouse to Warehouse" basis and should not terminate at Bangalore airport.

6. IISc is not exempted from VAT or other taxes. Hence, this component may be shown as a separate line item wherever applicable.

7. Proposals should contain the name and contact details (phone, fax, email) of a designated person to whom all future communication will be addressed.

8. Prices should be quoted per unit, along with the total amount for the required quantity.

9. The offer should be valid for 60 days from the date of submission.

10. IISc will place the purchase order only on the successful bidder. However, if the bidder requires multiple orders to be placed on the system integrator/OEMs, this has to be clearly mentioned in the technical bid with appropriate reasoning. The number of such orders would be limited to three.

11. It is to be noted that the bidder will be the system integrator for the total requirement and will be responsible for ensuring that the OEMs and the additional system integrator mentioned above understand all the requirements and are in a position to execute their respective responsibilities for successful implementation of the contract.

PAYMENT TERMS

The conditions regarding payment terms are as follows:

1. The total project cost will consist of two parts:
a. Equipment supply part (Supply)
b. Installation, commissioning, warranty, and maintenance services part (referred to as Services)

2. The total cost of the system (Supply part) will be paid through SIGHT DRAFT / Letter of Credit (documents through bank).

3. Installation charges, if any, payable only in Indian Rupees, will be paid after acceptance of the system.

4. Warranty charges will be paid (in Indian Rupees) in equal quarterly installments, at the end of each quarter, after satisfactory completion of services. Hence, warranty charges for year 2 and year 3 have to be quoted as a percentage of the equipment cost and as a separate line item. In case the warranty for year 2 and year 3 is bundled for a subsystem, IISc reserves the right to hold 5% of the subsystem cost as performance security.

Schedule of Events:

The tender document will be made available on the IISc webpage from the date of release of the tender. The benchmark programs and input files will be made available on electronic media. To obtain the electronic media, a fee of Rs. 5000/- (Rupees Five Thousand only), in the form of a demand draft (drawn in favor of "The Registrar, Indian Institute of Science, Bangalore"), should be submitted to the SERC Office on any working day (between 9.30 a.m. and 5.00 p.m.).

Release of Tender: Jan. 28, 2014
Site visit to inspect infrastructure: Feb. 3 - Feb. 4, 2014
Submission of queries (for prebid clarification): Feb. 5, 2014, 12 noon
Prebid Clarification Meeting: Feb. 6, 2014, 12 noon
Submission of Tender Response: March 7 / April 2, 2014, 5 p.m. (FIRM)

No request for extension of any deadline will be entertained.

Annexure 1

Infrastructure Details

UPS REQUIREMENTS:

The UPS requirement for the proposed computer system has to be specified by the bidder, who has to provide the total solution taking into account 1+1 redundancy.

The offer has to include

a. 2 x X KVA UPS systems.
b. Associated battery banks to deliver 15 minutes of battery backup under full load.
c. Appropriately sized cables, with an adequate number of runs, from the sub-station to the UPS room; the estimated length of each run is given in the enclosed diagram.
d. Input panel at the source end.
e. Accessories required for installing and commissioning the UPS system.
f. Output panel at the load end.
g. Cabling from the output panel on the basement floor to the power distribution panel on the computer floor.
h. Earthing for the panels and the UPS.

The BoQ/BoM should clearly indicate the required sizing and number of runs needed, which would be used in the TCO calculation.

CHILLER REQUIREMENTS:

SERC has an additional chiller capacity of 100 TR, and bidders requiring chillers over and above this capacity for the environmental cooling part have to provide the additional capacity with full redundancy in an N+1 configuration. The details regarding the distances are shown in the enclosed diagram.

The electrical panel for the chiller, the piping to the pumps, the piping to the HVAC system on the ground floor, and the earthing of the electrical systems are all responsibilities of the bidder. The approximate distances for these are given in the diagram.

The bidder is also required to quote for an appropriately sized chilled water storage tank and a separate UPS to cater to the requirement of the pumps for the AC system. The storage tank would be located in the basement at a distance of 20 m from the pump. The piping for the storage tank is also the responsibility of the bidder.

DG REQUIREMENTS:

SERC has a spare capacity of 2 x 750 KVA DG systems. For any of the six configurations stipulated, if the bidders require additional DG capacity for the proposed configuration, they have to ensure that the proposed system integrates with the existing DG sets, and that the AMF panel of the proposed system is also designed to address the requirements of auto load sharing and auto synchronization.

Annexure 2

Technical Specification Details of different subsystems of the proposed solution.

Technical specifications as prescribed in the "TecSpecandBMsTemplate.xls" MS-Excel workbook must be submitted along with the technical bids. If a bidder proposes multiple solutions, a separate "TecSpecandBMsTemplate.xls" must be filled in and submitted for each solution.

The "TecSpecandBMsTemplate.xls" workbook consists of seven sheets (HPC-CPUCluster, HPCGPUCluster, HPC-Storage2PB, HPC-Storage1PB, HPC-InterconnectCPUCluster, HPCInterconnectGPUCluster, and HPC-SoftwareArchitecture), each describing the required information for a specific technical aspect of the proposed solution. The bidders must complete these sheets in all respects. The vendor must submit both soft and hard copies of the filled "TecSpecandBMsTemplate.xls" file along with the technical bids. The hard copy of the template file must also be duly signed by the authorized person(s) of the bidder.

Annexure 3

Benchmark Performance Statement

Benchmark performance results as prescribed in the HPC-BenchmarkPerf sheet of the "TecSpecandBMsTemplate.xls" MS-Excel workbook must be submitted along with the technical bids. If a bidder proposes multiple solutions, a separate such sheet must be filled in and submitted for each solution.

The vendor must submit both soft and hard copies of the filled BenchmarkPerf sheet of the "TecSpecandBMsTemplate.xls" file along with the technical bids. The hard copy of the template file must also be duly signed by the authorized person(s) of the bidder.

Annexure 4

Data Center / Infrastructure Requirement Statement

The Data Center/Infrastructure Statement as described in the "HPC-InfraReq" sheet of the "TecSpecandBMsTemplate.xls" MS-Excel workbook must be submitted along with the technical bids. If a bidder proposes multiple solutions, a separate "TecSpecandBMsTemplate.xls" file must be filled in and submitted for each solution.

The vendor must submit both soft and hard copies of the filled HPC-InfraReq sheet in the "TecSpecandBMsTemplate.xls" file along with the technical bids. The hard copy of the template file must also be duly signed by the authorized person(s) of the bidder.

Annexure 5

RFP Compliance Statement

The RFP Compliance Statement as described in the "HPC-RFPCompliance" sheet of the "TecSpecandBMsTemplate.xls" MS-Excel workbook must be submitted along with the technical bids. If a bidder proposes multiple solutions, a separate "TecSpecandBMsTemplate.xls" must be filled in and submitted for each solution.

The vendor must submit both soft and hard copies of the filled sheet in the "TecSpecandBMsTemplate.xls" file along with the technical bids. The hard copy of the template file must also be duly signed by the authorized person(s) of the bidder.