
Reporting Problems

To send comments or report errors regarding this document, please email [email protected]. For issues not related to this document, contact your service provider. Refer to Document ID 1424629.

Content Creation Date 2010/9/26


CX4-240 Storage Systems

Hardware and Operational Overview

This document describes the hardware, powerup and powerdown sequences, and status indicators for the CX4-240 storage systems with UltraFlex™ technology. Major topics are:

• Storage-system major components
• Storage processor enclosure (SPE)
• Disk-array enclosures (DAEs)
• Standby power supplies (SPSs)
• Powerup and powerdown sequence
• Status lights (LEDs) and indicators

Storage-system major components

The storage system consists of:

• A storage processor enclosure (SPE)
• Two standby power supplies (SPSs)
• One Fibre Channel disk-array enclosure (DAE) with a minimum of five disk drives
• Optional DAEs

A DAE is sometimes referred to as a DAE3P.

The high-availability features for the storage system include:

• Redundant storage processors (SPs) configured with UltraFlex™ I/O modules
• Standby power supplies (SPSs)
• Redundant power supply/cooling modules (referred to as power/cooling modules)

The SPE is a highly available storage enclosure with redundant power and cooling. It is 2U high (a U is a NEMA unit; each unit is 1.75 inches) and includes two storage processors (SPs) and the power/cooling modules. Each storage processor (SP) uses UltraFlex I/O modules to provide:

• 4 Gb/s and/or 8 Gb/s Fibre Channel connectivity, and 1 Gb/s and/or 10 Gb/s Ethernet (iSCSI) connectivity, through its front-end ports to Windows, VMware, and UNIX hosts
• 10 Gb/s Ethernet Fibre Channel over Ethernet (FCoE) connectivity through its front-end ports to Windows, VMware, and Linux hosts. The FCoE I/O modules require FLARE 04.30.000.5.5xx or later on the storage system.
• 4 Gb/s Fibre Channel connectivity through its back-end ports to the storage system's disk-array enclosures (DAEs)

The SP senses the speed of the incoming host I/O and sets the speed of its front-end ports to the lowest speed it senses. The speed of the DAEs determines the speed of the back-end ports through which they are connected to the SPs.

Table 1 gives the number of Fibre Channel, FCoE, and iSCSI front-end I/O ports and Fibre Channel back-end ports supported for each SP. The storage system cannot have the maximum number of Fibre Channel front-end ports, the maximum number of FCoE front-end ports, and the maximum number of iSCSI front-end ports listed in Table 1 at the same time. The actual number of Fibre Channel, FCoE, and iSCSI front-end ports for an SP is determined by the number and type of UltraFlex I/O modules in the storage system. For more information, refer to "UltraFlex I/O modules" later in this document.

Table 1  Front-end and back-end ports per SP (CX4-240)

• Fibre Channel front-end I/O ports: 2 or 6
• FCoE front-end I/O ports: 1 or 2
• iSCSI front-end I/O ports: 2, 4, or 6
• Fibre Channel back-end disk ports: 2

The storage system requires at least five disks and works in conjunction with one or more disk-array enclosures (DAEs) to provide terabytes of highly available disk storage. A DAE is a disk enclosure with slots for up to 15 Fibre Channel or SATA disks. The disks within the DAE are connected through a 4 Gb/s point-to-point Fibre Channel fabric. Each DAE connects to the SPE or another DAE with simple FC-AL serial cabling. The CX4-240 storage system supports a total of 16 DAEs for a total of 240 disks on its two back-end buses. Each bus supports as many as eight DAEs for a total of 120 disks per bus. You can place the disk enclosures in the same cabinet as the SPE, or in one or more separate cabinets. High-availability features are standard.
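The drive-count limits above follow directly from the bus layout. The following Python sketch is illustrative only (the constant and function names are not from any EMC tool); it simply restates the arithmetic of 2 back-end buses, up to 8 DAEs per bus, and 15 disk slots per DAE.

    # Illustrative only: restates the CX4-240 drive-count limits quoted above.
    DISKS_PER_DAE = 15
    DAES_PER_BUS = 8
    BACK_END_BUSES = 2

    def max_disks(buses: int = BACK_END_BUSES,
                  daes_per_bus: int = DAES_PER_BUS,
                  disks_per_dae: int = DISKS_PER_DAE) -> int:
        """Maximum number of disk slots for the given back-end layout."""
        return buses * daes_per_bus * disks_per_dae

    print(DAES_PER_BUS * DISKS_PER_DAE)  # 120 disks per back-end bus
    print(max_disks())                   # 240 disks across 16 DAEs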


Storage processor enclosure (SPE)

The SPE components include:

• A sheet-metal enclosure with a midplane and front bezel
• Two storage processors (SP A and SP B), each consisting of one CPU module and an I/O carrier with slots for I/O modules
• Four power supply/system cooling modules (referred to as power/cooling modules) – two associated with SP A and two associated with SP B
• Two management modules – one associated with SP A and one associated with SP B. Each module has SPS, management, and service connectors.

Figure 1 and Figure 2 show the SPE components. If the enclosure provides slots for two identical components, the component in slot A is called component-name A. The second component is called component-name B. For increased clarity, the following figures depict the SPE outside of the rack cabinet. Your SPE may arrive installed in a rackmount cabinet.

Figure 1  SPE components, front with bezel removed: power/cooling modules A0-A1 and B0-B1, CPU module A, and CPU module B

Figure 2  SPE components, back: SP A and SP B with management module A and management module B

Midplane

The midplane distributes power and signals to all the enclosure components. The CPU modules, I/O modules, and power/cooling modules plug directly into midplane connectors.

Front bezel

The front bezel has a key lock and two latch release buttons. Pressing the latch release buttons releases the bezel from the enclosure.

Storage processors (SPs)

The SP is the SPE's intelligent component and acts as the control center. Each SP includes:

• One CPU module with:
  – One dual-core processor
  – 4 GB of DDR-II DIMM (double data rate, dual in-line memory module) memory
• One I/O module enclosure with five UltraFlex I/O module slots, of which four are usable
• One management module with:
  – One GbE Ethernet LAN port for management and backup (RJ45 connector)
  – One GbE Ethernet LAN port for peer service (RJ45 connector)
  – One serial port for connection to a standby power supply (SPS) (micro DB9 connector)
  – One serial port for RS-232 connection to a service console (micro DB9 connector)

UltraFlex I/O modules

Table 2 lists the number of I/O modules the storage system supports and the slots the I/O modules can occupy. More slots are available for optional I/O modules than the maximum number of optional I/O modules supported because some slots are occupied by required I/O modules. With the exception of slots A0 and B0, the slots occupied by the required I/O modules can vary between configurations. Figure 3 shows the I/O module slot locations and the I/O modules for the standard minimum configuration with 1 GbE iSCSI modules. The 1 GbE iSCSI modules shown in this example could be 10 GbE iSCSI or FCoE I/O modules.

Table 2  Number of supported I/O modules per SP (CX4-240)

• All I/O modules: 4 per SP, in slots A0-A3 (SP A) and B0-B3 (SP B)
• Optional I/O modules: 2 per SP, in slots A1-A3 (SP A) and B1-B3 (SP B)

Figure 3  I/O module slot locations, slots A0-A4 and B0-B4 (1 GbE iSCSI and FC I/O modules for a standard minimum configuration shown)

The following types of modules are available:

• 4 or 8 Gb Fibre Channel (FC) modules with either:
  – 2 back-end (BE) ports for disk bus connections and 2 front-end (FE) ports for server I/O connections (connection to a switch or server HBA), or
  – 4 front-end (FE) ports for server I/O connections (connection to a switch or server HBA).
  The 8 Gb FC module requires FLARE 04.28.000.5.7xx or later.
• 10 Gb Ethernet (10 GbE) FCoE module with 2 FCoE front-end (FE) ports for server I/O connections (connection to an FCoE switch and from the switch to the server CNA). The 10 GbE FCoE module requires FLARE 04.30.000.5.5xx or later.
• 1 Gb Ethernet (1 GbE) or 10 Gb Ethernet (10 GbE) iSCSI module with 2 iSCSI front-end (FE) ports for network server iSCSI I/O connections (connection to a network switch, router, server NIC, or iSCSI HBA). The 10 GbE iSCSI module requires FLARE 04.29 or later.


Table 3 lists the I/O modules available for the storage system and the number of each module that is standard and/or optional.

Table 3  I/O modules per SP (number of modules per SP, standard and optional)

• 4 or 8 Gb FC module with 2 BE ports (0, 1) and 2 FE ports (2, 3): 1 standard, 0 optional
• 4 or 8 Gb FC module with 4 FE ports (0, 1, 2, 3): 0 standard, 1 optional
• 10 GbE FCoE module with 2 FE ports (0, 1): 1 or 0 standard (see note 1), 1 optional (see note 2)
• 1 or 10 GbE iSCSI module with 2 FE ports (0, 1): 1 or 0 standard (see note 1), 1 optional (see note 2)

Note 1: The standard system has either 1 FCoE module or 1 iSCSI module per SP, but not both types.
Note 2: The maximum number of 10 GbE FCoE or 10 GbE iSCSI modules per SP is 1.

IMPORTANT: Always install I/O modules in pairs – one module in SP A and one module in SP B. Both SPs must have the same type of I/O modules in the same slots. Slots A0 and B0 always contain a Fibre Channel I/O module with two back-end ports and two front-end ports. The other available slots can contain any type of I/O module that is supported for the storage system.

The actual number of each type of optional Fibre Channel, FCoE, and iSCSI I/O modules supported for a specific storage-system configuration is limited by the available slots and the maximum number of Fibre Channel, FCoE, and iSCSI front-end ports supported for the storage system. Table 4 lists the maximum number of Fibre Channel, FCoE, and iSCSI FE ports per SP for the storage system.


Table 4  Maximum number of front-end (FE) ports per SP (CX4-240)

• Maximum Fibre Channel FE ports per SP: 6
• Maximum FCoE FE ports per SP: 4
• Maximum iSCSI FE ports per SP: 6 (see note)

Note: The maximum number of 10 GbE iSCSI ports per SP is 2.
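The interplay between the module counts in Table 3 and the port maxima in Table 4 can be summarized in a short sketch. The following Python fragment is illustrative only and is not part of any EMC tool; the module type names are hypothetical labels for the modules listed in Table 3, and the check simply totals the front-end ports for one SP and compares them against Table 4.

    # Illustrative only: totals front-end (FE) ports for one SP's I/O modules
    # and checks the totals against the Table 4 maxima.
    FE_PORTS_PER_MODULE = {
        "fc_2be_2fe": ("fc", 2),    # 4/8 Gb FC module: 2 BE + 2 FE ports
        "fc_4fe": ("fc", 4),        # 4/8 Gb FC module: 4 FE ports
        "fcoe_2fe": ("fcoe", 2),    # 10 GbE FCoE module: 2 FE ports
        "iscsi_2fe": ("iscsi", 2),  # 1 or 10 GbE iSCSI module: 2 FE ports
    }
    MAX_FE_PORTS_PER_SP = {"fc": 6, "fcoe": 4, "iscsi": 6}  # Table 4

    def check_fe_ports(modules_in_sp: list[str]) -> dict[str, int]:
        """Total FE ports by type for one SP; raise if a maximum is exceeded."""
        totals = {"fc": 0, "fcoe": 0, "iscsi": 0}
        for module in modules_in_sp:
            kind, fe_ports = FE_PORTS_PER_MODULE[module]
            totals[kind] += fe_ports
        for kind, count in totals.items():
            if count > MAX_FE_PORTS_PER_SP[kind]:
                raise ValueError(f"too many {kind} FE ports per SP: {count}")
        return totals

    # Standard minimum configuration: FC module in slot A0 plus one iSCSI module.
    print(check_fe_ports(["fc_2be_2fe", "iscsi_2fe"]))  # {'fc': 2, 'fcoe': 0, 'iscsi': 2}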

Back-end (BE) port connectivity

Each FC back-end port has a connector for a copper SFP-HSSDC2 (small form factor pluggable to high speed serial data connector) cable. Back-end connectivity cannot exceed 4 Gb/s regardless of the I/O module's speed. Table 5 lists the FC module ports that support the back-end buses.

Table 5  FC I/O module ports supporting back-end buses

• CX4-240, FC module in slots A0 and B0: Bus 0 (port 0), Bus 1 (port 1)

Fibre Channel (FC) front-end connectivity

Each 4 Gb or 8 Gb FC front-end port has an SFP shielded Fibre Channel connector for an optical cable. The FC front-end ports on a 4 Gb FC module support 1, 2, or 4 Gb/s connectivity, and the FC front-end ports on an 8 Gb FC module support 2, 4, or 8 Gb/s connectivity. You cannot use the FC front-end ports on an 8 Gb FC module in a 1 Gb/s Fibre Channel environment. You can use the FC front-end ports on a 4 Gb FC module in an 8 Gb/s Fibre Channel environment if the FC switch or HBA ports to which the module's FE ports connect auto-adjust their speed to 4 Gb/s.

FCoE front-end connectivity

Each FCoE front-end port on a 10 GbE FCoE module runs at a fixed 10 Gb/s speed, and must be cabled to an FCoE switch. Versions that support fiber-optic cabling include SFP shielded connectors for optical Ethernet cable. Supported active twinaxial cables include SFP connectors at either end; the ports in FCoE modules intended for active twinaxial cabling do not include SFPs.

iSCSI front-end connectivity

Each iSCSI front-end port on a 1 GbE iSCSI module has a 1GBaseT copper connector for a copper Ethernet cable, and can auto-adjust the front-end port speed to 10 Mb/s, 100 Mb/s, or 1 Gb/s. Each iSCSI front-end port on a 10 GbE iSCSI module has an SFP shielded connector for an optical Ethernet cable, and runs at a fixed 10 Gb/s speed. You can connect 10 GbE iSCSI modules to supported switches with active twinaxial cable after removing the optical SFP connectors. Because the 1 GbE and the 10 GbE Ethernet iSCSI connection topologies are not interoperable, the 1 GbE and the 10 GbE iSCSI modules cannot operate on the same physical network.
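The front-end speed rules for the FC modules described above reduce to a small lookup: a link comes up only at a speed that both the FE port and the attached switch or HBA port support. The following Python sketch is illustrative only (the names are not from any EMC tool); it encodes just the supported link speeds stated in this section.

    # Illustrative only: encodes the FC front-end speeds quoted above.
    SUPPORTED_FE_SPEEDS_GBPS = {
        "4Gb FC module": {1, 2, 4},   # 4 Gb module FE ports: 1, 2, or 4 Gb/s
        "8Gb FC module": {2, 4, 8},   # 8 Gb module FE ports: 2, 4, or 8 Gb/s
    }

    def fe_link_possible(module: str, switch_or_hba_speeds: set[int]) -> bool:
        """True if the FE port and the attached switch/HBA port share a speed."""
        return bool(SUPPORTED_FE_SPEEDS_GBPS[module] & switch_or_hba_speeds)

    # A 4 Gb module works in an 8 Gb/s fabric only if the switch or HBA port
    # can auto-adjust down to 4 Gb/s; an 8 Gb module cannot join a 1 Gb/s fabric.
    print(fe_link_possible("4Gb FC module", {4, 8}))  # True (negotiates 4 Gb/s)
    print(fe_link_possible("8Gb FC module", {1}))     # False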

Power/cooling modules

Each of the four power/cooling modules integrates one independent power supply and one blower into a single module. The power supply in each module is an auto-ranging, power-factor-corrected, multi-output, offline converter. The four power/cooling modules (A0, A1, B0, and B1) are located above the CPUs and are accessible from the front of the unit. A0 and A1 share load currents and provide power and cooling for SP A, and B0 and B1 share load currents and provide power and cooling for SP B. A0 and B0 share a line cord, and A1 and B1 share a line cord. An SP or power/cooling module with power-related faults does not adversely affect the operation of any other component. If one power/cooling module fails, the others take over.

SPE field-replaceable units (FRUs)

The following are field-replaceable units (FRUs) that you can replace while the system is powered up:

• Power/cooling modules
• Management modules
• SFP modules, which plug into the 4 Gb and 8 Gb Fibre Channel front-end port connectors in the Fibre Channel I/O modules

Contact your service provider to replace a failed CPU board, CPU memory module, or I/O module.


Disk-array enclosures (DAEs)

DAE UltraPoint™ (sometimes called "point-to-point") disk-array enclosures are highly available, high-performance, high-capacity storage-system components that use a Fibre Channel Arbitrated Loop (FC-AL) as the interconnect interface. A disk enclosure connects to another DAE or an SPE and is managed by storage-system software in RAID (redundant array of independent disks) configurations.

The enclosure is only 3U (5.25 inches) high, but can include 15 hard disk drive/carrier modules. Its modular, scalable design allows for additional disk storage as your needs increase. A DAE includes either high-performance Fibre Channel disk modules or economical SATA (Serial Advanced Technology Attachment, SATA II) disk modules. CX4-240 systems also support solid state disk (SSD) Fibre Channel modules, also known as enterprise flash drive (EFD) Fibre Channel modules. You cannot mix SATA and Fibre Channel components within a DAE, but you can integrate and connect FC and SATA enclosures within a storage system. The enclosure operates at either a 2 or 4 Gb/s bus speed (2 Gb/s components, including disks, cannot operate on a 4 Gb/s bus).

Simple serial cabling provides easy scalability. You can interconnect disk enclosures to form a large disk storage system; the number and size of the buses depend on the capabilities of your storage processor. You can place the disk enclosures in the same cabinet, or in one or more separate cabinets. High-availability features are standard in the DAE.

The DAE includes the following components:

• A sheet-metal enclosure with a midplane and front bezel
• Two FC-AL link control cards (LCCs) to manage disk modules
• As many as 15 disk modules
• Two power supply/system cooling modules (referred to as power/cooling modules)

Any unoccupied disk module slot has a filler module to maintain air flow. The power supply and system cooling components of the power/cooling modules function independently of each other, but the assemblies are packaged together into a single field-replaceable unit (FRU).


The LCCs, disk modules, power supply/system cooling modules, and filler modules are field-replaceable units (FRUs), which can be added or replaced without hardware tools while the storage system is powered up. Figure 4 shows the disk enclosure components. Where the enclosure provides slots for two identical components, the components are called component-name A or component-name B, as shown in the illustrations. For increased clarity, the following figures depict the DAE outside of the rack or cabinet. Your DAEs may arrive installed in a rackmount cabinet along with the SPE.

Figure 4  DAE outside the cabinet, front and rear views: power/cooling modules A and B, link control cards A and B, power LED (green or blue), fault LEDs (amber), and disk activity LEDs (green)

As shown in Figure 5, an enclosure address (EA) indicator is located on each LCC. (The EA is sometimes referred to as an enclosure ID.) Each link control card (LCC) includes a bus (loop) identification indicator. The storage processor initializes bus ID when the operating system is loaded.


Figure 5  Disk enclosure bus (loop) and address indicators: the bus ID and enclosure address displays (0-7) and the EA selection button (press to change the EA) on LCC A and LCC B

The enclosure address is set at installation. Disk module IDs are numbered left to right (looking at the front of the unit) and are contiguous throughout a storage system: enclosure 0 contains modules 0-14; enclosure 1 contains modules 15-29; enclosure 2 includes 30-44, and so on.
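Because the module IDs are contiguous, a disk's system-wide ID can be computed from its enclosure address and slot position. The following Python sketch is illustrative only (the function name is not from any EMC tool), assuming the 15-slots-per-enclosure numbering described above.

    # Illustrative only: maps (enclosure address, slot) to the contiguous
    # disk module ID described above (15 slots per enclosure).
    SLOTS_PER_DAE = 15

    def disk_module_id(enclosure_address: int, slot: int) -> int:
        """Storage-system-wide disk module ID for a slot (0-14) in a DAE."""
        if not 0 <= slot < SLOTS_PER_DAE:
            raise ValueError("slot must be in the range 0-14")
        return enclosure_address * SLOTS_PER_DAE + slot

    print(disk_module_id(0, 14))  # 14 (last module in enclosure 0)
    print(disk_module_id(1, 0))   # 15 (first module in enclosure 1)
    print(disk_module_id(2, 5))   # 35 (enclosure 2 holds modules 30-44)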

Midplane

A midplane between the disk modules and the LCC and power/cooling modules distributes power and signals to all components in the enclosure. LCCs, power/cooling modules, and disk drives – the enclosure's field-replaceable units (FRUs) – plug directly into the midplane.

Front bezel

The front bezel has a locking latch and an electromagnetic interference (EMI) shield. You must remove the bezel to remove and install drive modules. EMI compliance requires a properly installed front bezel.

Link control cards (LCCs)

An LCC supports and controls one Fibre Channel bus and monitors the DAE.


Figure 6  LCC connectors and status LEDs: primary link active LED, expansion link active LED, fault LED (amber), and power LED (green), plus the PRI and EXP connectors on LCC A and LCC B

A blue link active LED indicates a DAE operating at 4 Gb/s; the link active LEDs are green in a DAE operating at 2 Gb/s.

The LCCs in a DAE connect to other Fibre Channel devices (processor enclosures, other DAEs) with twin-axial copper cables. The cables connect LCCs in a storage system together in a daisy-chain (loop) topology. Internally, each DAE LCC uses FC-AL protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion through a switch. The LCC independently receives and electrically terminates incoming FC-AL signals.

For traffic from the system's storage processors, the LCC switch passes the input signal from the primary port (PRI) to the drive being accessed; the switch then forwards the drive's output signal to the expansion port (EXP), where cables connect it to the next DAE in the loop. (If the target drive is not in the LCC's enclosure, the switch passes the input signal directly to the EXP port.) At the unconnected expansion port (EXP) of the last LCC, the output signal (from the storage processor) is looped back to the input signal source (to the storage processor). For traffic directed to the system's storage processors, the switch passes input signals from the expansion port directly to the output signal destination of the primary port.

Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled FRU (field-replaceable unit) monitor program. The monitor communicates status to the storage processor, which polls disk enclosure status. LCC firmware also controls the LCC port-bypass circuits and the disk-module status LEDs. LCCs do not communicate with or control each other. Captive screws on the LCC lock it into place to ensure proper connection to the midplane. You can add or replace an LCC while the disk enclosure is powered up.

Disk modules

Each disk module consists of one disk drive in a carrier. You can visually distinguish between module types by their different latch and handle mechanisms and by the type, capacity, and speed labels on each module. An enclosure can include Fibre Channel or SATA disk modules, but not both types. You can add or remove a disk module while the DAE is powered up, but you should exercise special care when removing modules while they are in use. Drive modules are extremely sensitive electronic components.

Disk drives

The DAE supports Fibre Channel and SATA disks. The Fibre Channel (FC) disks, including enterprise flash (SSD) versions, conform to FC-AL specifications and 4 Gb/s Fibre Channel interface standards, and support dual-port FC-AL interconnects through the two LCCs. SATA disks conform to the Serial ATA II Electrical Specification 1.0 and include dual-port SATA interconnects; a paddle card on each drive converts the assembly to Fibre Channel operation. The disk module slots in the enclosure accommodate 2.54 cm (1 in) by 8.75 cm (3.5 in) disk drives.

The disks currently available for the storage system and their usable capacities are listed in the EMC® CX4 Series Storage Systems – Disk and FLARE® OE Matrix (P/N 300-007-437) on the EMC Powerlink website. The vault disks must all have the same capacity and the same speed.

The 1 TB, 5.4K rpm SATA disks are available only in a DAE that is fully populated with these disks. Do not intermix 1 TB, 5.4K rpm SATA disks with 1 TB, 7.2K rpm SATA disks in the same DAE, and do not replace a 1 TB, 5.4K rpm SATA disk with a 1 TB, 7.2K rpm SATA disk, or vice versa.


The 1 TB SATA disks operate on a 4 Gb/s back-end bus like the 4 Gb FC disks, but have a 3 Gb/s bandwidth. Since they have a Fibre Channel interface to the back-end loop, these disks are sometimes referred to as Fibre Channel disks.

Disk power savings

Some disks support power savings, which lets you assign power saving settings to these disks in a storage system running FLARE version 04.29.000.5.xxx or later, so that these disks transition to the low power state after being idle for at least 30 minutes. For the currently available disks that support power savings, refer to the EMC® CX4 Series Storage Systems – Disk and FLARE® OE Matrix (P/N 300-007-437) on the EMC Powerlink website.

Drive carrier

The disk drive carriers are metal and plastic assemblies that provide smooth, reliable contact with the enclosure slot guides and midplane connectors. Each carrier has a handle with a latch and spring clips. The latch holds the disk module in place to ensure proper connection with the midplane. Disk drive activity/fault LEDs are integrated into the carrier.

Power/cooling modules

The power/cooling modules are located above and below the LCCs. The units integrate independent power supply and dual-blower cooling assemblies into a single module. Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each supply supports a fully configured DAE and shares load currents with the other supply. The drives and LCCs have individual soft-start switches that protect the disk drives and LCCs if they are installed while the disk enclosure is powered up. A FRU (disk, LCC, or power/cooling module) with power-related faults does not adversely affect the operation of any other FRU.

The enclosure cooling system includes two dual-blower modules. If one blower fails, the others will speed up to compensate. If two blowers in a system (both in one power/cooling module, or one in each module) fail, the DAE goes offline within two minutes.


Standby power supplies (SPSs)

Each of the two 1U, 1200-watt DC SPSs provides backup power for one SP and for the first (enclosure 0, bus 0) DAE adjacent to it. This backup power lets the storage system save its write cache to disk during a power failure, which prevents data loss and allows write caching to continue during normal operation. A faulted or not fully charged SPS disables the write cache.

Each SPS rear panel has one AC inlet power connector with a power switch, AC outlets for the SPE and the first DAE (EA 0, bus 0), and one phone-jack type connector for connection to an SP. Figure 7 shows the SPS connectors. A service provider can replace an SPS while the storage system is powered up.

Figure 7  1200 W SPS connectors: SPE outlet, SP interface connector, AC power connector, power switch, and the active (green), on battery (amber), fault (amber), and replace battery (amber) LEDs


Powerup and powerdown sequence

The SPE and DAE do not have power switches.

Powering up the storage system

1. Verify the following:
   ❑ Master switch/circuit breakers for each cabinet/rack power strip are off.
   ❑ The two power cords for the SPE are plugged into the SPSs and the power cord retention bails are in place.
   ❑ Power cords for the first DAE (EA 0, bus 0; often called the DAE-OS) are plugged into the SPSs and the power cord retention bails are in place.
   ❑ The power cords for the SPSs and any other DAEs are plugged into the cabinet's power strips.
   ❑ The power switches on the SPSs are in the on position.
   ❑ Any other devices in the cabinet are correctly installed and ready for powerup.

2. Turn on the master switch/circuit breakers for each cabinet/rack power strip. In the 40U-C cabinet, the master switches are on the power distribution panels (PDPs), as shown in Figure 8 and Figure 9.


Each AC circuit in the 40U-C cabinet requires a source connection that can support a minimum of 4800 VA of single phase, 200-240 V AC input power. For high availability, the left and right sides of the cabinet must receive power from separate branch feed circuits.

Each pair of power distribution panels (PDPs) in the 40U-C cabinet can support a maximum of 24 A AC current draw from devices connected to its power distribution units (PDUs). Most cabinet configurations draw less than 24 A AC power and require only two discrete 240 V AC power sources. If the total AC current draw of all the devices in a single cabinet exceeds 24 A, the cabinet requires two additional 240 V power sources to support a second pair of PDPs. Use the published technical specifications and device rating labels to determine the current draw of each device in your cabinet and calculate the total.
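As a worked example of the calculation described above, the sketch below totals per-device current draws and compares the total with the 24 A budget of one PDP pair. It is illustrative only; the device names and ampere values are hypothetical placeholders, not published ratings, so substitute the values from your devices' technical specifications and rating labels.

    # Hypothetical worked example of the cabinet current-draw calculation.
    # The ampere values below are placeholders, not published ratings.
    PDP_PAIR_LIMIT_AMPS = 24  # maximum AC draw supported by one pair of PDPs

    device_draw_amps = {
        "SPE (via SPS A and B)": 6.0,  # placeholder
        "DAE 0 (DAE-OS)": 4.0,         # placeholder
        "DAE 1": 4.0,                  # placeholder
        "DAE 2": 4.0,                  # placeholder
    }

    total = sum(device_draw_amps.values())
    print(f"Total cabinet draw: {total:.1f} A")
    if total > PDP_PAIR_LIMIT_AMPS:
        print("Exceeds one PDP pair; two additional 240 V power sources "
              "(a second pair of PDPs) are required.")
    else:
        print("Within the 24 A budget of a single PDP pair.")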


Figure 8  PDP master switches and power sources in the 40U-C cabinet with two PDPs used: the master switch on each PDP, power sources A and B, the SPS switches, the DAE-OS, and the SPE


Figure 9  PDP master switches and power sources in the 40U-C cabinet with four PDPs: the master switch on each PDP, power sources A, B, C, and D, the SPS switches, the DAE-OS, and the SPE

The storage system can take 8 to 10 minutes to complete a typical powerup. Amber warning LEDs flash during the power on self-test (POST) and then go off. The front fault LED and the SPS recharge LEDs commonly stay on for several minutes while the SPSs are charging. The powerup is complete when the CPU power light on each SP is steady green.


The CPU status lights are visible on the SPE when the front bezel is removed (Figure 10).

Figure 10  Location of the CPU status lights (SP A and SP B, front of the SPE)

If amber LEDs on the front or back of the storage system remain on for more than 10 minutes, make sure the storage system is correctly cabled, and then refer to the troubleshooting flowcharts for your storage system on the CLARiiON Tools page on the EMC Powerlink website (http://Powerlink.EMC.com). If you cannot determine any reasons for the fault, contact your authorized service provider.

Powering down the storage system

1. Stop all I/O activity to the SPE. If the server connected to the SPE is running the AIX, HP-UX, Linux, or Solaris operating system, back up critical data and then unmount the file systems. Stopping I/O allows the SP to destage cache data, and may take some time. The length of time depends on criteria such as the size of the cache, the amount of data in the cache, the type of data in the cache, and the target location on the disks, but it is typically less than one minute. We recommend that you wait five minutes before proceeding.

2. After five minutes, use the power switch on each SPS to turn off power. The SPE and primary DAE power down within two minutes.

This turns off power to the SPE and the first DAE (EA 0, bus 0). You do not need to turn off power to the other connected DAEs.

CAUTION: Never unplug the power supplies to shut down an SPE. Bypassing the SPS in that manner prevents the storage system from saving write cache data to the vault drives, and results in data loss. You will lose access to data, and the storage processor log displays an error message similar to the following:

    Enclosure 0 Disk 5 0x90a (Can’t Assign - Cache Dirty) 0 0xafb40 0x14362c

Contact your service provider if this situation occurs.


Status lights (LEDs) and indicators

Status lights made up of light-emitting diodes (LEDs) on the SPE, its FRUs, the SPSs, and the DAEs and their FRUs indicate the components' current status.

Storage processor enclosure (SPE) LEDs

This section describes the status LEDs visible from the front and the rear of the SPE.

SPE front status LEDs

Figure 11 and Figure 12 show the location of the SPE status LEDs that are visible from the front of the enclosure. Table 6 and Table 7 describe these LEDs.

Figure 11  SPE front status LEDs (bezel in place)


Figure 12  SPE front status LEDs (bezel removed), SP A and SP B

Table 6  Meaning of the SPE front status LEDs (bezel in place)

• Power (1 LED):
  – Off: SPE is powered down.
  – Solid blue: SPE is powered up.
• System fault (1 LED):
  – Off: SPE is operating normally.
  – Solid amber: A fault condition exists in the SPE. If the fault is not obvious from another fault LED on the front, look at the rear of the enclosure.

Table 7  Meaning of the SPE front status LEDs (bezel removed)

• Power/cooling module status (1 per module):
  – Off: Power/cooling module is not powered up.
  – Solid green: Module is powered up and operating normally.
  – Amber: Module is faulted.
• CPU power (1 per CPU):
  – Off: CPU is not powered up.
  – Solid green: CPU is powered up and operating normally.
• CPU fault (1 per CPU):
  – Blinking amber: Running powerup tests.
  – Solid amber: CPU is faulted.
  – Blinking blue: OS is loaded.
  – Solid blue: CPU is degraded.
• Unsafe to remove (1 per CPU):
  – Solid white: DO NOT REMOVE the module while this light is on.

SPE rear status LEDs

Table 8 describes the status LEDs that are visible from the rear of the SPE.

Table 8  Meaning of the SPE rear status LEDs

• Management module status (1 per module; see note 1):
  – Solid green: Power is being supplied to the module.
  – Off: Power is not being supplied to the module.
  – Amber: Module is faulted.
• I/O module status (1 per module; see note 1):
  – Solid green: Power is being supplied to the module.
  – Off: Power is not being supplied to the module.
  – Amber: Module is faulted.
• BE port link (1 per Fibre Channel back-end port; see note 2):
  – Off: No link because of one of the following conditions: the cable is disconnected, or the cable is faulted or is not a supported type.
  – Solid green: 1 Gb/s or 2 Gb/s link speed.
  – Solid blue: 4 Gb/s link speed.
  – Blinking green then blue: Cable fault.
• FE port link (1 per Fibre Channel front-end port; see note 2):
  – Off: No link because of one of the following conditions: the host is down, the cable is disconnected, an SFP is not in the port slot, or the SFP is faulted or is not a supported type.
  – Solid green: 1 Gb/s or 2 Gb/s link speed.
  – Solid blue: 4 Gb/s link speed.
  – Blinking green then blue: SFP or cable fault.

Note 1: The LED is on the module handle.
Note 2: The LED is next to the port connector.

DAE status LEDs

This section describes the following status LEDs and indicators:

• Front DAE and disk module status LEDs
• Enclosure address and bus ID indicators
• LCC and power/cooling module status LEDs

Front DAE and disk module status LEDs

Figure 13 shows the location of the DAE and disk module status LEDs that are visible from the front of the enclosure. Table 9 describes these LEDs.


Figure 13  Front DAE and disk module status LEDs (bezel removed): disk activity LED (green), disk fault LED (amber), DAE power LED (green or blue), and DAE fault LED (amber)


Table 9  Meaning of the front DAE and disk module status LEDs

• DAE power (1):
  – Off: DAE is not powered up.
  – Solid green: DAE is powered up and the back-end bus is running at 2 Gb/s.
  – Solid blue: DAE is powered up and the back-end bus is running at 4 Gb/s.
• DAE fault (1):
  – Solid amber: On when any fault condition exists; if the fault is not obvious from a disk module LED, look at the back of the enclosure.
• Disk activity (1 per disk module):
  – Off: Slot is empty or contains a filler module, or the disk is powered down by command, for example, as the result of a temperature fault.
  – Solid green: Drive has power but is not handling any I/O activity (the ready state).
  – Blinking green, mostly on: Drive is spinning and handling I/O activity.
  – Blinking green at a constant rate: Drive is spinning up or spinning down normally.
  – Blinking green, mostly off: Drive is powered up but not spinning; this is a normal part of the spin-up sequence, occurring during the spin-up delay of a slot.
• Disk fault (1 per disk module):
  – Solid amber: On when the disk module is faulty, or as an indication to remove the drive.

Enclosure address and bus ID indicators

Figure 14 shows the location of the enclosure address and bus ID indicators that are visible from the rear of the enclosure. In this example, the DAE is enclosure 2 on bus (loop) 1; note that the indicators for LCC A and LCC B always match. Table 10 describes these indicators.


Figure 14  Location of the enclosure address and bus ID indicators: the bus ID and enclosure address displays and the EA selection button on LCC A and LCC B

Table 10  Meaning of the enclosure address and bus ID indicators

• Enclosure address (8 indicators, green): The displayed number indicates the enclosure address.
• Bus ID (8 indicators, blue): The displayed number indicates the bus ID. A blinking bus ID indicates invalid cabling – LCC A and LCC B are not connected to the same bus, or the maximum number of DAEs allowed on the bus is exceeded.

DAE power/cooling module status LEDs

Figure 15 shows the location of the status LEDs for the power supply/system cooling modules (referred to as power/cooling modules). Table 11 describes these LEDs.


Figure 15  DAE power/cooling module status LEDs: power LED (green), power fault LED (amber), and blower fault LED (amber) on each module

Table 11  Meaning of the DAE power/cooling module status LEDs

• Power supply active (1 per supply, green): On when the power supply is operating.
• Power supply fault (1 per supply, amber; see note): On when the power supply is faulty or is not receiving AC line voltage. Flashing when either a multiple blower or ambient over-temperature condition has shut off power to the system.
• Blower fault (1 per cooling module, amber; see note): On when a single blower in the power supply is faulty.

Note: The DAE continues running with a single power supply and three of its four blowers. Removing a power/cooling module constitutes a multiple blower fault condition, and will power down the enclosure unless you replace a blower within two minutes.

DAE LCC status LEDs

Figure 16 shows the location of the status LEDs for a link control card (LCC). Table 12 describes these LEDs.


Figure 16  DAE LCC status LEDs: primary link active LED (green or blue), expansion link active LED (2 Gb/s green, 4 Gb/s blue), fault LED (amber), and power LED (green)

Table 12  Meaning of the DAE LCC status LEDs

• LCC power (1 per LCC):
  – Green: On when the LCC is powered up.
• LCC fault (1 per LCC):
  – Amber: On when either the LCC or a Fibre Channel connection is faulty. Also on during the power on self-test (POST).
• Primary link active (1 per LCC):
  – Green: On when a 2 Gb/s primary connection is active.
  – Blue: On when a 4 Gb/s primary connection is active.
• Expansion link active (1 per LCC):
  – Green: On when a 2 Gb/s expansion connection is active.
  – Blue: On when a 4 Gb/s expansion connection is active.

SPS status LEDs

Figure 17 shows the location of the SPS status LEDs that are visible from the rear. Table 13 describes these LEDs.


Figure 17  1200 W SPS status LEDs: active LED (green), on battery LED (amber), fault LED (amber), and replace battery LED (amber)

Table 13  Meaning of the 1200 W SPS status LEDs

• Active (1 per SPS, green): When this LED is steady, the SPS is ready and operating normally. When this LED flashes, the batteries are being recharged. In either case, the output from the SPS is supplied by AC line input.
• On battery (1 per SPS, amber): The AC line power is no longer available and the SPS is supplying output power from its battery. When battery power comes on, and no other online SPS is connected to the SPE, the file server writes all cached data to disk, and the event log records the event. Also on briefly during the battery test.
• Replace battery (1 per SPS, amber): The SPS battery is not fully charged and may not be able to serve its cache flushing function. With the battery in this state, and no other online SPS connected to the SPE, the storage system disables write caching, writing any modified pages to disk first. Replace the SPS as soon as possible.
• Fault (1 per SPS, amber): The SPS has an internal fault. The SPS may still be able to run online, but write caching cannot occur. Replace the SPS as soon as possible.


Copyright © 2008-2010 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks mentioned herein are the property of their respective owners.