Dell AX4-5 Manual
Table 1-3 lists the hardware requirements for the disk processor enclosure (DPE), disk array enclosures (DAE), and standby power supplies (SPS).

NOTE: Ensure that the core software version running on the storage system is supported by Dell. For specific version requirements, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Cluster website at dell.com/ha.

Table 1-3. Dell/EMC Storage System Requirements

Storage system: AX4-5
Minimum required storage: 1 DPE with at least 4 and up to 12 hard drives
Possible storage expansion: Up to 3 DAEs with a maximum of 12 hard drives each
SPS: The first SPS is required; the second SPS is optional

Supported Cluster Configurations

Direct-Attached Cluster

In a direct-attached cluster, all nodes of the cluster are directly attached to a single storage system. In this configuration, the RAID controllers (or storage processors) on the storage system are connected by cables directly to the Fibre Channel HBA ports in the nodes.

Figure 1-1 shows a basic direct-attached, single-cluster configuration.
Figure 1-1. Direct-Attached, Single-Cluster Configuration

EMC® PowerPath® Limitations in a Direct-Attached Cluster

EMC PowerPath provides failover capabilities and multiple path detection, as well as dynamic load balancing between multiple ports on the same storage processor. However, direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of this single-port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.

SAN-Attached Cluster

In a SAN-attached cluster, all of the nodes are attached to a single storage system or to multiple storage systems through a SAN using redundant switch fabrics. SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance.

Figure 1-2 shows a SAN-attached cluster.
Figure 1-2. SAN-Attached Cluster

Other Documents You May Need

CAUTION: The safety information that is shipped with your system provides important safety and regulatory information. Warranty information may be included within this document or as a separate document.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.

- The Rack Installation Guide included with your rack solution describes how to install your system into a rack.
- The Getting Started Guide provides an overview of initially setting up your system.
- The Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2003 operating system.
- The Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2008 operating system.
- The Dell Cluster Configuration Support Matrices provide a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster.
- The HBA documentation provides installation instructions for the HBAs.
- Systems management software documentation describes the features, requirements, installation, and basic operation of the software.
- Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
- The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.
- The EMC PowerPath documentation that came with your HBA kit(s) and the Dell/EMC Storage Enclosure User's Guides.
- Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.

  NOTE: Always read the updates first because they often supersede information in other documents.

- Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.
Cabling Your Cluster Hardware

NOTE: To configure Dell blade server modules in a Dell™ PowerEdge™ cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.

Cabling the Mouse, Keyboard, and Monitor

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. See the documentation included with your rack for instructions on cabling each node's connections to the switch box.

Cabling the Power Supplies

Refer to the documentation for each component in your cluster solution to ensure that the specific power requirements are satisfied.

The following guidelines are recommended to protect your cluster solution from power-related failures:

- For nodes with multiple power supplies, plug each power supply into a separate AC circuit.
- Use uninterruptible power supplies (UPS).
- For some environments, consider having backup generators and power from separate electrical substations.

Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling a cluster solution consisting of two PowerEdge systems and two storage systems. To ensure redundancy, the primary power supplies of all the components are grouped onto one or two circuits and the redundant power supplies are grouped onto a different circuit.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems and One Standby Power Supply (SPS) in an AX4-5 Storage System

NOTE: This illustration is intended only to demonstrate the power distribution of the components.
Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems and Two SPSs in an AX4-5 Storage System

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.
Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.

Figure 2-3. Example of Network Cabling Connection

Table 2-1. Network Connections

Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.

Private network: A dedicated connection for sharing cluster health and status information only.
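Once both networks are cabled, it can be helpful to confirm from each node that the peer node answers on both the public and the private segment before forming the cluster. The following is a minimal, optional Python sketch; the peer addresses shown are hypothetical placeholders and must be replaced with the addresses actually assigned in your environment.

```python
import subprocess

# Hypothetical addresses of this node's partner; replace with the addresses
# actually assigned to the other cluster node in your environment.
PEER_ADDRESSES = {
    "public network": "192.168.10.12",  # placeholder client LAN address
    "private network": "10.0.0.2",      # placeholder heartbeat address
}

def reachable(address):
    """Send a single ICMP echo request (Windows 'ping -n 1') and report success."""
    result = subprocess.run(
        ["ping", "-n", "1", address],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for network, address in PEER_ADDRESSES.items():
    status = "OK" if reachable(address) else "NOT REACHABLE - check cabling"
    print(f"{network} ({address}): {status}")
```

Run the sketch on each node in turn; a failure on only one segment usually points to the cable or switch port for that segment rather than to the adapter settings.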
Cabling the Public Network

Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.

Cabling the Private Network

The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes two possible private network configurations.

NOTE: For more information on the supported cable types, see your system or NIC documentation.

Table 2-2. Private Network Hardware Components and Connections

Network switch
  Hardware components: Gigabit or 10 Gigabit Ethernet network adapters and switches
  Connection: Depending on the hardware, connect the CAT5e or CAT6 cables, the multimode optical cables with LC (Local Connector) connectors, or the twinax cables from the network adapters in the nodes to a switch.

Point-to-point (two-node clusters only)
  Hardware components: Gigabit or 10 Gigabit Ethernet network adapters with RJ-45 connectors
  Connection: Connect a standard CAT5e or CAT6 Ethernet cable between the network adapters in both nodes.

  Hardware components: 10 Gigabit Ethernet network adapters with SFP+ connectors
  Connection: Connect a twinax cable between the network adapters in both nodes.

  Hardware components: Optical Gigabit or 10 Gigabit Ethernet network adapters with LC connectors
  Connection: Connect a multimode optical cable between the network adapters in both nodes.
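Because each node carries one adapter on the public segment and a different adapter on the private segment, it is worth double-checking which local address ended up on which subnet. The sketch below is illustrative only: the subnet prefixes are placeholders, not values prescribed by this guide, and must be replaced with the prefixes used in your environment.

```python
import socket

# Placeholder subnet prefixes; substitute the prefixes actually used for the
# client LAN and for the dedicated cluster heartbeat segment.
PUBLIC_PREFIX = "192.168.10."
PRIVATE_PREFIX = "10.0.0."

# List every IPv4 address bound to this node's network adapters.
_, _, addresses = socket.gethostbyname_ex(socket.gethostname())

for address in addresses:
    if address.startswith(PRIVATE_PREFIX):
        role = "private (cluster heartbeat) network"
    elif address.startswith(PUBLIC_PREFIX):
        role = "public (client LAN) network"
    else:
        role = "unclassified - verify which adapter this address belongs to"
    print(f"{address}: {role}")
```

Each node should report exactly one address on the private prefix; anything else suggests the heartbeat adapter was cabled or addressed on the wrong segment.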
Using Dual-Port Network Adapters

You can configure your cluster to use the public network as a failover path for private network communications. If dual-port network adapters are used, do not use both ports simultaneously to support both the public and private networks.

NIC Teaming

NIC teaming combines two or more NICs to provide load balancing and fault tolerance. Your cluster supports NIC teaming, but only on the public network; NIC teaming is not supported on the private network. Use the same brand of NICs in a team, and do not mix brands of teaming drivers.

Cabling the Storage Systems

This section provides information for connecting your cluster to a storage system in a direct-attached configuration, or to one or more storage systems in a SAN-attached configuration.

Cabling Storage for Your Direct-Attached Cluster

NOTE: Ensure that the management port on each storage processor is connected to the network where the management station resides, using an Ethernet network cable.

A direct-attached cluster configuration consists of redundant Fibre Channel host bus adapter (HBA) ports cabled directly to a Dell/EMC storage system. Direct-attached configurations are self-contained and do not share any physical resources with other server or storage systems outside of the cluster.

Figure 2-4 shows an example of a direct-attached, single-cluster configuration with redundant HBA ports installed in each cluster node.
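After the HBA-to-storage cabling is complete and EMC PowerPath is installed on a node, the path state can be reviewed with PowerPath's powermt utility (powermt display dev=all). The Python snippet below is a minimal convenience sketch, not part of the documented procedure: it assumes powermt is on the PATH, and the summary it prints is only a rough hint because the report wording varies between PowerPath releases.

```python
import shutil
import subprocess

# powermt is EMC PowerPath's command-line utility; this sketch assumes it is
# installed and on the PATH of the cluster node where the script runs.
if shutil.which("powermt") is None:
    raise SystemExit("powermt not found - install EMC PowerPath or adjust PATH")

# 'powermt display dev=all' lists every logical device and the state of each
# path (HBA port to storage-processor port) that PowerPath manages.
report = subprocess.run(
    ["powermt", "display", "dev=all"],
    capture_output=True,
    text=True,
    check=True,
).stdout

print(report)

# Rough summary: count path lines reported as alive.  Treat this as a hint
# only and read the full report to confirm both HBA paths are present.
alive = sum(1 for line in report.splitlines() if "alive" in line.lower())
print(f"Lines reporting an alive path: {alive}")
```

In a direct-attached configuration such as Figure 2-4, the report should show paths through both HBA ports; as noted earlier, PowerPath uses them for failover only, not for load balancing.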