Dell AX4-5 Manual
Cabling Your Cluster Hardware

• The shared storage systems and firmware must be identical. Using dissimilar storage systems and firmware for your shared storage is not supported.
• MSCS is limited to 22 drive letters. Because drive letters A through D are reserved for local disks, a maximum of 22 drive letters (E to Z) can be used for your storage system disks.
• Windows Server 2003 and 2008 support mount points, allowing more than 22 drives per cluster.

For more information, see the Assigning Drive Letters and Mount Points section of the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com/manuals.
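As a quick illustration of the drive-letter arithmetic above, the following Python sketch (illustrative only, not part of any Dell tooling) lists the letters that remain for shared storage once A through D are reserved:

    import string

    # Drive letters A through D are reserved for local disks, which leaves
    # E through Z (22 letters) for shared storage system disks.
    reserved = set("ABCD")
    available = [letter for letter in string.ascii_uppercase if letter not in reserved]

    print(available)       # ['E', 'F', ..., 'Z']
    print(len(available))  # 22

Mount points remove this ceiling because a volume is mounted into a folder on an existing drive rather than consuming a drive letter of its own.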
    Figure 2-11 provides an example of cabling the cluster nodes to four 
    Dell/EMC storage systems. 
Figure 2-11. PowerEdge Cluster Nodes Cabled to Four Storage Systems (callouts: cluster nodes, private network, Fibre Channel switches, storage systems)
    Connecting a PowerEdge Cluster to a Tape Library
    To provide additional backup for your cluster, you can add tape backup 
    devices to your cluster configuration. The Dell PowerVault™ tape libraries 
    may contain an integrated Fibre Channel bridge, or Storage Network 
    Controller (SNC), that connects directly to your Dell/EMC Fibre Channel 
    switch. 
    Figure 2-12 shows a supported PowerEdge cluster configuration using 
    redundant Fibre Channel switches and a tape library. In this configuration, 
    each of the cluster nodes can access the tape library to provide backup for 
    your local disk resources, as well as your cluster disk resources. Using this 
    configuration allows you to add more servers and storage systems in the 
future, if needed.
NOTE: While tape libraries can be connected to multiple fabrics, they do not provide path failover.
    Figure 2-12. Cabling a Storage System and a Tape Library
    Obtaining More Information
    See the storage and tape backup documentation for more information on 
    configuring these components.
    Configuring Your Cluster With SAN Backup
    You can provide centralized backup for your clusters by sharing your SAN 
    with multiple clusters, storage systems, and a tape library.
    Figure 2-13 provides an example of cabling the cluster nodes to your storage 
    systems and SAN backup with a tape library.
    Figure 2-13. Cluster Configuration Using SAN-Based Backup
Preparing Your Systems for Clustering

CAUTION: Only trained service technicians are authorized to remove and access any of the components inside the system. See the safety information that shipped with your system for complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge.
    Cluster Configuration Overview
1 Ensure that your site can handle the cluster’s power requirements. Contact your sales representative for information about your region’s power requirements.
2 Install the systems, the shared storage array(s), and the interconnect switches (for example, in an equipment rack), and ensure that all of these components are powered on.
NOTE: For more information on step 3 to step 7 and step 10 to step 13, see the Preparing Your Systems for Clustering section of the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com/manuals.
3 Deploy the operating system (including any relevant service pack and hotfixes), network adapter drivers, and storage adapter drivers (including Multipath I/O (MPIO) drivers) on each of the systems that will become cluster nodes. Depending on the deployment method that is used, it may be necessary to provide a network connection to successfully complete this step.
NOTE: To help in planning and deployment of your cluster, record the relevant cluster configuration information in the Cluster Data Form on page 55 and the zoning information in the Zoning Configuration Form on page 57.
4 Establish the physical network topology and the TCP/IP settings for the network adapters on each server node to provide access to the cluster public and private networks (see the addressing sketch after this list).
5 Configure each cluster node as a member of the same Microsoft® Windows® Active Directory® domain.
NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see the "Selecting a Domain Model" section of the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com/manuals.
6 Establish the physical storage topology and any required storage network settings to provide connectivity between the storage array and the servers that will be configured as cluster nodes. Configure the storage system(s) as described in your storage system documentation.
7 Use storage array management tools to create at least one logical unit number (LUN). The LUN is used as a cluster quorum disk for a Windows Server 2003 failover cluster and as a witness disk for a Windows Server 2008 failover cluster. Ensure that this LUN is presented to the servers that will be configured as cluster nodes.
NOTE: For security reasons, it is recommended that you configure the LUN on a single node, as mentioned in step 8, when you are setting up the cluster. Later, you can configure the LUN as mentioned in step 9 so that the other cluster nodes can access it.
8 Select one of the systems and form a new failover cluster by configuring the cluster name, cluster management IP, and quorum resource. For more information, see Preparing Your Systems for Clustering on page 35.
NOTE: For Windows Server® 2008 Failover Clusters, run the Cluster Validation Wizard to ensure that your system is ready to form the cluster.
9 Join the remaining node(s) to the failover cluster. For more information, see Preparing Your Systems for Clustering on page 35.
10 Configure roles for cluster networks. Take any network interfaces that are used for iSCSI storage (or for other purposes outside of the cluster) out of the control of the cluster.
11 Test the failover capabilities of your new cluster.
NOTE: For Windows Server 2008 Failover Clusters, you can also use the Cluster Validation Wizard.
12 Configure highly available applications and services on your failover cluster. Depending on your configuration, this may also require providing additional LUNs to the cluster or creating new cluster resource groups. Test the failover capabilities of the new resources.
13 Configure client systems to access the highly available applications and services that are hosted on your failover cluster.
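The following Python sketch illustrates the public/private addressing check described in step 4. It is only an example; the subnets and node addresses are hypothetical and are not taken from this guide:

    import ipaddress

    # Hypothetical addressing plan: one public (client) subnet and one
    # private (cluster interconnect) subnet, with each node holding one
    # adapter on each subnet.
    PUBLIC_NET = ipaddress.ip_network("192.168.10.0/24")
    PRIVATE_NET = ipaddress.ip_network("10.0.0.0/24")

    NODES = {
        "node1": ["192.168.10.11", "10.0.0.11"],
        "node2": ["192.168.10.12", "10.0.0.12"],
    }

    for node, addresses in NODES.items():
        ips = [ipaddress.ip_address(a) for a in addresses]
        has_public = any(ip in PUBLIC_NET for ip in ips)
        has_private = any(ip in PRIVATE_NET for ip in ips)
        print(node, "public:", has_public, "private:", has_private)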
    Installation Overview
    Each node in your Dell Windows Server failover cluster must have the same 
    release, edition, service pack, and processor architecture of the Windows 
    Server operating system installed. For example, all nodes in your cluster may 
    be configured with Windows Server 2003 R2, Enterprise x64 Edition. If the 
    operating system varies among nodes, it is not possible to configure a failover 
    cluster successfully. It is recommended to establish server roles prior to 
    configuring a failover cluster, depending on the operating system configured 
    on your cluster. 
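As a simple illustration of this requirement, the following Python sketch (hypothetical node data, not Dell tooling) verifies that every prospective node reports the same release, edition, service pack, and processor architecture:

    # Hypothetical inventory of the Windows Server version on each node.
    NODES = {
        "node1": ("Windows Server 2003 R2", "Enterprise", "SP2", "x64"),
        "node2": ("Windows Server 2003 R2", "Enterprise", "SP2", "x64"),
    }

    distinct = set(NODES.values())
    if len(distinct) == 1:
        print("All nodes match:", distinct.pop())
    else:
        print("Operating system mismatch; the failover cluster cannot be configured:")
        for node, info in NODES.items():
            print(" ", node, info)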
For a list of Dell PowerEdge Servers, Fibre Channel HBAs and switches, and the recommended operating system variants and specific driver and firmware revisions, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.
    For more information on deploying your cluster with Windows Server 2003 
    operating systems, see the Dell Failover Clusters with Microsoft Windows 
    Server 2003 Installation and Troubleshooting Guide. For more information on 
    deploying your cluster with Windows Server 2008 operating systems, see the 
    Dell Failover Clusters with Microsoft Windows Server 2008 Installation and 
    Troubleshooting Guide.
    The following sub-sections describe steps that must be taken to enable 
    communication between the cluster nodes and your shared Dell/EMC AX4-5 
    Fibre Channel storage array, and to present disks from the storage array to the 
    cluster. Install and configure the following components on each node: 
1 The Fibre Channel HBA(s) and driver on each cluster node
2 EMC PowerPath on each cluster node
3 Zoning, if applicable
4 The shared storage system
5 A failover cluster
    Installing the Fibre Channel HBAs
For dual-HBA configurations, it is recommended that you install the Fibre Channel HBAs on separate peripheral component interconnect (PCI) buses. Placing the adapters on separate buses improves availability and performance. See the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha for more information about your system’s PCI bus configuration and supported HBAs.
    Installing the Fibre Channel HBA Drivers
    See the EMC documentation that is included with your HBA kit for more 
    information. 
    See the Emulex support website located at emulex.com or the Dell Support 
    website at support.dell.com for information about installing and configuring 
    Emulex HBAs and EMC-approved drivers. 
    See the QLogic support website at qlogic.com or the Dell Support website at 
    support.dell.com for information about installing and configuring QLogic 
    HBAs and EMC-approved drivers. 
    See the Dell Cluster Configuration Support Matrices located on the Dell High 
    Availability Clustering website at dell.com/ha for information about 
    supported HBA controllers and drivers.
    Installing EMC PowerPath
EMC® PowerPath® detects a failed storage path and automatically re-routes I/Os through an alternate path. PowerPath also provides load balancing of data from the server to the storage system. To install PowerPath:
1 Insert the PowerPath installation media in the CD/DVD drive.
2 On the Getting Started screen, go to the Installation section, and click the appropriate link for the operating system that is running on the node.
3 Select Run this program from its current location and click OK.
4 In the Choose Language Setup screen, select the required language, and click OK.
5 In the Welcome window of the setup wizard, click Next.
6 In the CLARiiON AX-Series window, make the appropriate selection and click Next. Follow the onscreen instructions to complete the installation.
7 Click Yes to reboot the system.
    Implementing Zoning on a Fibre Channel 
    Switched Fabric
    A Fibre Channel switched fabric consists of one or more Fibre Channel 
    switches that provide high-speed connections between servers and storage 
    devices. The switches in a Fibre Channel fabric provide a connection through 
    inbound and outbound points from one device (sender) to another device or 
    switch (receiver) on the network. If the data is sent to another switch, the 
    process repeats itself until a connection is established between the sender and 
    the receiver. 
    Fibre Channel switches provide you with the ability to set up barriers between 
    different devices and operating environments. These barriers create logical 
    fabric subsets with minimal software and hardware intervention. Similar to 
    subnets in the client/server network, logical fabric subsets divide a fabric into 
    similar groups of components, regardless of their proximity to one another. 
    The logical subsets that form these barriers are called zones. 
Zoning automatically and transparently restricts access so that information reaches only the devices within a zone. More than one PowerEdge cluster configuration can share
    Dell/EMC storage system(s) in a switched fabric using Fibre Channel switch 
    zoning. By using Fibre Channel switches to implement zoning, you can 
    segment the SANs to isolate heterogeneous servers and storage systems from 
    each other.
    Using Worldwide Port Name Zoning
    PowerEdge cluster configurations support worldwide port name zoning. 
    A worldwide name (WWN) is a unique numeric identifier assigned to Fibre 
    Channel interfaces, such as HBA ports, storage processor (SP) ports, and 
Fibre Channel to SCSI bridges or storage network controllers (SNCs).
    A WWN consists of an 8-byte hexadecimal number with each byte separated 
    by a colon. For example, 10:00:00:60:69:00:00:8a is a valid WWN. Using 
    WWN port name zoning allows you to move cables between switch ports 
    within the fabric without having to update the zones.
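This format can be checked mechanically. The following Python sketch (illustrative only) validates that a string is eight colon-separated hexadecimal bytes, using the example WWN quoted above:

    import re

    # A WWN is eight hexadecimal bytes separated by colons.
    WWN_PATTERN = re.compile(r"^([0-9A-Fa-f]{2}:){7}[0-9A-Fa-f]{2}$")

    def is_valid_wwn(wwn):
        return bool(WWN_PATTERN.match(wwn))

    print(is_valid_wwn("10:00:00:60:69:00:00:8a"))  # True  (example from the text)
    print(is_valid_wwn("10:00:00:60:69:00:00"))     # False (only seven bytes)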
Table 3-1 provides a list of WWN identifiers that you can find in the Dell/EMC cluster environment.
WARNING: When you replace a switch module, a storage controller, or a Fibre Channel HBA in a PowerEdge server, reconfigure your zones to provide continuous client data access.
    Single Initiator Zoning
    Each host HBA port in a SAN must be configured in a separate zone on the 
    switch with the appropriate storage ports. This zoning configuration, known 
    as single initiator zoning, prevents different hosts from communicating with 
    each other, thereby ensuring that Fibre Channel communications between 
    the HBAs and their target storage systems do not affect each other. 
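As an illustration of the single-initiator principle (not a switch configuration procedure), the following Python sketch builds one zone per host HBA port, pairing that initiator only with the storage processor ports; all WWPNs shown are hypothetical:

    # Hypothetical WWPNs for the host HBA ports and for the storage
    # processor (SP) ports of the shared storage system.
    HBA_PORTS = {
        "node1_hba0": "10:00:00:00:C9:AA:AA:01",
        "node1_hba1": "10:00:00:00:C9:AA:AA:02",
        "node2_hba0": "10:00:00:00:C9:BB:BB:01",
        "node2_hba1": "10:00:00:00:C9:BB:BB:02",
    }
    SP_PORTS = [
        "50:06:01:60:41:E0:12:34",  # SP A port
        "50:06:01:68:41:E0:12:34",  # SP B port
    ]

    def single_initiator_zones(hba_ports, sp_ports):
        # One zone per HBA port: the initiator plus the storage ports,
        # so that initiators never share a zone with each other.
        return {"zone_" + name: [wwpn] + list(sp_ports)
                for name, wwpn in hba_ports.items()}

    for zone, members in single_initiator_zones(HBA_PORTS, SP_PORTS).items():
        print(zone, "->", ", ".join(members))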
    When you create your single-initiator zones, follow these guidelines:
Table 3-1. Port Worldwide Names in a SAN Environment

Identifier                 Description
xx:xx:00:60:69:xx:xx:xx    Dell/EMC or Brocade switch
xx:xx:xx:00:88:xx:xx:xx    McData switch
50:06:01:6x:xx:xx:xx:xx    Dell/EMC storage processor
xx:xx:00:00:C9:xx:xx:xx    Emulex HBA ports
xx:xx:00:E0:8B:xx:xx:xx    QLogic HBA ports (non-embedded)
xx:xx:00:0F:1F:xx:xx:xx    Dell 2362M HBA port
xx:xx:xx:60:45:xx:xx:xx    PowerVault 132T and 136T tape libraries
xx:xx:xx:E0:02:xx:xx:xx    PowerVault 128T tape autoloader
xx:xx:xx:C0:01:xx:xx:xx    PowerVault 160T tape library and Fibre Channel tape drives
xx:xx:xx:C0:97:xx:xx:xx    PowerVault ML6000 Fibre Channel tape drives
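To show how the identifiers in Table 3-1 can be applied, the following Python sketch matches a WWN against the table's patterns, treating each "x" as a wildcard hexadecimal digit. The example WWNs are hypothetical:

    # Identifier patterns from Table 3-1; "x" is a wildcard digit.
    PATTERNS = [
        ("xx:xx:00:60:69:xx:xx:xx", "Dell/EMC or Brocade switch"),
        ("xx:xx:xx:00:88:xx:xx:xx", "McData switch"),
        ("50:06:01:6x:xx:xx:xx:xx", "Dell/EMC storage processor"),
        ("xx:xx:00:00:C9:xx:xx:xx", "Emulex HBA port"),
        ("xx:xx:00:E0:8B:xx:xx:xx", "QLogic HBA port (non-embedded)"),
        ("xx:xx:00:0F:1F:xx:xx:xx", "Dell 2362M HBA port"),
        ("xx:xx:xx:60:45:xx:xx:xx", "PowerVault 132T or 136T tape library"),
        ("xx:xx:xx:E0:02:xx:xx:xx", "PowerVault 128T tape autoloader"),
        ("xx:xx:xx:C0:01:xx:xx:xx", "PowerVault 160T library or Fibre Channel tape drive"),
        ("xx:xx:xx:C0:97:xx:xx:xx", "PowerVault ML6000 Fibre Channel tape drive"),
    ]

    def classify(wwn):
        # Compare the WWN byte by byte with each pattern; an "X" in the
        # pattern matches any hexadecimal digit in the same position.
        wwn_bytes = wwn.upper().split(":")
        for pattern, description in PATTERNS:
            pattern_bytes = pattern.upper().split(":")
            if len(pattern_bytes) == len(wwn_bytes) and all(
                    all(p == "X" or p == w for p, w in zip(pb, wb))
                    for pb, wb in zip(pattern_bytes, wwn_bytes)):
                return description
        return "unknown"

    print(classify("10:00:00:00:C9:12:34:56"))  # Emulex HBA port
    print(classify("50:06:01:61:41:E0:12:34"))  # Dell/EMC storage processor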
    						