
JISC Technology and Standards Watch Report: Storage Area Networks November 2003

    TSW 03-07

    November 2003

    © JISC 2003

    JISC Technology and Standards Watch Report

STORAGE AREA NETWORKS

    Steve Chidlow, UNIVERSITY of LEEDS (S.Chidlow@leeds.ac.uk)



Contents

    Acknowledgement
    1.0 Executive Summary
    2.0 Introduction
    3.0 The Technology
    3.1 Defining the Storage Technology: DAS, SAN, NAS
    3.2 The Fabric: Switches, Fibre Channel, iSCSI technologies
    3.3 Disk Technologies
    3.4 Storage Arrays
    4.0 Centralised Backup Systems
    5.0 Strategic Fit and Industry Positioning
    6.0 Data Growth Rates and Their Management
    7.0 Storage Management
    7.1 SAN Management
    7.2 Storage Virtualisation
    7.3 Storage Resource Management
    7.4 SMI-S/Bluefin Storage Management Initiatives
    8.0 Data Categorisation Strategy
    9.0 Fit of a SAN into a Data Categorisation and DR Strategy
    10.0 E-Science/Grid Support
    11.0 Benefits of a SAN
    11.1 Reduced hardware capital costs
    11.2 Reduced effort to manage storage
    11.3 Increased productivity through improved fault tolerance and DR capability
    11.4 24x7 Availability
    11.5 More efficient backup
    11.6 Scalable Storage
    11.7 Interoperability between diverse systems
    11.8 Centralised Management
    12.0 Justification for SANs: Writing the Business Case
    13.0 Risks/Issues
    14.0 Glossary
    15.0 References and Further Information

Acknowledgement

    I am indebted to my colleague Adrian Ellison, who jointly authored with me the document “Business Justification for the Deployment of a University Storage Area Network”. This was the business case for a SAN at the University of Leeds and is the basis for several sections of this document.



    1.0 Executive Summary

    - Demand for data storage capacity and data availability across the UK HE and FE sector is growing rapidly; this demand is mirrored across other business sectors.

    - A Storage Area Network (SAN) can provide a total solution for the storage needs of HE/FE institutions in a cost-effective way, despite perceived high initial purchase costs.

    - A SAN provides significant benefits in terms of storage management, data availability and disaster recovery capability.

    - Cost-benefit analysis should be used to demonstrate benefits over the lifetime of the equipment, potentially five years, for example.

    - Maximising benefits realisation will require buy-in from all areas of the HE/FE institution: it is not just a solution for the institution’s computing service, particularly in distributed environments.

    - Deployment of a SAN has a strong strategic fit with most HE/FE institutions’ desire to support new patterns of learning (e-learning, lifelong learning and widened participation, for example) by supporting 24x7 availability and reduced “data downtime”.

    - It will also prove to be a key tool in compliance with security and disaster recovery audit requirements.

    - As of late 2003, the procurement, installation and configuration of a SAN is a highly complex and lengthy exercise with many unexpected interoperability problems, so achieving the benefits will be a challenge.

2.0 Introduction

    Many organisations, including those in the HE/FE sector, are finding that storage is growing at an alarming rate and, when combined with a trend towards requiring more servers to support that storage, this is leading to an unmanageable situation as far as storage management is concerned. The growth of distributed systems is also a concern in many organisations, as standards of support in a devolved environment are not always adequate. Consequently, consolidation of both servers and storage is looking very attractive.

    Networked storage solutions (of which SANs and NAS are examples; see below) can offer increased flexibility for connecting storage, ensuring much greater utilisation of disk storage space and support for server consolidation (as storage and server capacity growth trends are no longer linked).

    Installing a SAN is a large and complicated undertaking, needing institutional management commitment, and is more suited to environments where a large proportion of the institution’s data will reside on the SAN. NAS can provide “plug and go” solutions for file serving, but SANs are better able to support large corporate databases and provide enhanced resilience.

    3.0 The Technology

3.1 Defining the Storage Technology: DAS, SAN, NAS

    Traditionally, data storage resides on hard disks that are locally attached to individual servers. This is known as Direct Attached Storage (DAS). Although this storage may now be large (in the order of hundreds of gigabytes per server), it is generally only accessible from the server to which it is attached. As such, much of this disk space remains unused, and plenty of ‘contingency’ has to be built into storage needs when determining server specifications. In addition, if the server were to fail, access to the data held on those local disks would generally be lost.

    A Storage Area Network (SAN) is a separate “network” dedicated to storage devices and at a minimum consists of one or more large banks of disks mounted in racks, providing ‘shared’ storage space accessible by many servers/systems. Other devices, such as robotic tape libraries, may be attached to the SAN. See Figure 1 for a representation of both DAS and SAN storage.

    Network Attached Storage (NAS) is storage that sits on the ordinary network (or LAN) and is accessible by devices (servers and workstations) attached to that LAN. NAS devices provide access to file systems and as such are effectively file server appliances. Delivery of file systems is most commonly via the NFS (Network File System) or CIFS (Common Internet File System) protocols, but others may be used, e.g. NCP (NetWare Core Protocol). These file systems require some sort of associated authentication system to check permissions for file access.
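As an illustration, a client accesses NAS storage simply by mounting the exported file system over the LAN with standard protocol tools. This is a minimal sketch; the host name `nas01`, export paths and user name are hypothetical, and the exact options will depend on the appliance and its authentication set-up:

```shell
# Mount an NFS export from a (hypothetical) NAS appliance called nas01
# onto a local directory; file permissions are checked by the NAS
# against its associated authentication system.
mkdir -p /mnt/research
mount -t nfs nas01:/export/research /mnt/research

# The same appliance could serve CIFS clients; from a Linux host a
# CIFS share is mounted with credentials, here validated against a
# (hypothetical) CAMPUS domain.
mkdir -p /mnt/research-cifs
mount -t cifs //nas01/research /mnt/research-cifs \
    -o username=jsmith,domain=CAMPUS
```

Once mounted, the NAS storage is used like any other file system, which is what makes NAS attractive as a “plug and go” file-serving solution.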

    A SAN functions as a high-speed network similar to a conventional local area network (LAN) and establishes a direct connection between storage resources and the file server infrastructure. The SAN effectively acts as an “extended storage bus”, using the same kinds of networking elements as a LAN, including routers, bridges, hubs and switches. Thus, servers and storage can be ‘de-coupled’, allowing the storage disks to be located away from their host servers. The SAN is effectively transparent to the server operating system, which “sees” the SAN-attached disks as if they were local SCSI disks. Figure 1 also shows the attachment of storage arrays and tape libraries via switches.

    A dedicated SAN carries only “storage data”. This data can be shared with multiple servers without being subject to the bandwidth constraints of the “normal network” (LAN). Practically, a SAN allows data to be managed centrally and storage “chunks” to be assigned to host systems as required.
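On the host side, a storage “chunk” (a logical unit, or LUN) assigned to a Linux server simply appears as another SCSI disk. The following is a sketch of how an administrator might make a newly assigned LUN visible and put it to use; the SCSI host number and device names are hypothetical and will vary with the HBA and storage array in use:

```shell
# Ask the Fibre Channel HBA (hypothetically SCSI host2) to rescan
# so that newly zoned/assigned LUNs become visible to the kernel.
echo "- - -" > /sys/class/scsi_host/host2/scan

# List SCSI devices: SAN LUNs appear alongside local disks and are
# indistinguishable to the operating system from local SCSI storage.
cat /proc/scsi/scsi

# Create a file system on the new LUN (here assumed to be /dev/sdc1)
# and mount it exactly as if it were a directly attached disk.
mkfs -t ext3 /dev/sdc1
mount /dev/sdc1 /data/oracle
```

This transparency is the point: applications and operating systems need no changes, while the storage itself is administered centrally on the SAN.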

    The main benefit of NAS devices is ease of deployment: most devices offer a “plug and play” capability, being designed as single-purpose “appliances”. Modern NAS appliances can also serve large amounts of data, with internal storage capacity measured in terabytes. Some NAS appliances are limited by the authentication schemes they support, and NetWare users in particular should seek clarification from vendors over compatibility issues.

    Many regard SAN and NAS as competitors, but in reality they are complementary technologies: a SAN delivers effective block-based input/output, whilst NAS excels at file-based input/output (usually via NFS or CIFS). A hybrid device called a NAS Head or NAS Gateway has its storage residing in the storage arrays attached to a SAN, whilst still delivering file systems over the LAN. A combination of a SAN with NAS Gateways may be an effective way for sites to deliver file-based functionality, e.g. for user home directories.

    In fact, DAS still has an ongoing use for many purposes: the cost of connecting servers to the SAN can be high, and for systems such as DNS servers, where redundancy is provided by other means (multiple equivalent servers), even highly critical data can remain resident on the directly attached disks of the servers.

    The world of storage is rapidly changing and interested parties are advised to keep monitoring useful storage-related web sites [1].