JISC Technology and Standards Watch Report: Storage Area Networks November 2003

    TSW 03-07

    November 2003

    © JISC 2003

    JISC Technology and Standards Watch Report

STORAGE AREA NETWORKS

    Steve Chidlow, University of Leeds (S.Chidlow@leeds.ac.uk)


Contents

    Acknowledgement
    1.0  Executive Summary
    2.0  Introduction
    3.0  The Technology
         3.1  Defining the Storage Technology: DAS, SAN, NAS
         3.2  The Fabric: Switches, Fibre Channel, iSCSI Technologies
         3.3  Disk Technologies
         3.4  Storage Arrays
    4.0  Centralised Backup Systems
    5.0  Strategic Fit and Industry Positioning
    6.0  Data Growth Rates and Their Management
    7.0  Storage Management
         7.1  SAN Management
         7.2  Storage Virtualisation
         7.3  Storage Resource Management
         7.4  SMI-S/Bluefin Storage Management Initiatives
    8.0  Data Categorisation Strategy
    9.0  Fit of a SAN into a Data Categorisation and DR Strategy
    10.0 E-Science/Grid Support
    11.0 Benefits of a SAN
         11.1 Reduced hardware capital costs
         11.2 Reduced effort to manage storage
         11.3 Increased productivity through improved fault tolerance and DR capability
         11.4 24x7 Availability
         11.5 More efficient backup
         11.6 Scalable Storage
         11.7 Interoperability between diverse systems
         11.8 Centralised Management
    12.0 Justification for SANs: Writing the Business Case
    13.0 Risks/Issues
    14.0 Glossary
    15.0 References and Further Information

Acknowledgement

    I am indebted to my colleague Adrian Ellison, who jointly authored with me the document "Business Justification for the Deployment of a University Storage Area Network". This was the business case for a SAN at the University of Leeds and is the basis for several sections of this document.


    1.0 Executive Summary

    - Demand for data storage capacity and data availability across the UK HE and FE sector is growing rapidly; this demand is mirrored across other business sectors.
    - A Storage Area Network (SAN) can provide a total solution for the storage needs of HE/FE institutions in a cost-effective way, despite perceived high initial purchase costs.
    - A SAN provides significant benefits in terms of storage management, data availability and disaster recovery capability.
    - Cost benefit analysis should be used to demonstrate benefits over the lifetime of the equipment, potentially 5 years, for example.
    - Maximising benefits realisation will require buy-in from all areas of the HE/FE institution; it is not just a solution for the institution's computing service, particularly in distributed environments.
    - Deployment of a SAN has strong strategic fit with most HE/FE institutions' desires to support new patterns of learning (e-learning, lifelong learning, widened participation, for example) by supporting 24x7 availability and reduced "data downtime".
    - It will also prove to be a key tool in compliance with security and disaster recovery audit requirements.
    - As of late 2003, the procurement, installation and configuration of SANs is a highly complex and lengthy exercise with many unexpected interoperability problems, so achieving the benefits will be a challenge!

2.0 Introduction

    Many organisations, including those in the HE/FE sector, are finding that storage is growing at an alarming rate and that, when combined with a trend to require more servers to support that storage, this is leading to an unmanageable situation as far as storage management is concerned. The growth of distributed systems is also giving concern in many organisations, as standards of support in a devolved environment are not always adequate. Consequently, consolidation of both servers and storage is looking very attractive.

    Networked storage solutions (of which SANs and NAS are examples; see below) can offer increased flexibility for connecting storage, ensuring much greater utilisation of disk storage space and support for server consolidation (as storage and server capacity growth trends are no longer linked).

    Installing a SAN is a large and complicated undertaking that needs institutional management commitment, and is more suited to environments where a large proportion of the institution's data will reside on the SAN. NAS can provide "plug and go" solutions for file serving, but SANs are better able to support large corporate databases and provide enhanced resilience.

    3.0 The Technology

3.1 Defining the Storage Technology: DAS, SAN, NAS

    Traditionally, data storage resides on hard disks that are locally attached to individual servers. This is known as Direct Attached Storage (DAS). Although this storage may now be large (in the order of 100s of Gigabytes of data storage per server), the storage is generally only accessible from the server to which it is attached. As such, much of this disk space remains unused and plenty of 'contingency' has to be built into storage needs when determining server specification. In addition, if the server were to fail, access to the data held on those local disks is generally lost.

    A Storage Area Network (SAN) is a separate "network" dedicated to storage devices and at minimum consists of one (or more) large banks of disks mounted in racks that provide 'shared' storage space which is accessible by many servers/systems. Other devices, such as robotic tape libraries, may be attached to the SAN. See Figure 1 for a representation of both DAS and SAN storage.

    Network Attached Storage (NAS) is storage that sits on the ordinary network (or LAN) and is accessible by devices (servers and workstations) attached to that LAN. NAS devices provide access to file systems and as such are effectively file server appliances. Delivery of file systems is most commonly via NFS (Network File System) or CIFS (Common Internet File System) protocols, but others may be used, e.g. NCP (NetWare Core Protocol). These file systems require some sort of associated authentication system to check permissions for file access.

    A SAN functions as a high-speed network similar to a conventional local area network (LAN) and establishes a direct connection between storage resources and the file server infrastructure. The SAN effectively acts as an "extended storage bus" using the same networking elements as a LAN, including routers, bridges, hubs and switches. Thus, servers and storage can be 'de-coupled', allowing the storage disks to be located away from their host servers. The SAN is effectively transparent to the server operating system, which "sees" the SAN-attached disks as if they were local SCSI disks. Figure 1 also shows the attachment of storage arrays and tape libraries via switches.

    A dedicated SAN carries only "storage data". This data can be shared with multiple servers without being subject to the bandwidth constraints of the "normal network" (LAN). In practical terms, a SAN allows data to be managed centrally and storage "chunks" to be assigned to host systems as required.

    The main benefit of NAS devices is ease of deployment - most devices offer a "plug and play" capability, being designed as single-purpose "appliances". Modern NAS appliances can also serve large amounts of data, with internal storage capacity measured in Terabytes. Some NAS appliances are limited by the authentication schemes supported, and NetWare users in particular should seek clarification from vendors over compatibility issues.

    Many regard SAN and NAS as competitors, but in reality they are complementary technologies: SAN delivers effective block-based input/output, whilst NAS excels at file-based input/output (usually via NFS or CIFS). A hybrid device called a NAS Head or NAS Gateway has storage that resides in the storage arrays attached to a SAN whilst still delivering file systems over the LAN. A combination of a SAN with NAS Gateways may be an effective way for sites to deliver file-based functionality, e.g. for user home directories.

    In fact, DAS still has an ongoing use for many purposes: the cost of connecting servers to the SAN can be high, and for systems like DNS servers, where redundancy is provided by other means (multiple equivalent servers), even highly critical data can reside on the servers' direct attached disks.

    The world of storage is rapidly changing and interested parties are advised to keep monitoring useful storage-related web sites [1].

Figure 1: A schematic illustrating the differences between Direct Attached Storage (DAS) and SAN models.


    3.2 The Fabric: Switches, Fibre Channel, iSCSI Technologies

    The fabric for a SAN provides the connectivity between the host servers and the storage devices. The dominant architecture for SANs is based on Fibre Channel (FC) [2], which, whilst expensive, does have advantages in terms of its connectivity options. Compared to SCSI devices, for example, many more storage devices may be connected over much larger distances with higher data transfer rates.

    Cables to connect SAN components are of three types: copper, short-wave fibre or long-wave fibre. Copper cables are only suitable for short connections (less than 12m), whilst short-wave fibre (multi-mode 50 micron) is used for distances up to 500m, with long-wave fibre (single-mode 9 micron) needed for longer distances up to 10km [3]. A Fibre Channel transceiver unit called a GBIC (Gigabit Interface Converter) is then needed to connect the FC cables to the FC devices. Two types of GBIC are available: short-wave (for distances up to 500m) or long-wave (for distances up to 10km) [4].

    In Fibre Channel topologies the host server may be connected via a Host Bus Adapter (HBA) to the storage directly, via a hub or via a switch. Direct connections to storage do not constitute a "network" and so are not used in SANs. Hubs use a topology known as Fibre Channel Arbitrated Loop (FC-AL) that shares the loop's total bandwidth amongst all attached FC devices, with a restriction of 126 attached FC devices in total. Switches provide a set of multiple dedicated, fully-interconnected, non-blocking data paths known as a "fabric".

    Switches provide simultaneous routing of device traffic and are capable of supporting a theoretical maximum of 16 million FC devices. SANs of any degree of complexity should therefore be based on a switched fabric using Fibre Channel switches with varying numbers of ports, typically 8, 16, 24 or 32. Switches with a large number of ports (typically 64 or above) are also available with additional fault-tolerant features and are known as "directors". Director-class switches are very expensive, but are the best solution for large-scale SANs that will have a large number of host servers attached. Switches can be cascaded and linked together, but the available port count can soon be diminished by the requirement for Inter-Switch Links (ISLs). For a large SAN, an ideal solution would be large directors at the centre of the fabric with smaller switches connected off the directors via ISLs: the so-called "core-to-edge" solution [5].

    Interoperability between elements of a SAN fabric needs careful investigation and it may be prudent to stick with just one vendor for the provision of switches.

    Figure 2: A schematic illustrating how no single point of failure can be achieved through dual-pathing to each SAN component using separate fabrics 'A' and 'B'. Even more resilience may be achieved through the use of two nodes located in separate sites. In reality, storage arrays and tape libraries would be connected with more than just one fibre connection to each switch.


    To maximise the benefits of a SAN, ideally dual redundant fabrics should be used, meaning that each server has two HBAs, each attached to different Fibre Channel switches that then have separate connectivity to the storage arrays. With suitable additional software installed, such a dual-hosted environment can also be used to provide dual-pathing with automated path failover or even path load balancing (see Figure 2). Figure 2 also shows how an even greater level of resilience can be achieved by replicating the SAN infrastructure over two separate sites.
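    As a rough illustration of the dual-pathing behaviour described above, the sketch below (Python, with invented path names; in practice this logic lives in vendor multipathing software, not in application code) shows I/O using a preferred path and failing over to the alternate fabric when that path is lost:

        # Minimal sketch of dual-path failover; illustrative only.

        class Path:
            def __init__(self, name):
                self.name = name
                self.available = True

        class MultipathDevice:
            """A LUN reachable over two independent fabrics ('A' and 'B')."""
            def __init__(self):
                self.paths = [Path("HBA0 -> fabric A -> array port A0"),
                              Path("HBA1 -> fabric B -> array port B0")]

            def write(self, block):
                for path in self.paths:              # try the preferred path first
                    if path.available:
                        print(f"writing block {block} via {path.name}")
                        return
                raise IOError("no path to storage")  # both fabrics down

        dev = MultipathDevice()
        dev.write(42)                       # goes via fabric A
        dev.paths[0].available = False      # simulate an HBA or switch failure on fabric A
        dev.write(43)                       # automatically fails over to fabric B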

    Some SANs require additional specific software on the hosts connected to the SAN beyond the HBA driver itself. This may be needed to provide the resilience and management of the SAN and may be required even when only one HBA is installed in a host server. On some SAN systems this software can be expensive and an unexpected additional cost of purchasing an HBA.

    In the switches a technique called zoning is used to partition access between devices that are allowed to communicate with each other. Zoning might also be used to create barriers between different operating system environments, e.g. between UNIX and PC systems or between corporate business systems and student teaching systems. Further control of access between SAN components is possible by LUN masking, usually implemented in the storage arrays (see below).
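    To make the relationship between zoning and LUN masking concrete, the following sketch (Python, with made-up host, port and LUN names; real zoning is configured on the switches and LUN masking in the array, not in code like this) treats both mechanisms as access-control tables that must both permit a host before it can see a LUN:

        # Illustrative model only: zoning (switch level) and LUN masking (array
        # level) as two successive layers of access control.

        zones = {
            "unix_zone": {"unix_host1", "array_port_0"},
            "win_zone":  {"win_host1", "array_port_1"},
        }

        lun_masks = {                     # which hosts may see which LUNs
            "LUN_10": {"unix_host1"},
            "LUN_11": {"win_host1"},
        }

        def can_access(host, array_port, lun):
            in_same_zone = any(host in members and array_port in members
                               for members in zones.values())
            allowed_by_mask = host in lun_masks.get(lun, set())
            return in_same_zone and allowed_by_mask

        print(can_access("unix_host1", "array_port_0", "LUN_10"))  # True
        print(can_access("win_host1", "array_port_0", "LUN_10"))   # False: wrong zone, masked out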

    Fibre Channel fabrics now generally run at 2Gbps, with 1Gbps ports also still available. Most 2Gbps products can switch to 1Gbps, thereby preserving investments in the slower technology. A 2Gbps link translates to 200MBps transfer rates, and Fibre Channel can also offer full-duplex mode. However, some connection slots for the HBAs cannot sustain these throughput rates in full-duplex mode. PCI 64-bit cards at 66MHz or PCI-X slots (133MHz) are the best choice to ensure high end-to-end transfer rates and to fully utilise the potential of Fibre Channel. Very recently, 4Gbps Fibre Channel has been announced, but many Fibre Channel proponents believe that 10Gbps should be the next leap in performance to match 10Gbit Ethernet. However, 10Gbps Fibre Channel is not intended to be backwards compatible with the previous, slower standards.
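    The 2Gbps-to-200MBps figure follows from Fibre Channel's 8b/10b line encoding, which transmits 10 bits for every byte of payload. The rough calculation below also compares this against the theoretical bandwidth of common host bus slots (nominal figures, not measurements):

        # Back-of-the-envelope figures only; theoretical maxima, not benchmarks.

        fc_line_rate_bps = 2e9                 # 2Gbps Fibre Channel link
        bits_per_payload_byte = 10             # 8b/10b encoding
        fc_mb_per_s = fc_line_rate_bps / bits_per_payload_byte / 1e6
        print(f"one direction: ~{fc_mb_per_s:.0f} MB/s")       # ~200 MB/s
        print(f"full duplex:   ~{2 * fc_mb_per_s:.0f} MB/s")   # ~400 MB/s in total

        # Nominal bandwidth of host bus slots (bytes per transfer x clock rate):
        pci_64bit_66mhz = 8 * 66               # ~528 MB/s - can sustain full duplex
        pci_32bit_33mhz = 4 * 33               # ~132 MB/s - cannot sustain even one direction
        print(f"PCI 64-bit/66MHz: ~{pci_64bit_66mhz} MB/s")
        print(f"PCI 32-bit/33MHz: ~{pci_32bit_33mhz} MB/s")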

    In fact, whilst SANs have been mainly based on Fibre Channel technology, new IP based options using more commodity-like components (e.g. Ethernet switches) are a possibility in the future. In particular, the standard for iSCSI (Internet SCSI) was agreed during 2003 and many products supporting iSCSI [6] are now appearing on the market. Servers for iSCSI either require an iSCSI HBA (known also as a TCP Offload Engine) or a standard Ethernet Network Interface Card (NIC) with a special software iSCSI driver on the host server, with the former very much preferred. Storage for iSCSI needs either a native iSCSI interface or (perversely) can be Fibre Channel storage with an iSCSI to Fibre Channel gateway device.

    iSCSI as a replacement for Fibre Channel based SANs is not going to be realistic until 2005 and, whilst opinions vary on this topic, iSCSI may complement Fibre Channel, which might remain in the data centre to support enterprise systems. However, 10Gbit Ethernet is already available and there is a more aggressive roadmap for future Ethernet standards than Fibre Channel. These factors may reduce the theoretical advantages of a Fibre Channel fabric for the transport of storage data.

3.3 Disk Technologies

    Enterprise class storage arrays used in SANs generally use Fibre Channel interfaces to internal FC-AL connections in the storage array with FC disks attached. More modestly priced storage arrays are available with an internal connection to either SCSI or ATA disks. Fibre Channel disks are designed for enterprise-class use and usually have top-end performance and reliability characteristics and thus attract premium prices.

    However, in recognition of the fact that not all data needs to be treated equally (see the section Data Categorisation Strategy below), many SAN vendors now offer the option of storage arrays based on Serial ATA (SATA) disk technology [7]. Serial ATA disks are an evolution of the commodity parallel ATA (or IDE) disks used in PCs, with a design intention of being at ATA-like price-points with SCSI-like performance. Such disks may not be suitable for mission critical enterprise database applications, but may have a role for less critical or low usage data. Indeed, some storage vendors now offer storage arrays that can accommodate various types of disk in the same cabinet, e.g. FC and SATA. In such examples, the interface to the disk tray from the hosts/fabric is still Fibre Channel.

    Low priced RAID arrays based on traditional ATA/IDE disks that can be incorporated into SANs have also been available for some while, but are now likely to be displaced by these Serial ATA arrays for the lower end of the storage array market. Another new technology for disk drives is Serial Attached SCSI (SAS) [8] that, like SATA, continues the trend away from parallel methods of data transmission to serial methods (with simplified cabling and more flexible connectivity).


    The threat of widespread adoption of Serial ATA and Serial SCSI disks should also have a beneficial knock-on effect on the prices of top-end Fibre Channel disks.

3.4 Storage Arrays

    Storage Arrays present a view to the host servers called a Logical Unit (LUN) that appears to the host as a disk volume. A LUN is in itself a level of virtualisation as it is usually associated with some degree of RAID level and thus formed from parts of several disks. Storage arrays typically implement several levels of RAID, with levels 0, 1, 3 and 5 very prevalent, with other combined levels (such as 0+1) also possible.

    The degree of control over the placement and use of actual disks varies when defining a LUN: some storage arrays offer a full level of virtualisation where storage administrators merely request a LUN of a given size, which is then created internally within the array and spread over many disks as determined by the array's in-built virtualisation. In many other arrays, however, there is full control over the placement of LUNs across the actual disks in the array, and the array may need its storage to be partitioned into separate "RAID Groups", with a particular RAID level associated with each group. In such cases, administrators must carefully bear in mind the potential needs for expansion when designing their LUN structure (and associated RAID types) to ensure extra disk space may be added and allocated to the LUN. Otherwise, expansion may sometimes need to be achieved by defining a new, larger LUN and copying all the data across.
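    As a concrete reminder of why the RAID level chosen for a group affects usable capacity (and hence LUN sizing), the short calculation below uses an illustrative group of eight 146GB disks; the figures are examples, not vendor data:

        # Illustrative capacity arithmetic only; real arrays reserve extra space
        # for metadata, hot spares and vendor-specific overheads.

        def usable_capacity(num_disks, disk_gb, raid_level):
            """Approximate usable space of a RAID group."""
            if raid_level == 0:              # striping, no redundancy
                return num_disks * disk_gb
            if raid_level == 1:              # mirroring: half the raw space
                return num_disks * disk_gb / 2
            if raid_level in (3, 5):         # one disk's worth of parity
                return (num_disks - 1) * disk_gb
            if raid_level == "0+1":          # mirrored stripes: half the raw space
                return num_disks * disk_gb / 2
            raise ValueError("RAID level not covered by this sketch")

        for level in (0, 1, 3, 5, "0+1"):
            print(f"RAID {level}: 8 x 146GB -> {usable_capacity(8, 146, level):.0f}GB usable")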

    Storage arrays typically also include caches to improve read and write performance by acting as a buffer between the storage array and the server requesting the I/O operation. Caching is particularly beneficial for RAID types that require writing to multiple physical disks. However, events such as power failures or failure of the array's storage processor require very careful attention, and techniques such as battery backup and writing the cache to disk when these events occur should be used.

    Storage controllers in the array control the data flows between the array's Fibre Channel connecting ports and the actual disk modules constituting a LUN. The storage controller will also monitor the basic "health" of the array and its disks. An enterprise-quality storage array will also typically have more than one storage controller, providing extra resilience and sometimes extra performance as well. If multiple controllers are sharing the I/O load, then there is an additional level of complexity in cache management to ensure coherency between the caches.

    Storage arrays have varying numbers of external Fibre Channel interfaces to connect the disks to host servers via the fabric. Although a 2Gbps fabric equates to 200MBps transfer rates, the total aggregate sustainable throughput into and out of the storage array needs careful consideration for the workload patterns to be supported. The number of internal loops inside the storage array and the number of disks attached to these loops should also be considered when assessing the suitability of SANs for very I/O-intensive work.

    The distribution of paths from LUNs through the storage controller(s), external Fibre Channel interfaces and the fabric may need careful consideration if load balancing software is not being used in order to ensure even distribution of I/Os through the SAN.

    Disks in a storage array are usually hot-pluggable so that service to users is not disrupted when disks are added or removed. The ability to allocate some disks as hot spares is also usually supported. Use of a hot spare means that in the event of a disk failure, the storage array will automatically begin to rebuild the failed drive's data onto the hot spare disk, continuing service if suitable RAID levels are in use. When the failed drive is then replaced, the storage array will usually rebuild onto the newly replaced disk, leaving the original hot spare available to handle any subsequent failure.

    Storage arrays possess varying levels of "intelligence" that depend on the storage controller(s) within the array and on the software products installed in the controllers. An example of such capability is LUN masking, which is used to determine which host servers can have access to each LUN. This prevents unauthorised access to data from other servers or from server operating systems (e.g. Windows Server versions prior to 2003) that search around for available storage on booting. Although LUN masking is often implemented in the controller within the storage arrays, it is also often a feature of storage virtualisation software (see the Storage Virtualisation section below).

    The storage array controllers will also typically be able to provide other enhanced facilities such as:


    - Snapshots: rapid point-in-time copies of a LUN with only changes recorded; may be attached to another server for analysis or to be backed up; unchanged blocks refer back to the original source LUN (see the copy-on-write sketch below).
    - Clones: full point-in-time copies of a LUN that can be used as a true copy of production data; a clone uses the same amount of disk space as the source LUN and may take time to produce, depending on its size.
    - Replication/Mirroring: the ability to replicate data to another storage array, either synchronously or asynchronously; may be used for DR purposes and is particularly useful when the two storage arrays are at physically separate locations.

    Please note that vendor terminologies for the above may vary, and that these capabilities are usually provided by additional software options for the storage array's controller and may well be additional cost items, sometimes attracting premium pricing. Storage Virtualisation software, if used (see the Storage Virtualisation section below), can also provide this enhanced functionality.
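    To illustrate the snapshot behaviour described in the list above (only changed blocks recorded, unchanged blocks referring back to the source LUN), here is a very small copy-on-write model; it is purely conceptual and does not represent any particular vendor's implementation:

        # Conceptual copy-on-write snapshot: only blocks changed after the snapshot
        # consume extra space; unchanged blocks are read from the source LUN.

        class SourceLUN:
            def __init__(self, blocks):
                self.blocks = dict(enumerate(blocks))

            def snapshot(self):
                return Snapshot(self)

            def write(self, block_no, data, snapshots):
                for snap in snapshots:                       # preserve old data first
                    if block_no not in snap.changed_blocks:
                        snap.changed_blocks[block_no] = self.blocks[block_no]
                self.blocks[block_no] = data                 # then overwrite the source

        class Snapshot:
            def __init__(self, source):
                self.source = source
                self.changed_blocks = {}                     # old copies of overwritten blocks

            def read(self, block_no):
                # Unchanged blocks come straight from the source LUN.
                return self.changed_blocks.get(block_no, self.source.blocks[block_no])

        lun = SourceLUN(["a", "b", "c"])
        snap = lun.snapshot()
        lun.write(1, "B-modified", [snap])
        print(lun.blocks[1], snap.read(1), snap.read(2))     # B-modified b c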

    The enhanced facilities above are those that distinguish SAN solutions from Direct Attached Storage solutions and are the basis of the additional flexibility, improved resilience and enhanced disaster recovery capability that will underpin the business case for a SAN. NAS devices can also incorporate some of these enhanced facilities such as snapshots, but are not generally designed for replication to other devices.

    When using these enhanced features of storage arrays to fully exploit the potential of SANs, ensure that any limits, both inbuilt/technical and through licensing, are known in advance when configurations are being planned. There may be limits on the number of snapshots allowed or number of LUNs that may be mirrored etc.

4.0 Centralised Backup Systems

    Ideally a SAN should be linked to a Centralised Backup System (CBS) to provide operational efficiencies in backup/restore operations and eliminate the plethora of disparate backup systems typically found in an HE or FE institution.

    These different backup systems typically arise when PC and UNIX support staff pursue their own backup utilities or use those supplied by the operating system vendor. Similarly, different schemes may be used to address differing business/academic requirements.

    Current backup systems are diverse, complicated and not easy to manage. Most SANs will be purchased with an associated tape library capable of reading/writing to tape media in several tape drives with robotic control of a large number of tapes held in slots in the library. Enterprise tape libraries will typically have features such as bar code readers to identify the tapes and be capable of exporting/importing tapes that need to be taken off site/brought back onsite to and from fireproof safes. Not all tape libraries have native Fibre Channel interfaces, so it may be necessary to attach them to the SAN via a Fibre Channel to SCSI bridge device.

    Various types of tape media are available for use in tape libraries. The DLT format has been the most popular for several years with the LTO/Ultrium format rapidly gaining ground. In the future these two formats are expected to dominate with a roughly equal market share [9] and convincing roadmaps for development [10, 11].

    Both these tape formats have already gone through stages of evolution with different generations of tapes and drives available with continually improving capacity and performance characteristics as new versions are introduced. The latest generation of LTO (LTO 2), for example, has excellent throughput characteristics and a single server may not be capable of sustaining such a drive in its optimum streaming mode. Backup products that allow the inter-leaving of backups from multiple sources can assist with more efficient utilisation of modern tape devices. Sites should, however, consider the impact on restore times of highly inter-leaved backups.
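    A rough worked example of why interleaving matters follows; the drive and client throughput figures are assumptions for illustration (an LTO 2 drive streams at roughly 30-35MB/s natively), not measurements from the report:

        # Why several backup clients may need to be interleaved to keep a modern
        # tape drive streaming. All figures below are assumed, not measured.

        import math

        drive_streaming_mb_s = 30      # approximate LTO 2 native streaming rate
        client_backup_mb_s = 8         # throughput one typical server can deliver

        streams_needed = math.ceil(drive_streaming_mb_s / client_backup_mb_s)
        print(f"clients to interleave to keep the drive streaming: {streams_needed}")

        # The trade-off: restoring one client's data must read past the interleaved
        # blocks of the other clients, lengthening restore times.
        data_to_restore_gb = 100
        effective_read_mb_s = drive_streaming_mb_s / streams_needed   # crude approximation
        hours = data_to_restore_gb * 1024 / effective_read_mb_s / 3600
        print(f"approximate restore time for {data_to_restore_gb}GB: {hours:.1f} hours")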

    A SAN-based backup solution allows back up of data to be consolidated into one system architecture. With appropriate options purchased with the backup software product, backups may optionally be driven across the SAN, thus reducing the bandwidth overhead on the campus network - the so-called "LAN-free" backup mode [12]. LAN-free backups are attractive when network traffic levels on the LAN are an issue, but sites should note that LAN-free options in backup products are usually additional cost items, often with a premium price attached.

    A further refinement is the concept of server-free backups [12], where data transfer occurs directly between the storage array and the tape library, although server-free backup products are not yet mature and proven.


    Enhanced facilities offered by the SAN (such as snapshots) can also be used to reduce the impact of backup activities on production systems. A near-instantaneous snapshot may be taken and the newly created snapshot LUN then attached to the backup server to carry out the actual data backup, reducing the amount of time databases, for example, need to be offline or in hot backup mode.

    Increasingly, with the availability of cheaper disks (e.g. SATA) in SAN storage configurations, backup vendors are also providing options for disk-to-disk backup. In this scenario, data can be copied to disk in real time over the SAN and then backed up to tape off-line, e.g. during the day. This greatly extends the 'backup window'.

    Deployment of a SAN would allow for consolidation (with matching cost saving) on backup infrastructure over its life-cycle along with increased productivity of systems and support personnel.

5.0 Strategic Fit and Industry Positioning

    The take-up of NAS and SAN solutions is rising, and a NAS or SAN solution is cheaper to run than DAS. Total cost of ownership in the generic business sector has been found to be 55-60% cheaper than for an equivalent amount of DAS storage. The industry as a whole reports an average support cost reduction of 80% (based on FTEs per MB of storage) compared with supporting the equivalent DAS infrastructure. Further cost savings are seen following backup consolidation (typically 50-75% in tape drive consolidation) [13].

    The benefits of using SAN and NAS technologies to consolidate storage are compelling [14]. The Butler Group believes that storage consolidation should be a primary objective for an organisation looking to optimise its IT infrastructure [14].

    Fibre Channel SANs and IP-attached NAS are now established technologies. The usability of management tools is rapidly improving as they provide greater automation and become available for more platforms. In most cases, the savings and improvements in staff productivity, utilisation rates and data availability more than justify the additional cost of installing SANs.

    The future will lead to more interoperability and the adoption of open standards throughout the industry. New developments will see „intelligence‟ being combined with storage. For example, an application should be able to tell the storage system that it needs more storage and then be assigned that additional resource automatically.

    Major operating system vendors are also acknowledging the greater uptake of SAN technologies. For example, Microsoft's Windows Server 2003 operating system has new features to enable SAN support [15]:

    - Virtual Disk Service (VDS)
    - Volume Shadow Copy Service (VSS)
    - Multipath Input/Output (MPIO)
    - Internet SCSI (iSCSI) support
    - Ability to boot from a SAN
    - Controlled volume mounting at boot time

    Storage vendors are producing "plug-ins" for the Windows Server 2003 features above, e.g. for VDS, VSS and MPIO. This trend towards more SAN awareness in operating systems will further aid the manageability of SANs.

6.0 Data Growth Rates and Their Management

    The explosive growth of the Internet, email (with attachments), integrated enterprise business suites and greater use of digital media in personal devices (e.g. cameras) is creating unprecedented demand to store, retrieve and communicate information. In fact, the world's population is expected to create more information in the next three years than in all the years of prior existence! [9].

    Generic demand for storage across all business areas is growing. Storage growth estimates show a 76% increase in demand for storage per year across all data types. Big growth areas include e-mail (100-300% growth per year), data warehousing (72-115% per year) and internet content (75%). Customer Relationship Management (CRM) systems are also requiring more storage (growth 47% per year) [13].
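    To put a compound growth rate of this kind in context, the short projection below shows what 76% annual growth implies over a typical five-year equipment lifetime; the 1TB starting point is purely illustrative:

        # Compound growth projection; the starting capacity is an assumption,
        # the 76% per year figure is the estimate quoted above.

        capacity_tb = 1.0          # assumed starting capacity
        growth_rate = 0.76         # 76% growth per year

        for year in range(1, 6):
            capacity_tb *= 1 + growth_rate
            print(f"after year {year}: {capacity_tb:.1f} TB")
        # After five years the requirement is roughly 17 times the starting capacity.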


    Demand for storage in the HE/FE sector is growing in line with other business sectors. Growth is predicted within the e-mail and internet content data types and also newer functionality such as data warehousing and digital media storage.

    Storage may be becoming cheaper in terms of cost per megabyte, but high data growth rates and the cost of management and backup of all this data are becoming prohibitive.

    Fundamentally over recent years, the cost of storage has decreased in terms of the capital cost per megabyte of storage. However, the total lifetime cost of storage including its management, backup and maintenance should be considered. Many different industry analysts quote prices per megabyte of storage and varying factors of that cost per megabyte to manage it. A conservative estimate is a factor of three for costs of management of storage over its lifetime versus initial purchase costs.
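    As a simple illustration of that rule of thumb, the figures below assume a purely hypothetical £50,000 storage purchase over a five-year life and apply the factor-of-three management estimate:

        # Hypothetical lifetime-cost illustration of the "factor of three" estimate;
        # the purchase price and lifetime are invented for the example.

        purchase_cost = 50_000        # capital cost of the storage (assumed)
        management_factor = 3         # conservative management multiplier quoted above
        lifetime_years = 5

        management_cost = purchase_cost * management_factor
        total_cost = purchase_cost + management_cost
        print(f"capital:        £{purchase_cost:,}")
        print(f"management:     £{management_cost:,} over {lifetime_years} years")
        print(f"lifetime total: £{total_cost:,} "
              f"({100 * management_cost / total_cost:.0f}% of which is management)")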

    Industry analysts also publish varying figures for the different costs of managing DAS, NAS and SAN storage. However, the essential point is not the absolute value of any analyst's figures for these architectures; the experience borne out in reality is that many more gigabytes, even terabytes, of storage can be maintained by a given amount of staff resource in a SAN scenario compared to a DAS scenario. These economies of scale are even more apparent when a Centralised Backup System (CBS) is an integral part of the SAN landscape.

    Management of escalating amounts of storage is indeed one of the chief challenges facing IT support organisations in all sectors.

7.0 Storage Management

    Storage management can encompass several layers: management of the individual devices constituting the SAN (SAN Management), management of them as a virtual resource pool (Storage Virtualisation), and management/reporting of data characteristics and growth patterns (Storage Resource Management).

7.1 SAN Management

    SAN Management software is needed to actually configure and monitor the components of the SAN to enable them all to function together. It is directly concerned with enabling and controlling the movement of data within the SAN infrastructure.

SAN Management products are typically able to:

    - Discover devices attached to the SAN: hosts, storage devices, switches and other fabric components
    - Manage and monitor ports on the Fibre Channel switches
    - Administer zoning on the switches to selectively enable access
    - Administer LUN masking in the storage arrays to partition access to particular servers
    - Monitor traffic levels and performance between components and through the switches
    - Manage configuration changes within the SAN

7.2 Storage Virtualisation

    Virtualisation is an overused term in computing, and in the specific area of storage there is also much scope for confusion over the use of the term "storage virtualisation". Some storage arrays, for example, have in-built virtualisation features whereby the location of data and the disposition of storage LUNs are hidden.

    Storage Virtualisation for the purposes of this report is an additional (optional) layer of storage management that can provide a centrally managed pool of storage with virtual volumes being made available to servers, as illustrated schematically in Figure 3 below. Such virtualisation solutions have the additional merit of operating in heterogeneous SAN environments, consolidating the storage devices from several vendors.

    Such virtualisation solutions [16, 17] fall into two camps: (a) in-band or symmetric and (b) out-of-band or asymmetric solutions. For in-band solutions all control functions, metadata and data pass through the storage virtualisation server or appliance. For out-of-band solutions, only control data and metadata (data about the data) passes through the storage virtualisation server or appliance with raw data flows being direct between host servers and the storage arrays. Unlike in-band products, out-of-band solutions require the installation of an agent on each host server to enable communication with the storage virtualisation server for volume information
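    As a purely conceptual sketch of that distinction (not any vendor's product), in-band virtualisation puts the appliance in the data path, whereas out-of-band virtualisation is consulted only for the volume-to-LUN mapping while data flows directly between host and array:

        # Conceptual contrast only; real products differ considerably in detail.

        class VirtualisationServer:
            """Holds the mapping from virtual volumes to physical array LUNs."""
            def __init__(self):
                self.mapping = {"vol1": ("array_A", "LUN_7")}

            def resolve(self, volume):       # metadata request (control path)
                return self.mapping[volume]

        def in_band_read(appliance, volume, block):
            # Data AND metadata pass through the virtualisation appliance.
            array, lun = appliance.resolve(volume)
            return f"appliance fetches block {block} from {array}/{lun} and relays it to the host"

        def out_of_band_read(appliance, volume, block):
            # The host agent asks only for the mapping, then reads the array directly.
            array, lun = appliance.resolve(volume)
            return f"host reads block {block} directly from {array}/{lun}"

        srv = VirtualisationServer()
        print(in_band_read(srv, "vol1", 0))
        print(out_of_band_read(srv, "vol1", 0))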
