
1 Advanced level: Data protection
RAID disk arrays, automated tape libraries, optical CD recording devices.

2 Introduction: disks and power supplies are the weakest links of a high-availability system (HAS)
Disks hold the data; the data must be protected, and it must be recoverable with the help of additional systems. Disk storage systems: disk subsystems, JBOD (Just a Bunch of Disks), hot-pluggable and warm-pluggable disks, hot spares, write cache, disk arrays, SAN, NAS.

3 SGI InfiniteStorage Product Line
DAS, NAS, SAN. High Availability: Redundant Hardware and FailSafe™, XVM. Data Protection: Legato NetWorker, XFS™ Dump, OpenVault™. HSM: SGI Data Migration Facility (DMF), TMF, OpenVault™. Data Sharing: XFS, CIFS/NFS, Samba, ClusteredXFS (CXFS™), SAN over WAN. Storage Hardware: TP900, TP9100, TP9300, TP9400, TP9500, HDS 99x0, STK Tape Libraries, ADIC Libraries, Brocade Switches, NAS 2000, SAN 2000, SAN 3000. Choose only the integrated capabilities you need. Cracow ‘03 Grid Workshop


13 Bus parameters

14 Bus parameters, cont.

15 Comparison, cont.

16 Mirrored disks and RAID
Single disks: MTBF of … hours. Mirrored disks (RAID 1), hot-swap: low efficiency of capacity utilization (~50%). RAID (D. A. Patterson, G. Gibson, R. H. Katz, "A Case for Redundant Arrays of Inexpensive Disks (RAID)", University of California, Berkeley, 1987): a group (or groups) of disks controlled jointly; simultaneous writes and reads on different disks; system fault tolerance achieved by writing redundant information; cache.
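Since the slides only state that redundant information makes the data recoverable, here is a minimal sketch of the idea behind RAID 5 style parity (illustrative only, not any vendor's implementation): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

/* RAID 5 style XOR parity: build parity, then rebuild a "failed" block. */
#include <stdio.h>
#include <string.h>

#define DISKS  4          /* 3 data disks + 1 parity disk (3D+1P) */
#define STRIPE 8          /* bytes per block, tiny for illustration */

static void xor_into(unsigned char *dst, const unsigned char *src) {
    for (int i = 0; i < STRIPE; i++) dst[i] ^= src[i];
}

int main(void) {
    unsigned char block[DISKS][STRIPE] = {
        "dataAAA", "dataBBB", "dataCCC", {0}   /* last block holds parity */
    };
    for (int d = 0; d < DISKS - 1; d++) xor_into(block[DISKS - 1], block[d]);

    /* Simulate losing disk 1 and rebuilding it from the remaining blocks. */
    unsigned char rebuilt[STRIPE] = {0};
    for (int d = 0; d < DISKS; d++)
        if (d != 1) xor_into(rebuilt, block[d]);

    printf("rebuilt block 1: %s\n", rebuilt);   /* prints "dataBBB" */
    return 0;
}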

17 RAID, data servers, data-intensive workloads
A RAID array is typically: housed in a single enclosure with its own power supply; equipped with redundant components to ensure high availability; usually fitted with two or more I/O controllers; equipped with cache memory to speed up communication; configured to minimize the possibility of failure (a hardware implementation of the RAID standard).

18 RAID, cont. Software/hardware RAID; each computer has redundant interfaces
HW RAID is tied to a single disk array and has its own disk cache and processor; disk arrays provide several buses for several host interfaces, plus read-ahead buffers. SW RAID offers flexibility and a better price. HW RAID: potential single points of failure are the power supplies, cooling, power wiring, internal controllers, battery backup and motherboard.

19 RAID features


25 Comparison of RAID levels

26 Choosing a RAID level

27 Range of applications

28 Storage Area Network (SAN)
LAN and SAN. Advantages: centralized management and allocation of storage; reliability and availability with extensive failover; network backup and LAN-free backups.


30 SCSI vs. FC
SCSI: each-to-each cabling, 2 connections per pair. FC: computers, hubs and disks on a shared fabric.

31 Why Fibre Channel?
Gigabit bandwidth now: 1 Gb/s today, soon 2 and 4 Gb/s. High efficiency: FC has very little overhead. Multiple topologies: point-to-point, arbitrated loop, fabric. Scalability: from point-to-point, FC scales to complex fabrics. Longer cable lengths and better connectivity than existing SCSI technologies. Fibre Channel is an ISO and ANSI standard.

32 High throughput
Storage Area Networks today provide 1.06 Gb/s, 2.12 Gb/s this year and 4.24 Gb/s in the future. Multiple channels expand bandwidth, e.g. to 800 MByte/s.
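For orientation, the slide's figures are consistent with the usual Fibre Channel arithmetic (my own back-of-the-envelope calculation, assuming 8b/10b line encoding):

\[ 1.0625~\text{Gb/s} \times \tfrac{8}{10} = 850~\text{Mb/s} \approx 100~\text{MB/s per channel}, \qquad 8 \times 100~\text{MB/s} = 800~\text{MB/s}. \]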

33 FC topologies: point-to-point, 100 MByte/s per connection
Simply defines a connection between a storage system and a host.

34 FC topologies, cont. FC-AL (Arbitrated Loop): single loop and dual loop
Single loop: data flows around the loop, passed from one device to another. Dual loop: some data flows through one loop while other data flows through the second loop. Each port arbitrates for access to the loop; ports that lose the arbitration act as repeaters.

35 FC topologies, cont. FC-AL, Arbitrated Loop with hubs
Hubs make a loop look like a series of point-to-point connections. Addition and deletion of nodes is simple and non-disruptive to information flow.

36 FC topologies, cont. FC switches
Switches permit multiple devices to communicate at 100 MB/s, thereby multiplying bandwidth. Fabrics are composed of one or more switches; they enable Fibre Channel networks to grow in size.

37 So how do these choices impact my MPI application performance?
Let's find out by running micro-benchmarks that measure basic network parameters such as latency and bandwidth (Netperf, PALLAS MPI-1 Benchmark) and real MPI applications (ESI PAM-Crash, ESI PAM-Flow, DWD/RAPS Local Model).

38 Benchmark HW setup: 2 nodes HP N4600, each with 8 processors running HP-UX 11i. Lots of NW interfaces per node: 4 × 100BT Ethernet, 4 × Gigabit Ethernet copper, 1 × Gigabit Ethernet fibre, 1 × HyperFabric 1. Point-to-point connections, no switches involved. J6000 benchmarks were run on a 16-node cluster at the HP Les Ulis benchmark center (100BT and HyperFabric 1 switched networks). HyperFabric 2 benchmarks were run on the L3000 cluster at the HP Richardson benchmark center.

39 Jumbo frames: same performance with fibre and copper GigaBit?
GigaBit Ethernet allows the packet size (the Maximum Transmission Unit, MTU) to be increased from 1500 bytes to 9000 bytes. This can be done at any time by invoking lanadmin –M 9000 <nid>, and it can be applied to both copper and fibre GigaBit interfaces.
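A rough illustration of why this matters (my own arithmetic, not from the slides): fewer, larger frames mean proportionally fewer per-packet interrupts and protocol headers for the same payload,

\[ \frac{4~\text{MB}}{1500~\text{B}} \approx 2800~\text{frames} \qquad \text{vs.} \qquad \frac{4~\text{MB}}{9000~\text{B}} \approx 470~\text{frames}, \]

so a 9000-byte MTU cuts the per-frame overhead roughly six-fold.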

40 GigaBit MTU Size / Fibre vs Copper

41 What is the Hyper Messaging Protocol (HMP)?
Standard MPI messaging with remote nodes goes through the TCP/IP stack, and massive parallelism with clusters is limited by the OS overhead for TCP/IP. Example: PAM-Crash MPP on an HP J6000 HyperFabric workstation cluster (BMW data set) shows 25% system usage per CPU on an 8x2 cluster and 45% system usage per CPU on a 16x2 cluster. This overhead is OS related and has nothing to do with NW interface performance.

42 What is HMP ? HMP Idea: Create a shortcut path for MPI applications in order to bypass some of the TCP/IP overhead of the OS Approach: Move the device driver from OS kernel into the application. This requires direct HW access privileges. Now available with HP MPI 1.7.1 Only for HyperFabric HW

43 MPI benchmarking with the PALLAS MPI-1 benchmark (PMB)
PMB is an easy-to-use MPI benchmark for measuring key parameters like latency and bandwidth; it can be downloaded from … . Only one PMB process per node was used, to make sure that NW traffic is not mixed with SMP traffic. Unlike the netperf test, this is not a throughput scenario.

44 Selected PMB operations
PingPong with a 4-byte message (MPI_Send, MPI_Recv) measures message latency. PingPong with a 4 MB message (MPI_Send, MPI_Recv) measures half-duplex bandwidth. SendRecv with a 4 MB message (MPI_Sendrecv) measures full-duplex bandwidth. Barrier (MPI_Barrier) measures barrier latency.
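To make the PingPong measurement concrete, here is a minimal self-contained sketch of the same idea (my own illustration, not the PALLAS source): rank 0 sends n bytes to rank 1 and waits for the echo; half the round-trip time is the latency, and n divided by that time is the bandwidth.

/* Minimal MPI ping-pong sketch in the spirit of PMB's PingPong test. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = (argc > 1) ? atoi(argv[1]) : 4;   /* message size in bytes */
    const int reps = 1000;
    char *buf = malloc(n);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {                            /* send, then wait for echo */
            MPI_Send(buf, n, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {                     /* echo back what arrived */
            MPI_Recv(buf, n, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, n, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double half_rt = (MPI_Wtime() - t0) / (2.0 * reps);
    if (rank == 0)
        printf("%d bytes: latency %.1f us, bandwidth %.1f MB/s\n",
               n, half_rt * 1e6, n / half_rt / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Launch it with two ranks placed on different nodes, e.g. something like mpirun -np 2 ./pingpong 4194304 (the exact launcher syntax depends on the MPI implementation), so that the measured path is the network rather than shared memory.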

45 Selected PMB results
                              SendRecv 4 MB   PingPong 4 MB   Barrier   PingPong 4 B
Shared Memory                 548.0 MB/s      544.8 MB/s      1.8 μs    1.7 μs
HyperFabric 2 LowFat MPI      143.6 MB/s      112.0 MB/s      54 μs     27 μs
…                             160.0 MB/s      97.0 MB/s       105 μs    53 μs
HyperFabric 1                 110.8 MB/s      56.8 MB/s       125 μs    61 μs
GigaBit, 9k MTU               92.8 MB/s       62.1 MB/s       142 μs    72 μs
GigaBit, 1.5k MTU             79.8 MB/s       62.7 MB/s       135 μs    …
100baseT                      22.1 MB/s       10.8 MB/s       112 μs    …

46 PingPong as measured

47 PingPong per formula t = t0+n*fmax
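The fit on this slide is presumably the standard linear cost model for point-to-point messages (the symbol names below are my own reading of the slide's shorthand):

\[ t(n) \;=\; t_0 + \frac{n}{BW_{\max}}, \]

where \(t_0\) is the zero-byte latency and \(BW_{\max}\) the asymptotic bandwidth; fitting the measured PingPong times against the message size \(n\) yields both parameters.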

48 Features of SANs (Storage Area Networks)
Centralized management. Storage consolidation / shared infrastructure. High availability and disaster recovery. High bandwidth. Scalability. Shared data!? (unfortunately not easy!)


51 Centralized management
(Diagram: a LAN connecting IRIX, NT and Linux servers in buildings A, B, C and D.) Storage today is server-attached, and therefore sits at the server's location. It is difficult to manage, expensive to manage and difficult to maintain.

52 Centralized management, cont.
(Diagram: the same LAN plus a Storage Area Network managed by the IT department.) Storage is network attached (NAS), independent of the servers' location, and easy and cost-effective to manage and maintain.

53 Consolidation
Before: storage is still split amongst different storage systems for the different hosts (Linux, NT, IRIX), even on a Storage Area Network. Now: storage shares a common infrastructure on the SAN.

54 High Availability and Disaster Recovery (example)
Highly redundant network with no single point of failure.

55 Scalability: can scale to very large configurations quickly

56 Shared data!?
Currently there is just shared infrastructure (Linux and NT hosts on a Storage Area Network).

57 Networked data back-end
NAS (Network Attached Storage): data servers built from a lightweight server, mass storage, management and networking. Applications: data-intensive workloads, searching and analysis, real-time processing, multimedia and videoconferencing, video servers with QoS, multiprocessor computing. SAN (Storage Area Network): separated from the LAN.

58 Central Management Console
Network Attached Storage (NAS): LANvault backup appliances for remote sites (conceptual view). A LAN at the remote office with a backup server, connected over the WAN or Internet to a central management console. Easy to install, easy to manage over the Web, suited for remote or non-MIS offices.

59 NAS eliminates the factors that create risk for remote site backup
Reliability of an appliance: self-contained, dedicated to backup only; nothing complex to crash during backup; future performance enhancements via Gigabit Ethernet. Control via the Central Management Console: no uncertainty about backup having been done correctly; no need to rely on remote site operators; no outdated or "renegade" remote sites - proactive alerts about upgrades, patches and service packs and automated updates of multiple sites ensure consistency across the entire enterprise.

60 Comparing SAN and NAS

61 Two buzzwords in the IT industry
Server consolidation: maybe in a commercial environment, usually not in a technical environment; a hammer is a hammer, a screwdriver is a screwdriver; an HPC system cannot be used as an HPV system. Storage consolidation: DAS -> NAS -> SAN. Cracow ‘03 Grid Workshop

62 History of storage architectures: DAS - Direct Attached Storage
Pro: appropriate performance. Con: distributed, expensive administration; data may not be where it is needed; multiple copies of data stored. Cracow ‘03 Grid Workshop

63 History of storage architectures: NAS - Network Attached Storage
Pro: centralized, less expensive administration; one copy of data; access from every system. Con: network performance is the bottleneck. Cracow ‘03 Grid Workshop

64 History of storage architectures: SAN - Storage Area Network
Pro: centralized administration; performance equivalent to DAS. Con: NO FILE SHARING; multiple copies of data stored. Cracow ‘03 Grid Workshop

65 How does that translate to a GRID environment?
Storage consolidation is useful in a local environment (a GRID node) but does not work between remote GRID nodes. Current data access between GRID nodes: data has to be copied before/after the execution of a job. Problems: the copy process has to be done manually or included in the job script; the copy can take long; multiple copies of the data require additional disk space; revision problem. Cracow ‘03 Grid Workshop

66 What if... ...a SAN had the same file-sharing capability as a NAS? ...one could build a SAN between different buildings/sites/cities and not lose performance? Cracow ‘03 Grid Workshop

67 Storage Area Networks (SAN) The High Performance Solution
A first step: each host owns a dedicated volume consolidated on a RAID array. Storage management is centralized. Offers a certain level of flexibility. SAN LAN Cracow ‘03 Grid Workshop

68 SGI InfiniteStorage Shared FileSystem (CXFS)
A unique high-performance solution: each host shares one or more volumes consolidated in one or more RAID arrays. Centralized storage management; high modularity; true high-performance data sharing; heterogeneous environment: IRIX; Windows NT, 2000 and XP; Linux; Mac OS; Solaris, AIX, HP-UX. LAN and SAN. Cracow ‘03 Grid Workshop

69 Fibre Channel over SONET/SDH: the high-efficiency, long-distance alternative
(Chart: transfer time in hours vs. distance in kilometers for New York, Boston, Chicago, Denver.) Data re-transmission due to IP packet loss limits actual IP throughput over distance. Cracow ‘03 Grid Workshop

70 LightSand solution for building a global SAN
(Diagram: client LAN and storage servers with a tape system, an IP router and a Fibre Channel switch, connected across the WAN over DWDM, dedicated fiber or SDH/SONET, carrying both FC and IP.) Cracow ‘03 Grid Workshop

71 LightSand products
S-600: 2 ports FC and/or IP 1 Gb/s; point-to-point SAN interconnect over SONET/SDH OC-12c (622 Mb/s bandwidth); low latency (approximately 50 µs). S-2500: 3 ports FC and/or IP 1 Gb/s; point-to-point SAN interconnect over SONET/SDH OC-48c (2.5 Gb/s bandwidth); point-to-multipoint SAN interconnect over SONET/SDH (up to 5 SAN islands, … Mb/s per link). Cracow ‘03 Grid Workshop

72 Data movement today - a recent case study
Los Alamos National Laboratory (LANL) and Sandia National Laboratory (SNL), each with servers on a Fibre Channel Storage Area Network, linked by an IP network. Scientists at LANL currently dump 100 GB of supercomputing data to tape and FedEx it to SNL, because that is faster than trying to use the existing 155 Mb/s IP WAN connection: the actual measured throughput is 16 Mb/s (10% bandwidth utilization). Cracow ‘03 Grid Workshop
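The rough arithmetic behind that choice (my own, using the slide's figures):

\[ \frac{100~\text{GB} \times 8}{16~\text{Mb/s}} \approx 5 \times 10^{4}~\text{s} \approx 14~\text{h}, \qquad \frac{800~\text{Gb}}{155~\text{Mb/s}} \approx 5200~\text{s} \approx 1.4~\text{h}, \]

so even full use of the 155 Mb/s link would still take over an hour, while at the observed 16 Mb/s the transfer takes the better part of a day.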

73 The Better Way – Directly Between Storage Systems
IP Network Server Server Local Data Center Remote Data Center FC SAN FC SAN Telco SONET/SDH Infrastructure LightSand Gateway LightSand Gateway Using LightSand gateways, the same data could be transferred in a few minutes! Cracow ‘03 Grid Workshop

74 What does that mean for a GRID Environment?
Full Bandwidth Data Access across the GRID No Multiple Copies of Data avoid the revision problem do not waste disk space Make GRID Computing more efficient GDAŃSK POZNAŃ ŁÓDŹ KRAKÓW WROCŁAW WARSZAWA Cracow ‘03 Grid Workshop

75 Summary: problems in storage management
System design and configuration - device management. Problem detection and diagnosis - error management. Capacity planning - space management. Performance tuning - performance management. High availability - availability management. Automation - self management. DATA GRID management (GRID projects)!! Replica management, optimization of data access, software that depends on the kind of data - expert systems, component-expert technology, estimation of data availability time, ...


77 Disk array vendors
EMC (Symmetrix), Hewlett-Packard, IBM Corp., MAXSTRAT Corp., SUN Corp.

78 EMC Corp. EMC Corporation manufactures the high-capacity Symmetrix disk arrays with integrated cache memory. Symmetrix arrays can work with hardware from the leading server vendors: HP, IBM, SUN, Silicon Graphics, DEC and others. An array can be attached to a server through an FWD SCSI link or (in the case of HP machines) through an FC-AL fibre-optic link. EMC Corp. currently offers two array families, 3000 and 5000, with a maximum capacity of 2.96 terabytes; they differ in interface types: FWD SCSI and FC-AL in the 3000 series, and ESCON in the series intended to work with IBM-standard mainframes. Because of how widespread they are, only the models of the … family are described below.

79 The Symmetrix 3000 family The Symmetrix 3000 family comprises three models that differ in maximum capacity, physical size and power consumption, maximum cache size and number of ports. Very high performance is achieved thanks to a cache of up to 4 GB, which provides maximum data-processing speed and maximum application performance. Depending on the model, a Symmetrix array can hold up to 128 disk drives, giving up to 2.94 TB of protected disk storage.

80 Symmetrix 3430 ICDA

81 Comparison of SRDF modes (Symmetrix Remote Data Facility - remote RAID 1)

82 MAXSTRAT Corp.

83 LE and XLE parameters

84 Disk arrays ( < 1 TB )

85 Disk arrays ( > 1 TB )

86 Example: the HP storage product family
High-end class: Storage Area Networking, mission-critical high availability/disaster recovery, e-commerce, data warehousing and consolidation (XP512, XP48). Mid-range class: mission-critical high availability, ERP, data warehousing, data analysis & simulation (VA7100/VA7400, FC60). Entry class: e-commerce, application and file/print (DS2100, 12H with AutoRAID, FC10, DS2300). The range spans disaster tolerance and high availability, from the ISP/mid-market to the enterprise.

87 XP 512/48: fully Fibre Channel, fully redundant
Guaranteed 100% data availability; scalable to about 90 TB; crossbar - a modern switch architecture with 6.4 GB/s throughput; advanced management and functionality; the most attractive product in this class according to Gartner Group.

88 Virtual Array 7100/7400: 2 Gb/s Fibre Channel, virtual architecture
Simple operation with full management capabilities; advanced methods of protecting against failures; scalable to 7.7 TB; AutoRAID technology - automatic optimization; operation in heterogeneous environments.

89 HP StorageWorks disk arrays
The XP family, the EVA family and the MSA family. NOTE: THIS SLIDE POSITIONS OUR LONGTERM STRATEGIC PRODUCTS. HP StorageWorks Array Portfolio.

HP StorageWorks MSA family - is ideal for small and medium business (SMB) customers and enterprise departments looking to make the first step toward external storage consolidation and SAN. Typical customers tend to believe investment protection is critical, are price sensitive, and have moderate concerns about scalability. The MSA family is a perfect solution for customers with deployed Smart Array controllers and/or ProLiant servers because it is built with HP Smart Array technology and features. With the MSA family and its unique DtS (DAS to SAN) migration technology, customers can easily move from internal ProLiant server storage to external storage and even SAN, leveraging common disk drives, management tools, data and technology. The MSA family consists of three products (MSA30, MSA500 and MSA1000) as well as optional server/storage bundles. The MSA family delivers super simplicity and all the cost savings of consolidated storage. (MSA family - easy, safe, affordable.)

HP StorageWorks EVA family - is ideal for mid-range and enterprise customers that require high availability, but not at the same level as the XP. They can tolerate some unavailability; they may want an identical remote copy of the data, for instance, but they have a greater tolerance for the amount of time to restore. These are customers who do not buy NonStop servers or high-end Superdomes. The EVA is an outstanding solution for customers who are looking for enterprise-class capabilities at a more cost-effective price, for whom TCO and ongoing management costs are very important. The EVA family is very simple to use and highly automated, enabling customers to manage more storage per administrator than with any other array subsystem on the market, providing the best TCO. (EVA family - enterprise-class that's powerfully simple.)

HP StorageWorks XP family - is ideal for enterprise customers requiring the highest level of scalability, availability, disaster recovery and business continuity solutions, and guaranteed uptime service levels. The level of service provided here could be compared to the level of service from our NonStop servers or high-end Superdome servers. The XP is an excellent solution for customers with applications that cannot afford any downtime - "bullet proof", if you will - and who are willing to pay the cost to implement that level of service; their business demands it. The XP also supports the largest number of OSs and offers very robust solutions. Plus you can manage up to 149 TB in one box with one interface that controls it all. (XP family - Xceptional availability, scalability and throughput.)

It's a matter of choice - we have customers who clearly understand this and own MSAs, EVAs and XPs, because they have different needs within their business.

(Positioning diagram, scalability vs. throughput: XP family - always available, <165 TB, consolidation, data center + disaster recovery, the most demanding Oracle/SAP-class applications, HP-UX, Windows and 20+ other platforms including mainframe. EVA family - exceptional TCO, <72 TB, storage consolidation + disaster recovery, simplification through virtualization, Windows, HP-UX, Linux and other Unix systems. MSA family - low-cost consolidation, <24 TB, web, Exchange, SQL, simple DAS-to-SAN (ProLiant), Windows, Linux, NetWare and others.)

90 "Modular" physical architecture, e.g. VA/EVA
(Diagram: disks 1, 2, 3, ... 105 on FC-AL loops; DFP controllers; FC front-end; cache memory.)

91 "Monolithic" physical architecture, e.g. XP12000
(Diagram: CHIP and ACP boards on Fibre Channel connected through an 83 GB/s crossbar; up to 4 ACP pairs driving up to 1152 disks of 72 GB 15K rpm and 146 GB 10K rpm; up to 4 CHIP pairs providing up to 128 ports; cache and shared memory up to 128 GB.) The XP1024's crossbar architecture eliminates bus contention, creating a high-bandwidth path from server to disk. In simplified terms, the "crossbar" provides literally two (2) switches in the XP1024 (two for redundancy) that enable all data paths to remain open without any contention; this translates to an aggregate backplane throughput of 15 GB/s. The XP architecture is highly redundant with no single point of failure: redundant components and online firmware upgrades keep the array up and running, and ensure that critical data is available for use. This architecture is also highly efficient. The host side (CHIP) processes cache hits, checks access rights, and links protocols, so server requests are processed and handled quickly. The Channel Host Interface Process (CHIP) is a PCB used for data transfer control between the host and cache memory. The Array Control Process (ACP) is a PCB used for data transfer control between disk drives and cache memory; four ports per PCB are mounted, ports are controlled by their respective dedicated microprocessors, and data is transferred concurrently between ports and disk drives. Cache memory is non-volatile (battery backed up) and write data is duplicated, so data is not lost even when a failure of one component occurs in a power supply or PCB. Shared memory is non-volatile memory (battery backed up) used to store cache directories and disk control information; the required shared memory varies with the size of cache and the number of LDEVs.

92 XP disk arrays have a proud heritage
More than 5000 XP Frames and 36 Petabytes of capacity installed so far! The XP Next Gen is the latest generation of the widely used and highly proven XP disk array platform. HP has shipped more than 4000 XP frames and more than 36 Petabytes (36,000 Terabytes) of XP storage capacity. As the latest generation of XP arrays, the XP Next Gen provides even more availability, scalability, and flexibility to fit your capacity requirements. The XP Next Gen products provide: Increased disk capacity including support for leading disk drives: 72GB 15K rpm and 146 GB 10K rpm dual ported Fibre Channel Arbitrated Loop disk drives increased maximum cache bandwidth increased shared memory bandwidth increased disk connection bandwidth XP1024 XP512 XP 12000 XP256 XP128 XP48

93 XP Next Gen performance
Max sequential performance: 6.0 GB/s / 2.1 GB/s / 1.2 GB/s. Max random performance from cache: 2,368,000 / 544,000 / 272,000 IOPS. Max random performance from disk: 120,000 / 66,000 / 33,000 IOPS. Data bandwidth: 68 / 10 / 5 GB/s. Control bandwidth: 15 GB/s / 5 GB/s / 2.5 GB/s.

94 XP Next Gen scalability
Max disk drives: 1152 + external storage / 1024 / 128. Raw capacity: 165 TB / 149 TB / 18 TB. Usable capacity: 144 TB / 129 TB / 16 TB. Max cache: 128 GB / 64 GB. Max shared memory: 6 GB / 4 GB. Supported RAID levels: RAID 1 (2D+2D), RAID 1 (4D+4D), RAID 5 (3D+1P), RAID 5 (7D+1P), (RAID 5 (6D+2P)). Disk drive types: 72 GB 15K rpm, 146 GB 10K rpm, (146 GB 15K rpm), (300 GB 10K rpm), 36 GB 15K rpm, 73 GB 10K rpm, 73 GB 15K rpm. (Items in parentheses: 2nd release / future.)

95 XP Next Gen scalability
Start at the size you need, scale to the highest usable capacity of any array on the planet. The XP Next Gen allows you to start with the configuration you need today and scale to the highest usable capacity of any disk array as your needs grow. With the XP Next Gen, you can mix drives of various capacities and speeds in the same array to achieve an optimal balance of performance and capacity for your particular application demands. The XP Next Gen also allows you to size the cache, ACP pairs, shared memory, and the number of host ports and spare drives to meet your particular requirements. The XP Next Gen is the right choice if you need the very best performance; with its high performance and storage capacity it is ideally suited for business intelligence, that is, a combination of data warehousing, data mining and information generation. There are 4 slots reserved exclusively for spare disks; another 36 slots can be used either for spares or data disks, for a maximum of 40 spare disks.
Minimum / increment / maximum: data drives 8 / 4 / 1148; spare drives 1 to 40; capacity from 576 GB raw (288 GB usable) to 165 TB raw (144 TB usable); cache 4 GB to 128 GB; cache bandwidth 17 GB/s, 17/34 GB/s, up to 68 GB/s; shared memory 1 GB to 6 GB; shared memory bandwidth 7.5 GB/s to 15 GB/s; host ports 8/16/32 up to 128; LDEVs 8,192 to 16,384 (2nd release); frames up to 5; ACP pairs …

96 XP Next Gen scalability
Start at the size you need, scale to the highest usable capacity of any array on the planet. NOTES: this slide is animated; each step depicts the XP scaling up. ACP pairs, DKUs and disk drives can be added in a different order than shown by the animation; the order depends on the performance/cost preferences of the customer. The XP Next Gen allows you to start with the size you need today and scale as your needs grow. The smallest XP Next Gen consists of one DKC (control cabinet); it can contain as little as one CHIP pair, 4 gigabytes of cache memory, 1 gigabyte of shared memory, one ACP pair and 8 disk drives. As your needs grow you can continue to add disk drives, up to 128 drives in the DKC. You can add an additional DKU (disk cabinet) and add more disk drives; additional ACP pairs and disks can be added. You can also add additional infrastructure: the XP Next Gen will hold up to 4 CHIP pairs, 128 gigabytes of cache and 12 gigabytes of shared memory. Additional DKUs can be added; each DKU can hold up to 256 disk drives, up to 512 disk drives per ACP pair. Continue to expand capacity as your needs grow - scale to the highest usable capacity of any disk array on the planet!

97 XP Next Gen RAID types: RAID 1 (2D+2D), RAID 1 (4D+4D), RAID 5 (3D+1P), RAID 5 (7D+1P), RAID 5 (6D+2P)
RAID 1 2D+2D: 2 data + 2 mirrored data configuration; the array can read from either set of identical information. RAID 0/1 results in 50% efficiency of usable storage compared to raw capacity. RAID 1 4D+4D: 4 data + 4 mirrored data, doubling the maximum size of a RAID 1 group; for certain operating systems, having a smaller number of larger-capacity stripes makes configuring the storage easier and simpler. RAID 5 3D+1P: 3 data + 1 parity drive configuration, giving 75% efficiency of raw storage. RAID 5 7D+1P: 7 data + 1 parity configuration, improving the storage efficiency to 87.5% of usable compared to raw storage. RAID 5 6D+2P: 6 data + 2 parity configuration, improving fault tolerance - two drives from one group can fail and data is still available; the trade-off is slightly lower write performance. This RAID type can also properly be called RAID 6; it is scheduled to be available for the 2nd release. All RAID types can be configured in the same XP array at the same time.
RAID level 0/1 is a combination of RAID levels 0 and 1: the array is first set up as a group of mirrored pairs (RAID 1), then striped (RAID 0). A RAID 0/1 array built this way can sustain multiple drive failures, as long as both drives of a mirrored pair do not fail at the same time; the probability of this occurring is very low. Performance is very good with RAID 0/1 arrays, and redundancy is also very high, but it comes at the cost of additional disk drives. RAID 5 uses block-level striping and distributed parity; distributing the parity over all of the drives increases performance by removing the bottleneck of a single parity drive, and the parity is used for fault tolerance.
Write performance: RAID 1 writes require 2 I/Os (the same data written twice) - RAID 1 is the fastest-writing RAID type. RAID 5 (3D+1P) and RAID 5 (7D+1P) writes require 4 I/Os (1 - read old data, 2 - read old parity, 3 - write new data, 4 - write new parity) - RAID 5 write performance can be as low as one half the performance of RAID 1 writes. RAID 5 (6D+2P) writes require 6 I/Os (read old data, read both old parities, write new data, write both new parities) - RAID 5 (6D+2P) write performance can be as low as one third the performance of RAID 1 writes. Read performance is similar for all RAID types. The XP disk arrays automatically use a spare drive when a disk failure occurs, making the time of degraded protection small; after the failed drive is replaced the data is rebuilt and the spare drive is freed. (Comparison from the slide: efficiency 50% / 75% / 87.5%; fault tolerance 1 of 4, 1 of 2, 1 of 8, 2 of 8 drives.)
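The efficiency figures follow directly from the ratio of data drives to total drives in a RAID group:

\[ \text{usable fraction} = \frac{D}{D+P}: \quad \tfrac{2}{4}=50\%,\ \ \tfrac{4}{8}=50\%,\ \ \tfrac{3}{4}=75\%,\ \ \tfrac{7}{8}=87.5\%,\ \ \tfrac{6}{8}=75\%. \]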

98 Array Control Processor (ACP)
The ACP performs all data movement between the disks and cache memory, and also provides data protection through RAID 1 mirroring and RAID 5 parity generation. Each ACP provides 8 redundant 2 Gbps FC-AL loops (16 loops total per pair) and supports up to 64 dual-ported drives per loop. There is a maximum of four ACP pairs per XP Next Gen system, and ACPs are configured in pairs for redundancy. The ACP controls data transfer between the disk drives and cache memory; each XP Next Gen ACP has eight 2 Gb/s FC-AL loops, compared to four 1 Gb/s loops in the XP1024/XP128. The XP Next Gen can have one, two, three or four pairs of ACPs. (Diagram: eight 8-port ACP boards, ACP 1A/1B through ACP 4A/4B.)

99 CHIPs Client Host Interface Processors (CHIPs) provide connections to hosts or servers and come in pairs Maximum of four pairs per XP Next Gen system Fibre Channel 16 and 32 port 1 to 2 Gigabit/sec Auto sensing Fibre Channel in both short and long wave versions ESCON 16 port ExSA Channel (ESCON compatible) FICON 8 and 16 port 1 to 2 Gigabit/sec auto-sensing FICON in both short and long wave versions iSCSI 16 port 1 Gigabit Ethernet short wave Client Host Interface Processors (CHIPs) Client Host Interface Processors provide connections to host or servers that use the XP Next Gen for data storage. CHIPs come in board pairs with a minimum of one pair and up to a maximum of four pairs of CHIPS per XP Next Gen. CHIP pairs available for use in the XP Next Gen include: 16 and 32 port 1 to 2 Gbps Auto sensing short-wave and long-wave Fibre Channel with Continuous Access support 16 port ExSA Channel (ESCON compatible) 8 and 16 port 1 to 2 Gbps auto-sensing FICON 1 Gb in both short- and long-wave versions 16 port 1 Gbps iSCSI Gigabit Ethernet All XP CHIPs provide native connections and do not require any external converters. Future

100 Disk drives The number and type of disk drives installed in an XP array is flexible Disks must be added in groups of four Additional capacity can be installed over time as capacity needs grow New technology (faster/larger) disks may be become available for future upgrades All disks use the industry standard dual ported 2 Gbps Fibre Channel Arbitrated Loop (FC-AL) interface Each disk is connected to both of the redundant ACPs in a pair by separate Fibre Channel arbitrated loops Spare drives are automatically used in the event of a disk drive failure Disks must be added in groups of four (array groups) New technology (faster/larger) disks may be become available for future upgrades Each disk is connected to both the primary and secondary ACP by separate Fibre Channel arbitrated loops Each XP must include at least one spare disk drive for every type of disk configured in the array. For larger configurations, and especially configurations where RAID5 7D+1P is implemented, more than one spare disk is strongly recommended of the type used in the RAID5 7D+1P. Additional spares can be configured in the field by ordering additional array groups and configuring the drives as spares. The maximum number of spares configurable is 40 drives for the XP Next Gen. There are 4 slots reserved exclusively for spare disks, another 36 slots can be used either for spares or data disks. Spare drives are automatically used in the event of a disk drive failure Disk Drive Specifications 72 GB, 15K 146 GB, 10K 146 GB, 15K 300 GB, 10K Raw capacity (User area) 71.5 GB GB Disk diameter 2.5 inches 3 inches Rotation speed rpm 10,025 rpm Mean latency time 2.01 ms 2.99 ms Mean seek time (Read/Write) 3.8/4.2 ms 4.9/5.4 ms Internal data transfer rate 74.5 to MB/sec 57.3 to 99.9 MB/sec Future Future

101 Local clusters: MC/ServiceGuard on HP-UX
Veritas Cluster Server or Sun Cluster on Solaris; HACMP on AIX; Microsoft Cluster Server on NT; Microsoft Cluster Service on Windows 2000; TruCluster on Tru64; OpenVMS Clustering on OpenVMS; Network Cluster Services on NetWare; FailSafe on IRIX (SGI). (Diagram: AIX, Windows, Solaris and HP-UX clusters sharing one XP disk array.)

102 Availability: full redundancy of components
All components hot-swappable; spare disks; online microcode updates with the ability to roll back; "call home" function; advanced remote diagnostics.

103 Online firmware update
Online firmware update without any host ports going down Each CHIP has multiple processor modules. Each processor module contains a pair of microprocessors One microprocessor of a pair is updated, while the other microprocessor in the pair continues to service the host ports for the pair with little impact to performance Enables remote firmware update capability independent of host connection topology Firmware executes on microprocessors on CHIPs and ACPs (SVP and environmental functions also have some firmware). A CHIP has multiple processor modules. Each module has 2 microprocessors. The requirement for online update is that one microprocessor on a module gets updated while the other microprocessor continues to operate the host ports being served by both microprocessors. Because of the options that exist for how the firmware gets updated, the actual updates may occur from between 1 microprocessor in a cluster at a time (which would be one microprocessor on one CHIP) to half of the microprocessors in the entire cluster at a time (which is half of the microprocessors on all the CHIPs in one cluster). Online firmware update works with every connection configuration instead of requiring redundant connections. Therefore, firmware updates can be done remotely without a visual inspection of the machine. XP Next Gen enables a simpler online firmware update capability by eliminating the need for redundant host paths in the SAN or direct FC connections to hosts. This makes it possible to provide firmware updates remotely without complicated site qualification audits. Microcode update usually requires 5-10 minutes of time to complete. Host Port Micro processor Host Port Micro processor Host Port Micro processor Host Port Micro processor Before Update 1st Processor Update 2nd Processor Update After Update

104 Battery Internal environmentally friendly nickel-hydride batteries allow the XP Next Gen to continue full operation for up to one minute when the AC power is lost. During this minute all the disk drives continue to operate normally. When duration of the power loss is longer than one minute, the XP Next Gen executes either the De-stage Mode or Backup Mode, protecting data for up to 48 hours. The choice of De-stage vs. Backup mode has dependencies. If the array has 2 ACP pairs or less and no external storage, either mode can be chosen. If external storage is connected or there are 3 or more ACP pairs, then Back-up mode must be used. The Auto-Power-On switch choices of behavior are: automatic restart or manual intervention restart for sites that want to be sure the power has become stable before restarting operations. Using an external “uninterruptible power supply” (UPS) can continue operation for as long as the UPS can supply power.

105 Storage Partition XP Cache Partition XP
2nd 2nd Easy Secure Consolidation Divide an XP Next Gen into independent configurable and manageable “sub arrays” Partitions Cache or Array (cache, storage, port) resources Allows array resources to be focused to meet critical host requirements Allows for service provider oriented deployments Array partitions are independently & securely managed Can deploy in an mixed deployment (Cache, Array, Traditional) Site B Site C Site A HP StorageWorks Storage Partition and Cache Partition XP allow resources on a single XP Array to be divided into separately configurable/manageable array ‘domains’. A single XP can, in effect, be provisioned as a series of distinct subsystems. Cache Partition XP allows array cache to be allocated to particular host/port combinations to ensure that those hosts/ports enjoy optimized performance of cache-oriented I/O. Cache partitions are assigned to specified disk array groups Up to 32 partitions of at least 4GB can be created Provides another method for tuning performance for data access for performance critical applications Sequential cache performance is increased due to improved internal cache design Cache segments are larger - 256kB Larger segments reduce processing overhead, increasing the processor power available to perform IO activities Cache can be partitioned separate from storage partitions With Storage Partition XP a single XP can, in effect, be provisioned to appear as a series of distinct virtual subsystems. Each virtual subsystem can be independently provisioned and securely administered without impacting other subsystems. Each partition has assigned elements Host Ports, Disk Array Groups, and Cache Partitions Host access is through HW ports assigned to the partition Storage partitions are individually managed Partitions are not visible to other partitions Some or all of the XP Next Gen elements can be assigned to one or more Storage Partitions CHIP CHIP CHIP CHIP CHIP CHIP Cache Cache Cache Cache Cache Site B Admin Site A Admin Non-partitioned Disk Main Site CHIP CHIP CHIP CHIP CHIP CHIP Cache Cache Cache Cache Cache Main Site Super Admin

106 XP Next Gen Configuration Management
HP Command View XP* Web-based XP Disk Array management, configuration & administration HP LUN Configuration & Security Manager XP Web-based LUN management and data security configuration HP Performance Advisor XP Web-based real-time performance monitoring HP Performance Control XP* Web-based performance resource allocation HP LUN Security XP Extension Web-based Write-Once/Read-Many archive storage configuration HP Cache LUN XP Web-based cache volume lock performance acceleration HP Auto LUN XP Web-based storage performance optimization HP Data Exchange XP Mainframe-open system host data sharing Start with Command View XP, the web-based management platform, then tailor your solution with point products that provide a more precise level of control enabling you to: Configure storage volumes and provide for secure connection to hosts using HP LUN Configuration and Security Manager XP Monitor and tune the performance of the XP Disk Array using HP Performance Advisor XP Allocate performance resources to mission critical hosts/applications/processes using HP Performance Control XP Configure XP storage for archiving applications or for data retention/SEC requirements using HP LUN Security XP Extension Fine tune storage resources by configuring frequently accessed files in cache using HP Cache LUN XP Optimize performance through intelligent volume migration using HP Auto LUN XP Share/exchange data in mixed Mainframe/Open Systems environments using HP Data Exchange XP HP StorageWorks Command View XP provides a web-based management platform for XP management applications. Command View's graphical mapping of host and storage status and host to array volumes increase your staff's efficiency and reduce costs. HP LUN Configuration and Security Manager XP is a combination product combines a configuration and capacity expansion tuning tool together with the best security available. In summary it combines the features of LUN Configuration Mgr. and Secure Mgr. into a single product. It is available for the XP128 and XP1024 disk arrays only and is a required purchase with either of these arrays. HP StorageWorks Performance Advisor XP is a web-based application that collects and monitors real-time performance of XP disk arrays, either standalone or integrated with Command View XP. Web-based performance monitoring can be used almost anywhere to check the status of your XP array, and a CLUI is also provided to integrate into third-party packages. Performance Advisor XP enables you to monitor many-to-many hosts to arrays from a single management station. HP StorageWorks Performance Control XP enables service providers using XP disk arrays to set and relax performance caps by worldwide name or user-assigned nicknames, specified in MB/sec or IO/sec. HP StorageWorks LUN Security XP Extension provides evolved means of LDEV access control, allowing a storage administrator to protect key datasets from being changed after initial write activities. As part of an overall storage/server/application solution, it can help customers deploy a storage solution to address SEC/congressional requirements for data integrity and retention . HP StorageWorks Cache LUN XP can speed up access to mission-critical data by locking it in cache. Wherever the rapid access to information is important, Cache LUN XP is the answer for heavily accessed LUNs, like database logs or index files. HP StorageWorks Auto LUN XP provides automatic monitoring and workload balancing for your XP disk arrays. 
It allows you to move frequently accessed data to underutilized volumes, replicate volumes for backup and recovery, and simulate the results of any changes before any data is moved. HP StorageWorks Data Exchange XP facilitates sharing of information across computing platforms. * New/Enhanced compared to the XP128/XP1024

107 XP Next Gen Availability Management
HP Business Copy XP* Real-time local mirroring HP Continuous Access XP/XP Extension Real-time synchronous/asynchronous remote mirroring HP Continuous Access XP Journal* Real-time extended distance multi-site mirroring HP External Storage XP* Connect low cost external storage to the XP. HP Flex Copy XP Point-in-time copy to/restore from external HP MSA HP Secure Path XP/Auto Path XP Automatic Host Multi-Path Failover and Dynamic Load Balancing HP Cluster Extension XP Extended High Availability Server Clustering HP Fast Recovery Solutions Accelerated Windows Application backup & recovery management Enhance the Availability and Continuity of a XP Next Gen solution by deploying a variety of tailored replication/HA software products: HP StorageWorks Business Copy XP is a local mirroring product that maintains one or several copies of critical data through a split-mirror process. Asynchronous copy volume updates ensure I/O response time for primary applications is not adversely affected. It provides full-copy/clone or snapshot/space-efficient local replication. Nine simultaneous clone copies or 32 simultaneous snapshot copies can be concurrently maintained HP StorageWorks Continuous Access XP and Continuous Access XP Extension are high-availability data and disaster recovery solutions that deliver host-independent real-time remote data mirroring between XP disk arrays. Continuous Access XP provide synchronous replication and Continuous Access XP Extension extends capabilities to include asynchronous replication. A variety of remote connectivity infrastructures can be deployed to facilitate array-to-array connections – ESCON, Fibre Channel, ATM, and IP. HP StorageWorks External Storage XP is a software product which allows the XP to connect to low cost external storage via a fibre channel connection. The XP can then recognize the LUN’s created on the external storage and operate with them as if they were internal to the array. HP StorageWorks Flex Copy XP allows static point-in-time copies of XP array data to be copied to an external MSA1000 disk array. This product facilitates high-speed backup (using the external MSA as an external backup ‘staging’ device) or allows critical data to be ‘broadcast’ out to multiple external MSA arrays for more efficient, low impact offline data operations. HP StorageWorks Secure Path XP and HP StorageWorks Auto Path XP provides host-based automatic path failover and dynamic load balancing of array-to-host paths. This enhances overall solution availability and optimizes performance. HP StorageWorks Cluster Extension XP allows advanced Continuous Access XP remote mirroring to be tightly integrated with data center-level High Availability server clustering to provide true multi-site server/storage disaster recovery. Available for Sun Solaris (VCS), IBM AIX (HACMP), MS Windows 2000/2003 (MSCS), or Linux (Service Guard) environments HP StorageWorks Fast Recovery Solutions XP provides for tight integration of Business Copy XP local replication with Windows SQL and Exchange application environments to allow for rapid application recovery * New/Enhanced compared to the XP128/XP1024

108 High-speed, real-time data replication
Business Copy XP Features real-time local data mirroring within XP disk arrays full copies or space-efficient snapshots enables a wide range of data protection data recovery and application replication solutions instant access/restore flexible host agent for solution integration Benefits powerful leverage of critical business data for secondary tasks allows multiple mission-critical storage activities simultaneously High-speed, real-time data replication HP StorageWorks Business Copy XP is a local mirroring product that maintains one or several copies of critical data through a split-mirror process. Asynchronous copy volume updates ensure primary application performance is not adversely affected. With XP Next Gen, space efficient snapshot local copy will be available in addition to the existing full-copy/clone capability. Problem Several workloads or processes may require the same set of data across multiple applications. However, in order to run one application, the “competing” application cannot be run and is therefore unavailable. A simple example of this is the backup of a database. To ensure a coherent database backup, all online transactions need to be halted. The database is therefore unavailable during this backup process. Solution HP Business Copy XP provides a solution by supporting multiple copies of the production data. (Up to nine full-copy or 32 snapshot/space-efficient concurrent copies). Each copy of the data can be used for various purposes—backup, new application testing, or data warehousing loading—without any disruption to the primary application and primary data, and most importantly, all without any disruption to your business operations. At the same time, you will be creating faster implementation of new requirements and capabilities so that your business will succeed. Additional note May be used in conjunction with Continuous Access XP to maintain multiple copies of critical data at local and remote sites. Note: Continuous Access XP is only supported with Data Protector on HP-UX servers. Both HP Data Protector and VERITAS NetBackup products have been tightly integrated with HP Business Copy XP to provide Zero Downtime and Split-Mirror Backup solutions for the HP Surestore XP disk arrays on multiple server platforms. Can the Primary volume be resynced from a Business Copy? Yes Can a full physical Primary volume be created from a Business Copy?  No  At first release Business Copy will support a max of 4,096 pairs Second release will support a max of 8,191 pairs P P C9 C32 Cn Cn C1 C1 Full Copy Snapshot Future

109 Zero-downtime backup: HP Business Copy XP
No impact on application/database performance; higher data availability; fast restore from an online backup; an automated and fully integrated solution: integrated with Oracle RMAN, with SAP brbackup and with MS Exchange. (Diagram: Data Protector control station, production server and Data Protector backup server on the LAN; P-VOL and C1 Business Copy XP volumes on the XP array feeding the tape libraries.)

110 Business Copy XP Full copy vs. Snapshot
There are pros/cons to using either Full copy or Snapshot: Full-Copy Because an independent copy is made when a clone is created, any activity against that copy do not affect the primary data from which it was created. And in comparison to Snapshots, where a copy-on-write operation must be completed before primary/snapshot-common data can be overwritten, Write activity against cloned primary data is accomplished unencumbered by local mirroring activities. Clones are storage intensive – an equal amount of storage capacity must be provisioned for each primary data source copied. Clones are most useful when performance concerns are paramount (when unfettered access to primary or mirror image data must be insured) or when a completely independent local copy image is required (e.g. for recovery from disk implementations). Snapshot/Space-efficient Because only differential data must be additionally committed to disk, Snapshots can be very storage capacity efficient. For primary or mirror data that is not updated very frequently or very much, incremental capacity consumption can be very low. Due to the logical/metadata nature of snapshots, many more concurrent copies can be created and managed. A Read operation of a Snapshot image can impact I/O operations to the primary dataset if the read data resides on the primary volume – and potentially contend with whatever operations that are continuing on that volume. A Write operation of a Snapshot image can also impact I/O operations to the primary dataset if the data to be written resides on the primary volume. That data must be copied over to the snapshot pool so as to preserve the data integrity of the primary volume. A Write operation of a Primary image can be impacted if Snapshots of that image are being maintained. Any data that is required to maintain integrity of a snapshot image must be copied over to the snapshot pool before the write operation can be performance so as to preserve the data integrity of the primary volume. Snapshots are most useful when cost/performance issues are paramount. They are even more practical/appropriate for low I/O activity environments, where Snapshot maintenance does not unduly impact primary data I/O operations. Cn Future Cn C1 C1 Pros Space efficient – only the delta data consumes extra space More concurrent images (32 per Primary) Cons Performance impact – Snapshot Read impacts primary data Virtual Snapshot Image – Copy shares data with Primary Heavy Write environments reduce efficiency Best for… Low cost/performance Low Write/Read environments Pros Isolates primary data from Copy Read/Write performance impact Full speed Copy Read/Write access Cons Copy requires full amount of storage Fewer Images (9 Per Primary) Best for… High performance requirements Heavy Primary-Write/Copy-Read datasets Recover-from-copy implementations
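As a toy illustration of the snapshot mechanism described above (my own sketch, not Business Copy XP internals), a copy-on-write snapshot shares blocks with the primary volume until the primary is overwritten, at which point the old block is first copied to the snapshot pool:

/* Toy copy-on-write snapshot: snapshot reads come from the shared primary
 * block until the primary is overwritten for the first time. */
#include <stdio.h>
#include <string.h>

#define BLOCKS 4
#define BSIZE  16

static char primary[BLOCKS][BSIZE] = { "blk0 v1", "blk1 v1", "blk2 v1", "blk3 v1" };
static char snap_pool[BLOCKS][BSIZE];
static int  snap_has_copy[BLOCKS];          /* 0 = snapshot still shares primary */

static void primary_write(int b, const char *data) {
    if (!snap_has_copy[b]) {                /* first overwrite: preserve old data */
        memcpy(snap_pool[b], primary[b], BSIZE);
        snap_has_copy[b] = 1;
    }
    strncpy(primary[b], data, BSIZE - 1);
}

static const char *snapshot_read(int b) {
    return snap_has_copy[b] ? snap_pool[b] : primary[b];
}

int main(void) {
    primary_write(1, "blk1 v2");            /* triggers copy-on-write for block 1 */
    printf("primary[1]  = %s\n", primary[1]);         /* blk1 v2 */
    printf("snapshot[1] = %s\n", snapshot_read(1));   /* blk1 v1 */
    printf("snapshot[2] = %s\n", snapshot_read(2));   /* still shared: blk2 v1 */
    return 0;
}

Only the delta blocks consume extra space, which is exactly why the slide describes snapshots as space-efficient but sensitive to heavy write activity on the primary.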

111 Continuous Access XP and Continuous Access XP Extension
Features real-time multi-site remote mirroring between local and remote XP disk arrays synchronous and asynchronous copy modes Sidefile and Journaling options remote link bandwidth efficient flexible host agent for solution integration Benefits enables advanced business continuity reliable and easy to manage offers geographically dispersed disaster recovery High-performance real-time multi-site remote mirroring HP StorageWorks Continuous Access XP and HP StorageWorks Continuous Access Extension XP are high-availability data and disaster recovery solutions that deliver host-independent real-time remote data mirroring between XP disk arrays. With seamless integration into a full spectrum of remote mirroring-based solutions, Continuous Access XP and XP Extension can be deployed in solutions ranging from data migration to high availability server clustering — available over ESCON or fibre channel. Remote mirroring solutions with Continuous Access XP--the ultimate in remote data mirroring between separate sites maintains duplicate copy of data between two XP arrays ensures data integrity between two XP arrays eliminates storage and data center cost of “multi-hop” solutions OS-independent High availability cluster integration: leverage advanced features like fast failover/failback for seamless interoperation in high availability server clustering solutions for HP-UX, Microsoft Windows NT, Sun Solaris, and IBM AIX. Works with Cluster Extension, Metrocluster and Continentalclusters for full long-distance clustering solution Synchronous or asynchronous copy modes to meet stringent requirements for data currency and system performance while ensuring uncompromised data integrity highest data concurrency with synchronous copy highest performance and distance using asynchronous copy Deploy Continuous Access XP remote mirroring using a wide range of remote link technologies, including pure fiber/DWDM, and high performance WAN/LAN Converters for ATM (OC-3), DS-3 (T3), and IP, cross-town, or all the way across the globe Continuous Access connection between XP disk arrays can be Fibre Channel (can use multiple channels) or ESCON For best performance, Fibre Channel is recommended between XP512/48s and XP128/1024s. ESCON is primarily used to connect with older XP256 XP disk arrays that don’t support Fibre Channel Continuous Access HP StorageWorks Continuous Access XP Multi-site is a remote replication tool that is the basis of 1:2, and 1:1:1 multi-site replication solutions. Using it’s unique Journal-based replication technology, it enables a single volume to replicated to two distinct arrays or a single volume to be cascaded to two distinct arrays (see other slide). Journal-based replication results in better remote link bandwidth utilization. Replication operations consume less bandwidth and therefore allow for relatively lower bandwidth link capacity to be deployed for such a solution. At First Release 8,191 pairs will be supported XP Next Gen, XP1024, and XP128 supported At Second Release 16,383 pairs will be supported XP512 supported metro mirror data continental mirror

112 CA latency/IOPS Study

113 Async-CA guaranteed data sequence
Asynchronous-CA control at the near site attaches control information to each write; the far site sorts arriving records by sequence number (per consistency group) before applying them. Out-of-order transfers are repaired at the far site: for example, if I/O #3 arrives before I/O #2, it is held in cache until #2 shows up, so the remote copy always reflects the application's write order. (Diagram: the application writes Data 1-4 at the near site; they arrive out of order at the far site and are sorted back into sequence.)
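A minimal sketch of that "hold until the gap is filled" rule (purely illustrative, not the array firmware; the window size and record layout are my own):

/* Apply replicated writes strictly in sequence-number order, holding any
 * record that arrives early until the gap before it has been filled. */
#include <stdbool.h>
#include <stdio.h>

#define WINDOW 16

typedef struct { long seq; const char *data; } record_t;

static record_t pending[WINDOW];
static bool     present[WINDOW];
static long     next_seq = 1;               /* next sequence number to commit */

static void commit(const record_t *r) {
    printf("commit seq %ld: %s\n", r->seq, r->data);
}

void receive(record_t r) {
    pending[r.seq % WINDOW] = r;
    present[r.seq % WINDOW] = true;
    /* Drain everything now contiguous with what was already committed. */
    while (present[next_seq % WINDOW] && pending[next_seq % WINDOW].seq == next_seq) {
        commit(&pending[next_seq % WINDOW]);
        present[next_seq % WINDOW] = false;
        next_seq++;
    }
}

int main(void) {
    receive((record_t){1, "data 1"});       /* committed immediately */
    receive((record_t){3, "data 3"});       /* held: seq 2 has not arrived yet */
    receive((record_t){2, "data 2"});       /* commits 2, then the held 3 */
    return 0;
}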

114 Async-CA: what about lost data in flight?
The far site sorts arriving records by sequence number and consistency group. Writes 1 and 2 make it to the far-site cache and disk, while lost write #3 has until a timeout to show up (default 3 minutes; a configurable range of 0 or 3-15 minutes is under consideration). If the timer pops, the link is suspended until operator intervention (the pairresync command); converters like CNT do automatic re-transmission, which should help. XP disk array behavior must be overlaid with the expected FC driver and LVM behavior so that proper thresholds and timeouts are purposefully chosen. For example, if the FC driver receives a response within 10 seconds, all is OK; if not, it waits 2 seconds and retries. LVM watches all this with a 60-second timeout: if the I/O completes in that time (up to 5 FC driver retries), all is OK; if not, the port is considered down and LVM switches to a pvlink. While the link is suspended, both sides start using a bitmap (#4 in the RCU goes into a bitmap; all new and unsent sidefile writes go into an MCU bitmap). After the reason for the link problem is rectified (and possibly after a data-consistent BC copy is made at the far side), a pairresync command causes a non-ordered bitmap copy to bring the pairs back into synchronization. The data sequence is guaranteed per CT group until all pending correct record sets are completed.

115 Async-CA Consistency Group
Consistency Group Summary. A consistency group (CT group) is the unit of granularity for I/O ordering assurance: a grouping of LUNs that need to be treated the same from a data consistency (I/O ordering) perspective. Update sequence consistency is maintained across all volumes that belong to a consistency group, and up to 16 consistency groups per array are supported. (Diagram: RAID Manager on the near-site system controlling the XP256 MCU, RAID Manager on the far-site system controlling the XP256 RCU, remote copy links between RCP and LCP ports, and P/S volume pairs grouped into a consistency group.)

116 Continuous Access XP Journal vs. Continuous Access XP Extension
HP StorageWorks Continuous Access XP Journal can be compared and contrasted with traditional Continuous Access XP Extension (asynchronous) operation:
- CA XP Journal architecture is uniquely suited for one-to-many or one-to-one-to-one configurations; CA XP Async can only facilitate one-to-one configurations.
- With CA XP Journal, update data (in the form of the journal) is stored on traditional disk, while CA XP Async queues change data in array cache. This means CA XP Journal can be more tolerant of high or fluctuating workloads where tracking/recording of update data can "fall behind" primary data operations: much more capacity in the form of disk can be provisioned to store such data, forming a more resilient buffer for remote copy operations.
- With CA XP Journal, the journaled data consists of track-level change information. When the remote-site journal "reads" changes from the primary-site journal, multiple track-level changes can be aggregated and retrieved. With CA XP Extension (Async), transactions must be replicated across the remote link on an I/O-by-I/O basis; depending on the application/database environment, such an atomic-level replication scheme can be very inefficient.

Summary:
- CA XP Journal: update data (journal) is stored on disk; the remote side initiates updates; enables logical 1:2 and 1:1:1 configurations. Strengths: very efficient updates, less bandwidth required; high-capacity, lower-cost disk-based update data tracking. Best for multi-site implementations.
- CA XP Extension (using sidefiles): update data (sidefile) is stored in cache; the primary side initiates updates; 1:1 configurations only. Strengths: potential for better data currency. Best for 1:1 implementations.
(Diagram: primary volume, journal volumes and remote copy / second copy data flows between the caches of the two arrays.)
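To illustrate why journal-based replication uses the remote link more efficiently, the sketch below (an assumed, simplified model — not the real journal format) coalesces repeated writes to the same track before shipping them, whereas sidefile-style replication ships every I/O individually.

```python
def ship_per_io(writes, link_send):
    """Sidefile-style: one link transfer per host write (I/O-by-I/O)."""
    for track, data in writes:
        link_send(track, data)

def ship_journaled(writes, link_send):
    """Journal-style: track-level changes are aggregated, so repeated writes
    to the same track cost a single transfer when the remote side reads."""
    journal = {}
    for track, data in writes:
        journal[track] = data            # newest image of the track wins
    for track, data in journal.items():
        link_send(track, data)
```

With a hot track written many times between remote reads, the journaled version sends it once, which is the bandwidth advantage the slide describes.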

117 Multi-site data replication
Note: this slide should be presented in animated form, as it builds up site by site.

Continuous Access XP Journal can enable an optimized cascaded multi-site disaster-tolerant solution. Management overhead is reduced because the split occurs via journaling, not Business Copy. Two architectures are possible: multi-target and cascading.

With a traditional cascaded replication solution, an intermediate local image must be created on the Hot Standby Site array in order to ensure ongoing data consistency at all times among all remote images of the primary data. Furthermore, these solutions result in compromised mirror data currency on the Contingency Site, because replication to that site does not occur in real time with respect to the Primary Site. In a traditional cascaded configuration, a synchronous copy image is created on the Hot Standby Site array using Continuous Access XP. From that copy image, a local copy is paired/created on the Hot Standby Site array using Business Copy XP. On an ongoing basis, the local Business Copy image does not represent a logically consistent image of its parent volume; to obtain such a consistent image, the local copy must be split from the parent, creating a point-in-time image. That local copy can then be copied asynchronously to the Contingency Site XP array using Continuous Access XP Extension (Async). Because the Hot Standby Site local copy represents a point-in-time image, the Contingency Site remote copy can, at best, only be as current as that image; in the meantime, the Hot Standby Site local copy falls behind the Primary Site volume, so the data currency of the Contingency Site mirror is compromised. This operation is highly coordinative in nature, with the local image on the Hot Standby Site having to be continuously split and resynched with its parent volume.

With HP StorageWorks CA XP Journal cascaded solutions, all replication activities can happen in real time, without compromising data currency or posing a process/solution risk.

(Diagram: Primary Site, Hot Standby Site and Contingency Site connected by DWDM and WAN; Continuous Access XP synchronous mirror up to <100 km, Continuous Access XP Extension asynchronous mirror over unlimited distance; journal volumes at the secondary and tertiary sites.)

118 Availability management software
When disaster strikes, it can mean much more than a temporary loss of computing power: work delays, data degradation and data loss can quickly translate into lost revenue, lost profits, and lost customers. HP provides a range of solutions that address varying degrees of availability and scalability:
- Internal mirrors – Business Copy XP
- Path management – Auto Path XP (AIX, Solaris), Secure Path XP (HP-UX, Windows, Linux)
- Clustering – local cluster solutions
- Remote mirrors – Continuous Access XP
- Long-distance clusters – Cluster Extension XP
- Wide-area (worldwide) clusters – Continentalclusters
(Diagram: availability vs. scalability, from local clustering with Auto Path XP, Secure Path XP and Business Copy XP, through Cluster Extension XP and Continuous Access XP, up to wide-area clustering with Continentalclusters.)

119 Cluster Extension XP
Features
- extends local cluster solutions up to the distance limits of the clustering software
- rapid, automatic application recovery
- integrates with multi-host environments: Serviceguard for Linux, VERITAS Cluster Server on Solaris, HACMP on AIX, Microsoft Cluster service on Windows
Benefits
- enhances overall solution availability
- ensures continued operation in the event of metropolitan-wide disasters

Seamless integration of remote mirroring with industry-leading heterogeneous server clustering. HP Cluster Extension XP keeps your critical applications running within minutes or seconds after an event. It offers disaster tolerance against application downtime from fault, failure, or site disaster by extending clusters between data centers. Cluster Extension XP works seamlessly with your open-system clustering software and XP storage systems to automate failover and failback between sites (a sketch of the failover check follows these notes).

What problems does it solve?
- heterogeneous disaster recovery protection over MAN distances (up to 10 kilometers)
- lack of integrated, fully tested, turnkey heterogeneous disaster recovery solutions
- simplifies installation and reduces the time required to implement disaster recovery solutions
- protection against Microsoft quorum disk fatal errors

Features
- extends leading high-availability server clustering solutions over geographically dispersed data centers up to 10 km
- integrates XP remote mirroring with cluster monitoring and failover/failback operations
- continuous mirroring-pair synchronization monitoring and recovery
- cluster server solutions for Solaris, AIX, and Microsoft Windows 2000, Advanced Server, and Datacenter Server; tightly integrated with Serviceguard for HP-UX and Linux
- supports DWDM, ATM, IP and WAN connectivity
- reliable and efficient: ensures fast failover and recovery with extensive condition checking
- scalable: extends a single cluster solution over metropolitan or global distances
- manageable: detects failures and automatically manages recovery
- flexible: operates with Windows, Linux, Solaris, and AIX for compatibility with your system environment
- available: utilizes Continuous Access XP mirroring for high-performance synchronous and long-distance asynchronous solutions

Customer benefits
- consolidated disaster recovery; saves money and improves operational efficiency
- rapid disaster recovery implementation for protection of heterogeneous resources; information assets are safe; one point of contact for all support
- stress-free automatic failover, fast failback and data replication; requires no user intervention

HP Cluster Extension XP functions as middleware between the HP Continuous Access XP mirror control tools and your cluster software, whether it is VCS on Sun Solaris, HACMP on IBM AIX, HP Serviceguard for Linux, or Microsoft Cluster Service for Windows 2000 with HP's Quorum Filter Service. HP Cluster Extension XP runs on Solaris, AIX, Linux, or Windows. A specific design should be developed by HP Technical Consultants to ensure satisfaction with a particular configuration.
(Diagram: a cluster spanning Datacenter A and Datacenter B, with Cluster Extension XP on the hosts and a Continuous Access XP data mirror between the two XP arrays.)
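As mentioned above, a sketch of the failover check: before a package is started on the recovery site, the cluster extension consults the remote-mirroring pair state and only permits activation when the secondary volume is known to be consistent. The state names and the policy here are illustrative assumptions, not the product's exact logic.

```python
# Assumed pair-state labels for illustration: "PAIR" (in sync),
# "PSUS" (cleanly suspended), "PSUE" (suspended on error).
SAFE_STATES = {"PAIR", "PSUS"}

def can_start_package(pair_state, allow_data_loss=False):
    """Decide whether the cluster may activate the package on the remote site."""
    if pair_state in SAFE_STATES:
        return True                     # secondary volume is consistent
    if pair_state == "PSUE" and allow_data_loss:
        return True                     # explicit operator override only
    return False                        # block automatic failover, ask the operator
```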

120 continentalclusters

121 External Storage XP
- Connect low-cost and/or legacy external storage utilizing the superior functionality of the Next Generation XP array
- Accessed as a full-privilege internal XP Next Gen LU; no restrictions — use as a regular XP Next Gen LUN, or as part of a Flex Copy XP, Business Copy XP or Continuous Access XP pair, with full solutions support
- Facilitates data migration
- Reduces costs by using less expensive secondary storage

External Storage XP is a software product. External storage devices must be from a list of tested and supported devices; at first release the supported devices will be the XP512, XP48, XP1024, XP128, and MSA1000, with the XP256 targeted for support at second release. Some EMC and IBM arrays will be supported for data migration only, details to follow. Any port from any CHIP pair installed in any slot (CHIP or ACP) can be connected to external storage.
- At first release, external storage capacity will be limited to 16 PB; Business Copy and Auto LUN are supported.
- At second release, external storage capacity will be limited to 32 PB; LUN Security Extension will be supported.
- At a future release, a no-ACP configuration will be supported, along with Continuous Access and Journaling.
(Diagram: hosts and MSA1000 external storage attached through a SAN to FC CHIP ports of the XP Next Gen; external LUNs presented through CHIP pairs.)

122 Included Support and Services
Included: site preparation; array installation and start-up; Proactive 24 support, 1 year; reactive hardware support, 24x7, 2 years; software support, 1 year (included with the software title).

There are a number of additional services available from HP that are tailored for XP disk arrays:
- LUN design and implementation – expert configuration assistance that allows a faster time to production deployment
- HP Continuous Access XP implementation – rapid deployment of a high-speed, host-independent data mirroring operation between local or remote XP disk arrays
- HP Business Copy implementation for the XP – speeds the deployment of mirroring configurations without the concern of complexity or operational impact
- High-availability storage assessment – exposes potential risks to business continuity and minimizes costly production outages by providing recommendations to avoid such situations
- Performance analysis – provides a greater return on storage hardware investment by enhancing the throughput and usability of the device
- Performance tuning and optimization – as above, continually throughout a period of a year
- Data migration services – provide a smooth transition to an HP storage platform for open systems, mainframe and mixed environments
- SAN solution service (available Q3 2003) – provides the services needed for SAN implementation or integration with an XP disk array
These services accelerate your time to production with efficient installation of your new XP disk array by HP-certified Storage Technical Specialists.

123 Mission Critical Support (Proactive 24)
Environmental services: customer support teams, account support plan, activity reviews.
Reactive services: 24x7 hardware support with 4-hour response; 24x7 software technical support with 2-hour response; escalation management.
Technology-focused proactive services: patch analysis and management, software updates, annual system health check, storage array preventive maintenance, SAN supportability assessment, network software and firmware updates.

HP excels at mission-critical support, not only for enterprise storage (XP and EVA), but for your entire IT environment; the above summarizes the services provided (review each bullet briefly). Care Pack Critical Support is available for the XP today with a 100% data availability guarantee that even covers loss of access to data. (Note to presenter: Proactive 24 support for the EVA is planned to be available in the second quarter of 2003.) HP Critical Support adds a data availability guarantee and ongoing support for a mission-critical, multivendor environment.

124 Web-based device management platform for XP disk arrays
Command View XP
Features
- Web-based, multi-array management framework
- centralized array administration
- advanced Fibre Channel I/O management
- supports XP Next Gen and legacy XP arrays from a single management station
Benefits
- common management across HP storage devices
- graphical visualizations speed up problem resolution
- multiple security layers ensure information is safe and secure
- manage storage anytime, from anywhere

Web-based device management platform for XP disk arrays. HP StorageWorks Command View XP provides the common user interface for all XP disk array management applications. You and your staff only need to learn a single user interface, reducing the learning curve and increasing usage of the tool. Because it is web-based, you can monitor your storage resources any time, from anywhere, with a web browser; your storage expert can participate in problem resolution across multiple installations or easily train junior staff. The tabular user interface gives you access to information quickly, and the graphical representations of your storage resources reduce the time your staff will spend troubleshooting problems. The Command View user interface is used by enterprise, modular, tape and virtualization products; in the future, HP will merge these solutions to enable management of all these products and services from a single management console.

A clear differentiator contained within Command View is Path Connectivity. This module provides customers with a series of reports detailing the configurations, connections and paths used by hosts and switches into the XP array, eliminating the need for customers to maintain their own binder of printed reports that must be updated manually — Command View updates the configuration automatically. Path Connectivity also provides diagnostic capabilities for the Fibre Channel connection between hosts and the XP: it will identify if a particular connection has begun to degrade and then provide diagnostic information to speed up troubleshooting by identifying potential causes. Command View XP runs on a Windows 2000 or Windows 2003 management station.

125 Command View XP – cross platform support
Seamlessly manage XP Next Gen arrays alongside the complete family of legacy XP systems: simplify storage management, reduce storage management complexity and cost, accelerate storage deployment.
Q: Can Command View XP run on any Java-enabled platform, or is it restricted to certain operating systems? A: Command View XP runs on a Windows 2000 / Windows 2003 management station only.
Q: How can redundant Command View XP workstations be configured? A: A high-availability Command View XP has been discussed, but not committed to; Command View XP as a single point of failure does not impact data availability.
(Diagram: web clients connecting to a Command View management station that manages XP512/XP48, XP1024/XP128 and XP Next Gen arrays.)

126 LUN Configuration & Security Manager XP
Features
- assignment of LUNs via drag 'n drop
- assignment of multiple LUNs with a single click
- checks each I/O for valid security
- LUN and WWN group control
Benefits
- flexibility to configure the array to meet changing capacity needs
- permissions for data access can be changed with no downtime
- every I/O is checked to ensure and maintain secure data access

Easy-to-use menus for defining array groups, creating logical volumes and configuring ports. HP StorageWorks LUN Configuration Manager and Security Manager XP is a new combination product that combines a configuration and capacity-expansion tuning tool with the best security available. LUN Configuration Manager contains three applications in a single easy-to-use tool that launches directly from the HP Command View XP user interface: LUN Management lets you add or delete paths, set the host mode, set and reset the command device, and configure Fibre Channel ports; LU Size Expansion (LUSE) lets you create volumes that are larger than standard volumes; Volume Size Configuration (VSC) allows configuration of custom-size volumes that are smaller than normal volumes, improving data access.

Security Manager is your LUN security watchdog, giving you continuous, real-time, I/O-level data access control of your XP array. It allows you to restrict the availability of a single LUN or group of LUNs to a specified host or group of hosts; secured LUNs become inaccessible to any other hosts connected to the array, so unauthorized hosts no longer have access to data that is off-limits (a sketch of this check follows below).

HP LUN Configuration Manager XP allows a system administrator to set up and configure LUNs and assign them to ports. Additionally, there are two programs that allow LUNs to be created which are smaller or larger than the available open emulation types. For example, many smaller LUNs can be combined to form a single large LUN using LUN Size Expansion (LUSE), which is important in environments with restrictions on the total number of LUNs supported. Custom-size volumes are smaller than the standard emulation types and can easily be created using Open Systems Custom Volume Size (OCVS); this is important when LUNs need to be downsized to avoid wasting disk space, a good example being the creation of a command device for an XP application. The sample screen shot shows part of the LUN setup process, with a table mapping SCSI ID, LUN number, volume and emulation type.

HP Surestore Secure Manager XP restricts access to LUNs or groups of LUNs to a specific host or group of hosts by checking every I/O. Permissions to access data can be changed on the fly with no downtime. Create LUN groups and WWN groups to simplify the configuration and management of data. It is integrated into HP Command View and accessible from the same management station; security is enabled at the port level for flexible deployment with all host server environments supported by the XP disk array family. New drag-and-drop capability is integrated into Command View. (Note: HDS/Sun will have similar capability; we expect them to expose this functionality through HDS's HiCommand device manager.) Note: XP128/1024 LUN Configuration & Secure Manager functionality is bundled together as a single product; for legacy systems, LUN Configuration Manager and Secure Manager XP are separate products. Both provide the same functionality level, but the bundling is slightly different for the XP128/1024.
Under volume management, customers can group LUNs – even if they are not concatenated providing more flexibility as customers do not need a free list of LUNs – they can just pick and choose available LUNs.
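The per-I/O security check mentioned above can be pictured with this small sketch; the table layout and the port and group names are purely illustrative assumptions, not the array's internal data structures.

```python
# Port-level LUN security: each I/O is checked against the WWN-group /
# LUN-group configuration before it is allowed through.
security_table = {
    # array port -> {host WWN group -> set of permitted LUNs}
    "CL1-A": {"finance_hosts": {0, 1, 2}, "backup_hosts": {10}},
}

def io_permitted(port, wwn_group, lun):
    """Return True only if this host group may access this LUN on this port."""
    permitted = security_table.get(port, {}).get(wwn_group, set())
    return lun in permitted   # unauthorized hosts simply do not see the LUN
```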

127 Performance Advisor XP
Features
- real-time data gathering and analysis
- system view of all resources
- flexible event notification and a large historical repository
Benefits
- identify performance bottlenecks before they impact business
- helps maintain maximum array performance
- precise integration eliminates "blame storming" across system, database and storage administrators

Collect, monitor, and display real-time performance. HP StorageWorks Performance Advisor XP is a web-based application providing real-time collection and monitoring of your XP disk arrays. It collects data on CHIP and ACP utilization, and on sequential and random reads/writes and I/O rates at each LDEV, so there is no need to worry that your XP resources are not performing up to expectations; its performance alarm and notification capabilities keep you ahead of any problems (a threshold-alert sketch follows these notes). Data is collected on a real-time basis, making it easy to see up-to-date performance patterns (some competing products collect data only sporadically, several times a day, making their data stale and dated).

Performance Advisor can be accessed by a tab on the Command View (CV) screen: clicking the Performance Advisor tab opens the PA URL and starts the PA application, saving administrator time and effort. Alternatively, Performance Advisor XP can run as a completely standalone application with its own GUI. It is installed on a PC-based management station (it can be the same station on which CV is installed), and multiple arrays and multiple hosts can be examined from the same station: select one array and look at all the hosts attached to it, or select one host and look at all the array components. This makes it easy to see which hosts are imposing the heaviest workload on the XP disk array. Alert thresholds can be set to send alarms when a threshold has been reached, warning you that attention may be needed or that workloads are approaching a critical stage where you may need to act to head off a rash of user complaints. VantagePoint Operations (the OpenView performance monitor) can access the same performance data collected by Performance Advisor, and Precise (a third-party Oracle performance optimization tool) can use the same data to solve database processing bottlenecks. Data from up to 8,000 LUNs can be displayed very quickly in one PA window — it takes only two minutes to display this amount of data, even though it is unlikely you would want to view that many LUNs at one time. Up to 2 GB of historical performance data can be stored, making it easy to analyze trends (a leading competitor stores only 500 MB).

Additional notes: flexibility to view "many-to-many" hosts, arrays, LUNs, or components; works with Windows 2000/NT, HP-UX, Sun, IBM AIX, Linux (mainframes soon); event notification to pager, PC or VPO console; web-based integration with HP Command View XP.
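A minimal sketch of the threshold-alert idea referenced above (assumed data model, not the Performance Advisor implementation): utilization samples are recorded per resource and an alert fires when a user-defined threshold is crossed.

```python
from collections import defaultdict

history = defaultdict(list)                 # resource name -> utilization samples

def record_sample(resource, utilization, threshold, notify):
    """Store a sample for trend analysis and alert if the threshold is exceeded."""
    history[resource].append(utilization)   # kept for later trend reports/export
    if utilization > threshold:
        notify(f"{resource} at {utilization:.0f}% exceeds the {threshold}% threshold")
```

A usage example: `record_sample("ACP-1", 92, 85, print)` would immediately print an alert, while lower samples are only recorded.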

128 Boost performance by redirecting I/O requests to the cache
Cache LUN XP
Features
- stores critical data in the XP array cache
- user configurable, easy to use
- fast file access and data transfer
- scalable
Benefits
- speeds access to mission-critical data
- provides access to critical data, such as database indices, in nanoseconds rather than milliseconds

Boost performance by redirecting I/O requests to the cache. HP StorageWorks Cache LUN XP lets you reserve areas of cache memory on your HP StorageWorks XP disk array to store frequently accessed information. You will see improved file access times and faster data transfers: cache-resident data is accessed in nanoseconds instead of milliseconds for both read and write I/O operations. HP Cache LUN redirects I/O requests from the XP array disk drives to the XP array cache memory to boost performance for frequently accessed data. Simple to implement with HP Command View XP integration, and transparent to array operation, the performance gains are immediate. You can cache up to 1,024 volumes for the ultimate in efficiency. HP Cache LUN is the answer for LUNs that are heavily accessed, such as index files or database logs. When you want to use the cache for something else, destage the data back to its original location. Once the "hot" data is cache resident, it is accessed in nanoseconds rather than milliseconds, and performance improvements occur immediately. Increase cache memory before running HP Cache LUN XP to avoid degradation when accessing non-cached data. (A conceptual read-path sketch follows.)
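The conceptual read path for a cache-resident LUN might look like this sketch (assumptions only — the array firmware is not structured this way): pinned LUNs are always served from memory, while other LUNs fall back to disk on a cache miss.

```python
pinned = {}   # lun -> {block: data}, preloaded and kept resident in cache

def read(lun, block, cache, read_from_disk):
    """Serve pinned LUNs from memory; others use a normal cache lookup."""
    if lun in pinned:
        return pinned[lun][block]          # memory-speed access, never destaged
    data = cache.get((lun, block))
    if data is None:
        data = read_from_disk(lun, block)  # millisecond-class disk access
        cache[(lun, block)] = data
    return data
```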

129 Performance Control XP
Features
- control consumption of array performance by I/Os per second or MB/s
- precise array-port-level settings
- reports and online graphing
Benefits
- allows customers to align business priorities with the availability of array resources
- effectively manages mission-critical and secondary applications
- more efficient use of array bandwidth
- allows service-level-oriented deployment

Allocates performance bandwidth to specific servers or applications. HP StorageWorks Performance Control XP lets you allocate XP disk array performance resources to hosts, so you can align those resources with computing priorities, maximize storage ROI, and deploy XP disk arrays in service-provider solutions. PCXP is web-based software that plugs into Command View XP and allocates XP array performance bandwidth to specific servers or applications based on user-defined policies (a throttling sketch follows these notes).

What problems does it solve?
- lack of complete systems performance management solutions and flexible interfaces
- lack of tools that provide granular control of resources, or groups of resources, that have an impact on overall performance
- keeping things simple, yet powerful enough to have a real effect on overall performance

Customer benefits
- the entire configuration is better understood, relationships are clear, and policy decisions are more accurate and arrived at more quickly
- applications automatically get the bandwidth they need, when they need it
- simplified performance management and reporting
- information is secure
- performance policies can be assigned by HBA, WWN or user-assigned nicknames
- schedule by hour or by day
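As a sketch of the throttling idea (a token bucket standing in for the product's policy engine; all names and parameters are assumptions, not Performance Control XP internals), capping a host or port at a given number of I/Os per second could look like this:

```python
import time

class IopsLimiter:
    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last = time.monotonic()

    def allow(self):
        """Return True if the next I/O may proceed under the IO/s policy."""
        now = time.monotonic()
        # replenish tokens in proportion to elapsed time, capped at the limit
        self.tokens = min(self.max_iops, self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True          # I/O proceeds immediately
        return False             # I/O is delayed to honour the policy
```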

130 Non-disruptive volume migration for performance optimization
Auto LUN XP
Features
- optimizes storage resources
- moves high-priority data to underutilized volumes
- identifies volumes stressed under high I/O workload
- creates a volume migration plan
- migrates data across different disks or RAID levels
Benefits
- improves array performance and reduces management costs by automatically placing data in the highest-performing storage (cache, RAID 1, RAID 5)

Non-disruptive volume migration for performance optimization. HP StorageWorks Auto LUN XP provides automatic monitoring and load balancing of your HP StorageWorks resources. It compares disk array LUN utilization to limits set by you and even proposes a data and volume migration plan, so that hot spots and other potential disk bottlenecks are avoided, providing optimal, load-balanced performance. Auto LUN XP analyzes the current array operation and uses its knowledge of the data layout on the array to suggest how best to relocate data on the XP disk array to improve performance. The administrator then has the option to say "yes" or "no" before any data is moved according to the plan generated by Auto LUN: it works in several discrete phases, reports its findings and generates an action plan, but does not move anything on the array until you give the command (a plan-generation sketch follows these notes).
Benefits include:
- keeps highly accessed data on higher-performance disk drive groups
- migrates across different disks or RAID levels for better performance; shorter random I/Os (2k–4k) may be better suited to smaller, faster disk spindles, and under certain circumstances RAID 1 may be faster than RAID 5
- simple, easy-to-use operation with easy-to-follow menus
- up to 90 days of history are stored and usable reports are generated; data can be exported to Excel for further graphing and charting
- moving volumes is easy: the data on the source volume is copied to the target volume, then host access is transferred to the target volume to complete the migration
At first release, 4,096 pairs are supported; at second release, 8,191 pairs are supported.
(Diagram: policy-based LUN migration between high-capacity 73 GB disks and high-performance 36 GB disks.)
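The plan-generation step referenced above, sketched under assumed thresholds and volume fields (this is not Auto LUN's actual algorithm): heavily used volumes on a slower group are matched with underutilized volumes on a faster group, and the result is only a proposal for the administrator to approve.

```python
def propose_migrations(volumes, hot=70, cold=30):
    """volumes: list of dicts like {"ldev": "00:12", "util": 85, "tier": "RAID5"}.
    Returns a list of (source_ldev, target_ldev) proposals; nothing is moved here."""
    hot_vols = [v for v in volumes if v["util"] > hot and v["tier"] == "RAID5"]
    cold_targets = [v for v in volumes if v["util"] < cold and v["tier"] == "RAID1"]
    plan = []
    for src, dst in zip(hot_vols, cold_targets):
        plan.append((src["ldev"], dst["ldev"]))
    return plan            # presented to the administrator for a yes/no decision
```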

131 Secure Path XP Auto Path XP
Features
- automatic and dynamic I/O path failover and load balancing
- automatic error detection
- heterogeneous support: NT, W2K, HP-UX, Linux, AIX
- path failover and load balancing for MSCS on NT and Windows 2000 (Persistent Reservation)
Benefits
- eliminates single points of I/O failure
- self-managing automatic error detection and load balancing
- same software interface across XP and virtual arrays simplifies training and administration

Automatic protection against HBA and I/O path failures, with load balancing. HP StorageWorks Auto Path XP helps data flow smoothly around bottlenecks and failed I/O paths; dynamic load balancing shares the load between available paths, avoiding excessive queues on any one path and smoothing out the transaction workload (a path-selection sketch follows these notes).

What is it? Host-based software that provides automatic protection against HBA path failures between the server and storage.
What problems does it solve?
- complete protection of all HBA and I/O data paths, no single point of failure
- lack of heterogeneous HBA and I/O protection solutions
- full integration with the vendor's management software and third-party software
- load balancing after a failure
Customer benefits
- no administrator intervention in the event of a failure; training is simplified
- full integration with HP XP management software; simplifies training and support, common interface
- heterogeneous support for HP-UX, Windows NT/2000, IBM AIX and Linux
- performance is maintained despite an I/O path failure

HP-developed Auto Path XP products are available for Windows NT/2000 and HP-UX; HP is currently working on Auto Path XP for Linux and on future HP-UX releases. The HP-developed products work with the XP disk arrays as well as the VA family of arrays, providing flexibility for SANs that incorporate both products. In addition, HP sells an HDS-developed Auto Path XP product for AIX, which works only with the XP disk arrays. (Note: there is also an NT product from Hitachi, placed on blind CPL in June to encourage orders of the HP-developed NT product; it also supports SCSI failover. Sun Solaris support is provided only on a business-need basis through the NSSO big-deal escalation process.)

To date, Windows NT and 2000 have supported a limited "SCSI Reservation Protocol" that is not fully optimized for clustered servers using Fibre Channel storage: the SCSI reservation must be re-distributed every time an I/O attempt is not properly executed, causing performance problems, specifically with I/O path load balancing. HP is modifying the XP firmware and its I/O path failover software, Auto Path XP, to support the "Persistent Reservation Protocol". Persistent Reservations support is built into the XP128 and XP1024 firmware and will be built into all HP Auto Path XP software, allowing HP's I/O path failover solution to support dynamic load balancing in MSCS Windows NT and 2000 environments, as well as Serviceguard for HP-UX and Serviceguard for Linux environments. HP is the first in the industry to have this capability.
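The path-selection sketch referenced above (illustrative Python; the real Auto Path/Secure Path drivers operate inside the host I/O stack): healthy paths are used round-robin for load balancing, and a failed path is removed from the rotation so I/O continues on the remaining ones.

```python
class MultiPath:
    def __init__(self, paths):
        self.paths = list(paths)            # e.g. ("hba0", "port CL1-A"), ...
        self.failed = set()

    def next_path(self):
        """Pick the next healthy path, rotating it to the back for load balancing."""
        healthy = [p for p in self.paths if p not in self.failed]
        if not healthy:
            raise IOError("no usable I/O paths")   # a true single point of failure
        path = healthy[0]
        self.paths.remove(path)
        self.paths.append(path)             # next I/O will prefer another path
        return path

    def mark_failed(self, path):
        """Called by automatic error detection when a path stops responding."""
        self.failed.add(path)
```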

132 Footprint Incredible Density
The XP Next Gen can store data at a density of up to 35 gigabytes per square inch of cabinet floor space — its full complement of disks occupies 4,849 square inches.

133 Database server applications
- operations of institutions serving very large numbers of customers, where searching of very large databases is required (e.g. tax system processing, bank account handling, billing of electricity, gas and telecommunications customers, and many others)
- computer animation; computation and visualization in three-dimensional CAD/CAM problems and in fluid dynamics
- banking and finance – analysis of markets and stock-exchange trends
- geophysics – e.g. processing of three-dimensional seismic data
- chemical processes – data acquisition, process control, visualization, computational chemistry
- power engineering – modelling and management of energy distribution
- electronics – e.g. semiconductor design and simulation

134 Basic parameters of storage media

135 Helical vs. DLT

136 DLT characteristics

137 Characteristics of ATL products

138 Implementing RAS requirements
Redundancy – examples: SUN E6500 and E10000, IBM RS6K M80 and S80, HP N4000, V2500, SuperDome.
Elements (using HP L-class entry-level servers as an example):
- operation monitoring by a separate service processor
- ECC memory (RAM) and cache
- dynamic deallocation of processors and memory modules
- separate bus subsystems for mirrored-disk controllers
- hot-swap replacement and redundancy of power supplies, fans, etc.
- 3-year warranty

139 SUN E10000
16 system boards, each with 4 x UltraSPARC II processors, memory and SBus interfaces; 4 address buses; a 16 x 16 crossbar for data with 12.8 GB/s throughput.

140 HP V2500
8 system boards, each with 4 x PA-8500 processors, main memory and PCI interfaces; an 8 x 8 crossbar (15.3 GB/s throughput); X/Y SCA interconnect links (3.84/3.84 GB/s throughput).

141 4GB – 32GB SDRAM Physical Memory
HP V2500 HyperPlane architecture: 2–32 PA-8500 CPUs per V2500 engine (56 GFLOPS); 8x8 crossbar at 15.3 GB/s with eight 1.9 GB/s crossbar ports; eight 240 MB/s I/O ports with 2x PCI I/O controllers (28); 4 GB – 32 GB SDRAM physical memory, 256-way interleaved, with 2x memory controllers in the 32-way configuration.

142 Disasters
Physical damage to computer hardware
Damage to communication links
Loss of power

