1 Advanced level. Data protection: RAID disk arrays, automated tape libraries, optical CD recording devices.

2 Introduction
- Disks and power supplies are the weakest links of a high-availability system (HAS)
- Disks hold the data
- Data must be protected
- Data must be recoverable with the help of additional systems
- Disk storage systems
  - disk subsystems
    - JBOD (Just a Bunch of Disks)
    - hot-pluggable, warm-pluggable and hot-spare disks, write cache
  - disk arrays
  - SAN
  - NAS

3 Cracow 03 Grid Workshop - SGI InfiniteStorage Product Line
Storage hardware: TP900, TP9100, TP9300, TP9400, TP9500, HDS 99x0, STK Tape Libraries, ADIC Libraries, Brocade Switches, NAS 2000, SAN 2000, SAN 3000
Integrated capabilities (NAS / DAS / SAN):
- High Availability: Redundant Hardware and FailSafe XVM
- Data Protection: Legato NetWorker, XFS Dump, OpenVault
- HSM: SGI Data Migration Facility (DMF), TMF, OpenVault
- Data Sharing: XFS, CIFS/NFS, Samba, Clustered XFS (CXFS), SAN over WAN
Choose only the integrated capabilities you need.

13 Bus parameters

14 Bus parameters (cont.)

15 Comparison (cont.)

16 Mirrored disks and RAID
- single disks - MTBF of hours
- mirrored disks (RAID 1)
- hot-swap - low usable-capacity efficiency (~50%)
- RAID (D.A. Patterson, G. Gibson, R.H. Katz, "A Case for Redundant Arrays of Inexpensive Disks (RAID)", University of California, Berkeley, 1987)
  - group(s) of disks controlled together
  - simultaneous writes and reads on different disks
  - system fault tolerance - redundant information is written
  - cache
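The "redundant information" mentioned above can be as simple as one XOR parity block per stripe, as in RAID 4/5. A minimal sketch of that idea (the block contents and sizes are made up for illustration):

```python
# Minimal sketch of the redundancy idea behind RAID 4/5: one parity block is
# the XOR of the data blocks, so any single lost block can be rebuilt.
# Illustrative only; block size and contents are invented for the example.

def parity(blocks: list[bytes]) -> bytes:
    """XOR all blocks together to form the parity block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three hypothetical data disks (one stripe)
p = parity(data)                     # parity disk

# Simulate losing disk 1 and rebuilding it from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
print("disk 1 rebuilt:", rebuilt)
```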

17 RAID - data servers, data-intensive workloads. A RAID array is typically:
- housed in a single enclosure with its own power supply,
- equipped with redundant components to ensure high availability,
- usually equipped with two or more I/O controllers,
- equipped with cache memory to speed up communication,
- configured to minimize the chance of failure (a hardware implementation of the RAID standard).

18 RAID, cont.
- software vs. hardware RAID
- each host - redundant interfaces
- HW RAID - limited to a single disk array
- disk cache, dedicated processor; disk arrays: several buses for several host interfaces, read-ahead buffers
- SW RAID - flexible and more cost-effective
- HW RAID - potential single points of failure: power supplies, cooling, power wiring, internal controllers, battery backup, backplane/motherboard

19 RAID features

25 Comparison of RAID levels

26 Choosing a RAID level

27 Range of applications

28 Storage Area Network (SAN): LAN, SAN, storage. Advantages:
- centralized management and allocation
- reliability and availability - extensive failover
- network backup
- LAN-free backups

30 SCSI vs. FC (diagram: hosts, disks, hubs; any-to-any connectivity, two links per device; SCSI vs. FC cabling)

31 Why Fibre Channel?
- Gigabit bandwidth now - 1 Gb/s today, soon 2 and 4 Gb/s
- High efficiency - FC has very little overhead
- Multiple topologies - point-to-point, arbitrated loop, fabric
- Scalability - from point-to-point, FC scales to complex fabrics
- Longer cable lengths and better connectivity than existing SCSI technologies
- Fibre Channel is an ISO and ANSI standard

32 High bandwidth
- Storage Area Networks today provide 1.06 Gb/s, 2.12 Gb/s this year, 4.24 Gb/s in the future
- Multiple channels expand bandwidth, e.g. to 800 MB/s

33 FC topologies
- Point-to-point: 100 MB/s per connection; simply defines a connection between a storage system and a host

34 FC topologies, cont.
- FC-AL, Arbitrated Loop
  - single loop: data flows around the loop, passed from one device to another
  - dual loop: some data flows through one loop while other data flows through the second loop
  - each port arbitrates for access to the loop; ports that lose the arbitration act as repeaters

35 FC topologies, cont.
- FC-AL, Arbitrated Loop with hubs: hubs make a loop look like a series of point-to-point connections; adding and removing nodes is simple and non-disruptive to the information flow

36 FC topologies, cont.
- FC switches: fabrics are composed of one or more switches; they enable Fibre Channel networks to grow in size; switches permit multiple devices to communicate at 100 MB/s simultaneously, thereby multiplying bandwidth

37 So how do these choices impact my MPI application performance? Let's find out by running:
- micro-benchmarks that measure basic network parameters like latency and bandwidth: Netperf, PALLAS MPI-1 Benchmark
- real MPI applications: ESI PAM-Crash, ESI PAM-Flow, DWD / RAPS Local Model

38 Benchmark HW setup
- 2 HP N4600 nodes, each with 8 processors running HP-UX 11i
- several network interfaces per node: 4 x 100BT Ethernet, 4 x Gigabit Ethernet copper, 1 x Gigabit Ethernet fibre, 1 x HyperFabric 1
- point-to-point connections, no switches involved
- J6000 benchmarks were run on a 16-node cluster at the HP Les Ulis benchmark center (100BT and HyperFabric 1 switched networks)
- HyperFabric 2 benchmarks were run on the L3000 cluster at the HP Richardson benchmark center

39 Jumbo frames - same performance with fibre and copper GigaBit? Gigabit Ethernet allows the packet size (the maximum transmission unit, MTU) to be increased from 1500 bytes to 9000 bytes. This can be done at any time by invoking lanadmin -M 9000 and can be applied to both copper and fibre Gigabit interfaces.
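For a rough sense of why a larger MTU helps, the fraction of wire time spent on payload can be estimated as follows; the per-frame overhead figures (40 B of TCP/IPv4 headers, 38 B of Ethernet framing) are assumed values for the sketch, not numbers from the slides:

```python
# Rough payload-efficiency estimate for standard vs. jumbo frames.
# Assumed per-frame costs: 40 B of TCP/IPv4 headers inside the MTU, plus
# 38 B of Ethernet framing (header, FCS, preamble, inter-frame gap).
def payload_efficiency(mtu: int, l4_headers: int = 40, framing: int = 38) -> float:
    return (mtu - l4_headers) / (mtu + framing)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{payload_efficiency(mtu):.1%} of wire time carries payload")
# Jumbo frames also mean ~6x fewer packets per byte transferred, which is the
# main source of the per-packet CPU/interrupt savings discussed above.
```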

40 GigaBit MTU Size / Fibre vs Copper

41 What is the Hyper Messaging Protocol (HMP)? Standard MPI messaging between remote nodes goes through the TCP/IP stack, and massive parallelism with clusters is limited by the OS overhead for TCP/IP. Example: PAM-Crash MPP on an HP J6000 HyperFabric workstation cluster (BMW data set): 25% system usage per CPU on an 8x2 cluster, 45% system usage per CPU on a 16x2 cluster. This overhead is OS-related and has nothing to do with network-interface performance.

42 What is HMP? The idea: create a shortcut path for MPI applications in order to bypass some of the TCP/IP overhead of the OS. The approach: move the device driver from the OS kernel into the application, which requires direct HW access privileges. Now available with HP MPI, and only for HyperFabric HW.

43 MPI benchmarking with the PALLAS MPI-1 benchmark (PMB). PMB is an easy-to-use MPI benchmark for measuring key parameters like latency and bandwidth; it can be downloaded from the PALLAS web site. Only one PMB process per node was used, to make sure that network traffic is not mixed with SMP traffic. Unlike the netperf test, this is not a throughput scenario.

44 Selected PMB operations
- PingPong with a 4-byte message (MPI_Send, MPI_Recv) - measures message latency
- PingPong with a 4 MB message (MPI_Send, MPI_Recv) - measures half-duplex bandwidth
- SendRecv with a 4 MB message (MPI_Sendrecv) - measures full-duplex bandwidth
- Barrier (MPI_Barrier) - measures barrier latency
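A ping-pong of this kind can be sketched in a few lines of Python with mpi4py; this only illustrates what the PMB PingPong test measures, it is not the PALLAS benchmark itself:

```python
# Minimal ping-pong sketch with mpi4py; run with e.g.: mpirun -np 2 python pingpong.py
# Illustrates what PMB PingPong measures (half round-trip time for a message).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 4 * 1024 * 1024                      # 4 MB message, as on the slide
buf = np.zeros(n, dtype=np.uint8)
reps = 20

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
t1 = MPI.Wtime()

if rank == 0:
    half_rtt = (t1 - t0) / (2 * reps)    # one-way time per message
    print(f"latency+transfer: {half_rtt*1e6:.1f} us, "
          f"bandwidth: {n/half_rtt/1e6:.1f} MB/s")
```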

45 Selected PMB results (SendRecv 4 MB / PingPong 4 MB / Barrier / PingPong 4 bytes)
- Shared memory: n/a / 544.8 MB/s / 1.8 μs / 1.7 μs
- HyperFabric 2, LowFat MPI: n/a / 112.0 MB/s / 54 μs / 27 μs
- HyperFabric: n/a / 97.0 MB/s / 105 μs / 53 μs
- HyperFabric: n/a / 56.8 MB/s / 125 μs / 61 μs
- GigaBit, 9k MTU: n/a / 62.1 MB/s / 142 μs / 72 μs
- GigaBit, 1.5k MTU: 79.8 MB/s / 62.7 MB/s / 135 μs / 72 μs
- 100baseT: 22.1 MB/s / 10.8 MB/s / 112 μs / 61 μs

46 PingPong as measured

47 PingPong per formula t = t0+n*fmax
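Assuming the formula stands for the usual linear ping-pong model t(n) = t0 + n/bw_max (latency plus message size divided by asymptotic bandwidth), the two parameters can be fitted to measured times; the sample numbers below are placeholders, not measurements from the slides:

```python
# Least-squares fit of the linear ping-pong model t(n) = t0 + n / bw_max.
# The message sizes and times below are made-up placeholders, not measurements.
import numpy as np

sizes = np.array([4, 1024, 65536, 1 << 22], dtype=float)        # bytes
times = np.array([61e-6, 70e-6, 650e-6, 37.5e-3], dtype=float)  # seconds (fake)

A = np.vstack([np.ones_like(sizes), sizes]).T
(t0, per_byte), *_ = np.linalg.lstsq(A, times, rcond=None)
print(f"t0 ~ {t0*1e6:.1f} us, bw_max ~ {1/per_byte/1e6:.1f} MB/s")
```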

48 SAN (Storage Area Network) features
- centralized management
- storage consolidation / shared infrastructure
- high availability and disaster recovery
- high bandwidth
- scalability
- shared data!? (unfortunately not easy!)

51 Centralized management
- Storage today is server-attached, and therefore tied to the server's location:
  - difficult to manage
  - expensive to manage
  - difficult to maintain
(diagram: Local Area Network; IRIX, NT and Linux servers in Buildings A, B, C and D)

52 Centralized management, cont.
- Storage is network-attached (NAS):
  - independent of server location
  - easy and cost-effective to manage and maintain
(diagram: Local Area Network; IRIX, NT and Linux servers; Storage Area Network managed by the IT department)

53 Consolidation
- Storage still: split among different storage systems for different hosts (NT, Linux, IRIX on separate Storage Area Networks)
- Storage now: shares a common infrastructure

54 High availability and disaster recovery (example): a highly redundant network with no single point of failure.

55 Scalability: can scale to very large configurations quickly.

56 Shared data!? Currently there is just shared infrastructure (NT, Linux, Storage Area Network).

57 Networked storage back-end
- NAS (Network Attached Storage)
  - data servers: a lightweight server, mass storage, management, network
  - applications: data-intensive workloads (searching, analytics); real-time processing (multimedia, videoconferencing, video servers - QoS); multiprocessor processing
- SAN (Storage Area Network)
  - separated from the LAN

58 Network Attached Storage (NAS): LANvault backup appliances for remote sites (conceptual view). A LAN at the remote office with a backup server; easy to install; easy to manage over the Web; for remote or non-MIS offices; connected over a WAN or the Internet to a central management console.

59 NAS eliminates the factors that create risk for remote-site backup:
- Reliability of an appliance: self-contained, dedicated to backup only; nothing complex to crash during backup; future performance enhancements via Gigabit Ethernet
- Control via a central management console: no uncertainty about backup having been done correctly; no need to rely on remote-site operators; no outdated or renegade remote sites - proactive alerts about upgrades, patches and service packs, plus automated update of multiple sites, ensure consistency across the entire enterprise

60 Comparing SAN and NAS

61 Cracow 03 Grid Workshop - two buzzwords in the IT industry
- Server consolidation: maybe in a commercial environment, usually not in a technical one (a hammer is a hammer, a screwdriver is a screwdriver; an HPC system cannot be used as an HPV system)
- Storage consolidation: DAS -> NAS -> SAN

62 Cracow 03 Grid Workshop - history of storage architectures: DAS (Direct Attached Storage)
- pro: appropriate performance
- con: distributed, expensive administration; data may not be where it is needed; multiple copies of data stored

63 Cracow 03 Grid Workshop - history of storage architectures: NAS (Network Attached Storage)
- pro: centralized, less expensive administration; one copy of data; access from every system
- con: network performance is the bottleneck

64 Cracow 03 Grid Workshop - history of storage architectures: SAN (Storage Area Network)
- pro: centralized administration; performance equivalent to DAS
- con: NO FILE SHARING; multiple copies of data stored

65 Cracow 03 Grid Workshop - how does that translate to a GRID environment?
- Storage consolidation: useful in a local environment (a GRID node); does not work between remote GRID nodes
- Current data access between GRID nodes: data has to be copied before/after the execution of a job
- Problems: the copy process has to be done manually or included in the job script; copying can take long; multiple copies of data; additional disk space needed; revision problem

66 Cracow 03 Grid Workshop - what if a SAN had the same file-sharing capability as a NAS... and one could build a SAN between different buildings/sites/cities without losing performance?

67 Cracow 03 Grid Workshop - Storage Area Networks (SAN): the high-performance solution. A first step: each host owns a dedicated volume consolidated on a RAID array. Storage management is centralized, and this offers a certain level of flexibility. (diagram: LAN, SAN)

68 Cracow 03 Grid Workshop - SGI InfiniteStorage Shared FileSystem (CXFS): a unique high-performance solution. Each host (Solaris, AIX, HP-UX, Windows NT/2000/XP, Linux, Mac OS, IRIX) shares one or more volumes consolidated in one or more RAID arrays. Centralized storage management, high modularity, true high-performance data sharing, heterogeneous environment.

69 Cracow 03 Grid Workshop - Fibre Channel over SONET/SDH: the high-efficiency, long-distance alternative. Data re-transmission due to IP packet loss limits actual IP throughput over distance. (chart: transfer time in hours vs. distance in kilometers, New York - Boston - Chicago - Denver)

70 Cracow 03 Grid Workshop - LightSand solution for building a global SAN (diagram: two sites, each with a SAN, tape system, storage, servers, client LAN, IP router and Fibre Channel switch, interconnected over a WAN via DWDM, dedicated fiber or SDH/SONET; FC and IP carried over SONET)

71 Cracow 03 Grid Workshop - LightSand products
- S-600: 2 ports, FC and/or IP, 1 Gb/s; point-to-point SAN interconnect over SONET/SDH OC-12c (622 Mb/s bandwidth); low latency (approximately 50 µs)
- S-2500: 3 ports, FC and/or IP, 1 Gb/s; point-to-point SAN interconnect over SONET/SDH OC-48c (2.5 Gb/s bandwidth); point-to-multipoint SAN interconnect over SONET/SDH (up to 5 SAN islands, 622 Mb/s per link); low latency (approximately 50 µs)

72 Cracow 03 Grid Workshop - data movement today, a recent case study: Sandia National Laboratory (SNL) and Los Alamos National Laboratory (LANL), with an IP network between their Fibre Channel storage area networks. Scientists at LANL currently dump 100 GB of supercomputing data to tape and FedEx it to SNL, because it is faster than trying to use the existing 155 Mb/s IP WAN connection - actual measured throughput of 16 Mb/s (10% bandwidth utilization).

73 Cracow 03 Grid Workshop - the better way: directly between storage systems. Using LightSand gateways, the same data could be transferred in a few minutes. (diagram: local and remote data centers, each with an FC SAN and server, connected by LightSand gateways over a telco SONET/SDH infrastructure, alongside the IP network)

74 Cracow 03 Grid Workshop - what does that mean for a GRID environment?
- full-bandwidth data access across the GRID
- no multiple copies of data: avoid the revision problem, do not waste disk space
- make GRID computing more efficient
(map: GDAŃSK, POZNAŃ, ŁÓDŹ, WARSZAWA, WROCŁAW, KRAKÓW)

75 Summary - problems in storage management
- system design and configuration - device management
- problem detection and diagnosis - error management
- capacity planning - space management
- performance tuning - performance management
- high availability - availability management
- automation - self-management
- DATA GRID management (GRID projects): replica management, optimization of data access, software that depends on the kind of data - expert systems, component-expert technology, estimation of data availability time, ...

77 Disk array vendors
- EMC (Symmetrix)
- Hewlett-Packard
- IBM Corp.
- MAXSTRAT Corp.
- SUN Corp.

78 EMC Corp. EMC Corporation manufactures the Symmetrix family of very high-capacity disk arrays with integrated cache memory. Symmetrix arrays can work with hardware from the leading server vendors: HP, IBM, Sun, Silicon Graphics, DEC and others. An array can be attached to a server through an FWD SCSI link or (for HP machines) through an FC-AL fibre-optic link. EMC currently offers two array families, the 3000 and the 5000, with a maximum capacity of 2.96 TB; they differ in interface type: FWD SCSI and FC-AL in the 3000 series, and ESCON in the 5000 series, which is intended for IBM-standard computers. Because they are the most widely used, only the 3000-family models are described below.

79 The Symmetrix 3000 family. The Symmetrix 3000 family comprises three models that differ in maximum capacity, physical size and power consumption, maximum cache size, and number of ports. Very high performance is achieved by using up to 4 GB of cache memory, which maximizes data-processing throughput and application speed. Depending on the model, a Symmetrix array can hold up to 128 disk drives, giving up to 2.94 TB of protected disk storage.

80 Symmetrix 3430 ICDA

81 Comparison of SRDF modes (Symmetrix Remote Data Facility - remote RAID 1)

82 MAXSTRAT Corp.

83 LE and XLE parameters

84 Disk arrays (< 1 TB)

85 Disk arrays (> 1 TB)

86 Example: the HP storage product family (FC10, 12H with AutoRAID, FC60, DS2100, DS2300, VA7100/VA7400, XP48, XP512)
- Entry class: e-commerce, application and file/print
- Mid class (mid-market): mission-critical high availability, ERP, data warehousing, data analysis & simulation, ISP
- High class: storage area networking, mission-critical high availability / disaster recovery, e-commerce, data warehousing and consolidation, enterprise, high availability, disaster tolerance

87 XP512/XP48
- fully Fibre Channel
- fully redundant
- 100% data availability guarantee
- scalable to about 90 TB
- crossbar - a modern switch architecture with 6.4 GB/s throughput
- advanced management and functionality
- the most attractive product in its class according to Gartner Group

88 Virtual Array 7100/7400
- 2 Gb/s Fibre Channel
- virtual architecture
- simple operation with full management capabilities
- advanced protection against failures
- scalable to 7.7 TB
- AutoRAID technology - automatic optimization
- operation in heterogeneous environments

89 HP StorageWorks disk arrays (by scalability and throughput)
- MSA family: low-cost consolidation, < 24 TB; Web, Exchange, SQL; simple DAS-to-SAN migration (ProLiant); Windows, Linux, NetWare and others
- EVA family: outstanding TCO, < 72 TB; storage consolidation + disaster recovery; simplification through virtualization; Windows, HP-UX, Linux and other Unix systems
- XP family: always available, < 165 TB; data-center consolidation + disaster recovery; the most demanding Oracle/SAP-class applications; HP-UX, Windows, and more than 20 other platforms including mainframes

90 Physical architecture - modular, e.g. VA/EVA (diagram: two FC-AL loops, two DFP FC controllers, two cache memories)

91 Physical architecture - monolithic, e.g. XP12000: an 83 GB/s crossbar; up to 128 GB of cache and shared memory; Fibre Channel CHIPs with up to 128 ports (max. 4 CHIP pairs); Fibre Channel ACPs (max. 4 ACP pairs); up to 1152 disks (72 GB 15K rpm and 146 GB 10K rpm).

92 XP disk arrays have a proud heritage: more than 5000 XP frames and 36 petabytes of capacity installed so far (XP256, XP48, XP512, XP128, XP1024).

93 XP Next Gen performance (XP Next Gen / XP1024 / XP128)
- Max sequential performance: 6.0 GB/s / 2.1 GB/s / 1.2 GB/s
- Max random performance, cache: 2,368,000 IOPS / 544,000 IOPS / 272,000 IOPS
- Max random performance, disk: 120,000 IOPS / 66,000 IOPS / 33,000 IOPS
- Data bandwidth: 68 GB/s / 10 GB/s / 5 GB/s
- Control bandwidth: 15 GB/s / 5 GB/s / 2.5 GB/s

94 XP Next Gen scalability (XP Next Gen / XP1024 / XP128)
- Max raw capacity: 165 TB + external storage / 149 TB / 18 TB
- Max usable capacity: 144 TB + external storage / 129 TB / 16 TB
- Max cache: 128 GB / 128 GB / 64 GB
- Max shared memory: 6 GB / 4 GB / 4 GB
- Supported RAID levels: RAID 1 (2D+2D), RAID 1 (4D+4D), RAID 5 (3D+1P), RAID 5 (7D+1P) on all three models; RAID 5 (6D+2P) additionally planned for the XP Next Gen
- Disk drive types: XP Next Gen - 72 GB 15K rpm and 146 GB 10K rpm (146 GB 15K rpm and 300 GB 10K rpm in the future); XP1024 and XP128 - 36 GB 15K rpm, 73 GB 10K rpm, 73 GB 15K rpm, 146 GB 10K rpm

95 XP Next Gen scalability (min / increment / max). Start at the size you need, scale to the highest usable capacity of any array on the planet.
- Data drives: added in groups of four, up to 1,152
- Spare drives: 1 / 1 / 40
- Capacity: 576 GB raw (288 GB usable) / - / 165 TB raw (144 TB usable)
- ACP pairs: 1 / 1 / 4
- Cache: 4 GB / - / 128 GB
- Cache bandwidth: 17 GB/s / 17 or 34 GB/s / 68 GB/s
- Shared memory: 1 GB / - / 6 GB
- Shared memory bandwidth: 7.5 GB/s / - / 15 GB/s
- Host ports: 8 / 8, 16 or 32 / 128
- LDEVs: 1 / 1 / 8,192 (16,384 planned)
- Frames: 1 / 1 / 5

96 XP Next Gen scalability, cont. Start at the size you need, scale to the highest usable capacity of any array on the planet. (diagram: a minimum configuration with one CHIP/ACP pair, 1 GB shared memory and 4 GB cache per cache switch, growing to 6 GB shared memory and 128 GB cache)

97 XP Next Gen RAID types
- RAID 1 (2D+2D): data + mirror; efficiency 50%; write performance: write; fault tolerance: 1 of 4 (+1 of 2)
- RAID 1 (4D+4D): data + mirror; efficiency 50%; write performance: write; fault tolerance: 1 of 4 (+1 of 2)
- RAID 5 (3D+1P): data + parity; efficiency 75%; write performance: read-modify-write; fault tolerance: 1 of 4
- RAID 5 (7D+1P): data + parity; efficiency 87.5%; write performance: read-modify-write; fault tolerance: 1 of 8
- RAID 5 (6D+2P): data + two parity blocks; efficiency 75%; write performance: read-modify-write (heavier); fault tolerance: 2 of 8 (planned)
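The efficiency row of the table is simply the ratio of data disks to total disks in each group; a small check:

```python
# Space efficiency of the RAID layouts listed above:
# usable fraction = data disks / total disks in the group.
layouts = {
    "RAID 1 (2D+2D)": (2, 4),   # (data disks, total disks)
    "RAID 1 (4D+4D)": (4, 8),
    "RAID 5 (3D+1P)": (3, 4),
    "RAID 5 (7D+1P)": (7, 8),
    "RAID 5 (6D+2P)": (6, 8),   # double parity (RAID 6 style)
}
for name, (data_disks, total) in layouts.items():
    print(f"{name}: {data_disks/total:.1%} usable")
# matches the table: 50%, 50%, 75%, 87.5%, 75%
```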

98 Array Control Processor (ACP)
- The ACP performs all data movement between the disks and cache memory
- The ACP also provides data protection through RAID 1 mirroring and RAID 5 parity generation
- Each ACP provides 8 redundant 2 Gb/s FC-AL loops (16 loops total per pair) and supports up to 64 dual-ported drives per loop
- Maximum of four ACP pairs per XP Next Gen system
- ACPs are configured in pairs for redundancy

99 CHIPs
- Client Host Interface Processors (CHIPs) provide connections to hosts or servers and come in pairs; maximum of four pairs per XP Next Gen system
- Fibre Channel: 16- and 32-port, 1-2 Gb/s auto-sensing, in both short- and long-wave versions
- ESCON: 16-port ExSA channel (ESCON compatible)
- FICON: 8- and 16-port, 1-2 Gb/s auto-sensing, in both short- and long-wave versions
- iSCSI: 16-port 1 Gigabit Ethernet, short wave (future)

100 Disk drives
- The number and type of disk drives installed in an XP array is flexible; disks must be added in groups of four
- Additional capacity can be installed over time as capacity needs grow; new (faster/larger) disk technology may become available for future upgrades
- All disks use the industry-standard dual-ported 2 Gb/s Fibre Channel Arbitrated Loop (FC-AL) interface; each disk is connected to both of the redundant ACPs in a pair by separate arbitrated loops
- Spare drives are used automatically in the event of a disk drive failure
Disk drive specifications (72 GB 15K / 146 GB 10K / 146 GB 15K and 300 GB 10K in the future):
- Raw capacity (user area): 71.5 GB / - GB / -
- Disk diameter: 2.5 inches (15K drives) / 3 inches (10K drives)
- Rotation speed: - / 10,025 rpm
- Mean latency: 2.01 ms / 2.99 ms
- Mean seek time (read/write): 3.8/4.2 ms / 4.9/5.4 ms
- Internal data transfer rate: 74.5 to - MB/s / 57.3 to 99.9 MB/s

101 Local clusters (HP-UX, Solaris, AIX and Windows clusters attached to an XP disk array)
- MC/ServiceGuard on HP-UX
- Veritas Cluster Server or Sun Cluster on Solaris
- HACMP on AIX
- Microsoft Cluster Server on NT, Microsoft Cluster Service on Windows 2000
- TruCluster on Tru64
- OpenVMS Clustering on OpenVMS
- Network Cluster Services on NetWare
- FailSafe on IRIX (SGI)

102 Availability
- full redundancy of components
- all components hot-swappable
- spare disks
- online microcode updates with the ability to roll back
- "call home" function
- advanced remote diagnostics

103 Online firmware update - without any host ports going down. Each CHIP has multiple processor modules, and each processor module contains a pair of microprocessors. One microprocessor of a pair is updated while the other continues to service the host ports for the pair, with little impact on performance. This enables remote firmware-update capability independent of host connection topology. (diagram: before update, 1st processor update, 2nd processor update, after update)

104 Battery Internal environmentally friendly nickel-hydride batteries allow the XP Next Gen to continue full operation for up to one minute when the AC power is lost. During this minute all the disk drives continue to operate normally. When duration of the power loss is longer than one minute, the XP Next Gen executes either the De-stage Mode or Backup Mode, protecting data for up to 48 hours.

105 Storage Partition XP / Cache Partition XP - easy, secure consolidation
- divide an XP Next Gen into independently configurable and manageable sub-arrays
- partition cache or array resources (cache, storage, ports)
- allows array resources to be focused to meet critical host requirements
- allows service-provider-oriented deployments
- array partitions are independently and securely managed
- can be deployed in a mixed configuration (cache, array, traditional)
(diagram: sites A, B and C with their own admins, a main-site super admin, and a non-partitioned disk pool)

106 XP Next Gen configuration management
- HP Command View XP*: Web-based XP disk array management, configuration & administration
- HP LUN Configuration & Security Manager XP: Web-based LUN management and data-security configuration
- HP Performance Advisor XP: Web-based real-time performance monitoring
- HP Performance Control XP*: Web-based performance resource allocation
- HP LUN Security XP Extension: Web-based write-once/read-many archive storage configuration
- HP Cache LUN XP: Web-based cache volume lock performance acceleration
- HP Auto LUN XP: Web-based storage performance optimization
- HP Data Exchange XP: mainframe-open system host data sharing
* New/enhanced compared with the XP128/XP1024

107 XP Next Gen availability management
- HP Business Copy XP*: real-time local mirroring
- HP Continuous Access XP / XP Extension: real-time synchronous/asynchronous remote mirroring
- HP Continuous Access XP Journal*: real-time extended-distance multi-site mirroring
- HP External Storage XP*: connect low-cost external storage to the XP
- HP Flex Copy XP: point-in-time copy to / restore from an external HP MSA
- HP Secure Path XP / Auto Path XP: automatic host multi-path failover and dynamic load balancing
- HP Cluster Extension XP: extended high-availability server clustering
- HP Fast Recovery Solutions: accelerated Windows application backup & recovery management
* New/enhanced compared with the XP128/XP1024

108 Business Copy XP - high-speed, real-time data replication
Features: real-time local data mirroring within XP disk arrays; full copies (up to 9 per primary volume) or space-efficient snapshots (up to 32 per primary volume); enables a wide range of data protection, data recovery and application replication solutions; instant access/restore; flexible host agent for solution integration
Benefits: powerful leverage of critical business data for secondary tasks; allows multiple mission-critical storage activities simultaneously

109 Zero-downtime backup with HP Business Copy XP (diagram: production server, Business Copy XP on the XP array with P-VOL and C1, Data Protector backup server, tape libraries, LAN, Data Protector control station)
- no impact on application/database performance
- higher data availability
- fast restore from an online backup
- an automated and fully integrated solution: integrated with Oracle RMAN, SAP brbackup and MS Exchange

110 Business Copy XP - full copy vs. snapshot
Full copy - pros: isolates primary data from copy read/write performance impact; full-speed copy read/write access. Cons: the copy requires the full amount of storage; fewer images (9 per primary). Best for: high performance requirements; heavy primary-write/copy-read datasets; recover-from-copy implementations.
Snapshot - pros: space-efficient, only the delta data consumes extra space; more concurrent images (32 per primary). Cons: performance impact, since snapshot reads affect the primary data; virtual snapshot image, the copy shares data with the primary; heavy-write environments reduce efficiency. Best for: low cost/performance; low-write/read environments.
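The space efficiency of snapshots comes from copy-on-write: only blocks overwritten on the primary after the snapshot was taken consume extra space. A toy sketch of that mechanism (my own simplification, not the Business Copy XP implementation):

```python
# Toy copy-on-write snapshot: the snapshot only stores blocks that have been
# overwritten on the primary since the snapshot was taken ("delta data").
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block number -> data
        self.snapshots = []

    def snapshot(self):
        snap = {}                        # starts empty: shares all data with primary
        self.snapshots.append(snap)
        return snap

    def write(self, blkno, data):
        for snap in self.snapshots:
            if blkno not in snap:        # preserve the old data once per snapshot
                snap[blkno] = self.blocks.get(blkno)
        self.blocks[blkno] = data

    def read_snapshot(self, snap, blkno):
        return snap.get(blkno, self.blocks.get(blkno))

vol = Volume({0: "a0", 1: "b0"})
s = vol.snapshot()
vol.write(1, "b1")                       # only now does the snapshot consume space
print(vol.read_snapshot(s, 1), vol.blocks[1])   # -> b0 b1
```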

111 Continuous Access XP and Continuous Access XP Extension - high-performance, real-time multi-site remote mirroring (metro and continental)
Features: real-time multi-site remote mirroring between local and remote XP disk arrays; synchronous and asynchronous copy modes; sidefile and journaling options; efficient use of remote-link bandwidth; flexible host agent for solution integration
Benefits: enables advanced business continuity; reliable and easy to manage; offers geographically dispersed disaster recovery

112 CA latency/IOPS Study

113 Async-CA guaranteed data sequence. With asynchronous CA, the application writes Data 1-4 into the near-site cache; the writes are transferred asynchronously and may arrive out of order at the far site. Out-of-order arrival is repaired per consistency group: the far site sorts incoming writes by their sequence control information, so the data sequence is guaranteed (for example, at the far site I/O #3 is held until I/O #2 shows up).
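The far-site ordering rule can be sketched as follows: a write is applied only once every earlier sequence number in its consistency group has arrived, otherwise it is held. This is a simplified illustration of the idea, not the actual Async-CA code:

```python
# Sketch of the far-site ordering rule: apply a write only when every earlier
# sequence number in its consistency group has arrived; otherwise hold it.
from collections import defaultdict

pending = defaultdict(dict)              # group -> {seq: data} held out of order
next_seq = defaultdict(lambda: 1)        # group -> next sequence number to apply
applied = defaultdict(list)              # group -> data applied in order

def receive(group, seq, data):
    pending[group][seq] = data
    # drain everything that is now in order (e.g. I/O #3 waits for I/O #2)
    while next_seq[group] in pending[group]:
        applied[group].append(pending[group].pop(next_seq[group]))
        next_seq[group] += 1

for seq in (1, 3, 4, 2):                 # data 3 and 4 arrive before data 2
    receive("ctg0", seq, f"data{seq}")
print(applied["ctg0"])                   # ['data1', 'data2', 'data3', 'data4']
```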

114 Async-CA - what about data lost in flight? The application writes Data 1-4 at the near site; if, say, Data 2 is lost in transit, the far site sorts arrivals by sequence number and consistency group, and the data sequence is guaranteed only up to the missing write. After a time-out (default 3 min, settable to 0 or 3-15 min**) the copy is suspended*. (* all pending correct record sets are completed first; ** under consideration)

115 Async-CA consistency groups. Consistency group summary: up to 16 consistency groups per array; update-sequence consistency is maintained across all volumes that belong to a consistency group. (diagram: near-site system with XP256 (MCU) and RCP, far-site system with XP256 (RCU) and LCP, remote copy links, P and S volumes, RAID Manager)

116 Continuous Access XP Journal vs. Continuous Access XP Extension
- CA XP Journal: update data (the journal) is stored on disk; the remote side initiates updates; enables logical 1:2 and 1:1:1 configurations. Strengths: very efficient updates, less bandwidth required; high-capacity, lower-cost disk-based tracking of update data. Best for multi-site implementations.
- CA XP Extension (using sidefiles): update data (the sidefile) is stored in cache; the primary side initiates updates; 1:1 configurations only. Strengths: potential for better data currency. Best for 1:1 implementations.

117 Multi-site data replication (diagram): a primary site mirrored synchronously over DWDM (<100 km) with Continuous Access XP to a hot-standby site, and asynchronously over a WAN at unlimited distance with Continuous Access XP Extension / Journal to a contingency site; secondary and tertiary copies can be combined with Business Copy at each site.

118 Availability management software (scalability vs. availability): clustering with Auto Path XP and Secure Path XP; Business Copy XP (local copies C1...Cn of a primary P); Continuous Access XP with Cluster Extension XP; wide-area clustering with Continentalclusters.

119 Cluster Extension XP - seamless integration of remote mirroring with industry-leading heterogeneous server clustering
Features: extends local cluster solutions up to the distance limits of the software; rapid, automatic application recovery; integrates with multi-host environments - Serviceguard for Linux, VERITAS Cluster Server on Solaris, HACMP on AIX, Microsoft Cluster service on Windows
Benefits: enhances overall solution availability; ensures continued operation in the event of metropolitan-wide disasters
(diagram: data centers A and B, XP arrays mirrored with Continuous Access XP, a cluster spanning both sites)

120 continentalclusters

121 External Storage XP - connect low-cost and/or legacy external storage while using the superior functionality of the next-generation XP array
- accessed as a full-privilege internal XP Next Gen LU
- no restrictions: use as a regular XP Next Gen LUN, or as part of a Flex Copy XP, Business Copy XP or Continuous Access XP pair, with full solutions support
- facilitates data migration
- reduces costs by using less expensive secondary storage
(diagram: hosts connected to XP Next Gen FC CHIPs; an external MSA1000 attached through an FC CHIP over the SAN)

122 Included support and services: site preparation; array installation and start-up; Proactive 24 support for 1 year; reactive hardware support, 24x7, for 2 years; software support for 1 year (included with the software title). Accelerates your time to production with efficient installation of your new XP disk array by HP-certified storage technical specialists.

123 Mission-critical support (Proactive 24)
- Environmental services: customer support teams, account support plan, activity reviews
- Reactive services: 24x7 HW support with 4-hour response; 24x7 SW technical support with 2-hour response; escalation management
- Technology-focused proactive services: patch analysis and management, SW updates, annual system healthcheck, storage array preventive maintenance, SAN supportability assessment, network software and firmware updates
- HP Critical Support adds a data availability guarantee
- Ongoing support for mission-critical, multivendor environments

124 Command View XP - Web-based device management platform for XP disk arrays
Features: Web-based, multi-array management framework; centralized array administration; advanced Fibre Channel I/O management; supports XP Next Gen and legacy XP arrays from a single management station
Benefits: common management across HP storage devices; graphical visualizations speed up problem resolution; multiple security layers ensure information is safe and secure; manage storage anytime, from anywhere

125 Command View XP - cross-platform support: seamlessly manage XP Next Gen arrays alongside the complete family of legacy XP systems (XP512/XP48, XP1024/XP128); simplify storage management; reduce storage management complexity and cost; accelerate storage deployment. (diagram: Command View management station with web clients)

126 LUN Configuration & Security Manager XP - easy-to-use menus for defining array groups, creating logical volumes and configuring ports
Features: assignment of LUNs via drag-and-drop; assignment of multiple LUNs with a single click; checks each I/O for valid security; LUN and WWN group control
Benefits: flexibility to configure the array to meet changing capacity needs; permissions for data access can be changed with no downtime; every I/O is checked to ensure and maintain secure data access

127 Performance Advisor XP - collect, monitor and display real-time performance
Features: real-time data gathering and analysis; a system view of all resources; flexible event notification and a large historical repository
Benefits: identify performance bottlenecks before they impact business; helps maintain maximum array performance; precise integration eliminates blame storming across system, database and storage administrators

128 Cache LUN XP - boost performance by redirecting I/O requests to the cache
Features: stores critical data in the XP array cache; user-configurable, easy to use; fast file access and data transfer; scalable
Benefits: speeds access to mission-critical data; provides access to critical data such as database indices in nanoseconds rather than milliseconds

129 Performance Control XP - allocates performance bandwidth to specific servers or applications
Features: control consumption of array performance by IO/s or MB/s; precise array-port-level settings; reports and online graphing
Benefits: allows customers to align business priorities with the availability of array resources; effectively manage mission-critical and secondary applications; more efficient use of array bandwidth; allows performance- and service-level-oriented deployments

130 Auto LUN XP - non-disruptive volume migration for performance optimization
Features: optimizes storage resources; moves high-priority data to under-utilized volumes; identifies volumes stressed under high I/O workload; creates a volume migration plan; migrates data across different disks or RAID levels
Benefits: improves array performance and reduces management costs by automatically placing data in the highest-performing storage (cache, RAID 1, RAID 5)
(diagram: policy-based LUN migration between high-capacity 73 GB disks and high-performance 36 GB disks)

131 Secure Path XP / Auto Path XP - automatic protection against HBA and I/O path failures, with load balancing
Features: automatic and dynamic I/O path failover and load balancing; automatic error detection; heterogeneous support (NT, W2K, HP-UX, Linux, AIX); path failover & load balancing for MSCS on NT and Win2K - persistent reservation
Benefits: eliminates single points of I/O failure; self-managing automatic error detection and load balancing; the same software interface across XP and virtual arrays simplifies training and administration

132 Footprint - incredible density: the XP Next Gen can store data at a density of up to 35 gigabytes per square inch, with its full complement of disks fitting in 4849 square inches of cabinet floor space.

133 Applications of database servers
- operations of institutions that serve very large numbers of customers and need to search huge databases (e.g. tax administration, bank accounts, billing for electricity, gas and telecommunications, and many others)
- computer animation
- computation and visualization in three-dimensional CAD/CAM problems and fluid dynamics
- banking and finance - analysis of markets and stock trends
- geophysics - e.g. processing of three-dimensional seismic data
- chemical processes - data acquisition, process control, visualization
- computational chemistry
- power industry - modelling and managing energy distribution
- electronics - e.g. semiconductor design and simulation

134 Basic parameters of storage media

135 Helical vs. DLT

136 DLT characteristics

137 Characteristics of ATL products

138 Implementing RAS requirements
- Redundancy
- Examples: Sun E6500; IBM RS6K M80, S80; HP N4000, V2500, SuperDome
- Elements (using HP entry-class L-series servers as an example):
  - operation monitored by a separate service processor
  - ECC memory (RAM) and cache
  - dynamic deallocation of processors and memory modules
  - separate bus subsystems for the controllers of mirrored disks
  - hot-swap replacement and redundancy of power supplies, fans, ...
  - 3-year warranty

139 Sun E10000: system boards 1-16, each with 4 x UltraSPARC II processors, memory and SBus interfaces; 4 address buses; a 16x16 crossbar for data with 12.8 GB/s throughput.

140 HP V2500: system boards 1-8, each with 4 x PA8500 processors and PCI interfaces; an 8x8 crossbar (15.3 GB/s throughput); main memory; X/Y SCA links (3.84/3.84 GB/s throughput).

141 HP V2500 (new): 2-32 PA-8500 CPUs per V2500 engine (56 GFLOPS); HyperPlane 8x8 crossbar, 15.3 GB/s; eight 1.9 GB/s crossbar ports; eight 240 MB/s I/O ports; 2x PCI I/O controllers (28); 4-32 GB of SDRAM physical memory, up to 256-way interleaved; HyperPlane agents and 2x memory controllers.

142 Disasters
- physical damage to computer hardware
- damage to communication links
- loss of power

