Presentation: "Advanced Level: Data Protection" (slide transcript)
1 Advanced Level: Data Protection. RAID disk arrays; automated tape libraries; optical CD recording devices.
2 Introduction. Disks and power supplies are the weakest links of HA systems. Disks hold the data; the data must be protected, and it must be recoverable with the help of additional systems. Disk storage systems: disk subsystems; JBOD (Just a Bunch of Disks); disks (hot-pluggable, warm-pluggable, hot-spares); write cache; disk arrays; SAN; NAS.
3 SGI InfiniteStorage Product Line. High availability: DAS, NAS, SAN; redundant hardware, FailSafe™, XVM. Data protection: Legato NetWorker, XFS™ Dump, OpenVault™. HSM: SGI Data Migration Facility (DMF), TMF, OpenVault™. Data sharing: XFS, CIFS/NFS, Samba, Clustered XFS (CXFS™), SAN over WAN. Storage hardware: TP900, TP9100, TP9300, TP9400, TP9500, HDS 99x0, STK tape libraries, ADIC libraries, Brocade switches, NAS 2000, SAN 2000, SAN 3000. Choose only the integrated capabilities you need. Cracow '03 Grid Workshop
16 Mirrored disks and RAID. Single disks have a limited MTBF (quoted in hours). Mirrored disks (RAID 1): hot-swap, but low usable-capacity efficiency (~50%). RAID (D.A. Patterson, G. Gibson, R.H. Katz, "A Case for Redundant Arrays of Inexpensive Disks (RAID)", University of California, Berkeley, 1987): group(s) of jointly controlled disks; simultaneous writes and reads on different disks; system fault tolerance through recording redundant information; cache.
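The "redundant information" idea the slide mentions can be sketched with XOR parity, the mechanism later used by RAID 5. A minimal, hypothetical 3-data + 1-parity stripe: the parity block is the XOR of the data blocks, so any single lost block can be recomputed from the survivors.

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"disk", b"arra", b"y!!!"]   # three data blocks (made-up contents)
parity = xor_blocks(data)            # what the parity disk would store

# Simulate losing data disk 1 and rebuilding it from the other disks:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

This is only the single-failure case; the dual-parity layouts discussed later in the deck extend the same principle to survive two failures.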
17 RAID for data servers and heavy data traffic: housed in a single enclosure with its own power supply; equipped with redundant components to ensure high availability; usually fitted with two or more I/O controllers; fitted with cache memory to speed up communication; configured to minimize the possibility of failure (a hardware implementation of the RAID standard).
18 RAID, cont. Software vs. hardware. Each computer: redundant interfaces. HW RAID is limited to a single disk array; disk cache, processor. Disk arrays: several buses for several host interfaces, read-ahead buffers. SW RAID: flexible and more cost-effective. HW RAID: potential single points of failure are the power supplies, cooling, power wiring, internal controllers, battery backup and motherboard.
30 SCSI vs. FC. [Diagram: SCSI needs every-to-every cabling with 2 connections per pair, while FC links computers, hubs and disks.]
31 Why Fibre Channel? Gigabit bandwidth now: 1 Gb/s today, soon 2 and 4 Gb/s. High efficiency: FC has very little overhead. Multiple topologies: point-to-point, arbitrated loop, fabric. Scalability: from point-to-point, FC scales to complex fabrics. Longer cable lengths and better connectivity than existing SCSI technologies. Fibre Channel is an ISO and ANSI standard.
32 High bandwidth. Storage Area Networks provide 1.06 Gb/s today, 2.12 Gb/s this year and 4.24 Gb/s in the future. Multiple channels expand bandwidth, e.g. to 800 MB/s.
33 FC topologies. Point-to-point: 100 MB/s per connection; simply defines a connection between a storage system and a host.
34 FC topologies, cont. FC-AL (Arbitrated Loop), single loop or dual loop. Data flows around the loop, passed from one device to another. Dual loop: some data flows through one loop while other data flows through the second. Each port arbitrates for access to the loop; ports that lose the arbitration act as repeaters.
35 FC topologies, cont. FC-AL with hubs: hubs make a loop look like a series of point-to-point connections. Addition and deletion of nodes is simple and non-disruptive to information flow.
36 FC topologies, cont. FC switches permit multiple devices to communicate at 100 MB/s each, thereby multiplying bandwidth. Fabrics are composed of one or more switches; they enable Fibre Channel networks to grow in size.
37 So how do these choices impact my MPI application performance? Let's find out by running micro-benchmarks that measure basic network parameters such as latency and bandwidth (Netperf, Pallas MPI-1 Benchmark) and real MPI applications (ESI PAM-Crash, ESI PAM-Flow, DWD/RAPS Local Model).
38 Benchmark HW setup. 2 HP N4600 nodes, each with 8 processors running HP-UX 11i, and plenty of network interfaces per node: 4 x 100BT Ethernet, 4 x Gigabit Ethernet copper, 1 x Gigabit Ethernet fibre, 1 x HyperFabric 1. Point-to-point connections, no switches involved. J6000 benchmarks were run on a 16-node cluster at the HP Les Ulis benchmark center (switched 100BT and HyperFabric 1 networks); HyperFabric 2 benchmarks were run on the L3000 cluster at the HP Richardson benchmark center.
39 Jumbo frames: same performance with fibre and copper Gigabit? Gigabit Ethernet allows the packet size (a.k.a. Maximum Transfer Unit, MTU) to be increased from 1500 bytes to 9000 bytes. This can be done at any time by invoking lanadmin -M 9000 <nid>, and it can be applied to both copper and fibre Gigabit interfaces.
41 What is the Hyper Messaging Protocol (HMP)? Standard MPI messaging with remote nodes goes through the TCP/IP stack, and massive parallelism with clusters is limited by the OS overhead for TCP/IP. Example: PAM-Crash MPP on an HP J6000 HyperFabric workstation cluster (BMW data set): 25% system usage per CPU on an 8x2 cluster, 45% on a 16x2 cluster. This overhead is OS-related and has nothing to do with network interface performance.
42 What is HMP? The idea: create a shortcut path for MPI applications in order to bypass some of the TCP/IP overhead of the OS. The approach: move the device driver from the OS kernel into the application, which requires direct HW access privileges. Now available with HP MPI 1.7.1, for HyperFabric hardware only.
43 MPI benchmarking with the Pallas MPI-1 Benchmark (PMB). PMB is an easy-to-use MPI benchmark for measuring key parameters such as latency and bandwidth, and can be downloaded from Pallas. Only one PMB process per node was used, to make sure that network traffic is not mixed with SMP traffic; unlike the netperf test, this is not a throughput scenario.
44 Selected PMB operations. PingPong with a 4-byte message (MPI_Send, MPI_Recv) measures message latency; PingPong with a 4 MB message measures half-duplex bandwidth; SendRecv with a 4 MB message (MPI_Sendrecv) measures full-duplex bandwidth; Barrier (MPI_Barrier) measures barrier latency.
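The reason PingPong is run with both a tiny and a huge message can be shown with the usual linear cost model (an assumption of this sketch, not something PMB itself states): one-way time t(n) = latency + n/bandwidth. Two measurements then pin down both parameters.

```python
def extract_params(t_small, n_small, t_large, n_large):
    """Fit latency (s) and bandwidth (B/s) from two one-way PingPong times."""
    bandwidth = (n_large - n_small) / (t_large - t_small)
    latency = t_small - n_small / bandwidth
    return latency, bandwidth

# Hypothetical timings in the right ballpark for Gigabit Ethernet:
# 4 B message in ~60 µs one way, 4 MB message in ~35 ms one way.
lat, bw = extract_params(60e-6, 4, 35_000e-6, 4 * 2**20)
print(f"latency = {lat * 1e6:.1f} us, bandwidth = {bw / 2**20:.0f} MB/s")
```

The small message is dominated by latency and the large one by bandwidth, which is exactly the split the slide's 4-byte vs. 4 MB choices exploit.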
48 Characteristics of SANs (Storage Area Networks): centralized management; storage consolidation / shared infrastructure; high availability and disaster recovery; high bandwidth; scalability; shared data!? (unfortunately not easy!).
51 Centralized management. [Diagram: a LAN spanning Buildings A-D with IRIX, NT and Linux servers.] Storage today is server-attached, and therefore sits at each server's location: difficult to manage, expensive to manage, difficult to maintain.
52 Centralized management, cont. [Diagram: the same LAN plus a Storage Area Network run by the IT department.] Storage is network-attached (NAS), independent of server location, and easy and cost-effective to manage and maintain.
53 Consolidation. Storage still is split amongst different storage systems for different hosts (Linux, NT, IRIX); storage now shares a common infrastructure, the Storage Area Network.
54 High availability and disaster recovery (an example): a highly redundant network with no single point of failure.
55 Scalability: can scale to very large configurations quickly.
56 Shared data!? Currently there is just shared infrastructure (Linux and NT hosts on a Storage Area Network).
57 Networked data back-end. NAS (Network Attached Storage): data servers; a lightweight server plus mass storage, management and networking. Applications: heavy data traffic; searching and analysis; real-time processing; multimedia and videoconferencing; video servers with QoS; multiprocessor computing. SAN (Storage Area Network): separated from the LAN.
58 LANvault backup appliances for remote sites (conceptual view): Network Attached Storage (NAS) and a backup server on the remote-office LAN, reached from a central management console over the WAN or Internet. Easy to install; easy to manage over the Web; intended for remote or non-MIS offices.
59 NAS eliminates the factors that create risk for remote-site backup. Reliability of an appliance: self-contained and dedicated to backup only, with nothing complex to crash during backup; future performance enhancements via Gigabit Ethernet. Control via the central management console: no uncertainty about whether backup was done correctly; no need to rely on remote-site operators; no outdated or "renegade" remote sites. Proactive alerts about upgrades, patches and service packs, together with automated updates of multiple sites, ensure consistency across the entire enterprise.
61 Two buzzwords in the IT industry. Server consolidation: maybe in a commercial environment, usually not in a technical one; a hammer is a hammer and a screwdriver is a screwdriver, so an HPC system cannot be used as an HPV system. Storage consolidation: DAS -> NAS -> SAN.
62 History of storage architectures: DAS (Direct Attached Storage). Pro: appropriate performance. Con: distributed, expensive administration; data may not be where it is needed; multiple copies of data are stored.
63 History of storage architectures: NAS (Network Attached Storage). Pro: centralized, less expensive administration; one copy of the data; access from every system. Con: network performance is the bottleneck.
64 History of storage architectures: SAN (Storage Area Network). Pro: centralized administration; performance equivalent to DAS. Con: NO FILE SHARING; multiple copies of data are stored.
65 How does that translate to a GRID environment? Storage consolidation is useful in a local environment (a GRID node) but does not work between remote GRID nodes. Current data access between GRID nodes: data has to be copied before and after the execution of a job. Problems: the copy process has to be done manually or included in the job script; copying can take long; multiple copies of the data exist; additional disk space is needed; and there is a revision problem.
66 What if... a SAN had the same file-sharing capability as a NAS? ...one could build a SAN between different buildings, sites or cities and not lose performance?
67 Storage Area Networks (SAN): the high-performance solution. A first step: each host owns a dedicated volume consolidated on a RAID array. Storage management is centralized, and the setup offers a certain level of flexibility.
68 SGI InfiniteStorage Shared Filesystem (CXFS): a unique high-performance solution. Each host shares one or more volumes consolidated in one or more RAID arrays. Centralized storage management; high modularity; true high-performance data sharing; heterogeneous environment (IRIX; Windows NT, 2000 and XP; Linux; Mac OS; Solaris, AIX, HP-UX).
69 Fibre Channel over SONET/SDH: the high-efficiency, long-distance alternative. [Chart: transfer time in hours vs. distance in kilometers, New York to Boston, Chicago and Denver.] Data retransmission due to IP packet loss limits actual IP throughput over distance.
70 LightSand solution for building a global SAN. [Diagram: clients and servers on the LAN; storage and a tape system behind a Fibre Channel switch; the WAN reached through an IP router or FC gateway over dedicated fiber, DWDM, SDH/SONET, FC or FCIP.]
71 LightSand products. S-600: 2 ports FC and/or IP 1 Gb/s; point-to-point SAN interconnect over SONET/SDH OC-12c (622 Mb/s of bandwidth); low latency (approximately 50 µs). S-2500: 3 ports FC and/or IP 1 Gb/s; point-to-point SAN interconnect over SONET/SDH OC-48c (2.5 Gb/s of bandwidth); point-to-multipoint SAN interconnect over SONET/SDH (up to 5 SAN islands).
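To put the ~50 µs gateway latency in context: over metro and long-haul distances, fiber propagation delay dominates. The sketch assumes light travels at roughly 2/3 c in glass (about 5 µs per km one way) and uses approximate great-circle distances from New York; both figures are assumptions for illustration, not from the slide.

```python
C_FIBER_KM_PER_S = 200_000  # assumed signal speed in fiber, ~2/3 the speed of light

def one_way_delay_us(km, gateway_us=50):
    """Gateway latency plus fiber propagation delay, in microseconds."""
    return gateway_us + km / C_FIBER_KM_PER_S * 1e6

for city, km in [("Boston", 300), ("Chicago", 1150), ("Denver", 2600)]:
    print(f"New York -> {city}: ~{one_way_delay_us(km):.0f} us one way")
```

Even to Boston, propagation (~1.5 ms) is thirty times the gateway's 50 µs, which is why a low-overhead, loss-free SONET/SDH path beats a retransmitting IP path at distance.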
72 Data movement today: a recent case study. Scientists at Los Alamos National Laboratory (LANL) currently dump 100 GB of supercomputing data to tape and FedEx it to Sandia National Laboratory (SNL), because that is faster than trying to use the existing 155 Mb/s IP WAN connection: the actual measured throughput is 16 Mb/s (10% bandwidth utilization).
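A back-of-the-envelope check of the case-study numbers shows why tape plus FedEx won:

```python
def transfer_hours(gigabytes, megabits_per_s):
    """Time to push a payload through a link at the given sustained rate."""
    bits = gigabytes * 1e9 * 8
    return bits / (megabits_per_s * 1e6) / 3600

print(f"100 GB at  16 Mb/s: {transfer_hours(100, 16):.1f} h")   # ~13.9 h (measured rate)
print(f"100 GB at 155 Mb/s: {transfer_hours(100, 155):.1f} h")  # ~1.4 h (nominal rate)
```

At the measured 16 Mb/s the transfer takes over half a day, versus well under two hours if the nominal link rate were actually achieved.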
73 The better way: directly between storage systems. The local and remote data centers, each with an FC SAN, are linked by LightSand gateways over the telco SONET/SDH infrastructure, bypassing the IP network. Using LightSand gateways, the same data could be transferred in a few minutes!
74 What does that mean for a GRID environment? Full-bandwidth data access across the GRID; no multiple copies of data (avoiding the revision problem and not wasting disk space); GRID computing becomes more efficient. [Map: GDAŃSK, POZNAŃ, ŁÓDŹ, KRAKÓW, WROCŁAW, WARSZAWA.]
75 Summary: problems in storage management. System design and configuration: device management. Problem detection and diagnosis: error management. Capacity planning: space management. Performance tuning: performance management. High availability: availability management. Automation: self-management. DATA GRID management (GRID projects)! Replica management; optimization of data access; software that depends on the kind of data (expert systems, component-expert technology); time estimation of data availability...
78 EMC Corp. EMC Corporation manufactures the high-capacity Symmetrix disk arrays with integrated cache memory. Symmetrix arrays can cooperate with hardware from the leading server manufacturers: HP, IBM, Sun, Silicon Graphics, DEC and others. An array can be attached to a server via an FWD SCSI link or, on HP machines, via an FC-AL fibre link. EMC currently offers two array families, the 3000 and the 5000, with a maximum capacity of 2.96 TB; they differ in their interfaces: FWD SCSI and FC-AL in the 3000 series, and ESCON in the series intended to work with IBM-standard computers. Because of their prevalence, only the models of the 3000 family are described below.
79 The Symmetrix 3000 family. The family comprises three models that differ in maximum capacity, dimensions and power consumption, maximum cache size, and number of ports. Very high performance is achieved through a cache of up to 4 GB, which maximizes data-processing speed and application performance. Depending on the model, a Symmetrix 3000 array can hold up to 128 disk drives, giving up to 2.94 TB of protected disk storage.
86 Example: the HP storage product family. High end: Storage Area Networking; mission-critical high availability/disaster recovery; e-commerce; data warehousing and consolidation. Midrange: mission-critical high availability; ERP; data warehousing; data analysis and simulation. Entry level: e-commerce; application and file/print. [Chart, plotted from high availability to disaster tolerance and from mid-market to enterprise: XP512, XP48, VA7100/VA7400, FC60, DS2100, 12H with AutoRAID, FC10, DS2300, ISP.]
87 XP512/48: fully Fibre Channel; fully redundant, with a guarantee of 100% data availability; scalable to about 90 TB; crossbar, a modern switch architecture with 6.4 GB/s of throughput; advanced management and functionality; the most attractive product in its class according to Gartner Group.
88 Virtual Array 7100/7400: 2 Gb/s Fibre Channel; virtual architecture; simple operation with full management capabilities; advanced failure-protection methods; scalable to 7.7 TB; AutoRAID technology with automatic optimization; operation in heterogeneous environments.
89 HP StorageWorks disk arrays: the XP, EVA and MSA families (HP's long-term strategic array portfolio). HP StorageWorks MSA family: ideal for small and medium business (SMB) customers and enterprise departments looking to make the first step toward external storage consolidation and SAN. Typical customers consider investment protection critical, are price-sensitive, and have moderate concerns about scalability. The MSA family is a perfect solution for customers with deployed Smart Array controllers and/or ProLiant servers, because it is built with HP Smart Array technology and features; with its unique DtS (DAS-to-SAN) migration technology, customers can easily move from internal ProLiant server storage to external storage and even SAN, leveraging common disk drives, management tools, data and technology. The family consists of three products (MSA30, MSA500, MSA1000) plus optional server/storage bundles, and delivers simplicity and all the cost savings of consolidated storage (MSA family: easy, safe, affordable). HP StorageWorks EVA family: ideal for midrange and enterprise customers that require high availability, but not at the same level as the XP; they can tolerate some unavailability. They may want an identical remote copy of the data, for instance, but have a greater tolerance for the time needed to restore; these are customers who do not buy NonStop servers or high-end Superdomes. The EVA is an outstanding solution for customers looking for enterprise-class capabilities at a more cost-effective price, for whom TCO and ongoing management costs are very important. The EVA family is very simple to use and highly automated, letting customers manage more storage per administrator than with any other array subsystem on the market and providing the best TCO.
(EVA family: enterprise class that's powerfully simple.) HP StorageWorks XP family: ideal for enterprise customers requiring the highest levels of scalability, availability, disaster recovery and business-continuity solutions, and guaranteed-uptime service levels; the level of service is comparable to that of NonStop servers or high-end Superdome servers. The XP is an excellent, 'bullet-proof' solution for customers whose applications cannot afford any downtime and who are willing to pay the cost of implementing that level of service; their business demands it. The XP also supports the largest number of OSs, offers very robust solutions, and can manage up to 149 TB in one box with one interface that controls it all (XP family: exceptional availability, scalability and throughput). It's a matter of choice: we have customers who clearly understand this and own MSAs, EVAs and XPs, because they have different needs within their business. [Comparison chart, plotted from throughput to scalability. XP family: always available; <165 TB; data-center consolidation plus disaster recovery; the most demanding Oracle/SAP-class applications; HP-UX, Windows and 20+ other platforms including mainframe. EVA family: exceptional TCO; <72 TB; storage consolidation plus disaster recovery; simplification through virtualization; Windows, HP-UX, Linux and other Unix systems. MSA family: low-cost consolidation; <24 TB; web, Exchange, SQL; simple DAS-to-SAN (ProLiant); Windows, Linux, NetWare and others.]
90 "Modular" physical architecture, e.g. VA/EVA. [Diagram: FC host ports, cache memory and DFP controllers attached to FC-AL disk loops.]
91 "Monolithic" physical architecture, e.g. XP12000. [Diagram: Fibre Channel CHIPs (up to 4 pairs, up to 128 ports) and ACPs (up to 4 pairs, up to 1152 disks, 72 GB 15K rpm and 146 GB 10K rpm) around a crossbar with 83 GB/s of bandwidth and 128 GB of cache and shared memory.] The XP's crossbar architecture eliminates bus contention, creating a high-bandwidth path from server to disk. In simplified terms, the crossbar provides literally two switches (two for redundancy) that enable all data paths to remain open without any contention; for the XP1024 this translates to an aggregate backplane throughput of 15 GB/s. The architecture is highly redundant, with no single point of failure: redundant components and online firmware upgrades keep the array up and running and ensure that critical data stays available. It is also highly efficient: the host side (CHIP) processes cache hits, checks access rights and links protocols, so server requests are processed and handled quickly. The Channel Host Interface Processor (CHIP) is a PCB that controls data transfer between the host and cache memory. The Array Control Processor (ACP) is a PCB that controls data transfer between the disk drives and cache memory; four ports are mounted per PCB, each controlled by its own dedicated microprocessor, and data is transferred concurrently between ports and disk drives. Cache memory is nonvolatile (battery-backed), and write data is duplicated so that no data is lost even when a single component of a power supply or PCB fails. Shared memory is nonvolatile (battery-backed) and stores cache directories and disk-control information; the amount required varies with the cache size and the number of LDEVs.
92 XP disk arrays have a proud heritage: more than 5000 XP frames and 36 petabytes of capacity installed so far! The XP Next Gen is the latest generation of the widely used and highly proven XP disk-array platform (XP256, XP48, XP512, XP128, XP1024, XP12000); HP has shipped more than 4000 XP frames and more than 36 petabytes (36,000 terabytes) of XP storage capacity. As the latest generation, the XP Next Gen provides even more availability, scalability and flexibility to fit your capacity requirements: increased disk capacity, including support for leading 72 GB 15K rpm and 146 GB 10K rpm dual-ported Fibre Channel Arbitrated Loop disk drives; increased maximum cache bandwidth; increased shared-memory bandwidth; and increased disk-connection bandwidth.
93 XP Next Gen performance (XP Next Gen / XP1024 / XP512): max sequential performance 6.0 / 2.1 / 1.2 GB/s; max random cache performance 2,368,000 / 544,000 / 272,000 IOPS; max random disk performance 120,000 / 66,000 / 33,000 IOPS; data bandwidth 68 / 10 / 5 GB/s; control bandwidth 15 / 5 / 2.5 GB/s.
95 XP Next Gen scalability: start at the size you need, scale to the highest usable capacity of any array on the planet. The XP Next Gen allows you to start with the configuration you need today and scale to the highest usable capacity of any disk array as your needs grow. You can mix drives of various capacities and speeds in the same array to achieve an optimal balance of performance and capacity for your particular application demands, and you can size the cache, ACP pairs, shared memory, and the number of host ports and spare drives to meet your requirements; the XP Next Gen is the right choice if you need the very best performance. With its high performance and storage capacity, it is ideally suited for business intelligence, i.e. a combination of data warehousing, data mining and information generation. There are 4 slots reserved exclusively for spare disks; another 36 slots can be used either for spares or for data disks, for a maximum of 40 spare disks. [Table, min / increment / max: data drives 8 / 4 / 1148; spare drives 1 / - / 40; capacity 576 GB raw (288 GB usable) / - / 165 TB raw (144 TB usable); cache 4 GB / - / 128 GB; cache bandwidth 17 / 17 or 34 / 68 GB/s; shared memory 1 GB / - / 6 GB; shared-memory bandwidth 7.5 / - / 15 GB/s; host ports 8 / 16 or 32 / 128; LDEVs 8,192 / - / 16,384; frames - / - / 5 (2nd release).]
96 XP Next Gen scalability, cont. (animated build-up; ACP pairs, DKUs and disk drives can be added in a different order than shown, depending on the customer's performance/cost preferences). The XP Next Gen lets you start with the size you need today and scale as your needs grow. The smallest XP Next Gen consists of one DKC (control cabinet), containing as little as one CHIP pair, 4 GB of cache memory, 1 GB of shared memory, one ACP pair and 8 disk drives. As your needs grow you can continue to add disk drives, up to 128 in the DKC; add a DKU (disk cabinet) with more drives; add further ACP pairs and disks; and add infrastructure, up to 4 CHIP pairs, 128 GB of cache and 12 GB of shared memory. Additional DKUs can be added, each holding up to 256 disk drives, with up to 512 disk drives per ACP pair. Continue to expand capacity as your needs grow, and scale to the highest usable capacity of any disk array on the planet!
97 XP Next Gen RAID types: RAID 1 2D+2D, RAID 1 4D+4D, RAID 5 3D+1P, RAID 5 7D+1P and RAID 5 6D+2P (scheduled for the 2nd release). All RAID types can be configured in the same XP array at the same time. RAID 1 2D+2D: 2 data + 2 mirrored data; the array can read from either set of identical information; 50% of raw capacity is usable. RAID 1 4D+4D: 4 data + 4 mirrored data, doubling the maximum size of a RAID 1 group; for certain operating systems, having a smaller number of larger-capacity stripes makes configuring the storage easier and simpler. RAID 5 3D+1P: 3 data + 1 parity drive; 75% of raw capacity is usable. RAID 5 7D+1P: 7 data + 1 parity; improves storage efficiency to 87.5%. RAID 5 6D+2P: 6 data + 2 parity; improves fault tolerance, since two drives in a group can fail with data still available, at the cost of slightly lower write performance; this type can also properly be called RAID 6. This "RAID 1" is in fact a combination of RAID levels 0 and 1: the array is first set up as a group of mirrored pairs (RAID 1), then striped (RAID 0). Such an array can sustain multiple drive failures as long as both drives of a mirrored pair do not fail at the same time, the probability of which is very low; performance and redundancy are both very good, at the cost of additional disk drives. RAID 5 uses block-level striping with distributed parity: spreading the parity over all drives removes the single-drive bottleneck, and the parity provides fault tolerance. Write performance: RAID 1 writes require 2 I/Os (the same data written twice) and are the fastest of any RAID type; RAID 5 3D+1P and 7D+1P writes require 4 I/Os (read old data, read old parity, write new data, write new parity), so they can be as slow as half of RAID 1; RAID 5 6D+2P writes require 6 I/Os (the two parity blocks are each read and written), so they can be as slow as one third of RAID 1. Read performance of all RAID types is similar. The XP disk arrays automatically use a spare drive when a disk failure occurs, keeping the window of degraded protection small; after the failed drive is replaced, the data is rebuilt and the spare drive is freed. [Summary row: efficiency 50% / 50% / 75% / 87.5% / 75%; fault tolerance 1 of 4 (+1 of 2) / 1 of 8 / 2 of 8.]
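The efficiency and write-cost figures above follow directly from the layout definitions, which a short calculation makes explicit (the table mirrors the slide's numbers; nothing here is XP-specific):

```python
# name: (data disks, redundancy disks, I/Os per small random write)
LAYOUTS = {
    "RAID1 2D+2D": (2, 2, 2),  # write data twice
    "RAID1 4D+4D": (4, 4, 2),
    "RAID5 3D+1P": (3, 1, 4),  # read data, read parity, write data, write parity
    "RAID5 7D+1P": (7, 1, 4),
    "RAID5 6D+2P": (6, 2, 6),  # two parity blocks mean two extra I/Os
}

for name, (d, r, ios) in LAYOUTS.items():
    efficiency = d / (d + r)
    print(f"{name}: {efficiency:.1%} usable, {ios} I/Os per small write")
```

This shows the trade-off in one glance: wider parity groups (7D+1P) raise usable capacity without raising the write penalty, while dual parity (6D+2P) buys fault tolerance at both capacity and write cost.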
98 Array Control Processor (ACP). The ACP performs all data movement between the disks and cache memory, and provides data protection through RAID 1 mirroring and RAID 5 parity generation. Each ACP provides 8 redundant 2 Gb/s FC-AL loops (16 loops total per pair, compared with four 1 Gb/s loops in the XP1024/XP128) and supports up to 64 dual-ported drives per loop. ACPs are configured in pairs for redundancy; an XP Next Gen system holds one to four ACP pairs.
99 CHIPs. Client Host Interface Processors (CHIPs) provide connections to the hosts or servers that use the XP Next Gen for data storage; they come in board pairs, with a minimum of one and a maximum of four pairs per system. Available CHIP pairs: Fibre Channel, 16- and 32-port, 1-2 Gb/s auto-sensing, in short- and long-wave versions, with Continuous Access support; ESCON, 16-port ExSA channel (ESCON-compatible); FICON, 8- and 16-port, 1-2 Gb/s auto-sensing, in short- and long-wave versions; iSCSI (future), 16-port 1 Gb/s Gigabit Ethernet, short-wave. All XP CHIPs provide native connections and do not require any external converters.
100 Disk drives. The number and type of disk drives installed in an XP array is flexible: disks must be added in groups of four (array groups), and additional capacity can be installed over time as needs grow; new (faster/larger) disk technology may become available for future upgrades. All disks use the industry-standard dual-ported 2 Gb/s Fibre Channel Arbitrated Loop (FC-AL) interface, and each disk is connected to both of the redundant ACPs in a pair by separate arbitrated loops. Spare drives are used automatically in the event of a disk-drive failure. Each XP must include at least one spare for every type of disk configured in the array; for larger configurations, and especially where RAID 5 7D+1P is implemented, more than one spare of the type used in the RAID 5 7D+1P is strongly recommended. Additional spares can be configured in the field by ordering additional array groups and configuring the drives as spares; the maximum is 40 spares (4 slots are reserved exclusively for spare disks, and another 36 slots can hold either spares or data disks). [Drive specifications for 72 GB 15K / 146 GB 10K / 146 GB 15K (future) / 300 GB 10K (future): raw capacity (user area) 71.5 GB / ... GB; disk diameter 2.5 / 3 inches; rotation speed ... / 10,025 rpm; mean latency 2.01 / 2.99 ms; mean seek time (read/write) 3.8/4.2 / 4.9/5.4 ms; internal data transfer rate 74.5-... / 57.3-99.9 MB/s.]
101 Local clusters: MC/ServiceGuard on HP-UX; Veritas Cluster Server or Sun Cluster on Solaris; HACMP on AIX; Microsoft Cluster Server on NT; Microsoft Cluster Service on Windows 2000; TruCluster on Tru64; OpenVMS Clustering on OpenVMS; Network Cluster Services on NetWare; FailSafe on IRIX (SGI). [Diagram: AIX, Windows, Solaris and HP-UX clusters sharing one XP disk array.]
102 Availability: full redundancy of components; all components hot-swappable; spare disks; online microcode update with the option of rollback; a "call home" function; advanced remote diagnostics.
103 Online firmware update, without any host ports going down. Each CHIP has multiple processor modules, and each module contains a pair of microprocessors: one microprocessor of the pair is updated while the other continues to service the host ports served by both, with little impact on performance (before update -> 1st processor update -> 2nd processor update -> after update). Firmware executes on the CHIP and ACP microprocessors (the SVP and environmental functions also have some firmware). Depending on the update options, anywhere from one microprocessor in a cluster at a time to half of the microprocessors in the entire cluster may be updated at once. Online firmware update works with every connection configuration instead of requiring redundant host paths in the SAN or direct FC connections to hosts, so firmware can be updated remotely, without a visual inspection of the machine or complicated site-qualification audits. A microcode update usually takes 5-10 minutes to complete.
104 Battery
- internal, environmentally friendly nickel-hydride batteries allow the XP Next Gen to continue full operation for up to one minute after AC power is lost; during this minute all disk drives continue to operate normally
- if the power loss lasts longer than one minute, the XP Next Gen executes either De-stage Mode or Backup Mode, protecting data for up to 48 hours
- the choice of De-stage vs. Backup Mode has dependencies: if the array has 2 ACP pairs or fewer and no external storage, either mode can be chosen; if external storage is connected or there are 3 or more ACP pairs, Backup Mode must be used
- the Auto-Power-On switch offers two behaviours: automatic restart, or manual-intervention restart for sites that want to be sure power is stable before resuming operations
- an external uninterruptible power supply (UPS) can keep the array operating for as long as the UPS can supply power
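The mode-selection dependency above reduces to a simple rule, encoded here as a hypothetical helper (the function name and return values are illustrative, not an HP API):

```python
def allowed_power_loss_modes(acp_pairs, has_external_storage):
    """Which power-loss protection modes the configuration permits,
    per the rule quoted in the slide above."""
    if has_external_storage or acp_pairs >= 3:
        return ["backup"]              # Backup Mode is mandatory
    return ["de-stage", "backup"]      # either mode may be chosen

assert allowed_power_loss_modes(2, False) == ["de-stage", "backup"]
assert allowed_power_loss_modes(3, False) == ["backup"]
assert allowed_power_loss_modes(1, True) == ["backup"]
```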
105 Storage Partition XP / Cache Partition XP
Easy, secure consolidation:
- divide an XP Next Gen into independently configurable and manageable "sub-arrays"
- partitions cache or array (cache, storage, port) resources
- allows array resources to be focused on critical host requirements; allows for service-provider-oriented deployments
- array partitions are independently and securely managed; mixed deployments (cache, array, traditional) are possible
HP StorageWorks Storage Partition and Cache Partition XP allow the resources of a single XP array to be divided into separately configurable/manageable array "domains"; a single XP can, in effect, be provisioned as a series of distinct subsystems.
Cache Partition XP allocates array cache to particular host/port combinations so that those hosts/ports enjoy optimized performance for cache-oriented I/O:
- cache partitions are assigned to specified disk array groups
- up to 32 partitions of at least 4 GB each can be created
- provides another method of tuning data-access performance for performance-critical applications
- sequential cache performance is increased by the improved internal cache design: cache segments are larger (256 kB), and larger segments reduce processing overhead, increasing the processor power available for I/O activity
- cache can be partitioned separately from storage partitions
With Storage Partition XP, a single XP can, in effect, be provisioned to appear as a series of distinct virtual subsystems.
Each virtual subsystem can be independently provisioned and securely administered without impacting other subsystems:
- each partition has assigned elements: host ports, disk array groups, and cache partitions
- host access is through the hardware ports assigned to the partition
- storage partitions are individually managed and are not visible to other partitions
- some or all of the XP Next Gen's elements can be assigned to one or more storage partitions
[Diagram: CHIPs and cache partitioned between a Site A admin and a Site B admin, with non-partitioned disk at the main site under a "super admin"]
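The cache-partition limits quoted above (at most 32 partitions, each at least 4 GB, within the installed cache) lend themselves to a small sanity check. This is a hypothetical planning helper, not array software; all names are illustrative.

```python
def validate_cache_partitions(sizes_gb, total_cache_gb):
    """Check a proposed cache-partition plan against the limits
    stated in the slide: <= 32 partitions, each >= 4 GB."""
    if len(sizes_gb) > 32:
        raise ValueError("at most 32 cache partitions")
    if any(s < 4 for s in sizes_gb):
        raise ValueError("each partition needs at least 4 GB")
    if sum(sizes_gb) > total_cache_gb:
        raise ValueError("partitions exceed installed cache")
    return True

assert validate_cache_partitions([4, 8, 16], total_cache_gb=64)
```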
106 XP Next Gen Configuration Management
- HP Command View XP*: web-based XP disk array management, configuration and administration
- HP LUN Configuration & Security Manager XP: web-based LUN management and data-security configuration
- HP Performance Advisor XP: web-based real-time performance monitoring
- HP Performance Control XP*: web-based performance resource allocation
- HP LUN Security XP Extension: web-based Write-Once/Read-Many archive storage configuration
- HP Cache LUN XP: web-based cache volume locking for performance acceleration
- HP Auto LUN XP: web-based storage performance optimization
- HP Data Exchange XP: mainframe/open-systems host data sharing
Start with Command View XP, the web-based management platform, then tailor your solution with point products that provide a more precise level of control, enabling you to:
- configure storage volumes and provide secure connection to hosts, using HP LUN Configuration and Security Manager XP
- monitor and tune the performance of the XP disk array, using HP Performance Advisor XP
- allocate performance resources to mission-critical hosts/applications/processes, using HP Performance Control XP
- configure XP storage for archiving applications or data-retention/SEC requirements, using HP LUN Security XP Extension
- fine-tune storage resources by configuring frequently accessed files in cache, using HP Cache LUN XP
- optimize performance through intelligent volume migration, using HP Auto LUN XP
- share and exchange data in mixed mainframe/open-systems environments, using HP Data Exchange XP
HP StorageWorks Command View XP provides a web-based management platform for XP management applications; its graphical mapping of host and storage status and of host-to-array volumes increases staff efficiency and reduces costs.
HP LUN Configuration and Security Manager XP combines a configuration and capacity-expansion tuning tool with the best security available; in summary, it combines the features of LUN Configuration Manager and Secure Manager
into a single product. It is available for the XP128 and XP1024 disk arrays only and is a required purchase with either of those arrays.
HP StorageWorks Performance Advisor XP is a web-based application that collects and monitors real-time performance of XP disk arrays, either standalone or integrated with Command View XP. Web-based performance monitoring can be used almost anywhere to check the status of an XP array, and a CLUI is also provided for integration with third-party packages. Performance Advisor XP lets you monitor many hosts against many arrays from a single management station.
HP StorageWorks Performance Control XP enables service providers using XP disk arrays to set and relax performance caps by worldwide name or user-assigned nickname, specified in MB/s or I/Os per second.
HP StorageWorks LUN Security XP Extension provides an evolved means of LDEV access control, allowing a storage administrator to protect key datasets from being changed after the initial write. As part of an overall storage/server/application solution, it can help customers deploy a storage solution that addresses SEC/congressional requirements for data integrity and retention.
HP StorageWorks Cache LUN XP can speed up access to mission-critical data by locking it in cache; wherever rapid access to information is important, Cache LUN XP is the answer for heavily accessed LUNs such as database logs or index files.
HP StorageWorks Auto LUN XP provides automatic monitoring and workload balancing for XP disk arrays. It lets you move frequently accessed data to underutilized volumes, replicate volumes for backup and recovery, and simulate the results of any change before data is moved.
HP StorageWorks Data Exchange XP facilitates sharing of information across computing platforms.
* New/enhanced compared to the XP128/XP1024
107 XP Next Gen Availability Management
- HP Business Copy XP*: real-time local mirroring
- HP Continuous Access XP / XP Extension: real-time synchronous/asynchronous remote mirroring
- HP Continuous Access XP Journal*: real-time extended-distance multi-site mirroring
- HP External Storage XP*: connect low-cost external storage to the XP
- HP Flex Copy XP: point-in-time copy to, and restore from, an external HP MSA
- HP Secure Path XP / Auto Path XP: automatic host multi-path failover and dynamic load balancing
- HP Cluster Extension XP: extended high-availability server clustering
- HP Fast Recovery Solutions: accelerated Windows application backup and recovery management
Enhance the availability and continuity of an XP Next Gen solution by deploying a variety of tailored replication/HA software products:
HP StorageWorks Business Copy XP is a local mirroring product that maintains one or several copies of critical data through a split-mirror process. Asynchronous copy-volume updates ensure that I/O response time for primary applications is not adversely affected. It provides full-copy/clone or snapshot/space-efficient local replication; nine simultaneous clone copies or 32 simultaneous snapshot copies can be maintained concurrently.
HP StorageWorks Continuous Access XP and Continuous Access XP Extension are high-availability data and disaster-recovery solutions that deliver host-independent real-time remote data mirroring between XP disk arrays. Continuous Access XP provides synchronous replication; Continuous Access XP Extension extends its capabilities to include asynchronous replication. A variety of remote connectivity infrastructures can be deployed for array-to-array connections: ESCON, Fibre Channel, ATM, and IP.
HP StorageWorks External Storage XP is a software product that allows the XP to connect to low-cost external storage via a Fibre Channel connection.
The XP can then recognize the LUNs created on the external storage and operate with them as if they were internal to the array.
HP StorageWorks Flex Copy XP allows static point-in-time copies of XP array data to be copied to an external MSA1000 disk array. This facilitates high-speed backup (using the external MSA as an external backup "staging" device) or allows critical data to be "broadcast" out to multiple external MSA arrays for more efficient, low-impact offline data operations.
HP StorageWorks Secure Path XP and HP StorageWorks Auto Path XP provide host-based automatic path failover and dynamic load balancing of array-to-host paths, enhancing overall solution availability and optimizing performance.
HP StorageWorks Cluster Extension XP allows advanced Continuous Access XP remote mirroring to be tightly integrated with data-center-level high-availability server clustering to provide true multi-site server/storage disaster recovery. Available for Sun Solaris (VCS), IBM AIX (HACMP), MS Windows 2000/2003 (MSCS), and Linux (Serviceguard) environments.
HP StorageWorks Fast Recovery Solutions XP tightly integrates Business Copy XP local replication with Windows SQL and Exchange application environments to allow rapid application recovery.
* New/enhanced compared to the XP128/XP1024
108 Business Copy XP: high-speed, real-time data replication
Features:
- real-time local data mirroring within XP disk arrays
- full copies or space-efficient snapshots
- enables a wide range of data-protection, data-recovery, and application-replication solutions
- instant access/restore
- flexible host agent for solution integration
Benefits:
- powerful leverage of critical business data for secondary tasks
- allows multiple mission-critical storage activities simultaneously
HP StorageWorks Business Copy XP is a local mirroring product that maintains one or several copies of critical data through a split-mirror process. Asynchronous copy-volume updates ensure primary application performance is not adversely affected. With XP Next Gen, a space-efficient snapshot local copy is available in addition to the existing full-copy/clone capability.
Problem: several workloads or processes may require the same set of data across multiple applications, but in order to run one application the "competing" application cannot be run and is therefore unavailable. A simple example is the backup of a database: to ensure a coherent backup, all online transactions must be halted, so the database is unavailable during the backup.
Solution: HP Business Copy XP supports multiple copies of the production data (up to nine full-copy or 32 snapshot/space-efficient concurrent copies). Each copy can be used for a different purpose, such as backup, new-application testing, or data-warehouse loading, without any disruption to the primary application, the primary data, or business operations, while also speeding the implementation of new requirements and capabilities.
Additional note: may be used in conjunction with Continuous Access XP to maintain multiple copies of critical data at local and remote sites.
Note: Continuous Access XP is only supported with Data Protector on HP-UX servers. Both HP Data Protector and VERITAS NetBackup have been tightly integrated with HP Business Copy XP to provide Zero Downtime and Split-Mirror Backup solutions for the HP Surestore XP disk arrays on multiple server platforms.
Q: Can the primary volume be resynced from a Business Copy? A: Yes.
Q: Can a full physical primary volume be created from a Business Copy? A: No.
At first release Business Copy will support a maximum of 4,096 pairs; the second release will support a maximum of 8,191 pairs.
[Diagram: primary volume P with up to 9 full copies (C1..C9) and up to 32 snapshots (C1..C32, marked "Future")]
109 Zero-downtime backup with HP Business Copy XP
- no impact on application/database performance
- higher data availability
- fast restore from an online backup
- an automated and fully integrated solution:
  - integrated with Oracle RMAN
  - integrated with SAP brbackup
  - integrated with MS Exchange
[Diagram: Data Protector control station and backup server on the LAN, a production server writing to the P-VOL, a Business Copy XP copy (C1) on the XP, and tape libraries]
110 Business Copy XP: Full Copy vs. Snapshot
There are pros and cons to using either full copies or snapshots.
Full copy:
- because an independent copy is made when a clone is created, activity against that copy does not affect the primary data from which it was created; and unlike snapshots, where a copy-on-write operation must complete before primary/snapshot-shared data can be overwritten, writes against cloned primary data proceed unencumbered by local mirroring activity
- clones are storage-intensive: an equal amount of storage capacity must be provisioned for each primary data source copied
- clones are most useful when performance concerns are paramount (when unfettered access to primary or mirror-image data must be ensured) or when a completely independent local copy image is required (e.g. for recover-from-disk implementations)
Snapshot (space-efficient):
- because only differential data must be additionally committed to disk, snapshots can be very capacity-efficient; for primary or mirror data that is not updated frequently or heavily, incremental capacity consumption can be very low, and because snapshots are logical/metadata constructs, many more concurrent copies can be created and managed
- a read of a snapshot image can impact I/O to the primary dataset if the read data resides on the primary volume, potentially contending with whatever operations are continuing on that volume
- a write to a snapshot image can also impact I/O to the primary dataset if the data to be written resides on the primary volume: that data must first be copied to the snapshot pool to preserve the data integrity of the primary volume
- a write to a primary image can be impacted if snapshots of that image are being maintained
Any data required to maintain the integrity of a snapshot image must be copied to the snapshot pool before the write operation can be performed, so as to preserve the data integrity of the primary volume.
Snapshots are most useful when cost/performance issues are paramount. They are even more practical in low-I/O-activity environments, where snapshot maintenance does not unduly impact primary-data I/O operations.
Snapshot - pros: space-efficient (only the delta data consumes extra space); more concurrent images (32 per primary). Cons: performance impact (snapshot reads touch primary data); virtual snapshot image (the copy shares data with the primary); heavy-write environments reduce efficiency. Best for: low cost/performance needs; low-write/read environments.
Full copy - pros: isolates primary data from copy read/write performance impact; full-speed copy read/write access. Cons: the copy requires the full amount of storage; fewer images (9 per primary). Best for: high performance requirements; heavy primary-write/copy-read datasets; recover-from-copy implementations.
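The copy-on-write behaviour described above, where a snapshot shares blocks with the primary until the primary overwrites them, can be sketched in a few lines. This is a minimal conceptual model, not the XP's implementation; the class and method names are illustrative.

```python
class Volume:
    """Minimal copy-on-write snapshot model."""
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []           # each snapshot: dict of preserved blocks

    def snapshot(self):
        snap = {}                     # initially shares everything with primary
        self.snapshots.append(snap)
        return snap

    def write(self, i, data):
        for snap in self.snapshots:   # copy-on-write: preserve the old block
            if i not in snap:         # first in the snapshot pool, then...
                snap[i] = self.blocks[i]
        self.blocks[i] = data         # ...overwrite the primary

    def read_snapshot(self, snap, i):
        # shared blocks are read from the primary volume, which is why
        # snapshot reads can contend with primary I/O (as noted above)
        return snap.get(i, self.blocks[i])

vol = Volume(["a", "b", "c"])
snap = vol.snapshot()
vol.write(1, "B")
assert vol.blocks[1] == "B" and vol.read_snapshot(snap, 1) == "b"
assert vol.read_snapshot(snap, 0) == "a"   # still shared with the primary
```

The model makes both trade-offs visible: the snapshot consumes extra space only for block 1 (the delta), but reading unmodified blocks goes through the primary volume.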
111 Continuous Access XP and Continuous Access XP Extension
Features:
- real-time multi-site remote mirroring between local and remote XP disk arrays
- synchronous and asynchronous copy modes
- sidefile and journaling options
- efficient use of remote link bandwidth
- flexible host agent for solution integration
Benefits:
- enables advanced business continuity
- reliable and easy to manage
- offers geographically dispersed disaster recovery
High-performance real-time multi-site remote mirroring: HP StorageWorks Continuous Access XP and HP StorageWorks Continuous Access XP Extension are high-availability data and disaster-recovery solutions that deliver host-independent real-time remote data mirroring between XP disk arrays. With seamless integration into a full spectrum of remote-mirroring-based solutions, they can be deployed in anything from data migration to high-availability server clustering, over ESCON or Fibre Channel.
Remote mirroring with Continuous Access XP, the ultimate in remote data mirroring between separate sites:
- maintains a duplicate copy of data between two XP arrays and ensures data integrity between them
- eliminates the storage and data-center cost of "multi-hop" solutions
- OS-independent
- high-availability cluster integration: leverages advanced features such as fast failover/failback for seamless interoperation in high-availability server clustering solutions for HP-UX, Microsoft Windows NT, Sun Solaris, and IBM AIX.
Works with Cluster Extension, Metrocluster, and Continentalclusters for a full long-distance clustering solution.
Synchronous or asynchronous copy modes meet stringent requirements for data currency and system performance while ensuring uncompromised data integrity: highest data currency with synchronous copy; highest performance and distance with asynchronous copy.
Continuous Access XP remote mirroring can be deployed over a wide range of remote link technologies, including pure fiber/DWDM and high-performance WAN/LAN converters for ATM (OC-3), DS-3 (T3), and IP, across town or all the way across the globe. The Continuous Access connection between XP disk arrays can be Fibre Channel (multiple channels may be used) or ESCON. For best performance, Fibre Channel is recommended between XP512/48s and XP128/1024s; ESCON is primarily used to connect older XP256 disk arrays that do not support Fibre Channel Continuous Access.
HP StorageWorks Continuous Access XP Multi-site is a remote replication tool that is the basis of 1:2 and 1:1:1 multi-site replication solutions. Using its unique journal-based replication technology, it enables a single volume to be replicated to two distinct arrays, or cascaded through two distinct arrays (see the multi-site slide). Journal-based replication results in better remote-link bandwidth utilization: replication operations consume less bandwidth, so relatively lower-bandwidth links can be deployed for such a solution.
At first release, 8,191 pairs will be supported on the XP Next Gen, XP1024, and XP128; at second release, 16,383 pairs will be supported and the XP512 added.
[Diagram: "metro mirror", "data", "continental mirror"]
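The synchronous/asynchronous distinction above comes down to when the host receives its acknowledgment. A hedged pseudocode contrast, with illustrative names only (the queue stands in for a sidefile or journal, not HP's actual mechanism):

```python
from collections import deque

pending = deque()                      # stand-in for a sidefile/journal queue

def write_sync(local, remote, key, data):
    """Synchronous copy: the host ack waits for the remote array."""
    local[key] = data
    remote[key] = data                 # remote write completes first...
    return "ack"                       # ...then the host is acknowledged

def write_async(local, key, data):
    """Asynchronous copy: host ack is immediate; the update is queued."""
    local[key] = data
    pending.append((key, data))        # shipped to the remote side later
    return "ack"

def drain(remote):
    while pending:                     # updates are transferred in order
        key, data = pending.popleft()
        remote[key] = data

local, remote = {}, {}
write_async(local, "x", 1)
assert "x" not in remote               # remote lags until the queue drains
drain(remote)
assert remote == {"x": 1}
```

This shows why synchronous copy gives the highest data currency (the remote is never behind) while asynchronous copy gives the highest performance and distance (host latency is independent of the remote link).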
113 Async-CA: Guaranteed Data Sequence
With asynchronous CA, each write is tagged with sequence control information at the near site and transferred asynchronously, possibly out of order. At the far site, writes are sorted by that sequence information (per consistency group) before being applied, so out-of-order transfer is repaired at the far site.
Example: the application issues writes 1, 2, 3, 4; they may arrive at the far site in the order 1, 4, 2, 3. The far site holds I/O #3 until I/O #2 shows up, guaranteeing the data sequence at both sites.
[Diagram: near-site and far-site caches, with the far site sorting Data 1..Data 4 by sequence control information]
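The far-site resequencing described above is essentially a reorder buffer: a write is held until every lower-numbered write in the same consistency group has been applied. A minimal sketch (illustrative names, single consistency group):

```python
def apply_in_sequence(arrivals):
    """arrivals: iterable of (seq, data) pairs in arrival order.
    Returns the data in guaranteed sequence order, holding any write
    whose predecessors have not yet arrived."""
    held, applied, next_seq = {}, [], 1
    for seq, data in arrivals:
        held[seq] = data
        while next_seq in held:        # release any now-contiguous run
            applied.append(held.pop(next_seq))
            next_seq += 1
    return applied

# Writes arrive 1, 4, 2, 3: #4 is held until #2 and #3 show up.
result = apply_in_sequence([(1, "d1"), (4, "d4"), (2, "d2"), (3, "d3")])
assert result == ["d1", "d2", "d3", "d4"]
```

In the real array this holding happens per consistency group, so ordering is guaranteed across all volumes in the group rather than globally.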
114 Async-CA: What of Lost Data in Flight?
Example: writes 1 and 2 make it to the far-site write cache and disk, but write #3 is lost. Write #3 has until a timeout (default 3 minutes; configurable 0 or 3-15 minutes**) to show up. If the timer pops, the link is suspended until operator intervention (a pairresync command). Converters such as CNT perform automatic retransmission, which helps.
XP disk array behaviour must be overlaid with the expected FC driver and LVM behaviour so that thresholds and timeouts are chosen purposefully. For example: if the FC driver receives a response within 10 seconds, all is OK; if not, it waits 2 seconds and retries. LVM watches all of this with a 60-second timeout; if the I/O completes in that time (up to 5 FC driver retries), all is OK; if not, the port is considered down and LVM switches to a pvlink.
While the link is suspended, both sides start using bitmaps: write #4 at the RCU goes into a bitmap, and all new and unsent sidefile writes go into an MCU bitmap. After the cause of the link problem is rectified (and possibly after a data-consistent BC copy is made at the far side), a pairresync command performs a non-ordered bitmap copy to bring the pairs back into synchronization.
The data sequence is guaranteed per CT group until suspension after timeout*.
* All pending correct record sets are completed. ** Under consideration.
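The bitmap-based recovery above can be sketched as follows: once the pair is suspended, each side only marks which tracks changed, and pairresync copies the union of both bitmaps from primary to secondary without any ordering guarantee. Names and the track-level dict model are illustrative, not HP's on-array representation.

```python
def resync(primary, secondary, mcu_bitmap, rcu_bitmap):
    """Non-ordered bitmap copy after a suspend, as described above:
    the union of both sides' dirty-track bitmaps is copied from the
    primary (MCU) side to the secondary (RCU) side."""
    dirty = mcu_bitmap | rcu_bitmap    # tracks touched on either side
    for track in sorted(dirty):        # bulk copy, no I/O-order guarantee
        secondary[track] = primary[track]
    return dirty

# Tracks 1 and 3 diverged while the link was suspended.
primary = {0: "a", 1: "b2", 2: "c", 3: "d2"}
secondary = {0: "a", 1: "b", 2: "c", 3: "d"}
changed = resync(primary, secondary, mcu_bitmap={1}, rcu_bitmap={3})
assert secondary == primary and changed == {1, 3}
```

Note that only the marked tracks move across the link, which is why resync after a long suspension is far cheaper than a full re-copy; the price is that the secondary is not write-order consistent until the resync completes.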
115 Async-CA: Consistency Group Summary
A consistency group (CT group) is the unit of granularity for I/O-ordering assurance: a grouping of LUNs that must be treated the same from a data-consistency (I/O-ordering) perspective. Up to 16 consistency groups per array are supported, and update-sequence consistency is maintained across all volumes belonging to a consistency group.
[Diagram: near-site system (XP256 MCU, RCP) and far-site system (XP256 RCU, LCP) connected by remote copy links, with RAID Manager on each side and P/S volume pairs grouped into a consistency group]
116 Continuous Access XP Journal vs. Continuous Access XP Extension
HP StorageWorks Continuous Access XP Journal can be compared and contrasted with traditional Continuous Access XP Extension (asynchronous) operation:
- CA XP Journal's architecture is uniquely suited to one-to-many or one-to-one-to-one configurations; CA XP Async can only facilitate one-to-one configurations.
- With CA XP Journal, update data (the journal) is stored on ordinary disk, while CA XP Async queues change data in array cache. Journal replication can therefore be more tolerant of high or fluctuating workloads in which the tracking of update data "falls behind" primary data operations: much more capacity, in the form of disk, can be provisioned to store such data, forming a more resilient buffer for remote copy operations.
- With CA XP Journal, the journaled data consists of track-level change information; when the remote site "reads" changes from the primary-site journal, multiple track-level changes can be aggregated and retrieved together. With CA XP Extension (Async), transactions must be replicated across the remote link on an I/O-by-I/O basis; depending on the application/database environment, such an "atomic"-level replication scheme can be very inefficient.
CA XP Journal: update data (journal) is stored on disk; the remote side initiates updates; enables logical 1:2 and 1:1:1 configurations. Strengths: very efficient updates, less bandwidth required; high-capacity, lower-cost disk-based update-data tracking. Best for: multi-site implementations.
CA XP Extension (using sidefiles): update data (sidefile) is stored in cache; the primary side initiates updates; 1:1 configurations only. Strength: potential for better data currency. Best for: 1:1 implementations.
[Diagram: journal-based remote copy vs. sidefile-based remote copy data flows between primary, remote, and second-remote arrays]
117 Multi-site data replication
(Note: this slide should be presented in animated form, as it builds up site by site.)
Continuous Access XP Journal can enable an optimized cascaded multi-site disaster-tolerant solution. Management overhead is reduced because the split occurs via journaling, not Business Copy. Two architectures are possible: multi-target and cascading.
With a traditional cascaded replication solution, an intermediate local image must be created on the hot-standby-site array in order to ensure ongoing data consistency at all times among all remote images of the primary data. Furthermore, these solutions compromise mirror-data currency at the contingency site, because replication to that site does not occur in real time with respect to the primary site.
In a traditional cascaded configuration, a synchronous copy image is created on the hot-standby-site array using Continuous Access XP. From that copy image, a local copy is paired/created on the hot-standby-site array using Business Copy XP. On an ongoing basis, the local Business Copy image does not represent a logically consistent image of its parent volume; to obtain a consistent image, the local copy must be "split" from the parent, creating a point-in-time image of the parent. That local copy image can then be copied asynchronously to the contingency-site XP array using Continuous Access XP Extension (Async). Because the hot-standby-site local copy represents a point-in-time image, the contingency-site remote copy can, at best, only be as current as that image; in the meantime, the hot-standby-site local copy falls behind the primary site/volume, and the data currency of the contingency-site remote mirror is compromised.
This operation is highly coordinative in nature, with the local image at the hot standby site having to be continuously split and resynced with its parent volume. With HP StorageWorks CA XP Journal cascaded solutions, all replication activities can happen in real time, without compromising data currency or posing a process/solution risk.
[Diagram: primary site, hot standby site (<100 km, Continuous Access XP synchronous mirror over DWDM), and contingency site (unlimited distance, Continuous Access XP Extension asynchronous mirror over WAN), in both journal-based multi-target and cascading variants]
118 Availability management software, wide area
When disaster strikes, it can mean much more than a temporary loss of computing power. Work delays, data degradation, and data loss can quickly translate into lost revenue, lost profits, and lost customers. HP provides a range of solutions that address varying degrees of availability and scalability:
- internal mirrors: Business Copy XP
- path management: Auto Path XP (AIX, Solaris), Secure Path XP (HP-UX, Windows, Linux)
- clustering: local cluster solutions
- remote mirrors: Continuous Access XP
- long-distance clusters: Cluster Extension XP
- worldwide (wide-area) clusters: Continentalclusters
[Diagram: availability vs. scalability, from Business Copy XP and path management through clustering and Continuous Access XP up to Cluster Extension XP]
119 Cluster Extension XP
Features:
- extends local cluster solutions up to the distance limits of the software
- rapid, automatic application recovery
- integrates with multi-host environments: Serviceguard for Linux, VERITAS Cluster Server on Solaris, HACMP on AIX, Microsoft Cluster service on Windows
Benefits:
- enhances overall solution availability
- ensures continued operation in the event of metropolitan-wide disasters
Seamless integration of remote mirroring with industry-leading heterogeneous server clustering: HP Cluster Extension XP keeps your critical applications running within minutes or seconds after an event. It offers disaster tolerance against application downtime from fault, failure, or site disaster by extending clusters between data centers, and works seamlessly with your open-system clustering software and XP storage systems to automate failover and failback between sites.
What problems does it solve?
- heterogeneous disaster-recovery protection over MAN distances (up to 10 kilometres)
- the lack of integrated, fully tested, turnkey heterogeneous disaster-recovery solutions
- simplifies installation and reduces the time required to implement disaster-recovery solutions
- protection against Microsoft quorum-disk fatal errors
Detailed features:
- extends leading high-availability server clustering solutions over geographically dispersed data centers up to 10 km
- integrates XP remote mirroring with cluster monitoring and failover/failback operations
- continuous mirroring-pair synchronization monitoring and recovery
- cluster server solutions for Solaris, AIX, and Microsoft Windows 2000, Advanced Server, and Datacenter Server; tightly integrated with Serviceguard for HP-UX and Linux
- supports DWDM, ATM, IP, and WAN connectivity
- reliable and efficient: ensures fast failover and recovery with extensive condition checking
- scalable: extends a single cluster solution over metropolitan or global distances
- manageable: detects failures and automatically manages recovery
- flexible: operates with Windows,
Linux, Solaris, and AIX for compatibility with your system environment
- available: utilizes Continuous Access XP mirroring for high-performance synchronous and long-distance asynchronous solutions
Customer benefits:
- consolidated disaster recovery saves money and improves operational efficiency
- rapid disaster-recovery implementation protects heterogeneous resources; information assets are safe; one point of contact for all support
- stress-free automatic failover, fast failback, and data replication; requires no user intervention
HP Cluster Extension XP functions as middleware between the HP Continuous Access XP mirror-control tools and your cluster software, whether that is VCS on Sun Solaris, HACMP on IBM AIX, HP Serviceguard for Linux, or Microsoft Cluster Service for Windows 2000 with HP's Quorum Filter Service. HP Cluster Extension XP runs on Solaris, AIX, Linux, or Windows. A specific design should be developed by HP technical consultants to ensure satisfaction with a particular configuration.
[Diagram: a cluster spanning Datacenter A and Datacenter B, with Cluster Extension XP coordinating a Continuous Access XP data mirror between two XP arrays]
121 External Storage XP
Connect low-cost and/or legacy external storage while utilizing the superior functionality of the Next Generation XP array:
- accessed as a full-privilege internal XP Next Gen LUN, with no restrictions; use it as a regular XP Next Gen LUN, or as part of a Flex Copy XP, Business Copy XP, or Continuous Access XP pair, with full solutions support
- facilitates data migration
- reduces costs by using less expensive secondary storage
External Storage XP is a software product. External storage devices must come from a list of tested and supported devices: at first release, the XP512, XP48, XP1024, XP128, and MSA1000; the XP256 is targeted for support at the second release; some EMC and IBM arrays will be supported for data migration only (details to follow). Any port from any CHIP pair installed in any slot (CHIP or ACP) can be connected to external storage.
At first release, external storage capacity is limited to 16 PB, and Business Copy and Auto LUN are supported. At second release, the limit rises to 32 PB and LUN Security Extension is supported. At a future release, no-ACP configurations, Continuous Access, and journaling will be supported.
[Diagram: hosts connecting through a SAN to the XP Next Gen's CHIPs, with MSA1000 external storage presented as an internal LUN]
122 Included Support and Services
- site preparation
- array installation and start-up
- Proactive 24 support, 1 year
- reactive hardware support, 24x7, 2 years
- software support, 1 year (included with the software title)
A number of services available from HP are tailored for XP disk arrays:
- LUN design and implementation: expert configuration assistance for a faster time to production deployment
- HP Continuous Access XP implementation: rapid deployment of high-speed, host-independent data mirroring between local or remote XP disk arrays
- HP Business Copy implementation for the XP: speeds the deployment of mirroring configurations without concern for complexity or operational impact
- high-availability storage assessment: exposes potential risks to business continuity and minimizes costly production outages by providing recommendations to avoid such situations
- performance analysis: provides a greater return on storage hardware investment by enhancing the throughput and usability of the device
- performance tuning and optimization: does the same continually throughout a period of a year
- data migration services: provide a smooth transition to an HP storage platform for open-systems, mainframe, and mixed environments
- SAN solution service (available Q3 2003): provides the services needed for SAN implementation or integration with an XP disk array
Accelerate your time to production with efficient installation of your new XP disk array by HP-certified Storage Technical Specialists.
123 Mission Critical Support (Proactive 24)
- Environmental services
- Customer support teams
- Account support plan
- Activity reviews
- Reactive services:
  - 24x7 HW support, 4 hr. response
  - 24x7 SW tech support, 2 hr. response
  - Escalation management
- Technology-focused proactive services:
  - Patch analysis and management
  - SW updates
  - Annual system healthcheck
  - Storage array preventive maintenance
  - SAN supportability assessment
  - Network software and firmware updates
HP excels at mission-critical support, not only for enterprise storage (XP and EVA), but for your entire IT environment. Here is a summary of the services we provide for mission-critical support. (Review each bullet briefly.)
Care Pack Critical support is available for the XP today with a 100% data availability guarantee that even covers loss of access to data.
Note to presenter: Proactive 24 support for the EVA is planned to be available in the second quarter of 2003.
(Diagram: ongoing support for a mission-critical, multivendor environment; HP Critical Support adds a data availability guarantee)
124 Command View XP
Web-based device management platform for XP disk arrays.
Features:
- Web-based, multi-array management framework
- Centralized array administration
- Advanced Fibre Channel I/O management
- Supports XP Next Gen and legacy XP arrays from a single management station
Benefits:
- Common management across HP storage devices
- Graphical visualizations speed up problem resolution
- Multiple security layers ensure information is safe and secure
- Manage storage anytime, from anywhere
HP StorageWorks Command View XP provides the common user interface for all XP disk array management applications. You and your staff only need to learn a single user interface, reducing the learning curve and increasing usage of the tool. It's web-based, so you can monitor your storage resources any time, from anywhere with a web browser. Using the web, your storage expert can participate in problem resolution across multiple installations or easily train junior staff. The tabular user interface in HP Command View XP gives you access to information quickly, and the graphical representations of your storage resources reduce the amount of time your staff will spend troubleshooting problems.
The Command View user interface is being used by enterprise, modular, tape and virtualization products. In the future, HP will be merging these solutions to enable management of all these products and services from a single management console.
A clear differentiator contained within Command View is Path Connectivity. This module provides customers with a series of reports detailing the configurations, connections and paths used by hosts and switches into the XP array. It eliminates the need for customers to maintain their own binder of printed reports that they have to manually update; Command View updates the configuration automatically, eliminating this tedious task.
Path Connectivity also provides diagnostic capabilities for the Fibre Channel connection between hosts and the XP. This feature will identify if a particular connection has begun to degrade and then provide diagnostic information to speed up the troubleshooting process by identifying potential causes.
Command View XP runs on a Windows 2000 or Windows 2003 management station.
125 Command View XP – cross-platform support
- Seamlessly manage XP Next Gen arrays alongside the complete family of legacy XP systems
- Simplify storage management
- Reduce storage management complexity and cost
- Accelerate storage deployment
Q: Can Command View XP run on any Java-enabled platform, or is it restricted to certain operating systems?
A: Command View XP runs on a Windows 2000 / Windows 2003 management station only.
Q: How can redundant Command View XP workstations be configured?
A: High-availability Command View XP has been discussed, but not committed to. Command View XP as a single point of failure does not impact data availability.
(Diagram: web clients connecting to a Command View management station, which manages XP512/XP48, XP1024/XP128 and XP Next Gen arrays)
126 LUN Configuration & Security Manager XP
Features:
- Assignment of LUNs via drag 'n drop
- Assignment of multiple LUNs with a single click
- Checks each I/O for valid security
- LUN and WWN group control
Benefits:
- Flexibility to configure the array to meet changing capacity needs
- Permissions for data access can be changed with no downtime
- Every I/O is checked to ensure and maintain secure data access
Easy-to-use menus for defining array groups, creating logical volumes and configuring ports.
HP StorageWorks LUN Configuration Manager and Security Manager XP is a new combination product that combines a configuration and capacity expansion tuning tool with the best security available. LUN Configuration Manager contains three applications in a single easy-to-use tool that launches directly from the HP Command View XP user interface. LUN Management lets you add or delete paths, set the host mode, set and reset the command device, and configure Fibre Channel ports. LU Size Expansion (LUSE) lets you create volumes that are larger than standard volumes. Volume Size Configuration (VSC) allows configuration of custom-size volumes that are smaller than normal volumes, improving data access. Security Manager is your LUN security watchdog that gives you continuous, real-time, I/O-level data access control of your XP array. It allows you to restrict the availability of a single LUN or group of LUNs to a specified host or group of hosts. Secured LUNs become inaccessible to any other hosts connected to the array. Unauthorized hosts no longer have access to data that is off-limits.
HP LUN Configuration Manager XP allows a system administrator to set up and configure LUNs and assign them to ports. Additionally, there are two programs that allow LUNs to be created which are smaller or larger than the available open emulation types.
For example, many smaller LUNs can be combined to form a single large LUN using LUN Size Expansion (LUSE), which is important in environments where there are restrictions on the total number of LUNs supported.
Custom-size volumes are smaller than the standard emulation types and can be easily created using Open Systems Custom Volume Size (OCVS). This is important when LUNs may need to be downsized in capacity to avoid wasting disk space. A good example is when there is a need to create a command device for an XP application.
This sample screen shot shows part of the process involving LUN setup, including a table mapping SCSI ID, LUN number, volume and emulation type.
HP Surestore Secure Manager XP restricts access to LUNs or groups of LUNs to a specific host or group of hosts by checking every I/O. Permissions to access data can be changed on the fly with no downtime.
- Create LUN groups and WWN groups to simplify the configuration and management of data.
- Integrated into HP Command View and accessible from the same management station.
- Security is enabled at the port level for flexible deployment, with all host server environments supported by the XP disk array family.
- New drag-and-drop capability integrated into Command View.
NOTE: HDS/Sun will have similar capability. We expect them to expose this functionality through HDS's HiCommand device manager.
Note: XP128/1024 LUN Configuration & Secure Manager functionality is bundled together as a single product. For legacy systems, LUN Configuration Manager and Secure Manager XP are used by customers. Both products provide the same functionality level; however, it is bundled slightly differently for the XP128/1024.
Under volume management, customers can group LUNs even if they are not concatenated, providing more flexibility: customers do not need a free list of LUNs, they can just pick and choose available LUNs.
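The LUSE idea described above (concatenating several small LUNs into one large logical volume) can be sketched as a simple address map. This is an illustrative model only; the class and method names are invented and do not reflect the array firmware's actual implementation:

```python
class LuseVolume:
    """Sketch of LUN Size Expansion: small LUNs concatenated into one
    large logical volume. A block address on the expanded volume is
    mapped back to (member LUN index, block within that member)."""

    def __init__(self, member_sizes):
        # member_sizes: number of blocks contributed by each member LUN
        self.member_sizes = member_sizes
        self.total_blocks = sum(member_sizes)

    def locate(self, block):
        """Map a logical block on the LUSE volume to (member, offset)."""
        if not 0 <= block < self.total_blocks:
            raise ValueError("block out of range")
        for idx, size in enumerate(self.member_sizes):
            if block < size:
                return idx, block
            block -= size

# Three 1000-block LUNs combined into one 3000-block volume
vol = LuseVolume([1000, 1000, 1000])
assert vol.total_blocks == 3000
assert vol.locate(0) == (0, 0)
assert vol.locate(2500) == (2, 500)
```

The host sees one large LUN; only the mapping layer knows the constituent members, which is why LUSE helps where the total number of addressable LUNs is limited.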
127 Performance Advisor XP
Collect, monitor, and display real-time performance.
Features:
- Real-time data gathering and analysis
- System view of all resources
- Flexible event notification and a large historical repository
Benefits:
- Identify performance bottlenecks before they impact business
- Helps maintain maximum array performance
- Precise integration eliminates "blame storming" across systems, database and storage administrators
HP StorageWorks Performance Advisor XP is a web-based application providing real-time collection and monitoring of your XP disk arrays. HP Performance Advisor XP collects data on CHIP and ACP utilization, and on sequential and random reads/writes and I/O rates at each LDEV. No need to worry that your XP resources are not performing up to expectations: its performance alarm and notification capabilities will keep you ahead of any problems.
- Collects data on a real-time basis, making it easy to see up-to-date and current performance patterns. (Some competing products collect data only on a sporadic basis, several times a day, making their data stale and dated.)
- Can be accessed by a tab on the Command View (CV) screen. Clicking the Performance Advisor tab automatically invokes the PA URL and starts the PA application, saving administrator time and effort. Alternatively, Performance Advisor XP can run as a completely standalone application with its own GUI.
- Performance Advisor is installed on a PC-based management station (it can be the same station that CV is installed on), and multiple arrays and multiple hosts can be examined from the same station. Select one array and look at all the hosts attached to it, or select one host and look at all the array components. This makes it easy to see which hosts are imposing the heaviest workload on the XP disk array.
- Alert thresholds can be set to send alarms when a threshold has been reached, warning you that attention may be needed or that workloads are approaching a critical stage where you may need to take action to head off a rash of user complaints.
- VantagePoint Operations (the OpenView performance monitor) can access the same performance data that is collected by Performance Advisor.
- Precise (a third-party Oracle performance optimization tool) can access the same data generated by Performance Advisor and solve database processing bottlenecks.
- Data from up to 8,000 LUNs can be displayed very quickly in one PA window. It takes only two minutes to display this large amount of data, even though it is unlikely you would want to view that many LUNs at one time.
- Up to 2 GB of historical performance data can be stored, making it easy to analyze trends (a leading competitor stores only 500 MB).
Additional notes:
- Flexibility to view "many-to-many" hosts, arrays, LUNs, or components
- Works with Windows 2000/NT, HP-UX, Sun, IBM AIX, Linux (mainframes soon)
- Event notification to pager, PC, or VPO console
- Web-based integration with HP Command View XP
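The threshold-alarm behavior described above can be sketched in a few lines. The metric names and limits below are invented for illustration; they are not Performance Advisor's actual counters or API:

```python
def check_thresholds(samples, limits):
    """Return alarm messages for every metric whose latest sample
    exceeds its configured threshold (a sketch of the alert logic
    described above; metric names and limits are illustrative)."""
    alarms = []
    for metric, value in samples.items():
        limit = limits.get(metric)
        if limit is not None and value > limit:
            alarms.append(f"{metric}: {value} exceeds threshold {limit}")
    return alarms

# Hypothetical utilization samples against administrator-set limits
samples = {"chip_util_pct": 92, "acp_util_pct": 41, "ldev_io_per_s": 1800}
limits = {"chip_util_pct": 85, "acp_util_pct": 85}
print(check_thresholds(samples, limits))
# -> ['chip_util_pct: 92 exceeds threshold 85']
```

A real monitor would run this over each polling interval and route the resulting messages to a pager, PC, or VPO console as the slide describes.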
128 Cache LUN XP
Boost performance by redirecting I/O requests to the cache.
Features:
- Stores critical data in the XP array cache
- User configurable, easy to use
- Fast file access and data transfer
- Scalable
Benefits:
- Speeds access to mission-critical data
- Provides access to critical data such as database indices in nanoseconds rather than milliseconds
HP StorageWorks Cache LUN XP lets you reserve areas of cache memory on your HP StorageWorks XP disk array to store frequently accessed information. You'll see improved file access times and faster data transfers. Cache-resident data is accessed in nanoseconds instead of milliseconds for both read and write I/O operations.
HP Cache LUN redirects I/O requests from the XP array disk drives to the XP array cache memory to boost performance for frequently accessed data. Simple to implement through HP Command View XP integration, and transparent to array operation, performance gains are immediate. You can cache up to 1,024 volumes for the ultimate in efficiency.
HP Cache LUN is the answer for LUNs that are heavily accessed, like index files or database logs. When you want to use the cache for something else, destage the data back to its original location. Once the "hot" data is cache resident, it is accessed in nanoseconds rather than milliseconds; performance improvements occur immediately.
Increase cache memory before running HP Cache LUN XP to avoid degradation when accessing non-cached data.
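The cache-resident LUN concept can be illustrated with a toy model: reads to "pinned" volumes are served from memory after the first access, while other volumes always go to disk. This is a hypothetical sketch, not the XP firmware's data path:

```python
class CacheLunArray:
    """Toy model of the Cache LUN idea: volumes pinned in cache keep
    their data resident after the first read; all other reads fall
    through to disk. Names and behavior are illustrative only."""

    def __init__(self, pinned_volumes):
        self.pinned = set(pinned_volumes)
        self.cache = {}          # (volume, block) -> data
        self.disk_reads = 0

    def read(self, volume, block):
        key = (volume, block)
        if key in self.cache:
            return self.cache[key]               # cache hit: no disk I/O
        data = f"data@{volume}:{block}"          # simulate a disk read
        self.disk_reads += 1
        if volume in self.pinned:
            self.cache[key] = data               # keep pinned data resident
        return data

arr = CacheLunArray(pinned_volumes={"db_index"})
arr.read("db_index", 7)      # first access goes to disk
arr.read("db_index", 7)      # repeat access is served from cache
assert arr.disk_reads == 1
```

The slide's caveat maps directly onto this model: the more volumes you pin, the less cache remains for ordinary I/O, so cache memory should be increased before enabling the feature.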
129 Performance Control XP
Allocates performance bandwidth to specific servers or applications.
Features:
- Control consumption of array performance by IO/s or MB/s
- Precise array port-level settings
- Reports and online graphing
Benefits:
- Allows customers to align business priorities with the availability of array resources
- Effectively manage mission-critical and secondary applications
- More efficient use of array bandwidth
- Enables service-level-oriented deployment of array performance
HP StorageWorks Performance Control XP lets you allocate XP disk array performance resources to hosts, so you can align those resources with computing priorities, maximize storage ROI, and deploy XP disk arrays in service-provider solutions. PCXP is web-based software that plugs into Command View XP and allocates XP array performance bandwidth to specific servers or applications, based on user-defined policies.
What problems does it solve?
- Lack of complete systems performance management solutions and flexible interfaces
- Lack of tools that provide granular control of resources, or groups of resources, which have an impact on overall performance
- Keeping things simple yet powerful enough to really have an overall effect on performance
Customer benefits:
- The entire configuration is better understood, relationships are clear, and policy decisions are more accurate and arrived at more quickly
- Applications automatically get the bandwidth they need, when they need it
- Simplified performance management and reporting
- Information is secure
- Performance policies can be assigned by HBA, WWN or user-assigned nicknames
- Schedule by hour or by day
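Capping a host's consumption by IO/s, as described above, is commonly implemented with a token-bucket scheme. The sketch below is an illustrative stand-in for the port-level policy mechanism, not HP's actual algorithm:

```python
class IoRateCap:
    """Token-bucket sketch of limiting a host to a fixed IO/s budget,
    in the spirit of the port-level policies described above
    (illustrative only, not the array's real implementation)."""

    def __init__(self, ios_per_sec):
        self.rate = ios_per_sec
        self.tokens = float(ios_per_sec)   # start with a full bucket
        self.last = 0.0

    def allow(self, now):
        """Return True if an I/O may proceed at time `now` (seconds)."""
        # Refill tokens for the elapsed time, capped at one second's worth
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False            # over budget: the I/O is delayed or rejected

cap = IoRateCap(ios_per_sec=2)
results = [cap.allow(t) for t in (0.0, 0.1, 0.2, 1.2)]
# the third request arrives too soon and is throttled
assert results == [True, True, False, True]
```

A per-port or per-WWN table of such buckets would let mission-critical hosts keep their budget while secondary applications are throttled, which is the scheduling behavior the slide claims.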
130 Auto LUN XP
Non-disruptive volume migration for performance optimization.
Features:
- Optimizes storage resources
- Moves high-priority data to underutilized volumes
- Identifies volumes stressed under high I/O workload
- Creates a volume migration plan
- Migrates data across different disks or RAID levels
Benefits:
- Improves array performance and reduces management costs by automatically placing data in the highest-performing storage (cache, RAID 1, RAID 5)
HP StorageWorks Auto LUN XP provides automatic monitoring and load balancing of your HP StorageWorks resources. HP Auto LUN XP compares disk array LUN utilization to limits set by you. It even proposes a data and volume migration plan so that hot spots and other potential disk bottlenecks are avoided, providing optimal, load-balanced performance.
HP Auto LUN XP analyzes the current array operation and uses its knowledge of the data layout on the array to suggest how best to relocate data on the XP disk array to improve performance. The administrator then has the option to say "yes" or "no" before any data is moved according to the plan generated by Auto LUN. It works in several discrete phases, reports on its findings, and generates an action plan, but doesn't move anything on the array until you give it your command.
Benefits include:
- Keeps highly accessed data on higher-performance disk drive groups.
- Migrates across different disks or RAID levels for better performance. Shorter random I/Os (2k-4k) may be better suited for storage on smaller, faster disk spindles; under certain circumstances, RAID 1 may be faster than RAID 5, etc.
- Simple, easy-to-use operation with easy-to-follow menus.
- Up to 90 days of history are stored and usable reports are generated. Data can be exported to Excel for further graphing and charting.
- Moving volumes is easy: the data on the source volume is copied to the target volume, then host access is transferred to the target volume to complete the migration.
At first release, 4,096 pairs are supported; at second release, 8,191 pairs are supported.
(Diagram: policy-based LUN migration from high-capacity 73 GB disks to high-performance 36 GB disks)
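The planning phase described above (flag stressed volumes, propose targets, move nothing until approved) can be sketched as a simple heuristic. The pairing rule and names below are invented for illustration; Auto LUN's actual planner is not public:

```python
def migration_plan(volume_util, busy_threshold=80):
    """Sketch of the Auto LUN planning phase: flag volumes whose
    utilization exceeds a threshold and pair each, hottest first,
    with the least-utilized volume as its migration target.
    Illustrative heuristic only, not HP's algorithm."""
    hot = [v for v, u in volume_util.items() if u > busy_threshold]
    cold = sorted((v for v in volume_util if v not in hot),
                  key=lambda v: volume_util[v])
    return list(zip(sorted(hot, key=lambda v: -volume_util[v]), cold))

# Hypothetical utilization percentages per LDEV
util = {"ldev_00": 95, "ldev_01": 20, "ldev_02": 88, "ldev_03": 5}
plan = migration_plan(util)
# proposed moves: hottest source paired with coldest target
assert plan == [("ldev_00", "ldev_03"), ("ldev_02", "ldev_01")]
```

As on the slide, the output is only a proposal: an administrator would review the pairs and approve them before any copy-then-switch migration runs.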
131 Secure Path XP / Auto Path XP
Automatic protection against HBA and I/O path failures, with load balancing.
Features:
- Automatic and dynamic I/O path failover and load balancing
- Automatic error detection
- Heterogeneous support: NT, W2K, HP-UX, Linux, AIX
- Path failover and load balancing for MSCS NT and Win2K via Persistent Reservation
Benefits:
- Eliminates single points of I/O failure
- Self-managing automatic error detection and load balancing
- The same software interface across XP and virtual arrays simplifies training and administration
HP StorageWorks Auto Path XP helps data flow smoothly around bottlenecks and failed I/O paths. Dynamic load balancing shares the load between available paths, avoiding excessive queues on any one path and smoothing out the transaction workload.
What is it? Host-based software that provides automatic protection against HBA path failures between the server and storage.
What problems does it solve?
- Complete protection of all HBA and I/O data paths; no single point of failure
- Lack of heterogeneous HBA and I/O protection solutions
- Full integration with the vendor's management software and third-party software
- Load balancing after a failure
Customer benefits:
- No administrator intervention in the event of a failure; training is simplified
- Full integration with HP XP management software; simplifies training and support; common interface
- Heterogeneous support for HP-UX, Windows NT/2000, IBM AIX and Linux
- Performance is maintained despite an I/O path failure
HP-developed Auto Path XP products are available for Windows NT/2000 and HP-UX. We are currently working on HP Auto Path XP for Linux and on future HP-UX releases. The HP-developed products work with the XP disk arrays as well as the VA family of arrays, thus providing flexibility for SANs that incorporate both products. In addition, we sell an HDS-developed Auto Path XP product for AIX; this one works only with the XP disk arrays.
Note: We also have an NT product from Hitachi, which was placed on blind CPL in June to encourage orders of the HP-developed NT product. This product also supports SCSI failover. Sun Solaris support is provided only on a business-need basis through the NSSO big-deal escalation process.
To date, Windows NT and 2000 have supported a limited "SCSI Reservation Protocol" that is not fully optimized for clustered servers using Fibre Channel storage. The SCSI reservation must be "re-distributed" every time an I/O attempt is not properly executed, causing performance problems, specifically with I/O path load balancing. HP is modifying the XP firmware and HP's I/O path failover software, HP Auto Path XP, to support the "Persistent Reservation Protocol."
Persistent Reservations support is built into the XP128 and XP1024 firmware and will be built into all HP Auto Path XP software, allowing HP's I/O path failover solution to support dynamic load balancing in MSCS Windows NT and 2000 environments as well as Serviceguard for HP-UX and Serviceguard for Linux environments. HP is the first in the industry to have this capability.
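The core multipathing behavior described above (rotate I/O across healthy paths, skip paths where errors were detected) can be sketched as follows. This is a toy model with invented path names, not the Auto Path XP driver:

```python
class MultiPath:
    """Round-robin load balancing with failover across I/O paths,
    a toy model of the Auto Path behavior described above."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()
        self.next = 0

    def fail(self, path):
        self.failed.add(path)    # automatic error detection flags this path

    def pick(self):
        """Return the next healthy path, skipping failed ones."""
        for _ in range(len(self.paths)):
            path = self.paths[self.next % len(self.paths)]
            self.next += 1
            if path not in self.failed:
                return path
        raise RuntimeError("no I/O paths available")

mp = MultiPath(["hba0", "hba1"])
assert [mp.pick() for _ in range(4)] == ["hba0", "hba1", "hba0", "hba1"]
mp.fail("hba1")                  # path failure: traffic shifts to hba0
assert [mp.pick() for _ in range(2)] == ["hba0", "hba0"]
```

This shows both claims from the slide in miniature: load is balanced while all paths are healthy, and after a failure I/O continues without administrator intervention, at the cost of concentrating traffic on the surviving paths.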
132 Footprint: Incredible Density
The XP Next Gen can store data at a density of up to 35 gigabytes/inch² (... GB disks in 4,849 square inches of cabinet floor space).
133 Applications of database servers
- Operations of institutions serving very large numbers of customers, where huge databases must be searched (e.g. tax administration, bank account management, billing of electricity, gas and telecommunications customers, and many more)
- Computer animation
- Computation and visualization in three-dimensional CAD/CAM problems and in fluid dynamics
- Banking and finance: analysis of markets and stock-exchange trends
- Geophysics: e.g. processing of three-dimensional seismic data
- Chemical processes: data acquisition, process control, visualization, computational chemistry
- Power industry: modeling and managing energy distribution
- Electronics: e.g. semiconductor design and simulation
138 Implementing RAS requirements
Redundancy, for example in:
- SUN E6500, E10000
- IBM RS6K M80, S80
- HP N4000, V2500, SuperDome
Elements (using HP's entry-class L-series servers as an example):
- Operation monitored by a separate service processor
- ECC memory (RAM) and ECC cache
- Dynamic deallocation of processors and memory modules
- Separate bus subsystems for the controllers of mirrored disks
- Hot-swap replacement and redundancy of power supplies, fans, ...
- 3-year warranty
139 SUN E10000
(Block diagram: 16 system boards, each with 4 x UltraSPARC II processors, memory, and SBus interfaces; 4 address buses; a 16x16 crossbar switch for data with a throughput of 12.8 GB/s)
140 HP V2500
(Block diagram: 8 system boards, each with 4 x PA-8500 processors, main memory, and PCI interfaces; an 8x8 crossbar switch with a throughput of 15.3 GB/s; X/Y SCA interconnect links with a throughput of 3.84/3.84 GB/s)