RecoverPoint for Virtual Machines (RP4VMs) 5.2 Is Here, Here’s What’s New

We have just released version 5.2 of RP4VMs; here’s a summary of the changes.


Auto Registration of Datastores


A new feature is the automatic registration of datastores for journal VMDKs. During the cluster installation, up to 15 datastores are automatically registered: the 15 shared datastores with the most free space on the ESX cluster where the vRPAs are installed. These can then be used for journal placement. A datastore can later be unregistered, or the auto-registration checkbox can be cleared.
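To give a feel for what the installer is doing under the hood, here is a minimal sketch (not RecoverPoint code; the vCenter address, credentials, and cluster name are placeholders) of selecting the 15 shared datastores with the most free space using pyVmomi:

    # Sketch: list the 15 shared datastores with the most free space
    # on a given ESX cluster. Connection details are assumptions.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the target cluster (name is a placeholder; flat folder layout assumed).
    cluster = next(c for dc in content.rootFolder.childEntity
                   for c in dc.hostFolder.childEntity if c.name == "vRPA-Cluster")

    # Keep only datastores mounted on every host in the cluster ("shared").
    shared = [ds for ds in cluster.datastore
              if all(h in [mount.key for mount in ds.host] for h in cluster.host)]

    # Sort by free space, descending, and take the top 15 candidates.
    for ds in sorted(shared, key=lambda d: d.summary.freeSpace, reverse=True)[:15]:
        print(ds.name, ds.summary.freeSpace // 2**30, "GiB free")

    Disconnect(si)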

ESXi Cluster Auto Registration


In previous versions, ESX cluster registration was a protection prerequisite: if the cluster hosting the VM to be protected was not registered, the Protect VMs wizard could not proceed.


The auto registration feature in 5.2 streamlines the protection process by removing the need to register the ESX cluster prior to launching the Protect VMs wizard.

Automated Provisioning of VMkernel Ports

A VMkernel port is required for every ESXi host that contains a Production VM, a Copy VM, or a vRPA. In previous versions, the VMkernel ports had to be created manually. This communication must be in place to run the Protect VMs wizard.


Starting with 5.2, VMkernel ports can be created using the RecoverPoint for VMs plug-in, during the ESXi cluster registration process. The splitter uses specific VMkernel ports for communication, and VMkernel ports created by RecoverPoint for VMs are given a unique label.


During the Protect VMs wizard, if the VM is located on an ESXi server without a VMkernel port, a warning is displayed. This warning only appears if the ESXi cluster is being auto-registered.
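For reference, this is roughly the step that had to be done by hand before 5.2. Here is a minimal sketch (illustrative only; it assumes a host object obtained via pyVmomi as in the earlier snippet, an existing standard-switch port group named "RP4VMs", and placeholder IP settings) of creating a VMkernel port programmatically:

    # Sketch: create a VMkernel port on a standard vSwitch port group.
    from pyVmomi import vim

    def add_vmkernel_port(host, portgroup="RP4VMs",
                          ip="192.168.10.21", mask="255.255.255.0"):
        spec = vim.host.VirtualNic.Specification()
        spec.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=mask)
        # Returns the new device name, e.g. "vmk2".
        return host.configManager.networkSystem.AddVirtualNic(portgroup, spec)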

New Software Features for RecoverPoint for VMs 5.2

The Journal I/O and Repository Access Filter (JIRAF) is a new feature in RecoverPoint for VMs 5.2 that is installed and enabled automatically.
Here are some key points:

  • For Repository and Journal access only
  • Enhances behavior with VSANs
  • Removes previous Journal limit of 255 GB
  • Improves VMDK provisioning time
  • Requires ESX 6.0U2 as a minimum
  • JIRAF is a new software VIB
  • Automatically installed for the ESX Cluster containing vRPAs
  • Automatically deployed during cluster installation


Here are the key points for the reduced Production Journal size in RecoverPoint for VMs 5.2:

  • Previous versions required a 10 GB minimum
  • 5.2 allows a 3 GB minimum, for the Production Journal only
  • No impact on failover


Improvements to vCenter Server integration:

  • Prior to 5.2, all vCenter Servers needed to be registered
  • Now, only vCenter Servers hosting vRPAs, VMs to be protected, or Copy VMs need to be registered
  • RecoverPoint for VMs supports up to five vCenter Servers in Linked Mode (VMware itself supports ten configured in Linked Mode)
  • Only vCenter Servers hosting vRPAs, Production VMs, or Copies are licensed

Security enhancements

  • During installation, the user is required to replace the default admin user password with a unique password. This password also serves as the password for the root user across all vRPAs in the system.
  • The predefined admin user is granted all permissions for managing the system, including SE. For new RecoverPoint for VMs installs, the only other predefined user is boxmgmt (which is deprecated). For upgrades from RecoverPoint for VMs 5.1.1.4 or later, the other predefined users remain as they were before the upgrade.
  • The SE permission enables access to advanced support commands. These commands can be displayed by using enable advanced support commands in the Sysmgmt CLI.
  • Root access is disabled by default for both remote and local access. Access can be enabled (or subsequently disabled) from the Boxmgmt CLI.

One splitter communication mode


Starting from this release, only IP communication between the splitter and vRPAs is supported; iSCSI communication between the splitter and vRPAs is no longer supported.

You can see a demo of the new release below.

As always, you can download the new release by clicking the screenshot below.

The XtremIO X2 SRA for VMware Site Recovery Manager (SRM) and a new White Paper are now available

We have just released the XtremIO X2 SRA for VMware SRM (Site Recovery Manager)!

The SRA can be downloaded from here

https://my.vmware.com/web/vmware/details?downloadGroup=SRM_SRA55&productId=494

or by clicking the screenshot below

Critical business processes require data replication for various purposes such as data protection or production environment duplication for development, testing, analytics or operations.

Data replication over the wire forces many customers to compromise either on RPO (Recovery Point Objective) or on the bandwidth cost of transferring the high throughput generated by their applications. This is even more acute with the high throughput of All-Flash Arrays.

While there are several approaches for replication, they all fundamentally struggle with RPO, limited performance and complex operational processes, forcing customers to make tradeoffs between RPO and cost.

XtremIO leverages its unique content addressable storage, in-memory metadata and copy data management implementation to offer the most efficient solution by replicating unique data only. This unique implementation significantly reduces bandwidth and storage array utilization. Furthermore, XtremIO simplifies the replication management for all use cases.

Data center consolidation by way of x86 virtualization is a trend that has gained tremendous momentum and offers many benefits. Although the physical nature of a server is transformed once it is virtualized, the necessity for data protection remains. Virtualization opens the door to new and flexible opportunities in data protection, data recovery, replication, and business continuity. This document offers best practices for automated disaster recovery of virtualized workloads using Dell EMC™ XtremIO X2 all flash arrays, native replication, and VMware® Site Recovery Manager™ (SRM) with varying levels of consistency.

Dell EMC® XtremIO Storage Replication Adapter for VMware Site Recovery Manager (SRA) allows VMware SRM to implement disaster recovery for XtremIO clusters. Dell EMC® XtremIO SRA supports SRM functions such as failing over, failing back, and failover testing using XtremIO XMS as the replication engine.


For those who want to understand how the solution works end-to-end, we have also just published a new and very comprehensive white paper on the topic; it can be downloaded by clicking the screenshot below.

Demos: you can also SEE how it all works by watching the demos I’ve created.

One more thing..

Next month, we will release a new version of the Virtual Storage Integrator (VSI), which will allow you to select a specific point-in-time snapshot when working with VMware SRM. This is unique among the array-based replication (ABR) integrations for SRM; here’s a teaser screenshot of it 🙂


Dell EMC Storage Analytics (ESA) 4.6 is now available

We have just released the 4.6 version of our free VMware vRealize operations adapter for the Dell EMC storage portfolio.

From an XtremIO perspective, this release brings support for Native Replication, which was released in XIOS 6.1.


The other feature, which impacts all platforms, is the removal of licensing. You can see below that there is no longer a field for it:


You can download the new version from here (click the screenshot below)

XtremIO X2-T, What’s in a Name

At Dell Technologies World 2018, we announced a new member of the X2 family, known as “X2-T”. This is a lower-cost configuration designed for the entry-level market.

X2-T comes in a single X-Brick configuration that scales from 34.5TB raw up to 69.1TB raw or 369TB effective capacity given a 6:1 storage efficiency.

It’s perfect for customers who require a single X-Brick with small capacity (100 to 200 TBe), expect limited growth, and still require the high performance and low latency of X2. The memory is upgradeable to X2-R should unplanned growth occur.

X2-T scales up to a “full” X2-R by purchasing additional memory, which is also what allows scaling beyond one X-Brick. This allows Dell EMC to capture market share in the high-end midrange while protecting the enterprise business. X2-T delivers an attractive price for 100 TB effective as an entry point to XtremIO.

X2-T Entry Specifications

  • Single X-Brick with reduced memory only… but upgradeable to a full X2-R after purchasing the memory expansion
  • The base X-Brick includes 18 SSDs, which is about 104 TB of effective capacity at 3.75:1
  • Scale up to 36 SSDs / 231.5 TB of effective capacity, in increments of 6 SSDs at a time
  • Scale out, or scale up capacity beyond 36 SSDs, with the memory expansion

Effective capacity stated is based on typical field results of 3.75:1 storage efficiency on XtremIO X2 platform.

For customers who decide to start with an X2-T, there is a price premium scaling up to an X2-R versus purchasing an initial X2-R base.

You can hear Dan Inbar talk about it here.

A common question that we get in the field is “does X2-T lack any features, like snapshots and/or native replication?”

The short (and the long) answer is no! X2-T runs the exact same OS (XIOS) as its siblings, X2-R and X2-S, and is not limited in any of its functionality. It’s really just a lower price point for customers who want to use XtremIO and are not willing to compromise on other products’ deficiencies in areas like snapshots and true inline deduplication / compression.

XIOS 6.1 – Data Reduction (DRR) Reporting per Volume

Know your Volumes

XtremIO has just released a new feature called DRR Per Volume. The DRR Per Volume reporting feature allows users to calculate data savings for Volumes and Volume Snapshot Groups (VSGs). The calculation provides the following information for each Volume and/or Volume Snapshot Group (VSG):

  • Unique Physical Space per Volume or VSG: The minimum capacity that will be freed if this Volume/VSG is deleted
  • Data Reduction Rate (DRR) Per Volume or VSG: Estimated Data Reduction Ratio of the Volume/VSG, taking into account Deduplication and Compression
  • Copy Efficiency per VSG: Estimated efficiency of using volume copies (rather than independent volumes)

The Volume Snapshot Group (VSG) Structure

Creating a Copy in XtremIO is instantaneous. This is because when a copy is created, no data or metadata is copied. Rather, a new internal volume construct is created. This internal construct stores all of the data shared between the Volume and its Copy.

In the example below, when a Copy is taken from the Production Volume, an internal construct is created that stores all of the shared data of the Production Volume and its Copy.

All the Copies taken from the original Volume are part of the same Volume Snapshot Group (VSG). This also includes copies taken of existing copies.

Unique Physical Space

The Unique Physical Space calculation takes the entire address space of a Volume and calculates how many of its logical blocks reference a physical block with a single logical address reference (reference count = 1). This means that if such a logical block is deleted, it will definitely free up a physical block too.

What does Global Deduplication have to do with the Unique Physical Space calculation?

The Unique Physical Space is the minimum capacity that will be freed if this Volume/VSG is deleted. You may ask: why is this the minimum that will be freed, and not the exact amount? This relates to how XtremIO calculates deduplication globally.

There are 2 possible ways to calculate Deduplication in storage products:

  • Volume based Deduplication – In this method, only duplicated blocks within a volume are deduped. If the same data block is written to two different volumes, it consumes two physical blocks.
  • Global Deduplication – In this method, all duplicated blocks within a Cluster are deduped. If the same data block is written to two different volumes, it will consume only a single physical block.

In order to achieve maximum efficiency, the XtremIO architecture is based on an inline, global deduplication data service. A duplicated block will always be written only ONCE in the XtremIO cluster.

The Unique Physical Space counter counts all blocks that have a single occurrence (reference count of 1) in the array, meaning all blocks that are not deduped. XtremIO doesn’t differentiate between blocks that are unique to a volume and blocks that are shared between different volumes; it only counts completely unique blocks. If a block has only 2 copies, and both copies are stored on a given Volume, the Unique Physical Space will not include this block in its calculation, even though it would be freed if the Volume were removed.
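To make the counting rule concrete, here is a small illustrative sketch (not XtremIO code; the block contents, volume names, and whole-block hashing are simplifications) of a content-addressed store with global deduplication, where only blocks with a reference count of 1 are counted as unique:

    # Sketch: global dedup reference counting and the unique-block rule.
    from collections import Counter
    from hashlib import sha256

    # Logical address space of each volume: a list of block contents.
    volumes = {
        "VolA": [b"alpha", b"beta", b"beta"],  # "beta" is written twice within VolA
        "VolB": [b"alpha", b"gamma"],          # "alpha" is shared with VolA
    }

    # Global deduplication: one physical block per unique fingerprint,
    # with a reference count across the whole cluster.
    refcount = Counter(sha256(block).digest()
                       for blocks in volumes.values() for block in blocks)

    def unique_physical_space(name):
        """Count blocks of a volume whose physical block has reference count 1."""
        return sum(1 for block in volumes[name]
                   if refcount[sha256(block).digest()] == 1)

    print(unique_physical_space("VolA"))  # 0 -- "beta" has refcount 2 (both on VolA),
                                          #      so it is not counted even though
                                          #      deleting VolA would free it
    print(unique_physical_space("VolB"))  # 1 -- only "gamma" is completely unique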

Data Reduction Rate (DRR)

The DRR Per Volume and VSG is calculated by taking the data blocks of the Volume/VSG and calculating their Deduplication and Compression rates. In most cases, Volumes in the same VSG will have similar DRR rates.

In the example below, the DRR rate of a VDI volume is 13.4:1. The DRR rate of an Oracle volume is 3.2:1.
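As a back-of-the-envelope illustration (a sketch; the byte counts are made up), DRR can be thought of as the logical data written divided by the physical space it consumes after deduplication and compression:

    # Sketch: DRR as the ratio of logical to physical space.
    def drr(logical_bytes, physical_bytes):
        return logical_bytes / physical_bytes

    print(round(drr(13.4e12, 1.0e12), 1))  # 13.4 -- e.g. the VDI volume above
    print(round(drr(3.2e12, 1.0e12), 1))   # 3.2  -- e.g. the Oracle volume above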

Copy Efficiency per VSG

The Copy Efficiency is calculated per VSG as the ratio between all the host-accessible blocks of the Volumes in the VSG and the number of logical blocks actually stored in the VSG. The relative number of shared blocks affects the Copy Efficiency ratio: the more blocks are shared within the VSG, the higher the Copy Efficiency.

Let’s review the following example. The VSG below has 3 external Volumes: a Production Volume, Writable Copy1, and Writable Copy2.

The number of logical blocks stored in the VSG is 18:

    Volume                 Logical blocks stored in VSG
    Internal Volume1       6
    Internal Volume2       6
    Production Volume      3
    Writable Copy1         0
    Writable Copy2         3
    Total logical blocks   18

However, the Volumes and writable copies have the following host-accessible address space:

    Volume                           Host-accessible address space
    Production Volume                15
    Writable Copy1                   6
    Writable Copy2                   15
    Host-accessible blocks of VSG    36

Therefore, the Copy Efficiency of the VSG is 36:18, which is a 2:1 ratio.
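Here is the same calculation as a tiny sketch (illustrative only; the per-volume numbers are taken from the tables above):

    # Sketch: Copy Efficiency = host-accessible blocks / logical blocks stored.
    stored = {"Internal Volume1": 6, "Internal Volume2": 6,
              "Production Volume": 3, "Writable Copy1": 0, "Writable Copy2": 3}

    host_accessible = {"Production Volume": 15, "Writable Copy1": 6,
                       "Writable Copy2": 15}

    ratio = sum(host_accessible.values()) / sum(stored.values())
    print(ratio)  # 2.0, i.e. a 2:1 Copy Efficiency (36 accessible : 18 stored)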

Running the Calculations

The data saving calculations are performed at the Volume and Volume Snapshot Group (VSG) levels. Users can provide a Consistency Group as input, and the savings metrics will be automatically calculated for all of the Volumes of the Consistency Group. The calculation is always made on all of the Volumes of the selected Volume’s Snapshot Group.

Since scanning a volume and identifying unique blocks is a resource- and time-consuming task, the array only takes samples of the volume and analyzes them.
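The sampling idea, as a rough sketch (purely illustrative, not the actual sampling algorithm): estimate the unique fraction from a random sample of the volume’s block fingerprints and extrapolate to the whole address space.

    # Sketch: estimate unique physical space by sampling instead of a full scan.
    import random

    def estimate_unique_blocks(volume_blocks, refcount, sample_size=1000):
        """volume_blocks: list of block fingerprints; refcount: fingerprint -> count."""
        sample = random.sample(volume_blocks, min(sample_size, len(volume_blocks)))
        unique_fraction = sum(1 for b in sample if refcount[b] == 1) / len(sample)
        return unique_fraction * len(volume_blocks)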


XIOS 6.1 – Host Paths Monitoring

Internally, I refer to XIOS 6.1 as the “Protection Anywhere” version. You may think of the Native Replication aspect or the XMS / AppSync integration aspect of it, but there is another layer: protecting you, the customer, from making mistakes. We are, after all, human.

Motivation

Storage arrays provide hosts with several paths to each exposed volume and rely on the host multipath layer to load-balance between the relevant paths and, in the case of a path failure, to fail IOs over to healthy paths. In order to have proper path redundancy (HA), at least two healthy paths residing in different failure domains (in our case, different storage controllers belonging to a given cluster) are required.

If a path fails but the host only has paths in a single failure domain (e.g. due to host multipath misconfiguration or bugs), then the host may lose all paths, resulting in a loss of service (DU, data unavailability) and host application failures.

It is therefore desirable to detect a lack of redundancy in host path failure domains beforehand, and especially before initiating maintenance-related HA operations that may cause path failures, e.g. an NDU or replacing a storage controller.

Feature Objective

The purpose of the feature is to monitor host paths in order to detect and report a lack of redundancy between host failure domains (different storage controllers), and to prevent or stop destructive operations (e.g. an NDU or replacing a storage controller) that could cause host-side DU.

  • Monitoring is based on the collection and analysis of I-T-L (Initiator-Target-LUN) health indications
  • An I-T-L is considered active/healthy if IOs or heartbeats are received for it
  • Path redundancy for an initiator to a specific volume exists if it has at least 2 healthy paths through 2 targets on different storage controllers (see the sketch after this list)
  • An indication of lack of redundancy is provided per initiator (note: no indication is provided at the host level)
  • Scope for this release (XIOS 6.1):
    • Monitor initiators’ paths periodically (steady state)
      • Notify/alert if a lack of redundancy is detected
      • Indicate potential problems in host connectivity or multipath behavior
    • Monitor initiators’ paths during an NDU – the main purpose for this release
      • Defines entry criteria for invoking ‘Cluster Upgrade’
      • Monitors initiator path redundancy during ‘OS Upgrade’ and stops the ‘OS Upgrade’ if a degradation of the initiators’ path-redundancy state is detected
        • Prevents host-side DU and mitigates loss of access to LUNs
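Here is a minimal sketch of the redundancy rule (illustrative only; the data model is made up and is not the XMS object model):

    # Sketch: an initiator/LUN pair is redundant only if its healthy paths
    # land on at least two different storage controllers (failure domains).
    from collections import namedtuple

    Path = namedtuple("Path", "initiator target lun controller healthy")

    def is_redundant(paths, initiator, lun):
        controllers = {p.controller for p in paths
                       if p.initiator == initiator and p.lun == lun and p.healthy}
        return len(controllers) >= 2

    paths = [
        Path("fc0", "X1-SC1-fc1", 0, "X1-SC1", True),
        Path("fc0", "X1-SC2-fc1", 0, "X1-SC2", False),  # failed path
    ]
    print(is_redundant(paths, "fc0", 0))  # False -- one healthy failure domain only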

Below you can see a screenshot flagging the error in the XMS WebUI; for example, here you can see that the host / initiator “drm-pod1-esx45_fc0” has lost its redundant connectivity.

We’ve also added some new CLI commands to monitor this.

Initiator path-redundancy – CLI

show-initiators [cluster-id=<id: name or index>] [duration=<seconds>] [filter=<>] [frequency=<seconds>] [prop-list=<>] [vertical=<N/A>]

    Initiator-Name           Index  Port-Type  Port-Address             IG-Name  Index  Path-Redundancy-State
    10:00:00:90:fa:a9:48:aa  1      fc         10:00:00:90:fa:a9:48:aa  LG1358   1      non-redundant

  • The following fields are no longer displayed:
    • Chap-Authentication-Initiator-User-Name
    • Chap-Discovery-Initiator-User-Name
    • Chap-Authentication-Cluster-User-Name
    • Chap-Discovery-Cluster-User-Name
    • Initiator-OS: other
  • Degraded initiators can be displayed during an OS upgrade using a filter:
    • path_redundancy_state = os_upgrade_disconnected or os_upgrade_non_redundant

show-initiators filter=path_redundancy_state:like:os_upgrade

XIOS 6.1 – Integrating AppSync With the XtremIO X2 XMS

Wow, one of my mini projects is finally here

For years, XtremIO has been known as the go-to platform for the CDM (Copy Data Management) use case. If you are new to this world, you may want to first have a read of this post:

https://xtremio.me/2016/06/25/appsync-3-0-is-here-whats-new/

But in a nutshell, the XtremIO XVC (snapshots) capability allows you to take thousands of snapshots without hurting the source volume’s performance or creating extra capacity waste; the glue for the application integration (putting an Oracle DB into backup mode, etc.) was always provided by AppSync.

We wanted to make the integration tighter…why?

Because the world is NOT one-dimensional. For example, if I’m a VMware admin who wants to back up my VMs, I may not want to use the AppSync UI; for that reason, you can actually perform these tasks directly from within vCenter, which integrates directly with AppSync.

You can read more about it here https://xtremio.me/2017/01/30/vsi-7-1-is-here-vsphere-6-5-supported/

BUT, what if I’m the STORAGE admin??

This poses a good question, because as a storage admin I’m not likely to learn the AppSync UI, and I’m definitely not likely to learn the VMware vCenter UI. This is where the new XIOS 6.1 integration comes in: you can now back up your vSphere VMs or datastores, and restore specific VMs or entire datastores, directly from the XtremIO Web UI.

See a video demo I recorded below