Dell EMC PowerProtect 19.3 is available, Kubernetes Integration? You bet!

Back in 2018, I traveled to a customer advisory board and talked about CSI. Almost every customer there told me, "That's great, but what about backup?"

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services.

PowerProtect Data Manager (PPDM) allows you to protect your production workloads in Kubernetes (K8s) environments, ensuring that the data is easy to back up and restore, always available, consistent, and durable in a Kubernetes workload or DR situation.

  1. PPDM has a Kubernetes-native architecture developed for Kubernetes environments.
  2. Easy for the IT Ops team to use, yet separate from the DevOps environment; it allows centralized governance of data protection across DevOps environments.
  3. Users protect to Data Domain, benefiting from secondary storage with unmatched efficiency, deduplication, performance, and scalability – with near-future plans to protect to object storage for added flexibility.

Dell EMC and Velero are working together in an open source community to improve how you protect your data, applications, and workloads for every step of your Kubernetes journey.

Velero is:

  • An open source tool to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
  • Aimed at the Kubernetes administrator or developer, handling protection at the Kubernetes layer.
  • A tool that focuses on backup/restore of the K8s configuration.

PPDM builds on top of these capabilities to:

  • Provide a data protection solution that has single management for VMs, applications, and containers.
  • Provide an enterprise-grade solution that allows you to place your production workloads in a K8s environment
  • Focus on crash-consistent backup/restore that is always available and durable in a K8s workload or DR situation

Kubernetes, or K8s ("k", 8 characters, "s"), or "kube", is a popular open source platform for container orchestration which automates container deployment, container (de)scaling, and container load balancing.

  • Kubernetes automates container operations.
  • Kubernetes eliminates many of the manual processes involved in deploying and scaling containerized applications.
  • You can cluster together groups of hosts running containers, and Kubernetes helps you easily and efficiently manage those clusters.
  • Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling.

Kubernetes Features

  • Automated Scheduling: Kubernetes provides an advanced scheduler to launch containers on cluster nodes.
  • Self-Healing Capabilities: Rescheduling, replacing, and restarting containers that have died.
  • Automated roll-outs and rollbacks: Kubernetes supports roll-outs and rollbacks for the desired state of the containerized application.
  • Horizontal Scaling and Load Balancing: Kubernetes can scale up and scale down the application as per the requirements.

K8s Architectural Overview

PowerProtect is our answer to modern challenges

  • It allows customers to take existing production workloads or new workloads and start placing them in Kubernetes production environments, knowing that they will be protected.
  • It allows IT operations and backup admins to manage K8s data protection from a single enterprise-grade management UI, as well as allowing K8s admins to define protection for their workloads from the K8s APIs.
  • We are building the solution in collaboration with VMware Velero – which focuses on data protection and migration for K8s workloads.

When building the solution, we focused on three pillars:

  • Central Management
    • Given K8s cluster credentials, Next-Gen SW will discover the namespaces, labels, and pods in the environment; you will be able to protect namespaces or specific pods
    • Protection is defined via the same PLC mechanism that Next-Gen SW has
    • Logging, monitoring, governance, and recovery are done through Next-Gen SW, the same way they are done for other assets
  • Efficient and Flexible
    • Same data protection platform – our solution is built into DPD’s single Next-Generation software, so as IT ops you only need to manage your VMs, your applications, and your containers through one platform
    • Protection to deduped storage allows great TCO with DD/DDVE/Next-Gen Hardware superior deduplication. Protection can also be performed to S3-compatible storage (ECS, S3 in a public cloud next release).
    • Next-Generation SW is planned to protect any persistent volume (i.e. crash-consistent images), and will protect quiesced applications such as MySQL, Postgres, Mongo, and Cassandra in the future
    • Protection is planned for any Kubernetes deployment, such as:
      • PKS (Essential/Enterprise/Cloud)
      • OpenShift
      • GCP (Anthos/GKE)
      • AWS (EKS)
      • OpenStack
      • On-prem bare metal
  • Built for Kubernetes
    • By using the K8s APIs we allow flexibility in which clusters can be protected. It’s possible to use additional applications such as Grafana, Prometheus, Istio, and Helm to add capabilities and automation to the solution.
    • Next-Gen SW discovers, shows, and monitors K8s resources – namespaces and persistent volumes.
    • No sidecars – there is no need to install a backup client container for each pod (which is time consuming, has a large attack surface, consumes lots of resources, and does not scale)
    • Node affinity – by providing protection controllers per node we avoid cross-node traffic. This is more efficient and more secure.

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).

While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.
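
To make the PVC and StorageClass relationship concrete, here is a minimal sketch using the official Python kubernetes client. The provisioner string and object names are placeholders, not tied to any particular driver.

    # A minimal sketch: define a StorageClass and a PVC that requests it by
    # storageClassName. Placeholder names throughout.
    from kubernetes import client, config

    config.load_kube_config()

    storage_class = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="fast-tier"),
        provisioner="example.csi.vendor.com",   # placeholder CSI provisioner
        reclaim_policy="Delete",
    )
    client.StorageV1Api().create_storage_class(storage_class)

    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],       # mounted once read/write
            storage_class_name="fast-tier",       # only PVs of this class can bind
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim("default", claim)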

Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.

K8s Components to be backed up

Namespaces

  • Kubernetes namespaces can be seen as a logical entity used to represent cluster resources for use by a particular set of users. This logical entity can also be termed a virtual cluster. One physical cluster can be represented as a set of multiple such virtual clusters (namespaces). The namespace provides the scope for names; names of resources within one namespace need to be unique (see the short example below).
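
As a quick illustration of namespaces as "virtual clusters", here is a short sketch with the official Python kubernetes client; the namespace name is just an example.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Create a namespace for a particular team or application.
    v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name="team-blue")))

    # Names only need to be unique within a namespace, so work is scoped per namespace.
    for ns in v1.list_namespace().items:
        print(ns.metadata.name)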

PersistentVolumeClaim (PVC)

  • PVC is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).

storageClassName

  • A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.

CSI

  • Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
  • Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
  • The csi volume type does not support direct reference from a Pod and may only be referenced in a Pod via a PersistentVolumeClaim object (see the sketch below).
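
A minimal sketch of that indirection, using the Python kubernetes client: the Pod mounts a PersistentVolumeClaim, and it is the claim that is satisfied by the CSI driver. Names are placeholders.

    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-pod"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="app",
                    image="nginx",
                    volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
                )
            ],
            volumes=[
                client.V1Volume(
                    name="data",
                    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                        claim_name="app-data"   # the PVC provisioned by the CSI driver
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod("default", pod)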

Note: We used hostPath in our lab; Kubernetes supports hostPath for development and testing on a single-node cluster. A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.

Protecting K8s workloads using PowerProtect Data Manager

Asset Source

  • The asset source for a Kubernetes cluster is the cluster master node’s FQDN or IP address. In the case of an HA cluster, the external IP of the load balancer must be used as the asset source.
  • The default port for a production Kubernetes API server with PPDM is 6443.
  • PowerProtect will use a bearer token or a kubeconfig file to authenticate with the Kubernetes API server.
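
For illustration only, here is a minimal sketch of the same two authentication options (bearer token or kubeconfig) against the API server on port 6443, using the official Python kubernetes client; the host, token, and file path are placeholders, and this is not PPDM code.

    from kubernetes import client, config

    def api_from_token(master_fqdn: str, token: str) -> client.CoreV1Api:
        """Connect to the Kubernetes API server (default port 6443) with a bearer token."""
        cfg = client.Configuration()
        cfg.host = f"https://{master_fqdn}:6443"
        cfg.api_key = {"authorization": token}
        cfg.api_key_prefix = {"authorization": "Bearer"}
        cfg.verify_ssl = False  # in production, set ssl_ca_cert to the cluster CA instead
        return client.CoreV1Api(client.ApiClient(cfg))

    def api_from_kubeconfig(path: str = "~/.kube/config") -> client.CoreV1Api:
        """Connect using a kubeconfig file instead of a raw token."""
        config.load_kube_config(config_file=path)
        return client.CoreV1Api()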

Assets

PowerProtect will discover two types of assets for protection in Kubernetes clusters:

  • Namespaces
  • Persistent Volume Claims (PVCs). PVCs are namespace-bound, and so are shown as children of the namespace they belong to in the UI.
  • PowerProtect will use Velero for protection of namespaces (metadata). PowerProtect will drive PVC snapshots and backups using its own controller (a discovery sketch follows below).
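
Here is a hedged sketch of that discovery step, enumerating namespaces and the PVCs inside each one (the same two asset types), using the Python kubernetes client. It is illustrative only, not the PPDM discovery implementation.

    from kubernetes import client, config

    config.load_kube_config()  # or a bearer-token configuration, as shown earlier
    v1 = client.CoreV1Api()

    for ns in v1.list_namespace().items:
        ns_name = ns.metadata.name
        print(f"namespace: {ns_name}")
        for pvc in v1.list_namespaced_persistent_volume_claim(ns_name).items:
            size = pvc.spec.resources.requests.get("storage")
            print(f"  pvc: {pvc.metadata.name} ({size}, class={pvc.spec.storage_class_name})")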

PowerProtect components on the Kubernetes Cluster

PowerProtect will install the following components on the Kubernetes cluster when a Kubernetes cluster is added as an asset source.

  • Custom Resource Definitions for BackupJob, RestoreJob, BackupStorageLocation, and BackupManagement
  • Service account for the PowerProtect controller
  • Cluster role binding to bind the service account to the cluster admin role
  • Deployment for the PowerProtect Controller with a replica set of 1 for R3
  • Velero
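
As a small illustration, the presence of the custom resource definitions listed above can be checked with the Python kubernetes client; the expected kinds come from the list above, everything else here is generic.

    from kubernetes import client, config

    config.load_kube_config()
    # On clusters older than 1.16 the CRD API lives under apiextensions.k8s.io/v1beta1
    # (client.ApiextensionsV1beta1Api); v1 is used here for brevity.
    ext = client.ApiextensionsV1Api()

    expected = {"BackupJob", "RestoreJob", "BackupStorageLocation", "BackupManagement"}
    installed = {crd.spec.names.kind for crd in ext.list_custom_resource_definition().items}
    print("missing CRDs:", expected - installed or "none")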

PLC Configuration & Asset Configuration

  • When a PLC is created, a new storage unit (SU) is created on the protection storage as part of PLC configuration. In the case of a PLC of type Kubernetes, a BackupStorageLocation containing the SU information will also be created on the cluster.
  • The PowerProtect controller running in the Kubernetes cluster will create a corresponding BackupStorageLocation in the Velero namespace whenever a BackupStorageLocation is created in the PowerProtect namespace.

Protection driven from PLC

  1. When a protection action is triggered, CNDM will post a BackupJob custom resource (one for each namespace asset in that PLC) to the Kubernetes API server and monitor its status (see the sketch after these steps). The BackupJob custom resource name will be <namespace>-YYYY-MM-DD-SS. The BackupJob will include:
     • The namespace asset that needs to be protected
     • All the PVC assets in that namespace that need to be protected
     • The backup storage location (target)
  2. The PowerProtect Controller that is watching for these custom resources will be notified.
  3. The PowerProtect Controller will create a Velero Backup custom resource with the following information:
     • The namespace
     • The Velero setting to not include PVC and PV resource types
     • The Velero setting to include cluster resources
     • The Velero setting to not take snapshots
     • The Velero BackupStorageLocation corresponding to the PLC SU
  4. The PowerProtect controller will monitor and wait for the Velero backup to complete.
  5. Since the provider for the BackupStorageLocation is Data Domain, Velero will invoke the Data Domain object store plugin to write data to the storage unit.
  6. Once the Velero CR status indicates that the backup has completed, the PowerProtect controller will perform steps 7, 8, and 9 for each PVC.
  7. Snapshot the PVC.
  8. Launch a cProxy pod with the snapshot volume mounted to the pod.
  9. The cProxy pod will write the snapshot contents to Data Domain.
  10. Once all the PVCs are backed up, the PowerProtect controller will update the status in the BackupJob custom resource. The manifest will include all files created by PowerProtect and Velero. In order to get the list of files created by Velero, the PowerProtect controller will read the Data Domain folder after the Velero backup is complete.
  11. CNDM will create a protection copy set containing a protection copy for the namespace and each PVC asset.

Note: The Kubernetes etcd datastore has a default document size limit of 1.5 MiB. If the PVCs in a single namespace exceed a couple of hundred, we can run into document size limits.
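
For illustration, posting such a BackupJob custom resource and checking its status could look like the sketch below, using the Python kubernetes client. The API group, version, plural, and spec field names are placeholder assumptions, not PPDM's documented CRD schema.

    import datetime
    from kubernetes import client, config

    config.load_kube_config()
    crd_api = client.CustomObjectsApi()

    GROUP, VERSION, PLURAL = "powerprotect.example.com", "v1", "backupjobs"  # assumed values
    ns_to_protect = "demo-app"
    job_name = f"{ns_to_protect}-{datetime.datetime.utcnow():%Y-%m-%d-%S}"

    backup_job = {
        "apiVersion": f"{GROUP}/{VERSION}",
        "kind": "BackupJob",
        "metadata": {"name": job_name, "namespace": "powerprotect"},
        "spec": {
            "namespace": ns_to_protect,                    # namespace asset to protect
            "pvcs": ["mysql-data", "mysql-logs"],          # PVC assets in that namespace
            "backupStorageLocation": "plc-storage-unit-1", # target SU on Data Domain
        },
    }

    crd_api.create_namespaced_custom_object(GROUP, VERSION, "powerprotect", PLURAL, backup_job)
    job = crd_api.get_namespaced_custom_object(GROUP, VERSION, "powerprotect", PLURAL, job_name)
    print(job.get("status", "no status yet"))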

  • The controller currently supports up to 10 BackupJobs (namespaces) simultaneously. Within each BackupJob, the PVCs are backed up one at a time, so at most 10 cProxies can be running for backup at any given time.
    • This is an initial implementation. As a reference point, Velero today is completely serial.
  • CSI snapshots today can only take full snapshots.
  • The FS agent runs inside the cProxy. After the CSI snapshot is taken, the snapshot is mounted onto a cProxy pod, and the FS agent reads that mount and writes to DD.

The following steps describe the restore workflow when a namespace has been deleted and a restore is triggered from the PPDM UI.

  1. CNDM will post the RestoreJob custom resource to the Kubernetes API server. The RestoreJob custom resource will include:
     • The namespace asset that needs to be restored
     • All the PVC assets in that copy set
     • The backup storage location (target)
  2. The PowerProtect Controller that is watching for these custom resources will be notified.
  3. The PowerProtect Controller will create a Velero Restore custom resource to restore:
     • The namespace resource (note: not the whole namespace, just the namespace resource)
     • All the resources needed in order to start the restore of the PVCs
  4. The PowerProtect controller will monitor and wait for the Velero restore to complete. Since the provider for the BackupStorageLocation is Data Domain, Velero will invoke the Data Domain object store plugin to read data from the storage unit.
  5. Once the Velero CR status indicates that the restore has completed, the PowerProtect controller will restore each PVC in the restore job:
     • A cProxy pod will be launched for each PVC
     • The cProxy reads the contents from Data Domain and writes them to the PVC
  6. The PowerProtect Controller will then create a second Velero Restore custom resource to restore all the remaining resources in the namespace, excluding the namespace resource, PVCs, and PVs.
  7. The PowerProtect controller will monitor and wait for the Velero restore to complete.
  8. The PowerProtect controller will update the final RestoreJob status (a polling sketch follows below).
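
A hedged sketch of waiting for that final RestoreJob status, again with placeholder CRD group/version/plural and field names rather than PPDM's documented schema:

    import time
    from kubernetes import client, config

    config.load_kube_config()
    crd_api = client.CustomObjectsApi()
    GROUP, VERSION, PLURAL = "powerprotect.example.com", "v1", "restorejobs"  # assumed values

    def wait_for_restore(name: str, namespace: str = "powerprotect", timeout: int = 1800) -> str:
        """Poll the RestoreJob custom resource until it reports a terminal phase."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            obj = crd_api.get_namespaced_custom_object(GROUP, VERSION, namespace, PLURAL, name)
            phase = obj.get("status", {}).get("phase", "Pending")  # field name is an assumption
            if phase in ("Completed", "Failed"):
                return phase
            time.sleep(15)
        raise TimeoutError(f"restore {name} did not finish within {timeout}s")

    print(wait_for_restore("demo-app-restore"))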

Below you can see a demo of how it all looks.

Training Resources

The following training assets were created for this release. Additional assets also exist that describe the PowerProtect product capabilities and concepts. These training sessions are not specific to this release but do provide an introduction for those who are new to PowerProtect. For more education assets, search for “PowerProtect” on the education portal.

PowerProtect Data Manager 19.3 Recorded Knowledge Transfer

Take this 2-hour and 30-minute On-Demand Class to get an overview of the new enhancements, features, and functionality in the PowerProtect Data Manager 19.3 release.

Registration Link: https://education.dellemc.com/content/emc/en-us/csw.html?id=933228438

The Kubernetes CSI Driver for Dell EMC PowerMax v1.1 is now available

Product Overview:
CSI Driver for PowerMax enables integration with the Kubernetes open-source container orchestration infrastructure and delivers scalable persistent storage provisioning operations for PowerMax and All Flash arrays.

Highlights of this Release:

The CSI Driver for Dell EMC PowerMax has the following features:

  • Supports CSI 1.1
  • Supports Kubernetes versions 1.13 and 1.14
  • Supports Unisphere for PowerMax 9.1
  • Supports Fibre Channel
  • Supports Red Hat Enterprise Linux 7.6 host operating system
  • Supports PowerMax – 5978.444.444 and 5978.221.221
  • Supports Linux native multipathing
  • Persistent Volume (PV) capabilities:
    • Create
    • Delete
  • Dynamic and Static PV provisioning
  • Volume mount as ext4 or xfs file system on the worker node
  • Volume prefix for easier LUN identification in Unisphere
  • HELM charts installer
  • Access modes:
    • SINGLE_NODE_WRITER
    • SINGLE_NODE_READER_ONLY
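
To illustrate the "Dynamic and Static PV provisioning" capability from the list above, here is a hedged sketch of static provisioning with the Python kubernetes client: a PersistentVolume pointing at an existing array volume through the CSI driver, plus a claim bound to it. The driver name and volumeHandle are placeholder assumptions, not documented values.

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    pv = client.V1PersistentVolume(
        metadata=client.V1ObjectMeta(name="pmax-static-pv"),
        spec=client.V1PersistentVolumeSpec(
            capacity={"storage": "8Gi"},
            access_modes=["ReadWriteOnce"],                 # SINGLE_NODE_WRITER
            persistent_volume_reclaim_policy="Retain",
            csi=client.V1CSIPersistentVolumeSource(
                driver="csi-powermax.dellemc.com",          # assumed driver name
                volume_handle="existing-array-volume-id",   # placeholder handle
                fs_type="xfs",
            ),
        ),
    )
    core.create_persistent_volume(pv)

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="pmax-static-claim"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            volume_name="pmax-static-pv",                   # bind directly to the PV above
            storage_class_name="",                          # empty class keeps the binding static
            resources=client.V1ResourceRequirements(requests={"storage": "8Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim("default", pvc)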


Software Support:

  • Supports Kubernetes v1.14
  • Supports PowerMax – 5978.221.221 (ELM SR), 5978.444.444 (Foxtail)
  • CSI v1.1 compliant

Operating Systems:  

  • Supports CentOS 7.3 and 7.5
  • Supports Red Hat Enterprise Linux 7.6


Resources:

The Kubernetes CSI Driver for Dell EMC Isilon v1.0 is now available


Isilon is the de facto scale-out NAS platform, with so many petabytes of it in use all over the world; no wonder I'm getting a lot of requests to have the Kubernetes CSI plugin available for it. Well, now you have it!

Product Overview:

CSI Driver for Dell EMC Isilon enables integration with the Kubernetes open-source container orchestration infrastructure and delivers scalable persistent storage provisioning operations for the Dell EMC Isilon storage system.

Highlights of this Release:

The CSI driver for Dell EMC Isilon enables the use of Isilon as persistent storage in Kubernetes clusters. The driver enables fully automated workflows for dynamic and static persistent volume (PV) provisioning and snapshot creation. The driver uses SmartQuotas to limit volume size.

Software Support:

This introductory release of CSI Driver for Dell EMC Isilon supports the following features:

  • Supports CSI 1.1
  • Supports Kubernetes version 1.14.x
  • Supports Red Hat Enterprise Linux 7.6 host operating system
  • Persistent Volume (PV) capabilities:
    • Create from scratch
    • Create from snapshot
    • Delete
  • Dynamic PV provisioning
  • Volume mount as NFS export
  • HELM charts installer
  • Access modes:
    • SINGLE_NODE_WRITER
    • MULTI_NODE_READER_ONLY
    • MULTI_NODE_MULTI_WRITER
  • Snapshot capabilities:
    • Create
    • Delete

Note:
Volume Snapshots is an Alpha feature in Kubernetes. It is recommended for use only in short-lived testing clusters, as features in the Alpha stage have an increased risk of bugs and a lack of long-term support. See Kubernetes documentation for more information about feature stages.
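
Because snapshots are still an alpha API at this Kubernetes level, a VolumeSnapshot is created as a custom object rather than through a typed client. A hedged sketch follows; the snapshot class and PVC names are placeholders, and the fields follow the v1alpha1 snapshot API.

    from kubernetes import client, config

    config.load_kube_config()

    snapshot = {
        "apiVersion": "snapshot.storage.k8s.io/v1alpha1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": "isilon-pvc-snap-1", "namespace": "default"},
        "spec": {
            "snapshotClassName": "isilon-snapclass",   # placeholder VolumeSnapshotClass
            "source": {"name": "isilon-data-pvc", "kind": "PersistentVolumeClaim"},
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        "snapshot.storage.k8s.io", "v1alpha1", "default", "volumesnapshots", snapshot
    )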

Refer to Dell EMC Simple Support Matrix for the latest product version qualifications.

Resources:

CSI Driver for Dell EMC Isilon files and documentation are available for download on:

Below you can see a demo of how it all works:

XtremIO 6.3 is here, Sync, Scale & Protect!

We have just released the new XtremIO 6.3 version with some big enhancements, so let's dive into each one of them!

XtremIO Remote Protection

XtremIO Metadata-Aware Replication

XtremIO Metadata-Aware Asynchronous Replication leverages the XtremIO architecture to provide the most efficient replication that reduces the bandwidth consumption. XtremIO Content-Aware Storage (CAS) architecture and in-memory metadata allow the replication to transfer only unique data blocks to the target array. Every data block that is written in XtremIO is identified by a fingerprint which is kept in the data block’s metadata information.

  • If the fingerprint is unique, the data block is physically written and the metadata points to the physical block.
  • If the fingerprint is not unique, it is kept in the metadata and points to an existing physical block.

A non-unique data block, which already exists on the target array, is not sent again (deduplicated). Instead, only the block metadata is replicated and updated at the target array.

The transferred unique data blocks are sent compressed over the wire.
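
As a toy sketch (not XtremIO code) of the fingerprint logic described above: each block is identified by a fingerprint, only fingerprints the target has never seen cause block data to be sent, and everything else is replicated as metadata only. Compression is modeled with zlib purely for illustration.

    import hashlib
    import zlib

    target_fingerprints = set()   # fingerprints already present on the target array

    def replicate_block(block: bytes) -> dict:
        fp = hashlib.sha1(block).hexdigest()   # the block's fingerprint
        if fp in target_fingerprints:
            # Non-unique block: ship only metadata pointing at the existing physical block.
            return {"fingerprint": fp, "data": None}
        # Unique block: record it and send the data compressed over the wire.
        target_fingerprints.add(fp)
        return {"fingerprint": fp, "data": zlib.compress(block)}

    # The second write of identical content is deduplicated (metadata only).
    print(replicate_block(b"A" * 8192)["data"] is None)   # False: unique, data sent
    print(replicate_block(b"A" * 8192)["data"] is None)   # True: deduplicated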

XtremIO Asynchronous Replication is based on a snapshot-shipping method that allows XtremIO to transfer only the changes, by comparing the changes between two subsequent snapshots, benefiting from write-folding.

This efficient replication is not limited per volume, per replication session or per single source array, but is a global deduplication technology across all volumes and all source arrays.

In a fan-in environment, replicating from four sites to a single target site, as displayed in Figure 14, overall storage capacity requirements (in all primary sites and the target site) are reduced by up to 38 percent[1], providing the customers with considerable cost savings.

Figure 14: Global Data Reduction with XtremIO Metadata-Aware Replication

XtremIO Synchronous Replication (new to 6.3)

XtremIO enables you to protect data both asynchronously and synchronously when a 'zero data loss' data protection policy is required.

XtremIO Synchronous replication is fully integrated with Asynchronous replication, in-memory snapshots, and iCDM capabilities, which makes it highly efficient.

The challenge with Synchronous replication arises when the source and target are out of sync. This is true during the initial sync phase as well as when a disconnection occurs due to link failure or a user-initiated operation (for example, pausing the replication or performing failover).

Synchronous replication is highly efficient as a result of using these unique capabilities:

  • Metadata-aware replication – For the initial synchronization phase and when the target gets out of sync, the replication uses metadata-aware replication to efficiently and quickly replicate the data to the target. The replication uses multiple cycles until the gap is minimal and then switches to synchronous replication. This reduces the impact on production to a minimum and accelerates the sync time.
  • Recovery Snapshots – To avoid the need for a full copy or even a full metadata copy, XtremIO leverages its in-memory snapshot capabilities. Every few minutes, recovery snapshots are created on both sides, which can be used as a baseline in case a disconnection occurs. When the connection is resumed, the system only needs to replicate the changes made since the most recent recovery snapshot prior to the disconnection.
  • Prioritization – In order to ensure the best performance for the applications using Sync replication, XtremIO automatically prioritizes the I/O of Sync replication over Async replication. Everything is done automatically and no tuning or special definition is required.
  • Auto-Recovery from link disconnection – The replication resumes automatically when the link is back to normal.



XtremIO Synchronous replication is managed at the same location as the Asynchronous replication and supports all Disaster Recovery operations similarly to Asynchronous replication.

Switching between Async and Sync is performed simply, using a single command or via the UI, as can be seen below.



Once changed, you can also see the new replication mode in the remote session view


Best Protection

XtremIO's replication efficiency allows it to support replication for All-Flash Array (AFA) high-performance workloads. The replication supports both Synchronous replication and Asynchronous replication with an RPO as low as 30 seconds, and can keep up to 500 PITs[2]. XtremIO offers simple operations and workflows for managing the replication and integration with iCDM for both Synchronous and Asynchronous replication:

  • Test a Copy (Current or specific PIT) at the remote host
    Testing a copy does not impact production or the replication, which continues to replicate the changes to the target array, and is not limited by time. The “Test Copy” operation uses the same SCSI identity for the target volumes as will be used in case of failover.
  • Failover
    Using the failover command, it is possible to select the current or any PIT at the target and promote it to the remote host. Promoting a PIT is instantaneous.
  • Failback
    Fails back from the target array to the source array.
  • Repurposing Copies
    XtremIO offers a simple command to create a new environment from any of the replication PITs.
  • Refresh a Repurposing Copy
    With a single command, a repurpose copy can be refreshed from any replication PIT. This is very useful when refreshing the data from production to a test environment that resides on a different cluster, or when refreshing the DEV environment from any of the existing build versions.
  • Creating a Bookmark on demand
    An ad-hoc PIT can be created when needed. This option is very useful when an application-aware PIT is required, or before performing maintenance or upgrades.

Unified View for Local and Remote Protection

A dedicated tab for data protection exists in the XtremIO GUI for managing XtremIO local and remote protection (Sync & Async replication). In the Data Protection Overview screen, a high-level view displays the status for all Local and Remote protections, as shown in Figure 15.

Figure 15: Data Protection Overview Screen

The Overview section includes:

  • The minimum RPO compliance for all Local and Remote Protection sessions
  • Protection sessions status chart
  • Connectivity information between peer clusters

From the overview screen, it is easy to drill down to the session.

Consistency Group Protection View

With the new unified protection approach, it is easy to understand the protection of a consistency group in one view. The Topology View pane displays the local and remote protection topology of the Consistency Group, as shown in Figure 16. Clicking each of the targets displays the detailed information in the Information pane.

Figure 16: Consistency Group Topology View


Secured Snapshots

The purpose of this feature is to allow a customer to protect snapshots created by a “Local Protection Session” against accidental user deletion:

  • Once a “Local Protection Session” creates the snapshot, it is automatically marked as “Secured”
  • The snapshot’s protection will expire once the snapshot is due for deletion by its retention policy

Once a “Local Protection Session” is set as create secured snapshots:

  • It cannot be set to create non-secure snapshots

Once a snapshot is set as “Secured”:

  • The snapshot cannot be deleted
  • The snapshot-set containing this snapshot cannot be deleted
  • The contents of this snapshot cannot be restored nor refreshed

Secured Snapshots – Override

  • To remove the “Secured” flag, a formal ticket must be filed with Dell EMC – this is a legal obligation.
  • Once the ticket is filed, a technician-level user account can use the new “remove-secured-snap-flag” XMCLI command to release the “Secured” flag.

Secured Snapshots – Examples – Create a Protected Local Protection Session

The following output displays the creation of a new “Local Protection Session” with the “Secured Flag” setting.

The following output displays the modification of an existing “Local Protection Session” to start creating snapshots with the “Secured Flag” setting.

Secured Snapshots – Examples – Snapshot Query & Release

The first output displays a query of the properties of a “Secured Snapshot”.

The second output displays an attempt to delete a “Secured Snapshot”.

The third output displays how to release the “Secured” flag (using a tech-level user account).

Below, you can also see what a protection policy created on a consistency group looks like when you decide not to allow the option to delete the snapshot.

And this is the error you will get if you (or someone else) try to remove this volume.


IPv6 Dual Stack

  • Dual IP versions (4 & 6)
  • XMS Management
    • External: IPv4 & IPv6
    • Internal: IPv4 or IPv6
  • Storage Controller
    • iSCSI: IPv4 & IPv6
    • Native Replication: IPv4 only

This feature allows a customer to assign multiple IP addresses (IPv4 and IPv6) to a single interface. Let's discuss the different network interfaces used in XtremIO and see what has changed.

XMS Management traffic is used to allow a user to connect to various interfaces (such as the WebUI, XMCLI, RestAPI, etc.). Previously, the user could configure either an IPv4 or an IPv6 address; the new feature allows assigning two IP addresses – one of each version.

XMS Management traffic is also used internally to connect to clusters (SYM and PM). The behavior of this interface wasn't changed – all managed clusters should use the same IP version.

Storage Controllers' iSCSI traffic allows external host connectivity. Previously, the user could only configure IP addresses from a single IP version; the new feature allows assigning multiple IP addresses with different versions.

Native Replication behavior remains the same – these interfaces are limited to IPv4 addresses.

IPv6 Dual Stack – XMS Management – New XMCLI Commands

Multiple parameters were added to the “show-xms” XMCLI command to support this feature:

  • Two new parameters determine the IP versions
  • The “Primary IP Address” parameter names remained as is (to conform with backward-compatibility requirements)
  • Various new parameters describe the new “Secondary IP Address”

Two additional XMCLI commands were introduced to support this feature:

  • The “add-xms-secondary-ip-address” XMCLI command sets the XMS management interface “Secondary IP Address” and “Secondary IP Gateway”
  • The “remove-xms-secondary-ip-address” XMCLI command removes the XMS management interface “Secondary IP Address” and “Secondary IP Gateway”

Note that there is no “modify” command – to modify the secondary IP address, remove it and set it again with its corrected values.

IPv6 Dual Stack – Storage Controller Management – Interfaces

To support the changes made to the Storage Controllers' iSCSI interface settings, the following changes were implemented:

  • iSCSI Portals – The user can now configure multiple iSCSI portals with different IP versions on the same interface
  • iSCSI Routes – The user can now configure multiple IPv4 and IPv6 iSCSI routes

As explained earlier, the Storage Controller Native Replication interface behavior remains as is – the interfaces only allow IPv4 addresses.


Scalability

Scalability Increase

The new 6.3.0 release supports an increased number of objects:

  • The number of volumes and copies (per cluster) was increased to 32K
  • The number of SCSI3 Registrations and Reservations (per cluster) was increased to 32K

Below you can see a demo showing how to configure Sync Replication from the GUI.

And here you can see how Sync replication works with VMware SRM, in conjunction with our unique point-in-time failover, for cases where you don't want to fail over to the last point in time.

Isilon – The Challenge of Files at Scale

I get a lot of requests to post about Isilon, so I teamed up with Ron Steinke, a Technical Staff member in Isilon Software Engineering, to write some guest posts. I would really appreciate the feedback, and do let me know whether you would like us to write more about Isilon.

Scalability is a harder problem for stateful, feature rich solutions. Distributed filesystems are a prime example of this, as coordinating namespace and metadata updates between multiple head nodes presents a challenge not found in block or object storage.

The key is to remember that this challenge must be viewed in the context of the simplification it brings to application development. Application developers choose to use files to simplify the development process, with less concern about what this means for the ultimate deployment of the application at scale. For a process limited by the application development lifecycle, or dependent on third party applications, the tradeoff of utilizing a more complex storage solution is often the right one.

Part of the challenge of file scalability is fully replicating the typical file environment. Any scalability solution which imposes restrictions which aren’t present in the development environment is likely to run against assumptions built into applications. This leads to major headaches, and the burden of solving them usually lands on the storage administrator. A few of the common workarounds for a scalable flat file namespace illustrate these kinds of limitations.

One approach is to have a single node in the storage cluster managing the namespace, with scalability only for file data storage. While this approach may provide some scalability in other kinds of storage, it’s fairly easy to saturate the namespace node with a file workload.

A good example of this approach is the default Apache HDFS implementation. While the data is distributed across many nodes, all namespace work (file creation, deletion, rename) is done by a single name node. This is great if you want to read through the contents of a large subset of your data, perform analytics, and aggregate the statistics. It's less great if your workload is creating a lot of files and moving them around.

Another approach is namespace aggregation, where different parts of the storage array service different parts of the filesystem. This is effectively taking UNIX mount points to their logical conclusion. While this is mostly transparent to applications, it requires administrators to predict how much storage space each individual mount point will require. With dozens or hundreds of individual mount points, this quickly becomes a massive administration headache.

Worse is what happens when you want to reorganize your storage. The storage allocations that were originally made typically reflect the team structure of the organization at the time the storage was purchased. Organizations being what they are, the human structure is going to change. Changing the data within a single mount point involves renaming a few directories. Changes across mount points, or the creation of new mount points, involve data rewrites that will take longer and longer as the scale of your data grows.

Clearly these approaches will work for certain kinds of workflows. Sadly, most storage administrators don’t have control of their users’ workflows, or even good documentation of what those workflows will be. The combination of arbitrary workflows and future scaling requirements ultimately pushes many organizations away from limited solutions.

The alternative is a scale-out filesystem, which looks like a single machine both from the users' and administrators' perspective. A scale-out system isolates the logical layout of the filesystem namespace from the physical layout of where the data is stored. All nodes in a scale-out system are peers, avoiding special roles that may make a particular node a choke point. This parallel architecture also allows each scale-out cluster to grow to meet the users' needs, allowing storage sizes far larger than any other filesystem platform.

There are four main requirements to provide the transparency of scale-out:

  • A single flat namespace, available from and serviced by all protocol heads. This removes the scaling limitation of a single namespace node, by allowing the capacity for namespace work to scale with the number of nodes in the system.
  • Flat allocation of storage space across the namespace. While the data may ultimately be stored in different cost/performance tiers, these should not be tied to artificial boundaries in the namespace.
  • The ability to add space to the global pool by adding new storage nodes to the existing system. Hiding this from the way applications access the system greatly simplifies future capacity planning.
  • Fail-in-place, the ability of the system to continue operating if drives or nodes fail or are removed from the system. This removal will necessarily reduce the available storage capacity, but should not prevent the system from continuing to function.

All of these capabilities are necessary to take full advantage of the power of a scale-out filesystem. Future posts will discuss some of the benefits and challenges this kind of scalable system brings. In my next post, we'll see how the last two elements in the list help to enhance the long-term upgradability of the system.

The CSI plugin 1.0 for Unity is now available


CSI Driver for Dell EMC Unity XT Family enables integration with Kubernetes open-source container orchestration infrastructure and delivers scalable persistent storage provisioning operations for Dell EMC Unity.

Highlights of this Release:

  • Dynamic persistent volume provisioning
  • Snapshot creation
  • Automated volume provisioning workflow on Fibre Channel arrays
  • Ease of volume identification

Software Support:

  • Supports CSI 1.1
  • Supports Kubernetes version 1.14
  • Supports Unity XT All-Flash and Hybrid Unified arrays based on Dell EMC Unity OE v.5.x

Operating Systems:  

  • Supports Red Hat Enterprise Linux 7.5/7.6
  • Supports CentOS 7.6

Resources:




Below you can see a quick demo of using the plugin.

Enterprise Storage Analytics (ESA) version 5.0 is now available.

We have just released the new (5.0) and free version of our vROps adapter, also known as ESA. It's one of the first adapters that support VMware vRealize Operations 8.0! If you are new to this, I highly suggest you start reading about it here:

https://xtremio.me/2014/12/09/emc-storage-analytics-esa-3-0-now-with-xtremio-support/

ESA Overview:

Dell EMC Enterprise Storage Analytics (ESA) for vROps (formerly EMC Storage Analytics) allows VMware vRealize Operations customers to monitor their Dell EMC storage within a single tool (vROps), reducing time to resolution and handling problems before they happen.

Using vROps with ESA, users can:

  • Monitor and view relationships/topologies from VMs to the underlying storage
  • View alerts and anomalies
  • View storage capacity, including capacity planning
  • Use out-of-the-box Reports, Dashboards, Alerts or customize their own
  • View performance, performance trends, analysis, etc.

Highlights of this Release:

  • Rebranding ScaleIO to VxFlex OS
  • Support for latest Dell EMC storage platforms
  • Support vROps 8.0 using the latest libraries and SDK

Installation is super easy: just upgrade your existing ESA or install a new one. The links can be found below.


Once the installation is done, you will want to configure the Dell EMC storage array whose metrics you would like to show, by creating an account for it as shown below.


And once that is done, you will immediately see ESA start collecting data from the array; give it a couple of minutes.



Once the collection is done, you can either use the generic vROps dashboards or use our customized, storage-based reports, as seen below.



Of course, the real value of vROps with ESA is that it gives you end-to-end visibility, from the host to the VM to the storage array, which provides very powerful insight!


Resources: