The Kubernetes CSI Driver for Dell EMC PowerMax v1.1 is now available

 

Product Overview:

 
 

CSI Driver for PowerMax enables integration with the Kubernetes open-source container orchestration infrastructure and delivers scalable persistent storage provisioning operations for PowerMax and All Flash arrays.

 
 

Highlights of this Release:

 
 

The CSI Driver for Dell EMC PowerMax has the following features:

  • Supports CSI 1.1
  • Supports Kubernetes versions 1.13 and 1.14
  • Supports Unisphere for PowerMax 9.1
  • Supports Fibre Channel
  • Supports Red Hat Enterprise Linux 7.6 host operating system
  • Supports PowerMax – 5978.444.444 and 5978.221.221
  • Supports Linux native multipathing
  • Persistent Volume (PV) capabilities:
    • Create
    • Delete
  • Dynamic and Static PV provisioning
  • Volume mount as ext4 or xfs file system on the worker node
  • Volume prefix for easier LUN identification in Unisphere
  • HELM charts installer
  • Access modes:
    • SINGLE_NODE_WRITER
    • SINGLE_NODE_READER_ONLY
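
To see what dynamic provisioning looks like in practice, here is a minimal sketch of requesting a volume from the driver with kubectl. The StorageClass name (powermax) is an assumption – use whatever StorageClass your Helm chart values actually created.

# Minimal sketch: request a dynamically provisioned PowerMax volume.
# Assumption: the driver is installed and a StorageClass named "powermax" exists
# (for example, created by the HELM charts installer mentioned above).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmax-pvc-demo
spec:
  accessModes:
    - ReadWriteOnce            # maps to the SINGLE_NODE_WRITER access mode
  resources:
    requests:
      storage: 8Gi
  storageClassName: powermax   # assumed StorageClass name
EOF

# Watch the driver create and bind the backing volume
kubectl get pvc pmax-pvc-demo -w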

        
       

Software Support:

  • Supports Kubernetes v1.14
  • Supports PowerMax – 5978.221.221 (ELM SR), 5978.444.444 (Foxtail)
  • CSI v1.1 compliant

     
     

Operating Systems:  

  • Supports CentOS 7.3 and 7.5
  • Supports Red Hat Enterprise Linux 7.6

 
 

 
 

Resources:

The Kubernetes CSI Driver for Dell EMC Isilon v1.0 is now available


Isilon, the de-facto scale-out NAS platform, has so many petabytes deployed all over the world that it’s no wonder I’m getting a lot of requests to have a Kubernetes CSI plugin available for it. Well, now you have it!

Product Overview:

CSI Driver for Dell EMC Isilon enables integration with the Kubernetes open-source container orchestration infrastructure and delivers scalable persistent storage provisioning operations for the Dell EMC Isilon storage system.

Highlights of this Release:

The CSI driver for Dell EMC Isilon enables Isilon to be used as persistent storage in Kubernetes clusters. The driver enables fully automated workflows for dynamic and static persistent volume (PV) provisioning and snapshot creation. The driver uses SmartQuotas to limit volume size.

​  

Software Support:

This introductory release of CSI Driver for Dell EMC Isilon supports the following features:

  • Supports CSI 1.1
  • Supports Kubernetes version 1.14.x
  • Supports Red Hat Enterprise Linux 7.6 host operating system
  • Persistent Volume (PV) capabilities:
    • Create from scratch
    • Create from snapshot
    • Delete
  • Dynamic PV provisioning
  • Volume mount as NFS export
  • HELM charts installer
  • Access modes:
    • SINGLE_NODE_WRITER
    • MULTI_NODE_READER_ONLY
    • MULTI_NODE_MULTI_WRITER
  • Snapshot capabilities:
    • Create
    • Delete

Note:
Volume Snapshots is an Alpha feature in Kubernetes. It is recommended for use only in short-lived testing clusters, as features in the Alpha stage have an increased risk of bugs and a lack of long-term support. See Kubernetes documentation for more information about feature stages.
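
For completeness, here is a minimal sketch of taking a snapshot of an existing claim with the Alpha snapshot API that ships with Kubernetes 1.14. The VolumeSnapshotClass name (isilon-snapclass) and the source PVC name are assumptions – substitute the ones created in your cluster.

# Minimal sketch: snapshot a bound PVC using the Alpha snapshot API (Kubernetes 1.14).
# Assumptions: the Isilon driver is installed, a VolumeSnapshotClass named
# "isilon-snapclass" exists, and "isilon-pvc-demo" is an existing, bound PVC.
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: isilon-pvc-demo-snap1
spec:
  snapshotClassName: isilon-snapclass   # assumed VolumeSnapshotClass name
  source:
    kind: PersistentVolumeClaim
    name: isilon-pvc-demo               # assumed source PVC
EOF

kubectl get volumesnapshot isilon-pvc-demo-snap1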

Refer to Dell EMC Simple Support Matrix for the latest product version qualifications.

Resources:

CSI Driver for Dell EMC Isilon files and documentation are available for download on:

Below, you can see a demo of how it all works:

XtremIO 6.3 is here, Sync, Scale & Protect!

We have just released the new XtremIO 6.3 version with some big enhancements, so let’s dive into each one of them!

XtremIO Remote Protection

XtremIO Metadata-Aware Replication

XtremIO Metadata-Aware Asynchronous Replication leverages the XtremIO architecture to provide highly efficient replication that reduces bandwidth consumption. The XtremIO Content-Aware Storage (CAS) architecture and in-memory metadata allow the replication to transfer only unique data blocks to the target array. Every data block that is written to XtremIO is identified by a fingerprint, which is kept in the data block’s metadata information.

  • If the fingerprint is unique, the data block is physically written and the metadata points to the physical block.
  • If the fingerprint is not unique, it is kept in the metadata and points to an existing physical block.

A non-unique data block, which already exists on the target array, is not sent again (deduplicated). Instead, only the block metadata is replicated and updated at the target array.

The transferred unique data blocks are sent compressed over the wire.
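
To make the fingerprint decision concrete, here is a conceptual sketch (explicitly not XtremIO code) that treats files in a directory as fixed-size blocks and a sha256 digest as a stand-in for XtremIO’s fingerprint: known fingerprints ship only a metadata pointer, unique ones ship the compressed block.

#!/usr/bin/env bash
# Conceptual sketch only -- NOT XtremIO code. It mimics the decision described above:
# a fingerprint already known on the target sends metadata only; a unique fingerprint
# sends the block itself, compressed over the wire.
set -euo pipefail

declare -A seen_on_target          # fingerprint -> already present on the target array
mkdir -p /tmp/replicate

for block in ./blocks/*; do
  [[ -f "$block" ]] || continue
  fp=$(sha256sum "$block" | cut -d' ' -f1)          # stand-in for the in-memory fingerprint
  if [[ -n "${seen_on_target[$fp]:-}" ]]; then
    echo "dedup hit : $block -> send metadata pointer only ($fp)"
  else
    seen_on_target[$fp]=1
    gzip -c "$block" > "/tmp/replicate/$fp.gz"       # unique blocks go compressed over the wire
    echo "unique    : $block -> sent compressed block ($fp)"
  fi
done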

XtremIO Asynchronous Replication is based on a snapshot-shipping method that allows XtremIO to transfer only the changes, by comparing two subsequent snapshots and benefiting from write-folding.

This efficient replication is not limited per volume, per replication session or per single source array, but is a global deduplication technology across all volumes and all source arrays.

In a fan-in environment, replicating from four sites to a single target site, as displayed in the figure below, overall storage capacity requirements (in all primary sites and the target site) are reduced by up to 38 percent[1], providing customers with considerable cost savings.

Figure: Global Data Reduction with XtremIO Metadata-Aware Replication

XtremIO Synchronous Replication (new to 6.3)

XtremIO enables you to protect data both asynchronously and synchronously when a “zero data loss” data protection policy is required.

XtremIO Synchronous replication is fully integrated with Asynchronous replication, in-memory snapshots and iCDM capabilities, which makes it highly efficient.

The challenge with Synchronous replication arises when the source and target are out of sync. This is true during the initial sync phase as well as when a disconnection occurs due to link failure or a user-initiated operation (for example, pausing the replication or performing failover).

Synchronous replication is highly efficient as a result of using these unique capabilities:

  • Metadata-aware replication – For the initial synchronization phase, and when the target gets out of sync, the replication uses metadata-aware replication to efficiently and quickly replicate the data to the target. The replication runs multiple cycles until the gap is minimal and then switches to synchronous replication. This reduces the impact on production to a minimum and accelerates the sync time.
  • Recovery Snapshots – To avoid the need for a full copy, or even a full metadata copy, XtremIO leverages its in-memory snapshot capabilities. Every few minutes, recovery snapshots are created on both sides, which can be used as a baseline in case a disconnection occurs. When the connection is resumed, the system only needs to replicate the changes made since the most recent recovery snapshot prior to the disconnection.
  • Prioritization – To ensure the best performance for the applications using Sync replication, XtremIO automatically prioritizes Sync replication I/O over Async replication I/O. Everything is done automatically and no tuning or special definition is required.
  • Auto-recovery from link disconnection – The replication resumes automatically when the link returns to normal.



XtremIO Synchronous replication is managed at the same location as the Asynchronous replication and supports all Disaster Recovery operations similarly to Asynchronous replication.

Switching from Async to Sync is performed with a single command or via the UI, as can be seen below.



Once changed, you can also see the new replication mode in the remote session view.


Best Protection

XtremIO’s replication efficiency allows it to support replication for high-performance All-Flash Array (AFA) workloads. The replication supports both Synchronous and Asynchronous replication with an RPO as low as 30 seconds, and can keep up to 500 PITs[2]. XtremIO offers simple operations and workflows for managing the replication and its integration with iCDM for both Synchronous and Asynchronous replication:

  • Test a Copy (current or specific PIT) at the remote host
    Testing a copy does not impact the production or the replication, which continues to replicate the changes to the target array, and is not limited by time. The “Test Copy” operation uses the same SCSI identity for the target volumes as will be used in case of failover.
  • Failover
    Using the failover command, it is possible to select the current or any PIT at the target and promote it to the remote host. Promoting a PIT is instantaneous.
  • Failback
    Fails back from the target array to the source array.
  • Repurposing Copies
    XtremIO offers a simple command to create a new environment from any of the replication PITs.
  • Refresh a Repurposing Copy
    With a single command, a repurposed copy can be refreshed from any replication PIT. This is very useful when refreshing the data from production to a test environment that resides on a different cluster, or when refreshing the DEV environment from any of the existing build versions.
  • Creating a Bookmark on Demand
    An ad-hoc PIT can be created when needed. This option is very useful when an application-aware PIT is required, or before performing maintenance or upgrades.

Unified View for Local and Remote Protection

A dedicated tab for data protection exists in the XtremIO GUI for managing XtremIO local and remote protection (Sync & Async replication). In the Data Protection Overview screen, a high-level view displays the status of all local and remote protections, as shown in the figure below.

Figure: Data Protection Overview Screen

The Overview section includes:

  • The minimum RPO compliance for all Local and Remote Protection sessions
  • Protection sessions status chart
  • Connectivity information between peer clusters

From the overview screen, it is easy to drill down to each session.

Consistency Group Protection View

With the new unified protection approach, one view makes it easy to understand the protection of a consistency group. The Topology View pane displays the local and remote protection topology of the Consistency Group, as shown in the figure below. Clicking each of the targets displays the detailed information in the Information pane.

Figure: Consistency Group Protection – Topology View


Secured Snapshots

The purpose of this feature is to allow a customer to protect snapshots created by a “Local Protection Session” against accidental user deletion:

  • Once a “Local Protection Session” creates the snapshot, it is automatically marked as “Secured”
  • The snapshot’s protection will expire once the snapshot is due for deletion by its retention policy

Once a “Local Protection Session” is set to create secured snapshots:

  • It cannot be changed to create non-secured snapshots

Once a snapshot is set as “Secured”:

  • The snapshot cannot be deleted
  • The snapshot-set containing this snapshot cannot be deleted
  • The contents of this snapshot cannot be restored or refreshed

Secured Snapshots – Override

Removing the “Secured” flag is a controlled procedure that requires a formal case:

  • A formal ticket must be filed with Dell EMC – this is a legal obligation.
  • Once the ticket is filed, a technician-level user account can use the new “remove-secured-snap-flag” XMCLI command to release the “Secured” flag.

Secured Snapshots – Examples – Creating a Protected Local Protection Session

The following output displays the creation of a new “Local Protection Session” with the “Secured” flag set.

The following output displays the modification of an existing “Local Protection Session” so that it starts creating snapshots with the “Secured” flag set.

Secured Snapshots – Examples – Snapshot Query & Release

The first output displays a query of the properties of a “Secured Snapshot”.

The second output displays an attempt to delete a “Secured Snapshot”.

The third output displays how to release the “Secured” flag (using a tech-level user account).

Below, you can also see what a protection policy created on a consistency group looks like when you choose not to allow snapshot deletion.

And this is the error you (or someone else) will get when trying to remove this volume.


IPv6 Dual Stack

  • Dual IP versions (4 & 6)
  • XMS Management
    • External: IPv4 & IPv6
    • Internal: IPv4 or IPv6
  • Storage Controller
    • iSCSI: IPv4 & IPv6
  • Native Replication: IPv4 only

This feature allows a customer to assign multiple IP addresses (IPv4 and IPv6) to a single interface.

Let’s discuss the different network interfaces used in XtremIO and see what has changed.

XMS Management traffic allows a user to connect to the various management interfaces (WebUI, XMCLI, RestAPI, etc.). Previously, the user could configure either an IPv4 or an IPv6 address; the new feature allows assigning two IP addresses – one of each version.

XMS Management traffic is also used internally to connect to clusters (SYM and PM). The behavior of this interface hasn’t changed – all managed clusters should use the same IP version.

Storage Controllers’ iSCSI traffic allows external host connectivity. Previously, the user could only configure IP addresses of a single IP version; the new feature allows assigning multiple IP addresses with different versions.

Native Replication behavior remains the same – these interfaces are limited to IPv4 addresses.

IPv6 Dual Stack – XMS Management – New XMCLI Commands

Multiple parameters were added to the “show-xms” XMCLI command to support this feature:

Figure 1: Two new parameters which determine the IP versions

Figure 2: The “Primary IP Address” parameter names remain as-is (to conform with backward-compatibility requirements)

Figure 3: Various new parameters that describe the new “Secondary IP Address”

Two additional XMCLI commands were introduced to support this feature:

Figure 1: The “add-xms-secondary-ip-address” XMCLI command sets the XMS management interface “Secondary IP Address” and “Secondary IP Gateway”

Figure 2: The “remove-xms-secondary-ip-address” XMCLI command removes the XMS management interface “Secondary IP Address” and “Secondary IP Gateway”

Note that there is no “modify” command – to modify the secondary IP address, remove it and set it again with its corrected values.

IPv6 Dual Stack – Storage Controller Management – Interfaces

To support the changes made to the Storage Controllers’ iSCSI interface settings, the following changes were implemented:

  • iSCSI Portals – The user can now configure multiple iSCSI portals with different IP versions on the same interface
  • iSCSI Routes – The user can now configure multiple IPv4 and IPv6 iSCSI routes

As explained earlier, the Storage Controller Native Replication interface behavior remains as-is – these interfaces only allow IPv4 addresses.


Scalability

Scalability Increase

The new 6.3.0 release supports an increased number of objects:

  • Number of volumes and copies (per cluster) was increased to 32K
  • Number of SCSI3 Registrations and Reservations (per cluster) was increased to 32K

Below you can see a demo showing how to configure Sync Replication from the GUI.

And here you can see how Sync replication works with VMware SRM, in conjunction with our unique point-in-time failover, for cases where you don’t want to fail over to the last point in time.

     

Isilon – The Challenge of Files at Scale

I get a lot of requests to post about Isilon, so I teamed up with Ron Steinke, a Technical Staff member of the Isilon Software Engineering team, to write some guest posts. I would really appreciate your feedback, and let us know whether you would like us to write more about Isilon.

Scalability is a harder problem for stateful, feature rich solutions. Distributed filesystems are a prime example of this, as coordinating namespace and metadata updates between multiple head nodes presents a challenge not found in block or object storage.

The key is to remember that this challenge must be viewed in the context of the simplification it brings to application development. Application developers choose to use files to simplify the development process, with less concern about what this means for the ultimate deployment of the application at scale. For a process limited by the application development lifecycle, or dependent on third party applications, the tradeoff of utilizing a more complex storage solution is often the right one.

Part of the challenge of file scalability is fully replicating the typical file environment. Any scalability solution which imposes restrictions which aren’t present in the development environment is likely to run against assumptions built into applications. This leads to major headaches, and the burden of solving them usually lands on the storage administrator. A few of the common workarounds for a scalable flat file namespace illustrate these kinds of limitations.

One approach is to have a single node in the storage cluster managing the namespace, with scalability only for file data storage. While this approach may provide some scalability in other kinds of storage, it’s fairly easy to saturate the namespace node with a file workload.

A good example of this approach is default Apache HDFS implementation. While the data is distributed across many nodes, all namespace work (file creation, deletion, rename) is done by a single name node. This is great if you want to read through the contents of a large subset of your data, perform analytics, and aggregate the statistics. It’s less great if your workload is creating a lot of files and moving them around.

Another approach is namespace aggregation, where different parts of the storage array service different parts of the filesystem. This is effectively taking UNIX mount points to their logical conclusion. While this is mostly transparent to applications, it requires administrators to predict how much storage space each individual mount point will require. With dozens or hundreds of individual mount points, this quickly becomes a massive administration headache.

Worse is what happens when you want to reorganize your storage. The storage allocations that were originally made typically reflect the team structure of the organization at the time the storage was purchased. Organizations being what they are, the human structure is going to change. Changing the data within a single mount point involves renaming a few directories. Changes across mount points, or the creation of new mount points, involve data rewrites that will take longer and longer as the scale of your data grows.

Clearly these approaches will work for certain kinds of workflows. Sadly, most storage administrators don’t have control of their users’ workflows, or even good documentation of what those workflows will be. The combination of arbitrary workflows and future scaling requirements ultimately pushes many organizations away from limited solutions.

The alternative is a scale-out filesystem, which looks like a single machine both from the users’ and administrators’ perspective. A scale-out system isolates the logical layout of the filesystem namespace from the physical layout of where the data is stored. All nodes in a scale-out system are peers, avoiding special roles that may make a particular node a choke point. This parallel architecture also allows each scale-out cluster to grow to meet the users’ needs, allowing storage sizes far larger than any other filesystem platform.

There are four main requirements to provide the transparency of scale-out:

  • A single flat namespace, available from and serviced by all protocol heads. This removes the scaling limitation of a single namespace node, by allowing the capacity for namespace work to scale with the number of nodes in the system.
  • Flat allocation of storage space across the namespace. While the data may ultimately be stored in different cost/performance tiers, these should not be tied to artificial boundaries in the namespace.
  • The ability to add space to the global pool by adding new storage nodes to the existing system. Hiding this from the way applications access the system greatly simplifies future capacity planning.
  • Fail-in-place, the ability of the system to continue operating if drives or nodes fail or are removed from the system. This removal will necessarily reduce the available storage capacity, but should not prevent the system from continuing to function.

All of these capabilities are necessary to take full advantage of the power of a scale-out filesystem. Future posts will discuss some of the benefits and challenges this kind of scalable system brings. In my next post, we’ll see how the last two elements in the list help to enhance the long-term upgradability of the system.

The CSI plugin 1.0 for Unity is now available


CSI Driver for Dell EMC Unity XT Family enables integration with the Kubernetes open-source container orchestration infrastructure and delivers scalable persistent storage provisioning operations for Dell EMC Unity.

Highlights of this Release:

  • Dynamic persistent volume provisioning
  • Snapshot creation
  • Automated volume provisioning workflow on Fibre Channel arrays
  • Ease of volume identification

Software Support:

  • Supports CSI 1.1
  • Supports Kubernetes version 1.14
  • Supports Unity XT All-Flash and Hybrid Unified arrays based on Dell EMC Unity OE v.5.x

Operating Systems:  

  • Supports Red Hat Enterprise Linux 7.5/7.6
  • Supports CentOS 7.6
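
As a quick post-install sanity check, you can confirm the driver pods are running and that a small test claim gets bound. The namespace and StorageClass names below are assumptions – substitute whatever your installation created.

# Minimal post-install sanity check. Assumptions: the driver was installed into a
# namespace called "unity" and the installation created a StorageClass named "unity".
kubectl get pods -n unity                 # controller and node plugin pods should be Running
kubectl get storageclass                  # the Unity StorageClass should be listed

# Create a small test claim and watch it bind
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: unity-pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: unity                 # assumed StorageClass name
EOF
kubectl get pvc unity-pvc-test -w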

Resources:




Below, you can see a quick demo of using the plugin:

Enterprise Storage Analytics (ESA) version 5.0 is now available.

We have just released the new (5.0) and free version of our vROps adapter, also known as ESA. It is one of the first adapters to support VMware vRealize Operations 8.0! If you are new to ESA, I highly suggest you start reading about it here:

https://xtremio.me/2014/12/09/emc-storage-analytics-esa-3-0-now-with-xtremio-support/

ESA Overview:

Dell EMC Enterprise Storage Analytics (ESA) for vROps (formerly EMC Storage Analytics) allows VMware vRealize Operations customers to monitor their Dell EMC storage within a single tool (vROps), reducing time to resolution and handling problems before they happen.

Using vROps with ESA, users can:

  • View relationships/topologies from VMs down to the underlying storage
  • View alerts and anomalies
  • View storage capacity, including capacity planning
  • Use out-of-the-box reports, dashboards, and alerts, or customize their own
  • View performance, performance trends, analysis, etc.

Highlights of this Release:

  • Rebranding of ScaleIO to VxFlex OS
  • Support for the latest Dell EMC storage platforms
  • Support for vROps 8.0 using the latest libraries and SDK

Installation is super easy: just upgrade your existing ESA or install a new one. The links can be found below.


Once the installation is done, you will want to configure the Dell EMC storage array whose metrics you would like to see by creating an account for it, as shown below.


And once that is done, you will immediately see ESA start collecting data from the array; give it a couple of minutes.



Once the collection is done, you can either use the generic vROps dashboards or use our customized, storage-based reports, as seen below.



Of course, the real value of vROps with ESA is that it gives you end-to-end visibility, from the hosts -> VMs -> storage array, which provides very powerful insight!


Resources:

Virtual Storage Integrator 8.2 is here

Dell EMC Virtual Storage Integrator is a plug-in that extends the VMware vSphere Client, allowing users of the vSphere Client to perform Dell EMC storage management tasks within the same tool they use to manage their VMware virtualized environment.

VSI 8.0 was the first release to support the new vSphere HTML 5 Client.

VSI Administration included:

  • Registering/Unregistering vCenter Servers in a linked mode group
  • Registering/Unregistering Dell EMC Storage
  • Granting/Revoking Storage Access to vCenter users/groups (new user model)

Storage Management included:

  • Managing VMFS 6 datastores

Host Management included:

  • Host Best Practices for XtremIO

8.1 Recap

  • Unity NAS Server Administration
  • Unity NFS Datastore Support
  • Increasing Datastore Capacity (Unity, XtremIO)
  • RDM Support
  • Host Best Practices for VMAX and Unity
  • Changes in PowerMax Storage Group Administration and VMFS Datastore Provisioning

What’s new in 8.2

PowerMax

  • Usage Limits
  • Increase datastore size

You can see a more in-depth review of the PowerMax features here: https://drewtonnesen.wordpress.com/2019/09/27/vsi-8-2/


XtremIO

  • Edit XtremIO VMFS Datastores
  • DRR Savings (Refresh)
  • Enable/Disable IO Alerts

Unity

  • Setting Host IO Size during NFS Datastore Create
  • Advanced Deduplication Support for VMFS datastore provisioning
    (Note: Advanced Deduplication was supported for NFS and RDM in the 8.1 release)
  • Edit NFS and VMFS Datastore (additional storage settings)

If it’s a new installation, you can download the plugin by clicking the screenshot below

If you want to upgrade your current VSI 8.0 or 8.1, you have two options:

Upgrade VSI Plug-in using DockerHub

Use the procedure in this topic to upgrade the VSI plug-in using DockerHub.

Before you begin: log out of the vCenter before upgrading the plug-in.

Procedure

1. Log in to the IAPI VM through SSH.

2. Stop the current IAPI container. Use the following command:

   docker stop iapi

3. Pull the latest IAPI/VSI image from DockerHub:

   docker pull dellemc/vsi

4. You can also perform the following optional steps:

   a. Back up the previous IAPI container. Use the following commands:

      docker rename iapi iapi-before-upgrade
      docker update --restart=no iapi-before-upgrade

   b. Back up the IAPI database. Use the following command:

      docker cp vsidb:/data/appendonly.aof <OUTPUT_DIRECTORY>

   c. Back up the existing VSI plug-in. Use the following command:

      cp /opt/files/vsi-plugin.zip <OUTPUT_DIRECTORY>

   d. Back up existing SSL certificates. Use the following command:

      docker cp iapi:/etc/ssl/certs/java/cacerts <OUTPUT_DIRECTORY>

   e. All the backup files from the previous steps can be restored after the container upgrade by copying the files to the new container. For example, docker cp <OUTPUT_DIRECTORY>

5. Upgrade the VSI plug-in.

   To register the VSI plug-in without SSL verification, use the following command:

   python3 /opt/scripts/register_extension.py -ignoressl true

   To see other options for using SSL certificate verification during registration, use the following command:

   python3 register_extension.py -h

   Note: After the upgrade, SSL is not enabled by default. To enable SSL, see “Change settings in application.conf of the IAPI container” in the product guide.

6. Refresh the VSI plug-in:

   If you already have a vCenter open, log out and log back in to refresh the plug-in. If you do not have a vCenter open, log in to the vSphere Client, log out, and log back in to refresh the plug-in.
Upgrade VSI using Package

Use the procedure in this topic to upgrade VSI 8.0 to VSI 8.1.

Before you begin: you must log out of the vCenter before upgrading the plug-in.

Procedure

1. Download the VSI 8.1 upgrade .tar package from the Dell EMC Support Site or DockerHub.

2. Copy the upgrade package to the /tmp folder of the IAPI server. To copy the upgrade file, use the following command:

   scp iapi-vsi-upgrade.tar root@<IAPI server>:/tmp

   Note: If SSH is enabled, the default password is root.

3. Change the directory to the /tmp folder.

4. Extract the upgrade package. Use the following command:

   tar -xf /tmp/<iapi-vsi-upgrade.tar>

   Note: The upgrade package contains the .tar file, the upgrade script, and the Docker image.

5. Run the upgrade script. Use the following command:

   ./upgrade_iapi.sh --upgrade <iapi-vsi-upgrade.tar.gz>

   The upgrade script backs up and renames the VSI 8.0 Docker image and lays down the new Docker image. The script also restores the necessary files to a backup folder, for example <iapi-before-upgrade-20190412194829>.

   Note: You can also restore the application.conf and SSL certificates after the upgrade completes. Use the restore command with the .tar file that resides in the backup folder:

   ./upgrade_iapi.sh --restore ./<20190412194829.tar>

6. To upgrade the VSI plug-in, use the following command:

   python3 /opt/scripts/register_extension.py -ignoressl true

   Note: After the upgrade, SSL is not enabled by default. To enable SSL, see “Change settings in application.conf of the IAPI container” in the product guide.

7. Refresh the VSI plug-in:

   If you already have a vCenter open, log out and log back in to refresh the plug-in. If you do not have a vCenter open, log in to the vSphere Client, log out, and log back in to refresh the plug-in.
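
The package-based path can be condensed the same way. This is a sketch only; the IAPI server address is a placeholder, and the file names simply mirror the steps above – adjust them to the actual package you downloaded.

#!/usr/bin/env bash
# Sketch only: the package-based upgrade steps above, strung together.
# Placeholders: IAPI_SERVER (the IAPI VM address) and the package file names.
set -euo pipefail
IAPI_SERVER="iapi.example.local"        # placeholder address of the IAPI server

# Step 2: copy the upgrade package to the IAPI server (default root password applies if SSH is enabled)
scp iapi-vsi-upgrade.tar root@"$IAPI_SERVER":/tmp

# Steps 3-6 run on the IAPI server itself
ssh root@"$IAPI_SERVER" <<'EOF'
cd /tmp
tar -xf /tmp/iapi-vsi-upgrade.tar                             # step 4: extract the package
./upgrade_iapi.sh --upgrade iapi-vsi-upgrade.tar.gz           # step 5: run the upgrade script
python3 /opt/scripts/register_extension.py -ignoressl true    # step 6: re-register the plug-in
EOF

Then log out of the vSphere Client and back in to refresh the plug-in (step 7).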