Tech Preview Of PowerProtect integration with Kubernetes

Back in December 2018, we held a customer advisory council where we shared our thoughts on our containers strategy. At VMworld 2019, Efri from the Dell Data Protection team shared the containers protection plans, talking through and demonstrating a tech preview of using Dell EMC PowerProtect to protect containers. Note that this doesn't mean we don't support taking snapshots of containers on the primary storage array; that is of course already supported. The idea here is to offload the backup to another device, with a proper backup catalog and everything else you would expect when dealing with enterprise backup policies.

If you think about it, with the power of Dell Technologies and VMware we have the IP assets to come up with such an integration, and that is exactly what we are doing: we are leveraging the VMware Velero IP and adding our own magic sauce on (and in) it to allow our PowerProtect product to back up containers.
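
For context, this is roughly what a Velero-driven backup and restore looks like from the command line. This is plain upstream Velero usage, not the PowerProtect integration itself (which is still a tech preview); the backup name and namespace below are placeholders.

# Back up everything in a given namespace (generic Velero CLI, not PowerProtect-specific)
velero backup create app-backup --include-namespaces my-app

# Check the backup status
velero backup describe app-backup

# Restore that namespace from the backup
velero restore create --from-backup app-backup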

The Cube Interview:

A vBrownBag session:

As with any demonstration of upcoming functionality, please note that future features are subject to change.

Dell EMC vSphere 6.7 U3 ISO is now available

On the heels of VMworld 2019, Dell has just released the ISO image of ESXi (vSphere) 6.7 U3.

You can download it by clicking the screenshot below

 

The release notes for it:

ESXi 6.7U3 Build Number: 14320388

Dell Version : A00

Dell Release Date : 23 Aug 2019

VMware Release Date: 20 Aug 2019

 

Important Fix/Changes in the build

==================================

– Driver Changes as compared to previous build : yes

– Base Depot: update-from-esxi6.7-6.7_update03.zip

 

Drivers included:

 

Intel Drivers:

=====================

– igbn: 1.4.7

– ixgben: 1.7.10

– ixgben_ens: 1.1.3

– i40en_ens: 1.1.3

 

Qlogic Drivers:

======================

– qcnic: 1.0.22.0

– qfle3: 1.0.77.2

– qfle3i: 1.0.20.0

– qfle3f: 1.0.63.0

– qedentv: 3.9.31.2

– qedrntv: 3.9.31.2

– qedf: 1.2.61.0

– qlnativefc: 3.1.16.0

– scsi-qedil: 1.2.13.1

 

Broadcom Drivers:

======================

– bnxtnet: 214.0.195.0

– bnxtroce: 214.0.184.0

 

Mellanox Drivers:

=======================

– nmlx5_core: 4.17.13.8

– nmlx5_rdma: 4.17.13.8

 

Emulex Drivers:

=======================

– lpfc: 12.0.257.5

 

Dell PERC Controller Drivers:

=======================

– dell-shared-perc8 – 06.806.90.00

 

Refer to the links below for more information:

 

VMware ESXi 6.7 Update 3 Release Notes

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-67u3-release-notes.html

 

Please find more details on this Dell Customized Image on dell.com/virtualizationsolutions

https://docs.vmware.com/en/VMware-vSphere/index.html

XtremIO is now certified for Pivotal PKS (Pivotal Container Service)

We are not stopping with the integrations around and above Kubernetes.

To improve competitiveness in a fast-paced world, organizations are embracing software as a prime differentiator. A software-driven business is more agile and efficient, innovates faster and responds dynamically to changing market and customer demands. Increasingly, gaining these compelling advantages requires adopting cloud-native applications.

Kubernetes is the de facto standard for container orchestration for microservices and applications. However, enterprise adoption of big data and databases using containers and Kubernetes is hindered by multiple challenges, such as the complexity of persistent storage, availability, and application life-cycle management. Kubernetes provides the agility and scale that modern enterprises require, but it provides the building blocks for infrastructure, not a turnkey solution.

Originally, containers and Kubernetes were optimized for stateless applications, restricting enterprises to using container technology for workloads such as websites and user interfaces. More recently, stateful workloads have increasingly been run on Kubernetes. Whereas it is relatively easy to run stateless microservices using container technology, stateful applications require slightly different treatment.

There are multiple factors to take into account when handling persistent data with containers (a minimal example follows this list), including:

  • Containers are ephemeral by nature, so the data that needs to be persistent must survive through the restart/re-scheduling of a container.
  • When containers are rescheduled, they can die on one host, and might get scheduled on a different host. In such cases, the storage should also be shifted and made available on a new host for the container to start gracefully.
  • The application should not have to deal with the volume/data and the underlying infrastructure; the platform should handle the complexity of unmounting, mounting, etc.
  • Certain applications have a strong sense of identity (such as Kafka, Elastic, etc.), and the disk used by a container with a specific identity is tied to it. If, for any reason, a container with a specific identity is rescheduled, it is important that the disk associated with that identity gets reattached on the new host.
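
As a minimal illustration of the decoupling described above, the sketch below claims persistent storage through a PersistentVolumeClaim and mounts it into a pod; the claim keeps the data alive even if the pod is rescheduled to another node. The names, image, and sizes are placeholders, and the cluster's default storage class is assumed.

# Hedged sketch: a PVC plus a pod that mounts it (names and sizes are placeholders)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-data
EOF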

In this white paper, we present a solution that provides the robust capabilities and built-in high availability and scalability of XtremIO and PKS, bringing simplicity to Kubernetes, and creating a turnkey solution for data-heavy workloads. XtremIO CSI Plugin provides Pivotal PKS technology with built-in enterprise-grade container storage and uncompromising performance that extends Kubernetes’ multi-cloud portability to the private cloud.

With this proven integration, organizations can accelerate their digital business objectives via cloud-native applications, from development through testing and into production, on a single platform with ease and agility. Most importantly, eliminating the risk of application downtime preserves critical revenue streams, helps ensure customer satisfaction and strengthens your competitive advantage.

Planning, designing and building a private or hybrid cloud to support cloud-native applications can be a complex and lengthy project that does not address immediate business needs. IT also must maintain and manage the infrastructure that supports these new applications, to ensure that the environment is reliable, secure and upgradeable.

Kubernetes offers great capabilities. However, operationalizing these capabilities is not a trivial task. There are many layers involved, such as hardware, operating system, container runtime, software-defined networking, and Kubernetes itself, and all of these layers require Day 0, Day 1, and Day 2 activities. Most of these layers also require constant patching (for example, Kubernetes ships a new version every three months), so the effort to maintain this infrastructure grows quickly.

Kubernetes is the de facto standard for container orchestration platforms. However, Kubernetes can be difficult to operationalize and maintain for the following reasons:

  • High availability — There is no out-of-the-box fault-tolerance for the cluster components themselves (masters and etcd nodes).
  • Scaling — Kubernetes clusters handle scaling the pod/service within the nodes, but do not provide a mechanism to scale masters and etcd Virtual Machines (VMs).
  • Health checks and healing — The Kubernetes cluster performs routine health checks for node health only.
  • Upgrades — Rolling upgrades on a large fleet of clusters are difficult. Who manages the systems they run on?
  • Operating systems — A Kubernetes installation requires multiple machines where an OS is installed. Installing and patching these systems is an overhead.

Pivotal Container Service (PKS) is a purpose-built product that enables enterprises to deploy and consume container services with production-grade Kubernetes, built with high availability, security, multi-tenancy and operational efficiency across private and public clouds.

It effectively solves the complexities around Kubernetes infrastructure and enables you to focus on your business applications. Furthermore, it benefits your developers, providing them with the latest version of Kubernetes and simple integrations with monitoring and logging tools.

Key Benefits

With Pivotal Container Service (PKS), not only do you receive a vanilla Kubernetes distribution supported by Pivotal and VMware, but you also receive the automation to maintain the Kubernetes stack with reduced effort, including the operating system, container runtime, software-defined networking and Kubernetes itself. The automation provided by BOSH enables you to install (Day 0), configure (Day 1), patch and maintain your Kubernetes infrastructure in a healthy state over time (Day 2).

This automation also extends your capabilities to have an effective Kubernetes as a service offering, and lets you create Kubernetes clusters with a single command, to either expand or shrink the clusters, or to repair them as needed. This is a unique capability that enables you to provide your Kubernetes users with as many clusters as they need, and to resize them, leveraging automation as your workload requirements change.
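
To make the "single command" point concrete, here is a hedged sketch of the PKS CLI flow; the cluster name, hostname, plan name, and node counts are placeholders, and the exact flags may vary between PKS releases.

# Create a cluster from a predefined plan (names and values are placeholders)
pks create-cluster k8s-demo --external-hostname k8s-demo.example.com --plan small --num-nodes 3

# Fetch kubeconfig credentials so kubectl can talk to the new cluster
pks get-credentials k8s-demo

# Grow (or shrink) the cluster later as workload requirements change
pks resize k8s-demo --num-nodes 5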

PKS also simplifies the networking configuration, by providing you with VMware NSX-T, which is a software-defined network infrastructure to build cloud-native application environments.

PKS also provides you with a container registry to host your containers, namely, Harbor, a graduated CNCF project.


  1. Pivotal PKS Components

Dell EMC XtremIO X2: The Next Generation All-Flash Array

Dell EMC XtremIO is a purpose-built all-flash array that provides consistent high performance with low latency; unmatched storage efficiency with inline, all-the-time data services; rich application integrated copy services; and unprecedented management simplicity.

The next-generation platform, XtremIO X2, builds upon the unique features of XtremIO to provide even more agility and simplicity for your data center and business. Content-aware in-memory metadata and inline, all-the-time data services have made XtremIO the ultimate platform for virtual server and desktop environments and for workloads that benefit from efficient copy data management. The architecture makes VMware XCOPY operations twice as fast, reaching bandwidth as high as 40 GB/s, because they are in-memory operations.

X2 includes inline, all-the-time data services: thin provisioning, deduplication, compression, replication, D@RE, XtremIO Virtual Copies (XVC), and double SSD failure protection, all with zero performance impact. Only XtremIO's unique in-memory metadata architecture can accomplish this.

This white paper is designed for developers, system architects and storage administrators tasked with evaluating or deploying Pivotal PKS, and interested in persistent storage within their deployment. While this document focuses on Dell EMC XtremIO X2 products, the concepts discussed and implemented can be applied to other storage vendor products.

Understanding the material in this document requires prior knowledge of containers, Kubernetes, PKS, and the Dell EMC XtremIO X2 cluster type. These concepts are not covered in detail in this document. However, links to helpful resources on these topics are provided throughout the document and in the References section. In-depth knowledge of Kubernetes is not required.

Click the screenshot below to download the white paper.

You can also watch the below demos to see how it all works

 

Dell EMC PowerMax is now supporting Kubernetes CSI

Back at Dell Technologies World, we made a commitment to invest in the Kubernetes ecosystem (which is getting bigger and bigger..)

Every journey starts with small steps, and if you are new to the whole 'storage persistency' issue with Docker, I would highly advise that you first read the post I wrote back in November 2018:

https://xtremio.me/2018/12/13/tech-previewing-our-upcoming-xtremio-integration-with-kubernetes-csi-plugin/

Read it? Good! So, after releasing the VxFlex OS and XtremIO CSI plugins, it is now time for PowerMax to get the same support.

If you think about it, PowerMax is the only array that supports both mainframes and (now) containers. Think of all the mission-critical apps it supports, and now all the new DevOps-based apps as well; it can really do it all!

About the support for the PowerMax / CSI initial release:

The CSI Driver adheres to the Container Storage Interface (CSI) specification v1.0 and is compatible with Kubernetes versions 1.13.1, 1.13.2, and 1.13.3, running within a host operating system of Red Hat Enterprise Linux (RHEL) 7.6.

Services Design, Technology, & Supportability Improvements

The CSI Driver for Dell EMC PowerMax v1.0 has the following features:

  • Supports CSI 1.0
  • Supports Kubernetes versions 1.13.1, 1.13.2, and 1.13.3
  • Supports Red Hat Enterprise Linux 7.6 host operating system
  • Supports PowerMax – 5978.221.221 (ELM SR)
  • Persistent Volume (PV) capabilities:
    • create
    • delete
  • Dynamic and Static PV provisioning
  • Volume mount as ext4 or xfs file system on the worker node
  • Volume prefix for easier LUN identification in Unisphere
  • HELM charts installer
  • Access modes (see the PVC sketch after this list):
    • SINGLE_NODE_WRITER
    • SINGLE_NODE_READER_ONLY
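
As a hedged illustration of those access modes, the sketch below requests a volume through a storage class created by the driver's Helm chart installation. The class name "powermax" is an assumption (use whatever your installation defines), and SINGLE_NODE_WRITER corresponds to the ReadWriteOnce access mode in a PVC.

# Hedged sketch: a PVC against the driver's storage class (the class name is an assumption)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmax-claim
spec:
  storageClassName: powermax     # placeholder; use the storage class defined during installation
  accessModes:
    - ReadWriteOnce              # maps to the driver's SINGLE_NODE_WRITER capability
  resources:
    requests:
      storage: 8Gi
EOF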

Installation

Driver Installation Prerequisites

Full details are in the Installation Guide, but here is a summary of the prerequisites for the driver (a node-preparation sketch follows the list).

  • Upstream Kubernetes 1.13.x, with specific feature gates enabled
  • Downstream distributions, like OpenShift and PKS, have not been tested and may not work at this time.
  • Docker daemon running and configured with MountFlags=shared on all k8s masters/nodes
  • Helm and Tiller installed on k8s masters
  • iscsi-initiator-utils package installed on all the Kubernetes nodes
  • Make sure that the iSCSI IQNs (initiators) from the Kubernetes nodes are not part of any existing Hosts on the array(s)
  • A namespace “powermax” should be created prior to the installation. The driver pods and secrets are created in this namespace.
  • Make sure that all the nodes in the Kubernetes cluster have network connectivity to the Unisphere for PowerMax (U4P) instance used by the driver
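
A hedged sketch of that node preparation follows; the systemd drop-in path and package commands are one common way to do this on RHEL 7.6 and may differ in your environment.

# Configure the Docker daemon with MountFlags=shared via a systemd drop-in (one possible approach)
mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' > /etc/systemd/system/docker.service.d/mount-flags.conf
[Service]
MountFlags=shared
EOF
systemctl daemon-reload && systemctl restart docker

# Install the iSCSI initiator tools on every Kubernetes node (RHEL 7.6)
yum install -y iscsi-initiator-utils

# Create the namespace that will hold the driver pods and secrets
kubectl create namespace powermax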

Full details are in the Installation Guide; the steps below summarize the flow, and a condensed command-line sketch follows them.

  1. Clone the repository from the URL – github.com/dell/csi-powermax
  2. Create a Kubernetes secret – “powermax-creds” – with your U4P username and password in the namespace “powermax”
  3. (Optional) Create a Kubernetes secret – “powermax-certs” – with the CA cert(s) used to sign U4P’s SSL certificate. If not found, the install script will create an empty secret in the namespace “powermax”
  4. Create your myvalues.yaml file from the values.yaml file and edit some parameters for your installation. These include:
    1. The U4P URL (must include the port number as well)
    2. clusterPrefix – A unique identifier identifying the Kubernetes cluster (max 3 characters)
    3. A list of port group names
    4. An optional (white)list of arrays the driver will manage
    5. A set of values for the default storage class (SYMID, SRP, ServiceLevel)
  5. Run the “install.powermax” shell script
  6. Customize the installation (if desired) by adding additional Storage Classes to support multiple arrays, storage resource pools, service levels
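
A hedged, condensed version of those steps is below. The repository URL, secret name, namespace, and script name come from the list above; the directory layout inside the repository and the secret key names are assumptions, so check the Installation Guide for the exact syntax.

# Clone the driver repository
git clone https://github.com/dell/csi-powermax
cd csi-powermax/helm                     # assumed layout; adjust to the actual repo structure

# Secret with the Unisphere (U4P) credentials in the "powermax" namespace (key names assumed)
kubectl create secret generic powermax-creds \
  --from-literal=username=<U4P_USERNAME> \
  --from-literal=password=<U4P_PASSWORD> \
  -n powermax

# Start from the shipped values.yaml and edit the parameters listed above
cp values.yaml myvalues.yaml
vi myvalues.yaml                         # U4P URL (with port), clusterPrefix, port groups, arrays, SYMID/SRP/ServiceLevel

# Run the installer
sh install.powermax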

Using the Driver

See the Installation Document section “Test Examples that use the CSI Driver For Dell EMC PowerMax”

There are a number of test Helm charts and scripts in test/helm that can be used as examples of how to deploy storage using the driver; a condensed run-through follows the list below.

  • Create a “test” namespace to hold the tests.
  • The directory 2vols contains a simple Helm chart that deploys a container with two volumes. To deploy it run “sh starttest.sh 2vols”. To stop it run “sh stoptest.sh”.
  • There is a sample master/slave Postgres deployment example that deploys a master/slave Postgres database. You can deploy it using “sh postgres.sh”; however, for the script to work correctly you need psql installed locally where you run the script, and the world sample database from http://pgfoundry.org/projects/dbsamples/ installed at /root/dbsamples-0.1.

    If psql and the sample database are installed, the script will install the sample database into the deployed Postgres database and run a sample query against it.
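
A quick run-through of the simplest test, based on the steps above (the directory and script names come from the repository's test/helm folder):

# Namespace that holds the test deployments
kubectl create namespace test

cd test/helm
sh starttest.sh 2vols            # deploys a container with two driver-provisioned volumes
kubectl get pvc -n test          # both claims should reach the Bound state
sh stoptest.sh                   # tears the test back down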

Documentation and Downloads

CSI Driver for Dell EMC PowerMax v1.0 downloads and documentation are available on:

Github:  https://github.com/dell/csi-powermax

Docker Hub:
https://hub.docker.com/r/dellemc/csi-powermax

Below you can see a quick video that was done with the tech preview of the plugin

The VMware vRealize (vCenter) Orchestrator (vRO) plugin 4.0 for XtremIO is now available

We have just released version 4.0 of the plugin. I'm thankful for all the customers who are using it and showing us that the way forward is to automate everything!

This version brings support for XIOS 6.3 as well as additional workflows for native replication and QoS. We have also introduced the concept of low-level objects, which greatly expands what can be done in vRO with XtremIO storage.

In this release, there are about 180 predefined workflows in the XtremIO vRO plugin, including array configuration, storage provisioning, storage management, and vSphere-integrated workflows.

This allows users and storage admins to complete the most common storage operations from vRealize Orchestrator automatically, as well as to combine the existing workflows and actions into customized workflows, and by doing so to handle both Day 1 and Day 2 operations to meet your specific automation needs, without having to log in to each storage array.

You can download the workflow user guide from https://support.emc.com/docu92210_XtremIO_Plugin_for_VMware_vRealize_Orchestrator_4.0.0_Workflows_User_Guide.pdf?language=en_US&source=Coveo

And the installation and configuration guide from https://support.emc.com/docu92209_XtremIO_Plugin_for_VMware_vRealize_Orchestrator_4.0.0_Installation_and_Configuration_Guide.pdf?language=en_US&source=Coveo

You can download the new plugin by clicking the screenshot below

You can watch a short recording of a workflow and its integration with vRA here

and a longer (more detailed) one, here

Virtual Storage Integrator 8.1 is here

We have just released a minor update to the new(ish) HTML5-based vCenter plugin. If you are new to this, I highly suggest you first read the post I wrote here: https://xtremio.me/2018/12/14/vsi-virtual-storage-integrator-8-0-is-here/

This section provides a list of features supported in this release.
VSI 8.1 release supports the following features:


VMAX All Flash/PowerMax support:
Storage System and Storage Group Administration
Provisioning VMFS Datastores
Viewing VMFS Datastores
RDM creation on VMAX All Flash/PowerMax
View VM disk information for VMAX All Flash/PowerMax
VMAX All Flash/PowerMax Host Best Practices


XtremIO support:
Storage System Administration
Provisioning VMFS Datastores
Viewing VMFS Datastore Details
Increase VMFS Datastore capacity
RDM creation on XtremIO
View VM disk information for XtremIO
XtremIO Host Best Practices


Unity support:
Storage System and Storage Pool Administration
Provisioning VMFS Datastores
Viewing VMFS Datastores
Provisioning NFS Datastores
Viewing NFS Datastores
Increase NFS and VMFS Datastore capacity
RDM creation on Unity
View VM disk information for Unity
NAS Server Administration
Unity Host Best Practices
Users/User Group Administration

To download the new client, click the screenshot below

 

And you can download the documentation by clicking this URL: https://support.emc.com/docu94130_VSI_for_VMware_vSphere_Client_8.1_Product_Guide.pdf?language=en_US&source=Coveo

One more thing..

We are starting to containerize some of our data services. This statement is deliberately vague, but in the context of the VSI plugin, let's assume you are already running VSI 8.0 and want to upgrade to VSI 8.1. You have two options:

Upgrade VSI Plug-in using DockerHub

Use the procedure in this topic to upgrade the VSI plug-in using DockerHub.

Before you begin
You must log out of the vCenter before upgrading the plug-in.

Procedure
1. Log in to the IAPI VM through SSH.
2. Stop the current IAPI container. Use the following command: docker stop iapi
3. Pull the latest IAPI/VSI image from DockerHub: docker pull dellemc/vsi
4. You can also perform the following optional steps:
a. Back up the previous IAPI container. Use the following commands:
docker rename iapi iapi-before-upgrade
docker update --restart=no iapi-before-upgrade
b. Back up the IAPI database. Use the following command:
docker cp vsidb:/data/appendonly.aof <OUTPUT_DIRECTORY>
c. Back up the existing VSI plug-in. Use the following command:
cp /opt/files/vsi-plugin.zip <OUTPUT_DIRECTORY>
d. Back up the existing SSL certificates. Use the following command:
docker cp iapi:/etc/ssl/certs/java/cacerts <OUTPUT_DIRECTORY>
e. Any files backed up in the previous steps can be restored after the container upgrade, if needed, by copying the files back to the new container, for example with docker cp <OUTPUT_DIRECTORY>
5. Upgrade the VSI plug-in.
• To register the VSI plug-in without SSL verification, use the following command:
python3 /opt/scripts/register_extension.py -ignoressl true
• To see additional options for using SSL certificate verification during registration, use the following command:
python3 register_extension.py -h
Note: After the upgrade, SSL is not enabled by default. To enable SSL, see "Change Settings in application.conf of the IAPI container".
6. Refresh the VSI plug-in:
• If you already have a vCenter session open, log out and log back in to refresh the plug-in.
Or
• If you do not have a vCenter open, log in to the vSphere Client, log out, and log back in to refresh the plug-in.
Upgrade VSI using Package

Use the procedure in this topic to upgrade VSI 8.0 to VSI 8.1.

Before you begin
You must log out of the vCenter before upgrading the plug-in.

Procedure
1. Download the VSI 8.1 upgrade .tar package from the Dell EMC Support Site or DockerHub.
2. Copy the upgrade package to the /tmp folder of the IAPI server. To copy the upgrade file, use the following command:
scp iapi-vsi-upgrade.tar root@<IAPI server>:/tmp
Note: If SSH is enabled, the default password is root.
3. Change the directory to the /tmp folder.
4. Extract the upgrade package. Use the following command:
tar -xf /tmp/<iapi-vsi-upgrade.tar>
Note: The upgrade package contains the .tar file, the upgrade script, and the docker image.
5. Run the upgrade script. Use the following command:
./upgrade_iapi.sh --upgrade <iapi-vsi-upgrade.tar.gz>
The upgrade script backs up and renames the VSI 8.0 docker image and lays down the new docker image. The script also restores the necessary files to a backup folder, for example <iapi-beforeupgrade-20190412194829>.
Note: You can also restore the application.conf and SSL certificates after the upgrade completes. Use the restore command with the .tar file that resides in the backup folder:
./upgrade_iapi.sh --restore ./<20190412194829.tar>
6. To upgrade the VSI plug-in, use the following command:
python3 /opt/scripts/register_extension.py -ignoressl true
Note: After the upgrade, SSL is not enabled by default. To enable SSL, see "Change settings in application.conf of the IAPI container".


 

VMware SRM 8.2 is here with XtremIO SRA AND point in time failover support

VMware has just released a new version of SRM (8.2). It's a pretty big release, because for the first time the SRM server itself can run as a Photon OS-based OVA, as opposed to the Windows server it had to run on in previous versions. You may ask yourself what's the big deal; well:

  • You can now save on the Windows Server licensing costs.
  • Deployment is a breeze.
  • Updates to the SRM software will be literally a click of a button.

But that's only one big change. The other is that the SRA (storage replication adapter), which is the glue between vCenter and the storage array, now runs as a Docker image inside the SRM appliance. Here at Dell EMC we have been busy working on it, and I'm proud to announce day 0 support for SRM 8.2 with our XtremIO and Unity arrays (other arrays will follow shortly).

Apart from the SRA support, we have also certified our unique PiT (point-in-time) integration with SRM 8.2. This feature allows you to specify a specific point in time to test or fail over to when using SRM. Think about it: let's say you have a logical data corruption, a malware infection, etc. It's not enough to run a failover to the DR site; you want to fail over to a point in time BEFORE the data corruption, and we provide you just that with XtremIO, SRM, and our free vCenter plugin (VSI 7.4).

You can download SRM 8.2 by clicking the screenshot below

You can download the new SRAs (one for the classic Windows-based SRM and one for the new, Photon-based appliance) by clicking the screenshot below

And finally, you can download the VSI plugin from here (click the screenshot below)

Below you can see a demo showing you how it all works