Virtual Storage Integrator 8.2 is here

Dell EMC Virtual Storage Integrator is a plug-in that extends the VMware vSphere Client, allowing users of the vSphere Client to perform Dell EMC storage management tasks within the same tool they use to manage their VMware virtualized environment.

VSI 8.0 was the first release to support the new vSphere HTML 5 Client

VSI Administration Included:

  • Registering/Unregistering vCenter Servers in a linked mode group
  • Registering/Unregistering Dell EMC Storage
  • Granting/Revoking Storage Access to vCenter users/Groups (New user model)

Storage Management Included:

  • Managing VMFS 6 datastores

Host Management Included:

  • Host Best Practices for XtremIO

8.1 Recap

  • Unity NAS Server Administration
  • Unity NFS Datastore Support
  • Increasing Datastore Capacity (Unity, XtremIO)
  • RDM Support
  • Host Best Practices for VMAX and Unity
  • Changes in PowerMax Storage Group Administration and VMFS Datastore Provisioning

What’s new in 8.2

PowerMax

  • Usage Limits
  • Increase datastore size

You can see a more in-depth review of the PowerMax features here: https://drewtonnesen.wordpress.com/2019/09/27/vsi-8-2/


XtremIO

  • Edit XtremIO VMFS Datastores
  • DRR Savings (Refresh)
  • Enable/Disable IO Alerts



Unity
  • Setting Host IO Size during NFS Datastore Create
  • Advanced Deduplication Support for VMFS datastore provisioning
    Note: Advanced Deduplication was supported for NFS and RDM in the 8.1 release
  • Edit NFS and VMFS Datastore (additional storage settings)

If it’s a new installation, you can download the plugin by clicking the screenshot below

If you want to upgrade your current VSI 8.0 or 8.1, you have two options:

Upgrade VSI Plug-in using DockerHub
Use the procedure in this topic to upgrade the VSI plug-in using DockerHub.
Before you begin
Log out of the vCenter before upgrading the plug-in.
Procedure
1. Log in to the IAPI VM through SSH.
2. Stop the current IAPI container. Use the following command:
docker stop iapi
3. Pull the latest IAPI/VSI image from DockerHub:
docker pull dellemc/vsi
4. You can also perform the following optional steps:
a. Back up the previous IAPI container. Use the following commands:
docker rename iapi iapi-before-upgrade
docker update --restart=no iapi-before-upgrade

b. Back up the IAPI database. Use the following command:
docker cp vsidb:/data/appendonly.aof <OUTPUT_DIRECTORY>
c. Back up the existing VSI plug-in. Use the following command:
cp /opt/files/vsi-plugin.zip <OUTPUT_DIRECTORY>
d. Back up existing SSL certificates. Use the following command:
docker cp iapi:/etc/ssl/certs/java/cacerts <OUTPUT_DIRECTORY>
e. All of the backup files from the previous steps can be restored after the container upgrade by copying the files back to the new container (for example, with docker cp), as sketched below.
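As an illustration only, here is a hedged sketch of copying those backups back into place after the new image is running; the container names and paths simply mirror the backup steps above, and <OUTPUT_DIRECTORY> is whatever directory you saved to:

# restore the IAPI database backup into the vsidb container
docker cp <OUTPUT_DIRECTORY>/appendonly.aof vsidb:/data/appendonly.aof

# put the saved VSI plug-in package back on the IAPI VM
cp <OUTPUT_DIRECTORY>/vsi-plugin.zip /opt/files/vsi-plugin.zip

# restore the SSL certificates into the new iapi container
docker cp <OUTPUT_DIRECTORY>/cacerts iapi:/etc/ssl/certs/java/cacerts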
5. Upgrade the VSI plug-in.
  • To register the VSI plug-in without SSL verification, use the following command:
python3 /opt/scripts/register_extension.py -ignoressl true
  • To see other options for using SSL certificate verification during registration, use the following command:
python3 register_extension.py -h
Note: After the upgrade, SSL is not enabled by default. To enable SSL, see "Change settings in application.conf of the IAPI container."
6. Refresh the VSI plug-in:
  • If you have a vCenter session already open, log out and log back in to refresh the plug-in.
  Or
  • If you do not have a vCenter open, log in to the vSphere Client, log out, and log back in to refresh the plug-in.

Upgrade VSI using Package
Use the procedure in this topic to upgrade VSI 8.0 to VSI 8.1.
Before you begin
You must log out of the vCenter before upgrading the plug-in.
Procedure
1. Download the VSI 8.1 upgrade.tar package from Dell EMC Support Site or DockerHub.
2. Copy the upgrade package to the /tmp folder of the IAPI server.
To copy the upgrade file, use the following command:
scp iapi-vsi-upgrade.tar root@<IAPI server>:/tmp

Note: If SSH is enabled, the default password is root.
3. Change directory to the /tmp folder.
4. Extract the upgrade package. Use the following command:
tar -xf /tmp/<iapi-vsi-upgrade.tar>
Note: The upgrade package contains the .tar file, the upgrade script, and the docker image.
5. Run the upgrade script. Use the following command:
./upgrade_iapi.sh --upgrade <iapi-vsi-upgrade.tar.gz>
The upgrade script backs up and renames the VSI 8.0 docker image and lays down the new docker image. The script also saves the necessary files to a backup folder, for example <iapi-before-upgrade-20190412194829>.
Note: You can also restore the application.conf and SSL certificates after the upgrade completes. Use the restore command with the .tar file that resides in the backup folder:
./upgrade_iapi.sh --restore ./<20190412194829.tar>
6. To upgrade the VSI plug-in, use the following command:
python3 /opt/scripts/register_extension.py -ignoressl true
Note: After the upgrade, SSL is not enabled by default. To enable SSL, see "Change settings in application.conf of the IAPI container."
7. Refresh the VSI plug-in:
  • If you have a vCenter session already open, log out and log back in to refresh the plug-in.
  Or
  • If you do not have a vCenter open, log in to the vSphere Client, log out, and log back in to refresh the plug-in.

VxFlex OS 3.0.1 is now available. Here's what's new


A few months ago, we released version 3.0. If you haven't seen what's new in that release, I encourage you to start by reading about it here: https://xtremio.me/2019/05/01/vxflex-os-v3-0-is-now-available/

If you are new to VxFlex OS:

Dell EMC VxFlex OS, formerly known as ScaleIO Software, is scale-out block storage software that enables customers to create a scale-out server SAN or hyper-converged infrastructure on x86 server hardware.
VxFlex OS uses local storage devices and turns them into shared block storage that has all the benefits of a SAN, but at a fraction of the cost and complexity. VxFlex OS will continue to support existing customer installations and will support an upgrade path from ScaleIO software to the VxFlex OS release.
VxFlex Ready Node is a combination of VxFlex OS and Dell PowerEdge® servers, optimized to run VxFlex OS software, enabling customers to quickly deploy a fully architected, scale-out server SAN with heterogeneous hypervisor support. It also includes Automated Management Services (AMS).
The lightweight VxFlex OS software components are installed on the application servers and communicate via a standard LAN to handle the application I/O requests sent to VxFlex OS block volumes. An extremely efficient decentralized block I/O flow, combined with a distributed, sliced volume layout, results in a massively parallel I/O system that can scale up to thousands of nodes.
VxFlex OS is designed and implemented with enterprise-grade resilience. Furthermore, the software features an efficient, distributed, self-healing process that overcomes media and node failures, without requiring administrator involvement.
Dynamic and elastic, VxFlex OS enables administrators to add or remove nodes and capacity on the fly. The software immediately responds to the changes, rebalancing the storage distribution and achieving a layout that optimally suits the new configuration.
Because VxFlex OS is hardware agnostic, the software works efficiently with various types of disks, including: magnetic (HDD) and solid-state disks (SSD), flash PCI Express (PCIe) cards, networks, and hosts. VxFlex OS can easily be installed in an existing infrastructure as well as in green field configurations.

New in 3.0.1

VMware vVols support


VxFlex OS now supports VMware Virtual Volumes (using VASA 2.0). vVols is another datastore type (in addition to VMFS and NFS) enabling VMs or virtual disks to be managed independently using storage policies (SPBM).

VM storage before vVols


VM storage with vVols


VASA Provider


Below, you can see a demo of the new support for vVols


Email Call-Home alerting
Email alerting for VxFlex OS systems using a customer-provided SMTP email server as an alternative to the standard (E)SRS service.


Fine Granularity (FG) layout support with VMware ESXi 6.7 (U2) HCI configurations
Support for VxFlex Appliance and Integrated Rack customers only (managed by VxFlex Manager).

How it looks in vSphere



I/O latency metrics
Volumes/SDCs can now be queried for I/O latency metrics.


New features in VxFlex OS v3.0.0.2
There are no new features in VxFlex OS v3.0.0.2.
New feature in VxFlex OS v3.0.0.1
Support for upgrade of VxFlex OS on the VMware HCI platform, including SVM replacement support in Gateway/Installer and AMS. Refer to the relevant upgrade guides for detailed upgrade steps.
A new patched SVM is introduced in v3.0.0.1. There are two replacement options:
1. If you are using VxFlex OS v2.x with a SLES-based SVM, you can replace the SVM with a newly patched CentOS 7.5 based SVM. The replace feature is part of the release and is detailed in the documentation.
2. If you are using VxFlex OS v3.0 with the older, not fully patched CentOS 7.5 based SVM, there is an optional manual process to upgrade to the new SVM. The manual process is described in the Upgrade VxFlex OS Guide.

Download location
VxFlex OS 3.0.1 software packages can be downloaded from the following locations:


VxFlex OS Software:
https://support.emc.com/products/33925
ScaleIO Ready Node 13G:
https://support.emc.com/products/41077
VxFlex Ready Node 14G:
https://support.emc.com/products/42216
VxRack Node:
https://support.emc.com/products/39045

Dell OpenManage™ Integration for VMware vCenter, v5.0 is now available

We have just released a new version of the vCenter plugin that I use all the time to manage my Dell servers' lifecycle (firmware updates, profiling, bare-metal deployments, etc.)

To effectively run today’s data centers, you need to manage both physical and virtual infrastructure. This means using multiple disconnected tools and processes to manage your environment, wasting valuable time and resources.

Dell has created a solution to address this problem while increasing task automation right from within the virtualization management console that you use most, VMware vCenter. The OpenManage Integration for VMware vCenter is a virtual appliance that streamlines tools and tasks associated with the management and deployment of Dell servers in your virtual environment.

OpenManage Integration for VMware vCenter (OMIVV) is designed to streamline the management processes in your data center environment by allowing you to use VMware vCenter to manage your entire server infrastructure – both physical and virtual. From monitoring system level information, bubbling up system alerts for action in vCenter, rolling out firmware updates to an ESXi cluster, to bare metal deployment, the OpenManage Integration will expand and enrich your data center management experience with Dell PowerEdge servers.


Enhancements:
1. Support for HTML5 Client
2. Enhancements in the system profile to support the following:
   a. System profile types: Basic and Advanced
   b. System profile edit
   c. 12G and 13G PowerEdge servers

3. Added support for vSphere 6.7 U3, vSphere 6.7 U2, and vSphere 6.5 U3
4. Enhancements in deployment to support the following:
   a. System profile baselining based on the associated cluster profile for a cluster
   b. System Profile Configuration Preview

5. Enhancements in configuration compliance:
   a. Support for firmware and hardware baselining for vSphere clusters
   b. Cluster-level view of drift details with vCenter context

6. Support for context-sensitive help
7. Enhancement in the repository profile to support online repositories: Dell EMC Default Catalog and Validated MX Stack Catalog
8. Support for MX chassis management module firmware update
9. Enhancement in the admin console to support resetting backup settings
10. Enhancement in deployment mode to support 2000 hosts with extra-large mode
11. Support for dual network adapters for OMIVV
12. Dashboard to monitor hosts and chassis

Key changes from 4.x for existing customers

  • 11th Generation server deprecation
    • With 5.0, 12th generation PowerEdge or higher is required to manage a host
    • Legacy customers can still use 4.3.1 with vCenter 6.5 U2 and 6.7 U1 for older environments
  • HTML5 requires a minimum vCenter level
    • The lowest supported vCenter version is 6.5 U2
    • The lowest supported ESXi version is 6.0 U3
    • Newer vCenters can still administer older ESXi hosts, generally up to two versions back
  • Hardware Profile is deprecated
    • A new version of the System Profile – Basic – replicates the old Hardware Profile for simpler configuration captures.
    • If NIC info is captured, even at the Basic type in FX, slot-specific information is still captured per node unless explicitly removed

Defining server configurations – System Profiles


  • With HTML5, the old Hardware Profile becomes the System Profile Type "Basic" – a capture just of BIOS, iDRAC, FC, and basic RAID and NIC settings
  • The System Profile Type “Advanced” allows for a full capture of all configuration settings allowed by iDRAC across PowerEdge peripherals

Defining update catalogs – Repository Profiles


  • Firmware and driver catalogs to be used across the PowerEdge hosts are captured via Repository Profiles
  • It is recommended to use the Dell EMC Repository Manager tool to help capture point-in-time firmware catalog baselines for vSphere needs

Creating compliance baselines – Cluster Profiles


  • Cluster profiles allow for a cluster level baseline to be set on any combination of:
    • Configuration via System Profiles
    • Firmware baselines via a firmware Repository Profile
    • Driver baselines via a driver Repository Profile
  • Compliance and drift are detected on scheduled drift checks, and status is shown in the OMIVV dashboard


Firmware updates managed from vCenter

  • Ability to deploy BIOS and firmware updates from within vCenter, including leveraging DRS for cluster-aware updates

Physical and virtual server health in one place

  • Physical server inventory, monitoring, and alerting directly within vCenter

Speed up new server deployment

  • Templates for easy server configuration and hypervisor deployment on a new system without PXE boot, dropping it into a cluster and linking to vCenter Host Profiles

Manage server lifecycle updates in vCenter


Firmware updates when you are ready

  • Schedule iDRAC supported firmware updates in advance via scheduled jobs, or simply by right-clicking on a managed ESXi host
  • With DRS enabled, OMIVV can perform a “cluster aware” update to help minimize downtime and maintenance windows:
    • Check system health
    • Place the system into Maintenance Mode (moves active VMs to another node in the cluster)
    • Update the firmware from a repository and reboot if needed
    • Pull the system out of Maintenance Mode, ready to work
    • Move to the next system in the cluster until finished
  • Perform up to 15 parallel cluster aware updates at the same time

Simplify new PowerEdge ESXi server deployments


You can download the new version by clicking the screenshot below

You can also watch a quick talk by Damon Earley, who is the Product Manager for it 

Tech Preview Of PowerProtect integration with Kubernetes

Back in December 2018, we held a customer advisory council where we shared our thoughts on our containers strategy. At VMworld 2019, Efri from the Dell Data Protection team shared the container protection plans by presenting and demonstrating a tech preview of using Dell EMC PowerProtect to protect containers. Note that this doesn't mean we don't support taking snapshots of containers to the primary storage array; that is of course already supported. The idea here is to offload the backup to another device with a proper backup catalog and everything else you expect when dealing with enterprise backup policies.

If you think about it, with the power of Dell Technologies and VMware we have the IP assets to come up with such an integration, and that is exactly what we are doing: we are leveraging VMware Velero and adding our own magic sauce on (and in) it to allow our PowerProtect product to back up containers.

The Cube Interview:

A vBrownBag session:

Like any forward-looking demonstration, please note that future features are subject to change.

Dell EMC vSphere 6.7 U3 ISO is now available

On the heels of VMworld 2019, Dell has just released the ISO image of ESXi (vSphere) 6.7 U3.

You can download it by clicking the screenshot below

 

The release notes for it:

ESXi 6.7U3 Build Number: 14320388

Dell Version: A00

Dell Release Date: 23 Aug 2019

VMware Release Date: 20 Aug 2019

 

Important Fix/Changes in the build

==================================

– Driver Changes as compared to previous build: yes

– Base Depot: update-from-esxi6.7-6.7_update03.zip
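
For reference, here is a hedged sketch of one way to apply this offline bundle from the ESXi shell and confirm the resulting build; the datastore path and image profile name are placeholders, and production hosts are more commonly updated through Update Manager:

# list the image profiles contained in the offline bundle
esxcli software sources profile list -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update03.zip

# apply the chosen profile (name is a placeholder), then reboot the host
esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update03.zip -p <PROFILE_NAME>
reboot

# after the reboot, the host should report build 14320388
vmware -vl
esxcli system version get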

 

Drivers included:

 

Intel Drivers:

=====================

– igbn: 1.4.7

– ixgben: 1.7.10

– ixgben_ens: 1.1.3

– i40en_ens: 1.1.3

 

Qlogic Drivers:

======================

– qcnic: 1.0.22.0

– qfle3: 1.0.77.2

– qfle3i: 1.0.20.0

– qfle3f: 1.0.63.0

– qedentv: 3.9.31.2

– qedrntv: 3.9.31.2

– qedf: 1.2.61.0

– qlnativefc: 3.1.16.0

– scsi-qedil: 1.2.13.1

 

Broadcom Drivers:

======================

– bnxtnet: 214.0.195.0

– bnxtroce: 214.0.184.0

 

Mellanox Drivers:

=======================

– nmlx5_core: 4.17.13.8

– nmlx5_rdma: 4.17.13.8

 

Emulex Drivers:

=======================

– lpfc: 12.0.257.5

 

Dell PERC Controller Drivers:

=======================

– dell-shared-perc8: 06.806.90.00

 

Refer to the links below for more information:

 

VMware ESXi 6.7 Update 3 Release Notes

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-67u3-release-notes.html

 

Please find more details on this Dell Customized Image on dell.com/virtualizationsolutions

https://docs.vmware.com/en/VMware-vSphere/index.html

XtremIO is now certified for Pivotal PKS (Pivotal Container Service)

We are not stopping when it comes to integrations around and above Kubernetes.

To improve competitiveness in a fast-paced world, organizations are embracing software as a prime differentiator. A software-driven business is more agile and efficient, innovates faster and responds dynamically to changing market and customer demands. Increasingly, gaining these compelling advantages requires adopting cloud-native applications.

Kubernetes is the de facto standard for container orchestration for microservices and applications. However, enterprise adoption of big data and databases using containers and Kubernetes is hindered by multiple challenges, such as the complexity of persistent storage, availability, and application life-cycle management. Kubernetes provides the agility and scale that modern enterprises require, but it offers the building blocks for infrastructure rather than a turnkey solution.

Originally, containers and Kubernetes were optimized for stateless applications, restricting enterprises to using container technology for workloads such as websites and user interfaces. More recently, stateful workloads have increasingly been run on Kubernetes. Whereas it is relatively easy to run stateless microservices using container technology, stateful applications require slightly different treatment.

There are multiple factors which need to be taken into account when considering handling persistent data using containers, including:

  • Containers are ephemeral by nature, so the data that needs to be persistent must survive through the restart/re-scheduling of a container.
  • When containers are rescheduled, they can die on one host, and might get scheduled on a different host. In such cases, the storage should also be shifted and made available on a new host for the container to start gracefully.
  • The application should not have to deal with the volume/data and the underlying infrastructure; the platform should handle the complexity of unmounting and mounting, etc.
  • Certain applications have a strong sense of identity (such as Kafka, Elastic, etc.), and the disk used by a container with a specific identity is tied to it. If, for any reason, a container with a specific identity is rescheduled, it is important that the disk specifically associated with that identity gets reattached on the new host.

In this white paper, we present a solution that provides the robust capabilities and built-in high availability and scalability of XtremIO and PKS, bringing simplicity to Kubernetes, and creating a turnkey solution for data-heavy workloads. XtremIO CSI Plugin provides Pivotal PKS technology with built-in enterprise-grade container storage and uncompromising performance that extends Kubernetes’ multi-cloud portability to the private cloud.

With this proven integration, organizations can accelerate their digital business objectives via cloud-native applications, from development through testing and into production, on a single platform with ease and agility. Most importantly, eliminating the risk of application downtime preserves critical revenue streams, helps ensure customer satisfaction and strengthens your competitive advantage.

Planning, designing and building a private or hybrid cloud to support cloud-native applications can be a complex and lengthy project that does not address immediate business needs. IT also must maintain and manage the infrastructure that supports these new applications, to ensure that the environment is reliable, secure and upgradeable.

Kubernetes offers great capabilities. However operationalization of these capabilities is not a trivial task. There are many layers involved, such as hardware, operating system, container runtime, software-defined networking, and Kubernetes itself. And all the layers require Day 0, Day 1 and Day 2 activities. Most of these layers also require constant patching (for example, Kubernetes ships a new version every three months), so the effort to maintain this infrastructure grows exponentially.

Kubernetes is the de facto standard for container orchestration platforms. However, Kubernetes can be difficult to operationalize and maintain for the following reasons:

  • High availability — There is no out-of-the-box fault-tolerance for the cluster components themselves (masters and etcd nodes).
  • Scaling — Kubernetes clusters handle scaling the pod/service within the nodes, but do not provide a mechanism to scale masters and etcd Virtual Machines (VMs).
  • Health checks and healing — The Kubernetes cluster performs routine health checks for node health only.
  • Upgrades — Rolling upgrades on a large fleet of clusters are difficult. Who manages the systems they run on?
  • Operating systems — A Kubernetes installation requires multiple machines where an OS is installed. Installing and patching these systems is an overhead.

Pivotal Container Service (PKS) is a purpose-built product that enables enterprises to deploy and consume container services with production-grade Kubernetes, built with high availability, security, multi-tenancy and operational efficiency across private and public clouds.

It effectively solves the complexities around Kubernetes infrastructure and enables you to focus on your business applications. Furthermore, it benefits your developers, providing them with the latest version of Kubernetes and simple integrations with monitoring and logging tools.

Key Benefits

With Pivotal Container Service (PKS), not only do you receive a vanilla Kubernetes distribution supported by Pivotal and VMware, but you also receive the automation to maintain the Kubernetes stack with reduced effort, including the operating system, container runtime, software-defined networking and Kubernetes itself. The automation provided by BOSH enables you to install (Day 0), configure (Day 1), patch and maintain your Kubernetes infrastructure in a healthy state over time (Day 2).

This automation also gives you an effective Kubernetes-as-a-service offering: it lets you create Kubernetes clusters with a single command, expand or shrink them, or repair them as needed. This is a unique capability that enables you to provide your Kubernetes users with as many clusters as they need, and to resize them, leveraging automation as your workload requirements change.
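
For example, here is a hedged sketch of that workflow with the PKS CLI; the cluster name, hostname, plan name, and node count are illustrative:

# create a cluster from a plan defined by your PKS operator
pks create-cluster my-cluster --external-hostname my-cluster.example.com --plan small

# grow (or shrink) the same cluster later as workload requirements change
pks resize my-cluster --num-nodes 5

# fetch kubeconfig credentials and confirm the nodes are up
pks get-credentials my-cluster
kubectl get nodes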

PKS also simplifies the networking configuration, by providing you with VMware NSX-T, which is a software-defined network infrastructure to build cloud-native application environments.

PKS also provides you with a container registry to host your containers, namely, Harbor, a graduated CNCF project.


Figure 1. Pivotal PKS Components

Dell EMC XtremIO X2: The Next Generation All-Flash Array

Dell EMC XtremIO is a purpose-built all-flash array that provides consistent high performance with low latency; unmatched storage efficiency with inline, all-the-time data services; rich application integrated copy services; and unprecedented management simplicity.

The next-generation platform, XtremIO X2, builds upon the unique features of XtremIO to provide even more agility and simplicity for your data center and business. Content-aware in-memory metadata and inline, all-the-time data services have made XtremIO the ultimate platform for virtual server and desktop environments and for workloads that benefit from efficient copy data management. Because XCOPY operations are handled entirely in memory, the architecture completes VMware XCOPY operations twice as fast, reaching bandwidth as high as 40 GB/s.

X2 includes inline, all-the-time data services, thin provisioning, deduplication, compression, replication, D@RE, XtremIO Virtual Copies (XVC), and double SSD failure protection with zero performance impact. Only XtremIO’s unique in-memory metadata architecture can accomplish this.

This white paper is designed for developers, system architects and storage administrators tasked with evaluating or deploying Pivotal PKS, and interested in persistent storage within their deployment. While this document focuses on Dell EMC XtremIO X2 products, the concepts discussed and implemented can be applied to other storage vendor products.

Understanding the material in this document requires prior knowledge of containers, Kubernetes, PKS, and the Dell EMC XtremIO X2 cluster type. These concepts are not covered in detail in this document. However, links to helpful resources on these topics are provided throughout the document and in the References section. In-depth knowledge of Kubernetes is not required.

Click the screenshot below to download the white paper.

You can also watch the demos below to see how it all works.

 

Dell EMC PowerMax is now supporting Kubernetes CSI

Back at DTW, we made a commitment to invest in the Kubernetes ecosystem (which is getting bigger and bigger).

Every journey starts with small steps, and if you are new to the whole 'storage persistency' issue with Docker, I would highly advise that you first read the post I wrote back in November 2018:

https://xtremio.me/2018/12/13/tech-previewing-our-upcoming-xtremio-integration-with-kubernetes-csi-plugin/ Read it? Good! So, after releasing the VxFlex OS and XtremIO CSI plugins, it is now time for PowerMax to get the same support.

If you think about it, PowerMax is the only array that supports both mainframes and (now) containers. Think of all the mission-critical apps it supports, and now all the new DevOps-based apps as well; it can really do it all!

About the support for the PowerMax / CSI initial release:

The CSI Driver adheres to the Container Storage Interface (CSI) specification v1.0 and is compatible with Kubernetes versions 1.13.1, 1.13.2, and 1.13.3 running within a host operating system of Red Hat Enterprise Linux (RHEL) 7.6.


The CSI Driver for Dell EMC PowerMax v1.0 has the following features

  • Supports CSI 1.0
  • Supports Kubernetes version 1.13.1, 1.13.2, and 1.13.3
  • Supports Red Hat Enterprise Linux 7.6 host operating system
  • Supports PowerMax – 5978.221.221 (ELM SR)
  • Persistent Volume (PV) capabilities:
    • create
    • delete
  • Dynamic and Static PV provisioning
  • Volume mount as ext4 or xfs file system on the worker node
  • Volume prefix for easier LUN identification in Unisphere
  • HELM charts installer
  • Access modes:
    • SINGLE_NODE_WRITER
    • SINGLE_NODE_READER_ONLY

Installation

Driver Installation Prerequisites

Full details are in the Installation Guide, but here is a summary of the prerequisites for the driver.

  • Upstream Kubernetes 1.13.x, with specific feature gates enabled
  • Downstream distributions, like OpenShift and PKS, have not been tested and may not work at this time.
  • Docker daemon running and configured with MountFlags=shared on all k8s masters/nodes (see the sketch after this list)
  • Helm and Tiller installed on k8s masters
  • iscsi-initiator-utils package installed on all the Kubernetes nodes
  • Make sure that the ISCSI IQNs (initiators) from the Kubernetes nodes are not part of any existing Hosts on the array(s)
  • A namespace "powermax" should be created prior to the installation. The driver pods and secrets are created in this namespace.
  • Make sure that all the nodes in the Kubernetes cluster have network connectivity with the U4P instance (used by the driver)
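
For illustration, a hedged sketch of preparing a RHEL 7.6 node for these prerequisites; the systemd drop-in is just one common way to set MountFlags=shared, so adapt it to how Docker is managed in your environment:

# install and enable the iSCSI initiator utilities on every Kubernetes node
yum install -y iscsi-initiator-utils
systemctl enable --now iscsid

# run the Docker daemon with shared mount propagation
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/mountflags.conf <<'EOF'
[Service]
MountFlags=shared
EOF
systemctl daemon-reload
systemctl restart docker

# note the node's initiator IQN so you can verify it is not already part of a host on the array
cat /etc/iscsi/initiatorname.iscsi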

Full details are in the Installation Guide.

  1. Clone the repository from the URL – github.com/dell/csi-powermax
  2. Create a Kubernetes secret – "powermax-creds" – with your U4P's username and password in the namespace "powermax"
  3. (Optional) Create a Kubernetes secret – "powermax-certs" – with the CA cert(s) used to sign U4P's SSL certificate. If not found, the install script will create an empty secret in the namespace "powermax"
  4. Create your myvalues.yaml file from the values.yaml file and edit some parameters for your installation. These include:
    1. The U4P URL (must include the port number as well)
    2. clusterPrefix – a unique identifier for the Kubernetes cluster (max 3 characters)
    3. A list of port group names
    4. An optional (white)list of arrays the driver will manage
    5. A set of values for the default storage class (SYMID, SRP, ServiceLevel)
  5. Run the "install.powermax" shell script
  6. Customize the installation (if desired) by adding additional Storage Classes to support multiple arrays, storage resource pools, and service levels (a condensed sketch of steps 1–5 follows this list)
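
A hedged, condensed walk-through of those steps from a shell on a Kubernetes master; file locations inside the repository and the secret values are illustrative, so defer to the Installation Guide and the shipped values.yaml:

# 1. clone the driver repository
git clone https://github.com/dell/csi-powermax.git
cd csi-powermax

# create the namespace the driver objects live in
kubectl create namespace powermax

# 2. U4P credentials secret (username/password below are placeholders)
kubectl create secret generic powermax-creds -n powermax \
  --from-literal=username=<U4P_USER> --from-literal=password=<U4P_PASSWORD>

# 4. start myvalues.yaml from the shipped values.yaml (path within the repo may differ) and edit the parameters listed above
cp <PATH_TO>/values.yaml myvalues.yaml
vi myvalues.yaml   # U4P URL with port, clusterPrefix, port groups, array whitelist, SYMID/SRP/ServiceLevel

# 5. run the installer script shipped with the repository
sh install.powermax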

Using the Driver

See the Installation Document section “Test Examples that use the CSI Driver For Dell EMC PowerMax”

There are a number of test Helm charts and scripts in test/helm that can be used as examples of how to deploy storage using the driver; a minimal PVC example is also sketched at the end of this section.

  • Create a “test” namespace to hold the tests.
  • The directory 2vols contains a simple Helm chart that deploys a container with two volumes. To deploy it run “sh starttest.sh 2vols”. To stop it run “sh stoptest.sh”.
  • There is a sample master/slave Postgres deployment example.

    It deploys a master/slave Postgres database. You can deploy it using "sh postgres.sh"; however, for the script to work correctly you need psql installed locally where you run the script, and the world database from http://pgfoundry.org/projects/dbsamples/ installed at /root/dbsamples-0.1.

    If psql and the sample database are installed, the script will install the sample database into the deployed postgres database and run a sample query against it.
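
If you want a minimal standalone example rather than the test charts, below is a hedged sketch that dynamically provisions a volume and mounts it in a pod; the storage class name powermax is an assumption and should match the default storage class your installation created:

# create a PVC against the driver's storage class, then a pod that mounts it
cat <<'EOF' | kubectl apply -n test -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmax-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: powermax
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pmax-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pmax-claim
EOF

# watch the volume get bound and the pod start
kubectl get pvc,pod -n test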

Documentation and Downloads

CSI Driver for Dell EMC PowerMax v1.0 downloads and documentation are available on:

Github:  https://github.com/dell/csi-powermax

Docker Hub:
https://hub.docker.com/r/dellemc/csi-powermax

Below you can see a quick video that was recorded with the tech preview of the plugin.