The VMware vRealize (vCenter) Orchestrator (vRO) Plugin 4.0 for XtremIO is now available

We have just released version 4.0 of the plugin. I'm thankful to all the customers who are using it and showing us that the way forward is to automate everything!

This version brings support for XIOS 6.3, as well as additional workflows for native replication and QoS. We have also introduced the concept of low-level objects, which greatly expands what can be done in vRO with XtremIO storage.

This release includes about 180 predefined workflows within the XtremIO vRO plugin, covering array configuration, storage provisioning, storage management, and vSphere-integrated workflows.

This allows users and storage admins to complete the most common storage operations automatically from vRealize Orchestrator, as well as to combine the existing workflows and actions into customized workflows. In doing so, you can handle both day 1 and day 2 operations to meet your specific automation needs, without having to log in to each storage array.
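
If you want to take it a step further, you can also kick off these workflows from outside the vRO client through the standard vRO REST API. Here is a minimal, hedged sketch; the workflow ID and the parameter names below are hypothetical placeholders that you would look up in your own vRO inventory (the endpoint path and port are the vRO defaults).

# Minimal sketch: start an XtremIO provisioning workflow via the vRO REST API
# The workflow ID and parameter names/values are placeholders (assumptions) –
# look them up in your own vRO inventory before running this.
VRO=https://vro.example.local:8281
WF_ID=00000000-0000-0000-0000-000000000000   # hypothetical workflow ID

curl -k -u vroadmin:'password' \
  -H "Content-Type: application/json" \
  -X POST "$VRO/vco/api/workflows/$WF_ID/executions" \
  -d '{
        "parameters": [
          { "name": "volumeName",   "type": "string", "value": { "string": { "value": "demo-vol-01" } } },
          { "name": "volumeSizeGB", "type": "number", "value": { "number": { "value": 100 } } }
        ]
      }'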

You can download the workflow user guide from https://support.emc.com/docu92210_XtremIO_Plugin_for_VMware_vRealize_Orchestrator_4.0.0_Workflows_User_Guide.pdf?language=en_US&source=Coveo

And the installation and configuration guide from https://support.emc.com/docu92209_XtremIO_Plugin_for_VMware_vRealize_Orchestrator_4.0.0_Installation_and_Configuration_Guide.pdf?language=en_US&source=Coveo

You can download the new plugin by clicking the screenshot below

You can watch a short recording of a workflow and its integration with vRA here

and a longer (more detailed) one, here

Virtual Storage Integrator 8.1 is here

We have just released a minor update to the new(ish) HTML5-based vCenter plugin. If you are new to it, I highly suggest you first read the post I wrote here: https://xtremio.me/2018/12/14/vsi-virtual-storage-integrator-8-0-is-here/

The VSI 8.1 release supports the following features:


VMAX All Flash/PowerMax support:
Storage System and Storage Group Administration
Provisioning VMFS Datastores
Viewing VMFS Datastores
RDM creation on VMAX All Flash/PowerMax
View VM disk information for VMAX All Flash/PowerMax
VMAX All Flash/PowerMax Host Best Practices


XtremIO support:
Storage System Administration
Provisioning VMFS Datastores
Viewing VMFS Datastore Details
Increase VMFS Datastore capacity
RDM creation on XtremIO
View VM disk information for XtremIO
XtremIO Host Best Practices


Unity support:
Storage System and Storage Pool Administration
Provisioning VMFS Datastores
Viewing VMFS Datastores
Provisioning NFS Datastores
Viewing NFS Datastores
Increase NFS and VMFS Datastore capacity
RDM creation on Unity
View VM disk information for Unity
NAS Server Administration
Unity Host Best Practices
Users/User Group Administration

To download the new client, click the screenshot below

 

And you can download the documentation by clicking this URL: https://support.emc.com/docu94130_VSI_for_VMware_vSphere_Client_8.1_Product_Guide.pdf?language=en_US&source=Coveo

One more thing..

We are starting to containerize some of our data services. That statement is deliberately vague, but in the context of the VSI plugin, let's assume you are already running VSI 8.0 and want to upgrade to VSI 8.1. You have two options:

Upgrade VSI Plug-in using DockerHub
Use the procedure in this topic to upgrade the VSI plug-in using DockerHub.
Before you begin
You must log out of the vCenter before upgrading the plug-in.
Procedure
1. Log in to the IAPI VM through SSH.
2. Stop the current IAPI container. Use the following command: docker stop iapi
3. Pull the latest IAPI/VSI image from DockerHub: docker pull dellemc/vsi
4. You can also perform the following optional steps:
a. Back up the previous IAPI container. Use the following commands:
docker rename iapi iapi-before-upgrade
docker update --restart=no iapi-before-upgrade
b. Back up the IAPI database. Use the following command:
docker cp vsidb:/data/appendonly.aof <OUTPUT_DIRECTORY>
c. Back up the existing VSI plug-in. Use the following command:
cp /opt/files/vsi-plugin.zip <OUTPUT_DIRECTORY>
d. Back up the existing SSL certificates. Use the following command:
docker cp iapi:/etc/ssl/certs/java/cacerts <OUTPUT_DIRECTORY>
e. Any files backed up in the previous steps can be restored after the container upgrade, if needed, by copying them back to the new container with docker cp.
5. Upgrade the VSI plug-in:
  • To register the VSI plug-in without SSL verification, use the following command:
python3 /opt/scripts/register_extension.py -ignoressl true
  • To see additional options for using SSL certificate verification during registration, use the following command:
python3 register_extension.py -h
Note: After the upgrade, SSL is not enabled by default. To enable SSL, see "Change Settings in application.conf of the IAPI container".
6. Refresh the VSI plug-in:
  • If you already have a vCenter session open, log out and log back in to refresh the plug-in.
  • If you do not have a vCenter open, log in to the vSphere Client, log out, and log back in to refresh the plug-in.
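
Putting the DockerHub procedure together, here is a rough sketch of steps 1 through 5 run back to back over SSH. It only re-uses the commands listed above; the backup directory is an assumption you should replace with your own, and you should verify the flow against the official guide before relying on it.

# Sketch only – consolidates the DockerHub upgrade steps above
BACKUP_DIR=/root/vsi-backup                      # stand-in for <OUTPUT_DIRECTORY>
mkdir -p "$BACKUP_DIR"

docker stop iapi                                 # step 2: stop the current IAPI container
docker pull dellemc/vsi                          # step 3: pull the latest IAPI/VSI image

# step 4 (optional): keep the old container and its data around
docker rename iapi iapi-before-upgrade
docker update --restart=no iapi-before-upgrade
docker cp vsidb:/data/appendonly.aof "$BACKUP_DIR"                        # IAPI database
cp /opt/files/vsi-plugin.zip "$BACKUP_DIR"                                # existing VSI plug-in
docker cp iapi-before-upgrade:/etc/ssl/certs/java/cacerts "$BACKUP_DIR"   # SSL certificates

# step 5: re-register the plug-in (without SSL verification in this sketch)
python3 /opt/scripts/register_extension.py -ignoressl true
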
Upgrade VSI using Package
Use the procedure in this topic to upgrade from VSI 8.0 to VSI 8.1.
Before you begin
You must log out of the vCenter before upgrading the plug-in.

Procedure
1. Download the VSI 8.1 upgrade .tar package from the Dell EMC Support site or DockerHub.
2. Copy the upgrade package to the /tmp folder of the IAPI server. To copy the upgrade file, use the following command:
scp iapi-vsi-upgrade.tar root@<IAPI server>:/tmp
Note: If SSH is enabled, the default password is root.
3. Change the directory to the /tmp folder.
4. Extract the upgrade package. Use the following command:
tar -xf /tmp/<iapi-vsi-upgrade.tar>
Note: The upgrade package contains the .tar file, the upgrade script, and the Docker image.
5. Run the upgrade script. Use the following command:
./upgrade_iapi.sh --upgrade <iapi-vsi-upgrade.tar.gz>
The upgrade script backs up and renames the VSI 8.0 Docker image and lays down the new Docker image. The script also restores the necessary files to a backup folder, for example <iapi-beforeupgrade-20190412194829>.
Note: You can also restore application.conf and the SSL certificates after the upgrade completes. Use the restore command with the .tar file that resides in the backup folder:
./upgrade_iapi.sh --restore ./<20190412194829.tar>
6. To upgrade the VSI plug-in, use the following command:
python3 /opt/scripts/register_extension.py -ignoressl true
Note: After the upgrade, SSL is not enabled by default. To enable SSL, see "Change settings in application.conf of the IAPI container".
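
The same idea for the package-based path: a rough, non-authoritative sketch of steps 2 through 6, first from your workstation and then on the IAPI server itself. File names are taken from the procedure above and may differ slightly in your actual download.

# From your workstation: copy the upgrade package to the IAPI server (step 2)
scp iapi-vsi-upgrade.tar root@<IAPI server>:/tmp

# On the IAPI server (steps 3-6) – sketch only, verify against the official guide
cd /tmp
tar -xf /tmp/iapi-vsi-upgrade.tar                             # extract the upgrade package
./upgrade_iapi.sh --upgrade iapi-vsi-upgrade.tar.gz           # run the upgrade script
python3 /opt/scripts/register_extension.py -ignoressl true    # re-register the plug-in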


 

VMware SRM 8.2 is here with XtremIO SRA AND point in time failover support

VMware has just released a new version of SRM (8.2). It's a pretty big release because, for the first time, the SRM server itself can run as a Photon OS-based OVA, as opposed to the Windows server it had to run on in previous versions. You may ask yourself what the big deal is. Well:

  • You can now save on the Windows Server licensing costs.
  • Deployment is a breeze.
  • Updates to the SRM software will be literally a click of a button.

But that's only one big change. The other is that the SRA (Storage Replication Adapter), which is the glue between vCenter and the storage array, now runs as a Docker image inside the SRM appliance. Here at Dell EMC we have been busy working on it, and I'm proud to announce day 0 support for SRM 8.2 with our XtremIO & Unity arrays (other arrays will follow shortly).
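
If you are curious what that looks like under the covers, here's a hedged example. Assuming you have SSH access to the SRM 8.2 appliance and your account is allowed to talk to the Docker daemon, the stock Docker CLI is enough to see the SRA images and containers that SRM has loaded (the exact image names will vary by SRA):

# On the SRM 8.2 Photon-based appliance (assumes SSH + Docker access for your account)
docker images        # lists the SRA images uploaded to the appliance
docker ps --all      # shows the SRA containers SRM has created from those images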

Apart from the SRA support, we have also certified our unique PiT (point in time) integration with SRM 8.2. This feature allows you to specify a specific point in time to test or fail over to when using SRM. Think about it: let's say you have a logical data corruption or a malware infection. It's not enough to run a failover to the DR site; you want to fail over to a point in time BEFORE the data corruption, and we provide just that with XtremIO, SRM, and our free vCenter plugin (VSI 7.4).

You can download SRM 8.2 by clicking the screenshot below

You can download the new SRAs (one for the classic Windows-based SRM and one for the new, Photon-based appliance) by clicking the screenshot below

And finally, you can download the VSI plugin from here (click the screenshot below)

Below you can see a demo showing you how it all works

What’s new with CloudIQ and XtremIO

Back in August 2018, I blogged about CloudIQ now supporting XtremIO. If you are new to CloudIQ, I highly encourage you to first read the post here:

https://xtremio.me/2018/08/27/vmworld-2018-xtremio-integration-with-cloudiq/

In a nutshell, CloudIQ is the Software as a Service (SaaS) platform offered by Dell EMC that allows customers to manage all of their Dell EMC arrays in a single pane of management. A customer simply logs in to the CloudIQ application and is automatically presented with an overview of all of the Dell EMC arrays deployed.

Now, since CloudIQ IS a SaaS-based application, we have been updating it monthly since August 2018 with new features. Here's what's new, by month / XtremIO release:

Wednesday, October 17, 2018 – CloudIQ now has a unified Alert View

We’ve consolidated alerts from different product types into a single alert log, and added support for XtremIO systems. Now you can view alerts from all monitored systems in one single view. In the Alert view, clicking the “Refine” button now includes filtering alerts based on product family and model.

Thursday, November 1, 2018 – New Mobile App

Monitoring Dell EMC storage systems just got easier – CloudIQ’s mobile application is now LIVE in the Apple App and Google Play stores! CloudIQ customers, trusted advisors and partners can now receive notifications about changes in users’ storage environments, view health details, and forward issues to others. Make sure to download the CloudIQ App now!

Tuesday, November 6, 2018 – Management Software Upgrade Indications

You will now see an indication when a VMAX or XtremIO storage system has a management software update available. These indications are visible in the system details Configuration tab for VMAX and XtremIO storage systems. Clicking the ‘Learn More’ link will open a dialog with summary information and relevant links to support resources.

Wednesday, November 14, 2018 – CloudIQ now supports XtremIO Hosts

In the Hosts listing page, users can now view XtremIO hosts. In a Host’s details page, users can see the Host’s attributes, including Volumes and Initiators related to the Host.

Wednesday, November 14, 2018 – Capacity prediction for XtremIO systems

We’ve added Capacity Prediction support for XtremIO. The System historical capacity chart displays Actual Free and Used Capacity. The prediction provides Forecasted Free and Used capacity with the Confidence range. Users can also optionally display Provisioned capacity as a reference.

Thursday, February 28, 2019 – Display Data Reduction Chart for XtremIO systems

This new chart in the System Details Capacity Tab for XtremIO systems visualizes deduplication and compression ratios of the system.

Tuesday, March 26, 2019 – XtremIO Anomaly Support

We’ve added support for anomaly detection on XtremIO systems. You can now view a range of statistically normal behavior for each of your systems’ performance metrics on the system Performance details page. Based on a rolling three week analysis per metric, the metrics charts visuals now highlight an anomaly any time the metric breaches the normal range within the last 24 hours

Tuesday, March 26, 2019 – Enhanced Storage Object Activity on XtremIO System Performance Page

We’ve expanded the scope of the Storage Object Activity lists on the Performance details pages for XtremIO systems. You can now explore all of your system’s volumes with a paginated list, sorted by the highest average activity over the last 24 hours.

Thursday, April 18, 2019 – VMware XtremIO Integration

Added VMware support for XtremIO systems in CloudIQ. Now you can get details about virtual machines in your XtremIO storage environment including performance and capacity at the VM level. To enable VMware data collection, you must first deploy and configure the new CloudIQ Collector vApp which is accessed directly from CloudIQ.

Thursday, April 18, 2019 – New Volume Properties View for XtremIO Systems

We’ve added a Volume Properties page for XtremIO systems. You can view a Volume’s attributes, including Hosts and Consistency Groups related to the Volume, from either the System Configuration Details view or the Host Details view of an XtremIO system.

VxFlex OS v3.0 Is now available

pic1

Wide distribution of Data for Massive Performance

Flex widely distributes data across all storage resources in the cluster, which eliminates the architectural problems of other IP-based storage systems.  With VxFlex OS, ALL of the IOPS and bandwidth of the underlying infrastructure are realized by a perfectly balanced system with NO hot spots.

Massive Availability and Resiliency

Flex has a self-healing architecture that employs many-to-many, fine-grained rebuilds, which is very different from the serial rebuilds seen with most storage products. When hardware fails, data is automatically rebuilt using all other resources in the cluster. This enables a 6×9's availability profile while using x86 commodity hardware. Flex can rebuild an entire node with 24 drives in mere minutes – a fraction of the time it takes to rebuild a single drive on a traditional array.

Built In Multipathing

Flex automatically distributes traffic across all available resources. Every server can be a target as well as an initiator.  This means as you add/remove nodes in the cluster, multipathing is dynamically updated on the fly.  Inherent, dynamic built-in multipathing.

pic2

Dell EMC VxFlex Ready Nodes converge storage and compute resources into a single-layer architecture, aggregating capacity and performance with simplified management, capable of scaling to over a thousand nodes. VxFlex OS provides the maximum in flexibility and choice. VxFlex OS supports high-performance databases and applications at extreme scale (again, from as few as 3 nodes to over 1000 per cluster) and supports multiple operating systems, hypervisors, and media types. You can build the infrastructure that best supports your applications. Choose your hardware vendors or use what you already have in house.

pic2

TWO-LAYER

  • Similar structure to a traditional SAN
  • Supports organizations that prefer separation between storage and application teams
  • Allows scaling of storage needs separately from the application servers
  • New 100Gb switch for the aggregation layer

Or

HCI

  • Provides maximum flexibility and is easier to administer
  • Servers host both applications and storage
  • A modern approach to managing the IT data center
  • Maintenance of servers impacts both storage and compute

In a Storage-only Architecture:

The SDC exposes VxFlex OS shared block volumes to the application.

  • Access to OS partition may still be done “regularly”
  • VxFlex OS data client (SDC) is a block device driver

The SDS owns local storage that contributes to the VxFlex OS storage pool

  • VxFlex OS data server (SDS) is a daemon / service

In a two-layer deployment, the SDC and SDS run on different nodes and can grow independently of each other.
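
As a quick sanity check on a Linux node, you can usually tell which role(s) a host plays. A minimal sketch follows; the scini module name and the install path are my assumptions based on earlier ScaleIO/VxFlex releases, so verify them against your own installation.

# Is this host an SDC? The SDC is a block device driver (kernel module).
lsmod | grep scini && echo "SDC driver loaded"

# Is this host an SDS? The SDS is a daemon/service that contributes local disks.
ps -ef | grep -v grep | grep -i '/opt/emc/scaleio/sds' && echo "SDS process running"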

We have just released VxFlex OS 3.0, which includes many frequently requested features. Here's what's new:

 

Fine Granularity (FG) Layout

Fine Granularity (FG) layout – a new, additional storage pool layout that uses a much finer storage allocation unit of 4KB. This is in addition to the existing Medium Granularity (MG) storage pools, which use a 1MB allocation unit.

The 4KB allocation unit allows better efficiency for thin-provisioned volumes and snapshots. For customers that use snapshots frequently, this layout will create significant capacity savings.

Note: FG storage pools require nodes with NVDIMMs and SSD/NVMe media type

Inline compression – the Fine Granularity layout enables a data compression capability that can reduce the total amount of physical data that needs to be written to SSD media. Compression saves storage capacity by storing data blocks in the most efficient manner and, when combined with VxFlex OS snapshot capabilities, can easily support petabytes of functional application data.

Persistent Checksum – in addition to the existing in-flight checksum, persistent checksum adds data integrity protection for the data and metadata of FG storage pools. Background scanners monitor the integrity of the data and metadata over time.


VxFlex OS 3.0 introduces an ADDITIONAL, more space-efficient storage layout:

Existing – Medium Granularity (MG) layout
  • Supports either thick or thin-provisioned volumes
  • Space allocation occurs in 1MB units
  • No attempt is made to reduce the size of user data written to disk (except for all-zero data)

Newly added – Fine Granularity (FG) layout
  • Supports only thin-provisioned, "zero-padded" volumes
  • Space allocation occurs in finer 4KB units
  • When possible, reduces the actual size of user data stored on disk
  • Includes persistent checksumming for data integrity

A Storage Pool (SP) can be either an FG or MG type, and FG storage pools can live alongside MG pools in a given SDS. Volumes can be migrated across the two layouts (MG volumes must be zero padded). FG pools require SSD/NVMe media and NVDIMM for acceleration. A hedged CLI sketch for creating an FG pool follows below.
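
For illustration, here is a hedged scli sketch of creating an FG pool and a thin volume in it. I am writing the flag names from memory of the 3.x CLI, so treat every option below as an assumption and check the VxFlex OS 3.0 CLI reference before using it.

# Sketch only – flag names are assumptions, verify against the VxFlex OS 3.0 CLI guide
scli --login --username admin

# A new Fine Granularity pool (requires SSD/NVMe media and NVDIMM acceleration)
scli --add_storage_pool --protection_domain_name pd1 \
     --storage_pool_name fg_pool --media_type SSD \
     --data_layout fine_granularity

# A thin volume in the FG pool
scli --add_volume --protection_domain_name pd1 --storage_pool_name fg_pool \
     --volume_name vol01 --size_gb 512 --thin_provisioned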

Inline Compression

Compression algorithms in general:
  • What's desirable is something off-the-shelf, standard, and field-proven
  • Lempel-Ziv (LZ) based compression (recurring patterns within preset windows), with or without Huffman coding
  • Very good compression for text (>80%) and databases (~70%, ranging from 60% to 80%)

The algorithm used in VxFlex OS 3.0 is C-EDRS, a Dell EMC proprietary algorithm (similar to LZ4) and the same algorithm that XtremIO uses. It offers a good balance of compression ratio and performance (light on CPU).

We test compressibility inline (on the fly):
  • Some data is not a good candidate for compression (e.g. videos, images, compressed DB rows)
  • We invest CPU cycles up front: if the data cannot be reduced by more than 20%, we consider it incompressible and store it uncompressed
  • This avoids wasting CPU cycles later decompressing read IOs


Persistent Checksum

Logical checksum (protects the uncompressed data)
  • All data written to FG pools, with or without compression, always has a logical checksum calculated by default (this cannot be changed)
  • If we compress the data, the checksum of the original (uncompressed) data is calculated before compression and is stored on disk with the data
  • If the data is not compressed (by user selection or because it is incompressible), the checksum is calculated and stored elsewhere

Physical checksum (protects the compressed data)
  • Protects the integrity of the log itself
  • Computed for the log, after placing entries into it
  • Computed over the compressed data and the embedded metadata, and thus protects the integrity of the compressed data

Metadata checksum
  • Maintaining the integrity of the metadata itself is crucial, since metadata cannot be reconstructed from the (compressed) user data
  • There is a checksum for each physical row in the disk-level metadata that lives on each disk
  • If we detect an error in the metadata, we do not trust anything on the disk and trigger a rebuild

Background Device Scanner

The background device scanner scans the devices in the system for errors. You can enable/disable it and reset its counters.

MG storage pools
  • Disabled by default
  • No changes, same as 2.x

FG storage pools
  • Enabled by default
  • Mode: device_only – report and rebuild on error
  • Cycles through each SSD and compares the physical checksums against the data in the logs and metadata
  • The GUI controls/limits the disk IO used – the default is 1024 KB/s per device

pic5

Choose the best layout for each workload

VxFlex OS 3.0 lets you choose, for each workload, the layout that works best for you:

MG
  • Workloads with high performance requirements and sensitivity
  • All of our usual use cases still apply

FG compressed
  • A great choice for most cases where data is compressible and where space efficiency is more valuable than raw IO
  • Especially when there are snapshot usage requirements
  • DevOps and test/dev environments

FG non-compressed
  • Data isn't compressible (e.g. OS or application-level encryption), but you use lots of snapshots and want the space savings
  • Read-intensive workloads with >4K IOs
  • You need persistent checksums

And if you change your mind, you can migrate "live" to another layout.

Non-Disruptive Volume Migration

pic4

Prior to v3.0, a volume was bound to a Storage Pool at creation, and this binding could not be changed later. There are various use cases for migrating volumes from one Storage Pool to another:
  • Migrating volumes between different performance tiers
  • Migrating volumes to a different Storage Pool or Protection Domain, driven by multi-tenancy needs
  • Extracting volumes from a deprecated Storage Pool or Protection Domain to shrink a system
  • Changing a volume's personality (thin -> thick, FG -> MG)

Migration is done at V-Tree granularity: a volume and all of its snapshots are migrated together. It is non-disruptive to ongoing IO (hiccups are minimized), it is supported across Storage Pools within the same Protection Domain as well as across Protection Domains, and it supports older v2.x SDCs. A hedged CLI sketch follows below.
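
Again, as a hedged sketch only (the command and flag names are my assumptions and should be verified against the VxFlex OS 3.0 CLI reference), migrating a V-Tree to another pool looks roughly like this:

# Sketch only – verify the exact syntax in the VxFlex OS 3.0 CLI reference
# Moves the whole V-Tree (volume + snapshots); IO continues during the migration
scli --migrate_vtree --volume_name vol01 \
     --protection_domain_name pd1 --storage_pool_name mg_pool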

Snapshots and Snapshot Policy Management

Volume snapshots
  • Prior to v3.0, there was a limit of 32 items in a volume tree (V-Tree): 31 snapshots + the root volume
  • In v3.0, this is increased to 128 (for both FG and MG layouts): 127 snapshots + the root volume
  • Snapshots in FG are more space efficient and perform better than MG snapshots: with 4KB block management, there is 256x less to manage with each subsequent write
  • Remove ancestor snapshot: the ability to remove the parent of a snapshot and keep the snapshot in the system, in essence merging the parent into a child snapshot
  • Policy-managed snapshots: up to 60 policy-managed snapshots per root volume (taken from the 128 total available)

Snapshot Policy
  • The policy is hierarchical. For example, we may want to keep an hourly backup for the most recent day, a daily backup for a week, and a weekly backup for 4 weeks.
  • Implementation is simplified – set the basic cadence and the number of snapshots to keep at each level
  • The number of snapshots to keep at a level is the same as the rate of promoting snapshots to the next level
  • Max retention levels = 6
  • Max snapshots retained in a policy = 60

A hedged CLI sketch for defining such a policy follows below.
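
Here is what defining such a policy could look like from the CLI, keeping 24 hourly, 7 daily, and 4 weekly snapshots (35 in total, 3 levels) within the 60-snapshot / 6-level limits. The flag names are assumptions on my part; check the 3.0 CLI reference for the exact syntax.

# Sketch only – flag names are assumptions, verify against the VxFlex OS 3.0 CLI guide
# 60-minute cadence; retain 24 hourly, 7 daily and 4 weekly snapshots
scli --add_snapshot_policy --snapshot_policy_name hourly_daily_weekly \
     --auto_snapshot_creation_cadence_in_min 60 \
     --num_of_retained_snapshots_per_level 24,7,4

# Attach a source volume to the policy
scli --add_source_volume_to_snapshot_policy \
     --snapshot_policy_name hourly_daily_weekly --source_volume_name vol01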



Auto Snapshot Group
  • An auto snapshot is a snapshot that was created by a policy
  • The snapshots of an auto snapshot group are consistent (unless mapped), share the same expiration, and should be deleted at the same time (unless locked)
  • An auto snapshot group is NOT a snapshot Consistency Group: a single snapshot CG may contain several auto snapshot groups, and snapshot CGs are not aware of locked snapshots; therefore, deleting snapshot CGs which contain auto snapshots is blocked
  • The auto snapshot group is an internal object not exposed to the user; it is hinted at when snapshots are grouped by date/time in several views

Updated System Limits

  • Maximum SDS capacity has increased from 96TB to 128TB
  • Maximum number of SDSs per Protection Domain has increased from 128 to 256
  • Maximum snapshot count per source volume is now 128 (FG/MG)
  • Fine Granularity (FG): maximum allowed compression ratio is 10x; maximum allowed overprovisioning is 10x (compared to 5x for MG thin-provisioned)
  • SDC limits in vSphere: 6.5 & 6.7 – up to 512 mapped volumes; 6.0 – up to 256 mapped volumes

Updates and Changes

VxFlex OS 3.0:
  • Added support for native 4Kn sector drives (logical sector size and physical sector size fields)
  • Removed: Windows back-end (SDS, MDM) support – there is no support for Windows HCI, only compute nodes (SDC) – and the AMS Compute feature
  • Security updates: newer Java 8 builds are enabled, and the CentOS 7.5 SVM passed NESSUS security scanning and STIG
  • A new mapping is required to define the disk type in a storage pool (SSD/HDD): attempts to add disks that are not of the correct type will be blocked, and existing SPs will need to be assigned a media type post upgrade
  • Transition to a CentOS 7.5 Storage VM: new 3.0 SVM installations only; replacing an SVM running SLES 11.3/12.2 with CentOS 7.5 will be available after 3.0 (3.0.x)

OS Patching

There is a new ability in the IM/GW (Installation Manager / Gateway) to run a user-provided script on a VxFlex OS system as part of an orchestrated, non-disruptive process (like an NDU), which is usually intended for OS patching. It is supported on RHEL and SLES.

Using this feature involves two main steps. First, the user manually copies the script files to each VxFlex OS host, following these prerequisites (a hedged copy sketch follows after the list):
1. The main script must be named patch_script (we check that the result code is 0 at the end of execution)
2. The verification script must be named verification_script (we check that the result code is 0 at the end of execution)
3. The scripts must be copied to the ~/lia/bin folder and given execution permissions
Return codes are saved in the LIA log, and an error is returned to the GW if needed.

Second, the user executes the scripts from the IM/GW UI. It is the customer's responsibility to test the patch_script and verification_script prior to running the process via the GW.
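
The copy step itself is plain scp and chmod. Here is a rough sketch for a single host; the host name is a placeholder, and the ~/lia/bin path comes from the prerequisites above.

# Sketch: stage the scripts on one VxFlex OS host per the prerequisites above
HOST=root@vxflex-node-01     # placeholder – adjust the user/host to your environment
scp ./patch_script ./verification_script "$HOST":~/lia/bin/
ssh "$HOST" 'chmod +x ~/lia/bin/patch_script ~/lia/bin/verification_script'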


Execution steps:
1. Log in to the IM/Gateway web view
2. Select the "Maintain" tab
3. Enter the MDM IP and credentials
4. Under "System Logs & Analysis", select "Run Script On Host"

5. The "Run Script on Host" window opens
6. Select the scope to run the script(s) on:
  • Entire System – all VxFlex OS nodes. By default the script runs on the first host's PD, then moves to the second, and so on. By selecting "In parallel on different Protection Domains", the patch_script runs in parallel on all PDs.
  • Protection Domain – a specific PD
  • Fault Set – a specific fault set
  • SDS – a single node
  Note: PDs that don't have MDMs are processed first, and cluster nodes are processed last.
7. Define the "Running configuration" parameters:
  • Stop process on script failure
  • Script timeout: how long to wait for the script to finish
  • Verification script: whether to run verification_script after patch_script has run
  • Post script action: whether to reboot the host after patch_script has executed. If reboot is selected, patch_script runs -> reboot -> verification_script runs.

8. Press "Run script on Hosts" and the Validate phase starts. This phase sends a request to each host's LIA to verify the existence of the patch_script and verification_script (if selected) files under ~/lia/bin.
9. Press the "Start execution phase" button. The IM performs several verifications: it checks that there is no failed capacity, checks the spare capacity, and checks that the cluster is in a valid state and that no other SDS is in maintenance mode. For each host it then:
  • Enters the SDS into maintenance mode and runs the patch_script
  • Reboots the host (if required)
  • Runs the verification script (if required)
  • Exits maintenance mode
  • Marks the operation as completed
After a successful run, the patch_script file is deleted and a backup of it is created in the same folder with the name backup_patch_script.


Multi LDAP Server Support
  • Deploy the GW as usual; after deployment, use the FOSGWTool tool to add LDAP servers to the GW login authority
  • Up to 8 LDAP servers are supported
  • We use the same method as configuring a single LDAP server to support multiple servers (with a change in command syntax); FOSGWTool has a new capability to add multiple LDAP servers
  • Details will be available in the LDAP technical note
  • The log files are the same as the Gateway logs (operations.log, scaleio.log and scaleio-trace.log); the usual errors are related to command syntax or networking misconfiguration

GW Support for LDAP in LIA
  • New ability to deploy a system with the LIA user already configured to use LDAP
  • During deployment you can configure the first LDAP server; in the query phase, communication with the LDAP server is performed to validate the information, so the installation will not proceed until the LDAP check has passed
  • Post upgrade, you can switch LIA from a local user to an LDAP user
  • Ability to add or remove up to 8 LDAP servers; a check is done during add or remove, and any error will fail the operation

You can download VxFlex OS v3.0 from the link below (click the screenshot)


And you can download the documentation from here


You can also watch a video below, showing the new compression and snapshots functionalities

 

 

Dell EMC AppSync 3.9 is now Available, a MUST upgrade for the XtremIO & PowerMax Customers

We are proud to announce AppSync 3.9. If you are new to AppSync, I highly encourage you to start by reading about it here:

https://xtremio.me/2018/08/01/appsync-3-8-is-ga-heres-whats-new/

https://xtremio.me/2018/09/23/xtremio-6-2-part-5-the-xms-appsync-integration-is-now-ga/

Product Description

AppSync is software that enables Integrated Copy Data Management (iCDM) with Dell EMC's primary storage systems.

AppSync simplifies and automates the process of generating and consuming copies of production data. By abstracting the underlying storage and replication technologies, and through deep application integration, AppSync empowers application owners to satisfy copy demand for operational recovery and data repurposing on their own. In turn, storage administrators need only be concerned with initial setup and policy management, resulting in an agile, frictionless environment.

AppSync automatically discovers application databases, learns the database structure, and maps it through the Virtualization Layer to the underlying storage LUN. It then orchestrates all the activities required from copy creation and validation through mounting at the target host and launching or recovering the application. Supported workflows also include refresh, expire, and restore production.

AppSync supports the following applications and storage arrays:

• Applications – Oracle, Microsoft SQL Server, Microsoft Exchange, VMware VMFS datastores, VMware NFS datastores, and NFS file systems.

• Storage – VMAX, VMAX 3, VMAX All Flash, PowerMax, VNX (Block and File), VNXe (Block and File), XtremIO, ViPR Controller, VPLEX, Dell SC, and Unity (Block and File).

• Replication technologies – VNX Advanced Snapshots, VNXe Unified Snapshot, TimeFinder Clone, TimeFinder VP Snap, SRDF, SnapVX, RecoverPoint Bookmarks, XtremIO Snapshot, XtremIO Bookmarks, Unity Unified Snapshot, Unity Thin Clone, Dell SC Snap, and ViPR Snapshot.

The AppSync Support Matrix is the authoritative source of information on supported software and platforms.

Services Design, Technology, & Supportability Improvements

AppSync 3.9 includes the following new features and enhancements:

XtremIO Quality of Service (QoS)

• Applying or modifying a QoS policy on the target devices created by AppSync as part of copy creation.

• Applying or modifying a QoS policy during AppSync on-demand / on-job mounts.

AppSync 3.9 now has the ability to associate mounted copies with an XtremIO QoS policy:

  • XtremIO arrays allow users to enable user-defined service levels for IOPS on array resources such as LUNs. This gives the end user more control over the array network and storage resources.
  • You can define a template or QoS policy on the array with the following attributes: the maximum IOPS limit or (maximum I/O size + maximum number of I/Os), the burst percentage, and the type (fixed or adaptive).
  • Once a QoS policy is configured on the array, AppSync can leverage it.
  • AppSync sets QoS on the target devices it creates; it cannot alter production LUN QoS policies.
  • AppSync cannot leverage QoS policies for CGs or IGs.
  • AppSync can only apply or change the QoS policy on fresh mounts. If you wish to modify it, you need to unmount and remount with the new desired policy.
  • QoS policies are updated in AppSync when the array is rediscovered, including when new policies are created or deleted.
  • The QoS policy is applied after the target LUNs are added to the initiator group.
  • If the QoS operation fails, the mount operation completes with a warning.
  • QoS is always applied at the snapshot volume level. For Silver, the QoS policy is applied on the target CG of the replication session.
  • If the QoS policy is renamed on the array, it doesn't affect AppSync, since AppSync tracks the policy by its UUID rather than its name.
  • QoS objects are removed from the AppSync database when an XtremIO array is removed from AppSync.

XtremIO Native Replication Restore

• Support for restoring source volumes from remote XtremIO copies created using native replication. Supported only on XtremIO 6.2 and later.

  • Native replication remote restore is supported on XtremIO XMS 6.2+.
  • In protection workflows (service plans), only the read-only copies are restored. Any changes made to a mounted copy (the RW copy) are discarded. This is different from the local copy workflow.
  • In repurposing workflows, either read-only or modified copies are supported; the user can select the option. Here there is no difference between the local and remote copy workflows.
  • All prior XtremIO native bookmarks are deleted as part of the remote restore process.

The process consists of four phases: Validation, Unmount, Restore, and Mount.

Validation phase
  • NR restore fails if the session is in test-copy mode, in an error state, or in an inconsistent state
  • NR restore fails if the replication direction isn't source-to-target
  • NR restore fails if all remote bookmarks captured together are not intact (for example, the bookmarks of an Oracle data CG and log CG)
  • If the restore is from a remote repurposed RW copy, AppSync validates that the linked CG copy is valid

Unmount phase
  • The source application is unmounted from the source host and the storage is detached
  • Source LUNs are not unmapped from the production host IG

Restore phase
  • Check the state of the replication session and maintain its current state – fail if invalid
  • Fail over using the target bookmark, after which the direction is target-to-source; wait until the failover completes
  • When restoring from the RW repurposed copy, AppSync first refreshes the target CG using the remote linked CG
  • Fail back, after which the replication session is returned to its original direction; wait until the failback completes

Mount phase
  • Re-attach the LUNs on the production host
  • Mount and recover the application

Oracle enhancements:

• Support for Oracle Container Databases (CDBs).

  • AppSync supports discovering, mapping, protecting, and repurposing Oracle CDBs. The same workflows are supported as for standalone databases. RecoverPoint environments are not currently qualified/supported.
  • PDBs are viewed by selecting the CDB and then clicking the Show PDBs button.
  • Requirements: AppSync requires that all PDBs of a CDB reside on the same storage array. All PDBs within a CDB must be in read-write mode for hot backup; this includes all PDB instances on all RAC nodes, otherwise the hot backup option cannot be used. PDB$SEED is the exception and is always protected – it is mandatory for recovery purposes.
  • Affected entities: AppSync detects affected entities at the CDB level. For example, if pdb1 on CDB1 shares a filesystem with pdb2 on CDB2, then when AppSync attempts to restore CDB1 it will detect that the second CDB is an affected entity as well, requiring that it be shut down (manually) before the restore. It is best not to share filesystems between different PDBs/CDBs.
  • Limitations: workflows at the PDB granularity level are not supported – everything happens at the CDB level.

• Support for Oracle Flex ASM-aware node selection.

  • AppSync can now discover Flex ASM type instances.
  • AppSync requires at least one available RAC node with an ASM instance in order to place the database in hot backup mode. If an ASM instance is down on one node, AppSync will detect another node with a running ASM instance and use it for further processing.
  • Limitations: AppSync only supports creating copies on nodes where both the database and ASM instances are available. Mount and recovery work only if the ASM instance is up and running.
  • If standalone, ASM must be up and running on at least that one node.
  • If using Grid, an ASM instance can run on any one of the RAC nodes as long as cluster services are online on all remote nodes – cluster services must be running on all nodes.
  • During mount, deselect any node that does not have both database and ASM instances running.

Note: Only nodes with live database and ASM instances are supported.

• Support for ASM 4KB sector disk drives.

  • AppSync supports logical volumes with an underlying 4K sector configuration, with limitations:
    Supported on Linux with Oracle ASM (file systems are not supported)
    Supported on Dell SC when creating 4K sectors in emulation mode
    Supported on XtremIO when application hosts are physical, or iSCSI-connected using native mode
    Supported only with Oracle 12c R2 and later
  • There is a new event message warning of a potential mount failure in the case of an unsupported environment, such as when running Oracle 12.1 or 11gR2:
    "AppSync detected ASM diskgroup sector size as 4096 for Oracle version {VersionNumber}. Mount of the diskgroup might fail with inconsistent sector size error."

TCE enhancements:

• Provides an option to unlink a VMAX target device when a SnapVX snapshot is unmounted.

• Provides an option to consolidate or delete old Postgres datastore.log files.

Delete old datastore.log files (SER:6630)
  • AppSync now removes datastore.log files when they are over a month old.

Removed "A reboot of Exchange may be required after installation of the host plugin" from the UI and documentation (SER:6620)
  • AppSync stops services that use certain DLL files when installing/removing the AppSync plug-in on MS Exchange hosts.

Option to unlink a VMAX target device when a SnapVX snapshot is unmounted – Service Plan settings
  • Previously, the unlink operation only occurred during copy expiration.
  • Only supported when the copy is linked in no-copy mode (snapshot); not applicable when the copy is linked in copy mode (clone).
  • The option is presented in on-demand, service plan, and second-generation repurposing workflows; it is not supported for first-generation repurposing.
  • It does not allow restore of modified target devices (a warning is shown).

Option to unlink a VMAX target device when a SnapVX snapshot is unmounted – Repurposing (2nd Gen)


The AppSync User and Administration Guide provides more information on the new features and enhancements.

The AppSync Support Matrix on https://elabnavigator.emc.com/eln/modernHomeDataProtection is the authoritative source of information on supported software and platforms.

Upgrade Path to 3.9

You must be running at least version 3.7 to upgrade to 3.9. Please refer to the AppSync Installation and Configuration Guide for details on how to perform this type of upgrade.

  • 3.0.x -> 3.1 -> 3.7 -> 3.9
  • 3.5.x -> 3.7 -> 3.9
  • 3.8 -> 3.9

Recent Changes to the AppSync Support Matrix

Please make sure to validate the ESM at the time of the release – these updates are NOT exhaustive!

Added support for:
  • 4K sector drives
  • Oracle 18c, without requiring the URM00116223 hotfix
  • MS Windows Server & Exchange 2019, without requiring the URM00138 hotfix

Deprecated OS/application support for:
  • RedHat Linux versions 5.5 through 5.10
  • SLES 11, 11 SP1 through SP3
  • Oracle database versions 11.2.0.1, 11.2.0.2 and 11.2.0.3

Removed array support for:
  • ViPR Controller
  • VNX 2e storage platform
  • EMC Enginuity 5876: .82.57, .85.59, .159.102, .163.105, .229.145, .251.161, .268.174, .269.175, .272.177
  • Dell EMC HYPERMAX 5977: .497.471, .498.472, .596.583, .691.684, .811.784 (all released in 2014)
  • Dell EMC VNXe OE: 3.1.1, 3.1.3, 3.1.5, 3.1.8 P0

Additional notes added:
  • XFS file systems are not supported for FS-consistency (SER: 6623)
  • The AppSync server and host plug-in must be the same version
  • AppSync server installation on physical machines and on VMware and Hyper-V virtual machines is supported

Support Links

You can download AppSync 3.9 by clicking the screenshot below



The Dell EMC VMware ESXi 6.7 U2 ISO, is now available

We have just released the ISO image of the Dell EMC ESXi 6.7 U2 (update 2)

To download it, simply click the screenshot below

 

 

Apart from the core ESXi 6.7 U2 functionality, here are the highlights from the release notes:

ESXi 6.7U2 Build Number: 13006603

Dell Version : A00

Dell Release Date : 22 Apr 2019

VMware Release Date: 11 Apr 2019

 

Important Fix/Changes in the build

==================================

– Driver Changes as compared to previous build : Yes

– Base Depot: update-from-esxi6.7-6.7_update02.zip

 

Drivers included:-

 

Intel Drivers:

=====================

– igbn: 1.4.7

– ixgben: 1.7.10

– i40en: 1.7.11

– ixgben_ens: 1.1.3

– i40en_ens: 1.1.3

 

Qlogic Drivers:

======================

– qcnic: 1.0.22.0

– qfle3: 1.0.77.2

– qfle3i: 1.0.20.0

– qfle3f: 1.0.63.0

– qedentv: 3.9.31.2

– qedrntv: 3.9.31.2

– qedf: 1.2.61.0

– qlnativefc: 3.1.16.0

– scsi-qedil: 1.2.13.1

 

Broadcom Drivers:

======================

– bnxtnet: 212.0.119.0

– bnxtroce: 212.0.114.0

 

Mellanox Drivers:

=======================

– nmlx5_core: 4.17.13.8

– nmlx5_rdma: 4.17.13.8

 

Emulex Drivers:

=======================

– lpfc: 12.0.257.5

 

Dell PERC Controller Drivers:

=======================

– dell-shared-perc8 – 06.806.90.00
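
If you want to confirm which of these driver versions actually ended up on a host after installing the image, a quick hedged check is to query the installed VIBs from the ESXi shell, for example:

# On an ESXi 6.7 U2 host built from the Dell EMC image – check a few of the bundled drivers
esxcli software vib list | grep -E 'i40en|ixgben|qlnativefc|lpfc|bnxtnet|nmlx5'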

 

Refer to the links below for more information:

 

6.7 U2 VMware Release Notes:

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-67u2-release-notes.html

 

Please find more details on this Dell Customized Image on dell.com/virtualizationsolutions

 

https://docs.vmware.com/en/VMware-vSphere/index.html