Tech Previewing our upcoming XtremIO integration with Kubernetes / CSI Plugin

–08/04/2019 update–

The plugin is now available; you can read about it here:

https://xtremio.me/2019/03/03/the-xtremio-kubernetes-csi-plugin-is-now-available/

This blog post should really have been called “where are we as a Dell EMC company when it comes to integration with Kubernetes”, and really, we are already here.

From a Dell Technologies perspective, you can use the Pivotal Container Service (PKS), and from a VMware perspective, you can use vSphere Integrated Containers.

Right, but you want to use VxFlex OS / PowerMax / XtremIO / Unity etc., and that is the reason this post exists. I wanted to start by looking at my personal experience with containers, as my journey with Docker started in 2015.

(Credit: Laurel Duermaël)

Docker was the hottest thing and the biggest promise in the industry. Everyone was talking about the way they would monetize the container landscape by providing a technology that is simple to use and “just works”.. but you see this bottle of milk in the screen above? It’s there for a reason: as an infant company, they found many challenges to the way they perceived the usage of containers.

  • The first one is that while “next gen apps” shouldn’t need a persistent layer of storage, because the very fundamental design of the container runtime is that it gets deleted every time it is powered off, Docker found out that many customers care, and care a LOT, about the persistency of their data. Data is the most important aspect of everything in the data center, so customers wanted to run either their current apps in a container with a persistent data layer, or even next gen apps where the app that runs inside the container may get shut down and deleted but the data shouldn’t! Fast forward to the KubeCon 2018 conference:

Yes, that’s Google saying that more than 40% of the clusters are running stateful apps!

  • The second issue, related to the first one, is how a small company such as Docker could develop all of these plugins, even once they agreed that this is a real need.

And just like clockwork, you know that if there is a gap in a product that is supposed to be the next big thing, there will be startups that try to fill the gap. Enter ClusterHQ. ClusterHQ was a startup that had one mission in mind: to provide a persistent storage layer between storage arrays and Docker, so back in June 2015 we (XtremIO and ScaleIO) already had a working plugin for Docker.

But then 2017 happened and the startup shut down, so the problem resurfaced. The difference now was that the entire container industry was more mature, with many more customers asking for a persistent storage layer.

The other change was the rise of the container orchestrators. It wasn’t about the Docker runtime anymore; it was all about how you spin up clusters of containers and how you really manage the hundreds or thousands of containers you may run. In the orchestrator category, we first saw the rise of Mesos, then Docker Swarm and Kubernetes, and after a year or two, when the dust settled down, I think it’s ok to say that Kubernetes has won as the orchestrator. It doesn’t mean that the other two do not exist, but at least from my conversations with customers, they ask for support for Kubernetes as the orchestrator, and then a possible upper abstraction layer such as the ones offered by Pivotal or Red Hat.

So, if Kubernetes is the leading orchestrator, shouldn’t it come with a solution for the persistent storage layer?

Enter the Container Storage Interface (‘CSI’ in short). This was a multi-year effort led by Google, with supporters from other companies, to provide a true, common, open API to connect storage arrays to containers. The good thing about CSI was that it didn’t try to accommodate every protocol or every unique storage feature: the project focused on file support first and block later on, starting with the basic features, and continued to release versions, incrementally gaining support from the storage industry.

But lessons had to be learned.

The first implementation of volume plugins was “in-tree” (https://kubernetes.io/docs/concepts/storage/volumes/), which meant that they are linked, compiled, built, and shipped with the core Kubernetes binaries. Adding a new storage system to Kubernetes (a volume plugin) requires checking code into the core Kubernetes code repository. This is undesirable for many reasons, including:

  • Volume plugin development is tightly coupled and dependent on Kubernetes releases.
  • Kubernetes developers/community are responsible for testing and maintaining all volume plugins, instead of just testing and maintaining a stable plugin API.
  • Bugs in volume plugins can crash critical Kubernetes components, instead of just the plugin.
  • Volume plugins get the full privileges of Kubernetes components (kubelet and kube-controller-manager).
  • Plugin developers are forced to make plugin source code available, and cannot choose to release just a binary.

So if in-tree volume plugins aren’t working for the above reasons, they had to think of a better model, which you may have already guessed:

Yes, it’s “out-of-tree” volume plugins.

  • The Out-of-tree volume plugins include the Container Storage Interface (CSI) and FlexVolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
  • Before the introduction of CSI and FlexVolume, all volume plugins were “in-tree”, meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extended the core Kubernetes API. This meant that adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository.
  • Both CSI and FlexVolume allow volume plugins to be developed independent of the Kubernetes code base, and deployed (installed) on Kubernetes clusters as extensions.
  • OK, so if both CSI and FlexVolume are suitable ways, why choose CSI over FlexVolume?
  • FlexVolume:
    • A legacy attempt at out-of-tree plugins
    • Exec based
    • Deployment is difficult, and remember, it’s all about ease of use
    • Doesn’t support clusters with no master access, which is a no-go for the container landscape
  • So it’s CSI then. What’s the status of CSI? I’m glad you are asking, because as of October 28th it has finally reached 1.0 GA status, which includes block support.

You can read the entire spec here https://github.com/container-storage-interface
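If you are curious what “out-of-tree” looks like in practice, here is a minimal, hypothetical sketch in Go (the language the CSI spec ships its bindings in): the driver is nothing more than a standalone gRPC server exposing the CSI Identity, Controller, and Node services over a Unix socket that kubelet and the sidecar containers talk to. The socket path and the Serve function are illustrative only and are not taken from the actual XtremIO plugin.

```go
// A minimal sketch (not the actual XtremIO plugin) of what "out-of-tree" means in practice:
// the driver is a standalone gRPC server that exposes the CSI services over a Unix socket,
// and Kubernetes (kubelet plus the sidecar containers) talks to it there.
package xiocsi

import (
	"net"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// Serve registers the three CSI services and blocks, serving requests on the given
// Unix-socket endpoint (e.g. "/var/lib/kubelet/plugins/csi-xtremio/csi.sock" - the path
// is illustrative, not the real driver's).
func Serve(endpoint string, ids csi.IdentityServer, cs csi.ControllerServer, ns csi.NodeServer) error {
	lis, err := net.Listen("unix", endpoint)
	if err != nil {
		return err
	}

	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, ids)  // GetPluginInfo, Probe, ...
	csi.RegisterControllerServer(srv, cs) // CreateVolume, DeleteVolume, ControllerPublishVolume, ...
	csi.RegisterNodeServer(srv, ns)       // NodeStageVolume, NodePublishVolume, ...

	return srv.Serve(lis)
}
```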

What about Kubernetes support for CSI? As of December 3rd, Kubernetes 1.13 fully supports CSI 1.0.

What about the other orchestrators? Well, in the case of Red Hat OpenShift, based on this public roadmap (and roadmaps may change), it looks like OpenShift 4.1, which comes out in 03/2019, will support Kubernetes 1.13, which of course supports CSI 1.0.

https://blog.openshift.com/wp-content/uploads/OpenShift-Commons-Whats-New-in-OpenShift-Container-Platform-3.11.pdf


While this was a brief history of time, I really wanted to share with you my thoughts on where we were, where the container industry is now, and why it now has a true, mature API that we can integrate with, knowing it will be supported for years to come. And with that, I want to introduce you to our upcoming XtremIO CSI plugin with Kubernetes 1.13; more platforms from our portfolio will follow suit in 2019. The demo that you can see below is a tech preview, and we are accepting beta nominations for it now, so please ask your local SE if you want to participate.

What do you see in the demo above?

The first part shows the three pods which are part of the XtremIO CSI plug-in:

The external CSI attacher container translates attach and detach calls from Kubernetes into calls to the CSI plugin.

The external CSI provisioner container translates provision and delete calls from Kubernetes into the respective CreateVolume and DeleteVolume calls to the CSI driver.
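To make that mapping concrete, here is a hedged sketch (in Go, using the spec’s generated bindings) of the two Controller RPC handlers the provisioner sidecar ends up calling. This is not the actual XtremIO driver code, which is not public; the xioDriver type and the array-side calls are placeholders.

```go
// A hedged sketch, not the actual XtremIO driver: how an out-of-tree CSI plugin might
// handle the two Controller RPCs that the external provisioner sidecar issues.
package xiocsi

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// xioDriver is a hypothetical driver type; a real plugin also implements the Identity and
// Node services plus the remaining Controller RPCs required by the CSI spec.
type xioDriver struct{}

// CreateVolume is called when a PersistentVolumeClaim that references the driver's
// StorageClass is created; the driver turns the request into a volume on the array.
func (d *xioDriver) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	if req.GetName() == "" {
		return nil, status.Error(codes.InvalidArgument, "volume name is required")
	}
	sizeBytes := req.GetCapacityRange().GetRequiredBytes()

	// ... here the real driver would call the array management API to create a volume
	// of sizeBytes and map it as needed (placeholder) ...

	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      req.GetName(), // a real driver returns the array-side volume identifier
			CapacityBytes: sizeBytes,
		},
	}, nil
}

// DeleteVolume is the mirror operation, invoked when the claim (and its bound PV) go away.
func (d *xioDriver) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
	// ... here the real driver would remove the volume identified by req.GetVolumeId() (placeholder) ...
	_ = req.GetVolumeId()
	return &csi.DeleteVolumeResponse{}, nil
}
```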

The next part shows the deployment of a YugaByte DB stateful set which consists of 3 master nodes and 3 worker nodes; each one requires a persistent volume claim from the XtremIO storage class upon deployment.

YugaByte DB is an open source database for high-performance applications that require ACID transactions and planet-scale data distribution, and it supports both scaling up and scaling down.

By creating the stateful set, 6 containers have been deployed, and 6 XtremIO volumes have been created, mapped, formatted, and mounted (one per container) using the XtremIO CSI plugin.
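For illustration, this is roughly the kind of persistent volume claim each pod in the stateful set requests through its volume claim templates, expressed here with client-go. The StorageClass name “xtremio”, the claim name, and the 10Gi size are assumptions for the example, and the API signatures assume a client-go release around the v0.24 line.

```go
// A hedged sketch of a PersistentVolumeClaim against a CSI-backed StorageClass.
// NOTE: names, sizes, and the "xtremio" StorageClass are assumptions for illustration;
// API signatures assume a client-go release around the v0.24 line.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (default ~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sc := "xtremio" // hypothetical StorageClass backed by the XtremIO CSI driver

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "datadir-yb-tserver-0", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}

	// Creating the claim is what triggers the external-provisioner -> CreateVolume flow shown earlier.
	if _, err := client.CoreV1().PersistentVolumeClaims("default").Create(context.TODO(), pvc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```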

The next part shows a DB workload running against all the different worker nodes; we can see the load from the XMS as well.

The next part shows the scale-up capability of the DB cluster from 3 to 6 workers: 3 additional XtremIO volumes have been created, mapped, formatted, and mounted on each new container using the XtremIO CSI plugin.
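The scale-up itself is just a replica-count change on the stateful set; Kubernetes creates the new claims from the volume claim templates and the CSI plugin does the rest. Here is a small hedged sketch using client-go’s scale subresource (the StatefulSet name and namespace are illustrative, not taken from the demo environment).

```go
// A hedged sketch of scaling a StatefulSet via the scale subresource; the StatefulSet
// name and namespace passed in are assumptions for illustration.
package ybscale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ScaleWorkers bumps the StatefulSet's replica count; the volumeClaimTemplates and the
// CSI driver then take care of creating, mapping, and mounting the extra volumes.
func ScaleWorkers(client kubernetes.Interface, namespace, name string, replicas int32) error {
	sts := client.AppsV1().StatefulSets(namespace)

	scale, err := sts.GetScale(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas // e.g. from 3 to 6, as in the demo
	_, err = sts.UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{})
	return err
}
```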

When running the workload generator again, we can see that the load is redistributed across all available worker nodes and persistent volumes (XtremIO LUNs).

XtremIO 3.1.0 Plugin for VMware vRealize (vCenter) Orchestrator is now available

We have just released a new version of our VMware vRO plugin, suitable for the XtremIO 6.2 release. New in this release is support for XtremIO 6.1 / 6.2, so QoS and native replication workflows have been added.

You can download the new plugin by clicking the screenshot below

You can download the workflows user guide from https://support.emc.com/docu90472_XtremIO-Plugin-for-VMware-vRealize-Orchestrator-3.1.0-Workflows-User-Guide.pdf?language=en_US

And the installation and configuration guide from https://support.emc.com/docu90471_XtremIO-Plugin-for-VMware-vRealize-Orchestrator-3.1.0-Installation-and-Configuration-Guide.pdf?language=en_US

RecoverPoint For VMs (RP4VMs) 5.2 Patch 2 is out

We have just released RP4VMs 5.2 Patch 2; here’s what’s new. If you want a full overview of the previous 5.2 initial release, you can read about it here: https://xtremio.me/2018/07/11/recoverpoint-for-virtual-machines-rp4vms-5-2-is-here-heres-whats-new/

 

Security enhancements


During installation, the user is required to replace the default admin user password with a new unique password. This password serves also as the password for the root user across all vRPAs in the system.
In new installs, the predefined admin user has all permissions. All other predefined users have been eliminated.
In RecoverPoint for VMs 5.2.0.2, to further enhance security, the run internal command option in the boxmgmt CLI menu is removed.
For upgraded systems, the role of the admin user is extended to include all permissions. Upgrade has no impact on other users.
Root access is disabled by default for local, as well as remote, access. Access can be enabled by admin user (or subsequently disabled) from the Boxmgmt CLI.
In new installations of RecoverPoint for VMs 5.2.0.2, and later, there is no predefined boxmgmt user. Use the admin user to launch the Boxmgmt CLI.
RecoverPoint for VMs 5.2.0.2 includes a new predefined role, ‘sysmgmt’. A user defined with this role logs in directly to the Sysmgmt CLI, and has the same permissions as the admin user, with the exception of the Web Download and SE permissions.


Support for VMware vSphere 6.7
RecoverPoint for VMs 5.2.0.2 introduces support for VMware vSphere 6.7 (including vSphere 6.7 U1).

Auto-Registration
When protecting VMs or adding a copy, ESX hosts of the production and copy VMs are now automatically registered in RecoverPoint for VMs.
Up to 15 shared datastores for journals are now automatically registered in RecoverPoint for VMs.

vCenter Registration
When using vCenter linked mode, it is no longer necessary to register all of the linked vCenter Servers; only the vCenter Server(s) managing the protected VMs, copy VMs, and vRPAs must be registered with RecoverPoint for VMs.

Production Journal Reduction
The default production journal size has been changed to 3GB.

Auto VMkernel port creation
The user can now configure the VMKernel ports from the RecoverPoint for VMs plugin UI, and they will be automatically created for all the ESXi hosts in the ESXi cluster.

Licensing
In addition to VM-based licensing, the RecoverPoint for VMs licensing model now includes socket-based licensing. With socket-based licensing, the number of licenses is determined by the number of physical CPU sockets in the ESXi servers that host the production VMs. When users add socket-based and VM-based licenses to the same system, the VM-based license capacity automatically converts to socket-based licenses at a ratio of 15 VMs per socket.
Both VM and socket-based licenses may now be installed as subscriptions. A subscription license has a start and end date. Subscription and permanent licenses may co-exist in the same system.
Refer to the RecoverPoint for Virtual Machines Administrator’s Guide for more details.

Journal Access Module (JAM)
For fresh installs of RecoverPoint for VMs 5.2, JAM provides enhanced vSAN support, and improved scale and reliability for access to journal and repository volumes. It is installed automatically on ESXi clusters that run vRPAs.

One splitter communication mode
Starting from this release, only IP communication between the splitter and vRPAs will be supported. iSCSI communication between the splitter and vRPAs is no longer supported. While this simplifies network configuration and new installations, there are special instructions that must be followed when upgrading to version 5.2. For these instructions, see the Install and upgrade section in these release notes.

I/O throttling
The default I/O throttling setting varies according to the RecoverPoint for VMs product release:

  • 5.2.0.1 and earlier: none
  • 5.2.0.2 and later: low

For more details on this change, see RP-8875 under Known issues in the RecoverPoint for VMs 5.2 Release Notes, and the RecoverPoint for VMs Installation and Deployment Guide.

You can download the 5.2 P2 installation by clicking the screenshot below


The documentation set can be downloaded by clicking the screenshot below

Dell EMC OpenManage Integration for VMware vCenter v4.3.0 is now available

To effectively run today’s data centers, you need to manage both physical and virtual infrastructure. This means using multiple disconnected tools and processes to manage your environment, wasting valuable time and resources.

Dell has created a solution to address this problem while increasing task automation right from within the virtualization management console that you use most, VMware vCenter. The OpenManage Integration for VMware vCenter is a virtual appliance that streamlines tools and tasks associated with the management and deployment of Dell servers in your virtual environment.

OpenManage Integration for VMware vCenter (OMIVV) is designed to streamline the management processes in your data center environment by allowing you to use VMware vCenter to manage your entire server infrastructure – both physical and virtual. From monitoring system level information, bubbling up system alerts for action in vCenter, rolling out firmware updates to an ESXi cluster, to bare metal deployment, the OpenManage Integration will expand and enrich your data center management experience with Dell PowerEdge servers.

Fixes & Enhancements

Fixes:
1. Security fixes for known vulnerabilities in components consumed in OMIVV appliance:
a. Fixed BEAST (CVE-2011-3389) vulnerability issue in OMIVV appliance.
b. Fixed NFS shares world readable and NFS exported share information disclosure (CVE-1999-0170, CVE-1999-0211, CVE-1999-0554)
c. CentOS 6 : kernel (CESA-2018:2390) (Foreshadow)
d. CentOS 6 : kernel (CESA-2018:1854) (Spectre)
e. Oracle Java SE Multiple Vulnerabilities (July 2018 CPU) (Unix)
2. Fix to report NVMe disks present in the host in OMIVV.
3. Fix to support special characters in CIFS share password.
4. Fix to show BOSS related RAID attributes in System Profile.

Enhancements:
1. Support for the following:
a. PowerEdge MX7000 modular infrastructure
– PowerEdge MX740c and PowerEdge MX840c servers.
– Chassis and Servers management using unified Chassis Management IP
– Standalone and Multi-Chassis Configuration deployment modes of PowerEdge MX7000 chassis.
b. PowerEdge R940XA
2. Enhancement in Chassis Profile test connection flow to make it automated.
3. Support for OEM servers for OMIVV workflows.
4. Support for vSphere 6.7U1 and vSphere 6.5U2.
5. The following enhancements in the Firmware Update workflow:
a. Support to perform Clear Job Queue and reset iDRAC while creating a Firmware Update job.
b. Support to provide Custom Online Catalog location for a firmware update.
c. Support for HTTPS as default communication with Dell.com domains.
6. Enhanced support in the OS deployment workflow:
a. OS deployment management network on PCI NICs.
b. OS deployment on the BOSS disk.
7. Support to report Remaining Rated Write Endurance metrics for SSD in OMIVV.

You can download the new OVA from here (click the screenshot below)

Updating from 4.2 to 4.3


If you are already using the 4.2 version, you can update it to 4.3 directly from the appliance management: simply browse to the appliance management URL, create a backup of the existing configuration, then go to “Appliance Management” -> “Update”.

If the appliance can reach the internet, it will prompt you for the new version.

The Dell EMC VMware ESXi 6.7 U1 ISO is now available

==update== (13/03/2019)

you can now download an updated VMware-VMvisor-Installer-6.7.0.update01-10764712.x86_64-DellEMC_Customized-A04.iso from here https://downloads.dell.com/FOLDER05493094M/1/VMware-VMvisor-Installer-6.7.0.update01-10764712.x86_64-DellEMC_Customized-A04.iso

==update== (14/12/2018)

you can now download an updated VMware-VMvisor-Installer-6.7.0.update01-10764712.x86_64-DellEMC_Customized-A03.iso from here https://downloads.dell.com/FOLDER05323511M/1/VMware-VMvisor-Installer-6.7.0.update01-10764712.x86_64-DellEMC_Customized-A03.iso

 

VMware has just released vSphere 6.7 Update 1, which you can read all about here

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-671-release-notes.html

and here

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-671-release-notes.html

I also wrote a post about an enhancement to its Round Robin policy here

https://xtremio.me/2018/09/02/vmware-vsphere-6-7-u1-a-major-enhancement-is-coming-to-psp-round-robin/

Traditionally speaking, when installing ESXi, you want to use your specific server vendor’s customized ISO, as it contains the most up-to-date drivers for their servers. But if you browse to the download section of vSphere 6.7 U1 (the ESXi part of it), you won’t be able to see the Dell ISO there.

That’s because you can now download the customized Dell ISO directly from the Dell web site. Simply browse to your specific server’s downloads page; in my example, I’m using M630 servers, so the URL is https://www.dell.com/support/home/il/en/ilbsdt1/product-support/product/poweredge-m630/drivers

And indeed, scrolling down, you will be able to see the new ISO image

The specific release notes of the customized 6.7 U1 ISO, compared to the basic 6.7 U1 that you can download from the VMware web site, are:

ESXi 6.7U1 Build Number: 10302608

Dell Version : A00

Dell Release Date : 16 Oct 2018

VMware Release Date: 16 Oct 2018

Important Fix/Changes in the build

==================================

– Driver Changes as compared to previous build : No

– Base Depot: VMware-ESXi-6.7.0-10302608-depot.zip

Drivers included:-

Qlogic Drivers:

====================

– qcnic – 1.0.16.0

– qedentv – 3.7.9.1

– qedf – 1.2.24.6

– qedil – 1.0.22.0

– qedrntv – 3.7.9.2

– qfle3 – 1.0.69.0

– qfle3f – 1.0.51.0

– qfle3i – 1.0.16.0

– qlnativefc – 3.1.8.0

Dell PERC Controller Drivers:

=======================

– lsi-mr3 – 7.703.20.00

– dell-shared-perc8 – 06.806.90.00

Refer to the links below for more information:

6.7 VMware Release Notes:

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-671-release-notes.html

Please find more details on this Dell Customized Image on dell.com/support

https://docs.vmware.com/en/VMware-vSphere/index.html

XtremIO 6.2 – Part 5, The XMS/AppSync Integration is now GA

In talking to customers about their application test/dev and protection processes, there are two items, centered around a central theme, that I continuously hear: that there is a growing communication and collaboration gap between Storage Admins and Application Admins, and that increasingly Storage Admin and Application Admin responsibilities are being handled by the same person.

In fact, according to an IT Resource Strategies Survey, only 40% of Application and Storage Admins believe they are communicating well today, while 80% believe collaboration between the two functions is key to achieving organizational agility. Poor collaboration and communication between the two can lead to organizational tension which impacts the business, with a gap that is driven by technology and business trade-offs that each team is forced to make.

Although the key stakeholders share a common business goal, there are lots of natural tensions that exist between the application teams, storage teams, and the business planning functions that can limit the organization’s ability to have complete goal alignment.

All of the data movement operations, key to our business, require tight planning and communication between the application admins and storage administrators.

Luckily, we live in an exciting time where data is a company’s most valuable asset, and the storage arrays, administered by those trained in the craft of keeping this most valuable asset safe, are becoming smarter and smarter. With this, I am excited to share a key differentiated feature our team has been working on: an XMS VMware integration via AppSync, making XtremIO smarter by providing VMware visibility from the XMS. This feature allows administrators to gain insight into what the storage LUNs they provisioned to their VMware environment are being used for, along with the ability to protect their VMware Datastores and VMs in an application consistent manner directly from the storage management interface (in this case the XMS).

First, let’s talk about providing application level visibility. Whether you are both an application and storage admin, or a standalone storage admin, administrators have no visibility into the relationship between a storage LUN and an application. This makes it difficult to communicate using the same language (e.g. “I need more storage provisioned for my Datastore called OS_Images”, or “I have a corruption on my Oracle DB copy called DB_101_Test and need an old copy restored”), or to have an overall complete view of the storage-to-application mapping from the storage side.

The second is local application consistent protection. You know that “someone is running toward you, the system administrator / backup administrator” moment? That’s the moment when you or a colleague just realized you (or they) accidentally deleted a VM or Datastore that was not meant to be deleted. In fact, I think you would agree with me that most of the data that needs to be restored comes from the past 6-7 days. Now imagine being able to protect your VMware environment in an application consistent and aware manner, and being able to restore a Datastore or VM with the click of a button. Imagine being able to save the moment by responding with “Sure, which ESX host would you like me to restore a copy to?” “Whaaaaat? This guy knows the actual ESX hosts rather than just the server port addresses?”, the app admin will think. Your application admins will absolutely love you! And let’s not forget this is all powered by XtremIO Virtual Copies, which provide copies that are immediate, space and metadata efficient (no copied metadata or data blocks whether the copy is mounted or not), and have the same 100% performance as the original source volume without impacting it. By the way, this isn’t only for protection; the same can be done for a Datastore or VM copy, with the ability to create a copy and mount it to any ESX host the integration discovers.

This integration provides:

  • Application Awareness: Provides application information in XMS, such as correlation between VMware Datastores/VMs and the XtremIO volumes they are stored in
  • Application Consistent Copies: Provides the ability to create application
    consistent copies for backup/protection and repurposing, all while using XtremIO Virtual Copies. Storage Admins can provide Application Admins with certain SLAs for their applications
  • Ease of Use:
    One UI for admin to manage raw LUN copies (crash consistent) and application consistent copies
  • Common Language: Applications owners and storage admins are on the same page with a transparent copy workflow.

Let’s get into some details:

  • Setup: The integration works by pointing your XMS to an AppSync server. Just click the Settings gear → AppSync and point the XMS to your AppSync server using the Host Name (only) of the AppSync server, and the AppSync server’s admin username and password created at the time of the AppSync install. (If the XMS cannot communicate with your AppSync server via hostname due to DNS resolution, please open a case with support to get the IP address of the AppSync server added to the XMS’s hosts file).

    This integration works in both Brownfield (existing AppSync deployments) and Greenfield (brand new AppSync deployments). You can simultaneously use AppSync from both the AppSync UI and XMS UI. Remember AppSync is absolutely free; if you aren’t using it today, download it now!

    You need to add a vCenter. In a Brownfield deployment, if a vCenter is already registered to the AppSync server, it will automatically load it. If not, add your vCenters.

  • VMware Object Visibility: A key feature of this functionality is viewing the relationship between a VMware Datastore or VM and the XtremIO volume it resides on. As a vCenter can have storage from many different storage arrays of different types, the feature has intelligence built in, so it will ONLY show you Datastores and VMs that reside on any of the XtremIO arrays that are currently being managed by the XMS that you pointed to the AppSync server.

You can see these relationships in different places:

First is the AppSync -> Datastores tab, which will show you the Datastore name, the name of the volume on the XtremIO array it resides on, the vCenter Server the datastore belongs to, the size of the datastore, the type, the ESX servers it’s mounted on, and whether it is protected (meaning you have a Service Plan subscribed to the datastore to protect it).

Second is the AppSync Virtual Machines tab, which will show you the VM name, the vCenter server the VM belongs to, the name of the volume on the XtremIO array it resides on, the Datastore the VM resides in, and the time the last copy of this VM was created. Note that VM discovery does not happen until a datastore has been protected via a Service Plan.

Third is the original Configuration Volumes tab, to which we have added a new column (in the column drop-down) called Datastore, and a new Related Entities tab called Virtual Machines that will show you the details of the VMs residing on this volume:

  • Application Protection: The protection of your datastores and VMs is handled via a Service Plan. The integration will load any previously defined Service Plans for VMware in a Brownfield deployment, or you can create a brand new one. When creating a brand new one, the key things to specify are:
    • Startup Type: Scheduled or On Demand
    • Consistency Type: VM Consistent or Crash Consistent.
      • VM Consistent will do it in a VMware application consistent manner, momentarily stunning the VM and then taking a copy using XtremIO Virtual Copies
      • Crash Consistent just takes a crash consistent snapshot using XtremIO virtual copies
    • Include Virtual Machines:
      • If the VM has additional disks attached to it, which are part of another Datastore or separate RDMs, selecting YES will include them, while selecting NO will include only the datastore the VM OS resides on.
    • Mount: If you want to mount the copy created, with a new signature or existing, along with unmounting the previous copy.




In order to then protect a Datastore and its VMs, we click Protect → Subscribe and define the Service Plan we want to protect the Datastore with. If the Service Plan is set as Scheduled, it will run per the defined scheduled job, and if On Demand, we simply click Protect → Run Now to run the Service Plan. Similarly, if I have a Service Plan that is set on a schedule and I need to run the protection now without waiting for the next scheduled job (for example, I know a maintenance window is about to happen), I can do the same by clicking Run Now.

If I run the Service Plan from within the Datastores tab, it will run it only for the subscribed Datastore. If instead, I have multiple Datastores subscribed to the same Service Plan, and I want to run the Service Plan across all these assigned Datastores, I can go to the Service Plans Tab, and select the Service Plan I want to run, while seeing all the Datastores it is assigned to, and select Run to run it across all.

  • Restore: Now that I have protected a Datastore, in the case of a failure, I now have the ability to select a PIT copy and restore the entire Datastore, or just the individual VM within a Datastore.
    • If restoring an entire Datastore, I must select the PIT to restore from, along with some options.


  • I also have the ability to restore an individual VM, without restoring the entire Datastore. Within the Virtual Machines tab, select a Virtual Machine and click Restore. Next, select a PIT copy to Restore from, along with whether you want to restore the VM to the original location, or to a different vCenter, ESX Host, or even Datastore (You actually have the ability to Restore the VM to a non XtremIO datastore as part of this flow.)

In summary, we are making XtremIO smarter by making the storage LUN to VMware object relationship visible within the XMS, with the ability to protect or create test/dev copies of these objects in a VMware application consistent manner. Doing so from the XMS empowers the storage admins to have visibility into what resides on their LUNs, and enables them to perform storage protection and copy functions in an application consistent manner, all from one UI. Try it today, and remember, AppSync is free!

You can see a demo on how it all works below!