XtremIO 6.2, Part 2 – Introduction to Quality of Service

Introduction to XtremIO Quality of Service

Enterprises are leveraging Dell EMC XtremIO high-performance storage to consolidate multiple applications and environments in a single cluster. In such consolidated deployments, tens of applications, each with different performance needs and importance to the organization, run dozens of workloads against the storage array. Customers may want to limit low-priority applications ("noisy neighbors") so that they do not consume too much of the storage array's resources, ensuring that high-priority workloads are better serviced by the cluster.

Another strong use case of XtremIO is its integrated Copy Data Management (iCDM) capabilities, allowing customers to create multiple copies of their production data within the same storage array for their test, development and data analytics environments, with close to no extra capacity used for those copies, thus consolidating the entire application life cycle in the same cluster and saving a great deal of power and space. When doing so, customers may want to make sure that their non-production copies are not overloading the array and that there are enough storage resources to service production needs.

For these and similar reasons, we are introducing a new feature in XtremIO storage clusters – Quality of Service!

With this feature, starting in version 6.2, customers can now restrict a single Volume, an entire Consistency Group or an Initiator Group so that it does not exceed a certain limit in terms of Bandwidth or IOPS, thereby allowing other workloads to utilize more of the cluster’s resources.

To use this feature, all that is needed is to configure a new QoS Policy in the new QoS Policies tab under Configuration in the XMS WebUI and assign it to the entity that needs to be restricted.

There are 3 inputs relevant when setting a QoS Policy:

  1. The Limit Type
  2. The Limit itself
  3. Burst Percentage

The Limit Type is one of two values – Fixed or Adaptive. A Fixed QoS Policy sets a static limit for the XtremIO entity it is assigned to, regardless of its size. For instance, 200MB/s Max BW for a Volume. On the other hand, an Adaptive QoS Policy sets an adjusting limit for the XtremIO entity, taking its size (in GBs) into consideration. For instance, 1MB/s Max BW per 1GB of size. An Adaptive QoS Policy can be assigned to either a Volume or a Consistency Group (it cannot be assigned to an Initiator Group, since an Initiator Group has no storage capacity). When assigning an Adaptive QoS Policy to a Consistency Group, the size to consider is the sum of all of its Volumes.

The Limit itself can be either a specific Max Bandwidth value (to be set in either MB/s or KB/s), or a calculated Max Bandwidth value using a chosen IO Size and a requested Max IOPS limit:

(MAX_BW = MAX_IOPS x IO_SIZE)
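To make this arithmetic concrete, here is a minimal sketch in Python (the helper names are our own and the 500GB Volume is hypothetical; this is an illustration, not part of XMCLI) showing both a Fixed limit derived from an IOPS target and an Adaptive per-GB limit:

    # Illustration only -- these helpers are not part of XMCLI.
    KB = 1024
    MB = 1024 * KB

    def fixed_max_bw_mbps(max_iops, io_size_bytes):
        """Fixed policy: MAX_BW = MAX_IOPS x IO_SIZE, returned in MB/s."""
        return max_iops * io_size_bytes / MB

    def adaptive_max_bw_mbps(bw_per_gb_mbps, entity_size_gb):
        """Adaptive policy: the limit scales with the entity's size in GB."""
        return bw_per_gb_mbps * entity_size_gb

    # The Fixed policy shown below: 6,400 IOPS of 16KB blocks = 100 MB/s
    print(fixed_max_bw_mbps(6400, 16 * KB))    # 100.0

    # An Adaptive policy of 1 MB/s per 1GB applied to a hypothetical 500GB Volume
    print(adaptive_max_bw_mbps(1.0, 500))      # 500.0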

Let’s see a couple of QoS Policies configured in the WebUI, followed by their corresponding XMCLI commands:


On the left: an Adaptive QoS Policy limiting the assigned XtremIO entity to Max 1MB/s BW per 1GB with 50% Burst;

On the right: a Fixed QoS Policy limiting the assigned XtremIO entity to Max 6,400 IOPS of 16KB blocks (= 100MB/s)

And the XMCLI commands to create such Policies:

    > add-qos-policy qos-policy-name="QoS_Adaptive_BW" limit-type=adaptive max-bw="1m" burst-percentage=50

    > add-qos-policy qos-policy-name="QoS_Fixed_IOPS" limit-type=fixed max-iops=6400 io-size="16k"

For our example we will set a fixed BW QoS Policy of 200MB/s (as can be seen in the image below) and assign it to one of our Volumes (we will touch on Burst Percentage in the second part of this post).


A Fixed 200MB/s QoS Policy

To view existing QoS Policies from the XMCLI run:

    > show-qos-policies


XMCLI show-qos-policies output

Let’s put our QoS Policy to use. We are running a deployment with 4 VMs, each running on a separate XtremIO Volume, with each VM running a similar intensive IO workload against our single-brick XtremIO cluster.


A 4 Volume intensive workload

We identify the VM running on vol4 as a low-priority workload in our environment and assign our new QoS Policy to it, to restrict its resource consumption and free up storage resources for the other Volumes in the system.

We will first assign the Policy to the Volume in Monitor Only mode so we can preview its expected effect on the Volume. This is done through the Volumes tab in the Configuration section:


Assigning a QoS Policy to a Volume

Or from the XMCLI:

    > modify-volume vol-id="vol4" qos-policy-id="QoS_Fixed_200MB_BW" qos-enabled-mode=monitor_only

(Assigning a QoS Policy to other entities from the XMCLI is done the same way with the respective modify command for the entity – modify-consistency-group and modify-initiator-group.)

We created a custom report in the XMS Reports section to see the expected effect of the QoS Policy on vol4. In the graph we can see the actual BW of the Volume, the QoS Effective Max BW, which is the QoS Policy we assigned to the Volume (for now, in “Monitor Only” mode), and the QoS Exceeded BW (= Actual_BW – QoS_Effective_Max_BW), which shows by how much the Volume currently exceeds the QoS Policy.


Assigning the QoS Policy in “Monitor Only” mode

After reviewing the expected effects of the QoS Policy on the Volume, we decide that we do in fact want to enable it. We select Modify Assigned Policy for the Volume, change the QoS State to Enabled and click Apply. We will see the effects in the performance graphs instantly:


Assigning the QoS Policy in “Enabled” mode – vol4 graph


Assigning the QoS Policy in “Enabled” mode – performance dashboard

We can see in our custom report for vol4 that the actual BW of the Volume decreased to align with the QoS Effective Max BW, and that the QoS Exceeded BW reset to zero.

In the general Performance Dashboard of all 4 Volumes, we can see that vol4‘s BW, which is now limited by a QoS Policy, is reduced to 200MB/s, while the other Volumes’ BW has increased, utilizing the array resources that were freed when we limited vol4 – achieving precisely the goal we aimed for!

We can also view QoS related properties for the Volume from the XMCLI with the show-volume command, for instance:

    > show-volume vol-id="vol4" prop-list=[qos-policy-id,qos-enabled-mode,qos_effective_max_bw,bw]


XMCLI show-volume output

Burst IOs


When defining a QoS Policy on XtremIO, we can also allow limited entities to have momentary bursts of IOs above the maximum configured BW allowed by the Policy. This is done using the Burst % input of the QoS Policy.

When an XtremIO Volume is limited by a QoS Policy with a Burst Percentage above 0%, every time that Volume runs a load against the storage cluster that is lower than its QoS limitation, it starts accumulating IO “credits”. Those credits are calculated as follows: if L is the QoS limit and X is the actual BW used, then for every second of the workload the Volume earns L – X credits and can accumulate up to B credits (where B is the QoS Effective Burst).

In the example below, the actual BW at the beginning of the load is about 160MB/s, while the QoS limitation is 200MB/s and the Effective Burst is 800MB (configured as a 400% Burst Percentage, since 400% of 200MB/s is 800MB). This means that at the beginning of the load the Volume “earns” 40MB of credits every second, and after about 20 seconds it reaches the maximum of 800MB of credits.

The next time the Volume runs a load that exceeds its QoS Policy, it is allowed to use its IO credits and run a load higher than its QoS Policy until it has used all of its IO credits (in our example – 800MB of aggregated BW), at which point the Volume regresses back to the QoS BW limitation of 200MB/s.
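Conceptually, this behaves like a token bucket. The sketch below is a simplified model of the behavior described above (our own illustration, not XtremIO’s actual implementation), reproducing the numbers from this example:

    # Simplified token-bucket model of QoS burst credits (illustration only).
    def simulate_burst(limit_mbps, burst_credit_mb, offered_load_mbps):
        """Yield (delivered BW, remaining credits) for each second of offered load."""
        credits = 0.0
        for offered in offered_load_mbps:
            if offered <= limit_mbps:
                # Running below the limit earns (limit - actual) credits per second,
                # capped at the QoS Effective Burst.
                credits = min(burst_credit_mb, credits + (limit_mbps - offered))
                delivered = offered
            else:
                # Running above the limit spends credits; once they are gone,
                # the Volume regresses back to the configured limit.
                spend = min(credits, offered - limit_mbps)
                credits -= spend
                delivered = limit_mbps + spend
            yield delivered, credits

    # 200 MB/s limit with a 400% Burst => 800 MB of credits.
    # ~20 seconds at 160 MB/s earn 40 MB of credits per second until the 800 MB cap is reached;
    # a subsequent 400 MB/s burst is then sustained for ~4 seconds before dropping back to 200 MB/s.
    load = [160] * 25 + [400] * 10
    for second, (bw, credits) in enumerate(simulate_burst(200, 800, load)):
        print(f"t={second:2d}s  delivered={bw:3.0f} MB/s  credits={credits:3.0f} MB")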


Burst IOs in use under a QoS Policy

Conclusion

The XtremIO 6.2 QoS feature gives you, our customers and partners, peace of mind when it comes to isolating workloads so they cannot monopolize the array’s performance, or when you want to guarantee a specific bandwidth for your internal customers. As demonstrated, the feature is very powerful, yet very easy to configure.

You can watch a demo showing everything by clicking the link below


XtremIO 6.2, Part 3 – The new Web UI

XtremIO 6.2 Web UI – Additional intelligence at your fingertips

In XtremIO version 6.2, additional intelligence has been added to the Web UI.

First of all, the popular Weekly Patterns widget has been promoted to the Dashboard. The Weekly Patterns widget allows you to track your relative weekly traffic. Weekly or daily peaks can be easily identified. In addition, clicking on the Unusual Behavior button allows you to easily determine any anomalies in your traffic. The Unusual Behavior button will highlight whether your current traffic is higher or lower than the average traffic demonstrated over the past 8 weeks. It will also present the size of the deviation as a percentage.

 

XtremIO has been providing Top Performing Volumes and Initiator Groups for some time. This allows customers to determine which storage objects are consuming the highest IOPS or Bandwidth, or demonstrating the highest latencies. Following customer input, and the high adoption of Consistency Groups, we have also added Top Performing Consistency Groups calculations in version 6.2. Customers can now easily see which application Consistency Groups are consuming the highest system resources (IOPS or BW).

 

In version 6.2 we have also introduced a dedicated Top Performers report. This report allows you to select a specific point in time, for example a time with a demonstrated peak, and understand which Volumes, Initiator Groups, or Consistency Groups were the cause of this performance peak.

Having the ability to easily determine which storage applications and objects caused a performance peak gives a Storage Administrator tremendous troubleshooting power. If a performance peak happened in the past (for example, during the night), the Storage Admin can easily identify the culprits using the dedicated Top Performers report.

 

In order to better understand a cluster’s behavior, it is necessary to understand which applications are running on the array. Different applications have different DRR (Data Reduction Ratio) and performance patterns. For example, databases are highly compressible, while virtual server environments display high deduplication rates. XtremIO 6.2 adds the ability to classify storage Volumes by Application Type. Once the storage objects have been classified by Application Type, the Storage Admin can easily identify which application types are running on the array. In addition, the admin can then review the iCDM (integrated Copy Data Management) policies for the different Application Types.

Lastly, we have also added a widget to the main UI dashboard that shows the replication bandwidth.

XtremIO 6.2 – Part 1, New options for Scaling UP and Out

One of the most important requirements of today’s IT platforms is the ability to adjust capabilities to dynamic business requirements. In modern enterprise storage subsystems this requirement translates to two basic features:

  • Dynamic growth in capacity. Allow non-disruptive capacity growth of an existing array.
  • Dynamic growth in performance. Add more compute resources to support the updated performance demands of an existing system.

Here we present the XtremIO X2 scalability features and how they make IT infrastructure growth easier and more efficient.

Use-case 1: Help!!! There is no free space in my DB.

The XtremIO array supports non-disruptive capacity upgrades. New SSDs can be added to an existing array without adding more CPU/memory resources. This supports all the potential capacity growth scenarios without any limitations:

  • Current production volumes growth
  • Copies of current volumes growth
  • New copies for existing volumes
  • New volumes


 

Use-case 2: Help!!! I need more performance for my DB


XtremIO is designed as a scalable, distributed system optimized for high performance demands and linear scalability.


Scale-UP process:

Each DAE in an X-Brick holds up to 72 SSDs, divided into two equal groups of 36 SSDs. These two groups are called DPGs (Data Protection Groups). A DPG must be fully populated with SSDs before the next DPG starts being filled with SSDs.

For a single-brick array, the scale-up granularity steps look as follows (a short sketch after the list illustrates these rules):

  • An initial X-Brick configuration of 18 SSDs comprises the first DPG.
  • Scaling up by any even number of SSDs between 2 and 18 (2-SSD granularity) to a full configuration of the first DPG, until the X-Brick has a total of 36 SSDs.
  • Scaling up with 18 SSDs as the initial configuration of the second DPG, bringing the X-Brick to a total of 54 SSDs.
  • Scaling up by any even number of SSDs between 2 and 18 (2-SSD granularity) to populate the second DPG with 36 SSDs, until the X-Brick has a total of 72 SSDs.
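To make these single-brick granularity rules concrete, here is a small sketch that encodes them as a validity check (our own illustration in Python; it is not a Dell EMC sizing tool):

    # Valid single X-Brick SSD counts per the scale-up steps above (illustration only):
    #   18 SSDs initially, then even steps of 2 up to 36 (first DPG full),
    #   then 18 more to start the second DPG (54 total), then even steps of 2 up to 72.
    VALID_SINGLE_BRICK_SSD_COUNTS = set(range(18, 37, 2)) | set(range(54, 73, 2))

    def is_valid_single_brick_ssd_count(ssd_count):
        """Return True if the SSD count is a supported single-brick configuration."""
        return ssd_count in VALID_SINGLE_BRICK_SSD_COUNTS

    print(is_valid_single_brick_ssd_count(36))  # True  -- first DPG fully populated
    print(is_valid_single_brick_ssd_count(44))  # False -- the second DPG must start with a full 18 SSDs (54 total)
    print(is_valid_single_brick_ssd_count(60))  # True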

Multi-brick scale-up requires either:

  1. Leveled growth

    When the number of SSDs is identical between bricks. The scale-up granularity and steps are similar to those of a single brick, but must be applied consistently across all the X-Bricks in the XtremIO array.

  2. Unleveled growth

    Initializing the second DPG requires 18 SSDs. In multi-brick systems, a leveled population of 18 SSDs per X-Brick might require more SSDs than the customer needs from a capacity perspective. To support more granular growth, this step may be performed sequentially rather than concurrently across X-Bricks.

Scale-OUT process:

Scale-out is easy and doesn’t impact current array performance. As part of the process, new resources are physically installed and added to the array. The existing user data and performance workload are evenly distributed across the existing and new resources as part of the scale-out process. No user-level planning, migration, rebalancing or any other additional management tasks are required!


As with the scale-up process, new resources can be added during scale-out in two ways:

  1. Leveled growth

    In this case, the number of SSDs is identical between the existing and the new bricks.

  2. Unleveled growth

    When the existing X-Bricks are fully populated with 72 SSDs, leveled growth would require a fully populated new X-Brick. To support more granular growth that fits the customer’s capacity requirements in this scenario, the new X-Brick can be added with a minimum of 18 SSDs and a granularity of 6 SSDs (18, 24, 30, 36, 54, 60, 66 or a leveled 72).

Need more space?? 3.84TB drives are here

Starting with version 6.2, XtremIO supports 3.84TB drives in new X2-R systems. In the current version, 3.84TB drives are supported in new arrays only, not as a scale-up or scale-out option for existing arrays. The X2-R cluster supports 60 SSDs of 3.84TB per X-Brick (60 × 3.84TB ≈ 230TB), allowing a single X-Brick to grow to 230TB raw capacity and 204TB usable capacity (at a 1:1 Data Reduction Ratio).

X-Brick Configurations

To accommodate a wide range of capacity requirements, we offer the latest X2 platform in three configurations: X2-S, X2-T, and X2-R, with different SSD sizes. You can start as small as 7.2TB raw capacity. In most use cases X2-R will be the typical configuration. X2-S is intended for use cases with very high levels of IO density, where dedupe ratios are very large or very large numbers of active snapshots are being used. Work with the sizing tools and XtremIO specialist SEs to identify the optimal option. X2-T is a cost-effective entry point into the XtremIO family where deployment and growth are expected to remain under 200TBe.


XtremIO & Stratoscale White Paper

To improve competitiveness in a fast-paced world, organizations are embracing software as a prime differentiator. A software-driven business is more agile and efficient, innovates faster, and responds dynamically to changing market and customer demands. Increasingly, gaining these compelling advantages requires adopting cloud-native applications.

Building and maintaining a proper cloud platform on premises by yourself (for example, life-cycle management, troubleshooting issues, etc.) can be capital-intensive and difficult, as most organizations do not have the required skills. Going to a public cloud is an option for some, but those with sensitive data, or compliance and performance requirements, may not want to take the risk. In either case, you need to devote special attention to ensuring high availability of your cloud-native applications, which requires a specialized set of skills.

Hybrid cloud solutions have historically focused on bridging infrastructure aspects of the enterprise data center with public clouds. By stretching compute, storage, and networking layers between the two, the notion of hybrid cloud has enabled new use cases, requiring better redundancy, elasticity and security.

The public cloud has since pressed forward to offer a wider and wider range of cloud (platform) services, ranging from container orchestration to analytics, and big data processing to managed database services.

In order for businesses to maximize the effectiveness of their cloud strategies, hybrid cloud solutions must evolve to something beyond simply infrastructure. Hybrid cloud solutions must now encompass the same types of cloud services as those powering applications that are born in the public cloud.

In this white paper, we present a solution that provides the robust capabilities and built-in high availability that you need for cloud-native applications; the combination of Dell EMC XtremIO X2 all-flash storage platform and Stratoscale’s hybrid cloud solution.

The paper details the features of this integrated solution around high availability and scalability for the applications and the platforms. Therefore, you can take your most critical cloud-native applications from development to production using this solution, with complete confidence.

With this proven cloud platform, organizations can accelerate their digital business objectives via cloud-native applications, from development through testing and into production, on a single platform, with ease and agility. Most importantly, eliminating the risk of application downtime preserves critical revenue streams, helps ensure customer satisfaction and strengthens your competitive advantage.

You can read the full white paper by clicking the screenshot below




You can also watch a video on how it all works by clicking the video below

VMware vSphere 6.7 U1 – A Major enhancement is coming to PSP Round Robin

During VMworld 2018 US, VMware announced its upcoming vSphere 6.7 U1 release; you can read its highlights here:

https://blogs.vmware.com/vsphere/2018/08/under-the-hood-vsphere-6-7-update-1.html

What they didn’t cover in that post are the upcoming changes to the Path Selection Policy (PSP) when used in conjunction with the Round Robin (RR) algorithm.

Up until now, when you wanted to get the best performance out of your storage array, you could:

  1. Install a third-party multipathing plugin (MPP) such as PowerPath/VE
  2. If using the native NMP, change the path selection policy from “Fixed” or “MRU” to Round Robin
  3. If using Round Robin, follow the recommendation of many arrays (including XtremIO) and change the policy to IOPS=1, which means that after every single command the traffic moves to another path in a round-robin manner, as can be seen in the screenshot below

While Round Robin is widely used in vSphere-based environments, it has one drawback: it cannot detect whether the paths are congested or not.

Well, in vSphere 6.7 U1 there is a massive (and positive) change to Round Robin: it can now take the measured latency into account.

https://storagehub.vmware.com/export_to_pdf/vsphere-6-7-core-storage-1

With the release of vSphere 6.7 U1, there are now sub-policy options for VMW_PSP_RR to enable active monitoring of the paths. The policy considers path latency and pending IOs on each active path. This is accomplished with an algorithm that monitors active paths and calculates average latency per path based on either time and/or the number of IOs. When the module is loaded, the latency logic is triggered and the first 16 IOs per path are used to calculate the latency. The remaining IOs are then directed, based on the results of the algorithm’s calculations, to the path with the least latency. When using the latency mechanism, the Round Robin policy can dynamically select the optimal path and achieve better load balancing results.

The user must enable the configuration option to use the latency-based sub-policy for VMW_PSP_RR:

    esxcfg-advcfg -s 1 /Misc/EnablePSPLatencyPolicy

To switch to the latency-based sub-policy, use the following command:

    esxcli storage nmp psp roundrobin deviceconfig set -d <Device_ID> --type=latency

If you want to change the default evaluation time or the number of sampling IOs used to evaluate latency, use the following commands.

For the latency evaluation time:

    esxcli storage nmp psp roundrobin deviceconfig set -d <Device_ID> --type=latency --latency-eval-time=18000

For the number of sampling IOs:

    esxcli storage nmp psp roundrobin deviceconfig set -d <Device_ID> --type=latency --num-sampling-cycles=32

To check the device configuration and sub-policy:

    esxcli storage nmp device list -d <Device_ID>
    Usage: esxcli storage nmp psp roundrobin deviceconfig set [cmd options]

    Description:
      set   Allow setting of the Round Robin path options on a given device controlled by the Round Robin Selection Policy.

    Cmd options:
      -B|--bytes=<long>               When the --type option is set to 'bytes' this is the value that will be assigned to the byte limit value for this device.
      -g|--cfgfile                    Update the config file and runtime with the new setting. In case device is claimed by another PSP, ignore any errors when applying to runtime configuration.
      -d|--device=<str>               The device you wish to set the Round Robin settings for. This device must be controlled by the Round Robin Path Selection Policy (except when -g is specified) (required)
      -I|--iops=<long>                When the --type option is set to 'iops' this is the value that will be assigned to the I/O operation limit value for this device.
      -T|--latency-eval-time=<long>   When the --type option is set to 'latency' this value can control at what interval (in ms) the latency of paths should be evaluated.
      -S|--num-sampling-cycles=<long> When the --type option is set to 'latency' this value will control how many sample IOs should be issued on each path to calculate latency of the path.
      -t|--type=<str>                 Set the type of the Round Robin path switching that should be enabled for this device. Valid values for type are:
                                        bytes: Set the trigger for path switching based on the number of bytes sent down a path.
                                        default: Set the trigger for path switching back to default values.
                                        iops: Set the trigger for path switching based on the number of I/O operations on a path.
                                        latency: Set the trigger for path switching based on latency and pending IOs on path.
      -U|--useano=<bool>              Set useano to true, to also include non-optimized paths in the set of active paths used to issue I/Os on this device, otherwise set it to false.

The diagram below shows how sampling IOs are monitored on paths P1, P2, and P3 and how a path is eventually selected. At time “t” the sampling window starts. In the sampling window, IOs are issued on each path in Round Robin fashion and their round-trip time is monitored. Path P1 took 10ms in total to complete its 16 sampling IOs. Similarly, path P2 took 20ms for the same number of sampling IOs and path P3 took 30ms. As path P1 has the lowest latency, path P1 will be selected more often for IOs. Then the sampling window starts again at “T”. Both “m” and “T” are tunable parameters, but we suggest not changing them, as they are set to default values based on the experiments run internally while implementing the feature.


Legend:
T = interval after which sampling should start again
m = sampling IOs per path
t1 < t2 < t3  ->  10ms < 20ms < 30ms
t1/m < t2/m < t3/m  ->  10/16 < 20/16 < 30/16
In testing, it was found that with the new latency monitoring policy, even with up to 100ms of latency introduced on half the paths, the PSP sub-policy maintained almost full throughput.
Setting the values for the Round Robin sub-policy can be accomplished via the CLI or using host profiles.
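To illustrate the core idea, here is a simplified model of the latency-based selection in Python (our own sketch, not VMware’s NMP code; the real sub-policy also weighs pending IOs and keeps sampling the slower paths rather than permanently picking a single winner):

    # Simplified model of the latency-based Round Robin sub-policy (illustration only).
    # In the example above, 16 sampling IOs take 10ms, 20ms and 30ms in total on
    # paths P1, P2 and P3, so P1 has the lowest average latency and is preferred.
    SAMPLING_IOS = 16  # "m" in the legend above

    def avg_latency_ms(total_ms, sampling_ios=SAMPLING_IOS):
        """Average round-trip time per sampling IO on a path (t/m in the legend)."""
        return total_ms / sampling_ios

    def preferred_path(total_latency_by_path):
        """Pick the path with the lowest average latency for the next interval."""
        return min(total_latency_by_path, key=lambda p: avg_latency_ms(total_latency_by_path[p]))

    print(preferred_path({"P1": 10.0, "P2": 20.0, "P3": 30.0}))  # P1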

You can now watch my VMworld 2018 Session

I had fun delivering my XtremIO best practices session at VMworld 2018; clicking the link below will take you to a video recording of the session.

Also, here are some references to topics I discussed in my session

VMworld 2018, XtremIO Integration with CloudIQ


We are excited to announce that XtremIO has been integrated to CloudIQ.

CloudIQ is the Software as a Service (SaaS) platform offered by Dell EMC that allows customers to manage all of their Dell EMC arrays from a single pane of glass. A Dell EMC customer simply logs into the CloudIQ application and is automatically presented with an overview of all of the Dell EMC arrays deployed. CloudIQ currently supports Unity, SC, XtremIO and PowerMax. Because CloudIQ is a SaaS application, no software needs to be installed at the customer site. Being SaaS-based also means that it keeps updating itself with new features that you, the customer, benefit from the moment they are released. The initial rollout for XtremIO customers will start in September 2018 and, as mentioned above, CloudIQ will continue to add new metrics with each release. This is also your opportunity to let us know if you like it and which additional metrics you would like to see surfaced via its portal.

The CloudIQ Storage Analytics Foundation

CloudIQ is built on Pivotal Cloud Foundry (PCF). PCF provides a production-ready application container runtime, fully automated service deployments, and the security controls required to enable modern application development using microservices architectures, containers, CI/CD and DevOps. Meanwhile, operations teams can sleep easier with the visibility and control made possible by platform-enforced policies and automated lifecycle management.

Portal Security

CloudIQ provides each customer an independent secure portal, and ensures that customers will only be able to see their own data via CloudIQ.

CloudIQ access requires that each user has a valid Dell EMC support account.

Each user can only see those systems in CloudIQ which are part of that user’s site access as per configuration in Dell EMC Service Center.

Cloud Hosting Infrastructure

CloudIQ is hosted on Dell EMC infrastructure which is Highly Available, Fault Tolerant, and guarantees a 4-hour Disaster Recovery SLA.

Because it is web-based, CloudIQ is accessible anytime, anywhere.

CloudIQ delivers several key values to customers:

  • Reduce Total Cost of Ownership: CloudIQ provides an easy single pane of glass from which you can monitor your Dell EMC storage systems, all from the web, so you can access it anytime, anywhere.
  • Expedite Time to Value: Because it is deployed from the EMC Cloud, customers can simply log into their CloudIQ account and immediately access this valuable information. There is nothing to set up, no licenses, no burdens.
  • Drive Business Value: CloudIQ’s Proactive Health Score provides an easy way to identify and understand potential vulnerabilities in the storage environment. With these proactive and targeted guidelines, the result is a more robust and reliable storage environment, resulting in higher uptime and optimized performance and capacity.

Cloud IQ takes Storage System Support to a whole new level, by delivering:

  • An Operational Dashboard – for Proactive Monitoring, and Alerting
  • Predictive Analytics – to help both with short-term Risk Avoidance and longer-term Capacity and Performance Planning
  • A Proactive Healthscore – providing Proactive Support + Recommended Remediation

Proactively Monitoring Storage Health

CloudIQ is proactively running Health Checks in these 5 categories:

  • System Health – checking for faulty components, such as unreliable cables, or fans
  • Configuration – this is where CloudIQ adds the “IQ” by understanding the context for a given component – such as whether a failed drive is an unused spare, or active in a RAID 5 configuration
  • Capacity – CloudIQ is actively monitoring capacity issues that could result in DU – such as Pools that are both Oversubscribed and also approaching Full
  • Performance – CloudIQ will actively monitor for potential performance issues, such as when your CPUs are running at high rates, and will also identify if there is an imbalance, so that you can shift workloads for better balance
  • Data Protection – CloudIQ is monitoring your Data Protection policies, to ensure your data is protected as expected – both replication RPOs and also Snapshot policies for scheduling and retention

Once logged in, customers can view XtremIO clusters from different perspectives:

  • Configuration View – a summary view of the key metrics of the XtremIO cluster configuration
  • Health View – CloudIQ offers a consistent health score mechanism across all Dell EMC arrays. From this perspective, customers can easily identify which arrays are healthy, and which arrays have issues that require attention.
  • Performance View – key performance metrics are displayed per each XtremIO cluster
  • Capacity View – capacity and savings numbers are displayed for all the XtremIO clusters

If more information is required, the user can easily drill down into the cluster details. More details are then shown regarding each of the functional areas (Configuration, Health, Capacity, Performance) described above.

CloudIQ Analytics with Machine Learning

  • Predictive Analytics – Seasonal Decomposition of Time Series by Loess (STL)
    • Capacity – Predict time to full for Storage Pools
    • Performance – Performance anomaly at System, Pool, and Object level
  • Root Cause Investigation
    • Hierarchical, partitional & fuzzy time-series clustering algorithms
  • Known Issue Identification, e.g.
    • KB article – iSCSI subnet address requirements
    • TSA – Defer OE Upgrade if customer has VVOLs with Snapshots

    • ETA content – CIFS/SMB File System vulnerability on a specific code version

You can also watch Susan Sharpe demonstrating CloudIQ during TFD.

VM Awareness


A recurring request from XtremIO customers has been to integrate with vCenter and provide information about the Virtual Machines (VMs) that are stored on the XtremIO cluster. Now that XtremIO is integrated with the CloudIQ platform, all CloudIQ features can easily be implemented for XtremIO.

Coming soon, XtremIO will display information on all of the VMs stored on the XtremIO Cluster. In addition, when selecting a specific Volume, CloudIQ will provide a list of all of the VMs that are stored on that particular Volume.

CloudIQ – There’s An App for That

We’re also very excited about our new mobile app. Even without a mobile app, CloudIQ makes it simple for anyone with a browser to connect to cloudiq.dellemc.com and understand their storage health. But now we’ve made storage health even more accessible with a new mobile app available for iPhone and Android. It will be available in the next month and can show your storage health across all your connected systems – right from the palm of your hand.


Conclusion

XtremIO joining the CloudIQ platform is exciting in multiple facets:

  • By simply entering the CloudIQ URL, customers have access to all key metrics of XtremIO clusters. No local installations are required.
  • The storage admin is provided with a cockpit view of all of their Dell EMC storage arrays, offering a single pane of management
  • CloudIQ features (for example VM Awareness) are easily implemented by XtremIO, increasing the amount and velocity of XtremIO feature development.

CloudIQ integration with XtremIO will be available in September 2018

A link to the original PR can be found here

https://blog.dellemc.com/en-us/cloudiq-powermax-vmax-sc-series-xtremio-instantly-view-dell-emc-storage-health/