EMC ScaleIO is awesome; here's how you can run a fully functional demo environment on your home PC or laptop.
I'm using 16GB of RAM in my laptop and I have a 512GB SSD drive. All the networking in this configuration is done in my case using NAT, but feel free to use physical IP addresses if your environment allows it.
First, you can download the EMC ScaleIO software and documentation from here:
I will be demoing the installation of the ScaleIO virtual appliance in a vSphere environment, but you can also install it on bare-metal Linux and Windows hosts.
The first stage is to install ESXi 5.5 as a VM. I'm using VMware Workstation 10, so it will automatically detect your ESXi ISO file and pick the matching guest OS type (ESXi 5). You will need at least 4GB of RAM for each ESXi VM to run, and we need a minimum of 3 ESXi VMs (N+1).
You can download the VMware Tools for nested ESXi hosts from here:
I also added 4 additional NICs to each nested ESXi host, so in essence each ESXi VM should have 5 NICs. Why? Well:
NIC1 = management + vMotion, NIC2 = iSCSI1, NIC3 = iSCSI2, NIC4 = internal ScaleIO traffic 1, NIC5 = internal ScaleIO traffic 2
I also created 2 additional "drives" per ESXi server, one of 10GB and one of 5GB. The 10GB drive will be used for deploying the ScaleIO appliance into, plus the spare capacity for the actual storage, and the other 5GB is storage that will also be used by ScaleIO.
You will need at least 3 ESXi servers configured in this manner (N+1).
After you SSH to the ESXi host and install the VIB, it's time to connect to a vCenter. I'm using the vCenter 5.5 virtual appliance: just double-click the OVF descriptor, it will be imported and converted to VMDK, and from there it's business as usual for installing the vCenter appliance.
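For reference, installing the VIB over that SSH session is a one-liner with esxcli; the datastore and file name below are assumptions, so adjust them to wherever you uploaded the nested-ESXi tools package:

```shell
# Install the VMware Tools VIB for nested ESXi
# (the path is an assumption -- substitute your datastore and the actual VIB file name)
esxcli software vib install -v /vmfs/volumes/datastore1/esx-tools-for-esxi.vib
```

esxcli will report whether a reboot of the nested host is required once the install completes.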
OK, the first step is to install the ScaleIO OVA. Easy! Just deploy the OVA to each one of your ESXi hosts.
I selected the thin provisioned disk format, since disk space is precious on my home SSD drive.
Since communication with the ScaleIO VSA appliance is done over iSCSI, I created two NICs to be used as iSCSI NICs. Multipathing in vSphere 5.5 is a bit different from what it was in 4.1/5.1: you basically create one vSwitch with one NIC per VMkernel port group in it. In the past you could create one vSwitch with two port groups, and in each port group set one NIC as active and the other as standby (and vice versa). Anyway, that's not how it works in 5.5.
After you are done with the iSCSI port groups, it is time to bind them to the software iSCSI adapter, which is all done via the GUI as shown below. Easy.
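If you prefer the command line, the same port binding can be done with esxcli; vmhba33, vmk1, and vmk2 below are assumptions for the software iSCSI adapter and the two iSCSI VMkernel ports, so check yours first:

```shell
# Find your software iSCSI adapter and VMkernel interface names
esxcli iscsi adapter list
esxcli network ip interface list

# Bind each iSCSI VMkernel port to the software iSCSI adapter
# (vmhba33 / vmk1 / vmk2 are assumptions -- substitute your own names)
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```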
The next stage is to assign an IP address to each ScaleIO OVA:
open the console, log in (root / admin),
and configure the IP address.
You need to do this for all the ScaleIO appliances that you deployed.
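On a generic Linux console the assignment looks roughly like this. This is only a sketch, the addresses are assumptions matching a NAT network, and the appliance may ship its own network-setup script, in which case prefer that:

```shell
# Assign an address to the first NIC and set a default gateway
# (192.168.100.51/24 and 192.168.100.2 are assumptions -- use your NAT network values)
ip addr add 192.168.100.51/24 dev eth0
ip link set eth0 up
ip route add default via 192.168.100.2
```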
After we are done with this, it's time to put our newly created iSCSI initiators to good use and point them at the ScaleIO appliances, since they expose their volumes via iSCSI.
Again, we need to repeat these steps on the other ESXi servers as well.
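On the CLI, pointing the initiator at an appliance is just a send-targets discovery address plus a rescan; the adapter name and target IP below are assumptions:

```shell
# Add a ScaleIO appliance as a dynamic discovery (send targets) address
# (vmhba33 and the IP:port are assumptions -- use your software iSCSI adapter
#  and your appliance's iSCSI IP; repeat for each appliance)
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.100.51:3260

# Rescan the adapter so the exposed volumes show up
esxcli storage core adapter rescan -A vmhba33
```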
The next part is to upload the RPM package from your ESX ScaleIO installation folder:
• Use SFTP to upload the ScaleIO SW package to the ScaleIO appliance.
• Run chmod u+x <package name> to change the package permissions so the package is executable by the user.
• Run ./<package name> as an executable.
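The permission-and-run steps above can be sketched like this; the package name is a stand-in (here a tiny fake script is created just so the sketch is self-contained), so substitute the actual ScaleIO package file you uploaded:

```shell
# Stand-in for the real ScaleIO package name -- substitute the file you uploaded
PKG=demo-scaleio-package.sh

# (illustration only: fake the uploaded package with a tiny script)
printf '#!/bin/sh\necho "extracting files to the ecs folder"\n' > "$PKG"

# Step 1: make the package executable by its owner
chmod u+x "$PKG"

# Step 2: run it as an executable
./"$PKG"
```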
You need to repeat these steps on all the ScaleIO appliances.
Then it's time to run the package itself,
which will extract the files to an ecs folder.
Now, run ./install.py --make_config --vm
When prompted to enter a password encryption string, enter an alphanumeric string.
When the main menu opens, it displays a series of configuration steps. To start, select the MDM Cluster and follow the configuration instructions.
An MDM cluster consists of three components:
◆ Primary MDM
◆ Secondary MDM
◆ Tie-Breaker MDM
Each component must be installed on a different physical server or, in the case of a VMware environment, on a different ScaleIOVM.
Note: When configuring ScaleIO with a single MDM, only a Primary MDM is configured.
1. Select: Configure Primary MDM
2. Define the following parameters:
• IP address
This is the IP address of the server on which the Primary MDM will be installed. With VMware, this is the IP address of the ScaleIOVM holding the Primary MDM.
• Password
This is the password of the root account on the server. Although the default password will always be masked as *****, the password has to be entered at least once for each component (I chose admin).
• Virtual Network Interface Card (NIC)
This is the Network Interface Card that will be used for the virtual IP address. A list of possible NICs will be displayed.
Operating system of the server where the SDS is going to be installed. Select "linux" for Linux or ESX.
You need to repeat this for the Secondary MDM and the Tie-Breaker ("TIE") MDM.