The space reclamation API helps storage partners efficiently reclaim deleted space in coordination with vSphere. It is a garbage-collection process for thin volumes that helps storage arrays reuse deleted space.
ESXi 5.0 Background: VMware introduced a new feature in vSphere 5.0 called “space reclamation” as part of block VAAI thin provisioning. This feature was designed to efficiently reclaim deleted space to meet continuing storage needs. ESXi 5.0 issued UNMAP commands for space reclamation during several operations. However, because of varying or slow response times from storage devices, VMware recommended disabling UNMAP on ESXi 5.0 hosts with thin-provisioned LUNs.
In ESX 5.0 U1, VMware revised the vmkfstools command with a -y option that enables customers to reclaim unused space during a maintenance window, provided that the storage array supports UNMAP for hardware acceleration. Because system performance is less of a concern during a maintenance window, customers could run the vmkfstools command as needed to consolidate storage. Partners with T10-compliant storage arrays that implement the UNMAP command could take advantage of this feature.
vSphere issues the UNMAP command for space reclamation during several operations, such as Storage vMotion and snapshot consolidation. UNMAP is issued to storage devices in critical regions with the expectation that the operation will complete quickly. However, the implementation and response times for this command vary significantly among the sample set of storage arrays from the ecosystem. This variation of response times in critical regions can potentially interfere with other services.
When UNMAP is issued on the free blocks in a volume, it is necessary to ensure that those blocks are not allocated to another file until the UNMAP operation completes. The mechanism used is to allocate these blocks to a temporary file.
In ESX 5.0 U1 and ESX 5.1, the temp file(s) were created at the start of the operation, and an UNMAP was then issued to all the blocks allocated to these temp files. If the temp file(s) took up all the free blocks in the volume, any other operation that required a block allocation would fail. To prevent this scenario, we allowed the user to specify the percentage of free blocks the temp file(s) would use.
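The percentage option can be pictured as a simple split of the volume's free blocks. The sketch below is illustrative only; the function name and block counts are assumptions, not VMFS internals.

```python
# Illustrative sketch (assumptions, not VMFS internals): the ESX 5.0 U1 /
# 5.1 approach pins a user-chosen percentage of the volume's free blocks
# in temp files before issuing UNMAP, leaving the remainder available so
# concurrent allocations do not fail.

def plan_temp_file(free_blocks, reclaim_percent):
    """Split free blocks into a pinned set (the UNMAP target) and headroom."""
    pinned = free_blocks * reclaim_percent // 100
    return pinned, free_blocks - pinned

pinned, headroom = plan_temp_file(free_blocks=10_000, reclaim_percent=60)
print(pinned, headroom)  # 6000 blocks pinned for UNMAP, 4000 left free
```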
• Why did we have to create these large temp files?
Let’s assume that you create a temp file of size 200 MB, assuming a 1 MB VMFS block size; this file would contain 200 blocks. If we issue UNMAP on all 200 blocks and place them back on the free list, the next time you create another 200 MB temp file the same blocks that were just "unmapped" could be allocated to it again. We would therefore make no progress toward issuing UNMAP on all the unused blocks in the volume. So it was necessary to allocate all the free blocks in one instance and issue UNMAP on all of them.
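This stall can be demonstrated with a small simulation. The code below is a sketch, not VMFS code; the block counts and the head-of-list allocation policy are assumptions chosen only to mirror the behavior described above.

```python
# Illustrative sketch: why reclaiming in small batches stalls when
# reclaimed blocks go straight back onto the free list.

def allocate(free_list, count):
    """Allocate 'count' blocks from the head of the free list."""
    return free_list[:count], free_list[count:]

free = list(range(1000))      # 1000 free VMFS blocks
unmapped = set()

# Batched approach: allocate 200 blocks to a temp file, UNMAP them,
# then delete the temp file (its blocks return to the free list).
for _ in range(5):
    batch, free = allocate(free, 200)
    unmapped.update(batch)    # UNMAP issued on this batch
    free = batch + free       # temp file deleted; blocks freed again

# Five batches of 200 could have covered all 1000 blocks, but because
# freed blocks are handed out again first, only the same 200 blocks
# were ever touched.
print(len(unmapped))          # 200, not 1000
```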
• What is being changed in ESX 5.5?
We have introduced new kernel interfaces in ESX 5.5 that allow the user to ask for blocks beyond a user-specified block address in the file system (see below for the detailed implementation). Using these interfaces, we can ensure that the blocks allocated to a file were never previously allocated to that file.
Therefore, we can create temp files of any size, issue UNMAP only on the blocks allocated to each file, and still be sure that UNMAP is eventually issued on all the free blocks in the volume.
The user options have changed accordingly. The user now specifies only the size of the temp file to be created; with any specified size, we can issue UNMAPs on all the free blocks in a volume.
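The strategy above can be sketched as a loop over a moving high-water mark. This is an illustrative model only; the function names, data structures, and marker logic are assumptions, not the actual kernel interface.

```python
# Illustrative sketch of the ESX 5.5 strategy: allocate only blocks at
# addresses beyond a moving high-water mark, so a fixed-size temp file
# still visits every free block exactly once.

def allocate_beyond(free_blocks, marker, count):
    """Allocate up to 'count' free blocks with addresses > marker."""
    eligible = sorted(b for b in free_blocks if b > marker)
    return eligible[:count]

free = set(range(1000))       # 1000 free VMFS blocks
unmapped = set()
marker = -1                   # highest block address already processed
reclaim_unit = 200            # user-specified temp-file size, in blocks

while True:
    batch = allocate_beyond(free, marker, reclaim_unit)
    if not batch:
        break                 # no free blocks beyond the marker: done
    unmapped.update(batch)    # UNMAP issued on the temp file's blocks
    marker = max(batch)       # never hand these addresses out again
    # temp file deleted; its blocks stay free but sit behind the marker

print(len(unmapped))          # all 1000 free blocks reclaimed
```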
We are integrating this operation into the esxcli command framework. The vmkfstools -y option is deprecated for external use.
We have also enhanced the UNMAP command implementation to support multiple block descriptors (in ESX 5.1 we issued only one block descriptor per UNMAP command). We now issue up to 100 block descriptors, depending on the storage target capabilities specified in the Block Limits VPD (B0) page.
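Descriptor batching amounts to chunking a run of extents into commands of at most the per-command cap. The sketch below is illustrative; the cap value, the `(lba, length)` extent representation, and the function name are assumptions, not the actual SCSI-layer code.

```python
# Illustrative sketch of batching UNMAP block descriptors: group a run
# of (lba, length) extents into commands of at most 100 descriptors,
# as permitted by the target's Block Limits VPD page.

MAX_DESCRIPTORS = 100  # assumed per-command cap reported by the target

def build_unmap_commands(extents):
    """Group extents into UNMAP commands of up to MAX_DESCRIPTORS each."""
    return [extents[i:i + MAX_DESCRIPTORS]
            for i in range(0, len(extents), MAX_DESCRIPTORS)]

# 250 one-block extents -> 3 commands (100 + 100 + 50 descriptors)
extents = [(lba, 1) for lba in range(250)]
commands = build_unmap_commands(extents)
print([len(c) for c in commands])  # [100, 100, 50]
```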
ESXCLI has a new command, “unmap”, under the “esxcli storage vmfs” namespace, which users can use to unmap the free blocks of a VMFS volume given its UUID or label.
esxcli storage vmfs unmap <--volume-label=<str>|--volume-uuid=<str>> [--reclaim-unit=<long>]
-n|--reclaim-unit=<long> : Number of VMFS blocks that should be unmapped per iteration.
-l|--volume-label=<str> : The label of the VMFS volume whose free blocks should be unmapped.
-u|--volume-uuid=<str> : The UUID of the VMFS volume whose free blocks should be unmapped.
Sample uses are:
# esxcli storage vmfs unmap --volume-label datastore1 --reclaim-unit 100
# esxcli storage vmfs unmap -l datastore1 -n 100
# esxcli storage vmfs unmap --volume-uuid 515615fb-1e65c01d-b40f-001d096dbf97 --reclaim-unit 500
# esxcli storage vmfs unmap -u 515615fb-1e65c01d-b40f-001d096dbf97 -n 500
# esxcli storage vmfs unmap -l datastore1
# esxcli storage vmfs unmap -u 515615fb-1e65c01d-b40f-001d096dbf97
Frequently Asked Questions
Will this implementation still issue UNMAP to non-mapped LBAs?
This implementation will still issue UNMAP to all blocks that the VMFS volume considers "free". That includes any blocks that were never written to and hence would be non-mapped at the storage back end.
What happens to the current recommendation of running UNMAP in a maintenance window?
With ESXi 5.5, we will recommend that customers use UNMAP during normal ESXi operation. There is no longer a performance-related restriction limiting this operation to a maintenance window.