Archive

Posts Tagged ‘Storage’

EMC StorReclaim: VMAX thin pool reclaim for Windows

June 3rd, 2011

Since I started at EMC just over two months ago I've been spending a lot of time getting to grips with the large range of products in EMC's portfolio.  One key product I've been lucky enough to spend some time learning about is VMAX / Symmetrix, a product range I knew a little about but had never used as a customer or had the chance to deep dive into technically. Luckily for me I got that chance, with three days of VMAX training alongside some of the Symm engineering team from the US.

During this VMAX training some of my more "Symmetrix savvy" colleagues (David Robertson and Cody Hosterman) were telling us about something called EMC StorReclaim. At the time I couldn't say anything about it as it wasn't due for unveiling until EMC World, but I did take notes with the aim of following up afterwards.  I only found those notes today, hence the delay folks!

First things first, this is an EMC internal tool for EMC Global Services usage. I am publishing this purely to bring it to your attention.  If you feel you have a need for this EMC product then please speak to your EMC rep / TC for more details.

So what is EMC StorReclaim? I could explain it myself but this extract from the release document explains it perfectly.

StorReclaim is a Windows command line utility designed to free allocated but unused storage space as part of EMC’s Virtual Provisioning solution.

StorReclaim determines exactly where the allocated but unused space is and then passes that information on to the Symmetrix for space reclamation. Once the storage capacity is reclaimed, it is put back into the device storage pool for consumption. The process is performed in real time and does not require any running application to be shut down.

So what does it support / not support and what can I run it on?

Host operating system requirements
StorReclaim is fully supported on the following operating systems with various Service Packs.

◆ Windows Server 2003 x86/x64
◆ Windows Server 2008 x86/x64
◆ Windows Server 2008 R2
◆ Windows Server 2008 R2 with SP1

StorReclaim also supports Windows Guest OS running on:

◆ Hyper-V Server 2008 or 2008 R2
◆ VMware ESX Server 3.5 or 4.0
◆ VMware vSphere client 4.1.1

Note: For VMware ESX, the physical disks in a virtual environment must be attached to virtual machines using RDM (Raw Device Mapping). For Microsoft Hyper-V, the physical disks must be configured as pass-through disks.

File system requirements
Microsoft Windows NTFS
MBR & GPT
Basic Disks
Dynamic disks: concatenated and mirrored (striped and RAID 5 dynamic disks are excluded)

Logical volume manager requirements
Microsoft Windows LDM

Storage environment support
StorReclaim supports Symmetrix arrays running Enginuity 5875 and higher.

Supported with EMC Clone and SNAP, but storage is only de-allocated on the source device

Just to clarify one of the points around virtualisation support: this tool supports both physical and virtual Windows server workloads.  The key point is that the virtual machine in question must have RDM-attached disks served from a thin pool.

One other key point worth mentioning is that this tool does not require you to install Solutions Enabler or any other EMC host-based software.  The tool works via a Windows filter driver and sends SCSI UNMAP commands directly to the VMAX array in order to return the blocks to the thin pool.
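StorReclaim itself is EMC-internal so I can't share its syntax, but the host-side concept is easy to illustrate. Here is a quick PowerShell sketch (nothing to do with the actual tool, purely illustrative) that reports how much NTFS free space each local volume is sitting on; on a thin VMAX device that free space is exactly the sort of allocated-but-unused capacity a reclaim run would hand back to the pool.

    # Purely illustrative - not part of StorReclaim. Reports NTFS free space per
    # local volume, i.e. the host-side view of capacity that could be reclaimed
    # from a thin pool.
    Get-WmiObject Win32_Volume -Filter "DriveType=3 AND FileSystem='NTFS'" |
        Select-Object DriveLetter, Label,
            @{N='CapacityGB'; E={[math]::Round($_.Capacity / 1GB, 1)}},
            @{N='FreeGB';     E={[math]::Round($_.FreeSpace / 1GB, 1)}},
            @{N='FreePct';    E={[math]::Round(($_.FreeSpace / $_.Capacity) * 100)}} |
        Format-Table -AutoSize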

As I mentioned earlier, if you want access to this tool you will need to obtain it from EMC Global Services.  I hope that it will be released as a customer tool at some point in the future; that decision may well be based on demand, so please ask if it is something you could use.

EMC, New Products

What's new in vSphere 4.1 Storage

September 2nd, 2010

So I haven't done a lot of real-time blogging at VMworld this year as I've been busy trying to see and soak up as much as possible.  It's not every day that you get access to the likes of Chad Sakac (VP of the EMC / VMware alliance), Scott Drummond (EMC, formerly of the VMware performance team) and a whole host of other technology movers and shakers. As you can imagine I took full advantage of these opportunities and blogging became a bit of a secondary activity this week.

However, I've now had time to reflect, and one of the most interesting areas I covered this week was the new storage features in vSphere 4.1. I had the chance to cover these in multiple sessions, see various demos and talk about them with the VMware developers and engineers responsible. There are two main features I want to cover in depth as I feel they are important indicators of the direction that storage for VMware is heading.

SIOC – Storage I/O Control

SIOC has been in the pipeline since VMworld 2009; I previously wrote an article on it called VMware DRS for Storage, which was slightly presumptuous of me at the time, but I was only slightly off the mark. For those of you who are not aware of SIOC, to sum it up at a very high level let's start with the following statement from VMware themselves.

SIOC provides a dynamic control mechanism for proportional allocation of shared storage resources to VMs running on multiple hosts

Though you have always been able to assign disk shares to VMs on an ESX host, this only applied within that host; it was incapable of taking account of the I/O behaviour of VMs on other hosts. Storage I/O Control is different in that it is enabled on the datastore object itself, and disk shares can then be assigned per VM inside that datastore. When a pre-defined latency level is exceeded on the datastore, SIOC begins to throttle I/O based on the shares assigned to each VM.

How does it do this? What is happening in the background? Well, SIOC is aware of the storage array device-level queue slots as well as the latency of workloads.  During periods of contention it decides how best to keep machines below the predefined latency tolerance by manipulating the I/O queues of every ESX host that uses that datastore.

In the example below you can see that, based on disk share values, all VMs should ideally be making the same demands on the storage array device-level queue slots.  Without SIOC enabled that does not happen. With SIOC enabled it throttles back the second ESX host's use of its I/O queue from 24 slots to 12 slots, thus equalising the I/O across the hosts.
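To make the queue-slot arithmetic a little more concrete, here's a trivial PowerShell sketch of the proportional idea (purely illustrative figures, and certainly not how ESX implements SIOC internally): during contention each host's window into the device queue is sized according to the aggregate disk shares of its VMs on that datastore.

    # Illustrative only - not VMware's implementation. Divide the device-level
    # queue slots between hosts in proportion to their aggregate disk shares.
    $availableSlots = 24                                   # hypothetical slot budget during contention
    $hostShares     = @{ 'esx01' = 1000; 'esx02' = 1000 }  # hypothetical per-host share totals

    $totalShares = ($hostShares.Values | Measure-Object -Sum).Sum
    foreach ($esxHost in ($hostShares.Keys | Sort-Object)) {
        $slots = [math]::Round($availableSlots * ($hostShares[$esxHost] / $totalShares))
        "{0}: {1} of {2} queue slots" -f $esxHost, $slots, $availableSlots
    }
    # Equal share totals give each host 12 slots, so the second host is pulled
    # back from dominating the queue, as in the example above.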

Paul Manning (Storage Architect, VMware product marketing) indicated during his session that there is a benefit to turning SIOC on even without amending the default share values.  This configuration immediately introduces an element of I/O fairness across a datastore, as described above and shown below.

[Image: SIOC_Fairness_Full – device queue usage with and without SIOC enabled]

So this functionality is now available in vSphere 4.1 for Enterprise Plus licence holders only.  There are a few immediate caveats to be aware of: it is only supported with block-level storage (FC or iSCSI), so NFS datastores are not supported. It also does not support RDMs or datastores constructed of extents; only a 1:1 LUN-to-datastore mapping is supported. I was told that extents can cause issues with how the latency and throughput values are calculated, which could in turn lead to false positive I/O throttling; as a result they are not supported yet.

It's a powerful feature which I really like the look of. I personally worry about I/O contention and the lack of control I have over what happens to those important mission-critical VMs when that scenario occurs. The "noisy neighbour" element can be dealt with at the CPU and memory level with shares, but until now you couldn't do the same at the storage level. I have previously resorted to purchasing EMC PowerPath/VE to double the downstream I/O available from each host and thus reduce the chances of contention.  I may just rethink that one in future because of SIOC!
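For those who like to script these things, enabling SIOC and weighting an important VM is only a couple of lines of PowerCLI. A word of caution: the -StorageIOControlEnabled parameter on Set-Datastore appeared in PowerCLI releases after the 4.1 launch, and the datastore and VM names below are made up for the example, so treat this as a hedged sketch rather than a recipe.

    # Hedged PowerCLI sketch - parameter names are from later PowerCLI releases
    # and the datastore / VM names are hypothetical.

    # Enable Storage I/O Control on a block datastore.
    Get-Datastore -Name 'Prod-FC-DS01' |
        Set-Datastore -StorageIOControlEnabled $true

    # Give a mission-critical VM more disk shares than its noisy neighbours
    # ('High' equates to 2000 shares per disk).
    $vm = Get-VM -Name 'SQL-Prod-01'
    Get-VMResourceConfiguration -VM $vm |
        Set-VMResourceConfiguration -Disk (Get-HardDisk -VM $vm) -DiskSharesLevel High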

Further detailed information can be found in the following VMware technical documents

SIOC – Technical Overview and Deployment Considerations

Managing Performance Variance of applications using SIOC

VMware performance engineering – SIOC Performance Study

VAAI – vStorage API for Array Integration

Shortly before the vSphere 4.1 announcement I listened to an EMC webcast run by Chad Sakac.  In this webcast he described EMC's integration with the new vStorage APIs, specifically around offloading tasks to the array. So what does all this mean? What exactly is being offloaded?

[Image: VAAI_Features – overview of the VAAI offload primitives in vSphere 4.1]
So what do these features enable? Let’s take a look at them one by one.

Hardware assisted locking, as described above, provides improved LUN metadata locking.  This is very important for increasing VM-to-datastore density.  If we use the example of VDI boot storms: if only the blocks relevant to the VM being powered on are locked, then you can have more VMs starting per datastore.  The same applies in a dynamic VDI environment where images are being cloned and then spun up; the impact of busy cloning periods (i.e. first thing in the morning) is mitigated.

The full copy feature would also have an impact in the dynamic VDI space, with cloning of machines taking a fraction of the time because the ESX host is not involved. What I mean by that is that when a clone is taken without VAAI, the data has to be copied up to the ESX server and then pushed back down to the new VM storage location.  The same occurs when you do a Storage vMotion; doing it without VAAI takes up valuable I/O bandwidth and ESX CPU clock cycles. Offloading this to the array prevents this use of host resources and in tests has resulted in a saving of 99% on I/O traffic and 50% on CPU load.
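You can see whether these offloads are switched on at the host level, as each primitive is surfaced as an advanced setting on ESX 4.1. The PowerCLI sketch below is hedged: Get-AdvancedSetting comes from later PowerCLI releases (older builds exposed the same settings through the host advanced configuration cmdlets) and the host name is hypothetical.

    # Hedged sketch: list the VAAI offload settings on a host (1 = enabled).
    $esxHost  = Get-VMHost -Name 'esx01.lab.local'   # hypothetical host
    $settings = @(
        'DataMover.HardwareAcceleratedMove'          # full copy offload
        'VMFS3.HardwareAcceleratedLocking'           # hardware assisted locking (ATS)
    )

    foreach ($name in $settings) {
        Get-AdvancedSetting -Entity $esxHost -Name $name | Select-Object Name, Value
    }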

In the EMC labs a test of Storage vMotion was carried out with VAAI turned off; it took 2 minutes 21 seconds.  The same test was tried again with VAAI enabled, and this time the Storage vMotion took 27 seconds to complete. That is a 5x improvement, and EMC have indicated that they have seen a 10x improvement in some cases. Check out this great video which shows a Storage vMotion and the impact on ESX and the underlying array.

There is also a 4th VAAI feature which has been left in the vStorage API but is currently unavailable; Mike Laverick wrote about it here. It's a thin provisioning API, and Chad Sakac explained during the group session that its main use is for thin-on-thin storage scenarios. The vStorage API will in future give vCenter insight into array-level over-provisioning as well as VMware-level over-provisioning.  It will also be used to proactively stun VMs when space runs out, as opposed to letting them crash as currently happens.

As far as I knew, EMC was the only storage vendor offering array compatibility with VAAI. Chad indicated that they are already working on VAAI v2, looking to add additional hardware offload support as well as NFS support. It would appear that 3PAR offer support, so that kind of means HP do too, right? Vaughn Stewart over at NetApp also blogged about their upcoming support of VAAI; I'm sure all storage vendors will be rushing to make use of this functionality.

Further detailed information can be found at the following locations.

What does VAAI mean to you? – Chad Sakac EMC

EMC VAAI webcast – Chad Sakac EMC

Storage DRS – the future

If you've made it this far through the blog post then the fact we are talking about Storage DRS should come as no great surprise.  We've talked about managing I/O performance through disk latency monitoring, and about array-offloaded features such as Storage vMotion and hardware assisted locking. These features in unison make Storage DRS an achievable reality.

SIOC brings the ability to measure VM latency, giving a set of metrics that can be used for Storage DRS.  VMware are planning to add a capacity metric to the Storage DRS algorithm and then aggregate the two metrics for placement decisions.  This will ensure a Storage vMotion of an underperforming VM does not lead to capacity issues, and vice versa.
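Purely as an illustration of why that aggregation matters (this is my own sketch, not VMware's algorithm), you could imagine ranking candidate datastores on a blend of observed latency and free capacity, so that a move made for performance reasons can never land a VM on a datastore that is nearly full.

    # Illustrative only - hypothetical figures and a made-up scoring formula.
    $datastores = @(
        [pscustomobject]@{ Name = 'Gold-DS01'; LatencyMs = 28; FreePct = 55 }
        [pscustomobject]@{ Name = 'Gold-DS02'; LatencyMs = 12; FreePct = 15 }
        [pscustomobject]@{ Name = 'Gold-DS03'; LatencyMs = 9;  FreePct = 40 }
    )

    $datastores |
        Select-Object Name, LatencyMs, FreePct,
            @{N='Score'; E={ [math]::Round(($_.FreePct / 100) * (30 / [math]::Max($_.LatencyMs, 1)), 2) }} |
        Sort-Object Score -Descending   # best placement candidate first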

Hardware assisted locking in VAAI means we don't have to be as concerned about the number of VMs in a datastore, something you have to manage manually at the moment.  This removal of a limitation means we can automate better; a Storage DRS enabler, if you will.

Improved Storage vMotion response due to VAAI hardware offloading means that the impact of Storage DRS is minimised at the host level. This is one less thing for the VMware administrator to worry about and hence smooths the path for Storage DRS adoption.  As you may have seen in the Storage vMotion video above, the overhead on the backend array also appears to have been reduced, so you're not just shifting the problem somewhere else.

For more information I suggest checking out the following (VMworld 2010 account needed)

TA7805 – Tech Preview – Storage DRS

Summary

There is so much content to take in across all three of these subjects that I feel I have merely scratched the surface.  What was abundantly clear from the meetings and sessions I attended at VMworld is that VMware and EMC are working closely to bring us easy storage tiering at the VMware level.  Storage DRS will be used to create graded / tiered data pools at the vCenter level, pools of similar datastores (RAID level, disk type). Virtual machines will be created in these pools, auto-placed and then moved around within that pool of datastores to maintain capacity and performance.

In my opinion it's an exciting technology, one I think simplifies life for the VMware administrator but complicates life for the VMware designer. It's another performance variable to concern yourself with, and as I heard someone in the VMworld labs comment, "it's a loaded shotgun for those that don't know what they're doing".  Myself, I'd be happy to use it now that I have taken the time to understand it; hopefully this post has made it a little clearer for you too.

Gestalt-IT, Storage, VMware, vmworld

Virsto One, Hyper-V and the I/O Blender effect

February 24th, 2010

One of the things I've come to love about blogging is the fact that I occasionally get contacted by the odd tech start-up, keen to demonstrate their latest market-leading idea that is going to revolutionise the industry as we know it.  Earlier this month I was contacted by Mindy Anderson, product director at tech start-up Virsto (short for Virtual Storage). Virsto had a new product for Microsoft Hyper-V that they wanted to demonstrate to me in advance of their big product launch. Having looked at Mindy's background in the storage industry, I was very keen to hear more about their new product.

The product is called Virsto One and is aimed solely at Windows 2008 R2 Hyper-V. It introduces some new features, like thin provisioned clones and snapshots, that expand the functionality of the standard Hyper-V product. The most interesting feature in my opinion, though, is the attempt to tackle the virtualisation / storage problem commonly known as the I/O blender effect.

So what does Virsto One look like?

The software itself installs in the parent partition of each Hyper-V host and consists of a filter driver, a system service and a VSS provider.  The filter driver sits above the raw storage (any block storage) and presents a VHD object to the parent partition.  This setup allows users to configure physical storage once and then use Virsto One to carry out all future provisioning tasks. This includes full support for creating thin provisioned, high-performing, cluster-aware snapshots and clones from either the Virsto One Hyper-V MMC snap-in or PowerShell.

[Image: Virsto_1 – Virsto One components in the Hyper-V parent partition]

So what about the I/O blender effect?

Most storage technologies are not designed for the virtual data centre; most are still designed around the one-to-one physical server to storage model. Think of a number of virtual machines, each with predictable I/O behaviour if you think of them as physical.  What tends to come out of the physical hypervisor host is a large amount of completely random I/O.  Random I/O has an obvious performance impact when compared with sequential I/O, so as you increase the number of VMs you increase the random I/O from your Hyper-V host.  So as VM density increases, performance drops, and as we all know low VM density is not your objective when you embark on a virtualisation project.

So Virsto One has an interesting way of dealing with this. Although the "secret sauce" has never been divulged in depth, in its basic form they journal the random I/O that comes down from the Hyper-V host to a staging disk.  A staging area is required per physical Hyper-V host, and about 20GB / 30GB of disk should support multi-terabyte write-downs through the use of de-dupe technology. Periodically the data in the staging disks is flushed / written down to the primary storage location; at this point the random I/O is laid down sequentially on primary storage to improve read performance. Virsto indicated that in time they would look to support multiple de-stages, so that data could be de-staged to another array for business continuity purposes or to the cloud for disaster recovery purposes.
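As a purely conceptual sketch (Virsto haven't published their internals, so this is just my mental model of the idea): writes land in the journal in whatever order they arrive, which keeps the staging disk sequential, and the periodic flush sorts them by block address before writing them down to primary storage.

    # Conceptual sketch of journal-then-destage, not Virsto's actual code.
    $incomingWrites = 812, 14, 507, 93, 650, 2, 488      # random LBAs from the hypervisor (hypothetical)

    $journal = @()
    foreach ($lba in $incomingWrites) {
        $journal += $lba                                 # appended sequentially to the staging disk
    }

    $destageOrder = $journal | Sort-Object               # periodic flush, laid down in LBA order
    "Journal (arrival) order : $($journal -join ', ')"
    "De-stage (LBA) order    : $($destageOrder -join ', ')"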

[Image: Virsto_2 – random I/O journaled to the staging disk and de-staged sequentially to primary storage]
Are there any performance figures to back this up?

Performance figures from the Virsto test lab show the I/O blender effect in full force as VM density increases in the standard Microsoft setup.  With the Virsto software sitting in the middle, staging the data and de-staging it sequentially, there is an obvious improvement in performance.  These test results were from Virsto's own lab, and I stressed the importance of having them independently benchmarked by customers or an external consultancy.  Mindy indicated to me that this was something they were looking into; I look forward to reading and sharing the whitepaper when it is eventually produced.

[Image: Virsto_Graph – Virsto lab results showing I/O performance as VM density increases]

So who would be interested in a product like this?

Well, ideally the product would benefit Hyper-V customers who require high-density, high-performing virtual environments.  Hosting companies using Hyper-V to sell virtual server instances may well see Virsto as a good way of increasing performance and reducing costs through the use of golden images, snapshots, etc.  Who knows, though; individual companies with an investment in Hyper-V may well see the benefit in this kind of product.  In a way I see it as not too dissimilar to a company buying PowerPath/VE to increase I/O performance in a vSphere environment.

It is important to note that although this product has been initially built for Microsoft Hyper-V, the principles behind it are hypervisor agnostic.  I asked the question "why Hyper-V?" at the start of my chat with Virsto; the answer was that Hyper-V had functionality gaps and was easier to integrate into.  VMware, on the other hand, is a more mature product where VMFS has gone some way towards dealing with the traditional virtualisation storage problems.  Citrix virtualisation customers will be happy to hear that testing has already begun in the lab with a version of Virsto One for XenServer, ETA unknown at this stage.

So how much does all this cost?

At the time of the interview, which was a good few weeks back, the price being talked about was $1,000 – $2,000 USD per socket, again not too dissimilar to the pricing for EMC PowerPath/VE.

Conclusion?

My impression at the time of the demo and interview was that this is an interesting product and a very clever idea. The main selling point for me was the increase in performance; if it can be independently verified, you would think the product will simply sell itself.  I look forward to hearing more about Virsto in the future, and I am particularly interested to see what they can do for other hypervisors, especially VMware vSphere with its new storage APIs.

Hyper-V, New Products, Storage

vSphere 4.0 – What’s new in vSphere Storage

May 17th, 2009

This weekend I finally had the chance to catch up on some of the new storage features released as part of vSphere 4.0.  There are quite a few changes to cover, some of them quite exciting.

VMFS Upgrade

One of the good pieces of news to come out is that the VMFS changes in vSphere are minimal.  vSphere 4.0 introduces a minor point release (3.3.0 to 3.3.1) with some subtle changes, so much so that it's not really been documented anywhere.  Most of the changes with VMFS are actually delivered within the VMFS driver at the VMkernel level; this is where most of the I/O improvements and features such as thin provisioning have been delivered as part of vSphere.

Upgrading VMFS was a major step in the move from VMFS 2 to VMFS 3, so it's good to hear that there are no major drivers to upgrade VMFS as part of your vSphere upgrade.  Any new VMFS datastores created with the new vSphere hosts will of course be VMFS 3.3.1; however, this is backwards compatible with earlier versions of ESX 3.x.  If you really want to move onto the new version of VMFS, format some new datastores and use Storage vMotion to move your VMs onto the new VMFS 3.3.1 datastores.

Thin Provisioning

Thin provisioning is one of the areas that excites me most about the new vSphere release.  I conducted a very quick survey of my employer's development and system test ESX environments recently and found that we were only utilising 48% of the virtual storage that had been provisioned.  It's easy to see where immediate savings can be made simply by implementing vSphere and thin provisioning.  I'll be using that in the cost benefit case for sure!
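If you want to run the same quick survey against your own environment, a couple of lines of PowerCLI will do it. Hedged as usual: the ProvisionedSpaceGB and UsedSpaceGB properties appeared in later PowerCLI releases, so check what your version exposes before relying on it.

    # Hedged PowerCLI sketch: overall provisioned vs used virtual storage.
    $vms         = Get-VM
    $provisioned = ($vms | Measure-Object -Property ProvisionedSpaceGB -Sum).Sum
    $used        = ($vms | Measure-Object -Property UsedSpaceGB -Sum).Sum

    "Provisioned: {0:N0} GB  Used: {1:N0} GB  Utilisation: {2:P0}" -f $provisioned, $used, ($used / $provisioned)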

Thin provisioning is nothing new; it has been available at the array level for a while now, so one of the big questions is: where should I thin provision?  Well, that really depends on what kind of environment you have, I suppose.  Smaller customers will benefit greatly from VMware thin provisioning as they probably don't own arrays capable of TP.  Bigger companies, on the other hand, might well benefit from doing both, as they have the skill sets and the equipment to fully utilise it at both levels.

Chad Sakac has written a superb article entitled "Thin on thin: where should you do thin provisioning, vSphere 4.0 or array level?" which goes deep into the new thin provisioning features and the discussion around the best approach. I strongly suggest people give it a read; it explains pretty much all you need to know.

Storage VMotion

Storage vMotion in ESX 3.5 had a few limitations which vSphere addresses.  It's now fully integrated with vCenter, as opposed to being command-line based in the previous version, and it allows for moving a VM between different storage types, i.e. FC, iSCSI or NFS.  One excellent use of Storage vMotion is the ability to migrate your thick VMs and convert them to thin.  Perfect for reclaiming disk space and increasing utilisation without downtime, brilliant!
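Once you're on vSphere, this thick-to-thin conversion can be driven from PowerCLI as well as from the vCenter wizard. Hedged sketch: the -DiskStorageFormat parameter on Move-VM arrived in later PowerCLI releases, and the VM and datastore names below are made up.

    # Hedged sketch: Storage vMotion a VM and convert its disks to thin on the way.
    Get-VM -Name 'FileServer-01' |
        Move-VM -Datastore (Get-Datastore -Name 'ThinPool-DS01') -DiskStorageFormat Thin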
 
Storage vMotion has also been enhanced from an operational perspective. Previously, Storage vMotion involved taking a snapshot of a disk, copying the parent disk to its new location, then taking the child snapshot and re-parenting it with the parent.  This process required 2x the CPU and memory of the VM being migrated in order to ensure zero downtime.  In vSphere 4.0 Storage vMotion uses changed block tracking and a process very similar to how vMotion deals with moving active memory between hosts.  The new Storage vMotion conducts an iterative process, scanning which blocks have changed; each iterative scan should result in smaller and smaller increments, and when it gets down to a small enough size it conducts a very quick suspend / resume operation, as opposed to using the doubling-up-of-resources method it previously needed.  This makes it faster and more efficient than its previous incarnation.

Para Virtualised SCSI

Para-virtualised SCSI (PVSCSI) is a new driver for I/O intensive virtual machines. VMware compare this to the vmxnet adapter, which is an enhanced and optimised network driver providing higher performance.  PVSCSI is similar: it's a specific driver that offers higher I/O throughput, lower latency and lower CPU utilisation within virtual machines. Figures discussed by Paul Manning on the recent VMware community podcast included a 92% increase in IOPS throughput and a 40% decrease in latency when compared to the standard LSI / BusLogic virtual driver.

A caveat of this technology is that the guest OS still has to boot from a non-PVSCSI adapter (LSI / BusLogic); you would look to add a PVSCSI adapter for your additional data virtual disks.  Currently only Windows 2003, Windows 2008 and RHEL 5 have the software drivers to take advantage of this new adapter.
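Adding the adapter for a data disk is straightforward from PowerCLI too. A hedged sketch: New-ScsiController is from later PowerCLI releases, the VM and disk names are hypothetical, and you would normally do this with the VM powered off.

    # Hedged sketch: move a data disk onto a new paravirtual SCSI controller.
    # The boot disk stays on the default LSI adapter, per the caveat above.
    $dataDisk = Get-HardDisk -VM 'SQL-Prod-01' -Name 'Hard disk 2'
    New-ScsiController -HardDisk $dataDisk -Type ParaVirtual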

Update: Chad Sakac has posted a new EMC World I/O performance comparison of the vSphere PVSCSI adapter vs the LSI SCSI adapter; check out the link for more details.

VMware Storage Book

Paul Manning mentioned on the recent podcast that VMware are planning a book dedicated to virtualisation and storage, in an attempt to consolidate the amount of documentation out there on storage configuration and best practice.  Currently users need to look through 600 pages of the SAN Config Guide plus vendor guidelines; VMware hope to boil this down to a much more manageable 100 – 150 pages.

If you can't wait that long, Chad Sakac has written the storage chapter in Scott Lowe's new vSphere book, which I believe is available for pre-order on Amazon.

vSphere Storage WhitePaper

Paul Manning, who I've mentioned in this blog post, has written a great 10-page white paper explaining all of these features in more detail, along with some of the more experimental features I haven't mentioned.

http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereStorage_P10_R1.pdf

Gestalt-IT, New Products, Storage, VMware, vSphere

Implementing VMware with EMC Storage – Techbooks

January 4th, 2009

Later this year I'll be looking at moving our VMware environment across to an EMC CX4 system. I saw these posted somewhere before but could not remember who did it, so I had to search for them all over again.  I'm personally interested in the CLARiiON document, but the following links may come in handy for anyone else using VMware on the Symmetrix or Celerra ranges.

These were all found using EMC’s documentation search which I will be adding to my favourites for future use.

TechBook: VMware ESX Server Using EMC Celerra Storage Systems Solutions Guide
TechBook: VMware ESX Server Using EMC CLARiiON Storage Systems Solutions Guide

TechBook: VMware ESX Server using EMC Symmetrix Storage Systems Solutions Guide

For my own personal reference, I found some interesting documents on using EMC and QLogic HBAs in both a Linux (ESX) and Windows environment.  These will come in handy as we use HP-based QLogic HBAs and EMC storage; maybe someone else will find them useful.

** Please note that these documents appear to relate to a specific driver; please check before using the content within **

EMC Fibre Channel and ISCSI with QLogic Host Bus Adapters in the Windows Environment
EMC Fibre Channel with QLogic Host Bus Adapters for the Linux Environment 

General

Compellent – PowerShell Storage Automation

December 15th, 2008

I was reading through my RSS feeds this morning while standing in line for a coffee (I love the iPhone) and came across Eric Sloof's article on Compellent.

Compellent have introduced a PowerShell-based command set to complement their storage offering, now providing the ability to script a lot of the everyday CLI-based tasks in a format that is becoming more and more popular.  For me the main benefit is from an integration angle; linking Windows and storage tasks together is the kind of thing I can see becoming handy for BCP and DR scripting.

I had a small smile on my face when reading Eric's article, and I will tell you why: I've recently been looking at Compellent for a solution I was building.  Compellent really impressed me with their offering, SAN simplified as far as I was concerned.  The fact that they started from scratch four years ago and heavily utilise software management means that they're not limited in the same way as some of the other legacy-based vendors.  We didn't end up choosing Compellent for the solution, however the choice was not based purely on the technical angle.  Compellent are still quite small in the UK but do seem to be making serious inroads into the market.  The awards they picked up this year are very impressive and I will be keeping an eye on their growth in the future.

The introduction of PowerShell to their offering shows, I think, what a good, dynamic company they are. I'd be interested in any comments anyone has on working with Compellent or their solutions.

New Products, Storage, VI Toolkit / Powershell