Archive for the ‘vSphere’ Category

XenDesktop 5, vSphere 4.1 and VNX Reference Architecture

May 23rd, 2011

A common theme that I see coming up time and time again with customers is VDI using Citrix XenDesktop and vSphere 4.1.  Its popularity generally stems from the previous success companies have had with the more traditional Citrix products such as Presentation Server / XenApp.  I know when I was looking at VDI solutions I was very much in favour of Citrix due to one thing: the ICA protocol.  It works, and it works well over long distances; in a lot of companies it has proven itself over a long period of time, and it is a protocol they trust to deliver.

Following a customer meeting recently I was desperately searching for an EMC reference architecture (RA) for a XenDesktop / vSphere deployment.  At the time it turned out we didn’t have a completed one; we did, however, have one in draft format that was going through the final stages of review.  That RA has now been completed and released for public consumption, and an overview of the document’s purpose is below.

The purpose of this reference architecture is to build and demonstrate the functionality, performance and scalability of virtual desktops enabled by the EMC VNX series, VMware vSphere 4.1 and Citrix XenDesktop 5.  This solution is built on Machine Creation Services (MCS) in XenDesktop 5 and a VNX5300 platform with multiprotocol support, which enabled FC block-based storage for the VMware vStorage Virtual Machine File System (VMFS) and CIFS-based storage for user data.

The RA covers the technologies listed below and details why the VNX array with FAST Cache enabled is a perfect match for your Citrix VDI deployment.  One other interesting area discussed is the use of Citrix Machine Creation Services (MCS), which is a new feature in XenDesktop 5 and provides an alternative to Citrix Provisioning Server (PVS).  For those new to MCS I suggest you have a read through the following Citrix blog post, as there are some additional design considerations around IOPS that need to be taken into account.

  • Citrix XenDesktop 5
  • Microsoft Windows 7 Enterprise (32-bit)
  • Citrix Machine Creation Services (MCS)
  • VMware vSphere 4.1
  • EMC VNX 5300 – (FAST Cache & VAAI enabled)
  • EMC Virtual Storage Integrator (VSI) – free on EMC Powerlink

If you are considering XenDesktop 5 and vSphere 4.1 then I suggest you download and have a read through the RA linked below.

EMC Infrastructure for Virtual Desktops enabled by EMC VNX Series (FC), VMware vSphere 4.1 and Citrix XenDesktop 5

Citrix, EMC, vSphere

Virtual Storage Integrator 3.0.0 for vSphere and EMC Storage

May 11th, 2010

I have been trying desperately this week to keep up to date with the latest announcements coming out of EMC World 2010.  Problem is they appear to be making them and blogging about them faster than I can read and assimilate them.

One blog post that did catch my attention was a post by EMC’s Chad Sakac. Chad constantly amazes me; he generates a massive amount of super high quality technical content for the EMC and VMware community. His blog post was entitled “EMC’s next generation vCenter Plugins” and details the latest and greatest offerings from EMC’s free vCenter plugins.

The Virtual Storage Integrator (VSI) V3.0.0 is a renaming of the existing EMC Storage Viewer 2.1 plugin that has been available for a while.  Why the rename? Well, EMC are introducing tighter integration by enabling storage provisioning from within vCenter; it has now surpassed being just a storage viewer.  The storage provisioning integration works with the CLARiiON across all protocols (FC, iSCSI, FCoE) and it also works with NFS on the Celerra. It also adds a greater degree of simplicity and reduces risk by automating all the tasks involved in provisioning and presenting storage to your vSphere cluster.

Chad explains it in much more detail and much better than I ever could in the following video.

I personally feel that the benefits of EMC’s ownership of and tight working relationship with VMware are beginning to shine through.  Such tight levels of integration are now being delivered and future development doesn’t look likely to slow down either. The quote from Chad below shows how aggressively his team are working to constantly bring new features to the table and, best of all, they’re completely free!

EMC Virtual Storage Integrator (or VSI) is the main EMC vCenter plug-in.  Over time, more and more functions that currently require installing one or more additional plugins will fold into the EMC Virtual Storage Integrator.  We’re beefing up the team behind it, and they have a very aggressive roadmap (wait till you see what’s next!!!)

Click the link below to find out more about what vCenter plugins are available, what they’re designed to do and where you can download them from in EMC Powerlink.

[Image link: overview of the EMC vCenter plugins available via EMC Powerlink]

Storage, VMware, vSphere

VMware PVSCSI Adapter performance and low I/O Workloads

February 21st, 2010

I’ve recently been implementing a vSphere deployment and have been looking at the new features introduced as part of Virtual Machine Hardware 7.  Obviously one of the major new components is the new Paravirtualised SCSI (PVSCSI) adapter, which I wrote about way back in May 2009.  When it first came out there were a number of posts regarding the much improved I/O performance and reduced latency this new adapter delivered, such as Chad Sakac’s vSphere I/O performance test post.

So the other day I stumbled across a tweet from Scott Drummond, who works in the VMware Performance Engineering team. Following a little reading and a bit of digging around, it appears that the use of PVSCSI comes with a small caveat.  It would appear that if you use the PVSCSI adapter with low I/O workloads you can actually get higher latency than you get with the LSI Logic SCSI adapter (see the quote below).

The test results show that PVSCSI is better than LSI Logic, except under one condition–the virtual machine is performing less than 2,000 IOPS and issuing greater than 4 outstanding I/Os.

This particular caveat has come to light following some more in-depth testing of the PVSCSI adapter performance.  The full whitepaper can be found at the following link.

PVSCSI whitepaper – http://www.vmware.com/pdf/vsp_4_pvscsi_perf.pdf

For those who don’t want to read the technical whitepaper, a summary of the issue can be found in the following VMware KB article.

VMware KB 1017652 – http://kb.vmware.com/selfservice/1017652

So basically, as opposed to just using the PVSCSI adapter by default with VMs running version 7 of the virtual hardware, have a think about each VM’s I/O profile and whether the PVSCSI or LSI Logic adapter would be the better fit.
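If you’re not sure what a VM’s I/O profile actually looks like, one rough way to sample it is esxtop on the host’s service console. This is a sketch of my own rather than anything from the whitepaper, and the exact views and counter names vary a little between ESX versions.

# batch mode: capture thirty 10-second samples for offline analysis in perfmon or a spreadsheet
esxtop -b -d 10 -n 30 > /tmp/io-sample.csv

# or run esxtop interactively, switch to the storage/disk view and watch CMDS/s (roughly IOPS)
# and ACTV/QUED (outstanding I/Os) for the disks backing the VM in question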

Gestalt-IT, VMware, vSphere

vSphere vMotion Processor Compatibility and EVC Clusters

January 25th, 2010

In today’s economic climate it’s currently the done thing to sweat existing assets for as long as you possibly can.  At the moment I am working on a vSphere deployment and we are recycling some of our existing ESX 3.5 U4 hosts as part of the project.  So over the weekend I was testing out vMotion between a new host with the Intel Xeon X7460 processor and an old host with the Xeon 7350 processor.  I was getting the following error message displayed, which pointed to a feature mismatch relating to the SSE4.1 instruction set.  Thankfully the error pointed me to VMware KB article 1993.

[Screenshot: vMotion CPU compatibility error highlighting the SSE4.1 feature mismatch]
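As a quick extra check of my own (the KB article doesn’t ask for this), you can see exactly which feature flags each host’s processors expose from the service console, which makes the mismatch obvious:

# list the CPU feature flags on the host – sse4_1 appears on the X7460 host but not on the 7350
grep flags /proc/cpuinfo | head -1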

Within this KB article it immediately refers you to using Enhanced vMotion Compatibility (EVC) to overcome CPU compatibility issues.  I had never used EVC in anger and wanted to read up on it a bit more before making any further changes.  A quick read of page 190 of the vSphere basic configuration guide gives a very good brief overview for those new to EVC.

So I was referred to VMware KB article 1003212, which is the main reference for EVC processor support.  Quite quickly I was able to see that EVC was supported for the Intel Xeon 7350 and 7460 using the Intel® Xeon® Core™2 EVC baseline.  In essence, as far as vMotion is concerned, all processors in the cluster would be treated as equal to an Intel® Xeon® Core™2 (Merom) processor and its feature set.  This basically masks the SSE4.1 instruction set on the Intel Xeon 7460 that was causing me the problem.

So I set about enabling my current cluster for EVC, however when I went to apply the appropriate baseline I was getting the following error displayed. The error related to the host that was currently running 3 Windows 2008 R2 x64 servers.  These servers were obviously using the advanced features of the Intel Xeon 7460 and as such that host could not be activated for EVC.

[Screenshot: EVC cannot be enabled because powered-on VMs are using CPU features beyond the selected baseline]

The vSphere basic configuration guide (page 190) makes the following recommendation for rectifying this issue; the example matched my situation exactly.

All virtual machines in the cluster that are running on hosts with a feature set greater than the EVC mode you intend to enable must be powered off or migrated out of the cluster before EVC is enabled. (For example, consider a cluster containing an Intel Xeon Core 2 host and an Intel Xeon 45nm Core 2 host, on which you intend to enable the Intel Xeon Core 2 baseline. The virtual machines on the Intel Xeon Core 2 host can remain powered on, but the virtual machines on the Intel Xeon 45nm Core 2 host must be powered off or migrated out of the cluster.)

Now here is the catch 22: my new vCenter server is virtual and sits on the ESX host giving me the EVC error message.  I had to power it off to configure EVC, but I can’t configure the EVC setting on the cluster without vCenter, so how was I going to get round this?  Luckily VMware have another KB article dealing with exactly this situation.  The aptly titled “Enabling EVC on a cluster when vCenter is running in a virtual machine” was exactly what I was looking for. Although it involved creating a whole new HA / DRS cluster complete with new resource groups, etc., it was a lot cheaper than buying a large number of expensive Intel processors. It worked perfectly, rectifying my issue and allowing me to use all servers as intended.

Moral of the story… check out VMware KB article 1003212 for processor compatibility before buying servers, and always configure your EVC settings on the cluster before adding any hosts to it.  If it’s too late and you have VMs created already, well, just follow the steps above and you should be fine.

vCenter, VMware, vSphere

Using the VMware PVSCSI adapter on a boot disk

January 24th, 2010

Having put in the first of a number of new vSphere ESX 4 Update 1 hosts, a colleague today set about building our new Windows 2008 R2 64-bit vCenter server.  He later informed me that he was not able to boot the Windows 2008 virtual machine using the PVSCSI (Paravirtualised SCSI) adapter.

I was absolutely positive I had read that this was now supported in Update 1. As it turns out it is, however it’s not quite as straightforward as just adding the PVSCSI adapter and installing Windows.  In fact Windows will not recognise the boot disk as it has no native driver for the VMware PVSCSI adapter.

There are a couple of ways to get round this.  The first involves creating your VM with a normal SCSI adapter that Windows 2008 does support and then installing Windows.  Once the installation is complete, add a second virtual disk with a second controller set up as PVSCSI and then install VMware Tools.  VMware Tools will deploy the driver required for the PVSCSI adapter; once installed you can safely reconfigure the original SCSI controller to be PVSCSI and remove the secondary controller and virtual disk.  Now when you reboot your machine you won’t be met with a blue screen of death, instead you will have a fully working Windows 2008 server using all the benefits of the PVSCSI adapter.

For full step-by-step instructions to complete the above process I recommend using Alan Renouf’s article. For those who prefer to use PowerShell scripting to make their changes, check out this fantastic script from LucD’s website which will do it all for you.

The above method is one way to get the PVSCSI adapter working on the boot drive but to be honest it’s a bit of a hassle to be doing this every time you deploy a Windows 2008 VM.  So I had a look to see what was involved in obtaining the driver files for loading prior to installation.

First you need to extract the driver files from your vCenter server. You can find the relevant driver files located in the following directory.

C:\Program Files\VMware\VMware Tools\Drivers\pvscsi\

Take the 3 files contained within and make a floppy image; I used UltraISO for this particular task but something like WinImage works just as well (or see the mtools sketch below). Now you need to boot your VM and, once the Windows installation files have loaded, attach your floppy image.
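If you’d rather avoid Windows tooling for the image itself, the same floppy image can be built on pretty much any Linux box with dd and mtools. Treat this as a rough sketch of my own: it assumes the three driver files have been copied into the current directory and keep their pvscsi.* names.

# create and format an empty 1.44MB floppy image
dd if=/dev/zero of=pvscsi.flp bs=1024 count=1440
mkfs.msdos pvscsi.flp

# copy the PVSCSI driver files onto the image, then attach pvscsi.flp to the VM's floppy drive
mcopy -i pvscsi.flp pvscsi.* ::/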

As you can see in the screenshot below the Windows VM does not pick up the attached virtual disk due to the lack of native driver support.

[Screenshot: Windows setup shows no disks because it has no native PVSCSI driver]

Once you’ve pointed Windows to the floppy image it picks up the VMware PVSCSI controller driver contained on it.  Click Next to apply the driver.

[Screenshot: Windows setup lists the VMware PVSCSI controller driver found on the floppy image]

Once the system has applied the driver you can see the virtual disk for your installation.

[Screenshot: with the driver loaded, the virtual disk is now available for the installation]

For those who are looking to add this driver or any other VMware tool driver into a Windows 2008 Pre-Installation environment,  this VMware KB article on how to do it could be handy.

VMware, vSphere

VMware VMSafe – Are there any actual products yet?

November 29th, 2009

I was doing some work out of hours the other night on my employer’s virtual infrastructure when, bang on time, the little red triangles started popping up against certain ESX hosts in vCenter.  Why, you ask? Well, it’s AV scanning time on our VMs of course, or the Sophos summit as we affectionately call it due to its uncanny resemblance to a mountain range when you look at the CPU performance stats in vCenter.

It got me thinking: has any one vendor actually got a product out there utilising the VMSafe API that could help me rid our virtual infrastructure of this problem?

My first stop was of course the main VMSafe page, where I did find a large list of official partners who are working on developing products that utilise the VMSafe API. The pleasing thing to see was that there are plenty of mainstream security vendors taking part.  However, I’ve still to see any of them release a product to market that actually utilises VMSafe.

Earlier this year in Glasgow I heard McAfee talk about VMSafe as part of the VMware vSphere launch road show.  They talked about building a vApp that could sit in your virtual infrastructure and take care of AV scanning, with the aim of reducing the CPU overhead that AV scanning introduces. I did a little trawl of the web and couldn’t find anything official; I did however find the following forum post (quoted below), which is definitely the unofficial line.

Virus Scan for Offline images is available, which uses VMSafe APIs to scan offline disks accessed via ESX

Nothing is currently road mapped for on-access scanning – no AV vendor has this technology available (or even road mapped as far as I’m aware) yet.

I did a bit more digging on this “scan offline disks” comment and found a recent article by VMware’s Richard Garsthagen.  This article reveals that a piece of software called the VMware Virtual Disk Development Kit (VDDK) can be used to conduct an offline scan of disks attached to powered on or off virtual machines (quoted below). 

VMware VDDK (also being seen as part of the VMsafe initiative, but has been available for longer). The VDDK is an disk API, that allows other programs to access a virtual machine’s hard disk like the VMware Consolidated Backup solution does. It does not matter is the VM is powered on of off, but a disk can just be ‘extra’ mounted to another virtual machine that for instance runs a virus scanner. The clear downside of VDDK is that nothing is real time.

Surely this would rid me of my daily scheduled Sophos summit, wouldn’t it? Think of a hypothetical scenario where you have a VDI setup with 1000 Windows XP VMs, and imagine the strain put on your ESX clusters by 1000 machines kicking off a scheduled daily AV scan. Would an appliance that could offline scan disks reduce the strain? Well, thinking about it, possibly not.  It would still have to conduct a scan of 1000+ virtual disks, only this time it wouldn’t have nearly as many CPU cycles available to churn through the work. All it would have is the resources assigned to the vApp, which is likely to be completely inadequate for such a large task. With this in mind it would probably take a large amount of time to complete.  It could even take longer than a day, which wouldn’t be much use for a daily AV scan. I’m sure some companies would rather suffer the ESX CPU resource pain point as opposed to sacrificing security through ineffective or untimely AV scanning.

Richard’s article, along with the solutions tab on the VMSafe webpage, did however reveal that a couple of products that use VMSafe have made it to market.  One is called vTrust from Reflex Systems, which appears to be a multi-faceted application that, according to their site, provides dynamic policy enforcement and management, virtual segmentation, virtual quarantine and virtual networking policies.  The other application is a hypervisor-based firewall appliance from Altor that supports virtual segmentation and claims to provide better throughput by using the Fast Path element of the VMSafe API.

So it would appear on the surface that progress has been slow.  To find only two VMware certified appliances in the market place was, I have to admit, quite a surprise!  It looks like it’s going to be a while before we see VMSafe being fully utilised by vendors, and even then we will have those wary individuals who will never quite be convinced.

Neil MacDonald of Gartner makes a good point about the potential for VMSafe appliances to introduce security vulnerabilities at a lower level in the infrastructure.

If I’m responsible for VM security, I’ll consider it after the APIs ship, after the vendors finally ship their VMSafe-enabled solutions, after I’ve got a level of comfort that these VMSafe-enabled security solutions don’t in of themselves introduce new security vulnerabilities

Edward L. Haletky, who is very much focused on virtualisation security, also makes a good point about low level vulnerabilities and the interaction of multiple VMSafe appliances.

I fully expect VMware to not only ensure the VMSafe fastpath drivers do nothing harmful to the virtual environment, but also address interaction issues between multiple VMSafe fastpath drivers. In addition, I would like such reports made available to satisfy auditing requirements.

So was VMSafe simply something to bolster the vSphere marketing launch, an announcement made before it should have been?  Usually VMware are quite good at keeping these kinds of things under wraps and releasing them when they are a little more mature and ready for use in real world scenarios.  Now I don’t know what work was done with partners in advance, but I would have liked to have seen a couple of the major security vendors releasing appliances at the same time as VMSafe was announced.  For me that certainly would have instilled a little more confidence in VMSafe than writing this article has.

If anyone out there is writing appliances utilising the VMSafe API and wants to comment, please do.  I would love to hear some news from the front line as to what is being developed, where it will be applied and when we can expect to see it.

General, Gestalt-IT, vSphere

How to: Install QLogic HBA Libraries for HP Insight Manager on ESX 3.5 U4

November 8th, 2009

I’ve just finished a piece of work upgrading a few hosts to ESX 3.5 Update 4 using the ESX Deployment Appliance.  As part of the upgrade I install version 8.2.0 of the HP management agents for ESX. Upon completion of the install I realised that I could not see the QLogic HBAs within the Insight Manager home page.

Upon investigation it appears that HP have removed the QLogic libraries from their software from version 8.0.0 onwards.  Now ideally I would have installed the libraries before installing HP Insight Manager, as indicated by Arnim Van Lieshout in his great HP agent upgrade script post.  In my case I had already done the install, so in order to see the HBAs and the connected HP storage I had to carry out the following steps.

I downloaded the relevant QLogic API library for ESX 3.5 U4 from the following site:
http://support.qlogic.com/support/oem_product_detail_vmware.asp

I created a temporary folder in the /tmp partition called qlogiclib

I unzipped the contents and copied it over to the new folder using WinSCP.

I logged on to the service console and ran the following commands to stop the HP agents.

service hpsmhd stop
service hpasm stop

I then navigated to the /tmp/qlogiclib folder and ran the command ./Install.sh

I then ran the following commands in the order shown below to restart the HP agents.

service hpsmhd start
service hpasm start

To check if it worked I navigated to https://hostname:2381 and checked for the external storage connections entry under the storage section on the home page.

Finally I deleted the /tmp/qlogiclib folder as it was no longer required.
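Pulled together, the whole sequence looks something like this. The folder name and agent service names are simply the ones from my environment, so treat it as a sketch rather than a copy and paste job.

# working folder for the QLogic API library (copy the unzipped files in with WinSCP or scp)
mkdir /tmp/qlogiclib

# stop the HP agents, install the libraries, then restart the agents
service hpsmhd stop
service hpasm stop
cd /tmp/qlogiclib && ./Install.sh
service hpsmhd start
service hpasm start

# tidy up once the external storage connections show under https://hostname:2381
rm -rf /tmp/qlogiclib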

Carrying out the same steps on vSphere ESX 4 does not have the desired effect.  Even though you can use the HBA test script located in the QLogic API library, the HBAs will not show up in HP Insight Manager (much to my annoyance).  When I eventually find a solution I will of course provide an update.

ESX, VMware, vSphere

How to: Check and change the ESX Swap Partition

November 1st, 2009

An interesting problem occurred the other day with one of our older production ESX 3.0.2 hosts. For the first time on any of our ESX hosts the service console memory ran out; this resulted in all VMs becoming unresponsive and a loss of service to our users.

Now these hosts were built a couple of years ago by a consultant and all had their service console memory set to the default value of 272MB. I’m in the process of upgrading all hosts to ESX 3.5 U4 and changing the memory levels to the maximum of 800MB, and this particular host was due to be upgraded in the next 2 weeks.  Unfortunate timing!

VMware support were as helpful as ever and advised my colleague to up the service console memory to 800MB.  My only concern was the fact that your swap space is meant to be twice your service console memory.  If the memory was only set to 272MB you can be sure that the swap partition wasn’t going to be set to 1600MB.

My colleague was having trouble finding out what size the swap partition was, so I gave him a hand. First of all he was doing a df -k at the service console, which shows the named Linux partitions but not the swap partition we were looking for.  To get information on all disks and partitions attached to the host we need to run fdisk -l.

This command showed us the swap partition created was made up of 1044225 blocks, though we weren’t sure exactly what this equated to in MB.

[Screenshot: fdisk -l output on the ESX 3.0.2 host showing a swap partition of 1044225 blocks]

I took a look at one of our newly built ESX 3.5 U4 hosts and compared its fdisk -l results to the scripts used to build it.  I quickly found that dividing the block count by 1024 gives you the size of the partition in MB.  So in this case the swap partition on the ESX 3.0.2 host was roughly 1GB (1044225 ÷ 1024 ≈ 1020MB), which was less than the recommended 2 x console memory sizing.

[Screenshot: fdisk -l output on a newly built ESX 3.5 U4 host for comparison]
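If you’d rather not do the block arithmetic at all, the service console (being Red Hat based) will report the swap size directly.  A couple of standard Linux checks, assuming you have console access:

# list active swap partitions and files with their size in KB
swapon -s

# or check the Swap: line for total, used and free in MB
free -m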

On this occasion VMware support advised us that it should be OK as it was.  That coupled with the fact we are going to rebuild the server in the coming weeks was enough for us to call the case closed.

However, what if we did want to change it? I’d always been taught that changing the swap partition after the host had been built usually meant a full rebuild.  However, as I’ve been working my way through Scott Lowe’s Mastering VMware vSphere 4 book I came across the steps to do it without a rebuild.  It’s always recommended to rebuild a host as opposed to taking this action, but occasionally needs must.

First, create a new swap file on an existing service console partition; the command below will create a 1.6GB swap file within the path entered (/path/to/).

dd if=/dev/zero of=/path/to/swap.file bs=1024 count=1640144

Use the following command to turn this into a usable swap file

mkswap /path/to/swap.file

Now enable the swap file with the following command

swapon /path/to/swap.file
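One extra step worth mentioning: as far as I understand it (this is the standard Linux approach rather than anything ESX specific), the new swap file won’t be used after a reboot unless it’s also added to /etc/fstab.

# make the swap file persistent across reboots, then confirm it is active
echo "/path/to/swap.file  swap  swap  defaults  0 0" >> /etc/fstab
swapon -s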

If you do try this, it is entirely at your own risk. I haven’t as I am planning to rebuild in the near future.  If I wasn’t I would probably have given this a shot just to put my mind at ease.

ESX, VMware, vSphere

Scottish VMware User Group

September 9th, 2009

I’ve been chatting to an ex-colleague recently who was trying to get a Scottish VMware User Group set up.  Through his hard work and determination he’s finally managed to get it off the ground; here’s hoping that we get enough interest to keep it up and running in the future.

The first meeting agenda looks good, with Mike Laverick of RTFM attending to discuss vSphere 4 storage.  We also have a talk from our hosts for the day, State Street, alongside EMC, about the storage setup behind their VMware infrastructure.

If you’re interested in becoming a Scottish VMUG member click the link and sign up

If you want to attend this particular VMUG click the register now button


General, VMware, vSphere

VMware vSphere Thin Provisioning

June 24th, 2009

I’ve recently been evaluating some of the new features in VMware vSphere to see what use they would be to my current employer. One of the areas that I touched upon in my “what’s new in vSphere Storage” blog post was thin provisioning.  I wanted to come back and cover this particular topic in more detail as it’s a key feature, it’s available in every edition of vSphere, and I’m sure everyone will be interested in it.

What is Thin Provisioning?

Thin provisioning in its simplest form is only using the disk space you need.  Traditionally with virtual machines, if you create a 500GB virtual disk it will use 500GB of your VMFS datastore. With thin provisioning you can create a 500GB virtual disk, but if only 100GB is in use then only 100GB of your VMFS datastore will be utilised. Credit to Chad Sakac for the diagrams below.

[Diagram: thick vs thin provisioned virtual disks, courtesy of Chad Sakac]

How does it work in vSphere?

Thin provisioning is being heralded as something new in vSphere, when in truth it was already available in VI3.  In VI3, creating a thin provisioned disk involved using vmkfstools and was not a supported configuration for production VMs.  Now in vSphere the creation of thin provisioned disks can be carried out from the VI Client (see below) and is a supported production configuration for a VM.

[Screenshot: the thin provisioned disk option when creating a virtual disk in the VI Client]

It’s as simple as ticking a check box, and the results are pretty good too.  Below you can see I have created two thin provisioned VMs on my new ESXi 4 host, and you can see the provisioned space and the used space stats being shown in the VI Client.

[Screenshot: provisioned space vs used space for two thin provisioned VMs in the VI Client]
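For comparison, this is roughly what creating a thin disk looked like in VI3 from the service console with vmkfstools.  The datastore and VM names below are made up purely for illustration.

# create a 500GB virtual disk in thin format on an existing VMFS datastore
vmkfstools -c 500g -d thin /vmfs/volumes/datastore1/myvm/myvm_1.vmdk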

What are the benefits?

The thin provisioning feature is perfect for my home lab environment where disk space is at a premium, but how does it translate into real world implementations of ESX?  Well, I for one have been looking at exactly this to identify what benefits could be achieved within my employer’s ESX estate. A quick audit found that our development and system test ESX environment was running at 48% disk utilisation, so straight away thin provisioning would save us 52% of the storage capacity currently allocated. Paul Manning of VMware mentioned on a recent communities podcast that on average vSphere would save users 50% on storage.  This is possibly not such a big thing when you’re talking about test environments, but when you move up to production SAN storage, saving 50% on an expensive SAN array is a very real and tangible cost saving.  One that people should definitely take into account when making a cost benefit case for buying or upgrading to vSphere.

What are the potential downsides?

One of my personal concerns with thin provisioning is the potential overhead on any write activity that requires the extension of the VMDK file.  To me there is an obvious VMFS operation that needs to take place there, which would add to the overall time to complete the disk write.  When there is a requirement to expand a disk, the VMDK file will grow in increments based on the block size of the underlying VMFS partition: 1MB, 2MB, 4MB or 8MB.  So the overhead may be smaller if your VMFS has been formatted with a bigger block size, i.e. for a 16MB write it only has to allocate 2 blocks when the VMFS block size is 8MB, but would have to allocate 16 blocks if it was formatted with the 1MB block size.  I can imagine this perceived overhead could put people off using thin provisioned disks for certain production environments, especially those with a lot of write I/O activity, SQL Server or Exchange for example.  To counteract that though, the improvements in the VMware I/O stack should compensate for this performance overhead.  This could potentially leave you in a situation where you’ve reduced a VM’s storage footprint and still have performance equal to that experienced in VI3, possibly not a bad trade off.  I’d also expect people running their VMware environment on enterprise SAN technologies from the likes of EMC or NetApp to notice minimal performance impact with thin provisioning, as SAN memory caches help take up the strain.

Another downside is if you want to use VMware Fault Tolerance to protect a VM then you cannot use thin provisioned disks.  To be honest this is a small issue as Fault Tolerance protection is most likely going to be on virtual machines that are important to your organisation.  These machines are probably the ones you wouldn’t thin provision in the first place for performance reasons.

Thin provisioning creates its own unique problem in that what we’re basically doing here is over provisioning the storage.  You need to keep a very close eye on thin provisioning as it’s quite feasible that your VMFS datastore could fill up and your virtual machines fall over.  Not what you want to come into on a Monday morning, or any morning for that matter.  So you need to monitor your storage and ensure that there is enough free space.  One of the simplest ways to do this is through the use of the new alarms in vSphere that allow you to alert on datastore usage and datastore over provisioning.  These should keep you from filling a datastore and killing your VMs or ESX servers.

[Screenshot: the new vSphere datastore usage and over provisioning alarms]
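The alarms are the right tool for day-to-day monitoring, but if you just want a quick ad-hoc look at free space on each VMFS datastore you can also do it from the ESX service console (assuming you still have console access):

# like df -h, but includes the VMFS volumes mounted under /vmfs/volumes
vdf -h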

One gotcha that you should watch out for is VM swap files, as these are usually stored with your virtual machine’s VMDK files in the VMFS datastore.  In VI3 the swap file was not deleted when a VM was powered down; in vSphere the swap file is deleted on power down and recreated when the VM is powered up.  You should be aware of this when over provisioning storage, as you could get into a situation whereby you find you can’t power on a VM because there isn’t enough space for the swap file to be created.  This becomes more likely as servers and VM configuration maximums increase: if you have a VM with 20GB of RAM it’s going to need 20GB of disk space for the swap file.  If you have 256GB of RAM in your vSphere host and you allocate it all out to VMs, then you need to think about the 256GB of disk capacity required to service virtual machine swap files.

Storage vMotion

If you’ve already got a VI3 environment then the chances are that your VMs aren’t thin provisioned, so how on earth are you going to take advantage of this new feature? Well, if you have purchased a vSphere edition that supports Storage vMotion then you can of course migrate the underlying storage and have it thin provisioned during the move.  This should allow existing VI3 customers to claim back a lot of space; as I mentioned before, I found that our development and test VI environments were only 48% utilised.  If I Storage vMotion all those VMs and thin provision them at the same time I will free up about 1.5TB of storage that wasn’t being used in the first place.
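If a VM can take a short outage there is also a command line route to the same result: clone the disk with vmkfstools while the VM is powered off and point the VM at the new copy.  The paths below are purely illustrative.

# clone an existing thick VMDK to a new thin provisioned copy
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk -d thin /vmfs/volumes/datastore1/myvm/myvm-thin.vmdk

# once you're happy, reconfigure the VM to use myvm-thin.vmdk and delete the original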

I’ve included a video below which demonstrates the Storage vMotion and thin provisioning features in vSphere quite nicely, enjoy!

Gestalt-IT, Storage, VMware, vSphere