Archive

Archive for the ‘VMware’ Category

Blending IaaS, IaaS+ and PaaS to deliver today’s IT

March 2nd, 2014

So, I have to start off by thanking Brian Gracely (a fellow EMCer) for starting me off on this runaway train of thought. A few weeks ago I read his article entitled “Will IaaS+ happen before PaaS?”. The question posed in the title is very apt, and it only reinforces the fact that we work in a rapidly changing IT world, one where the battle for market and mind share continues to alter the landscape on a monthly if not daily basis.

IaaS, PaaS, SaaS…   Reality!!

I remember when I first started at EMC as a vSpecialist, we were all about building private Infrastructure as a Service (IaaS) capabilities. Virtualise everything on VMware, utilise vCloud Director to create pools of resource, securely divide them between tenants, deliver a self-service catalogue through a portal and conduct chargeback against all of it. We also talked about extending that private cloud to the public cloud through a federated hybrid cloud model, but ultimately, when you strip away the bells and whistles, what we were all talking about from an end-user perspective was the basic provisioning of VMs to the OS layer.

Roll forward three years and things have moved up a gear significantly. Platform as a Service (PaaS) has become the new hot topic, and the ability to write applications without worrying about the underlying infrastructure is well and truly upon us. Pivotal Cloud Foundry, Red Hat OpenShift, AWS Elastic Beanstalk, Microsoft Azure and Salesforce.com’s Heroku are some of the key players in today’s PaaS market. Some of these PaaS offerings are open and can sit on top of multiple cloud infrastructures, others are more proprietary and locked in. All of them, however, are trying to capture a share of the customers who broadly speaking fall into the following categories.

Innovate or die — PaaS offers a way to “leapfrog” the competition with the ability to quickly integrate the latest innovations in software development and scale them quickly. Customers get that pie-in-the-sky seamless experience, which is a win for everybody.

Agility is key — PaaS is a strong entry point to embracing the DevOps mindset with minimal investment, helping organizations work toward agile development. When you don’t have to worry about the underlying infrastructure, it becomes a lot easier to achieve continuous deployment and quick, responsive feature updates. Developers don’t need to handle operations and operations don’t need to know how to code in order to take advantage of a PaaS.

Build once, deploy anywhere — This is relatively specific to the open source players, but the ability to build an application on a common platform means that you can write it once and deploy it on any infrastructure you’d like. In other words, if you build an application on Cloud Foundry, that application will run the exact same way on any other instance of Cloud Foundry, with the implication that you can ideally move from the public cloud to the private cloud or between public cloud providers with lessened fear of lock-in.

— Dekel Tankel, Director of Pivotal’s Cloud Foundry Product Team, quoted in CiteWorld

The reality (in my opinion)… well, it’s a mixed bag of course. Today I still see customers and IT teams striving to provide basic IaaS for their internal users. In truth developers wanted that simple capability from internal IT over two years ago; it is one of the main reasons that many public cloud providers, such as AWS, are what they are today. They offered a viable and quick alternative for infrastructure while internal IT teams were either still trying to work out how to do IaaS themselves or were simply asleep at the wheel, not realising that some very real competitors existed out there.

PaaS is an exciting shift in the industry: it’s allowing businesses to move up the stack, forget about the infrastructure and concentrate on building the applications that differentiate them from their competitors. It’s still early days for PaaS, though the momentum is building, not least with the Cloud Foundry Foundation announcement this week which saw some of the industry heavy hitters commit to developing the Cloud Foundry project.

image
I certainly don’t think anyone can argue with the concepts of PaaS, or with the fact it will rapidly take share in the coming years as development methods change. I do however feel that it won’t suit everyone immediately. It’s great for greenfield start-ups who are not constrained by legacy IT and want to operate in the public cloud, but how will it be adopted into existing businesses? How quickly will it be adopted into the enterprise? All enterprise customers should be taking a look at this today, working out how they can integrate PaaS into their IT function to fundamentally change how they manage the software development lifecycle.

I personally think we’re at another one of those interesting inflexion points. The business and the developers that work for them want to move ever faster. Internal IT has maybe only just got to grips with IaaS and can now service them with the VMs they want quickly. However the developers have moved on and already want more: they want to consume services and not just VMs. They look to the public cloud and they see rich services being layered on top of cloud infrastructure: messaging services, database as a service, in-memory capabilities and so on. The developers and the business are again demanding more than internal IT are currently delivering. Sounds familiar, doesn’t it! It sounds like the IaaS story from a few years back all over again.

Delivering IaaS, IaaS+ and PaaS

The question I asked myself after reading Brian’s article was: what is EMC doing to assist our customers to deliver in this crazy, fast-moving IT world of ours? When I say “customers” I mean that in the broadest sense; it could be assisting small IT departments, enterprise customers or service providers looking to deliver services back to businesses and public consumers.

At EMC we’ve been talking a lot about the 1st, 2nd and 3rd platform. Some of you may have seen the picture below before; it sums up today’s modern IT world very well I think. I certainly speak to customers who operate somewhere in all three of these platforms and I can safely say that I don’t see that model changing overnight. However, I do see companies striving hard to leave the legacy behind and leapfrog straight into the 3rd platform.

2014-02-09_2111

If we break it down, what businesses are technically looking to achieve today is optimising the 2nd platform while at the same time enabling the 3rd platform. This is where I think a blended model of IaaS, IaaS+ and PaaS will cover the majority of use cases. IaaS and IaaS+ will help optimise legacy and new 2nd platform applications and change how they are delivered. IaaS+ and PaaS will find themselves used for new application requirements in both the 2nd platform and the 3rd platform.

A picture speaks a thousand words, so I’ve attempted to draw out below what’s in my head on this subject. Thanks to Eric Wright for the graphic inspiration in his recent blog post. In theory, by mixing traditional IaaS with PaaS and using that combined stack to layer services on top of IaaS, whether they are PaaS delivered (MemDB aaS or Messaging aaS) or more traditionally delivered app blueprints (DBaaS straight on top of IaaS), we eventually come up with a hybrid model that caters to lots of different requirements.

IaaS_Plus

So if we then take that one step further, what does the EMC federation have to offer in this space to help customers achieve this blended model of IaaS, IaaS+ and PaaS? I came up with the diagram below. Pretty busy, isn’t it!!

VMware_EMC_Offering1

Let’s break it out from the top down.

Services – There is a plethora of services that could theoretically be consumed. Some you may put together yourself (via application blueprints or custom buildpacks for Pivotal CF), others may be pre-packaged for you (either blueprints or packaged services). They may be consumed on PaaS (buildpacks) or deployed straight onto IaaS (blueprints). Pivotal services such as GemFire, RabbitMQ and tc Server are some of the offerings that can be deployed either way today.
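As a rough illustration of the PaaS-delivered route, this is the kind of workflow a developer would follow with the Cloud Foundry cf CLI. It is a sketch only: the app, service and plan names (a RabbitMQ-style messaging service) are illustrative assumptions rather than anything specific to a given Cloud Foundry install.

# Sketch only - app, service and plan names are placeholders
cf push my-app                                  # push the application; the platform picks a suitable buildpack
cf create-service rabbitmq standard my-messaging   # provision a messaging service instance from the marketplace
cf bind-service my-app my-messaging             # bind the service instance to the app
cf restart my-app                               # restart so the app picks up the injected service credentials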

vCloud Automation Center – vCAC, which has recently had the vFabric Application Director functionality folded in, can be utilised to deploy VMs, operating systems and application blueprints straight onto physical infrastructure, multiple hypervisors and multiple cloud platforms. I’ve included Puppet Labs on the diagram as the integration with vCAC has greatly expanded the capability for service deployment, with vCAC being able to take advantage of the Puppet modules library. I think once VMware get vCAC fully plugged into Pivotal CF and the vCloud Hybrid Service (hopefully both aren’t too far away) it will make it an exceptionally powerful tool for automation, whether that be in a VMware or a heterogeneous cloud environment.

Pivotal Cloud Foundry – The open source Platform as a Service offering that is today compatible with multiple cloud environments (VMware, OpenStack, AWS and, I’m hoping, vCloud Hybrid Service soon). It currently comes in an open source and a Pivotal enterprise flavour; other custom variations will undoubtedly appear, and IBM Bluemix is a recent PaaS based on CF that I’ve been reading about. Cloud Foundry is creating a real stir in the IT world at the moment, with a lot of the IT heavyweights throwing their backing behind this open source project, prompting comments such as the one below.

“Cloud Foundry is on an absolute tear. The number of companies that have bought into the initiative, the amount of code being contributed, the customer wins that ecosystem members are enjoying suggest that Cloud Foundry is preeminent among all the open source PaaS initiatives.”

— Ben Kepes, Forbes


VMware Software Defined Data Centre – I think everyone knows the story with this; if you don’t, you really should :-)   VMware, in combination with its partners (EMC included), are working hard on delivering the software-defined data centre. The basic software-defined compute layer is where VMware earned its stripes; that area of the SDDC needs no introduction. Software-defined networking became a mainstream topic with the VMworld 2013 announcement of VMware NSX. With NSX, VMware are bringing the same consolidation, control and flexibility benefits to the network world as they did to the server world. Granular network policies that can follow a VM around regardless of its location (private or public clouds) are a key factor in enabling widespread SDDC and hybrid cloud adoption. The last element is software-defined storage, something VMware and EMC are working very hard on. EMC announced our ViPR offering at EMC World in 2013, a fundamental change in thinking around storage. Abstracting the control plane and enabling a single point of management for EMC storage, other vendors’ storage, as well as commodity and cloud storage, was a major change of direction for EMC. Providing software-only data services is another fundamental shift in mindset, but an essential one as the storage world slowly becomes more commoditised. Today EMC offer HDFS and object data services through ViPR; in the future there will be a lot more as EMC focus efforts on producing more abstracted software features. VMware have also gone down the commodity route with VSAN, still in beta but due for release soon; it will prove popular with those VMware-only shops who want to consume commodity server direct-attached storage.

VCE vBlock – The leading converged infrastructure stack company created as a joint venture between EMC, Cisco, VMware and Intel.  Converged infrastructure stacks are the easiest means of quickly deploying private cloud infrastructure within your own data centre. Built from best of breed components, available in multiple T-shirt size offerings, fully integrated, built in the factory, certified and ready to roll in 45 days or less,  it is the perfect way to consume infrastructure and underpin a blended IaaS, IaaS+ and PaaS model.

vCloud Hybrid Service (vCHS) – This week saw VMware announce their new vCHS service in Europe, based in the UK. This is a key announcement for VMware and one that is sure to get customers excited. Lots of customers have VMware in their own private cloud deployments today, and enabling those customers to extend their internal VMware environments while using the same automation and monitoring tooling will be very appealing. The capability to move your VMs back and forth between an on-prem VMware private cloud and vCHS, or simply to deploy your existing VM templates directly into a public cloud offering without upheaval, is a huge plus for vCHS. With IPsec VPN or direct connect network capabilities on offer, vCHS offers the simplest means of extending your data centre to consume as required. Once the offering beds in and a few more bells and whistles are added (I’m thinking Pivotal CF PaaS here) it will be an even more compelling offering and a large part of VMware’s future business.

Summary

This has been a very long post, written over a number of weeks and about a moving target (I’m thinking of the Cloud Foundry Foundation announcement and the vCHS launch here). It basically comes down to businesses and their developers wanting to constantly innovate and do more, and to IT struggling along to keep up with that insatiable appetite and deliver what they want. I believe IT departments want to help them do that, but they have to balance delivering with all the challenges that come with existing IT legacy and the requirement to maintain secure and compliant environments as per the rules and regulations that govern their business.

I believe the answer is a mixture of platform 2 and platform 3 solutions; it’s a mixture of legacy and new-world applications, and it’s a mix of legacy IT infrastructure, IaaS, IaaS+ and PaaS as I outlined above. With the work going on at EMC + VMware + Pivotal, it’s a no-brainer that this particular federation of companies is in a perfect position to help the businesses, developers and IT infrastructure teams on that journey to innovate and change how they do what they do!

EMC, Gestalt-IT, Pivotal, VMware

Configuring VASA with EMC arrays – CLARiiON, VNX and VMAX

December 20th, 2011

Over the last week I have seen a number of questions in and around VASA and how it is configured for EMC arrays. I got a couple while doing the Q&A for EMC’s recent VNX best practices with vSphere 5 live WebEx, and the day after I was asked by Cormac Hogan over at VMware to take a look at a question asked on the VMware blog site. So I’ll admit now, I hadn’t really had a chance to look at VASA in depth, shame on me! However, I thought that this was as good a chance as any to learn, so I decided to do a post on how to configure it for both EMC’s VNX and VMAX systems. A big thank you to my EMC colleague Garrett Hartney for providing both his time and an environment that we could set this up in.

EMC’s VASA implementation

For those not familiar with VASA I strongly suggest reading this article to familiarise yourself with the what and why around this new VMware API. For those who just want the short version, VASA is essentially an API that allows storage vendors to publish array capabilities up to vCenter.  This allows VMware admins to see characteristic information about the storage underpinning their datastores and also allows them to use VMware storage profiles to enforce VM storage placement policy compliance, e.g. This SQL VM will always sit on performance disks.

The table below shows how EMC currently publishes its array capabilities through VASA 1.0 up to vCenter.

Capabilities

An example of how this looks for a datastore when pulled through to vCenter can be seen in the screenshot below.

vCenter_View

Core Components and architecture

Regardless of which array you are connecting to, EMC’s implementation of VASA is done using Solution Enabler and something known as the SMI-S provider. Together these two components act as a middle tier between vCenter and the different arrays being queried. It’s worth pointing out that the SE / SMI-S server supports in-band (SAN-attached) and out-of-band (network) connectivity for VNX and CLARiiON arrays, and in-band connectivity only for Symmetrix arrays. The architecture of the setup is shown in the diagram below.

VASA_SMI-S

SE / SMI-S server deployment

To get started with VASA you will need to download SMI-S version 4.3.1, which comes pre-bundled with Solution Enabler 7.3.1. This software can be downloaded from the link below and comes with options for 32-bit and 64-bit Windows as well as Linux. For full details on OS support see the release notes for SMI-S 4.3.1 – (PowerLink account required for downloads)

Home > Support > Software Downloads and Licensing > Downloads S > SMI-S Provider

As part of my own deployment I am using a Windows 2008 R2 64 bit server to deploy the core components.  The server has been built as standard with no special configuration required.

  • First of all we need to deploy Solution Enabler and SMI-S on the designated server: locate the installation media and run the install package.

install1

  • When presented with the welcome screen click next.

install3

  • Leave the install location as default and click next.

install4

  • When prompted select the array provider only and click next.

install5

  • Review the installation settings and space requirements and click next to install.

install6

  • Once the install is complete, click finish.

install7

  • Configure the environment variables on the server to include the SYMCLI path

install8

  • Locate the following file and open it for editing

install9

  • Locate the line below and change the value from 100 to 1200, save and exit the file.

install11

  • Navigate to the services console and restart the ECOM service

install12

  • Navigate to the location shown below and run testsmiprovider.exe.

install13

  • The next step is to connect to the SMI-S provider. I used the defaults, which are shown in the square brackets; just hit enter on each line to use the default.

install14

  • Once connected you will see the following at the command prompt

install15

  • Type the dv command at the prompt to display version information about the SMI-S provider install. This basically proves that everything is working as expected.

install16

  • That concludes the basic installation and configuration of the SMI-S and Solution Enabler server. Now all we need to do is add the storage arrays we want displayed to vCenter via the VASA API.

CLARiiON and VNX

SUPPORTED – CLARiiON Navisphere Release 28, 29 & 30, VNX Unisphere Release 31
(SMI-S supports many earlier CLARiiON releases but vSphere 5 does not)

Earlier I mentioned that the CLARiiON and VNX arrays could be added to SMI-S in-band or out of band.  The most common method and the one I intend to use here is to connect out of band, i.e. across the network.  If you do want to connect in-band with direct SAN connection then check out page 39 of the SMI-S v4.3.1 release notes.

One major prerequisite for connecting CLARiiON and VNX arrays is that the user account used to connect to them must be an administrator login with global scope. At this point you should hopefully still be connected to the testsmiprovider.exe application used earlier; if you are not, please repeat the command-line steps shown above to reconnect.

  • Once connected successfully, type the command addsys to begin adding the array.
  • Enter the IP address / DNS name for Storage Processor A and hit enter.
  • Enter the IP address / DNS name for Storage Processor B and hit enter.
  • You can continue to add additional arrays here or hit enter to move to the next step.
  • Accept the default for the address type, i.e. IP/Nodename, by hitting enter.
  • Continue answering this question for each storage processor / array added.
  • Enter the global-scope administration user account for connecting to the arrays.
  • Enter the password for the administration account being used.
  • You will then see the message +++++ EMCAddSystem ++++

install17

  • After a while you will see the output from the addsys operation; as you can see below, the output is 0, which indicates success. The details of the system added are then listed.

install18

  • If you now run the dv command the arrays added will be listed as connected.

install19
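Putting those prompts together, the whole exchange at the testsmiprovider prompt condenses to something like the sketch below. The annotations are placeholders for the prompts described above, not output captured from a real array.

addsys      # start adding an array; then answer the prompts in turn:
            #   IP / DNS name of Storage Processor A, then Storage Processor B
            #   a blank entry to finish adding systems
            #   the default address type (IP/Nodename) for each entry
            #   the global-scope administrator username and password
            # an EMCAddSystem return value of 0 indicates success
dv          # re-run dv; the newly added array should now be listed as connected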

  • Now that the array is registered, we need to add the VASA provider into vCenter. Log into vCenter and navigate to the home screen, then locate and click on the Storage Providers icon.
    install20
  • Within the storage provider screen click on the add button as shown below.

install21

  • Enter a name for the provider and enter the URL shown below; the IP address of the server hosting SE / SMI-S should be entered where it has been blanked out. The user name is admin and the password is #1Password.

install22

  • When prompted accept the certificate for the SMI-S provider

install23

  • Once successfully added you will see the provider displayed

install24

  • Highlight the provider and you will see the array that was connected to the SMI-S provider server earlier.

install25

  • To check that VASA is working correctly in vCenter click the VM Storage Profiles icon on the home screen within vCenter.

install26

  • When setting up a new storage profile you should be able to see the storage capabilities presented to vCenter; these are shown below and are marked as “system”.

install27

  • Job done, VASA successfully deployed and storage capabilities showing in vCenter!

DMX4, VMAX and VMAXe

SUPPORTED – Enginuity 5875
(SMI-S supports earlier Enginuity releases but vSphere 5 does not)

Now unfortunately I do not have access to a Symmetrix to complete my testing; however, the release notes for SMI-S state the following, which makes it sound very easy.

When started, the SMI-S Provider automatically discovers all Symmetrix storage arrays connected to the host on which the Array provider is running. No other action is required, such as running a symcfg discover command.

As mentioned earlier, Symmetrix discovery is done in-band through small gatekeeper LUNs presented to the SE / SMI-S server. If it is a virtual server then ensure that the LUNs are presented to the VM as physical mode RDMs. The SMI-S release notes have the following to say about best practice.

When using the SMI-S Provider to manage Symmetrix arrays, it is recommended that you configure six gatekeepers for each Symmetrix array accessed by the provider. Only set up these gatekeepers for the host on which the SMI-S Provider is running.

So in theory it should be as simple as completing the following steps:

  • Present the gatekeeper LUNs to the server (physical or virtual)
  • Restart the ECOM windows service to restart the SMI-S provider (auto discover arrays)
  • Use the testsmiprovider.exe tool, run the dv command and verify the Symmetrix array is attached.
  • Thanks to my colleague Cody Hosterman (who does have a Symm) for the screenshot.

Symm_Config
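In condensed form, and assuming default install locations (the ECOM service name and the SMI-S paths may differ on your build), the verification on the SE / SMI-S server looks something like this:

# sketch only - restart the SMI-S provider so it auto-discovers the newly presented gatekeepers
# (use the Services console, or "net stop ECOM" / "net start ECOM" if the service is registered under that name)
# then, from the SMI-S provider bin directory used earlier:
testsmiprovider.exe
# connect with the defaults and run:
dv
# the auto-discovered Symmetrix array(s) should now be listed, as in the screenshot above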

One point to note: if you have SMI-S installed on the same host as the EMC Control Center (ECC) Symmetrix Agent or Symmetrix Management Console (SMC), there are a couple of steps you need to take to avoid some spurious errors. Check out page 37 of the SMI-S v4.3.1 release notes for further information on the changes required to avoid this.

Summary

I think the important thing to remember here is that this is version 1.0 of VASA. It may not be the most elegant solution in the world but it is a start on what I think will become a key feature in years to come. We are fast moving into an age where VMs become objects for which we simply check a box to ensure our requirement or service level is delivered. Imagine a scenario where a VM is created and, as part of the creation process, you select the storage based on the VASA information passed up to vCenter from the array. Do I want it on a RAID 5 or RAID 6 protected datastore? Do I want it on a RecoverPoint replicated datastore? Do I want it on a VPLEX distributed datastore? Do I want it on a datastore that is SRM protected? Although it is v1.0, you can see that the potential use cases for this feature are only going to expand.

Some of you may well have seen Chad Sakac’s blog post back in September entitled Help us make VASA (and EMC’s VASA provider) better! It includes a questionnaire about what you, the end customer, want to see from VASA. This is a great chance to have your say and influence how EMC implement VASA going forward; let’s make v2.0 of VASA a feature that delivers on the huge potential v1.0 has shown.

EMC, VMware

EMC VSI Storage Viewer, Solution Enabler and Symmetrix arrays

September 26th, 2011

I’ve recently been looking at the implementation of EMC’s free Virtual Storage Integrator (VSI) with a few of our older Symmetrix customers. Customers using VMAX and VMAXe have the ability to deploy delegated storage provisioning for their VMware admins; however, DMX customers can only use the read-only Storage Viewer functionality, as the DMX is not supported with Storage Pool Manager (SPM), which provides the back end for storage provisioning. Some interesting questions came up recently with a customer about how best to deploy the VSI Storage Viewer with DMX arrays, and I thought it would be worth sharing the findings with a wider audience. Basically I’m looking to cover the different ways the VSI can connect to a Symmetrix array and how some of the options selected affect end-to-end operations.

VSI to Symmetrix Connectivity

So the VSI tool can be used in two ways with Symmetrix arrays: you can utilise the local Solution Enabler (SE) installation that comes with the VSI, or you can use a dedicated Solution Enabler server. It’s important to remember that Symmetrix arrays can only be discovered in-band; basically this means the SE install needs a direct connection to the physical array. This is achieved through the presentation of small LUNs known as gatekeeper LUNs, something existing Symmetrix storage teams will be very familiar with. So let’s look at the two different possible setups.

Local Solution Enabler Deployment

LocalSEDeployment

The local deployment model shown above would require a gatekeeper LUN being presented / zoned through to the machine that the VI Client, VSI and local SE install have been deployed on. Communication with the array in this instance flows directly between the client PC and the array. In the majority of instances this isn’t going to be very practical for a number of reasons.

  • Each VMware admin client with VSI deployed would need a direct array connection.
  • Most Symmetrix arrays are FC attached and client PCs are not.
  • Arrays live in remote data centres and VMware admin PCs live in the office.
  • Increased security risk, i.e. too many direct array connections to secure

 

Remote Solution Enabler Deployment
 RemoteSEDeployment

The remote deployment model shown above would require gatekeeper LUNs being zoned through to a dedicated server. VMware admins would then connect through this remote SE server when querying Symmetrix arrays for information with the VSI. Communication flow in this instance always goes through the server; however, as you’ll see later, results can be returned from the SE server or the array depending on the VSI configuration. This kind of setup is more practical for a number of reasons.

  • Remote SE servers are usually already in place for storage management tasks.
  • Available as a virtual appliance for quick deployment if not in place already.
  • Supports connectivity by multiple remote VMware admins using VSI.
  • Manage multiple Symmetrix arrays through one server.
  • Decreases security risk, i.e. single device connection to array.

 

Mix and Match

The model above is by no means rigid; you can craft a number of solutions out of the principles shown above. If your vCenter server sits in the same data centre as the array then you could present gatekeeper LUNs to it and use it as a management point whenever you want to get information from the array. Another possible solution is to put a management virtual machine in the data centre with the VI Client and VSI installed and present a gatekeeper to it as an RDM; whenever a VMware admin needs information from the array they connect into that management VM to carry out the work. Basically there is a solution for deploying VSI with Symmetrix arrays no matter what your setup looks like.
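For the management-VM option, the gatekeeper ends up attached as a physical compatibility (pass-through) RDM. A minimal sketch of doing that from the ESX service console is below; the naa identifier and datastore paths are placeholders, and the same result can be achieved through the Add Hardware wizard in the VI Client.

# identify the small gatekeeper LUN among the devices the host can see
esxcfg-scsidevs -l

# create a physical compatibility RDM pointer file for it (placeholder paths / device ID),
# then add that .vmdk to the management VM as an existing disk
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/mgmt_datastore/mgmt-vm/gatekeeper-rdm.vmdk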

VSI Discovery Process Flow

One question that did come up recently was what happens when you select the AutoSync option for a Symmetrix array while using the remote SE server solution. How often does it poll the array? Well, the answer is that it doesn’t, which is strange, as the term AutoSync gives the impression that it syncs with the array on a regular basis. So how does it work?

VSI_SYMM_1

AutoSync enabled

When AutoSync is enabled, each time you request array data (e.g. by clicking on the EMC VSI tab for a datastore) the request forces the SYMAPI database on the remote SE server to be updated from the array; the up-to-date array information is then returned to the VSI. There is obviously a slight cost involved in doing this, as the remote SE server needs to make the calls to the array in order to update its local database before responding. Typically this would introduce a 10 – 20 second delay, but that cost means you guarantee the information received is up to date and valid.

AutoSync Enabled

AutoSync disabled

When AutoSync is disabled, each time you request array data the request is answered from the cached information in the local SYMAPI database on the remote SE server. This is obviously the fastest method, as you don’t have the cost of querying the array directly for an update, but the information may be out of date.

AutoSync Disabled

With AutoSync disabled it’s up to the VMware administrator to initiate the sync of the array from within the VSI. Alternatively the storage team can initiate a sync with the array directly on the SE server using SYMCLI. To initiate a sync manually from the VSI, go into the tool and select Symmetrix Arrays from the list of features, highlight the array and click on Sync Array.

VSI_SYMM_2
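For the SYMCLI route, the manual refresh is just a SYMAPI database update run on the remote SE server; a rough sketch (assuming SYMCLI is installed and in the path on that server) would be:

# run on the remote Solution Enabler server
symcfg discover        # rescan via the gatekeepers and refresh the local SYMAPI database
symcfg list            # confirm the Symmetrix arrays are visible following the refresh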

Summary

The free EMC VSI Storage Viewer tool can be of great benefit to Symmetrix customers, allowing VMware admins improved visibility of the underlying storage layers. In larger environments where Symmetrix arrays are traditionally used, you tend to find VMware and storage are managed by separate teams. Anything that improves the information flow between the two teams during troubleshooting has to be a must-have tool. As shown above, some thought needs to be given to how you set it up. My personal preference would be to always go for the remote SE server solution. Enable AutoSync if your underlying VMware storage environment changes often; if it doesn’t, then a manual sync every now and again should suffice.

Additional notes and links

SPC-2 Flags

It’s worth noting that SPC-2 flags need to be set on the FA port or on the initiator of the ESX host for the VSI to work correctly; in fact this is a required setting for ESX generally. This has come up a couple of times recently, so I thought it worth mentioning to ensure that people have it set up correctly. The following whitepaper gives you more information.

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H4116-enabling-spc2-compl-emc-symmetrix-dmx-vmware-envnmt-wp.pdf

VSI installation Media

Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator (VSI) – (Please Note: PowerLink account required)

Solution Enabler Media

Home > Support > Software Downloads and Licensing > Downloads S > Solutions Enabler – (Please Note: PowerLink account required)

Solution Enabler Documentation

Home > Support > Technical Documentation and Advisories > Software > S > Documentation > Solutions Enabler – (Please Note: PowerLink account required)

Storage, VMware

EMC Virtual Storage Integrator and the Access Control Utility

September 8th, 2011

At EMC the vSpecialist team often end up talking to a lot of customers about EMC’s FREE Virtual Storage Integrator (VSI) plug-ins for vCenter Server. Not only do customers love the fact that it is FREE, they also love the features delivered. The ability to accurately view, provision and manipulate EMC storage directly within vCenter empowers VI admins and makes everyone’s life that little bit easier.

When I started writing this article we were on version 4.2 of the VSI plug-ins; following VMworld 2011 we are now up to version 5.0, the fifth generation of this excellent VMware / EMC toolkit. The plug-ins that make up the VSI are listed below; to download them use the link below or use the cookie trail to navigate to the page on EMC PowerLink.

  • VSI Storage Viewer Plug-in 5.0
  • VSI Unified Storage Management Plug-in 5.0
  • VSI Storage Pool Management Plug-in 5.0
  • VSI Path Management Plug-in 5.0

Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator (VSI) – (Please Note: PowerLink account required)

One of the great features that people are drawn to is the ability to allow VI admins to provision storage directly from within vCenter. This is done with the VSI Unified plug-in for Celerra, CLARiiON and VNX(e), and with the VSI Storage Pool Management plug-in for the VMAX. One of the first questions I often get asked is how this is secured: how does the storage team ensure that only the right VMware admins are manipulating the underlying storage?

The answer previously was… well, to be honest we didn’t really have an answer to this one. Technically, if you allowed the VMware admins to provision storage you needed to trust them not to go provisioning crazy and fill up your storage array. Obviously that response was not really acceptable for any environment, and EMC have been working to rectify that.

acu_icon

The Access Control Utility (ACU) is a new part of the VSI framework which allows storage administrators to granularly control the availability of storage platforms and the storage pools on those platforms. Once created, these security profiles can be exported, passed to the VMware administrators and imported into the VSI Unified Storage Management plug-in. The rest of this post details the steps involved in completing this process for a VNX array in vSphere 4.1.

So we start by double-clicking on the shiny padlock icon that will have been added to your desktop when you installed the VSI Unified Storage Management plug-in. When the ACU starts we are presented with the profile management screen. This will of course be blank the first time you start the utility; in the screenshot below, however, you can see a couple of existing access profiles I have created for some VNX arrays in the lab.

Profile_Main

To create a new profile you simply click the Add button; you are then presented with the details screen for the new access profile being created. Here you enter the name of the profile and a suitable description, and click next when finished.

Profile_New1

The next step in the wizard is where you define the storage system that will be permissioned as part of the security profile. You click on Add and then select the system you are going to permission; as you can see, the VSI ACU supports Celerra, CLARiiON, VNX and VNXe arrays. For VMAX you need to look at Storage Pool Manager (SPM) to control access; I’ll look to blog about this one at a later date.

Profile_New2

The next screen presented very much depends on the storage system you select. If you choose the Celerra option you’re prompted for the details of the Control Station, username and password. Select the CLARiiON and you’re prompted for the Storage Processor details and login credentials. If you select the VNXe then you’re prompted for the management IP and the login credentials. I’m sure you can see the pattern developing here!

In this example we are dealing with a VNX array and as such the option is whether you want to give access to block storage, file storage or both. As both are controlled differently within the VNX, if you select both you will need to enter the IP and credentials for the Storage Processor (Block) and the VNX Control Station.  For the purposes of this example I’m going to use Block only as you can see in the screenshot below.

Profile_New3
When you click next you’re prompted to enter the storage processor IP address and log on details as shown below.

Profile_New4

Once you are authenticated you get to select the granularity of access you want to provide.  It’s important to note that when the ACU refers to storage pools it means any storage pools and traditional RAID groups that may have been created on the VNX array.  There are 3 options available as you can see in the screenshot below.

  • All storage pools
    This option basically gives a VMware admin free rein to provision LUNs with the VSI all over the array. A potential use case for this may be a dedicated development VMware environment with its own dedicated array where the storage team don’t care too much about usage.

  • No Storage Pools
    This option is a complete lockdown and acts as an explicit deny to prevent any accidental provisioning on an array, i.e. the VSI Unified Storage Management feature cannot talk to the array at all; it won’t even show up as an option.

  • Selected storage pools
    As the name indicates this option allows the selection of certain storage pools for VSI provisioning.  A potential use case here would be a mixed environment where the array is shared between VMware and physical workloads.  As a storage administrator you would grant permission to the VMware storage pools only thus preventing any potential mis-provisioning (not sure that is actually a word but it certainly has its place when we talk about VSI provisioning)

Profile_New5

In this example I’ve chosen selected storage pools, as I think this is probably the scenario that most people will be looking for the ACU to help them with. Within the next screen you are presented with a list of all storage pools / RAID groups on the array. Here you select the storage pools / RAID groups you want to give the VMware admin access to; when you’re happy with your selection you simply select finish. Note in the screenshot below that I have selected two individual storage pools (one is a RAID group) to be part of this particular storage profile.

Profile_New6_V2

Once you’ve completed storage pool selection you are returned to the profile screen. You can finish your profile creation right here by clicking on finish, or you can add additional storage systems if your VMware environment consists of multiple arrays.

Profile_New7

Once you have completed the creation of your security profile, the next step is to export it so you can pass it over to your VMware admins. To do this simply highlight the security profile, click on export and save the file.

Profile_New8

Choose a location to save the file and don’t forget to add a passphrase to the file so that it cannot be misused.

Profile_New9
It’s important to remember that the login credentials provided by the storage admin during the ACU profile setup are the ones used when the profile is imported into the VSI. The VMware admin will see the connection details and username being used but will not see the password. For audit purposes on the array it may be best to set up a dedicated account for use with the VSI and storage profiles. It should also be noted that the full details of the storage profile are encrypted within the profile export file, as you can see below.

Profile_New10

So now that you’ve finished creating your storage profile you can pass it on to the VMware administrators to import into the VSI.  To do this you go into vCenter and open up the EMC VSI screen from the home screen.  Click on the Unified Storage Management feature,  then click on add and select Import Access Profile before clicking next.

vCenter_Import1

You now select the XML file created by exporting the ACU storage profile, enter the passphrase you selected and click next.

vCenter_Import2

As you can see below the VNX array has been added to the VSI and provisioning access is marked as Restricted.  This is as expected as we configured the profile to give access to only two storage pools, FAST_Pool_3_Tier and RAID Group 10.

vCenter_Import3_MODIFY

When you use the EMC VSI to provision storage you will be presented with the VNX array that was part of the imported profile.  You select the storage array and as you can see in the screenshot below you can only create storage on the two storage pools that were added to the ACU storage profile.

vCenter_Import4

Summary

The EMC Access Control Utility was something I had been looking to write about for a while. Since its release I’ve often wondered how exactly it worked, what it could and could not do, and how it could better meet customer needs. The steps above show that it is possible for a storage team to delegate control of storage pools so VMware admins can quickly provision the storage that they need. Becoming more efficient is something we as vSpecialists talk about on a daily basis, and this tool is one of those first steps that you can take to make life easier. If you are a VMware admin who is working with EMC storage then I suggest you speak to your storage team about this. Likewise, if you are a storage admin, reach out to your VMware counterparts and discuss how this could save you both time in the long term.

Video

My boss Chad Sakac put a video together for VMworld 2011 which maybe explains it better (certainly quicker) than I have in this blog post. I left it to the end though, so you read the article before discovering it. My step-by-step approach is simply so I can fully understand how it fits together and, as I go, deal with the many “what if” or “how does that work” kind of questions. Hope you find it useful in some way; feel free to comment or ask questions.

EMC, New Products, VMware

XenDesktop 5, vSphere 4.1 and VNX Reference Architecture

May 23rd, 2011

A common theme that I see coming up time and time again with customers is VDI using Citrix XenDesktop and vSphere 4.1. Its popularity generally stems from the previous success companies have had with the more traditional Citrix products such as Presentation Server / XenApp. I know when I was looking at VDI solutions I was very much in favour of Citrix due to one thing: the ICA protocol. It works, and it works well over long distances; in a lot of companies it has proven itself over a long period of time, and it is a protocol they trust to deliver.

Following a customer meeting recently I was desperately searching for an EMC reference architecture (RA) for a XenDesktop / vSphere deployment. At the time it turned out we didn’t have a completed one; we did however have one in draft format that was going through the final stages of review. That RA has now been completed and released for public consumption; an overview of the document’s purpose is below.

The purpose of this reference architecture is to build and demonstrate the functionality, performance and scalability of virtual desktops enabled by the EMC VNX series, VMware vSphere 4.1 and Citrix XenDesktop 5. This solution is built on Machine Creation Services (MCS) in XenDesktop 5 and a VNX5300 platform with multiprotocol support, which enabled FC block-based storage for the VMware vStorage Virtual Machine File System (VMFS) and CIFS-based storage for user data.

The RA covers the technologies listed below and details why the VNX array with FAST Cache enabled is a perfect match for your Citrix VDI deployment. One other interesting area that is discussed is the use of Citrix Machine Creation Services (MCS), which is a new feature in XenDesktop 5 and provides an alternative to Citrix Provisioning Server (PVS). For those new to MCS I suggest you have a read through the following Citrix blog post, as there are some additional design considerations around IOPS.

  • Citrix XenDesktop 5
  • Microsoft Windows 7 Enterprise (32-bit)
  • Citrix Machine Creation Services (MCS)
  • VMware vSphere 4.1
  • EMC VNX5300 – (FAST Cache & VAAI enabled)
  • EMC Virtual Storage Integrator (VSI) – free on EMC PowerLink

If you are considering XenDesktop 5 and vSphere 4.1 then I suggest you download and have a read through the RA linked below.

EMC Infrastructure for Virtual Desktops enabled by EMC VNX Series (FC), VMware vSphere 4.1 and Citrix XenDesktop 5

Citrix, EMC, vSphere

EMC PowerPath/VE – MpxEsxPathClaim failure – LUNz

October 16th, 2010
I stumbled across some interesting error messages the other day in a host’s vmkwarning log file. The errors were related to the deployment of PowerPath/VE, and as I dug a little deeper I found this was an issue on all three hosts I had recently built. When I connected via iLO I could see the additional error messages on the console, as you can see from the screenshot below.
 
ESX_PPathVE_Issue
 
I was a bit concerned, as this was my first deployment of ESX 4.1 with the latest PowerPath/VE 5.4 SP2 software. I was also a little confused, as I hadn’t actually presented any LUNs to this ESX cluster. All I had done was zone the HBAs through to the storage processors, set up a storage group on the CLARiiON and add the hosts to it. So what path was PowerPath/VE trying to claim if there were no LUNs presented?
 
While on the service console of one host I ran the command esxcfg-mpath -l to list all the paths. As you can see from the screenshot below, I came across a fibre channel disk that the HBAs were attempting to connect to, even though nothing had been presented. I also noticed that it was using the VMware native multipathing plug-in (NMP) and not PowerPath/VE, and it was also showing as dead! So was this the path that PowerPath was failing to claim?
 
ESX_PPathVE_Issue_6
 
Following discussions with EMC Support I was informed that this was what is known as a LUNz. This is a fake logical unit presented to the host by the storage processors to provide a path for host software to send configuration commands to the array when no LUNs have been presented.
 
In ESX 4.x the Navisphere host agent that you would normally install on a CLARiiON-attached host is not supported. Instead the ESX host registers itself with the storage array automatically (this requires FLARE 28 or later). In order to do this it needs a communication path to the array, hence the LUNz being presented. If a LUN is presented to the host then that would be used instead, therefore the LUNz disappears when a LUN is added, as it is no longer required.
 
So I added a 100GB LUN to the storage group and immediately received a number of email alerts from vCenter stating that connectivity had been lost to the naa.500 device. So, as expected, as soon as LUN 0 is presented the LUNz is no longer required.
 
The key identifier that this was a LUNz was the naa.500 shown in the device name (see the screenshot above). A LUN presented from the CLARiiON shows up with a device name of naa.600, as you can see in the screenshot below. You can also see that the plug-in is now set to PowerPath, which is what I would expect.
 
ESX_PPathVE_Issue_7
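A quick way to pick these out from the service console is the prefix of the device identifier; a rough sketch (the exact output format varies slightly between ESX builds) is below.

# list every path the host can see, as used earlier in this post, and pull out the device IDs
esxcfg-mpath -l | grep -i "naa."
# on a CLARiiON, an ID starting naa.500 is the array-generated LUNz,
# while a genuinely presented LUN shows up with an ID starting naa.600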
 
Looking in vCenter I can now see multiple active paths to the device which tells me that PowerPath/VE has truly kicked in and is working as expected.
ESX_PPathVE_Issue_8
 
There wasn’t a lot of information out there on the particular error messages I was receiving (listed below), so hopefully this explanation will allay some fears. I know I was concerned when I first saw the messages, but thankfully it turned out to be by design.
 
EmcpEsxLogEvent: Error :emcp :MpxEsxPathClaim :Could not claim the path. Status : Failure
ScsiPath: 3815: Plugin 'PowerPath' had an error (Failure) while claiming path
ScsiDevice: 1726: Full GetDeviceAttributes during registration of device 'naa.500…' : failed with I/O error
ScsiDeviceIO: 5172: READ CAPACITY on device 'naa.50…' from Plugin 'NMP' failed. I/O error

ESX, Storage, VMware

EMC PowerShell Toolkit released

October 5th, 2010

While perusing Twitter this evening I stumbled across a few tweets from people I follow (@sakacc, @scott_lowe and @emccorp) about a new EMC PowerShell toolkit. I’m actually a little surprised that it’s taken EMC this long, especially with the success of the VMware PowerCLI. It’s worth noting that EMC’s competitors have had offerings in this space for some time now: Compellent’s PowerShell toolkit has been available since late 2008 and NetApp’s PowerShell offering was announced earlier this year. I’m not going to hold it against EMC though; they are the current kings of innovation, and who can blame them for dropping the ball slightly on the PowerShell front.

So the story goes that the EMC® Storio PowerShell Toolkit (PSToolkit) has been available internally within EMC for a while now. EMC are now looking to increase its exposure by releasing a pre-release version of the EMC PSToolkit for testing and feedback. At present it only consists of a small subset of commands, which you can see in the screenshots below.

EMC_Powershell

There are a few caveats that you need to be aware of for this pre-release version. The requirements below are taken directly from the EMC community post; I have included links for the downloads to make life a little easier.

  • SMI-S Provider 4.1.2 or later versions – EMC PowerLink logon required
  • PowerShell 2.0 CTP3 – CTP3 is quite old and not available anymore, link to 2.0 provided
  • .Net Framework 3.5 – Advice is to utilise Windows Updates to update .NET Framework
  • Windows XP, Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2 for Array management commands. It is known that the 32-bit cmdlet set does not execute properly on Windows 7 and may not execute properly on Windows Vista.

 

Of course you will also need to download the EMC PSToolkit itself; you can find this over on the EMC Community site – https://community.emc.com/docs/DOC-8561. Remember to provide your feedback to EMC, you have their attention! What do you want to see added in here, what do you want to script, what are your use cases?

VI Toolkit / Powershell, VMware

HP Support Matrix – VMware / HP Downloads

September 30th, 2010

I’ve been building a few new HP DL380 G7 servers as ESX 4.1 hosts and was having the usual nightmare finding anything on HP’s website. I was specifically looking for HP Insight Management Agent downloads and I’d been searching for ages when I came across a page called VMware from HP for ProLiant. I’ve still no idea how I stumbled across it, but I’m writing this down now so I never lose it.

The screenshot below, which is from the bottom of the web page, lists all the HP Insight Management Agents for ESX 4.x that I could possibly need.

HP_Support_Matrix_1
As you can see from the next screenshot, the web page covers all versions of ESX. It also has an extensive list of servers covering the BL, DL, ML and SL ProLiant ranges. Click on the tick and it will take you straight to the appropriate page for that server, so you can download everything you may need regardless of the end operating system. So it acts as a support matrix and also as a collection of links to the correct product download pages; can’t ask for much more than that. Good work HP, it’s not often I say that about your website.

 

HP_Support_Matrix

ESX, ESXi, Servers, VMware

vCenter disk performance – SCSI LUN ID display

September 26th, 2010

Something has been bugging me for some time now about vCenter disk performance statistics.  Basically vCenter shows each SCSI LUN with a unique ID as per the following screenshot. When viewed through the disk performance view it’s impossible to tell what is what unless of course you know the NAA ID off by heart!?

image 
I was working on a project this weekend putting a Tier 1 SQL server onto our vSphere 4.0 infrastructure, so insight into disk performance statistics was key. I decided I needed to sort this out and set about identifying each datastore and amending the SCSI LUN ID name; here is how I did it.

Identify the LUN

First of all navigate to the datastore view from the home screen within vCenter

image 

Click on the datastore you want to identify and then select the configuration tab

NAA_Post2 

Click on the datastore properties and then select manage paths

Note down the LUN ID, in this case 2, and also note down the capacity

image
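If you have a lot of datastores to work through, the same mapping can also be pulled from the ESX service console rather than clicking through each one; a hedged sketch (the column layout differs slightly between builds):

# map each VMFS datastore label to its backing device (naa.xxx) and partition
esxcfg-scsidevs -m

# list each device with its paths and runtime names (vmhbaX:C0:TY:LZ), where L is the LUN ID
esxcfg-mpath -b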

Change the SCSI LUN ID

Now navigate to the home screen and select Hosts and Cluster

image

Select a host, change to the configuration tab and then select the connecting HBA

image

At the bottom, identify the LUN using the ID and capacity noted earlier and rename the start of the ID. I chose to leave the unique identifier in there in case it is needed in the future.

image

Now when you look at the vCenter disk performance charts you will see the updated SCSI LUN ID, making it much more meaningful and usable.

NAA_Post6

Raw Device Mappings

If you have Raw Device Mappings (RDMs) attached to your virtual machine then these too are capable of showing up in the vCenter disk performance stats. It’s the same process to change the name of the SCSI LUN ID; however, it’s slightly different when identifying them. To do so carry out the following.

Edit the settings of the VM, select the RDM file, select Manage Paths and then note down the LUN ID for the RDM.  Use this to identify the LUN under the Storage Adapter configuration and change it accordingly.

image

Having made these changes I can now utilise the vCenter disk counters to complement ESXTOP and my SAN monitoring tools. Now I have a full end-to-end view of exactly what is happening on the storage front, invaluable when virtualising Tier 1 applications like SQL 2008.

There are a plethora of metrics you can look at within vCenter; if you would like to understand what they all mean then check out the following VMware documentation.

VMware vCenter SDK reference guide – Disk Counters

Storage, vCenter, VMware

Whats new in vSphere 4.1 Storage

September 2nd, 2010

So I haven’t done a lot of real-time blogging at VMworld this year, as I’ve been busy trying to see and soak up as much as possible. It’s not every day that you get access to the likes of Chad Sakac (VP of the EMC / VMware alliance), Scott Drummond (EMC, ex-VMware performance team) and a whole host of other technology movers and shakers. As you can imagine, I took full advantage of these opportunities and blogging became a bit of a secondary activity this week.

However, I’ve now had time to reflect, and one of the most interesting areas I covered this week was the new storage features in vSphere 4.1. I had the chance to cover these in multiple sessions, see various demos and talk about them with the VMware developers and engineers responsible. There are two main features I want to cover in depth as I feel they are important indicators of the direction that storage for VMware is heading.

SIOC – Storage I/O Control

SIOC had been in the pipeline since VMworld 2009; I wrote an article on it previously called VMware DRS for Storage, slightly presumptuous of me at the time but I was only slightly off the mark. For those of you who are not aware of SIOC, to sum it up again at a very high level, let’s start with the following statement from VMware themselves.

SIOC provides a dynamic control mechanism for proportional allocation of shared storage resources to VMs running on multiple hosts

Though you have always been able to add disk shares to VMs on an ESX host, this only applied to that host; it was incapable of taking account of the I/O behaviour of VMs on other hosts. Storage I/O Control is different in that it is enabled on the datastore object itself, and disk shares can then be assigned per VM inside that datastore. When a pre-defined latency level is exceeded on the datastore, SIOC begins to throttle I/O based on the shares assigned to each VM.

How does it do this? What is happening in the background here? Well, SIOC is aware of the storage array device-level queue slots as well as the latency of workloads. During periods of contention it decides how it can best keep machines below the predefined latency tolerance by manipulating all the ESX host I/O queues that affect that datastore.

In the example below you can see that, based on their disk share values, all VMs should ideally be making the same demands on the storage array device-level queue slots.  Without SIOC enabled that does not happen. With SIOC enabled it throttles back the second ESX host's I/O queue from 24 slots to 12 slots, thus equalising the I/O across the hosts.

Paul Manning (Storage Architect – VMware product marketing) indicated during his session that there is a benefit to turning SIOC on even without amending the default share values.  This configuration immediately introduces an element of I/O fairness across a datastore, as in the example described above and shown below.

SIOC_Fairness_Full
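To make the arithmetic behind that diagram concrete, here is a purely illustrative bit of Python, not VMware's actual algorithm: it simply splits a device queue across hosts in proportion to the total disk shares of the VMs on each host, which reproduces the 24 / 12 slot split described above.

```python
# Toy illustration only: divide a datastore's device queue slots across hosts
# in proportion to the sum of the VM disk shares on each host.
def allocate_queue_slots(total_slots, shares_per_host):
    """Return queue slots per host, proportional to the VM shares it carries."""
    total_shares = sum(sum(vm_shares) for vm_shares in shares_per_host.values())
    return {host: round(total_slots * sum(vm_shares) / total_shares)
            for host, vm_shares in shares_per_host.items()}

# Two VMs at 1000 shares on esx01 versus one VM at 1000 shares on esx02:
# during contention esx01 ends up with twice the slots of esx02.
print(allocate_queue_slots(36, {"esx01": [1000, 1000], "esx02": [1000]}))
# {'esx01': 24, 'esx02': 12}
```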

So this functionality is now available in vSphere 4.1 for Enterprise Plus licence holders only.  There are a few immediate caveats to be aware of: it is only supported with block-level storage (FC or iSCSI), so NFS datastores are not supported. It also does not support RDMs or datastores constructed of extents; only a 1:1 LUN-to-datastore mapping is supported. I was told that extents can cause issues with how the latency and throughput values are calculated, which could in turn lead to false-positive I/O throttling, so they are not supported yet.
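For completeness, SIOC can also be switched on programmatically rather than through the client. The sketch below is hedged and untested: it assumes the StorageResourceManager API and its IORMConfigSpec type are exposed in pyVmomi as shown, and the 30 ms congestion threshold is simply the documented default, so check the SDK reference before relying on it.

```python
# Hedged pyVmomi sketch (untested): enable Storage I/O Control on a datastore.
# Assumes the vSphere 4.1 StorageResourceManager API is exposed as below.
from pyVmomi import vim

def enable_sioc(si, datastore, congestion_threshold_ms=30):
    """Enable SIOC on the given datastore with the supplied latency threshold."""
    spec = vim.StorageResourceManager.IORMConfigSpec()
    spec.enabled = True
    spec.congestionThreshold = congestion_threshold_ms
    # Returns a task object that can be monitored for completion
    return si.content.storageResourceManager.ConfigureDatastoreIORM_Task(datastore, spec)
```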

It’s a powerful feature which I really like the look of. I personally worry about I/O contention and the lack of control I have over what happens to those important mission critical VM’s when that scenario occurs. The “Noisy Neighbour” element can be dealt with at CPU and Memory level with shares but until now you couldn’t at a storage level. I have previously resorted to purchasing EMC PowerPath/VE to double the downstream I/O available from each host and thus reduce the chances of contention.  I may just rethink that one in future because of SIOC!

Further detailed information can be found in the following VMware technical documents

SIOC – Technical Overview and Deployment Considerations

Managing Performance Variance of applications using SIOC

VMware performance engineering – SIOC Performance Study

VAAI – vStorage API for Array Integration

Shortly before the vSphere 4.1 announcement I listened to an EMC webcast run by Chad Sakac.  In this webcast he described EMC's integration with the new vStorage API, specifically around offloading tasks to the array. So what does all this mean, and what exactly is being offloaded?

VAAI_Features

So what do these features enable? Let's take a look at them one by one.

Hardware-assisted locking, as described above, provides improved LUN metadata locking.  This is very important for increasing VM-to-datastore density.  Take the example of VDI boot storms: if only the blocks relevant to the VM being powered on are locked, then you can have more VMs starting per datastore.  The same applies in a dynamic VDI environment where images are being cloned and then spun up; the impact of busy cloning periods, i.e. first thing in the morning, is mitigated.

The full copy feature also has an impact in the dynamic VDI space, with cloning of machines taking a fraction of the time because the ESX host is not involved. What I mean by that is that when a clone is taken today, the data has to be copied up to the ESX server and then pushed back down to the new VM storage location.  The same occurs when you do a Storage vMotion; doing it without VAAI takes up valuable I/O bandwidth and ESX CPU clock cycles. Offloading this to the array avoids that use of host resources and in tests has resulted in a 99% saving on I/O traffic and a 50% saving on CPU load.

In the EMC labs a test of Storage vMotion was carried out with VAAI turned off; it took 2 minutes 21 seconds.  The same test was run again with VAAI enabled, and this time the Storage vMotion took 27 seconds to complete. That is roughly a 5x improvement, and EMC have indicated that they have seen a 10x improvement in some cases. Check out this great video which shows a Storage vMotion and the impact on ESX and the underlying array.
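If you want to confirm whether your array is actually advertising these offloads to a host, something like the following should do it. It is a hedged, untested pyVmomi sketch that assumes the vStorageSupport property added to SCSI LUNs in the vSphere 4.1 API; the report_vaai_support helper name is mine.

```python
# Hedged pyVmomi sketch (untested): report the hardware acceleration (VAAI)
# status each SCSI device advertises on a host.
from pyVmomi import vim

def report_vaai_support(host):
    """Print the VAAI support status reported for each SCSI LUN on the host."""
    storage = host.configManager.storageSystem
    for lun in storage.storageDeviceInfo.scsiLun:
        # vStorageSupport was added in the vSphere 4.1 API; fall back gracefully
        status = getattr(lun, "vStorageSupport", "unknown")
        print("%s: %s" % (lun.canonicalName, status))
```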

There is also a fourth VAAI feature which has been left in the vStorage API but is currently unavailable; Mike Laverick wrote about it here. It's a Thin Provisioning API, and Chad Sakac explained during the group session that its main use is for thin-on-thin storage scenarios. The vStorage API will in future give vCenter insight into array-level over-provisioning as well as VMware over-provisioning.  It will also be used to proactively stun VMs as opposed to letting them crash, as currently happens.

As far as I knew, EMC was the only storage vendor offering array compatibility with VAAI. Chad indicated that they are already working on VAAI v2, looking to add additional hardware offload support as well as NFS support. It would appear that 3PAR offer support, so that kind of means HP do too, right? Vaughn Stewart over at NetApp also blogged about their upcoming support of the VAAI; I'm sure all storage vendors will be rushing to make use of this functionality.

Further detailed information can be found at the following locations.

What does VAAI mean to you? – Chad Sakac EMC

EMC VAAI webcast – Chad Sakac EMC

Storage DRS – the future

If you’ve made it this far through the blog post then the fact we are taking about Storage DRS should come as no great surprise.  We’ve talked about managing I/O performance through disk latency monitoring and talked about array offloaded features such as storage vMotion and hardware assisted locking. These features in unison make Storage DRS an achievable reality.

SIOC brings the ability to measure VM latency, giving a set of metrics that can be used for Storage DRS.  VMware are planning to add capacity as a second input to the Storage DRS algorithm and then aggregate the two metrics for placement decisions.  This will ensure a Storage vMotion of an underperforming VM does not lead to capacity issues, and vice versa.
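To illustrate the idea (and only the idea — this is a toy, not VMware's algorithm), a placement decision that blends the two metrics might look something like this: a latency score derived from the SIOC congestion threshold is weighed against free capacity, so neither dimension is optimised at the expense of the other.

```python
# Toy illustration only: blend observed latency with free capacity when
# choosing a datastore, so fixing performance never creates a space problem.
def pick_datastore(datastores, latency_weight=0.5):
    """Choose the datastore with the best blend of low latency and free space."""
    def score(ds):
        latency_score = 1.0 - min(ds["latency_ms"] / 30.0, 1.0)  # 30 ms threshold
        capacity_score = ds["free_gb"] / ds["capacity_gb"]
        return latency_weight * latency_score + (1 - latency_weight) * capacity_score
    return max(datastores, key=score)

print(pick_datastore([
    {"name": "Tier1-DS01", "latency_ms": 28, "free_gb": 900, "capacity_gb": 1024},
    {"name": "Tier1-DS02", "latency_ms": 8,  "free_gb": 300, "capacity_gb": 1024},
])["name"])
# Tier1-DS02 wins: its lower latency outweighs the smaller amount of free space.
```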

Hardware-assisted locking in VAAI means we don't have to be as concerned about the number of VMs in a datastore, something you have to manage manually at the moment.  The removal of that limitation means we can automate better, a Storage DRS enabler if you will.

The improved Storage vMotion response due to VAAI hardware offloading means that the impact of Storage DRS is minimised at the host level. This is one less thing for the VMware administrator to worry about and hence smooths the path for Storage DRS adoption.  As you may have seen in the Storage vMotion video above, the overhead on the backend array also appears to have been reduced, so you're not just shifting the problem somewhere else.

For more information I suggest checking out the following (VMworld 2010 account needed)

TA7805 – Tech Preview – Storage DRS

Summary

There is so much content to take in across all three of these subjects that I feel I have merely scratched the surface.  What was abundantly clear from the meetings and sessions I attended at VMworld is that VMware and EMC are working closely to bring us easy storage tiering at the VMware level.  Storage DRS will be used to create graded / tiered data pools at the vCenter level, pools of similar datastores (RAID level, disk type). Virtual machines will be created in these pools, auto-placed and then moved about within that pool of datastores to ensure capacity and performance.

In my opinion it’s an exciting technology, one I think simplifies life for the VMware administrator but complicates life for the VMware designer. It’s another performance variable to concern yourself with and as I heard someone in the VMworld labs comment “it’s a loaded shotgun for those that don’t know what they’re doing”.  Myself, I’d be happy to use it now that I have taken the time to understand it; hopefully this post has made it a little clearer for you to.

Gestalt-IT, Storage, VMware, vmworld