Archive for the ‘VMware’ Category

Hadoop virtualisation – Vendor supported or not?

October 27th, 2014

I’ve had a number of discussions recently around the virtualisation of Hadoop, some of them with customers, some internal and some with my colleagues at VMware and Pivotal. As always, these conversations get me thinking, and that in turn has spurred me on to come out of hiding and write my first blog post in a while :-)

This train of thought was sparked by a conversation with someone who was looking at introducing Hadoop into their environment. The primary use case was to leverage unstructured data alongside their traditional legacy EDW solution. The conversation centred on the desire to virtualise Hadoop to facilitate quicker provisioning for POC / test purposes, but there was also a desire to potentially do the same with a production Hadoop environment from a cost perspective. Sounds pretty straightforward if you ask me; however, the main fly in the ointment appears to be the reluctance of some Hadoop vendors to give their blessing to, or indeed commit to supporting, a virtualised approach.

So, why are Hadoop vendors so reluctant?

There could be many reasons why a Hadoop vendor would be reluctant to embrace virtualisation, or indeed any abstraction (HDFS from an appliance, or Linux containers). It could simply be a case of wanting to apply the “keep it simple, stupid” approach and ensuring success by building it the way it has always been built in the past. Or it could be that they’ve simply never tested it in a virtual environment, and the unknown is bad in their eyes. Perhaps it’s something else I haven’t even thought of!?

I have my own opinion, of course: I firmly believe the reluctance to virtualise Hadoop is because it fundamentally disrupts today’s Hadoop licensing models and as a result has a negative impact on the vendors’ revenue streams (per-CPU or per-host licensing). This is in essence the same kind of pushback that EMC was getting previously around supporting Hadoop on EMC’s Isilon scale-out NAS (per-host or per-TB licensing).

Let me give you a quick example of how both disrupt the Hadoop vendor.

Say I have built a Hadoop cluster of 100 commodity servers to provide my required 0.5PB of storage for HDFS (e.g. 5TB per server x 100 = 0.5PB). Despite having 100 servers, I actually only utilise approximately 20% of them for Hadoop compute tasks like MapReduce; yet if I want to add more storage I have to add more servers, including compute, even though I’m never going to use that extra compute power. This model fundamentally doesn’t scale well from a data centre footprint point of view.

So, first disruption: let’s say I separate the compute and storage elements of Hadoop. I introduce 0.5PB of EMC Isilon to provide the HDFS layer. Instantly I have reduced my server footprint from 100 servers to 20 physical servers plus the EMC Isilon for the storage requirement. First major advantage: if I need to scale either compute or storage I can do it independently of the other. Second major advantage: I go from 33% capacity efficiency (Hadoop keeps 3 copies of every file) to up to 80% capacity efficiency (N+1, 2, 3 or 4 protection).
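To make that efficiency claim concrete, here’s a quick back-of-the-envelope sketch (my own illustration, not the vendor’s numbers): HDFS’s default three-way replication versus an N+M protection scheme of the kind Isilon uses.

```python
# Usable fraction of raw capacity under the two protection schemes.

def hdfs_usable_fraction(replication_factor=3):
    """HDFS keeps `replication_factor` copies of every block."""
    return 1.0 / replication_factor

def n_plus_m_usable_fraction(data_units, parity_units):
    """N+M protection: N data units protected by M parity units."""
    return data_units / (data_units + parity_units)

print(f"HDFS 3x replication: {hdfs_usable_fraction():.0%} usable")
print(f"N+M at 16+4:         {n_plus_m_usable_fraction(16, 4):.0%} usable")
```

Put another way: to hold 0.5PB of actual data, triple-replicated HDFS needs roughly 1.5PB of raw disk, while an 80%-efficient layout needs only around 0.625PB.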


Second disruption: now let’s say I also introduce virtualisation of the Hadoop compute nodes into the solution. If I achieve a VM-to-physical-server ratio of 2:1, I now only need 10 physical servers plus the EMC Isilon HDFS storage. If I go for a more aggressive 4:1 ratio, I’d only need 5 physical servers plus the Isilon storage instead of the original 100 physical servers. I’ve now got the ability to easily scale my compute and storage layers independently of each other and have vastly shrunk my data centre footprint, which is a huge cost saving in itself.
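The footprint arithmetic from the two disruptions can be sketched as follows (using the post’s own illustrative numbers):

```python
import math

def hosts_needed(compute_nodes, vms_per_host):
    """Physical hosts required once each Hadoop compute node becomes a VM."""
    return math.ceil(compute_nodes / vms_per_host)

COMPUTE_NODES = 20  # the ~20% of the original 100 nodes doing real compute work

for ratio in (1, 2, 4):
    print(f"{ratio}:1 consolidation -> {hosts_needed(COMPUTE_NODES, ratio)} hosts + Isilon for HDFS")
```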


Theoretically, we’ve just shaved 90 to 95 servers off the original solution. Not only have we saved on physical footprint, but we’ve also taken 90 to 95 servers’ worth of annual licensing away from a Hadoop vendor!! I can see why they wouldn’t be 100% happy about a customer implementing virtualisation and EMC Isilon for Hadoop.

Now granted, the above is a very simplistic example. However, if you’re looking at deploying Hadoop, I’m sure you can see how virtualisation and introducing Isilon for HDFS could massively reduce your data centre footprint and save on licensing costs. You of course need to build out a suitable TCO analysis for the various options; I encourage you to do so, as I bet it works out quite favourably compared to the original 100 nodes.
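As a starting point for that TCO exercise, here is a trivial sketch of the licensing line item alone. The per-node figure is a made-up placeholder, so substitute your vendor’s actual quote, and remember a real comparison must also include hardware, power, and the Isilon cost itself.

```python
LICENCE_PER_NODE = 4_000  # hypothetical annual per-node Hadoop subscription cost

def annual_licence_bill(licensed_nodes, per_node=LICENCE_PER_NODE):
    """Annual Hadoop licensing spend for a given number of licensed nodes."""
    return licensed_nodes * per_node

scenarios = {
    "100-node DAS cluster": 100,
    "20 physical + Isilon": 20,
    "4:1 virtualised + Isilon": 5,
}
for name, nodes in scenarios.items():
    print(f"{name}: {annual_licence_bill(nodes):,} per year")
```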

What about performance?

One of the other main areas where the Hadoop vendors appear to be focusing their FUD (Fear, Uncertainty and Doubt) is the performance of virtualised Hadoop. Now, I don’t disagree that depending on what you’re doing with Hadoop, performance may well be the key requirement. If it is very important to you that your Hadoop is performant and that your Hadoop vendor will support it without argument, then physical Hadoop compute nodes may be the best way for you to go.

That said,  if you deem flexibility and ease of scaling Hadoop (storage or compute) as your key requirement then a different approach may be needed. This is where the separation of compute and storage layers adds immediate benefit and where virtualisation of the Hadoop compute nodes drives increased flexibility.

I should add that I personally don’t think you necessarily have to sacrifice compute performance to gain that increased flexibility. I would highly recommend reading the VMware white paper on Virtualised Hadoop Performance with vSphere 5.1. In it, VMware conduct a very in-depth comparison between the performance of native Hadoop and 100% virtualised Hadoop instances, including HDFS storage on the servers’ DAS. The chart below compares jobs run on the physical nodes against the same jobs run at 1:1, 2:1 and 4:1 VM-to-physical-server ratios.

What is apparent from the chart is that in the various scenarios tested, the virtualised instances aren’t far away from native performance (depicted by the 1.0 on the Y axis). In fact, in the TeraSort test (sorting generated data and writing out the results) the 4:1 configuration actually performed better, which is quite interesting but may be down to test harness subtleties.

There is only so much you can read into these kinds of white papers. In my view the takeaway is: yes, you should definitely consider virtualisation of Hadoop. On the flip side, it’s essential that you do your own testing to ensure that your virtualised Hadoop solution delivers against your requirements, whatever they may be.

What about Vendor support?

Support for virtualised Hadoop is another pushback I’ve heard on a few occasions, and I have to say I understand that position. Before my time at EMC I was a Solutions Architect implementing solutions at a UK-based investment fund manager. One of my team’s main mantras when designing solutions was to always ensure supportability, so when I hear a customer complain about lack of support for a solution I often have to sympathise with the position they find themselves in. They want to do something, but can’t, because it compromises the supportability of the business application.

Now, this lack of support is a situation I’ve seen before; it used to be virtualising Oracle that got the pushback. Oracle wouldn’t support the virtualisation of their database, as consolidation ultimately eroded their licence revenues (per-CPU licensing). I personally think we’re simply seeing a similar thing occurring now with Hadoop, and customers are therefore wary of building a solution that the Hadoop vendors won’t support. I get that; doesn’t mean it’s acceptable though!

What can I do about it though?

We at EMC have seen a bit of this sort of thing recently, specifically with Isilon for HDFS. Hadoop vendors weren’t keen on losing revenue (as per my second example earlier) and weren’t really willing to support Isilon or sanction its usage, but customers were keen to use Isilon instead of the traditional DAS approach. Now, EMC obviously has alliances with all the major Hadoop vendors, but that sadly doesn’t necessarily constitute support.

Now some of you may have picked up on the recent joint announcement between Cloudera and EMC around supporting Cloudera Enterprise Hadoop and Isilon for HDFS.  This is a great step forward in my opinion and has primarily come about following concerted pressure from Cloudera customers in the global financial industry.  Their sheer desire to leverage an enterprise grade platform for HDFS in tandem with Cloudera’s Enterprise Hadoop capabilities resulted in Cloudera having to agree to work jointly with EMC to build a supported solution.

Never underestimate your power as a customer,  most of the great things we come up with at EMC come about from our interactions with customers and them telling us what they need or want.  You shouldn’t be afraid to ask your vendors for the support you need for your business.

Cloudera Enterprise Hadoop and EMC Isilon for HDFS

Very briefly, the main plus point from a Cloudera / EMC point of view is that Isilon supports multi-protocol access to the same data, eradicating the need for major Extract, Transform and Load (ETL) activity. The same data can be accessed via SMB, NFS, HTTP, Swift, REST and HDFS 1.0 and 2.0 (2.2, 2.3 and 2.4 coming very shortly), which allows you to put data into Isilon via one method (e.g. logs from an application) and then consume it through another (e.g. HDFS). My personal plus point is that you can use Isilon HDFS with Cloudera, Pivotal HD, Hortonworks or Apache Hadoop, as you’re not tied to a traditional DAS stack :-)
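As a hypothetical illustration of that multi-protocol flow (the hostname, export path and HDFS port below are placeholders; substitute whatever your Isilon cluster actually presents):

```
# Land application logs on an Isilon NFS export...
mount -t nfs isilon.example.com:/ifs/data /mnt/isilon
cp /var/log/myapp/*.log /mnt/isilon/logs/

# ...then read the very same files through Isilon's HDFS interface,
# with no ETL step in between:
hdfs dfs -ls hdfs://isilon.example.com:8020/logs/
```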


Some of the related links to that announcement are included below, with the Cloudera CTO blog post a particularly good read.

Cloudera Enterprise Hadoop and EMC Isilon supported solution
Cloudera’s CTO Blog Post on joint EMC Isilon solution
Cloudera Enterprise Hadoop and EMC Isilon solution overview

In Summary

So for those customers out there who want to virtualise Hadoop but want to ensure they are fully supported, I implore you to put the pressure on your Hadoop vendor and your VMware account team.  I know for a fact that VMware have alliance teams working with the Hadoop vendors,  but it needs real customer pressure on the Hadoop vendors to fundamentally change the game.  It’s what happened with Oracle on VMware,  it’s what happened with Cloudera and Isilon HDFS and it needs to happen for Hadoop on VMware as well.

It’s also worth noting that this isn’t just a Hadoop problem; we’re going to end up in the same situation with loads of the newly popular commercial open source variants. Think of things like MongoDB (NoSQL) or DataStax (Cassandra DB): it’s only a matter of time before large enterprise customers want to virtualise these technologies or use enterprise storage platforms with them.

We at EMC aim to offer customers choice, so they are free to mix and match our technologies with whatever systems they want to put on top of them. However, I think we as a mainstream vendor need to do more work partnering with and certifying our products against the ISV ecosystem. I’m not saying we do this for all ISVs; we’ll need to be selective, but I think we’ll reap the benefit of having done the work with Cloudera and Isilon. We should ensure that it doesn’t stop there: we should listen to our customers and aim to provide supported ISV and EMC stacks where needed.


EMC, Hadoop, Storage, VMware

Blending IaaS, IaaS+ and PaaS to deliver today’s IT

March 2nd, 2014

So, I have to start off by thanking Brian Gracely (a fellow EMCer) for starting me off on this runaway train of thought. A few weeks ago I read his article entitled “Will IaaS+ happen before PaaS”. The question posed in the title is very apt, and it only reinforces the fact that we work in such a rapidly changing IT world, one where the battle for market and mind share continues to alter the landscape on a monthly if not daily basis.

IaaS, PaaS, SaaS…   Reality!!

I remember when I first started at EMC as a vSpecialist: we were all about building private Infrastructure as a Service (IaaS) capabilities. Virtualise everything on VMware, utilise vCloud Director to create pools of resource, securely divide them between tenants, deliver a self-service catalogue through a portal and conduct chargeback against all of it. We also talked about extending that private cloud to the public cloud through a federated hybrid cloud model, but ultimately, when you strip away the bells and whistles, what we were all talking about from an end-user perspective was the basic provisioning of VMs up to the OS layer.

Roll forward three years and things have moved up a gear significantly. Platform as a Service (PaaS) has become the new hot topic, and the ability to write applications without worrying about the underlying infrastructure is well and truly upon us. Pivotal Cloud Foundry, Red Hat OpenShift, AWS Elastic Beanstalk, Microsoft Azure and Salesforce’s Heroku are some of the key players in today’s PaaS market. Some of these PaaS offerings are open and can sit on top of multiple cloud infrastructures; others are more proprietary and locked in. All of them, however, are trying to capture a share of the customers, who broadly speaking fall into the following categories.

Innovate or die — PaaS offers a way to “leapfrog” the competition with the ability to quickly integrate the latest innovations in software development and scale them quickly. Customers get that pie-in-the-sky seamless experience, which is a win for everybody.

Agility is key — PaaS is a strong entry point to embracing the DevOps mindset with minimal investment, helping organizations work toward agile development. When you don’t have to worry about the underlying infrastructure, it becomes a lot easier to achieve continuous deployment and quick, responsive feature updates. Developers don’t need to handle operations and operations don’t need to know how to code in order to take advantage of a PaaS.

Build once, deploy anywhere — This is relatively specific to the open source players, but the ability to build an application on a common platform means that you can write it once and deploy it on any infrastructure you’d like. In other words, if you build an application on Cloud Foundry, that application will run the exact same way on any other instance of Cloud Foundry, with the implication that you can ideally move from the public cloud to the private cloud or between public cloud providers with lessened fear of lock-in.

— Dekel Tankel, Director of Pivotal’s Cloud Foundry Product Team, quoted in CiteWorld
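The “build once, deploy anywhere” point is easiest to see with the Cloud Foundry CLI: the same application artefact is pushed unchanged to two different Cloud Foundry endpoints. The API URLs below are placeholders for whatever private and public instances you have access to.

```
# Push to a private Cloud Foundry instance...
cf api https://api.private-cf.example.internal
cf login
cf push myapp

# ...then point the CLI at a public provider running Cloud Foundry
# and push the identical app, no code changes required:
cf api https://api.public-cf.example.com
cf login
cf push myapp
```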

The reality (in my opinion)… well, it’s a mixed bag, of course. Today I still see customers and IT teams striving to provide basic IaaS for their internal users. In truth, developers wanted that simple capability from internal IT over two years ago; it is one of the main reasons that public cloud providers such as AWS are what they are today. They offered a viable and quick alternative for infrastructure while internal IT teams were either still trying to work out how to do IaaS themselves or were simply asleep at the wheel, not realising that some very real competitors existed out there.

PaaS is an exciting shift in the industry; it’s allowing businesses to move up the stack, forget about the infrastructure and concentrate on building the applications that differentiate them from their competitors. It’s still early days for PaaS, though the momentum is building, not least with the Cloud Foundry Foundation announcement this week, which saw some of the industry’s heavy hitters commit to developing the Cloud Foundry project.

I certainly don’t think anyone can argue with the concepts of PaaS, or the fact it will rapidly take share in the coming years as development methods change. I do, however, feel that it won’t suit everyone immediately. It’s great for greenfield start-ups who are not constrained by legacy IT and want to operate in the public cloud, but how will it be adopted by existing businesses? How quickly will it be adopted in the enterprise? All enterprise customers should be taking a look at this today, working out how they can integrate PaaS into their IT function to fundamentally change how they manage the software development lifecycle.

I personally think we’re at another one of those interesting inflexion points. The business and the developers that work for them want to move ever faster. Internal IT has maybe only just got to grips with IaaS and can now quickly service them with the VMs they want. However, the developers have moved on and already want more: they want to consume services and not just VMs. They look to the public cloud and see rich services being layered on top of cloud infrastructure: messaging services, database as a service, in-memory capabilities, etc. The developers and the business are again demanding more than internal IT is currently delivering. Sounds familiar, doesn’t it! Sounds like the IaaS story from a few years back all over again.

Delivering IaaS, IaaS+ and PaaS

The question I asked myself after reading Brian’s article was: what are EMC doing to assist our customers to deliver in this crazy, fast-moving IT world of ours? When I say “customers” I mean that in the broadest sense; it could be assisting small IT departments, enterprise customers or service providers looking to deliver services back to businesses and public consumers.

At EMC we’ve been talking a lot about the 1st, 2nd and 3rd platforms. Some of you may have seen the picture below before; it sums up today’s modern IT world very well, I think. I certainly speak to customers who operate somewhere in all three of these platforms, and I can safely say that I don’t see that model changing overnight. However, I do see companies striving hard to leave the legacy behind and leapfrog straight into the 3rd platform.


If we break it down, what businesses are technically looking to achieve today is optimising the 2nd platform while at the same time enabling the 3rd platform. This is where I think a blended model of IaaS, IaaS+ and PaaS will cover the majority of use cases. IaaS and IaaS+ will help optimise legacy and new 2nd-platform applications and change how they are delivered. IaaS+ and PaaS will find themselves used for new application requirements in both the 2nd and 3rd platforms.

A picture speaks a thousand words, so I’ve attempted to draw out below what’s in my head on this subject. Thanks to Eric Wright for the graphic inspiration in his recent blog post. In theory, by mixing traditional IaaS with PaaS, and using that combined stack to layer services on top of IaaS, whether PaaS-delivered (MemDB aaS or Messaging aaS) or more traditionally delivered app blueprints (DBaaS straight on top of IaaS), we eventually arrive at a hybrid model that caters to lots of different requirements.


So if we then take that one step further: what does the EMC federation have to offer in this space to help customers achieve this blended model of IaaS, IaaS+ and PaaS? I came up with the diagram below. Pretty busy, isn’t it!!


Let’s break it out from the top down.

Services – There is a plethora of services that could theoretically be consumed. Some you may put together yourself (via application blueprints or custom buildpacks for Pivotal CF); others may be pre-packaged for you (either blueprints or packaged services). They may be consumed on PaaS (buildpacks) or deployed straight onto IaaS (blueprints). Pivotal services such as GemFire, RabbitMQ and tc Server are some of the offerings that can be deployed on either today.

vCloud Automation Center – vCAC, which has recently had the vFabric Application Director functionality folded in, can be utilised to deploy VMs, operating systems and application blueprints straight onto physical infrastructure, multiple hypervisors and multiple cloud platforms. I’ve included Puppet Labs on the diagram, as the integration with vCAC has greatly expanded the capability for service deployment, with vCAC being able to take advantage of the Puppet module library. I think once VMware get vCAC fully plugged into Pivotal CF and the vCloud Hybrid Service (hopefully neither is too far away) it will be an exceptionally powerful tool for automation, whether in a pure VMware or a heterogeneous cloud environment.

Pivotal Cloud Foundry – The open source Platform as a Service offering that is today compatible with multiple cloud environments (VMware, OpenStack, AWS and, I’m hoping, vCloud Hybrid Service soon). It currently comes in an open source and a Pivotal enterprise flavour; other custom variations will undoubtedly appear, and IBM Bluemix is a recent PaaS based on Cloud Foundry that I’ve been reading about. Cloud Foundry is creating a real stir in the IT world at the moment, with a lot of the IT heavyweights throwing their backing behind this open source project, prompting comments such as the one below.

“Cloud Foundry is on an absolute tear. The number of companies that have bought into the initiative, the amount of code being contributed, the customer wins that ecosystem members are enjoying suggest that Cloud Foundry is preeminent among all the open source PaaS initiatives.”

— Ben Kepes, Forbes

VMware Software Defined Data Centre – I think everyone knows the story with this; if you don’t, you really should :-) VMware, in combination with its partners (EMC included), is working hard on delivering the software-defined data centre. The basic software-defined compute layer is where VMware earned its stripes; that area of the SDDC needs no introduction. Software-defined networking became a mainstream topic with the VMworld 2013 announcement of VMware NSX. With NSX, VMware are bringing the same consolidation, control and flexibility benefits to the network world as they did to the server world. Granular network policies that can follow a VM around regardless of its location (private or public clouds) are a key factor in enabling widespread SDDC and hybrid cloud adoption.

The last element is software-defined storage, something VMware and EMC are working very hard on. EMC announced our ViPR offering at EMC World in 2013, a fundamental change in thinking around storage. Abstracting the control plane and enabling a single point of management for EMC storage, other vendors’ storage, as well as commodity and cloud storage, was a major change of direction for EMC. Providing software-only data services is another fundamental shift in mindset, but an essential one as the storage world slowly becomes more commoditised. Today EMC offer HDFS and object data services through ViPR; in the future there will be a lot more, as EMC focus efforts on producing more abstracted software features. VMware have also gone down the commodity route with VSAN, still in beta but due for release soon; it will prove popular with those VMware-only shops who want to consume commodity server direct-attached storage.

VCE vBlock – The leading converged infrastructure stack, created as a joint venture between EMC, Cisco, VMware and Intel. Converged infrastructure stacks are the easiest means of quickly deploying private cloud infrastructure within your own data centre. Built from best-of-breed components, available in multiple T-shirt-sized offerings, fully integrated, built in the factory, certified and ready to roll in 45 days or less, it is the perfect way to consume infrastructure and underpin a blended IaaS, IaaS+ and PaaS model.

vCloud Hybrid Service (vCHS) – This week saw VMware announce their new vCHS service in Europe, based in the UK. This is a key announcement for VMware and one that is sure to get customers excited. Lots of customers run VMware in their own private cloud deployments today, and enabling those customers to extend their internal VMware environments while using the same automation and monitoring tooling will be very appealing. The capability to move your VMs back and forth between an on-prem VMware private cloud and vCHS, or simply to deploy your existing VM templates directly into a public cloud offering without upheaval, is a huge plus for vCHS. With IPsec VPN or direct-connect network capabilities on offer, vCHS offers the simplest means of extending your data centre to consume as required. Once the offering beds in and a few more bells and whistles are added (I’m thinking Pivotal CF PaaS here) it will be an even more compelling offering and a large part of VMware’s future business.


This has been a very long post, written over a number of weeks and about a moving target (I’m thinking of the Cloud Foundry Foundation announcement and the vCHS launch here). It basically comes down to businesses and developers wanting to constantly innovate and do more, and to IT struggling to keep up with that insatiable appetite and deliver what they want. I believe IT departments want to help them do that, but they have to balance delivery against all the challenges that come with existing IT legacy and the requirement to maintain secure and compliant environments, as per the rules and regulations that govern their business.

I believe the answer is a mixture of platform 2 and platform 3 solutions: a mixture of legacy and new-world applications, and a mix of legacy IT infrastructure, IaaS, IaaS+ and PaaS, as I outlined above. With the work going on at EMC + VMware + Pivotal, it’s a no-brainer that this particular federation of companies is in a perfect position to help businesses, developers and IT infrastructure teams on that journey to innovate and change how they do what they do!

EMC, Gestalt-IT, Pivotal, VMware

Configuring VASA with EMC arrays – CLARiiON, VNX and VMAX

December 20th, 2011

Since last week I have seen a number of questions around VASA and how it is configured for EMC arrays. I got a couple while doing the Q&A for EMC’s recent “VNX best practices with vSphere 5” live WebEx, and the day after I was asked by Cormac Hogan over at VMware to take a look at a question asked on the VMware blog site. So I’ll admit now, I hadn’t really had a chance to look at VASA in depth; shame on me! However, I thought this was as good a chance as any to learn, and that I would do a post on how to configure it for both EMC’s VNX and VMAX systems. A big thank you to my EMC colleague Garrett Hartney for providing both his time and an environment in which we could set this up.

EMC’s VASA implementation

For those not familiar with VASA, I strongly suggest reading this article to familiarise yourself with the what and why around this new VMware API. For those who just want the short version: VASA is essentially an API that allows storage vendors to publish array capabilities up to vCenter. This allows VMware admins to see characteristic information about the storage underpinning their datastores, and also allows them to use VMware storage profiles to enforce VM storage placement policy compliance, e.g. this SQL VM will always sit on performance disks.

The table below shows how EMC currently publishes its array capabilities through VASA 1.0 up to vCenter.


An example of how this looks for a datastore when pulled through to vCenter can be seen in the screenshot below.


Core Components and architecture

Regardless of which array you are connecting to, EMC’s implementation of VASA is done using Solutions Enabler and something known as the SMI-S provider. Together these two components act as a middle tier between vCenter and the different arrays being queried. It’s worth pointing out that the SE / SMI-S server supports in-band (SAN-attached) and out-of-band (network) connectivity for VNX and CLARiiON arrays, and in-band connectivity only for Symmetrix arrays. The architecture of the setup is shown in the diagram below.


SE / SMI-S server deployment

To get started with VASA you will need to download SMI-S version 4.3.1, which comes pre-bundled with Solutions Enabler 7.3.1. The software can be downloaded from the link below and comes with options for 32- and 64-bit Windows as well as Linux. For full details on OS support, see the release notes for SMI-S 4.3.1 (PowerLink account required for downloads).

Home > Support > Software Downloads and Licensing > Downloads S > SMI-S Provider

For my own deployment I am using a 64-bit Windows 2008 R2 server to host the core components. The server has been built as standard, with no special configuration required.

  • First of all, deploy Solutions Enabler and SMI-S on your designated server: locate the installation media and run the install package.


  • When presented with the welcome screen click next.


  • Leave the install location as default and click next.


  • When prompted select the array provider only and click next.


  • Review the installation settings and space requirements and click next to install.


  • Once the install is complete, click finish.


  • Configure the environment variables on the server to include the SYMCLI path


  • Locate the following file and open it for editing


  • Locate the line below and change the value from 100 to 1200, save and exit the file.


  • Navigate to the services console and restart the ECOM service


  • Navigate to the location shown below and run testsmiprovider.exe


  • The next step is to connect to the SMI-S provider. I used the defaults, which are shown in the square brackets; just hit enter on each line to use the default.


  • Once connected you will see the following at the command prompt


  • Type the dv command at the prompt to display version information about the SMI-S provider install. This basically proves that everything is working as expected.


  • That concludes the basic installation and configuration of the SMI-S and Solutions Enabler server. Now all we need to do is add in the storage arrays we want displayed to vCenter via the VASA API.
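For reference, since the screenshots don’t reproduce here, the testsmiprovider connection prompts look roughly like the below on a default Windows install. The path and default values are recalled from an SMI-S 4.x setup, so treat them as approximate rather than authoritative:

```
C:\Program Files\EMC\ECIM\ECOM\bin> testsmiprovider.exe

Connection Type (ssl,no_ssl) [no_ssl]:
Host [localhost]:
Port [5988]:
Username [admin]:
Password [#1Password]:
Log output to console [y|n (default y)]:
Log output to file [y|n (default y)]:
Logfile path [Testsmiprovider.log]:
```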


SUPPORTED – CLARiiON Navisphere Release 28, 29 & 30, VNX Unisphere Release 31
(SMI-S supports many earlier CLARiiON releases but vSphere 5 does not)

Earlier I mentioned that CLARiiON and VNX arrays can be added to SMI-S in-band or out of band. The most common method, and the one I intend to use here, is to connect out of band, i.e. across the network. If you do want to connect in-band with a direct SAN connection, then check out page 39 of the SMI-S v4.3.1 release notes.

One major prerequisite for connecting CLARiiON and VNX is that the user account used to connect to the arrays must be an administrator login with global scope. At this point you should hopefully still be connected to the testsmiprovider.exe application used earlier; if you are not, then please repeat the command-line steps shown above to reconnect.

  • Once connected successfully, type the command addsys to begin adding the array.
  • Enter the IP address / DNS name for Storage Processor A and hit enter.
  • Enter the IP address / DNS name for Storage Processor B and hit enter.
  • You can continue to add additional arrays here, or hit enter to move to the next step.
  • Accept the default for the address type, i.e. IP/Nodename, by hitting enter.
  • Continue answering this question for each storage processor / array added.
  • Enter the global-scope administration user account for connecting to the arrays.
  • Enter the password for the administration account being used.
  • You will then see the message +++++ EMCAddSystem ++++


  • After a short while you will see the output from the addsys operation; as you can see below, the output is 0, which indicates success. The details of the system added are then listed.


  • If you now run the dv command the arrays added will be listed as connected.


  • Now that the array is registered we need to add the VASA provider into vCenter. Log into vCenter, navigate to the home screen, then locate and click on the storage providers icon.
  • Within the storage provider screen click on the add button as shown below.


  • Enter a name for the provider and enter the URL shown below; the IP address of the server hosting SE / SMI-S should be entered where it has been blanked out. The user name is admin and the password is #1Password.


  • When prompted accept the certificate for the SMI-S provider


  • Once successfully added you will see the provider displayed


  • Highlight the provider and you will see the array that was connected to the SMI-S provider server earlier.


  • To check that VASA is working correctly in vCenter, click the VM Storage Profiles icon on the home screen within vCenter.


  • When setting up a new Storage Profile you should be able to see the storage capabilities presented to vCenter; these are shown below and are marked with “system”.


  • Job done, VASA successfully deployed and storage capabilities showing in vCenter!
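For anyone scripting the registration step above, the EMC SMI-S provider’s VASA service is commonly documented as living at port 5989 on the SE / SMI-S host with a fixed service path. A minimal sketch, assuming that port and path (verify them against your release notes; the IP address is hypothetical):

```python
# Sketch: build the vCenter storage provider URL for an EMC SMI-S host.
# Assumptions: 5989 is the secure CIM-XML port the provider listens on,
# and /vasa/services/vasaService is the VASA service path.
def vasa_provider_url(smi_host: str, port: int = 5989) -> str:
    """Return the VASA provider URL to register in vCenter."""
    return f"https://{smi_host}:{port}/vasa/services/vasaService"

# Hypothetical SE / SMI-S server address
print(vasa_provider_url("192.168.1.50"))
# https://192.168.1.50:5989/vasa/services/vasaService
```

The same string is what you would paste into the URL field of the Add Storage Provider dialog shown earlier.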


SUPPORTED – Enginuity 5875
(SMI-S supports earlier Enginuity releases but vSphere 5 does not)

Now unfortunately I do not have access to a Symmetrix to complete my testing, however the release notes for SMI-S state the following which makes it sound very easy.

When started, the SMI-S Provider automatically discovers all Symmetrix storage arrays connected to the host on which the Array provider is running. No other action is required, such as running a symcfg discover command.

As mentioned earlier, Symmetrix discovery is done in-band through small gatekeeper LUNs presented to the SE / SMI-S server. If it is a virtual server then ensure that the LUNs are presented to the VM as physical mode RDMs. The SMI-S release notes have the following to say about best practice.

When using the SMI-S Provider to manage Symmetrix arrays, it is recommended that you configure six gatekeepers for each Symmetrix array accessed by the provider. Only set up these gatekeepers for the host on which the SMI-S Provider is running.

So in theory it should be as simple as completing the following steps:

  • Present the gatekeeper LUNs to the server (physical or virtual)
  • Restart the ECOM windows service to restart the SMI-S provider (auto discover arrays)
  • Use the testsmiprovider.exe tool, run the dv command and verify the Symmetrix array is attached.
  • Thanks to my colleague Cody Hosterman (who does have a Symm) for the screenshot.
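The release notes quoted above recommend six gatekeepers per Symmetrix array accessed by the provider, so the only real sizing decision is multiplication. A trivial sketch of that sanity check (the serial numbers are made up):

```python
# Per the SMI-S release notes quoted above: six gatekeepers per array.
RECOMMENDED_GATEKEEPERS_PER_ARRAY = 6

def gatekeepers_required(symm_arrays):
    """Total gatekeeper LUNs to present to the SE / SMI-S host."""
    return RECOMMENDED_GATEKEEPERS_PER_ARRAY * len(symm_arrays)

# e.g. one provider host managing two arrays (hypothetical serials)
print(gatekeepers_required(["000194900123", "000194900456"]))  # 12
```

Remember the other half of the recommendation too: only present those gatekeepers to the host running the SMI-S provider.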


One point to note, if you have SMI-S installed on the same host as the EMC Control Center (ECC) Symmetrix Agent or Symmetrix Management Console (SMC) there are a couple of steps you need to take to avoid some spurious errors.  Check out page 37 of the SMI-S v4.3.1 release notes for further information on the changes required to avoid this.


I think the important thing to remember here is that this is version 1.0 of VASA. It may not be the most elegant solution in the world but it is a start on what I think will become a key feature in years to come. We are fast moving into an age where VMs become objects where we simply check a box to ensure our requirement or service level is delivered.  Imagine a scenario where a VM is created and as part of the creation process you select the storage based on the VASA information passed up to vCenter from the array.  Do I want it on a RAID 5 or RAID 6 protected datastore? Do I want it on a RecoverPoint replicated datastore? Do I want it on a vPlex distributed datastore? Do I want it on a datastore that is SRM protected?  Although it is v1.0 you can see the potential use cases for this feature in the future are going to continue to expand.

Some of you may well have seen Chad Sakac’s blog post back in September entitled Help us make VASA (and EMC’s VASA provider) better! It includes a questionnaire about what you, the end customer, want to see from VASA. This is a great chance to have your say and influence how EMC implement VASA going forward; let’s make v2.0 of VASA a feature that delivers on the huge potential v1.0 has shown.

EMC, VMware

EMC VSI Storage Viewer, Solution Enabler and Symmetrix arrays

September 26th, 2011

I’ve recently been looking at the implementation of EMC’s free Virtual Storage Integrator (VSI) with a few of our older Symmetrix customers. Customers using VMAX and VMAXe have the ability to deploy delegated storage provisioning for their VMware admins; however, DMX customers only have the ability to use the read-only storage viewer functionality, as the DMX is not supported with Storage Pool Manager (SPM), which back-ends the storage provisioning. Some interesting questions came up recently with a customer about how best to deploy the VSI storage viewer with DMX arrays and I thought it would be worth sharing the findings with a wider audience. Basically I’m looking to cover the different ways the VSI can connect to a Symmetrix array and how some of the options selected affect end to end operations.

VSI to Symmetrix Connectivity

So the VSI tool can be used in two ways with Symmetrix arrays: you can utilise the local Solution Enabler (SE) installation that comes with the VSI or you can use a dedicated Solution Enabler server. It’s important to remember that Symmetrix arrays can only be discovered in-band; basically this means the SE install needs a direct connection with the physical array. This is achieved through the presentation of small LUNs known as gatekeeper LUNs, something existing Symmetrix storage teams will be very familiar with. So let’s look at the two different possible setups.

Local Solution Enabler Deployment


The local deployment model shown above would require a gatekeeper LUN being presented / zoned through to the machine that the VI Client, VSI and local SE install have been deployed on. Communication with the array in this instance flows directly between the client PC and the array. In the majority of instances this isn’t going to be very practical for a number of reasons.

  • Each VMware admin client with VSI deployed would need a direct array connection.
  • Most Symmetrix arrays are FC attached and client PCs are not.
  • Arrays live in remote data centres and VMware admin PCs live in the office.
  • Increased security risk, i.e. too many direct array connections to secure.


Remote Solution Enabler Deployment

The remote deployment model shown above would require gatekeeper LUNs being zoned through to a dedicated server. VMware admins would then connect through this remote SE server when querying Symmetrix arrays for information with the VSI. Communication flow in this instance always goes through the server; however, as you’ll see later, results can be returned from the SE server or the array depending on the VSI configuration. This kind of setup is more practical for a number of reasons.

  • Remote SE servers are usually already in place for storage management tasks.
  • Available as a virtual appliance for quick deployment if not in place already.
  • Supports connectivity by multiple remote VMware admins using VSI.
  • Manage multiple Symmetrix arrays through one server.
  • Decreases security risk, i.e. single device connection to array.


Mix and Match

The model above is by no means rigid; you can craft a number of solutions out of the principles shown above. If your vCenter server sat in the same data centre as the array then you could present gatekeeper LUNs to it and use it as a management point whenever you want to get information from the array. Another possible solution is to put a management virtual machine in the datacentre with the VI Client and VSI installed and present a gatekeeper as an RDM; whenever a VMware admin needs information from the array they connect into that management VM to carry out the work. Basically there is a solution for deploying VSI with Symmetrix arrays no matter what your setup looks like.

VSI Discovery Process Flow

One question that did come up recently was what happens when you select the AutoSync option for a Symmetrix array and you are using the remote SE server solution. How often does it poll the array? Well, the answer is it doesn’t, which is strange as the term AutoSync gives the impression that it syncs with the array on a regular basis. So how does it work?


AutoSync enabled

When AutoSync is enabled, each time you request array data, e.g. by clicking on the EMC VSI tab for a datastore, the request forces the SYMAPI database on the remote SE server to be updated from the array; the up to date array information is then returned to the VSI. There is obviously a slight cost involved in doing this as the remote SE server needs to make the calls to the array in order to update its local database before responding. Typically this would introduce a 10 – 20 second delay, but that cost means you guarantee the information received is up to date and valid.


AutoSync disabled

When AutoSync is disabled, each time you request array data the request is answered from the cached information in the local SYMAPI database on the remote SE server. This is obviously the fastest method as you don’t have the cost of querying the array directly for an update, but the information may be out of date.


With AutoSync disabled it’s up to the VMware administrator to initiate the sync of the array from within the VSI. Alternatively the storage team can initiate a sync with the array directly through the SE server using SYMCLI. To initiate a sync manually, go into the VSI tool and select Symmetrix arrays from the list of features, highlight the array and click on Sync Array.
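The AutoSync behaviour described above is really just a cache-refresh policy. A toy model of it, assuming a dict standing in for the physical array (the names here are illustrative, not EMC’s API):

```python
class SymapiCache:
    """Toy model of the remote SE server's SYMAPI database behaviour.
    'array' is just a dict standing in for a real Symmetrix query."""

    def __init__(self, array, autosync=False):
        self.array = array          # source of truth (the physical array)
        self.autosync = autosync
        self.db = dict(array)       # local SYMAPI database copy

    def sync(self):
        # What a manual "Sync Array" click (or a SYMCLI sync) would do.
        self.db = dict(self.array)

    def query(self, key):
        # AutoSync enabled: refresh from the array on every request
        # (slower, but guaranteed current). Disabled: answer from cache.
        if self.autosync:
            self.sync()
        return self.db.get(key)

array = {"datastore1": "RAID-5"}
cached = SymapiCache(array, autosync=False)
live = SymapiCache(array, autosync=True)

array["datastore1"] = "RAID-6"       # storage team changes the layout
print(cached.query("datastore1"))    # stale: RAID-5
print(live.query("datastore1"))      # fresh: RAID-6
cached.sync()                        # manual sync from the VSI
print(cached.query("datastore1"))    # now RAID-6
```

The trade-off is exactly the one described above: with AutoSync off you answer instantly from cache but risk staleness until someone syncs; with it on, every query pays the refresh cost.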



The free EMC VSI Storage Viewer tool can be of great benefit to Symmetrix customers, allowing VMware admins improved visibility of the underlying storage layers. In larger environments where Symmetrix arrays are traditionally used you tend to find VMware and storage are managed by separate teams. Anything that improves the information flow between the two teams during troubleshooting has to be a must-have tool. As shown above, some thought needs to be given to how you set it up. My personal preference would be to always go for the remote SE server solution. Enable AutoSync if your underlying VMware storage environment changes often; if it doesn’t, then a manual sync every now and again should suffice.

Additional notes and links

SPC-2 Flags

It’s worth noting that SPC-2 flags need to be set on the FA port or on the initiator of the ESX host for the VSI to work correctly; in fact this is a required setting for ESX generally. This has come up a couple of times recently so I thought it worth mentioning to ensure that people have it set up correctly. The following whitepaper gives you more information.

VSI installation Media

Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator (VSI) – (Please Note: PowerLink account required)

Solution Enabler Media

Home > Support > Software Downloads and Licensing > Downloads S > Solutions Enabler – (Please Note: PowerLink account required)

Solution Enabler Documentation

Home > Support > Technical Documentation and Advisories > Software > S > Documentation > Solutions Enabler – (Please Note: PowerLink account required)

Storage, VMware

EMC Virtual Storage Integrator and the Access Control Utility

September 8th, 2011

At EMC the vSpecialist team often end up talking to a lot of customers about EMC’s FREE Virtual Storage Integrator (VSI) Plug-ins for vCenter Server.  Not only do customers love the fact that it is FREE they also love the features delivered. The ability to accurately view, provision and manipulate EMC storage directly within vCenter empowers VI admins and makes everyone’s life that little bit easier.

When I started writing this article we were on version 4.2 of the VSI plug-ins; following VMworld 2011 we are now up to version 5.0, the fifth generation of this excellent VMware / EMC toolkit. The plug-ins that make up the VSI are listed below; to download, use the link below or use the cookie trail to navigate to the page on EMC PowerLink.

  • VSI Storage Viewer Plug-in 5.0
  • VSI Unified Storage Management Plug-in 5.0
  • VSI Storage Pool Management Plug-in 5.0
  • VSI Path Management Plug-in 5.0

Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator (VSI) – (Please Note: PowerLink account required)

One of the great features that people are drawn to is the ability to allow VI admins to provision storage directly from within vCenter. This is done with the VSI Unified Plug-in for Celerra, CLARiiON and VNX(e), and with the VSI Storage Pool Management plug-in for the VMAX. One of the first questions I often get asked is how this is secured; how does the storage team ensure that only the right VMware admins are manipulating the underlying storage?

The answer previously was… well, to be honest we didn’t really have an answer to this one. Technically if you allowed the VMware admins to provision storage you needed to trust them not to go provisioning crazy and fill up your storage array. Obviously that response was not really acceptable for any environment and EMC have been working to rectify that.

The Access Control Utility (ACU) is a new part of the VSI framework which allows storage administrators to granularly control the availability of storage platforms and storage pools on those platforms. Once created, these security profiles can be exported, passed to the VMware administrators and imported into the VSI Unified Storage Management plug-in. The following blog post details the steps involved in completing this process for a VNX array in vSphere 4.1.

So we start by double clicking on the shiny padlock icon that will have been added to your desktop when you installed the VSI Unified Storage Management plug-in. When the ACU starts we are presented with the profile management screen. This will of course be blank the first time you start the utility; in the screenshot below, however, you can see a couple of existing access profiles I have created for some VNX arrays in the lab.


To create a new profile you simply click the Add button; you are then presented with the details screen for the new access profile being created. Here you enter the name of the profile and a suitable description and click next when finished.


The next step in the wizard is where you define the storage system that will be permissioned as part of the security profile. You click on Add and then select the system you are going to permission; as you can see, the VSI ACU supports Celerra, CLARiiON, VNX and VNXe arrays. For VMAX you need to look at Storage Pool Manager (SPM) to control access; I’ll look to blog about this one at a later date.


The next screen presented very much depends on the storage system you select. If you chose the Celerra option you’re prompted for the details of the control station, username and password. Select the CLARiiON and you’re prompted for the Storage Processor details and login credentials. If you select the VNXe then you’re prompted for the management IP and the login credentials. I’m sure you can see the pattern developing here!

In this example we are dealing with a VNX array and as such the option is whether you want to give access to block storage, file storage or both. As both are controlled differently within the VNX, if you select both you will need to enter the IP and credentials for the Storage Processor (Block) and the VNX Control Station.  For the purposes of this example I’m going to use Block only as you can see in the screenshot below.

When you click next you’re prompted to enter the storage processor IP address and log on details as shown below.


Once you are authenticated you get to select the granularity of access you want to provide. It’s important to note that when the ACU refers to storage pools it means any storage pools and traditional RAID groups that may have been created on the VNX array. There are three options available as you can see in the screenshot below.

  • All storage pools
    This option basically gives a VMware admin free rein to provision LUNs with the VSI all over the array. A potential use case for this may be a dedicated development VMware environment with its own dedicated array where the storage team don’t care too much about usage.

  • No Storage Pools
    This option is a complete lockdown and acts as an explicit deny to prevent any accidental provisioning on an array, i.e. the VSI unified storage management feature cannot talk to the array full stop, it won’t even show up as an option.

  • Selected storage pools
    As the name indicates this option allows the selection of certain storage pools for VSI provisioning.  A potential use case here would be a mixed environment where the array is shared between VMware and physical workloads.  As a storage administrator you would grant permission to the VMware storage pools only thus preventing any potential mis-provisioning (not sure that is actually a word but it certainly has its place when we talk about VSI provisioning)


In this example I’ve chosen selected storage pools as I think this is probably the scenario that most people will be looking for the ACU to help them with. Within the next screen you are presented with a list of all storage pools / RAID groups on the array. Here you select the storage pools / RAID groups you want to give the VMware admin access to; when you’re happy with your selection you simply select finish. Note in the screenshot below that I have selected two individual storage pools (one is a RAID group) to be part of this particular storage profile.


Once you’ve completed storage pool selection you are returned to the profile screen; you can finish your profile creation right here by clicking on finish, or you can add additional storage systems if your VMware environment consists of multiple arrays.


Once you have completed the creation of your security profile the next step is to export it so you can pass it over to your VMware admins. To do this simply highlight the security profile, click on export and save the file.


Choose a location to save the file and don’t forget to add a passphrase to the file so that it cannot be misused.

It’s important to remember that the login credentials provided by the storage admin during the ACU profile setup are the ones used when the profile is imported into the VSI.  The VMware admin will see the connection details and username being used but will not see the password. For audit purposes on the array it may be best to setup a dedicated account for use with the VSI and storage profiles. It should also be noted that the full details of the storage profile are encrypted within the profile export file as you can see below.


So now that you’ve finished creating your storage profile you can pass it on to the VMware administrators to import into the VSI.  To do this you go into vCenter and open up the EMC VSI screen from the home screen.  Click on the Unified Storage Management feature,  then click on add and select Import Access Profile before clicking next.


You now select the XML file created by exporting the ACU storage profile, enter the passphrase you selected and click next.


As you can see below the VNX array has been added to the VSI and provisioning access is marked as Restricted.  This is as expected as we configured the profile to give access to only two storage pools, FAST_Pool_3_Tier and RAID Group 10.


When you use the EMC VSI to provision storage you will be presented with the VNX array that was part of the imported profile.  You select the storage array and as you can see in the screenshot below you can only create storage on the two storage pools that were added to the ACU storage profile.



The EMC Access Control Utility is something I have been looking to write about for a while. Since its release I’ve often wondered how exactly it worked, what it could and could not do, and how it could better meet customer needs. The steps above show that it is possible for a storage team to delegate control of storage pools so VMware admins can quickly provision the storage that they need. Becoming more efficient is something we as vSpecialists talk about on a daily basis; this tool is one of those first steps that you can take to make life easier. If you are a VMware admin who is working with EMC storage then I suggest you speak to your storage team about this. Likewise if you are a storage admin, reach out to your VMware counterparts and discuss how this could save you both time in the long term.


My boss Chad Sakac put a video together for VMworld 2011 which maybe explains it better (certainly quicker) than I have in this blog post. I left it to the end, though, so you read the article before discovering it. My step by step approach is simply so I can fully understand how it all fits together and deal with the many “what if” and “how does that work” kind of questions as I go. Hope you find it useful in some way; feel free to comment or ask questions.

EMC, New Products, VMware

XenDesktop 5, vSphere 4.1 and VNX Reference Architecture

May 23rd, 2011

A common theme that I see coming up time and time again with customers is VDI using Citrix XenDesktop and vSphere 4.1. Its popularity generally stems from the previous success companies have had with the more traditional Citrix products such as Presentation Server / XenApp. I know when I was looking at VDI solutions I was very much in favour of Citrix due to one thing: the ICA protocol. It works, and it works well over long distances; in a lot of companies it has proven itself over a long period of time, and it is a protocol they trust to deliver.

Following a customer meeting recently I was desperately searching for an EMC reference architecture (RA) for a XenDesktop / vSphere deployment. At the time it turned out we didn’t have a completed one; we did, however, have one in draft format that was going through the final stages of review. That RA has now been completed and released for public consumption; an overview of the document’s purpose is below.

The purpose of this reference architecture is to build and demonstrate the functionality, performance and scalability of virtual desktops enabled by the EMC VNX series, VMware vSphere 4.1 and Citrix XenDesktop 5. This solution is built on Machine Creation Services (MCS) in XenDesktop 5 and a VNX5300 platform with multiprotocol support, which enabled FC block-based storage for the VMware vStorage Virtual Machine File System (VMFS) and CIFS-based storage for user data.

The RA covers the technologies listed below and details why the VNX array with FAST Cache enabled is a perfect match for your Citrix VDI deployment. One other interesting area that is discussed is the use of Citrix Machine Creation Services (MCS), which is a new feature in XenDesktop 5 and provides an alternative to Citrix Provisioning Server (PVS). For those new to MCS I suggest you have a read through the following Citrix blog post as there are some additional design considerations around IOPS that need to be considered.

  • Citrix XenDesktop 5
  • Microsoft Windows 7 enterprise (32-bit)
  • Citrix Machine Creation Services (MCS)
  • VMware vSphere 4.1
  • EMC VNX 5300 – (FAST Cache & VAAI enabled)
  • EMC Virtual Storage Integrator (VSI) – Free on EMC PowerLink

If you are considering XenDesktop 5 and vSphere 4.1 then I suggest you download and have a read through the RA linked below.

EMC Infrastructure for Virtual Desktops enabled by EMC VNX Series (FC), VMware vSphere 4.1 and Citrix XenDesktop 5

Citrix, EMC, vSphere

EMC PowerPath/VE – MpxEsxPathClaim failure – LUNz

October 16th, 2010
I stumbled across some interesting error messages the other day in a host’s vmkwarning log file. The errors were related to the deployment of PowerPath/VE, and as I dug a little deeper I found this was an issue on all 3 hosts I had recently built. When I connected via iLO I was seeing the additional error messages on the console, as you can see from the screenshot below.
I was a bit concerned as this was my first deployment of ESX 4.1 with the latest PowerPath/VE 5.4 SP2 software. I was also a little confused as I hadn’t actually presented any LUNs to this ESX cluster. All I had done was zone the HBAs through to the storage processors, set up a storage group on the CLARiiON and add the hosts to it. So what path was PowerPath/VE trying to claim if there were no LUNs presented?
While on the service console of one host I ran the command esxcfg-mpath -l to list all the paths. As you can see from the screenshot below, I came across a fibre channel disk that the HBAs were attempting to connect to, even though nothing had been presented. I also noticed that it was using the VMware native multi-path plug-in and not PowerPath/VE, and was also showing as dead! So was this the path that PowerPath was failing to claim?
Following discussions with EMC Support I was informed that this was what is known as a LUNz. This is a fake logical unit presented to the host by the storage processors to provide a path for host software to send configuration commands to the array when no LUNs have been presented.
In ESX 4.x the Navisphere host agent that you would normally install on a CLARiiON-attached host is not supported. Instead the ESX host registers itself with the storage array automatically (requires FLARE 28 or later). In order to do this it needs a communication path to the array, hence the LUNz being presented. If a LUN is presented to the host then that would be used instead; the LUNz therefore disappears when a LUN is added, as it is no longer required.
So I added a 100GB LUN to the storage group and I immediately received a number of email alerts from vCenter stating that connectivity had been lost to the naa.500 device. So, as expected, as soon as LUN 0 is presented the LUNz is no longer required.
The key identifier that this was a LUNz was the naa.500 shown in the device name (see the screenshot above). A LUN presented from the CLARiiON shows up with a device name of naa.600, as you can see in the screenshot below. You can also see that the plug-in is now set to PowerPath, which is what I would expect.
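That naa.500 versus naa.600 distinction is a handy rule of thumb if you ever need to triage a list of paths in a script. A sketch encoding it (this heuristic is specific to the CLARiiON scenario described here, not a universal NAA rule, and the device names are made up):

```python
def looks_like_lunz(device_name):
    """Rule of thumb from this scenario: the CLARiiON LUNz showed up
    as naa.500..., while real presented LUNs showed as naa.600...."""
    return device_name.startswith("naa.500")

# Hypothetical device names as esxcfg-mpath might list them
paths = ["naa.5006016012345678", "naa.6006016043211234"]
for dev in paths:
    kind = "LUNz (no real LUN presented)" if looks_like_lunz(dev) else "presented LUN"
    print(dev, "->", kind)
```

In other words, if the only device a freshly zoned host can see starts naa.500, it is probably just the array’s communication LUN, not a provisioning mistake.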
Looking in vCenter I can now see multiple active paths to the device which tells me that PowerPath/VE has truly kicked in and is working as expected.
There wasn’t a lot of information out there on the particular error messages I was receiving (listed below) hopefully this explanation will allay some fears.  I know I was concerned when I first saw the messages but thankfully it turned out to be by design.
EmcpEsxLogEvent: Error :emcp :MpxEsxPathClaim :Could not claim the path. Status : Failure
ScsiPath: 3815: Plugin ‘PowerPath’ had an error (Failure) while claiming path
ScsiDevice: 1726: Full GetDeviceAttributes during registration of device ‘naa.500…’ : failed with I/O error
ScsiDeviceIO: 5172: READ CAPACITY on device ‘naa.500…’ from Plugin ‘NMP’ failed. I/O error

ESX, Storage, VMware

EMC PowerShell Toolkit released

October 5th, 2010

While perusing Twitter this evening I stumbled across a few tweets from people I follow (@sakacc, @scott_lowe and @emccorp) about a new EMC PowerShell toolkit. I’m actually a little surprised that it’s taken EMC this long, especially given the success of the VMware PowerCLI. It’s worth noting that EMC’s competitors have had offerings in this space for some time now: Compellent’s PowerShell toolkit has been available since late 2008 and NetApp’s PowerShell offering was announced earlier this year. I’m not going to hold it against EMC though; they are the current kings of innovation and who can blame them for dropping the ball slightly on the PowerShell front.

So the story goes that the EMC® Storio PowerShell Toolkit (PSToolkit) has been available internally within EMC for a while now. EMC are now looking to increase its exposure by releasing a pre-release version of the EMC PSToolkit for testing and feedback. At present it only consists of a small subset of commands, which you can see in the screenshots below.


There are a few caveats that you need to be aware of for this pre-release version. The requirements below are taken directly from the EMC community post; I have included links for the downloads to make life a little easier.

  • SMI-S Provider 4.1.2 or later versions – EMC PowerLink logon required
  • PowerShell 2.0 CTP3 – CTP3 is quite old and not available anymore, link to 2.0 provided
  • .Net Framework 3.5 – Advice is to utilise Windows Updates to update .NET Framework
  • Windows XP, Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2 for Array management commands. It is known that the 32-bit cmdlet set does not execute properly on Windows 7 and may not execute properly on Windows Vista.


Of course you will also need to download the EMC PSToolkit itself; you can find this over on the EMC Community site. Remember to provide your feedback to EMC, you have their attention! What do you want to see added, what do you want to script, what are your use cases?

VI Toolkit / Powershell, VMware

HP Support Matrix – VMware / HP Downloads

September 30th, 2010

I’ve been building a few new HP DL380 G7 servers as ESX 4.1 hosts and was having the usual nightmare finding anything on HP’s website. I was specifically looking for HP Insight Management Agent downloads and I’d been searching for ages when I came across a page called VMware from HP for ProLiant. I’ve still no idea how I stumbled across it, but I’m writing this down now so I never lose it.

The screenshot below, which is from the bottom of the web page, lists all the HP Insight management agents for ESX 4.x that I could possibly need.

As you can see from the next screenshot, the web page covers all versions of ESX. It also has an extensive list of servers covering the BL, DL, ML and SL ProLiant ranges. Click on the tick and it will take you straight to the appropriate page for that server so you can download everything you may need regardless of end operating system. So it acts as a support matrix and also as a collection of links to the correct product download pages; can’t ask for much more than that. Good work HP, not often I say that about your web site.



ESX, ESXi, Servers, VMware

vCenter disk performance – SCSI LUN ID display

September 26th, 2010

Something has been bugging me for some time now about vCenter disk performance statistics. Basically vCenter shows each SCSI LUN with a unique ID, as per the following screenshot. When viewed through the disk performance view it’s impossible to tell what is what, unless of course you know the NAA ID off by heart!?

I was working on a project this weekend putting a Tier 1 SQL server onto our vSphere 4.0 infrastructure, so insight into disk performance statistics was key. I decided I needed to sort this out and set about identifying each datastore and amending the SCSI LUN ID name; here is how I did it.

Identify the LUN

First of all navigate to the datastore view from the home screen within vCenter


Click on the datastore you want to identify and then select the configuration tab


Click on the datastore properties and then select manage paths

Note down the LUN ID (in this case 2) and also note down the capacity.


Change the SCSI LUN ID

Now navigate to the home screen and select Hosts and Clusters.


Select a host, change to the configuration tab and then select the connecting HBA


At the bottom, identify the LUN using the ID and capacity noted earlier and rename the start of the ID. I chose to leave the unique identifier in there in case it is needed in the future.


Now when you look at the vCenter disk performance charts you will see the updated SCSI LUN ID, making it much more meaningful and usable.
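The manual rename above is essentially maintaining a mapping from NAA identifiers to friendly names. If you export the same performance data for reporting, a sketch of keeping that mapping alongside it (the NAA IDs and LUN names below are hypothetical):

```python
# Hypothetical NAA-to-friendly-name map mirroring the manual renames above.
lun_names = {
    "naa.60060160432112341234123412341234": "SQL_Data_LUN2",
    "naa.60060160432112349876987698769876": "SQL_Log_LUN3",
}

def friendly(device_id):
    """Label a perf sample with the renamed LUN ID, falling back to the
    raw NAA identifier (keeping it 'in there', as described above)."""
    return lun_names.get(device_id, device_id)

print(friendly("naa.60060160432112341234123412341234"))  # SQL_Data_LUN2
print(friendly("naa.600601604321unknown"))               # raw NAA id
```

The fallback matters for the same reason I kept the unique identifier in the renamed ID: you can always get back to the real device.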


Raw Device Mappings

If you have Raw Device Mappings (RDMs) attached to your virtual machine then these too are capable of showing up in the vCenter disk performance stats. It’s the same process to change the name of the SCSI LUN ID; however, it’s slightly different when identifying them. To do so carry out the following.

Edit the settings of the VM, select the RDM file, select Manage Paths and then note down the LUN ID for the RDM.  Use this to identify the LUN under the Storage Adapter configuration and change it accordingly.


After making these changes I can now utilise the vCenter disk counters to complement ESXTOP and my SAN monitoring tools. Now I have a full end to end view of exactly what is happening on the storage front, invaluable when virtualising Tier 1 applications like SQL 2008.

There are a plethora of metrics you can look at within vCenter; if you would like to understand what they all mean then check out the following VMware documentation.

VMware vCenter SDK reference guide – Disk Counters

Storage, vCenter, VMware