Archive

Archive for the ‘Storage’ Category

Hadoop virtualisation – Vendor supported or not?

October 27th, 2014

I’ve had a number of discussions recently around the virtualisation of Hadoop,  some of it with customers,  some of it internal and some of it with my colleagues at VMware and Pivotal.  As always these conversations get me thinking and that in turn has spurred me on to come out of hiding and write my first blog post in a while :-)

This train of thought was sparked by a conversation with someone who was looking at introducing Hadoop into their environment. The primary use case was to leverage unstructured data alongside traditional legacy EDW solutions. The conversation centred on the desire to virtualise Hadoop to facilitate quicker provisioning for POC / test purposes; however, there was also a desire to potentially do the same with a production Hadoop environment from a cost perspective.  Sounds pretty straightforward if you ask me, however the main fly in the ointment appears to be the reluctance of some of the Hadoop vendors to give their blessing or indeed commit to supporting a virtualised approach.

So, why are Hadoop vendors so reluctant?

There could be many reasons why a Hadoop vendor would be reluctant to embrace virtualisation or indeed any abstraction (HDFS from an appliance or Linux containers).  It could simply be a case of them wanting to apply the “keep it simple stupid” approach and ensuring success by building it the way it always has been built in the past. Or it could be that they’ve simply never tested it in a virtual environment and the unknown is bad in their eyes. Perhaps it’s something else I haven’t even thought of!?

I have my own opinion of course.  I firmly believe that reluctance to virtualise Hadoop is because it fundamentally disrupts today’s Hadoop licensing models and as a result has a negative impact on the vendors’ revenue streams (per CPU or per host licensing).  This is in essence the same kind of pushback that EMC were getting previously around supporting Hadoop with EMC’s Isilon scale out NAS (per host or per TB licensing).

Let me give you a quick example of how both disrupt the Hadoop vendor.

Say I have built a Hadoop cluster of 100 commodity servers to provide my required 0.5PB of storage for HDFS (e.g. 5TB per server = 0.5PB).  Now despite having 100 servers I actually only utilise approximately 20% of those servers for Hadoop compute tasks like MapReduce.  If I want to add more storage I have to add more servers, including compute, even though I’m never going to use that extra compute power. This model fundamentally doesn’t scale well from a data centre footprint point of view.

Commodity_100_Servers
So first disruption:  let’s say I was to separate the compute and storage elements of Hadoop.  I introduce 0.5PB of EMC Isilon to provide the HDFS layer.  Instantly I have reduced my server footprint requirement from 100 servers to 20 physical servers + the EMC Isilon for the storage requirement.  First major advantage:  if I need to scale either compute or storage I can do it independently of the other. Second major advantage:  I go from 33% capacity efficiency (Hadoop keeps 3 copies of every file) to up to 80% capacity efficiency (N+1, 2, 3 or 4 protection).

20_Servers_Isilon

Second disruption: now let’s say I also introduce virtualisation of the Hadoop compute nodes into the solution. If I was to achieve a VM to physical server ratio of 2:1,  that means I now only need 10 physical servers + EMC Isilon HDFS storage. If I was to go for a more aggressive 4:1 VM to physical server ratio I’d only need 5 physical servers + EMC Isilon HDFS storage instead of the original 100 physical servers.  I’ve now got the ability to easily scale my compute and storage layers independently of each other and have vastly shrunk my data centre footprint, which is a huge cost saving in itself.

10_Server_VMware_Isilon

Theoretically, we’ve just shaved between 90 and 95 servers off the original solution.  Not only have we saved on physical footprint, we’ve also taken 90 – 95 servers’ worth of annual licensing away from a Hadoop vendor!!  I can see why they wouldn’t be 100% happy about a customer implementing virtualisation and EMC Isilon for Hadoop.
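
To make the arithmetic above concrete, here’s a minimal back-of-envelope sketch in Python. All of the inputs (5TB per server, 20% compute utilisation, 80% Isilon efficiency, the 2:1 and 4:1 consolidation ratios) are the illustrative figures from the example above, not measured data.

```python
# Back-of-envelope comparison of the three Hadoop deployment models above.
# All inputs are the illustrative figures from the example, not measured data.

HDFS_REQUIRED_TB = 500          # 0.5PB of usable HDFS capacity
TB_PER_SERVER = 5               # commodity server with 5TB of DAS
HDFS_REPLICAS = 3               # Hadoop keeps 3 copies of every file
ISILON_EFFICIENCY = 0.80        # up to 80% usable with N+x protection
COMPUTE_UTILISATION = 0.20      # only ~20% of the cluster does MapReduce work

# Model 1: traditional DAS cluster - servers are sized by storage, not compute
das_servers = HDFS_REQUIRED_TB / TB_PER_SERVER                 # 100 servers
das_raw_tb = HDFS_REQUIRED_TB * HDFS_REPLICAS                  # 1,500TB raw for 500TB usable

# Model 2: separate compute and storage - keep only the servers doing real work
compute_servers = das_servers * COMPUTE_UTILISATION            # 20 servers
isilon_raw_tb = HDFS_REQUIRED_TB / ISILON_EFFICIENCY           # 625TB raw for 500TB usable

# Model 3: virtualise the remaining compute nodes
for vm_ratio in (2, 4):
    physical_servers = compute_servers / vm_ratio
    print(f"{vm_ratio}:1 consolidation -> {physical_servers:.0f} physical servers + Isilon")

print(f"DAS raw capacity: {das_raw_tb}TB vs Isilon raw capacity: {isilon_raw_tb:.0f}TB")
```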

Now granted, the above is a very simplistic example.  However, if you’re looking at deploying Hadoop I’m sure you can see how virtualisation and introducing Isilon for HDFS could help you massively reduce footprint in the data centre and save on licensing costs. You of course need to build out a suitable TCO analysis for the various options,  however I encourage you to do so as I bet it works out quite favourable compared to the original 100 nodes.

What about performance?

One of the other main areas where the Hadoop vendors appear to be focusing their FUD (FUD = Fear, Uncertainty and Doubt) is virtualisation and the performance of Hadoop.  Now I don’t disagree that depending on what you’re doing with Hadoop, performance may well be the key requirement.  If it is very, very important to you that your Hadoop is performant and that your Hadoop vendor will support it without argument, then physical Hadoop compute nodes may be the best way for you to go.

That said,  if you deem flexibility and ease of scaling Hadoop (storage or compute) as your key requirement then a different approach may be needed. This is where the separation of compute and storage layers adds immediate benefit and where virtualisation of the Hadoop compute nodes drives increased flexibility.

I should add that I personally don’t think you necessarily have to sacrifice compute performance to gain that increased flexibility.  I would highly recommend reading the following VMware white paper on Virtualised Hadoop Performance with vSphere 5.1. In it VMware conduct a very in-depth comparison between the performance of native Hadoop and 100% virtualised Hadoop instances, including HDFS storage on the server DAS.  The chart below is a comparison of jobs run on the physical nodes and again with a 1:1, 2:1 and 4:1 VM to physical server ratio.

Physical to virtual compare

What is apparent from the graph above is that in the various scenarios tested the virtualised instances aren’t too far away from the native performance (depicted by the 1.0 on the Y axis).  In fact in the Terasort test (sorting data generated and writing out the results) the 4:1 configuration actually performed better,  which is quite interesting but may be down to test harness subtleties.

There is only so much you can read into these kinds of whitepapers;  in my view the takeaway is: yes, you should definitely consider virtualisation of Hadoop.  On the flip side, it’s essential that you do your own testing to ensure that your virtualised Hadoop solution delivers against your requirements, whatever they may be.

What about Vendor support?

Support for virtualised Hadoop is another pushback I’ve heard on a few occasions and I have to say I understand that position.  Before my time at EMC I was a Solutions Architect implementing solutions at a UK based investment fund manager.  One of my team’s main mantras when designing solutions was to always ensure supportability,  so when I hear a customer complain about lack of support for a solution I often have to sympathise with the position they find themselves in.  They want to do something,  but can’t because it compromises the supportability of the business application.

Now this lack of support is a situation I’ve seen before;  it always used to be virtualising Oracle that got the pushback.  Oracle wouldn’t support the virtualisation of their database as consolidation ultimately eroded their licence revenues (per CPU licensing).  I personally think we’re simply seeing a similar thing occurring now with Hadoop and customers are therefore wary of building a solution that the Hadoop vendors won’t support.  I get that,  doesn’t mean it’s acceptable though!

What can I do about it though?

We at EMC have seen a bit of this sort of thing recently, specifically with Isilon for HDFS.  Hadoop vendors weren’t keen on losing revenue (as per my second example earlier) and weren’t really willing to support Isilon or sanction its usage, but customers were keen to use Isilon instead of the traditional DAS approach.  Now EMC obviously has alliances with all the major Hadoop vendors, but that sadly doesn’t necessarily constitute support.

Now some of you may have picked up on the recent joint announcement between Cloudera and EMC around supporting Cloudera Enterprise Hadoop and Isilon for HDFS.  This is a great step forward in my opinion and has primarily come about following concerted pressure from Cloudera customers in the global financial industry.  Their sheer desire to leverage an enterprise grade platform for HDFS in tandem with Cloudera’s Enterprise Hadoop capabilities resulted in Cloudera having to agree to work jointly with EMC to build a supported solution.

Never underestimate your power as a customer,  most of the great things we come up with at EMC come about from our interactions with customers and them telling us what they need or want.  You shouldn’t be afraid to ask your vendors for the support you need for your business.

Cloudera Enterprise Hadoop and EMC Isilon for HDFS

Very briefly, the main plus point from a Cloudera / EMC point of view is the fact that Isilon supports multi-protocol access of the same data,  eradicating the need to do major Extract, Transform and Load (ETL) activity.  The same data can be interacted with via SMB, NFS, HTTP, SWIFT, REST and HDFS 1.0 and 2.0 (2.2, 2.3 and 2.4 coming very shortly), which allows you to put data into Isilon via one method, e.g. logs from an application, and then consume it through another, e.g. HDFS.   My personal plus point is the fact that you can use Isilon HDFS for Cloudera, Pivotal HD, HortonWorks or Apache Hadoop, as you’re not tied to a traditional DAS stack :-)

image

Some of the related links to that announcement are included below,  with the Cloudera CTO blog post being a particularly good read.

Cloudera Enterprise Hadoop and EMC Isilon supported solution
Cloudera’s CTO Blog Post on joint EMC Isilon solution
Cloudera Enterprise Hadoop and EMC Isilon solution overview

In Summary

So for those customers out there who want to virtualise Hadoop but want to ensure they are fully supported, I implore you to put the pressure on your Hadoop vendor and your VMware account team.  I know for a fact that VMware have alliance teams working with the Hadoop vendors,  but it needs real customer pressure on the Hadoop vendors to fundamentally change the game.  It’s what happened with Oracle on VMware,  it’s what happened with Cloudera and Isilon HDFS and it needs to happen for Hadoop on VMware as well.

It’s also worth noting that this isn’t just a Hadoop problem;  we’re going to end up in the same situation with loads of the new popular commercial open source variants.  Think of things like MongoDB (NoSQL) or DataStax (Cassandra DB); it’s only a matter of time before large enterprise customers are going to want to virtualise or use enterprise storage platforms with these technologies.

We at EMC aim to offer choice to customers,  so they are free to mix and match our technologies with whatever systems they want to put on top of them.  However I think we as a mainstream vendor need to do more work partnering and certifying our products to work with the ISV ecosystem.  I’m not saying we do this for all ISVs,  we’ll need to be selective, but I think we’ll reap the benefit of having done the work with Cloudera and Isilon.  We should ensure that it doesn’t stop there,  we should listen to our customers and we should aim to provide supported ISV and EMC stacks where needed.

 

EMC, Hadoop, Storage, VMware

Memory Channel Interface Storage & EMC ScaleIO

October 21st, 2013

The storage industry is undergoing a huge change at the moment,  a huge change that is moving at breakneck speed and one that I find both exciting and scary all at the same time.  Exciting because it’s new technology being used to solve complex business and IT problems, and scary because it’s moving so incredibly fast. I think you can only guess what’s going to happen next, which direction it’ll go in and which trends will win out at the end of the day.  I guess that’s probably why I’m writing a blog post for the first time in almost 2 years!  Yes I know, that is far too long between blog posts and yes I am extremely ashamed of myself! :-)

I’m lucky enough to work for EMC and as part of that I get to see a lot of this change in IT developing momentum from the inside.  That said, in an organisation as big as this I also sometimes don’t find out about some of it until it hits the headlines, either at one of our product mega launches, a big industry event or in one of Chad’s now infamous blog posts!

Today’s Modern Storage Architectures

New storage architectures are appearing thick and fast;  my boss (Chad Sakac) explains this exceptionally well in his recent VMworld 2013 blog post on all things SDS at VMware and EMC.  He calls out 3 separate storage architectures that are fast becoming the norm;  they can be seen below and, from left to right, can be explained simply as:

Type 1 = Traditional storage model.

Type 2 = Scale-out / node-based build-out storage architectures.

Type 3 = Next-gen scale-out software defined storage with COTS hardware.

Storage-Types 

EMC is very well known for the Type 1 storage,  VNX is a great example of this model in action.  EMC is also heavily into the Type 2 storage,  whether that be scale out NAS in Isilon, scale out block storage using the scale out engines of our enterprise VMAX arrays or even scale out block storage from an All Flash Array such as XtremIO.

It is however the Type 3 storage I have been viewing with most interest, and it is impossible to ignore the rapid increase in the use of the term “Software Defined Storage”.  I should really correct myself here though:  it’s not being used simply as a term or IT buzzword;  there are actually some very tangible products coming to market and this area of storage will gather huge momentum as we enter 2014.

A subject that plays in this field and comes up regularly with one of my larger enterprise customers is Cloud Storage and Open Source Software with Commodity Off the Shelf hardware (COTS). It’s an interesting discussion and one that has been born out of both advances in Cloud storage platforms and  open source software stacks (OpenStack, Ceph) and a desire to introduce cost effective, supported  “good enough” solutions to support business demand. 

This stuff is real.  Some people (including EMC colleagues) dismiss it as a passing fad,  something that will never take off because it’s not supported,  won’t work,  or companies can’t put data in the cloud / on that platform for compliance or security reasons.  It’s seriously time to wake up and smell the coffee;  there may be challenges, be they business or technical ones,  but the desire is there and where there is a will, there is always a way!

So where is this blog post going exactly…  :-)

"Good Enough" In-Memory storage alternative with software defined storage

…So all of this leads me to an interesting use case that I was discussing both internally and with a customer recently,  one that covers off three very interesting areas. 

  1. New Memory Channel Interface Flash (think NAND Flash with a DDR3 Memory interface)
  2. Software Defined Storage pooling of that MCI based storage using EMC ScaleIO
  3. A “good enough” performance tier, slower than in-memory but faster than PCI-E flash cards.

 

It sits firmly in the Type 3 storage architecture discussed earlier; it involves new hardware technology combined with Software Defined Storage and could be used to deliver a cost effective, supported, good enough storage offering as an alternative to in-memory databases.  Sound interesting? Let’s dig a bit deeper into the 3 areas.

1. Memory Channel Interface (MCI) Storage

MCI storage popped up on my radar about a month ago following some internal distribution list emails.  A little digging shows that this particular technology appears to be in the very early stages of adoption.  In fact some of the tech companies in this field are still hiring ASIC programmers,  so it’s definitely early days.

One of the main companies out there at the moment is Diablo Technologies, who are in essence providing non-volatile flash attached to a DDR3 memory interface to deliver up to 400GB of storage with a response time of less than 5μs.  The following is an extract from the Diablo press release, which says it all really.

Configuring Memory Channel Storage (MCS) as a traditional block storage device enables new performance levels for applications, all the while reducing latencies by more than 85% over PCI-Express based SSDs and 96% over SATA/SAS based SSDs. MCS latencies are not only significantly lower than any other flash based storage, but also provides deterministic latencies with minimal variability, allowing applications to commit data quickly and consistently.

As well as being used for storage, the MCS DIMMs can also be used as an expansion of system memory,  allowing TBs of addressable system memory instead of GBs.  That’s another use case for another day though;  right now I’m interested in this as super low latency block storage and it sounds like this could be the next big thing in server based flash solutions.
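
As a quick sanity check on those percentages, the sketch below derives the implied SSD baseline latencies purely from the figures quoted above (the sub-5μs MCS response time and the 85% / 96% reduction claims); the results are back-of-envelope numbers, not vendor specifications.

```python
# Derive the implied PCI-Express and SATA/SAS SSD latencies from the quoted figures.
mcs_latency_us = 5.0   # "response time of less than 5 microseconds"

for name, reduction in (("PCI-Express SSD", 0.85), ("SATA/SAS SSD", 0.96)):
    implied_baseline_us = mcs_latency_us / (1.0 - reduction)
    print(f"{name}: implied latency of roughly {implied_baseline_us:.0f} microseconds")

# PCI-Express SSD: ~33 microseconds, SATA/SAS SSD: ~125 microseconds
```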

2. Software Defined Storage Pooling with EMC ScaleIO

At present the problem I see with the above technology is the fact that because it’s basically direct attached storage (DAS) there is no protection of the data.  So you’re looking at stateless deployments or utilising an overlay to provide some host level clustering of the DAS resources.  This is where ScaleIO comes in…

ScaleIO in its simplest terms is a software defined storage solution that allows customers to leverage the local disk in application servers in a pooled manner.  Through the installation of the ScaleIO agent known as the ScaleIO Data Server (SDS), local storage can be combined to create large scale elastic pools of storage.  Some of the key benefits are listed below:

  • Scale from 100’s to 1000’s of nodes
  • Fast Auto-Rebuild
  • High levels of IO Parallelism
  • Platform Agnostic
  • Auto-rebalance capability
  • SSD/ HDD / PCI-E
  • Add, move, remove disks / nodes on the fly
  • Partitioning /Tiering /Multi-tenancy
  • 2 and 3 way protection mechanism.
  • Snapshots (Writeable)

image

ScaleIO technically converges the storage and application layers and can be used in a number of different configurations.  Hosts are of course interconnected;  this can be achieved using Ethernet or Infiniband (in my example I’m thinking high performance back end here). As the ScaleIO footprint is small (the SDS is 100Mb – 400Mb and the SDC 20Mb), the two can be deployed side by side and all the assets in a server can be sweated to deliver a complete solution.

mixed model  

You can however configure things in a traditional 2-tier setup as shown below,  where applications utilising the ScaleIO Data Client (SDC) access storage from dedicated storage nodes running the ScaleIO Data Server (SDS) agent.

2 tier model 
In reality,  you can configure your application and ScaleIO in any way you like;  it depends on your requirements.  Add into this mix the flexibility of agnostic support for physical and virtual nodes, full protection zones,  quality of service, 2 and 3 way data protection and the upcoming storage pools for performance.  It’s quite a neat little product.

Any Option

So let’s put this back into the context of the use case I was originally discussing.  By taking a number of hosts containing the MCI low latency block storage,  connecting those hosts with Infiniband and adding ScaleIO over the top I’ve got myself a pretty interesting low latency,  highly parallelised IO storage system using commodity off the shelf hardware.  What’s not to like?
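
To put some rough numbers on that combination, here’s a minimal capacity sketch. The node count, MCS device size, protection copies and spare reservation are all assumptions for illustration; they are not ScaleIO defaults or sizing guidance.

```python
# Rough usable-capacity estimate for a ScaleIO pool built from MCI flash devices.
# All inputs below are illustrative assumptions, not product defaults.

nodes = 10                 # hosts contributing local MCI storage via the SDS
tb_per_node = 0.4          # one 400GB MCS device per host
copies = 2                 # 2-way protection (3-way is also available)
spare_fraction = 0.10      # capacity held back so a failed node can be rebuilt

raw_tb = nodes * tb_per_node
usable_tb = raw_tb * (1 - spare_fraction) / copies

print(f"{nodes} nodes contribute {raw_tb:.1f}TB raw")
print(f"~{usable_tb:.2f}TB usable with {copies}-copy protection and {spare_fraction:.0%} spare")
```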

3. Cost Effective / Good Enough in-memory alternative using MCI and ScaleIO

So to the 3rd and final point:  the reason that I was discussing this topic with a customer in the first place was cost efficiency.  They were interested in how you could provide a good enough in-memory alternative,  both in terms of software and hardware.  The following is an interesting extract from a blog;  in this instance they’re looking at using the MCI as a memory extension and layering on software to utilise that memory. In my case I’m looking at it as a storage extension.  Regardless of the use case though, the point is the potential for cost saving, and there is one if you put some thought into it versus the traditional models.

http://www.gridgain.com/blog/why-mci-means-rapid-in-memory-computing-adoption/

Another important characteristic of MCI storage is the plug-n-play fashion in which it can be used – no custom hardware, no custom software required. Imagine, for example, an array of 100 micro-servers (ARM-based servers in micro form factor), each with 256GB of MCI-based system memory, drawing less than 10 watts of power, costing less than $1000 each.

You now have a cluster with 25TB in-memory storage, 200 cores of processing power, running standard Linux, drawing around 1000 watts for about the same cost as a fully loaded Tesla Model S.
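
The aggregate figures in that extract are easy to verify; in the sketch below the per-server core count is inferred from the quoted totals (200 cores across 100 servers), while the rest are the numbers given in the extract.

```python
# Check the aggregate cluster figures quoted in the extract above.
servers = 100
memory_gb_each = 256       # "each with 256GB of MCI-based system memory"
cores_each = 2             # inferred: 200 cores across 100 micro-servers
watts_each = 10            # "drawing less than 10 watts of power"
price_usd_each = 1000      # "costing less than $1000 each"

print(f"Memory : {servers * memory_gb_each / 1024:.0f}TB in-memory storage")  # ~25TB
print(f"Cores  : {servers * cores_each} cores of processing power")           # 200
print(f"Power  : {servers * watts_each} watts")                               # ~1000W
print(f"Cost   : ${servers * price_usd_each:,}")                              # ~$100,000
```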

Summary

This has been a bit of a ramble but I think what I am trying to say is,  the world is changing and there are now so many different ways to achieve your objectives.  The commoditisation of hardware and the major up shift in software defined everything is fundamentally changing the approach in IT.  As I said at the start,  it’s both scary and exciting all at the same time,  the only option is to embrace it if you ask me!

EMC, New Products, Storage

EMC VSI Storage Viewer, Solution Enabler and Symmetrix arrays

September 26th, 2011

I’ve recently been looking at the implementation of EMC’s free Virtual Storage Integrator (VSI) with a few of our older Symmetrix customers. Now customers using VMAX and VMAXe have the ability to deploy delegated storage provisioning for their VMware admins. However DMX customers only have the ability to use the read only storage viewer functionality, as the DMX is not supported with Storage Pool Manager (SPM) which back ends the storage provisioning.  Some interesting questions came up recently with a customer about how best to deploy the VSI storage viewer with DMX arrays and I thought it would be worth sharing the findings with a wider audience. Basically I’m looking to cover off the different ways the VSI can connect to a Symmetrix array and how some of the options selected affect end to end operations.

VSI to Symmetrix Connectivity

So the VSI tool can be used in two ways with Symmetrix arrays:  you can utilise the local Solutions Enabler (SE) installation that comes with the VSI or you can use a dedicated Solutions Enabler server. It’s important to remember that Symmetrix arrays can only be discovered in-band; basically this means the SE install needs a direct connection with the physical array.  This is achieved through the presentation of small LUNs known as gatekeeper LUNs, something existing Symmetrix storage teams will be very familiar with. So let’s look at the two different possible setups.

Local Solutions Enabler Deployment

LocalSEDeployment

The local deployment model shown above would require a gatekeeper LUN being presented / zoned through to the machine that the VI Client, VSI and local SE install have been deployed on. Communication with the array in this instance flows directly between the client PC and the array. In the majority of instances this isn’t going to be very practical for a number of reasons.

  • Each VMware admin client with VSI deployed would need a direct array connection.
  • Most Symmetrix arrays are FC attached and client PCs are not.
  • Arrays live in remote data centres and VMware admin PCs live in the office.
  • Increased security risk, i.e. too many direct array connections to secure

 

Remote Solutions Enabler Deployment
 RemoteSEDeployment

The remote deployment model shown above would require gatekeeper LUNs being zoned through to a dedicated server. VMware admins would then connect through this remote SE server when querying Symmetrix arrays for information with the VSI. Communication flow in this instance always goes through the server, however as you’ll see later results can be returned from the SE server or the array depending on VSI configuration. This kind of setup is more practical for a number of reasons.

  • Remote SE servers are usually already in place for storage management tasks.
  • Available as a virtual appliance for quick deployment if not in place already.
  • Supports connectivity by multiple remote VMware admins using VSI.
  • Manage multiple Symmetrix arrays through one server.
  • Decreases security risk, i.e. single device connection to array.

 

Mix and Match

The model above is by no means rigid;  you can craft a number of solutions out of the principles shown above.  If your vCenter server sat in the same data centre as the array then you could present gatekeeper LUNs to it and use this as a management point whenever you want to get information from the array.  Another possible solution is to put a management virtual machine in the data centre with the VI Client and VSI installed and present a gatekeeper as an RDM;  whenever a VMware admin needs information from the array they connect into that management VM to carry out the work. Basically there is a solution for deploying VSI with Symmetrix arrays no matter what your setup looks like.

VSI Discovery Process Flow

One question that did come up recently was what happens when you select the AutoSync option for a Symmetrix array and you are using the remote SE server solution. How often does it poll the array?  Well the answer is it doesn’t, which is strange as the term AutoSync gives the impression that it syncs with the array on a regular basis. So how does it work?

VSI_SYMM_1

AutoSync enabled

When AutoSync is enabled, each time you request array data (e.g. by clicking on the EMC VSI tab for a datastore) the request forces the SYMAPI database on the remote SE server to be updated from the array; the up to date array information is then returned to the VSI. There is obviously a slight cost involved in doing this as the remote SE server needs to make the calls to the array in order to update its local database before responding.  Typically this would introduce a 10 – 20 second delay, but that cost means you guarantee the information received is up to date and valid.

AutoSync Enabled

AutoSync disabled

When AutoSync is disabled, each time you request array data the request is returned from the cached information in the local SYMAPI database on the remote SE server. This is obviously the fastest method as you don’t have the cost of querying the array directly for an update, but the information may be out of date.

AutoSync Disabled

With AutoSync disabled it’s up to the VMware administrator to initiate the sync of the array from within the VSI.  Alternatively the storage team can initiate a sync with the array directly through the SE server using SYMCLI. To initiate a sync manually go into the VSI tool and select Symmetrix arrays from the list of features, highlight the array and click on Sync Array.

VSI_SYMM_2
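
To summarise the two behaviours, here’s a tiny sketch of the request flow described above. The function and parameter names are purely illustrative; this is a model of the decision, not the actual VSI or Solutions Enabler code.

```python
# Illustrative model of how a VSI array-data request is served.
# The refresh callback stands in for the SE server updating its SYMAPI
# database from the array, which typically costs 10 - 20 seconds.

def get_array_data(autosync_enabled, cached_symapi_data, refresh_from_array):
    if autosync_enabled:
        # Slower path: force a refresh so the answer is guaranteed current.
        return refresh_from_array()
    # Fast path: answer from the cached SYMAPI database, which may be stale
    # until someone clicks "Sync Array" in the VSI or runs a SYMCLI sync.
    return cached_symapi_data
```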

Summary

The free EMC VSI Storage Viewer tool can be of great benefit to Symmetrix customers, allowing VMware admins improved visibility of the underlying storage layers.  In larger environments where Symmetrix arrays are traditionally used you tend to find VMware and storage are managed by separate teams.  Anything that improves the information flow between the two teams during troubleshooting has to be a must have tool. As shown above, some thought needs to be given to how you set it up. My personal preference would be to always go for the remote SE server solution.  Enable AutoSync if your underlying VMware storage environment changes often; if it doesn’t then a manual sync every now and again should suffice.

Additional notes and links

SPC-2 Flags

It’s worth noting that SPC-2 flags need to be set on the FA port or on the initiator of the ESX host for the VSI to work correctly; in fact this is a required setting for ESX generally. This has come up a couple of times recently so I thought it worth mentioning to ensure that people have it set up correctly;  the following whitepaper gives you more information.

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H4116-enabling-spc2-compl-emc-symmetrix-dmx-vmware-envnmt-wp.pdf

VSI installation Media

Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator (VSI) – (Please Note: PowerLink account required)

Solutions Enabler Media

Home > Support > Software Downloads and Licensing > Downloads S > Solutions Enabler – (Please Note: PowerLink account required)

Solutions Enabler Documentation

Home > Support > Technical Documentation and Advisories > Software > S > Documentation > Solutions Enabler – (Please Note: PowerLink account required)

Storage, VMware

EMC PowerPath/VE – MpxEsxPathClaim failure – LUNz

October 16th, 2010
I stumbled across some interesting error messages the other day in a host’s vmkwarning log file. The errors were related to the deployment of PowerPath/VE and as I dug a little deeper I found this was an issue on all 3 hosts I had recently built.  When I connected via ILO I was seeing additional error messages on the console, as you can see from the screenshot below.
 
ESX_PPathVE_Issue
 
I was a bit concerned as this was my first deployment of ESX 4.1 with the latest PowerPath/VE 5.4 SP2 software.  I was also a little confused as I hadn’t actually presented any LUNs to this ESX cluster. All I had done was zone the HBAs through to the storage processors, set up a storage group on the CLARiiON and add the hosts to it. So what path was PowerPath/VE trying to claim if there were no LUNs presented?
 
While on the service console of one host I ran the command esxcfg-mpath -l to list all the paths.  As you can see from the screenshot below I came across a fibre channel disk that the HBAs were attempting to connect to, even though nothing had been presented.  I also noticed that it was using the VMware native multi-path plug-in and not PowerPath/VE, and was also showing as dead! So was this the path that PowerPath was failing to claim?
 
ESX_PPathVE_Issue_6
 
Following discussions with EMC Support I was informed that this was what is known as a LUNz. This is a fake logical unit presented to the host by the storage processors to provide a path for host software to send configuration commands to the array when no LUNs have been presented.
 
In ESX 4.x the Navisphere host agent that you would normally install on a CLARiiON attached host is not supported.  Instead the ESX host registers itself with the storage array automatically (requires FLARE 28 or later). In order to do this it needs a communication path to the array, hence the LUNz being presented. If a LUN is presented to the host then that would be used instead;  therefore the LUNz disappears when a LUN is added, as it is no longer required.
 
So I added a 100GB LUN to the storage group and I immediately received a number of email alerts from vCenter stating that connectivity had been lost to the naa.500 device.  So as expected, as soon as LUN 0 is presented the LUNz is no longer required.
 
The key identifier that this was a LUNz was the naa.500 shown in the device name (see the screenshot above). A LUN presented from the CLARiiON shows up with a device name of naa.600, as you can see in the screenshot below.  You can also see that the plug-in is now set to PowerPath, which is what I would expect.
 
ESX_PPathVE_Issue_7
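
If you wanted to script that check across a list of devices, a trivial helper like the one below would do it. The naa.500 / naa.600 prefixes are simply what was observed on this particular CLARiiON setup, not a universal rule for every array, so treat it as an illustrative sketch.

```python
# Classify device names based on the prefixes observed in this environment:
# the LUNz appeared as naa.500..., real presented LUNs as naa.600...

def classify_device(naa_id: str) -> str:
    if naa_id.startswith("naa.500"):
        return "likely LUNz (array communication device, claimed by NMP)"
    if naa_id.startswith("naa.600"):
        return "presented LUN (should be claimed by PowerPath/VE)"
    return "unknown device type"

for device in ("naa.5006016012345678", "naa.60060160abcdef0012345678"):
    print(device, "->", classify_device(device))
```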
 
Looking in vCenter I can now see multiple active paths to the device which tells me that PowerPath/VE has truly kicked in and is working as expected.
ESX_PPathVE_Issue_8
 
There wasn’t a lot of information out there on the particular error messages I was receiving (listed below), so hopefully this explanation will allay some fears.  I know I was concerned when I first saw the messages but thankfully it turned out to be by design.
 
EmcpEsxLogEvent: Error :emcp :MpxEsxPathClaim :Could not claim the path. Status : Failure
ScsiPath: 3815: Plugin ‘PowerPath’ had an error (Failure) while claiming path
ScsiDevice: 1726: Full GetDeviceAttributes during registration of device ‘naa.500…’  : failed with I/O error
ScsiDeiceIO: 5172: READ CAPACITY on device “naa.50…’ from Plugin “NMP” failed. I/O error

ESX, Storage, VMware

vCenter disk performance – SCSI LUN ID display

September 26th, 2010

Something has been bugging me for some time now about vCenter disk performance statistics.  Basically vCenter shows each SCSI LUN with a unique ID, as per the following screenshot. When viewed through the disk performance view it’s impossible to tell what is what, unless of course you know the NAA ID off by heart!?

image 
I was working on a project this weekend putting a Tier 1 SQL server onto our vSphere 4.0 infrastructure, so insight into disk performance statistics was key.  I decided I needed to sort this out and set about identifying each datastore and amending the SCSI LUN ID name; here is how I did it.

Identify the LUN

First of all navigate to the datastore view from the home screen within vCenter

image 

Click on the datastore you want to identify and then select the configuration tab

NAA_Post2 

Click on the datastore properties and then select manage paths

Note down the LUN ID (in this case 2) and also note down the capacity

image

Change the SCSI LUN ID

Now navigate to the home screen and select Hosts and Cluster

image

Select a host, change to the configuration tab and then select the connecting HBA

image

At the bottom identify the LUN using the ID and capacity noted earlier and rename the start of the ID. I chose to leave the unique identifier in there in case it is needed in the future.

image

Now when you look at the vCenter disk performance charts you will see the updated SCSI LUN ID making it much more meaningful and useable.

NAA_Post6

Raw Device Mappings

If you have Raw Device Mappings (RDM) attached to your virtual machine then these too are capable of showing up in the vCenter disk performance stats.  It’s the same process to change the name of the SCSI LUN ID, however it’s slightly different when identifying them.  To do so carry out the following.

Edit the settings of the VM, select the RDM file, select Manage Paths and then note down the LUN ID for the RDM.  Use this to identify the LUN under the Storage Adapter configuration and change it accordingly.

image

Having made these changes I can now utilise the vCenter disk counters to complement ESXTOP and my SAN monitoring tools.  Now I have a full end to end view of exactly what is happening on the storage front, invaluable when virtualising Tier 1 applications like SQL 2008.

There are a plethora of metrics you can look at within vCenter;  if you would like to understand what they all mean then check out the following VMware documentation.

VMware vCenter SDK reference guide – Disk Counters

Storage, vCenter, VMware

What’s new in vSphere 4.1 Storage

September 2nd, 2010

So I haven’t done a lot of real time blogging at VMworld this year as I’ve been busy trying to see and soak up as much as possible.  It’s not every day that you get access to the likes of Chad Sakac (VP of the EMC / VMware alliance), Scott Drummond (EMC – ex VMware performance team) and a whole host of other technology movers and shakers. As you can imagine I took full advantage of these opportunities and blogging became a bit of a secondary activity this week.

However, I’ve now had time to reflect, and one of the most interesting areas I covered this week was the new storage features in vSphere 4.1. I had the chance to cover these in multiple sessions, see various demos and talk about it with the VMware developers and engineers responsible. There are two main features I want to cover in depth as I feel they are important indicators of the direction that storage for VMware is heading.

SIOC – Storage I/O Control

SIOC has been in the pipeline since VMworld 2009; I wrote an article on it previously called VMware DRS for Storage, which was slightly presumptuous of me at the time, but I was only slightly off the mark. For those of you who are not aware of SIOC, to sum it up again at a very high level let’s start with the following statement from VMware themselves.

SIOC provides a dynamic control mechanism for proportional allocation of shared storage resources to VMs running on multiple hosts

Though you have always been able to add disk shares to VMs on an ESX host, this only applied to that host;  it was incapable of taking account of the I/O behaviour of VMs on other hosts. Storage I/O Control is different in that it is enabled on the datastore object itself; disk shares can then be assigned per VM inside that datastore. When a pre-defined latency level is exceeded on the datastore it begins to throttle I/O based on the shares assigned to each VM.

How does it do this, what is happening in the background here? Well SIOC is aware of the storage array device level queue slots as well as the latency of workloads.  During periods of contention it decides how it can best keep machines below the predefined latency tolerance by manipulating all the ESX Host I/O Queues that affect that datastore.

In the example below you can see that, based on disk share value, all VMs should ideally be making the same demands on the storage array device level queue slots.  Without SIOC enabled that does not happen. With SIOC enabled it begins throttling back the use of the second ESX host’s I/O queue from 24 slots to 12 slots, thus equalising the I/O across the hosts.

Paul Manning (Storage Architect – VMware product marketing) indicated during his session that there was a benefit to turning SIOC on and not even amending default share values.  This configuration would immediately introduce an element of I/O fairness across a datastore as shown in the example described above and shown below.

SIOC_Fairness_Full
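
As a toy illustration of that proportional behaviour (and definitely not VMware’s actual algorithm), the sketch below divides a device’s queue slots across hosts in proportion to the aggregate share value of the VMs running on each host; the slot and share numbers are made up to mirror the example above.

```python
# Toy model of proportional queue-slot allocation during contention.
# Not VMware's implementation, just the share-based idea behind SIOC.

def allocate_queue_slots(total_slots, shares_per_host):
    total_shares = sum(shares_per_host.values())
    return {host: round(total_slots * shares / total_shares)
            for host, shares in shares_per_host.items()}

# Two hosts whose VMs hold equal shares, contending for 24 device queue slots:
# instead of one host monopolising the queue, each ends up with 12 slots.
print(allocate_queue_slots(24, {"esx01": 1000, "esx02": 1000}))
# {'esx01': 12, 'esx02': 12}
```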

So this functionality is now available in vSphere 4.1 for Enterprise Plus licence holders only.  There are a few immediate caveats to be aware of:  it’s only supported with block level storage (FC or iSCSI), so NFS datastores are not supported. It also does not support RDMs or datastores constructed of extents;  it only supports a 1:1 LUN to datastore mapping. I was told that extents can cause issues with how the latency and throughput values are calculated,  which could in turn lead to false positive I/O throttling; as a result they are not supported yet.

It’s a powerful feature which I really like the look of. I personally worry about I/O contention and the lack of control I have over what happens to those important mission critical VM’s when that scenario occurs. The “Noisy Neighbour” element can be dealt with at CPU and Memory level with shares but until now you couldn’t at a storage level. I have previously resorted to purchasing EMC PowerPath/VE to double the downstream I/O available from each host and thus reduce the chances of contention.  I may just rethink that one in future because of SIOC!

Further detailed information can be found in the following VMware technical documents

SIOC – Technical Overview and Deployment Considerations

Managing Performance Variance of applications using SIOC

VMware performance engineering – SIOC Performance Study

VAAI – vStorage API for Array Integration

Shortly before the vSphere 4.1 announcement I listened to an EMC webcast run by Chad Sakac.  In this webcast he described EMC’s integration with the new vStorage API, specifically around offloading tasks to the array. So what does all this mean, what exactly is being offloaded?

VAAI_Features 
So what do these features enable? Let’s take a look at them one by one.

Hardware assisted locking as described above provides improved LUN metadata locking.  This is very important for increasing VM to datastore density.  If we use the example of VDI boot storms: if only the blocks relevant to the VM being powered on are locked then you can have more VMs starting per datastore.  The same applies in a dynamic VDI environment where images are being cloned and then spun up; the impact of busy cloning periods, i.e. first thing in the morning, is mitigated.

The full copy feature would also have an impact in the dynamic VDI space, with the cloning of machines taking a fraction of the time as the ESX host is not involved. What I mean by that is that when a clone is taken now, the data has to be copied up to the ESX server and then pushed back down to the new VM storage location.  The same occurs when you do a storage vMotion; doing it without VAAI takes up valuable I/O bandwidth and ESX CPU clock cycles. Offloading this to the array prevents this use of host resource and in tests has resulted in a saving of 99% on I/O traffic and a 50% saving on CPU load.

In EMC Labs a test of storage vMotion was carried out with VAAI turned off; it took 2 mins 21 seconds.  The same test was tried again with VAAI enabled, and this time the storage vMotion took 27 seconds to complete. That is a 5x improvement, and EMC have indicated that they have had a 10x improvement in some cases. Check out this great video which shows a storage vMotion and the impact on ESX and the underlying array.
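
For what it’s worth, the speed-up implied by those two lab figures is easy to check:

```python
# Speed-up implied by the EMC lab figures quoted above.
without_vaai_seconds = 2 * 60 + 21   # 2 mins 21 seconds
with_vaai_seconds = 27

print(f"Speed-up: {without_vaai_seconds / with_vaai_seconds:.1f}x")   # ~5.2x
```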

There is also a 4th VAAI feature which has been left in the vStorage API but is currently unavailable; Mike Laverick wrote about it here. It’s a Thin Provisioning API, and Chad Sakac explained during the group session that its main use is for Thin on Thin storage scenarios. The vStorage API will in the future provide vCenter insight into array level over provisioning as well as the VMware over provisioning.  It will also be used to proactively stun VMs as opposed to letting them crash as currently happens.

As far as I knew, EMC was the only storage vendor offering array compatibility with VAAI. Chad indicated that they are already working on VAAI v2, looking to add additional hardware offload support as well as NFS support. It would appear that 3Par offer support, so that kind of means HP do too, right? Vaughn Stewart over at NetApp also blogged about their upcoming support of the VAAI; I’m sure all storage vendors will be rushing to make use of this functionality.

Further detailed information can be found at the following locations.

What does VAAI mean to you? – Chad Sakac EMC

EMC VAAI webcast – Chad Sakac EMC

Storage DRS – the future

If you’ve made it this far through the blog post then the fact we are talking about Storage DRS should come as no great surprise.  We’ve talked about managing I/O performance through disk latency monitoring and talked about array offloaded features such as storage vMotion and hardware assisted locking. These features in unison make Storage DRS an achievable reality.

SIOC brings the ability to measure VM latency, thus giving a set of metrics that can be used for storage DRS.  VMware are planning to add a capacity metric to the storage DRS algorithm and then aggregate the two metrics for placement decisions.  This will ensure a storage vMotion of an underperforming VM does not lead to capacity issues and vice versa.

Hardware Assisted Locking in VAAI means we don’t have to be as concerned about the number of VMs in a datastore, something you have to manage manually at the moment.  This removal of a limitation means we can automate better, a storage DRS enabler if you will.

Improved Storage vMotion response due to VAAI hardware offloading means that the impact of storage DRS is minimised at the host level. This is one less thing for the VMware administrator to worry about and hence smoothes the path for storage DRS Adoption.  As you may have seen in the storage vMotion video above the overhead on the backend array also appears to have been reduced, so you’re not just shifting the problem somewhere else.

For more information I suggest checking out the following (VMworld 2010 account needed)

TA7805 – Tech Preview – Storage DRS

Summary

There is so much content to take in across all three of these subjects that I feel I have merely scratched the surface.  What was abundantly clear from the meetings and sessions I attended at VMworld is that VMware and EMC are working closely to bring us easy storage tiering at the VMware level.  Storage DRS will be used to create graded / tiered data pools at the vCenter level, pools of similar type datastores (RAID, disk type). Virtual machines will be created in these pools, auto placed and then moved about within that pool of datastores to ensure capacity and performance.

In my opinion it’s an exciting technology, one I think simplifies life for the VMware administrator but complicates life for the VMware designer. It’s another performance variable to concern yourself with and, as I heard someone in the VMworld labs comment, “it’s a loaded shotgun for those that don’t know what they’re doing”.  Myself, I’d be happy to use it now that I have taken the time to understand it; hopefully this post has made it a little clearer for you too.

Gestalt-IT, Storage, VMware, vmworld

Gestalt IT Tech Field Day Seattle – NEC HYDRAstor

July 16th, 2010

Following my return from my first Tech Field Day I have been reading through my notes and reflecting on the vendors I saw when I was in Seattle.  Of the vendors I saw, the one that surprised me most was NEC;  everyone has heard of them but not everyone actually knows what they do or what products they make.  As we found out during our visit, NEC have a broad technology portfolio and quite an interesting offering in the storage space.

Here are some basic facts about NEC that you may / may not know

- Founded in 1899
- Fortune 200 company with over 143,000 staff
- Revenues of $43 Billion in 2009
- $3 Billion spent in R&D each year across 12 R&D global labs
- 48,000 patents worldwide.
- Have been in storage since 1950 

So with that little history lesson over, the main focus of our visit was NEC’s HYDRAstor. This is their modular grid storage offering for customers with backup and archive storage in mind. It’s marketed as “Grid storage for the next 100 years”, which may sound a little far-fetched, but data growth and data retention periods are ever increasing.   From what I saw and heard the HYDRAstor could very well live up to this bold claim.

There was a lot of content delivered on the day and the session went on for 4 hours, so I’ve tried to wrap up some of the key features below. I have expanded on the key elements of the HYDRAstor that really caught my attention as I think they are worth exploring in more detail.

Key Features

- 2 tier architecture based entirely on best of breed Intel Xeon 5500 based servers

- 2 tier architecture consists of front end accelerator nodes and back end storage nodes

- Shipped as a turnkey solution, though entry level can be bought for self racking.

- Supports a Maximum of 165 Nodes, 55 accelerator nodes and 110 storage nodes

- All interconnects based on 1Gb Ethernet networking (NEC network switches included)

- Supports old and new node modules in the same Grid for easy node upgrade and retirement.

- Supports volume presentation with NFS and CIFS (SMB Version 1)

- Non-disruptive auto reallocation of data across any additional grid capacity – DynamicStor

- higher levels of resilience than RAID with a reduced capacity overhead (See DRD below)

- WAN optimised grid to grid replication minimises network bandwidth requirements – RepliGrid

- WORM Support for secure retention / compliance governance – HYDRAlock

- Efficient drive rebuilds, only rebuild the actual data not the whole drive.

- Global inline de-duplication across the entire grid – DataRedux™

- Tight backup vendor integration – strips out backup metadata to improve de-dupe ratios

- Mini HYDRAstor appliance available for remote offices or offsite DR replication.

Data Protection – Distributed Resilient Data™ (DRD)  

The resilience provided by HYDRAstor really caught my eye, primarily because it was so different from anything I had ever seen before.  Distributed Resilient Data (DRD) uses something known as erasure coding to provide extremely high levels of resilience. Now you may think that this would come with a considerable storage and performance overhead, but you’d be wrong.

The HYDRAstor provides 6 levels of protection (1 – 6), all with differing levels of protection and capacity overhead. With the default level 3 selected, NEC’s implementation of erasure coding splits the data chunks into 12 parts, 9 data and 3 parity. The use of erasure coding means that it only ever needs 9 parts to make up a complete data chunk. So if that data chunk is spread over 12 disks in a single storage node, it can withstand 3 disk failures. If those 12 parts are spread over 12 storage nodes then you can withstand 3 complete node failures.

This default level 3 protection requires a 25% capacity overhead, much like RAID 5.  However, by providing for 3 disk failures it provides 300% more protection than RAID 5 and 150% more protection than RAID 6.  If you want to go to the highest level of protection (level 6) then there is a 50% capacity overhead, as with RAID 1, however you can withstand the failure of 6 disks or 6 nodes.
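
The overhead figures fall straight out of the fragment counts. The sketch below uses the 9+3 split quoted above for the default level 3; the 6+6 split for level 6 is my inference from the 50% overhead and 6-failure tolerance mentioned, not a published NEC parameter.

```python
# Capacity overhead and failure tolerance of a k-of-n erasure-coded layout.

def erasure_profile(data_fragments, parity_fragments):
    total = data_fragments + parity_fragments
    return {
        "fragments": total,
        "tolerates_failures": parity_fragments,   # disks or nodes, depending on spread
        "capacity_overhead_pct": round(100 * parity_fragments / total, 1),
    }

print(erasure_profile(9, 3))   # default level 3: 25% overhead, survives 3 failures
print(erasure_profile(6, 6))   # inferred level 6: 50% overhead, survives 6 failures
```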

The following video describes Distributed Resilient Data™ (DRD) at the default level 3

 

High Performing

The demonstration NEC gave us was based on their lab setup of 20 accelerator nodes and 40 storage nodes.  This was a 4 rack setup, which as you can see from the photo below is not a small setup. What it is though, is a very high performing storage solution.

image

NEC demonstrated a data copy that utilised a full 10GB per second of throughput, which worked out at about 540MB per second per front end accelerator node.  The screenshot from the management GUI below shows the total throughput achieved.

The maximum HYDRAstor configuration consists of 11 racks and is capable of 25GB per second, or 90TB per hour. This works out at roughly 2PB in a 24 hour period, which is an astounding amount of data throughput.  Surely a level of throughput to deal with even the most demanding backup or archiving use case.
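
Those throughput conversions are worth a quick sanity check; the sketch below simply restates the quoted 25GB per second figure in TB per hour and PB per day.

```python
# Convert the quoted peak throughput into the other units mentioned above.
gb_per_second = 25

tb_per_hour = gb_per_second * 3600 / 1000   # 90TB per hour
pb_per_day = tb_per_hour * 24 / 1000        # ~2.2PB in a 24 hour period

print(f"{tb_per_hour:.0f}TB per hour, roughly {pb_per_day:.1f}PB per day")
```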
 

image

There were a few negative aspects that I picked up on during our visit, thankfully all ones I feel can be addressed by NEC over time.

User Interface

I felt the user interface was a little dated (see screenshot above);  it served its basic purpose but wasn’t going to win any awards. It was a stark contrast when compared with the very nice and easy to use GUIs we saw from Nimble Storage and Compellent.  That said, if the HYDRAstor is only being used for backup and archive storage and not primary storage, does it actually need to have the world’s best GUI? Possibly not.

Solution Size

The HYDRAstor came across as a large solution, though I’m not sure why. When I think about it, any storage solution that provides 10GB/sec throughput and 480TB of raw storage is likely to take up 4 racks, in some instances probably a lot more.  Maybe it was the sheer number of network interconnects;  perhaps some consolidation with 10Gb Ethernet could assist in making the solution appear smaller.  NEC could also look at shrinking down the server sizes, probably only possible with the accelerator node servers as the storage nodes need 12 x 1TB disks, so not a lot of scope for size reduction there.

Marketing

A general consensus among delegates was: why have NEC marketing not been pushing this harder,  and why had so many of us in the room not heard about it? I suppose that was one of the reasons we were there, to hear about it, discuss it and ultimately blog about it as I’m doing now. There are some specific target markets that NEC maybe need to look at for this product, possibly looking at worldwide data retention regulations as a means of identifying potential markets and clients.  More noise needs to be made by NEC about their efficient de-dupe integration with enterprise backup products such as CommVault Simpana, Symantec NetBackup, TSM and EMC Networker.  More comments such as the one below wouldn’t hurt.

with the application aware de-duplication for CommVault we’ve optimized storage efficiency with a four times improvement in space reduction.
Pete Chiccino, Chief Information Officer, Bancorp Bank

EMEA availability

NEC told us that this product is not being actively pushed in the EMEA region.  Currently the product is only available for purchase in North America and Japan.  One of the points I made to NEC was that the HYDRAstor appeared to me to be a product that would have a lot of applications in the European marketplace, possibly more so in the UK.  I made specific reference to FSA regulation changes where financial companies are now required to keep all electronic communications for up to 7 years.  NEC’s HYDRAstor, with its high tolerance for failure, global de-duplication across all nodes and grid-like extensibility, is perfect for storing this kind of key critical compliance data.  That is a very specific example; another is insurance companies who have longer retention requirements, and museums digitising historical documents / books which have a “keep forever” retention requirement.

NEC contacted me via Twitter after the event to say that although it is not on sale in EMEA, if a company has a presence in the US they will be able to explore purchasing the HYDRAstor through NEC America.

Summary

I had no idea what to expect when we arrived at NEC’s offices; sure, I knew who they were but I had no idea what they were doing in the storage space. Gideon Senderov at NEC certainly saw to it that we had all the information needed to form an opinion; his knowledge of his product was simply outstanding.

NEC HYDRAstor is a product that is quite unique. It’s easy to scale up and scale out, has high levels of redundancy without the normal capacity penalty and of course exceptional levels of performance. It strikes me as a product that any IT professional responsible for backup, archiving and long term data retention would be very, very interested in.

Note : Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend, however all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.

Events, Gestalt-IT, Storage

Gestalt IT Seattle Tech Field Day – Day 2 Summary

July 16th, 2010

It’s now been a couple of days since the second day of the Gestalt IT Tech Field Day;  I’m actually taking the opportunity to write this on the plane on the way back from Seattle. So once again I thought I would do a summary post until I get the chance to write up a detailed post on each vendor.

 image image

Compellent were one of the main sponsors for the Seattle Tech Field Day and were responsible for us getting access to the Microsoft Campus. So a big thank you to Compellent for their support of Tech Field Day.

Compellent are a company I have had dealings with before;  I looked at buying one of their storage devices back in 2008 and was very impressed by the product they had on offer at the time.  This was a great chance for me to revisit Compellent two years on and see how things had changed.

Compellent in general still appears to be much the same product that I liked so much back in 2008.  Their pooled storage model, software controlled RAID write down, space efficient snapshots and WAN optimised thin replication are all superb features. Their main differentiator back in 2008 was their ability to do automated storage tiering (Data Progression™), something that others in the industry are starting to catch up to (EMC FAST). Compellent’s Data Progression technology is one that many customers actively use with good results; I was slightly disappointed though to learn that their data movement engine only executes once every 24 hours and cannot be made more frequent.  I’m not sure how that compares to EMC FAST but it is something I’ll include in a more expansive post.

A feature I had heard of but didn’t quite understand previously was Compellent’s Live Volume.  It’s another unique feature for Compellent and one of my fellow delegates even described it as “EMC vPlex that you could actually afford”. Compellent implement the Live Volume feature at software level as opposed to a hardware based implementation like EMC vPlex. Compellent are able to present the same volume, with the same identity, in two different locations; they do this using the underlying WAN optimised asynchronous replication. One point of note was that this is not an active / active DR-like setup;  this is a setup for use in a controlled maintenance scenario, such as SAN fabric maintenance or a DC power down test.

Compellent also took the opportunity to share some roadmap information. Highlights included the release of the 64-bit Series 40 controller based on Intel Nehalem, an encrypted USB device for seeding replication, a move to smaller 2.5” drives and 256-bit full disk encryption, among others.

Although we were situated on Microsoft’s campus for a large part of Tech Field Day, we were never presented to by Microsoft, which was a shame.  We did, however, get the chance to visit the Microsoft store, which is for employees only.  It gave us all a chance to buy some discounted Microsoft software and souvenirs of our visit to Redmond, which we all took advantage of.


Tech Field Day delegates Kevin Houston, Stephen Foskett and Jason Boche using their iPhones and iPads in the heart of the Microsoft campus. Note Jason Boche using an iPad and wearing his VMware VCDX shirt, brilliant!


Our afternoon session was spent a short bus ride away from Microsoft at NEC America’s Seattle office.  We were there to hear about NEC’s storage offering (I had no idea they even did storage) and more specifically the NEC HYDRAstor range. We had a very in-depth session on this fascinating product with Gideon Senderov, Director of Product Management for the HYDRAstor range.

NEC have taken an innovative approach with this product, one I was not expecting. They utilise full-blown NEC servers to provide a two-tier architecture made up of front-end accelerator nodes and back-end storage nodes.  On top of this they don’t use the traditional RAID model, instead using something known as erasure coding to provide improved data protection. I will deep-dive this particular data protection method in another article, but it was a very interesting and different approach from what I’m used to.
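
As a taster of why erasure coding is attractive, here’s a toy single-parity sketch in Python (essentially the RAID-5 idea): four data fragments plus one parity fragment give 80% usable capacity yet survive the loss of any one fragment, whereas keeping three full copies of everything leaves you at 33%. HYDRAstor’s real code is far more sophisticated and tolerates multiple simultaneous failures; the fragment sizes and helper names below are purely illustrative.

from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def encode(data_fragments):
    """Append a single XOR parity fragment (a toy n+1 scheme)."""
    return list(data_fragments) + [xor_blocks(data_fragments)]

def rebuild(stripe, missing_index):
    """Recover one lost fragment from the surviving fragments."""
    survivors = [f for i, f in enumerate(stripe) if i != missing_index]
    return xor_blocks(survivors)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # 4 data fragments
stripe = encode(data)                          # + 1 parity = 80% usable capacity
print(rebuild(stripe, 2))                      # b'CCCC' recovered after a 'failure'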

The HYDRAstor grid is marketed as “Storage for the next 100 years” and, with its grid architecture, it’s reasonably easy to see how that statement could be realised.  You can add additional nodes into the grid and it will automatically redistribute itself to take advantage of the capacity.  You can also mark nodes for removal, with the system evacuating their data so they can be taken out of the grid.  This, combined with the ability for old and new HYDRAstor nodes to co-exist, shows why it’s a good storage location for data with a very long-term retention requirement.
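
The self-rebalancing behaviour is easiest to picture with a consistent-hashing style placement scheme, where adding or retiring a node only re-homes a fraction of the chunks. The little sketch below demonstrates that property; it’s an assumption on my part for illustration and says nothing about NEC’s actual placement algorithm.

import hashlib
from bisect import bisect_right

class Grid:
    """Toy consistent-hash ring: adding or retiring a node re-homes only a
    fraction of the chunks, which is the property a self-rebalancing grid
    relies on. Illustrative only; not NEC's placement scheme."""

    def __init__(self, nodes):
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, chunk_id):
        keys = [k for k, _ in self.ring]
        i = bisect_right(keys, self._h(chunk_id)) % len(self.ring)
        return self.ring[i][1]

    def add_node(self, node):
        self.ring = sorted(self.ring + [(self._h(node), node)])

    def retire_node(self, node):
        # Marking a node for removal: take it off the ring and its chunks
        # naturally fall to the next node, which is where they are evacuated to.
        self.ring = [(k, n) for k, n in self.ring if n != node]

grid = Grid(["sn01", "sn02", "sn03", "sn04"])
before = {c: grid.owner(c) for c in (f"chunk{i}" for i in range(1000))}
grid.add_node("sn05")
moved = sum(1 for c in before if before[c] != grid.owner(c))
print(f"{moved}/1000 chunks moved after adding a node")   # roughly a fifth, not all of them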

It appeared to me that HYDRAstor was designed specifically as a destination for archive or backup data rather than a primary storage solution. The reason I say this is that when we discussed in-line de-duplication, the product was already integrated with the major backup vendors (Symantec NetBackup, CommVault Simpana, Tivoli Storage Manager and EMC Networker). NEC were being very clever by stripping out metadata from these backup formats to improve the level of de-dupe that could be achieved when storing backup data.
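
For anyone who hasn’t looked at de-duplication closely, the core mechanism is simple enough to sketch: carve the incoming stream into chunks, fingerprint each chunk and only store a chunk the first time its fingerprint is seen. The chunk size and hash choice below are arbitrary, and the docstring notes where the backup-format metadata stripping would conceptually fit in.

import hashlib

def dedupe(stream, chunk_size=4096):
    """Toy in-line dedupe: split a byte stream into fixed-size chunks and
    store each unique chunk only once, keyed by its fingerprint. Real
    products go further, e.g. stripping backup-application framing (catalog
    headers that differ on every run) before chunking so identical payload
    lines up across backups, with that framing stored separately."""
    store, recipe = {}, []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)          # written to disk only once
        recipe.append(fp)                    # tiny pointer kept per chunk
    return store, recipe

backup = (b"A" * 4096 + b"B" * 4096) * 100   # two unique blocks repeated 100 times
store, recipe = dedupe(backup)
print(len(recipe), "chunks referenced,", len(store), "actually stored")   # 200, 2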

I will revisit the HYDRAstor; once I have had a chance to go over my notes I fully intend to dedicate a full article to it, as I was very impressed.


Rodney Haywood and Gideon Senderov whiteboarding the configuration of the NEC HYDRAstor

Note : Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend, however all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.

Events, Gestalt-IT, Storage

Virtual Storage Integrator 3.0.0 for vSphere and EMC Storage

May 11th, 2010

I have been trying desperately this week to keep up to date with the latest announcements coming out of EMC World 2010.  The problem is that they appear to be making them, and blogging about them, faster than I can read and assimilate them.

One blog post that did catch my attention was a post by EMC’s Chad Sakac. Chad constantly amazes me; he generates a massive amount of extremely high-quality technical content for the EMC and VMware community. His blog post was entitled “EMC’s next generation vCenter Plugins” and details the latest and greatest offerings among EMC’s free vCenter plugins.

The Virtual Storage Integrator (VSI) V3.0.0 is a renaming of the existing EMC Storage Viewer 2.1 plugin that has been available for a while.  Why the rename? Well, EMC are introducing tighter integration by enabling storage provisioning from within vCenter; it has now surpassed being just a storage viewer.  The storage provisioning integration works with the CLARiiON across all protocols (FC, iSCSI, FCoE) and it also works with NFS on the Celerra. It adds a greater degree of simplicity and reduces risk by automating all the tasks involved in provisioning and presenting storage to your vSphere cluster.
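
To give a feel for what “all the tasks” actually means, here’s a rough sketch of the manual workflow that such a plugin collapses into a single wizard. None of the class or method names below are real EMC or vSphere APIs; they simply stand in for the steps an admin would otherwise perform by hand.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    def rescan_hbas(self): print(f"{self.name}: rescanning storage adapters")
    def mount_datastore(self, ds): print(f"{self.name}: datastore '{ds}' visible")

@dataclass
class Array:
    pool: str
    def create_lun(self, name, size_gb):
        print(f"creating {size_gb}GB LUN '{name}' in pool '{self.pool}'")
        return name
    def present_to(self, lun, hosts):
        print(f"masking LUN '{lun}' to {[h.name for h in hosts]}")

def provision_datastore(array, hosts, name, size_gb):
    """The manual steps a provisioning wizard automates: carve the LUN,
    present it to every host in the cluster, rescan, format VMFS once
    and confirm it is mounted everywhere."""
    lun = array.create_lun(name, size_gb)
    array.present_to(lun, hosts)
    for h in hosts:
        h.rescan_hbas()
    print(f"formatting VMFS datastore '{name}' on LUN '{lun}'")
    for h in hosts:
        h.mount_datastore(name)

provision_datastore(Array(pool="Pool_0"), [Host("esx01"), Host("esx02")],
                    name="prod_ds01", size_gb=500)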

Chad explains it in much more detail and much better than I ever could in the following video.

I personally feel that the benefits of EMC’s ownership of, and tight working relationship with, VMware are beginning to shine through.  Tight levels of integration like this are now being delivered, and future development doesn’t look likely to slow down either. The quote from Chad below shows how aggressively his team are working to constantly bring new features to the table and, best of all, they’re completely free!

EMC Virtual Storage Integrator (or VSI) is the main EMC vCenter plug-in.  Over time, more and more functions that currently require installing one or more additional plugins will fold into the EMC Virtual Storage Integrator.  We’re beefing up the team behind it, and they have a very aggressive roadmap (wait till you see what’s next!!!)

Click the link below to find out more about what vCenter plugins are available, what they’re designed to do and where you can download them from in EMC Powerlink.

Plugin_Image

Storage, VMware, vSphere

Virsto One, Hyper-V and the I/O Blender effect

February 24th, 2010

One of the things I’ve come to love about blogging is the fact that I occasionally get contacted by the odd tech start-up, keen to demonstrate their latest market-leading idea that is going to revolutionise the industry as we know it.  Earlier this month I was contacted by Mindy Anderson, Product Director at tech start-up Virsto (short for Virtual Storage). Virsto had a new product for Microsoft Hyper-V that they wanted to demonstrate to me in advance of their big product launch. Having looked at Mindy’s background in the storage industry, I was very keen to hear more about their new product.

The product is called Virsto One and is aimed solely at Windows Server 2008 R2 Hyper-V. It introduces some new features, like thin-provisioned clones and snapshots, that expand the functionality of the standard Hyper-V product. The most interesting feature in my opinion, though, is the attempt to tackle the virtualisation/storage problem commonly known as the I/O blender effect.

So what does Virsto One look like?

The software itself installs in the parent partition of each Hyper-V host and consists of a filter driver, a system service and a VSS provider.  The filter driver sits above the raw storage (any block storage) and presents a VHD object to the parent partition.  This setup allows users to configure physical storage once and then use Virsto One to carry out all future provisioning tasks. This includes full support for creating thin-provisioned, high-performing, cluster-aware snapshots and clones from either the Virsto One Hyper-V MMC snap-in or PowerShell.

Virsto_1
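
For anyone wondering how thin-provisioned clones stay thin, here’s a minimal copy-on-write sketch: a clone shares every block with its golden image until that block is written, so a freshly created clone consumes next to no space. This shows the general technique only and says nothing about Virsto’s on-disk format.

class ThinClone:
    """Toy copy-on-write clone: blocks are shared with the parent image
    until they are overwritten, so only changed blocks consume space."""

    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared, read-only base image
        self.delta = {}               # block number -> private copy

    def read(self, blockno):
        return self.delta.get(blockno, self.parent.get(blockno, b"\0" * 4096))

    def write(self, blockno, data):
        self.delta[blockno] = data    # copy-on-write: store only the change

golden = {n: b"\0" * 4096 for n in range(1024)}   # a 4 MiB 'golden image'
vm1, vm2 = ThinClone(golden), ThinClone(golden)
vm1.write(7, b"x" * 4096)
print(len(vm1.delta), len(vm2.delta))             # 1 0 -> both clones stay thin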

So what about the I/O blender effect?

Most storage technologies are not designed for the virtual data centre; most are still designed around the one-to-one physical server to storage model. Think of a number of virtual machines, each with predictable I/O behaviour if you consider them individually as physical servers.  What actually comes out of the physical hypervisor host is a large amount of completely random I/O.  Random I/O has an obvious performance impact when compared with sequential I/O, so as you increase the number of VMs you increase the random I/O from your Hyper-V host.  As VM density increases, performance drops, and as we all know, low VM density is not your objective when you embark on a virtualisation project.
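
The blending itself is easy to demonstrate. In the toy simulation below each VM issues perfectly sequential block addresses within its own virtual disk, but once the hypervisor interleaves the queues, hardly any adjacent requests arriving at the storage are sequential. The VM count and stream lengths are arbitrary.

import random

def vm_stream(vm_id, length=32):
    """On its own, each VM issues nicely sequential block addresses."""
    base = vm_id * 1_000_000
    return [base + i for i in range(length)]

def blended(vm_count=6):
    """Interleave the per-VM queues the way a busy hypervisor host does."""
    queues = [vm_stream(v) for v in range(vm_count)]
    out = []
    while any(queues):
        q = random.choice([q for q in queues if q])
        out.append(q.pop(0))
    return out

stream = blended()
sequential = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
print(f"{sequential}/{len(stream) - 1} adjacent requests are sequential")   # usually very few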

So Virsto One has an interesting way of dealing with this. Although the “secret sauce” has never been divulged in depth, in its basic form they journal the random I/O that comes down from the Hyper-V host to a staging disk.  A staging area is required per physical Hyper-V host, and about 20GB to 30GB of disk should support multi-terabyte write-downs through the use of de-dupe technology. Periodically the data in the staging disks is flushed, or written down, to the primary storage location; at this point the random I/O is laid down sequentially on primary storage to improve read performance. Virsto indicated that in time they would look to support multiple de-stages, so that data could be de-staged to another array for business continuity purposes or to the cloud for disaster recovery purposes.
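
In log-structured terms the basic idea looks something like the sketch below: writes are appended to a staging journal in arrival order (cheap and sequential on the staging disk), then periodically coalesced and flushed to primary storage in block-address order so they land sequentially. This is my own simplified reading of the technique, not Virsto’s implementation.

class WriteStagingLog:
    """Toy journal-then-destage: random writes are appended to a staging
    log, then periodically coalesced and written to primary storage in
    block-address order. A sketch of the general log-structured idea only."""

    def __init__(self):
        self.journal = []     # (lba, data) in arrival order
        self.primary = {}     # simulated primary storage

    def write(self, lba, data):
        self.journal.append((lba, data))     # fast append to the staging disk

    def destage(self):
        latest = dict(self.journal)          # keep only the newest write per block
        for lba in sorted(latest):           # flush in LBA order = sequential layout
            self.primary[lba] = latest[lba]
        self.journal.clear()

log = WriteStagingLog()
for lba in (907, 12, 554, 12, 3):            # random-looking writes from many VMs
    log.write(lba, b"data")
log.destage()
print(sorted(log.primary))                   # [3, 12, 554, 907] laid down in order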

Virsto_2
Are there any performance figures to back this up?

Performance figures from the Virsto test lab show the I/O blender effect in full force as VM density increases in the standard Microsoft setup.  With the Virsto software sitting in the middle, staging the data and de-staging it sequentially, there is an obvious improvement in performance.  These test results were from Virsto’s own lab, and I stressed the importance of having them independently benchmarked by customers or an external consultancy.  Mindy indicated to me that this was something they were looking into; I look forward to reading and sharing the whitepaper when it is eventually produced.

Virsto_Graph

So who would be interested in a product like this?

Well, ideally the product would benefit Hyper-V customers who require high-density, high-performing virtual environments.  Hosting companies making use of Hyper-V for selling virtual server instances may well see Virsto as a good way of increasing performance and reducing costs through the use of golden images, snapshots, etc.  Who knows though, individual companies with an investment in Hyper-V may well see the benefit in this kind of product.  In a way I see it as not too dissimilar to a company buying PowerPath/VE to increase I/O performance in a vSphere environment.

It is important to note that although this product has initially been built for Microsoft Hyper-V, the principles behind it are hypervisor agnostic.  I asked the question “why Hyper-V?” at the start of my chat with Virsto; the answer was that Hyper-V had functionality gaps and was easier to integrate into.  VMware, on the other hand, is a more mature product where VMFS has gone some way to dealing with the traditional virtualisation storage problems.  Citrix virtualisation customers will be happy to hear that testing has already begun in the lab on a version of Virsto One for XenServer, ETA unknown at this stage.

So how much does all this cost?

At the time of the interview, which was a good few weeks back, the price being talked about was $1,000 to $2,000 USD per socket, again not too dissimilar to the pricing for EMC PowerPath/VE.

Conclusion?

My impression at the time of the demo and interview was that this was an interesting product and a very clever idea. The main selling point for me was the increase in performance; if it can be independently verified you would think the product will simply sell itself.  I look forward to hearing more about Virsto in the future, and I am particularly interested to see what they can do for other hypervisors, especially VMware vSphere with its new storage APIs.

Hyper-V, New Products, Storage