Archive

Archive for the ‘New Products’ Category

Memory Channel Interface Storage & EMC ScaleIO

October 21st, 2013

The storage industry is undergoing a huge change at the moment, a change that is moving at breakneck speed and one that I find both exciting and scary at the same time. Exciting because it’s new technology being used to solve complex business and IT problems, and scary because it’s moving so incredibly fast that you can only guess what’s going to happen next, which direction it’ll go in and which trends will win out at the end of the day. I guess that’s probably why I’m writing a blog post for the first time in almost 2 years! Yes I know, that is far too long between blog posts and yes, I am extremely ashamed of myself! :-)

I’m lucky enough to work for EMC and as part of that I get to see a lot of this change in IT developing momentum from the inside.  That said, in an organisation as big as this I also sometimes don’t find out about some of it until it hits the headlines, either at one of our product mega launches, a big industry event or in one of Chad’s now infamous blog posts!

Today’s Modern Storage Architectures

New storage architectures are appearing thick and fast, and my boss (Chad Sakac) explains this exceptionally well in his recent VMworld 2013 blog post on all things SDS at VMware and EMC. He calls out three separate storage architectures that are fast becoming the norm; they can be seen below and, from left to right, can be summarised as follows.

Type 1 = Traditional storage model.

Type 2 = Scale-out / node-based build-out storage architectures.

Type 3 = Next-gen scale-out software defined storage on COTS hardware.

[Diagram: the three storage architecture types]

EMC is very well known for the Type 1 storage,  VNX is a great example of this model in action.  EMC is also heavily into the Type 2 storage,  whether that be scale out NAS in Isilon, scale out block storage using the scale out engines of our enterprise VMAX arrays or even scale out block storage from an All Flash Array such as XtremIO.

It is however the Type 3 storage I have been viewing with most interest, and it is impossible to ignore the rapid increase in the use of the term “Software Defined Storage”.  I should really correct myself here though: it’s not just a term or IT buzzword, there are actually some very tangible products coming to market and this area of storage will gather huge momentum as we enter 2014.

A subject that plays in this field and comes up regularly with one of my larger enterprise customers is Cloud Storage and Open Source Software with Commodity Off the Shelf hardware (COTS). It’s an interesting discussion and one that has been born out of both advances in Cloud storage platforms and  open source software stacks (OpenStack, Ceph) and a desire to introduce cost effective, supported  “good enough” solutions to support business demand. 

This stuff is real. Some people (including EMC colleagues) dismiss it as a passing fad, something that will never take off because it’s not supported, won’t work, or because companies can’t put data in the cloud / on that platform for compliance or security reasons.  It’s seriously time to wake up and smell the coffee; there may be challenges, be they business or technical ones, but the desire is there and where there is a will, there is always a way!

So where is this blog post going exactly…  :-)

"Good Enough" In-Memory storage alternative with software defined storage

…So all of this leads me to an interesting use case that I was discussing both internally and with a customer recently,  one that covers off three very interesting areas. 

  1. New Memory Channel Interface Flash (think NAND Flash with a DDR3 Memory interface)
  2. Software Defined Storage pooling of that MCI based storage using EMC ScaleIO
  3. A “good enough” performance tier: slower than true in-memory, but faster than PCI-E flash cards.


It sits firmly in the Type 3 storage architecture discussed earlier; it involves new hardware technology combined with Software Defined Storage, and could be used to deliver a cost effective, supported, good enough alternative to in-memory databases.  Sound interesting? Let’s dig a bit deeper into the three areas.

1. Memory Channel Interface (MCI) Storage

MCI storage popped up on my radar about a month ago following some internal distribution list emails.  A little digging shows that this particular technology appears to be in the very early stages of adoption.  In fact some of the tech companies in this field are still hiring ASIC programmers,  so it’s definitely early days.

One of the main companies out there at the moment is Diablo Technologies, who in essence provide non-volatile flash attached to a DDR3 memory interface to deliver up to 400GB of storage with a response time of less than 5μs.  The following extract from the Diablo press release says it all really.

Configuring Memory Channel Storage (MCS) as a traditional block storage device enables new performance levels for applications, all the while reducing latencies by more than 85% over PCI-Express based SSDs and 96% over SATA/SAS based SSDs. MCS latencies are not only significantly lower than any other flash based storage, but also provides deterministic latencies with minimal variability, allowing applications to commit data quickly and consistently.
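Those percentages imply baselines roughly like the ones below. This is just my own back-of-envelope check: the MCS figure comes from the post above, while the PCIe and SATA/SAS baselines are assumptions on my part, not Diablo’s numbers.

    # Back-of-envelope check of the quoted latency reductions.
    # Only the ~5us MCS figure comes from the post; the SSD baselines are assumed.
    mcs_us = 5.0                                            # quoted MCS response time (<5 microseconds)
    baselines = {"PCIe SSD": 35.0, "SATA/SAS SSD": 120.0}   # assumed typical SSD latencies

    for name, latency_us in baselines.items():
        reduction = (1 - mcs_us / latency_us) * 100
        print(f"MCS vs {name}: ~{reduction:.0f}% lower latency")
    # ~86% and ~96% - in line with the ">85%" and "96%" claims above.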

As well as being used for storage, the MCS DIMMs can also be used as an expansion of system memory, allowing terabytes of addressable system memory instead of gigabytes.  That’s another use case for another day though; right now I’m interested in this as super low latency block storage, and it sounds like this could be the next big thing in server based flash solutions.

2. Software Defined Storage Pooling with EMC ScaleIO

At present the problem I see with the above technology is the fact that because it’s basically direct attached storage (DAS) there is no protection of the data.  So you’re looking at stateless deployments or utilising an overlay to provide some host level clustering of the DAS resources.  This is where ScaleIO comes in…

ScaleIO, in its simplest terms, is a software defined storage solution that allows customers to leverage the local disk in application servers in a pooled manner.  Through the installation of the ScaleIO agent known as the ScaleIO Data Server (SDS), local storage can be combined to create large scale elastic pools of storage.  Some of the key benefits are listed below, followed by a conceptual sketch of the pooling and rebuild idea.

  • Scale from 100s to 1,000s of nodes
  • Fast auto-rebuild
  • High levels of IO parallelism
  • Platform agnostic
  • Auto-rebalance capability
  • SSD / HDD / PCI-E
  • Add, move, remove disks / nodes on the fly
  • Partitioning / tiering / multi-tenancy
  • 2- and 3-way protection mechanisms
  • Writeable snapshots
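To make the pooling idea a little more concrete, here is a conceptual sketch in Python. To be clear, this is not ScaleIO’s actual placement or rebuild algorithm (that isn’t public as far as I know); it simply illustrates why spreading mirrored chunks across every node gives you the IO parallelism and fast, many-to-many rebuilds listed above.

    import random

    # Conceptual sketch only: pooling local disks across nodes with 2-way mirroring.
    # This is NOT ScaleIO's real placement/rebuild logic, just the general idea.
    NODES = [f"sds-{i:02d}" for i in range(1, 9)]      # 8 hypothetical SDS nodes

    def place_chunk(copies=2):
        """Each chunk lands on 'copies' different nodes, so every node carries a
        slice of every volume and shares the I/O load."""
        return random.sample(NODES, copies)

    # An 8-chunk volume, each chunk mirrored on two nodes.
    volume = {chunk: place_chunk() for chunk in range(8)}

    def rebuild(failed_node):
        """On a node failure, each affected chunk is re-mirrored from its surviving
        copy onto another node - many sources and many targets, hence fast rebuilds."""
        for chunk, replicas in volume.items():
            if failed_node in replicas:
                survivors = [n for n in replicas if n != failed_node]
                new_home = random.choice([n for n in NODES if n not in replicas])
                volume[chunk] = survivors + [new_home]

    rebuild("sds-03")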


ScaleIO technically converges the storage and application layers and can be used in a number of different configurations.  Hosts are of course interconnected; this can be achieved using Ethernet or InfiniBand (in my example I’m thinking high performance back end here). As the ScaleIO footprint is small (roughly 100MB–400MB for the SDS and around 20MB for the SDC), the two can be deployed side by side and all the assets in a server can be sweated to deliver a complete solution.

[Diagram: mixed / converged deployment model]

You can however configure things in a traditional two-tier setup as shown below, where applications using the ScaleIO Data Client (SDC) access storage from dedicated storage nodes running the ScaleIO Data Server (SDS) agent.

[Diagram: two-tier deployment model]
In reality, you can configure your application and ScaleIO in any way you like; it depends on your requirements.  Add into this mix the flexibility of agnostic support for physical and virtual nodes, full protection zones, quality of service, 2- and 3-way data protection and the upcoming storage pools for performance, and it’s quite a neat little product.

[Diagram: flexible deployment options]

So let’s put this back into the context of the use case I was originally discussing.  By taking a number of hosts containing the MCI low latency block storage, connecting those hosts with InfiniBand and adding ScaleIO over the top, I’ve got myself a pretty interesting low latency, highly parallelised IO storage system using commodity off the shelf hardware.  What’s not to like?
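As a very rough illustration of what such a cluster might add up to: the 400GB per MCS module comes from earlier in the post, but the node count, modules per node and 2-way mirroring overhead below are purely my own assumptions.

    # Illustrative sizing only - node count, modules per node and mirroring
    # overhead are assumptions; the 400GB module size is from the post above.
    nodes = 16
    modules_per_node = 4
    module_capacity_gb = 400
    protection_copies = 2          # 2-way mirroring halves usable capacity

    raw_tb = nodes * modules_per_node * module_capacity_gb / 1024
    usable_tb = raw_tb / protection_copies
    print(f"Raw flash: {raw_tb:.1f} TB, usable after mirroring: {usable_tb:.1f} TB")
    # => 25.0 TB raw, 12.5 TB usable of very low latency block storage,
    #    built entirely from off the shelf server parts.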

3. Cost Effective / Good Enough In-Memory Alternative using MCI and ScaleIO

So to the third and final point: the reason I was discussing this topic with a customer in the first place was cost efficiency.  They were interested in how you could provide a good enough in-memory alternative, both in terms of software and hardware.  The following is an interesting extract from a blog; in this instance they’re looking at using MCI as a memory extension and layering on software to utilise that memory, whereas I’m looking at it as a storage extension.  Regardless of the use case though, the point is the potential for cost saving, and there is one if you put some thought into it versus the traditional models.

http://www.gridgain.com/blog/why-mci-means-rapid-in-memory-computing-adoption/

Another important characteristic of MCI storage is the plug-n-play fashion in which it can be used – no custom hardware, no custom software required. Imagine, for example, an array of 100 micro-servers (ARM-based servers in micro form factor), each with 256GB of MCI-based system memory, drawing less than 10 watts of power, costing less than $1000 each.

You now have a cluster with 25TB in-memory storage, 200 cores of processing power, running standard Linux, drawing around 1000 watts for about the same cost as a fully loaded Tesla Model S.
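The arithmetic in that quote stacks up, by the way; here is a quick sanity check using only the figures quoted above.

    # Sanity check of the numbers in the quote (my arithmetic, not GridGain's).
    servers = 100
    memory_gb_each = 256
    watts_each = 10        # "less than 10 watts"
    cost_each = 1000       # "less than $1000 each"

    print(f"Memory: {servers * memory_gb_each / 1024:.0f} TB")   # => 25 TB
    print(f"Power:  ~{servers * watts_each} W")                  # => ~1000 W
    print(f"Cost:   ~${servers * cost_each:,}")                  # => ~$100,000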

Summary

This has been a bit of a ramble, but I think what I am trying to say is that the world is changing and there are now so many different ways to achieve your objectives.  The commoditisation of hardware and the major upshift in software defined everything is fundamentally changing the approach to IT.  As I said at the start, it’s both scary and exciting at the same time; the only option is to embrace it if you ask me!

EMC, New Products, Storage

EMC Virtual Storage Integrator and the Access Control Utility

September 8th, 2011

At EMC the vSpecialist team often end up talking to a lot of customers about EMC’s FREE Virtual Storage Integrator (VSI) plug-ins for vCenter Server.  Not only do customers love the fact that it is FREE, they also love the features delivered. The ability to accurately view, provision and manipulate EMC storage directly within vCenter empowers VI admins and makes everyone’s life that little bit easier.

When I started writing this article we were on version 4.2 of the VSI plug-ins; following VMworld 2011 we are now up to version 5.0, the fifth generation of this excellent VMware / EMC toolkit. The plug-ins that make up the VSI are listed below; to download them use the link below, or follow the breadcrumb trail to the page on EMC PowerLink.

  • VSI Storage Viewer Plug-in 5.0
  • VSI Unified Storage Management Plug-in 5.0
  • VSI Storage Pool Management Plug-in 5.0
  • VSI Path Management Plug-in 5.0

Home > Support > Software Downloads and Licensing > Downloads T-Z > Virtual Storage Integrator (VSI) – (Please Note: PowerLink account required)

One of the great features that people are drawn to is the ability to allow VI admins to provision storage directly from within vCenter. This is done with the VSI Unified Storage Management plug-in for Celerra, CLARiiON and VNX(e), and with the VSI Storage Pool Management plug-in for the VMAX. One of the first questions I often get asked is how this is secured: how does the storage team ensure that only the right VMware admins are manipulating the underlying storage?

The answer previously was… well, to be honest we didn’t really have an answer to this one. Technically, if you allowed the VMware admins to provision storage you needed to trust them not to go provisioning crazy and fill up your storage array.  Obviously that response was not really acceptable for any environment, and EMC have been working to rectify that.

The Access Control Utility (ACU) is a new part of the VSI framework which allows storage administrators to granularly control the availability of storage platforms and the storage pools on those platforms.  Once created, these security profiles can be exported, passed to the VMware administrators and imported into the VSI Unified Storage Management plug-in. The following post details the steps involved in completing this process for a VNX array in vSphere 4.1.

So we start by double clicking on the shiny padlock icon that will have been added to your desktop when you installed the VSI Unified Storage Management plug-in.  When the ACU starts we are presented with the profile management screen.  This will of course be blank the first time you start the utility; in the screenshot below, however, you can see a couple of existing access profiles I have created for some VNX arrays in the lab.

[Screenshot: ACU profile management screen]

To create a new profile you simply click the Add button, and you are then presented with the details screen for the new access profile.  Here you enter the name of the profile and a suitable description, then click Next when finished.

[Screenshot: new access profile details screen]

The next step in the wizard is where you define the storage system that will be permissioned as part of the security profile.  You click Add and then select the system you are going to permission; as you can see, the VSI ACU supports Celerra, CLARiiON, VNX and VNXe arrays. For VMAX you need to look at Storage Pool Manager (SPM) to control access, which I’ll look to blog about at a later date.

[Screenshot: storage system selection]

The next screen presented very much depends on the storage system you select.  If you choose the Celerra option you’re prompted for the details of the control station, username and password.  Select the CLARiiON and you’re prompted for the Storage Processor details and login credentials. If you select the VNXe then you’re prompted for the management IP and the login credentials.  I’m sure you can see the pattern developing here! ;-)

In this example we are dealing with a VNX array and as such the option is whether you want to give access to block storage, file storage or both. As both are controlled differently within the VNX, if you select both you will need to enter the IP and credentials for the Storage Processor (Block) and the VNX Control Station.  For the purposes of this example I’m going to use Block only as you can see in the screenshot below.

[Screenshot: VNX block / file access selection]
When you click Next you’re prompted to enter the storage processor IP address and logon details, as shown below.

[Screenshot: storage processor credentials]

Once you are authenticated you get to select the granularity of access you want to provide.  It’s important to note that when the ACU refers to storage pools it means any storage pools and traditional RAID groups that may have been created on the VNX array.  There are 3 options available as you can see in the screenshot below.

  • All storage pools
    This option basically gives a VMware admin free rein to provision LUNs with the VSI all over the array.  A potential use case for this may be a dedicated development VMware environment with its own dedicated array, where the storage team don’t care too much about usage.

  • No Storage Pools
    This option is a complete lockdown and acts as an explicit deny to prevent any accidental provisioning on an array, i.e. the VSI unified storage management feature cannot talk to the array full stop, it won’t even show up as an option.

  • Selected storage pools
    As the name indicates this option allows the selection of certain storage pools for VSI provisioning.  A potential use case here would be a mixed environment where the array is shared between VMware and physical workloads.  As a storage administrator you would grant permission to the VMware storage pools only thus preventing any potential mis-provisioning (not sure that is actually a word but it certainly has its place when we talk about VSI provisioning)

[Screenshot: storage pool access options]
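In effect the three options boil down to a very simple access check. The sketch below is just my illustration of that decision logic; it is not the ACU’s actual code, profile format or API. The first two pool names match the ones used later in this walkthrough, the third is made up for the example.

    # Conceptual sketch of the three ACU access levels - not the real utility.
    def provisioning_allowed(profile, pool):
        """Should the VSI be allowed to provision on this storage pool?"""
        if profile["access"] == "all":        # All storage pools
            return True
        if profile["access"] == "none":       # No storage pools (explicit deny)
            return False
        return pool in profile["pools"]       # Selected storage pools only

    vnx_profile = {"access": "selected",
                   "pools": {"FAST_Pool_3_Tier", "RAID Group 10"}}
    print(provisioning_allowed(vnx_profile, "FAST_Pool_3_Tier"))    # True
    print(provisioning_allowed(vnx_profile, "Physical_SAS_Pool"))   # False (hypothetical pool)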

In this example I’ve chosen selected storage pools, as I think this is probably the scenario most people will be looking for the ACU to help them with.  Within the next screen you are presented with a list of all storage pools / RAID groups on the array.  Here you select the storage pools / RAID groups you want to give the VMware admin access to, and when you’re happy with your selection you simply click Finish.  Note in the screenshot below that I have selected two individual storage pools (one is a RAID group) to be part of this particular storage profile.

[Screenshot: storage pool selection]

Once you’ve completed storage pool selection you are returned to the profile screen.  You can finish your profile creation right here by clicking Finish, or you can add additional storage systems if your VMware environment consists of multiple arrays.

[Screenshot: profile summary with storage system added]

Once you have completed the creation of your security profile, the next step is to export it so you can pass it over to your VMware admins. To do this simply highlight the security profile, click Export and save the file.

[Screenshot: exporting the security profile]

Choose a location to save the file and don’t forget to add a passphrase to the file so that it cannot be misused.

[Screenshot: saving the exported profile with a passphrase]
It’s important to remember that the login credentials provided by the storage admin during the ACU profile setup are the ones used when the profile is imported into the VSI.  The VMware admin will see the connection details and username being used, but will not see the password. For audit purposes on the array it may be best to set up a dedicated account for use with the VSI and storage profiles. It should also be noted that the full details of the storage profile are encrypted within the profile export file, as you can see below.

[Screenshot: encrypted contents of the exported profile file]

So now that you’ve finished creating your storage profile you can pass it on to the VMware administrators to import into the VSI.  To do this you go into vCenter and open the EMC VSI screen from the home screen.  Click on the Unified Storage Management feature, then click Add, select Import Access Profile and click Next.

[Screenshot: importing the access profile into the VSI]

You now select the XML file created by exporting the ACU storage profile, enter the passphrase you selected and click Next.

[Screenshot: selecting the exported XML file and entering the passphrase]

As you can see below the VNX array has been added to the VSI and provisioning access is marked as Restricted.  This is as expected as we configured the profile to give access to only two storage pools, FAST_Pool_3_Tier and RAID Group 10.

[Screenshot: VNX array added with provisioning access marked as Restricted]

When you use the EMC VSI to provision storage you will be presented with the VNX array that was part of the imported profile.  You select the storage array and as you can see in the screenshot below you can only create storage on the two storage pools that were added to the ACU storage profile.

[Screenshot: provisioning limited to the two permitted storage pools]

Summary

The EMC Access Control Utility was something I had been looking to write about for a while. Since its release I’ve often wondered how exactly it worked, what it could / could not do and how it could better meet customer needs. The steps above show that it is possible for a storage team to delegate control of storage pools so VMware admins can quickly provision the storage that they need. Becoming more efficient is something we as vSpecialists talk about on a daily basis, and this tool is one of those first steps that you can take to make life easier.  If you are a VMware admin who is working with EMC storage then I suggest you speak to your storage team about this.  Likewise, if you are a storage admin, reach out to your VMware counterparts and discuss how this could save you both time in the long term.

Video

My boss Chad Sakac put a video together for VMworld 2011 which maybe explains it better (certainly quicker) than I have in this blog post. I left it to the end though, so you read the article before discovering it :-). My step by step approach is simply so I can fully understand how it all fits together and, as I go, deal with the many “what if” or “how does that work” kind of questions.  Hope you find it useful in some way, and feel free to comment or ask questions.

EMC, New Products, VMware

EMC StorReclaim – VMAX thin pool reclaim for Windows

June 3rd, 2011

Since I started at EMC just over 2 months ago I’ve been spending a lot of time getting to grips with the large range of products that EMC has in its portfolio.  One key product I’ve been lucky enough to spend some time learning about is VMAX / Symmetrix, a product range I knew a little about but had never used as a customer or had the chance to deep dive into technically. Luckily for me I got the chance to do that deep dive with 3 days of VMAX training with some of the Symm engineering team from the US.

During this VMAX training some of my more “Symmetrix savvy” colleagues (David Robertson and Cody Hosterman) were telling us about something called EMC StorReclaim. At the time I couldn’t say anything about it as it wasn’t due for unveiling until EMC World, but I did take notes with the aim of following up afterwards.  I only found those notes today, hence the delay folks!

First things first: this is an EMC internal tool for EMC Global Services usage. I am publishing this in order to bring it to your attention.  If you feel you have a need for this EMC product then please speak to your EMC rep / TC for more details.

So what is EMC StorReclaim? I could explain it myself but this extract from the release document explains it perfectly.

StorReclaim is a Windows command line utility designed to free allocated but unused storage space as part of EMC’s Virtual Provisioning solution.

StorReclaim determines exactly where the allocated but unused space is and then passing that information on to Symmetrix for space reclamation. Once the storage capacity is reclaimed, it is then put back into the device storage pool for consumption. The process is performed in real time and does not require any running application to shutdown.

So what does it support / not support and what can I run it on?

Host operating system requirements
StorReclaim is fully supported on the following operating systems with various Service Packs.

◆ Windows Server 2003 x86/x64
◆ Windows Server 2008 x86/x64
◆ Windows Server 2008 R2
◆ Windows Server 2008 R2 with SP1

StorReclaim also supports Windows Guest OS running on:

◆ Hyper-V Server 2008 or 2008 R2
◆ VMware ESX Server 3.5 or 4.0
◆ VMware vSphere client 4.1.1

Note: For VMware ESX, the physical disks in a virtual environment must be attached to virtual machines using RDM (Raw Device Mapping). For Microsoft Hyper-V, the physical disks must be configured as pass-through disks.

File system requirement
Microsoft Windows NTFS
MBR & GPT
Basic Disks
Dynamic Concatenated, Mirrored (excludes striped / RAID 5 Dynamic disks)

Logical volume managers requirement
Microsoft Windows LDM

Storage environment support
StorReclaim supports Symmetrix arrays running Enginuity 5875 and higher.

Supported with EMC Clone and SNAP but only de-allocates storage on the source

Just to clarify one of the points around virtualisation support: this tool supports both physical and virtual Windows Server workloads.  The key point here is that the virtual machine in question must have RDM-attached disks served from a thin pool.

One other key point worth mentioning is that this tool does not require you to install Solutions Enabler or any other EMC host based software.  The tool works via a Windows filter driver and sends SCSI UNMAP commands directly to the VMAX array in order to return the blocks to the thin pool.
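Conceptually, then, the job is to map allocated-but-unused file system space into contiguous ranges that can be sent to the array as UNMAPs. The sketch below is only my illustration of that idea; it is not StorReclaim’s implementation and it doesn’t use any real Windows or EMC API.

    # Conceptual flow only - not StorReclaim internals or a real Windows/EMC API.
    def free_clusters_to_unmap_ranges(free_clusters):
        """Coalesce a list of free (allocated-but-unused) cluster numbers into
        contiguous (start, length) ranges suitable for SCSI UNMAP commands."""
        ranges, start, prev = [], None, None
        for c in sorted(free_clusters):
            if start is None:
                start = prev = c
            elif c == prev + 1:
                prev = c
            else:
                ranges.append((start, prev - start + 1))
                start = prev = c
        if start is not None:
            ranges.append((start, prev - start + 1))
        return ranges

    # Hypothetical free-space map: clusters 10-12 and 40-41 can be given back.
    print(free_clusters_to_unmap_ranges([10, 11, 12, 40, 41]))
    # => [(10, 3), (40, 2)] - each range becomes an UNMAP, returning those
    #    blocks to the VMAX thin pool.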

As I mentioned earlier if you want access to this tool you will need to obtain it from EMC Global Services.  I hope that this will be released as a customer tool at some point in the future, this decision may well be based on demand so please ask if it is something you could use.

EMC, New Products

Microsoft VDI licensing – VDA and Microsoft WinTPC

April 18th, 2011

Some time ago I wrote a blog post about Microsoft Virtual Desktop Access (VDA) licensing that was introduced back in July 2010. For those that don’t want to read the whole article the summary of VDA was as follows.

  • You need to licence the endpoint accessing a windows VDI Desktop.
  • It’s £100 per year per endpoint.
  • Multiple endpoints each need a licence, i.e. home PC, office thin client, iPad
  • VDA included if endpoint is Windows and is Software Assured

I remember at the time thinking that this was going to hinder VDI deployment projects.  The additional ongoing cost of licensing every potential endpoint a user may use was going to push TCO up, increase the time for ROI to be realised and generally make VDI a very unappealing prospect. Don’t even get me started on how difficult this makes it for service providers to create a Windows Desktop as a Service offering.
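To put some (entirely made up) numbers on that: the only figure below taken from the summary above is the £100 per endpoint per year; the user count, endpoints per user and project length are assumptions, purely to show how quickly the subscription stacks up.

    # Rough VDA cost model - GBP 100/endpoint/year is from the summary above,
    # everything else is an assumption for illustration.
    users = 500
    endpoints_per_user = 2            # e.g. office thin client + home PC or iPad
    vda_per_endpoint_per_year = 100   # GBP
    years = 5

    total = users * endpoints_per_user * vda_per_endpoint_per_year * years
    print(f"VDA licensing alone: GBP {total:,} over {years} years")
    # => GBP 500,000 on top of the usual VDI capital spend.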

Recently one of my esteemed colleagues at EMC (another vSpecialist by the name of Itzik Reich, whose blog you can find here) sent out an email about Microsoft releasing a community technology preview (CTP) of a product called Windows Thin PC (WinTPC). In summary this is a slimmed down version of Windows 7 designed for re-purposing old PC equipment as thin client devices.

[Image: Windows Thin PC]

It has a couple of features worth mentioning for those technically minded people out there.

  • RemoteFX support for a richer, higher fidelity hosted desktop experience.
  • Support for System Center Configuration Manager, to help deploy and manage.
  • Write filter support helps prevent writes to disk, improving end point security.


WinTPC and / or VDA

So how does this new product fit in with the rather expensive VDA licensing? Well, the good news is that WinTPC can be used to access a VDI desktop without the need for a VDA licence. On the downside, WinTPC will only be available as a benefit of Software Assurance for volume licensees. Now, seeing as the VDA licence doesn’t apply to an endpoint that is Windows based and covered by Software Assurance, it makes no real difference from a licensing point of view which option you go for.  So if you have Software Assurance the choice is yours; if you don’t, well, coughing up for VDA licences each year is your only option I’m afraid.

What WinTPC does allow companies to do is maximise existing PC hardware investments.  This should allow companies to offset some of the initial upfront cost often associated with VDI projects. Microsoft’s idea is that companies can try out VDI using WinTPC and existing PC assets; when these PCs reach end of life they can swap over to Windows Embedded devices without needing to change the management tools. Now, VDI is not cheap: capital costs can be high, and savings are usually made in operational and management costs later down the VDI journey.  As I mentioned at the start of this post, the VDA licence has not helped VDI adoption as it increases both capital and operational costs due to its annual subscription cost model.  Will this new release from Microsoft help reduce costs?

Just Saying!

My opinion? I personally think Microsoft are in a tricky position: they’re somewhat behind the curve on the VDI front, and I always felt the VDA licence was designed to slow VDI adoption while they gained some ground on the competition.  If anyone chose to forge ahead regardless, well, Microsoft would generate some nice consistent revenue through the VDA licence. So the prospect of a WinTPC release is a nice touch by Microsoft during these hard economic times, but not everyone can benefit.  What I would like to see is Microsoft offer this outwith Software Assurance and sell it as a single one-off licence cost as an alternative to the annual subscription model used with the VDA.  Give your customers the choice and let them get on with their VDI journey; be part of it as opposed to being the road block!

Links

If you are interested in learning more, check out the links below.  To download the CTP version of WinTPC, go to Microsoft Connect and sign up for it; I would love to hear what you think.

http://www.microsoft.com/windows/enterprise/solutions/virtualization/products/thinpc.aspx

https://connect.microsoft.com/

Gestalt-IT, Microsoft, New Products

Symantec Application HA for VMware – VMworld 2010

August 16th, 2010

I was lucky enough last week to be involved in a Gestalt IT conference call with Symantec.  The conference call was designed to give us all a sneak preview of what Symantec were planning to announce at VMworld 2010 in a couple of weeks.  Unfortunately it was under embargo, that is until today!

There were a couple of announcements being made, Symantec introduced a new NFS storage product called VirtualStore and made some further announcements about NetBackup 7 and new VMware specific features.  However the most interesting announcement on the call for me was the release of Symantec Application HA for VMware.

[Chart: virtualisation adoption]

Symantec have been looking at why customers are not going “the last mile” with virtualisation.  Why are customers not deploying their Tier 1 applications on their virtual platforms? Symantec’s view is that customers still have issues with application level failure within guest VMs.  This product has been designed to fill that void and at present has no real competitors.

As the call progressed the current HA options were described by Symantec and discussed by the group. The obvious one is VMware HA, which covers a physical host failure event. Within the VMware HA product there is also VM monitoring, which covers you in the event of an OS level failure such as a blue screen.  Then you can of course employ other technologies such as OS level clustering; however, you then have to take heed of caveats that hinder the ability to use features such as vMotion and DRS.

I’m always sceptical when I see new virtualisation products, one of my fears is that companies are attempting to just jump on the crest of the wave that is virtualisation. Symantec are obviously a bit more established than your average company, but as always the jury is out until we see a final product doing the business for real.  It transpired during the call that the product is actually based on Symantec Veritas Cluster Server,  a product with a long history in application availability.

Veritas Cluster Server has a lot of built-in trigger scenarios for common products such as Microsoft SQL Server, Exchange Server and IIS.  On top of this built-in, out-of-the-box support, Symantec also have a VCS development kit allowing custom scenarios to be written.  I like this approach; it reminds me of F5 Networks’ use of the customer community to support the writing of custom rules and features for their product.  If a custom rule or feature has enough demand then they spend the time developing it into their product range.  Perhaps Symantec could look at leveraging their customer base and community in this way to improve the support around VCS trigger scenarios.  One other potential use of the VCS SDK that springs to mind is for application vendors who make specialist software (CRM, ERP, finance systems, etc.).  They could look to build Application HA into pre-configured virtual appliances, which would be a great selling point for any software vendor.

The deployment of the product itself takes the form of a guest deployment / agent. Technical deep dive information on the exact integration between the Symantec product and VMware was thin on the ground.  However, there was mention of Symantec’s integration with the VMware HA API, something that I don’t think has been announced by VMware just yet.  The description given to us during the call was that if Symantec Application HA failed to restart the application, it could send a downstream API call to VMware HA and ask it to restart the VM’s operating system.  An interesting concept, and something I am sure we’ll hear more about at VMworld.
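In other words, the escalation path sounds something like the sketch below. To be clear, this is only my interpretation of what was described on the call; it is not Symantec’s agent code and not the (still unannounced) VMware HA API.

    import time

    # Conceptual escalation flow only - not Symantec's implementation or a real API.
    def application_ha_monitor(app_is_healthy, restart_app, restart_vm,
                               max_app_restarts=3, poll_seconds=30):
        """Restart the application in-guest first; if that keeps failing,
        escalate downstream and ask the HA layer to restart the whole VM."""
        failures = 0
        while True:
            if app_is_healthy():
                failures = 0
            else:
                failures += 1
                if failures <= max_app_restarts:
                    restart_app()          # local and least disruptive
                else:
                    restart_vm()           # downstream call to the HA layer
                    failures = 0
            time.sleep(poll_seconds)       # assumed polling interval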

Licensing for this new product is quite competitive at $350 per virtual machine, a small price to pay for ensuring your Tier 1 application recovery is automated.  Symantec have promised full integration with vCenter Server, and the screenshot below shows Symantec Application HA in action monitoring a SQL 2008 server; click on the thumbnail to see a full size image.

If you would like to learn more about Application HA, then get along to VMware and Symantec’s break out session at VMworld. – http://www.vmworld.com/docs/DOC-4658

Alternatively you can listen to a Podcast from Symantec’s Niraj Zaveri discussing the new product.  – http://www.symantec.com/podcasts/detail.jsp?podid=ent_application_ha

General, Gestalt-IT, New Products, VMware

Storage I/O control – SIOC – VMware DRS for Storage

May 10th, 2010

Following VMworld in 2009 a number of articles were written about a tech preview session on IO DRS – Providing performance Isolation to VMs in Shared Storage Environments. I personally thought that this particular technology was a long way off, potentially something we would see in ESX 4.5. However I recently read a couple of articles that indicate it might not be as far away as first thought.

I initially came across an article by VMware’s Scott Drummond in my RSS feeds.  For those that don’t follow Scott, he has his own blog called the Pivot Point, which I have found to be an invaluable source of VMware performance related content. The next clue was an article entitled “ESX 4.1 feature leak”; I’m sure you can probably guess what the very first feature listed was? It was indeed Storage I/O Control.

Most people will be aware of VMware DRS and its use in measuring and reacting to CPU and memory contention. In essence SIOC is the same feature but for I/O, utilising I/O latency as the measure and device queue management as the contention control. In the same way as the current DRS feature for memory and CPU, I/O resource allocation will be controlled through the use of share values assigned to the VM.

[Screenshot: per-VM disk shares settings]

I hadn’t realised this until now, but you can already control share values for VM disk I/O within the settings of a virtual machine (shown above).  The main problem with this is that it is server centric, as you can see from the statement below from the VI3 documentation.

Shares is a value that represents the relative metric for controlling disk bandwidth to all virtual machines. The values Low, Normal, High, and Custom are compared to the sum of all shares of all virtual machines on the server and the service console.

Two main problems exist with this current server centric approach.

A) In a cluster, five hosts could be accessing VMs on a single VMFS volume; there may be no contention at host level but lots of contention at VMFS level. This contention would not be controlled by the VM assigned share values.

B) There isn’t a single pane of glass view of how disk shares have been allocated across a host; it appears to only be manageable on a per-VM basis.  This makes things a little trickier to manage.

Storage I/O Control (SIOC) deals with the server centric issue by introducing I/O latency monitoring at the VMFS volume level. SIOC reacts when a VMFS volume’s latency crosses a pre-defined level; at this point access to the host queue is throttled based on the share values assigned to the VMs.  This prevents a single VM getting an unfair share of queue resources at volume level, as shown in the before and after diagrams Scott posted in his article.

[Diagrams: device queue usage before and after SIOC]
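The mechanics can be pictured roughly as below. This is not VMware’s actual SIOC algorithm (those details weren’t public at the time of writing), just a sketch of share-proportional queue throttling once a latency threshold is crossed; the threshold, queue depth and share values are all assumptions.

    # Conceptual sketch of share-proportional throttling - not VMware's real
    # SIOC implementation. Threshold, queue depth and shares are assumptions.
    LATENCY_THRESHOLD_MS = 30      # datastore-wide trigger level
    TOTAL_QUEUE_SLOTS = 64         # device queue depth available to the volume

    def allocate_queue_slots(observed_latency_ms, vm_shares):
        """Below the threshold nobody is throttled; above it, queue slots are
        divided across VMs in proportion to their share values."""
        if observed_latency_ms < LATENCY_THRESHOLD_MS:
            return {vm: TOTAL_QUEUE_SLOTS for vm in vm_shares}
        total_shares = sum(vm_shares.values())
        return {vm: max(1, TOTAL_QUEUE_SLOTS * shares // total_shares)
                for vm, shares in vm_shares.items()}

    vms = {"prod-sql": 2000, "test-web": 1000, "dev-build": 500}
    print(allocate_queue_slots(22, vms))   # no contention: full depth for all
    print(allocate_queue_slots(45, vms))   # contention: roughly a 4:2:1 split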

The solution to the single pane of glass issue is pure speculation on my part. I’d personally be hoping that VMware add a disk tab within the resource allocation views you find on clusters and resource groups.  This would allow you to easily set I/O shares for tiered resource groups, i.e. Production, Test, Development. It would also allow you to further control I/O within the resource groups at a virtual machine level.

Obviously none of the above is a silver bullet! You still need to have a storage system with a fit for purpose design at the backend to service your workloads. It’s also worth remembering that shares introduce another level of complexity into your environment.  If share values are not assigned properly you could of course end up with performance problems caused by the very thing meant to prevent them.

Storage I/O Control (SIOC) looks like a powerful tool for VMware administrators.  I know in my own instance I have a cluster that is a mix of production and testing workloads.  I have them ring fenced with resource groups for memory and CPU, but always have this nagging doubt about HBA queue contention.  This is one of the reasons I wanted to get EMC PowerPath/VE implemented, i.e. use both HBAs and all available paths to increase the total bandwidth.  Implementing SIOC when it arrives will give me peace of mind that production workloads will always win out when I/O contention occurs.  I look forward to the possible debut of SIOC in ESX 4.1 when it’s released.

**UPDATE**

Duncan Epping over at Yellow Bricks has located a demo video of SIOC in action.  Although a very basic demonstration, it gives you an idea of the additional control SIOC will bring.

Gestalt-IT, New Products, VMware

Virsto One, Hyper-V and the I/O Blender effect

February 24th, 2010

One of the things I’ve come to love about blogging is the fact that I occasionally get contacted by the odd tech start-up, keen to demonstrate their latest market leading idea that is going to revolutionise the industry as we know it.  Earlier this month I was contacted by Mindy Anderson, the product director at tech start-up Virsto (short for Virtual Storage). Virsto had a new product for Microsoft Hyper-V that they wanted to demonstrate to me in advance of their big product launch. Having looked at Mindy’s background in the storage industry I was very keen to hear more about their new product.

The product is called Virsto One and is aimed solely at Windows 2008 R2 Hyper-V. It introduces some new features, like thin provisioned clones and snapshots, that expand the functionality of the standard Hyper-V product. The most interesting feature in my opinion though is the attempt to tackle the virtualisation / storage problem commonly known as the I/O blender effect.

So what does Virsto One look like?

The software itself installs in the parent partition of each Hyper-V host and consists of a filter driver, a system service and a VSS provider.  The filter driver sits above the raw storage (any block storage) and presents a VHD object to the parent partition.  This setup allows users to configure physical storage once and then use Virsto One to carry out all future provisioning tasks, including full support for creating thin provisioned, high performing, cluster aware snapshots and clones from either the Virsto One Hyper-V MMC snap-in or PowerShell.

[Diagram: Virsto One architecture]

So what about the I/O blender effect?

Most storage technologies are not designed for the virtual data centre; most are still designed around the one-to-one physical server to storage model. Think of a number of virtual machines, each with predictable I/O behaviour (if you think of them as physical).  What tends to come out of the physical hypervisor host is a large amount of completely random I/O.  Random I/O has an obvious performance impact when compared with sequential I/O, so as you increase the number of VMs you increase the random I/O from your Hyper-V host.  So as VM density increases, performance drops, and as we all know low VM density is not your objective when you embark on a virtualisation project.

So Virsto One has an interesting way of dealing with this. Although the “secret sauce” has never been divulged in depth, in its basic form they journal the random I/O that comes down from the Hyper-V host to a staging disk.  A staging area is required per physical Hyper-V host, and about 20GB–30GB of disk should support multi-terabyte write-downs through the use of de-dupe technology. Periodically the data in the staging disks is flushed / written down to the primary storage location; at this point the random I/O is laid down sequentially on primary storage to improve read performance. Virsto indicated that in time they would look to support multiple de-stages, so that data could be de-staged to another array for business continuity purposes or to the cloud for disaster recovery purposes.
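The basic journal-then-destage idea can be sketched as below. The real “secret sauce” (the de-dupe, metadata and exact layout) isn’t public, so this is nothing more than an illustration of staging random writes sequentially and later writing them down to primary storage in order; it is not Virsto’s code.

    # Illustration of journal-then-destage only - not Virsto's implementation.
    class StagingLog:
        def __init__(self):
            self.journal = []              # random writes land here in arrival order

        def write(self, vhd_offset, data):
            """Append each random guest write to the staging disk, so the
            physical write pattern on the staging area stays sequential."""
            self.journal.append((vhd_offset, data))

        def destage(self, primary):
            """Periodically write down to primary storage sorted by target
            offset, keeping only the newest copy of each block."""
            latest = dict(self.journal)    # later writes win over earlier ones
            for offset in sorted(latest):
                primary[offset] = latest[offset]
            self.journal.clear()

    primary_storage = {}
    log = StagingLog()
    for offset in (9023, 17, 4410, 17, 812):    # a scattered, "blended" pattern
        log.write(offset, b"...")
    log.destage(primary_storage)                # laid down in order: 17, 812, 4410, 9023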

[Diagram: Virsto One staging and de-staging]
Are there any performance figures to back this up?

Performance figures from the Virsto test lab show the I/O blender effect in action as VM density increases in the standard Microsoft setup.  With the Virsto software sitting in the middle, staging the data and de-staging it sequentially, there is an obvious improvement in performance.  These test results were from Virsto’s own lab, and I stressed the importance of having them independently benchmarked by customers or an external consultancy.  Mindy indicated to me that this was something they were looking into; I look forward to reading and sharing the whitepaper when it is eventually produced.

[Graph: Virsto lab performance results as VM density increases]

So who would be interested in a product like this?

Well, ideally the product would benefit Hyper-V customers who require high density, high performing virtual environments.  Hosting companies making use of Hyper-V for selling virtual server instances may well see Virsto as a good way of increasing performance and reducing costs through the use of golden images, snapshots, etc.  Who knows though, individual companies with an investment in Hyper-V may well see the benefit in this kind of product.  In a way it’s not too dissimilar to a company buying PowerPath/VE to increase I/O performance in a vSphere environment.

It is important to note that although this product has been initially built for Microsoft Hyper-V, the principles behind it are hypervisor agnostic.  I asked the question “why Hyper-V?” at the start of my chat with Virsto; the answer was that Hyper-V had functionality gaps and was easier to integrate into.  VMware on the other hand is a more mature product where VMFS has gone some way to dealing with the traditional virtualisation storage problems.  Citrix virtualisation customers will be happy to hear that testing has already begun in the lab with a version of Virsto One for XenServer, ETA unknown at this stage.

So how much does all this cost?

At the time of the interview, which was a good few weeks back, the price being talked about was $1,000 – $2,000 USD per socket, again not too dissimilar to the pricing for EMC PowerPath/VE.

Conclusion?

My impression at the time of the demo and interview was that this was an interesting product and a very clever idea. The main selling point for me was the increase in performance; if it can be independently verified you would think the product will simply sell itself.  I look forward to hearing more about Virsto in the future and I am particularly interested to see what they can do for other hypervisors, especially VMware vSphere with its new storage APIs.

Hyper-V, New Products, Storage

EMC Configuresoft ESX Compliance Checker

July 14th, 2009

I was perusing Twitter the other day (as you do) when I came across a link shared by @Stevie_Chambers of VMware.  The link was for a new free tool from EMC called Compliance Checker for VMware ESX, which as you might guess does exactly what its title suggests.  The tool, though EMC branded, is actually by Configuresoft, whom EMC bought outright earlier this year following an existing OEM relationship.

Now some of you may be familiar with the Tripwire ConfigCheck tool which allows you to scan your ESX servers’ security configuration, which in itself is a very handy tool.  This new tool from EMC appears to be a little more comprehensive in that it checks your ESX servers against both the VMware VI3 Security Hardening Guidelines and the Center for Internet Security VMware ESX Server 3.x benchmark.  Test results against both are backed with an extensive knowledge base explaining what each check is and the actions required to rectify the issue.

Included below are some screenshots of the main interface, the reports returned and the knowledgebase articles you receive when you click on any item in the report.

[Screenshots: main interface, compliance reports and knowledgebase article]

I must admit that these compliance checkers are handy because my Linux experience only began with ESX and sometimes the areas of security being checked are ones I wouldn’t have a clue about.  This is where the knowledgebase is great because it explains a little about what the check actually is as well as the actions required to rectify the issue.  Very handy because it would appear that a standard build of ESX 3.5 U4 is only 73% compliant as far as this piece of software is concerned.

To get your free copy of Compliance Checker for VMware ESX, click the link; you’ll need to register, but it’s a small price to pay for this handy tool.

ESX, New Products, VMware

VMware vApps – Where do they fit in the vCloud?

June 30th, 2009

Virtual Appliances have been around for quite some time now and it seems, with the recent release of vSphere, VMware are looking to take vApps to the next level. The VMware Virtual Appliance Marketplace (VAM) is the one stop shop providing VMware users with a plethora of pre-configured, ready to download appliances.  There are over 1,000 appliances available in the VAM, all of them capable of being deployed in either the private cloud or in a cloud hosted by a 3rd party.  Now the VAM is also offering on-demand trials of certain vApps through selected vCloud partners.  The vCloud vision is slowly taking shape, and although it may be mostly conceptual now, small steps like this are the real building blocks.

With the release of vSphere 4.0 VMware introduced full support for the OVF 1.0 specification.  What does this mean? Well, the Open Virtualisation Format is a platform independent, efficient, extensible and open packaging and distribution format for virtual machines.  It’s virtual platform independent, so if vSphere isn’t your platform of choice the vApp will work with Hyper-V and XenServer. OVF virtual machines are optimised for easy distribution, are simple to deploy and support single and multi-VM configurations.

Now, I spend a lot of my time integrating and supporting applications from 3rd party vendors, some of them mainstream and some of them more specialist. One of the constant issues I have is around how best to deploy them: what are the best practices, minimum specifications and correct configurations to ensure successful deployment and reliable operation? This is something I can see vApps helping all of us with. vApps created using the OVF format will allow vendors to build and maintain pre-packaged systems, pre-configured to be highly compatible, built to the vendor’s own best practices and, best of all, built for rapid deployment to customers regardless of virtualised platform or cloud preference. The way I look at it, it’s not too dissimilar to Apple’s approach of controlling both the hardware and the software they use. The way Apple operate allows them to guarantee better reliability and compatibility because they know what they’re deploying and what they’re deploying it on. A vendor built vApp has the potential to offer the same benefits and simplify the job of vendors, system integrators and application support teams alike.

So how are VMware assisting vendors in their pursuit of vApp bliss? With the creation of VMware Studio, of course.  It was a product I’d never heard of until the beta release of VMware Studio 2.0 was announced earlier this month. The latest features are listed below.

Build vApps and virtual appliances (with in-guest OS and application components) compatible with VMware Infrastructure, VMware vSphere 4.0 and the cloud

- Support for OVF 1.0 and 0.9
- Available as an Eclipse plugin in addition to the standalone version
- Ability to accept existing, Studio-created VM builds as input
- Support for 32 bit and 64 bit versions of Windows 2003 and 2008 Server
- Support  for SLES 10.2, RHEL 5.2 and 5.3, CentOS 5.2 and 5.3 and Ubuntu 8.0.4.1
- Publish patches to update deployed virtual appliances
- Extensible in-guest management framework
- Automatic Dependency resolution
- VMware ESX, ESXi, VMware Server 1.0.4 – 2.0 and VMware Workstation enabled as provisioning engines
- Infrastructure enhancements in the GUI and builds

So what does a vApp deployment look like? Well, here’s a video that was posted on the VMware vApp developer blog.  In this demo a user deploys a multi-VM, multi-tiered version of SugarCRM in just a few clicks, with no need to even start up a VM console.

New Products, VMware

VMware vSphere – General Availability

May 21st, 2009

Today is the day: VMware vSphere is now available for download, and some really good news is that VMware are offering a 60 day evaluation trial of vSphere Enterprise Plus and vCenter Server Standard.

Three links for you: the first is the VMware vSphere download page and the second is the free 60 day evaluation download link.  For those who want to use the free version of ESXi 4 and have a computer / server capable of supporting 64 bit, you can get your free copy at this link.

Documentation for the various vSphere components can be found here http://www.vmware.com/support/pubs/vs_pubs.html

Credit to Hany Michael over at www.hypervizor.com for the links to these awesome videos below, which should help you get to grips with some of the new vSphere 4 features.

ESX Installation and Configuration
ESXi Installation and Configuration
VMware vCenter Server
VMware vSphere Client
Networking configuration
Storage configuration (iSCSI)
Create and manage virtual machines
VMware Host Profiles
VMware Storage VMotion
VMware vCenter Server Linked Mode
VMware vNetwork Distributed Switch (vDS)

ESX, New Products, VMware, vSphere