What's new in vSphere 4.1 Storage

September 2nd, 2010

So I haven't done a lot of real-time blogging at VMworld this year as I've been busy trying to see and soak up as much as possible. It's not every day that you get access to the likes of Chad Sakac (VP of the EMC/VMware alliance), Scott Drummond (EMC, ex-VMware performance team) and a whole host of other technology movers and shakers. As you can imagine I took full advantage of these opportunities, and blogging became a bit of a secondary activity this week.

However, I've now had time to reflect, and one of the most interesting areas I covered this week was the new storage features in vSphere 4.1. I had the chance to cover these in multiple sessions, see various demos and discuss them with the VMware developers and engineers responsible. There are two main features I want to cover in depth, as I feel they are important indicators of the direction that storage for VMware is heading.

SIOC – Storage I/O Control

SIOC has been in the pipeline since VMworld 2009; I previously wrote an article on it called VMware DRS for Storage, which was slightly presumptuous of me at the time, but I was only slightly off the mark. For those of you who are not aware of SIOC, to sum it up at a very high level, let's start with the following statement from VMware themselves:

"SIOC provides a dynamic control mechanism for proportional allocation of shared storage resources to VMs running on multiple hosts."

Though you have always been able to assign disk shares to VMs on an ESX host, this only applied within that host; it couldn't take account of the I/O behaviour of VMs on other hosts. Storage I/O Control is different in that it is enabled on the datastore object itself, and disk shares can then be assigned per VM within that datastore. When a pre-defined latency threshold is exceeded on the datastore, SIOC begins to throttle I/O based on the shares assigned to each VM.

How does it do this? What is happening in the background? SIOC is aware of the storage array's device-level queue slots as well as the latency of workloads. During periods of contention it decides how best to keep machines below the predefined latency tolerance by manipulating the I/O queues of all the ESX hosts that use that datastore.

In the example below you can see that, based on disk share values, all VMs should ideally be making the same demands on the storage array's device-level queue slots. Without SIOC enabled that does not happen. With SIOC enabled, the second ESX host's I/O queue is throttled back from 24 slots to 12, thus equalising the I/O across the hosts.
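The proportional throttling idea can be sketched in a few lines of Python. This is purely illustrative (the function name and numbers are mine, not VMware's implementation): it splits a fixed pool of device queue slots across hosts in proportion to the aggregate disk shares of the VMs each host is running.

```python
# Illustrative sketch of SIOC-style proportional queue allocation.
# Function name and figures are hypothetical, not VMware's code.

def allocate_queue_slots(host_vm_shares, total_slots):
    """Divide an array's device queue slots across ESX hosts in
    proportion to the disk shares of the VMs running on each host."""
    total_shares = sum(sum(vms.values()) for vms in host_vm_shares.values())
    return {
        host: round(total_slots * sum(vms.values()) / total_shares)
        for host, vms in host_vm_shares.items()
    }

# Two hosts, each running two VMs with equal shares: instead of one host
# grabbing 24 slots, the pool is split evenly, 12 slots apiece.
hosts = {
    "esx01": {"vm1": 1000, "vm2": 1000},
    "esx02": {"vm3": 1000, "vm4": 1000},
}
print(allocate_queue_slots(hosts, 24))  # {'esx01': 12, 'esx02': 12}
```

Raising the shares of a mission-critical VM then automatically grows its host's slice of the queue during contention, which is exactly the fairness behaviour described above.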

Paul Manning (Storage Architect, VMware product marketing) indicated during his session that there is a benefit to turning SIOC on without even amending the default share values. This configuration immediately introduces an element of I/O fairness across a datastore, as in the example described above.


This functionality is now available in vSphere 4.1 for Enterprise Plus licence holders only. There are a few immediate caveats to be aware of: it's only supported with block-level storage (FC or iSCSI), so NFS datastores are not supported. It also does not support RDMs or datastores constructed of extents; only a 1:1 LUN-to-datastore mapping is supported. I was told that extents can cause issues with how the latency and throughput values are calculated, which could in turn lead to false-positive I/O throttling; as a result they are not supported yet.

It's a powerful feature which I really like the look of. I personally worry about I/O contention and the lack of control I have over what happens to those important mission-critical VMs when that scenario occurs. The "noisy neighbour" problem can be dealt with at the CPU and memory level with shares, but until now you couldn't do so at the storage level. I have previously resorted to purchasing EMC PowerPath/VE to double the downstream I/O available from each host and thus reduce the chances of contention. I may just rethink that one in future because of SIOC!

Further detailed information can be found in the following VMware technical documents

SIOC – Technical Overview and Deployment Considerations

Managing Performance Variance of applications using SIOC

VMware performance engineering – SIOC Performance Study

VAAI – vStorage API for Array Integration

Shortly before the vSphere 4.1 announcement I listened to an EMC webcast run by Chad Sakac. In this webcast he described EMC's integration with the new vStorage API, specifically around offloading tasks to the array. So what does all this mean, and what exactly is being offloaded? The initial VAAI primitives are hardware-assisted locking, full copy and block zeroing.

So what do these features enable? Let's take a look at them one by one.

Hardware-assisted locking provides improved LUN metadata locking, which is very important for increasing VM-to-datastore density. Take the example of VDI boot storms: if only the blocks relevant to the VM being powered on are locked, then you can have more VMs starting per datastore. The same applies in a dynamic VDI environment where images are being cloned and then spun up; the impact of busy cloning periods (first thing in the morning, for instance) is mitigated.

The full copy feature also has an impact in the dynamic VDI space, with the cloning of machines taking a fraction of the time since the ESX host is not involved. Without VAAI, when a clone is taken the data has to be copied up to the ESX server and then pushed back down to the new VM storage location. The same occurs when you do a Storage vMotion; doing it without VAAI takes up valuable I/O bandwidth and ESX CPU clock cycles. Offloading this to the array prevents this use of host resources and in tests has resulted in a saving of 99% on I/O traffic and 50% on CPU load.

In EMC's labs a test of Storage vMotion was carried out with VAAI turned off; it took 2 minutes 21 seconds. The same test was tried again with VAAI enabled, and this time the Storage vMotion took 27 seconds to complete. That is a 5x improvement, and EMC have indicated that they have seen a 10x improvement in some cases. Check out this great video which shows a Storage vMotion and the impact on ESX and the underlying array.

There is also a fourth VAAI feature which has been left in the vStorage API but is currently unavailable; Mike Laverick wrote about it here. It's a thin provisioning API, and Chad Sakac explained during the group session that its main use is for thin-on-thin storage scenarios. The vStorage API will in the future give vCenter insight into array-level over-provisioning as well as VMware over-provisioning. It will also be used to proactively stun VMs as opposed to letting them crash, as currently happens.

As far as I knew, EMC was the only storage vendor offering array compatibility with VAAI. Chad indicated that they are already working on VAAI v2, looking to add additional hardware offload support as well as NFS support. It would appear that 3PAR offer support, so that kind of means HP do too, right? Vaughn Stewart over at NetApp also blogged about their upcoming support of VAAI. I'm sure all storage vendors will be rushing to make use of this functionality.

Further detailed information can be found at the following locations.

What does VAAI mean to you? – Chad Sakac EMC

EMC VAAI webcast – Chad Sakac EMC

Storage DRS – the future

If you've made it this far through the blog post then the fact we are talking about Storage DRS should come as no great surprise. We've talked about managing I/O performance through disk latency monitoring, and about array-offloaded features such as Storage vMotion and hardware-assisted locking. These features in unison make Storage DRS an achievable reality.

SIOC brings the ability to measure VM latency, giving a set of metrics that can be used for Storage DRS. VMware are planning to add capacity metrics to the Storage DRS algorithm and then aggregate the two for placement decisions. This will ensure a Storage vMotion of an underperforming VM does not lead to capacity issues, and vice versa.
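A toy version of that aggregated placement decision might look like the sketch below (my own illustration, not VMware's algorithm): free capacity acts as a hard constraint, and observed latency picks the winner among the datastores that still fit.

```python
# Toy placement decision combining capacity and latency, in the spirit
# of the Storage DRS description above. Entirely hypothetical.

def place_vm(datastores, vm_size_gb):
    """Pick the lowest-latency datastore that can still hold the VM."""
    candidates = [ds for ds in datastores if ds["free_gb"] >= vm_size_gb]
    if not candidates:
        raise ValueError("no datastore has enough free capacity")
    return min(candidates, key=lambda ds: ds["latency_ms"])["name"]

datastores = [
    {"name": "ds1", "free_gb": 500, "latency_ms": 28},
    {"name": "ds2", "free_gb": 60,  "latency_ms": 9},   # fast but nearly full
    {"name": "ds3", "free_gb": 900, "latency_ms": 14},
]
print(place_vm(datastores, 100))  # ds2 is too small, so ds3 wins on latency
```

The point of aggregating both metrics is visible even in this toy: chasing latency alone would pick ds2 and create a capacity problem.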

Hardware-assisted locking in VAAI means we don't have to be as concerned about the number of VMs in a datastore, something you have to manage manually at the moment. This removal of a limitation means we can automate better; a Storage DRS enabler, if you will.

The improved Storage vMotion response due to VAAI hardware offloading means that the impact of Storage DRS is minimised at the host level. This is one less thing for the VMware administrator to worry about and hence smooths the path for Storage DRS adoption. As you may have seen in the Storage vMotion video above, the overhead on the back-end array also appears to have been reduced, so you're not just shifting the problem somewhere else.

For more information I suggest checking out the following (VMworld 2010 account needed)

TA7805 – Tech Preview – Storage DRS


There is so much content to take in across all three of these subjects that I feel I have merely scratched the surface. What was abundantly clear from the meetings and sessions I attended at VMworld is that VMware and EMC are working closely to bring us easy storage tiering at the VMware level. Storage DRS will be used to create graded/tiered data pools at the vCenter level: pools of similar-type datastores (RAID level, disk type). Virtual machines will be created in these pools, auto-placed, and then moved about within that pool of datastores to ensure capacity and performance.

In my opinion it's an exciting technology, one I think simplifies life for the VMware administrator but complicates life for the VMware designer. It's another performance variable to concern yourself with, and as I heard someone in the VMworld labs comment, "it's a loaded shotgun for those that don't know what they're doing". Myself, I'd be happy to use it now that I have taken the time to understand it; hopefully this post has made it a little clearer for you too.


VMworld 2010 – Let the fun and games commence

August 30th, 2010

I currently find myself sitting rather bored on the flight across to San Francisco for VMworld. Just to clarify, it's certainly not the thought of VMworld that's boring me, far from it in fact! Instead it's the rather poor choice of in-flight movies that's got me thinking about what this VMworld is going to bring us and what I am going to take away from it at the end of the week.

It’s now been 3 years since my last VMworld visit and in that time things have moved on significantly in the industry. Back in 2007 VMware really had no competitors, Hyper-V hadn’t even been released yet. The phrase Cloud computing wasn’t being mentioned at every opportunity and the likes of VMware Fault Tolerance and Storage vMotion were still confined to the stage as keynote demo technologies. It’s mind boggling to think how much has changed in those 3 short years, what a very different landscape it now is!

So I'm wondering: what can we expect from VMworld 2010? Well, there has been plenty of speculation that VMware will announce View 4.5; I'm now taking that one as a given as it has been so widely commented on already. I'm also expecting lots on private and public cloud infrastructures and the transformation steps required to take them from concept to the real deal. All the big IT companies in the world are working towards this model; the concept is out there and now very much on the loose. I'm hearing more chatter about it from people who don't even work in IT! On the cloud front I am expecting to hear lots about how to overcome the fears, the risks involved, the security concerns and all those other things that have the sceptics worried. Oh, and you just know that VMware will have a few extra special announcements hidden up their sleeve; it wouldn't be a VMworld without them.

From my own perspective I have a few angles to cover this year. My employer is funding my attendance, and therefore I have a number of specific topics I need to cover off as part of my day job. Deploying Exchange 2010 in a vSphere environment is one key area, and another is looking at the transition from ESX to ESXi in advance of the service console retirement. On top of this I will be looking at the longer-term strategic view: where is the industry going, and are we pulling in the same direction or do we need to change tack?

As an aside, I have also been asked to present to the Scottish VMware User Group upon my return to the UK. The theme of the VMUG is "VMworld looking back, looking forward", which is meant to cover VMworld US and my experience of it, while also encouraging people to attend the VMworld Europe event in Copenhagen. Hopefully there will be plenty of good content for me to report back on, but enough announcements held over to keep VMworld Europe in Copenhagen appealing.

It's shaping up to be a great week, and I'm looking forward to getting stuck in first thing tomorrow. I just hope the jet lag doesn't hit me too hard, as I have a feeling the various vendor parties and ad-hoc beers just might!


HP Server Power Advisor Tool

August 18th, 2010

I recently purchased three new DL380 G7s for a new ESX deployment, and as part of that purchase I had multiple options when it came to choosing power supplies. I tend to default to the larger power supplies when purchasing servers, as this usually means the server can support its maximum configuration, i.e. built-in scalability.

Recently we had a service provider change their cost model from a flat fee to a metered power cost model. As a result I decided to take a closer look at the actual power consumption of the DL380 G7s I was buying. That's when I discovered another really useful HP online tool: the HP Power Advisor.

This tool does still appear to be a work in progress, as some features are not available when you click on them, but in general there is a lot of very useful functionality in there. You can build a single server, configure it and get your power specifications based on the components within. This is what I did with my DL380 G7s, as you can see by clicking the thumbnail below.

HP Power Advisor DL 380 G7 configuration


If your configuration is a little more advanced, you can drop in a rack and then configure it with all manner of HP Server goodness.  You can even drop in blade enclosures and then configure the interconnects and individual blades as you can see by clicking the thumbnails below.

HP Power Advisor Rack configuration

I'm quite impressed with some of the extra tools I've been finding on HP's website. I blogged recently about the HP Server DDR3 Memory Configuration Tool, which helped me out when a reseller was trying to give me the wrong memory configuration. I'm just wondering what I'm going to find next!


Scottish VMUG – September 23rd – Glasgow

August 18th, 2010


So it's that time again: another Scottish VMware User Group meeting is upon us. This time there is a slight difference: I'm actually presenting, which I have to say is not daunting in the slightest?!?!

I've been asked to present on my experience at VMworld 2010 in San Francisco, an event I am looking forward to immensely. I'll be aiming to tell you all about VMworld, its format, what I did there and why you should invest both the time and the money to attend future events.

If you are interested in this VMUG meeting, please register soon; I have it on good authority that 50% of the places were snapped up within 24 hours of the invite going out. Hopefully I'll see you there.

Register Now

Please join us for the upcoming Scottish VMware User Group meeting on Thursday, September 23rd.

This is a great opportunity to meet with your peers to discuss virtualization trends, best practices and the latest technology.





- Introduction (Scott Walkingshaw)
- VNews (Alistair Sutherland)
- VMware Speaker
- VMworld 2010 Customer Presentation (Craig Stewart/Martin Currie)



The theme of the meeting will be "VMworld 2010 – Looking Back/Looking Forward." Come to network and share ideas with an innovative group of VMware users as you learn how to get the most out of your attendance at VMworld EMEA, and to get a brief overview of the highlights that will by then have been covered at VMworld US.


Symantec Application HA for VMware – VMworld 2010

August 16th, 2010

I was lucky enough last week to be involved in a Gestalt IT conference call with Symantec.  The conference call was designed to give us all a sneak preview of what Symantec were planning to announce at VMworld 2010 in a couple of weeks.  Unfortunately it was under embargo, that is until today!

There were a couple of announcements being made: Symantec introduced a new NFS storage product called VirtualStore, and made some further announcements about NetBackup 7 and new VMware-specific features. However, the most interesting announcement on the call for me was the release of Symantec Application HA for VMware.

Symantec have been looking at why customers are not going "the last mile" with virtualisation. Why are customers not deploying their Tier 1 applications on their virtual platforms? Symantec's view is that customers still have issues with application-level failure within guest VMs. This product has been designed to fill that void, and at present it has no real competitors.

As the call progressed, the current HA options were described by Symantec and discussed by the group. The obvious one is VMware HA, which covers a physical host failure event. Within the VMware HA product there is also VM monitoring, which covers you in the event of an OS-level failure such as a blue screen. Then you can of course employ other technologies such as OS-level clustering; however, you then have to take heed of caveats that hinder the ability to use features such as vMotion and DRS.

I'm always sceptical when I see new virtualisation products; one of my fears is that companies are attempting to jump on the crest of the virtualisation wave. Symantec are obviously a bit more established than your average company, but as always the jury is out until we see a final product doing the business for real. It transpired during the call that the product is actually based on Symantec Veritas Cluster Server, a product with a long history in application availability.

Veritas Cluster Server has a lot of built-in trigger scenarios for common products such as Microsoft SQL Server, Exchange Server and IIS. On top of this out-of-the-box support, Symantec also have a VCS development kit allowing custom scenarios to be written. I like this approach; it reminds me of F5 Networks' use of the customer community to support the writing of custom rules and features for their product. If a custom rule or feature has enough demand, they spend the time developing it into their product range. Perhaps Symantec could look at leveraging their customer base and community in this way to improve the support around VCS trigger scenarios. One other potential use of the VCS SDK that springs to mind is for vendors of specialist software: CRM, ERP, finance systems, etc. They could look to build Application HA into pre-configured virtual appliances, which would be a great selling point for any software vendor.

The product itself is deployed as an in-guest agent. Technical deep-dive information on the exact integration between the Symantec product and VMware was thin on the ground. However, there was mention of Symantec's integration with the VMware HA API, something that I don't think has been announced by VMware just yet. The description given to us during the call was that if Symantec Application HA failed to restart the application, it could send a downstream API call to VMware HA and ask it to restart the VM's operating system. An interesting concept, and something I am sure we'll hear more about at VMworld.

Licensing for this new product is quite competitive at $350 per virtual machine, a small price to pay for ensuring your Tier 1 application recovery is automated. Symantec have promised full integration with vCenter Server, and the screenshot below shows Symantec Application HA in action monitoring a SQL Server 2008 machine; click on the thumbnail to see a full-size image.

If you would like to learn more about Application HA, then get along to VMware and Symantec's breakout session at VMworld: http://www.vmworld.com/docs/DOC-4658

Alternatively you can listen to a Podcast from Symantec’s Niraj Zaveri discussing the new product.  – http://www.symantec.com/podcasts/detail.jsp?podid=ent_application_ha


Win a free trip to VMworld 2010

August 10th, 2010

VMworld 2010 in San Francisco is fast approaching and promises to be a fantastic event. I have two questions for you.

        – If you are not attending already would you like to?

        – Would you like someone else to pay for you to go?

I think the obvious answer to both those questions is a resounding YES! So how do you take advantage of this very, very special offer? Well, over at Gestalt IT my friend Stephen Foskett has arranged a spectacular competition, supported exclusively by the four vendors shown below.


How To Enter

Get yourself over to the VMworld contest extension page on the Gestalt IT website, and get over there quickly. You will need to fill out the entry form by close of play Thursday the 12th of August; most importantly, you need to tell us how you are going to "pay it forward". The following extract explains what we expect from competition entrants:

There’s always a catch, right? Inspired by winner Greg Stuart‘s desire to contribute to the community, we’re not just going to pick a winner at random. We’re going to pick the person who presents the best case for themselves.

Entrants must explain how they plan to “pay it forward” if they get to go to VMworld. Will you start a blog? Write some tutorials? Contribute to a forum or online community? Present to your local VMUG? Get creative and spread the wealth of knowledge you get from the event!

Our panel of judges is made up of none other than the most-excellent roster of past Tech Field Day delegates! They’ve proven themselves to be independent-minded and knowledgeable, and we’re sure that they will pick the best entries!

It's pretty simple, and at present there are not quite as many entries as you'd think. Get involved, people; it's not going to take you long and could end in a nice trip to San Francisco for one of the highlights of the IT calendar. I look forward to reading the entries and helping decide the winner!


HP Server DDR3 Memory Configuration Tool

July 26th, 2010

I've recently been lucky enough to be looking at purchasing a few additional ESX servers. I have spent a good few years now working with HP servers, so I have been looking at the HP ProLiant DL380 G7 model for my particular needs.

I tend to spend a bit of time ensuring that servers are specified correctly using the HP server QuickSpecs. It can take a while, but I want to make sure that I am getting the right configuration for my virtualisation solution. Follow the link to see the current QuickSpecs for the HP ProLiant DL380 G7 server.


While reviewing the QuickSpecs for the DL380 G7, I noticed the following section, which I'd not seen before when buying other HP ProLiant servers.

NOTE: Depending on the memory configuration and processor model, the memory speed may run at 1333MHz, 1066MHz, or 800MHz. Please see the Online Memory Configuration Tool at: www.hp.com/go/ddr3memory-configurator.

Now I usually buy memory in the obvious increments: 4GB, 8GB, 16GB, 32GB, 64GB, 128GB; most people probably do, I imagine. However, it turns out that the new Nehalem (Xeon 5500) and Westmere (Xeon 5600) processors have some specific rules that need to be abided by to ensure optimal performance. This is where this particular HP tool comes in handy, whether you are configuring a new server or attempting to upgrade a current one.


As it turned out, 32GB is not an optimal configuration, something that certain resellers never picked up on. I actually needed either 24GB or 36GB to ensure my server memory runs at 1333MHz and performs at its best.
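The underlying rule is simple to sketch: the Xeon 5500/5600 memory controller has three channels per socket, and memory only runs at full speed when every channel is populated identically. The check below is my own simplified illustration; the real speed rules also factor in DIMM ranks and the number of DIMMs per channel, which is why HP's tool is worth using.

```python
# Simplified illustration of the triple-channel balance rule for
# Nehalem/Westmere (Xeon 5500/5600). Real speed rules also depend on
# DIMM ranks and DIMMs-per-channel, so treat this as a sketch only.

CHANNELS_PER_CPU = 3

def channels_balanced(dimms_per_cpu):
    """True when the DIMM count spreads evenly over all three channels."""
    return dimms_per_cpu % CHANNELS_PER_CPU == 0

# A 32GB build on two CPUs (8 x 4GB = 4 DIMMs per CPU) leaves the
# channels unevenly populated, so the memory gets clocked down:
print(channels_balanced(4))  # False
# 24GB (6 x 4GB = 3 DIMMs per CPU) fills each channel exactly once:
print(channels_balanced(3))  # True
```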


If you are looking at buying a new HP server, or simply looking to add memory to an existing one, I strongly suggest taking a look at this tool to ensure you get it right.



Gestalt IT Tech Field Day Seattle – NEC HYDRAstor

July 16th, 2010

Following my return from my first Tech Field Day, I have been reading through my notes and reflecting on the vendors I saw in Seattle. The one that surprised me most was NEC: everyone has heard of them, but not everyone actually knows what they do or what products they make. As we found out during our visit, NEC have a broad technology portfolio and quite an interesting offering in the storage space.

Here are some basic facts about NEC that you may or may not know:

- Founded in 1899
- Fortune 200 company with over 143,000 staff
- Revenues of $43 billion in 2009
- $3 billion spent on R&D each year across 12 global R&D labs
- 48,000 patents worldwide
- In the storage business since 1950

So with that little history lesson over: the main focus of our visit was NEC's HYDRAstor, their modular grid storage offering for customers with backup and archive storage in mind. It's marketed as "grid storage for the next 100 years", which may sound a little far-fetched, but data growth and data retention periods are ever increasing. From what I saw and heard, the HYDRAstor could very well live up to this bold claim.

There was a lot of content delivered on the day and the session went on for four hours, so I've tried to wrap up some of the key features below. I have expanded on the elements of the HYDRAstor that really caught my attention, as I think they are worth exploring in more detail.

Key Features

- Two-tier architecture based entirely on best-of-breed Intel Xeon 5500 based servers

- The two tiers consist of front-end accelerator nodes and back-end storage nodes

- Shipped as a turnkey solution, though the entry level can be bought for self-racking

- Supports a maximum of 165 nodes: 55 accelerator nodes and 110 storage nodes

- All interconnects based on 1Gb Ethernet networking (NEC network switches included)

- Supports old and new node modules in the same Grid for easy node upgrade and retirement.

- Supports volume presentation with NFS and CIFS (SMB Version 1)

- Non-disruptive auto reallocation of data across any additional grid capacity – DynamicStor

- Higher levels of resilience than RAID with a reduced capacity overhead (see DRD below)

- WAN optimised grid to grid replication minimises network bandwidth requirements – RepliGrid

- WORM Support for secure retention / compliance governance – HYDRAlock

- Efficient drive rebuilds: only the actual data is rebuilt, not the whole drive

- Global inline de-duplication across the entire grid – DataRedux™

- Tight backup vendor integration – strips out backup metadata to improve de-dupe ratios

- Mini HYDRAstor appliance available for remote offices or offsite DR replication.

Data Protection – Distributed Resilient Data™ (DRD)  

The resilience provided by HYDRAstor really caught my eye, primarily because it was so different from anything I had ever seen before.  Distributed Resilient Data (DRD) uses something known as erasure coding to provide extremely high levels of resilience. Now you may think that this would come with a considerable storage and performance overhead, but you’d be wrong.

The HYDRAstor provides six levels of protection (1–6), each with a different degree of protection and capacity overhead. With the default level 3 selected, NEC's implementation of erasure coding splits each data chunk into 12 fragments: 9 data and 3 parity. Erasure coding means that any 9 fragments are enough to reconstruct the complete chunk. So if a chunk is spread over 12 disks in a single storage node, it can withstand 3 disk failures; if those 12 fragments are spread over 12 storage nodes, you can withstand 3 complete node failures.

This default level 3 protection requires a 25% capacity overhead, much like RAID 5. However, by providing for 3 disk failures it offers 300% more protection than RAID 5 and 150% more than RAID 6. If you want the highest level of protection (level 6) then there is a 50% capacity overhead, as with RAID 1; however, you can then withstand the failure of 6 disks or 6 nodes.
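The trade-off between the protection levels reduces to simple arithmetic. The sketch below is my own summary of the n-of-12 scheme as it was described to us, not NEC code: each chunk becomes 12 fragments, the parity count equals the protection level, and any 12-minus-level fragments can rebuild the data.

```python
# My own summary of the HYDRAstor DRD trade-off as described to us:
# each chunk is split into 12 fragments, `level` of which are parity,
# and any (12 - level) fragments are enough to rebuild the chunk.

FRAGMENTS = 12

def drd_profile(level):
    """Failure tolerance and capacity overhead for a protection level."""
    data = FRAGMENTS - level
    return {
        "fragments_needed": data,
        "tolerates_failures": level,
        "capacity_overhead_pct": 100 * level / FRAGMENTS,
    }

# Default level 3: 9 data + 3 parity, 25% overhead, survives 3 failures.
print(drd_profile(3))
# Level 6: 50% overhead (like RAID 1) but survives 6 disk or node failures.
print(drd_profile(6))
```

Seen this way, the appeal is clear: for the same 25% overhead as a small RAID 5 set, level 3 survives three failures instead of one.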

The following video describes Distributed Resilient Data™ (DRD) at the default level 3


High Performing

The demonstration NEC gave us was based on their lab setup of 20 accelerator nodes and 40 storage nodes. This was a 4-rack setup which, as you can see from the photo below, is not small. What it is, though, is a very high-performing storage solution.


NEC demonstrated a data copy that sustained a full 10GB per second of throughput, which worked out at about 540MB per second per front-end accelerator node. The screenshot from the management GUI below shows the total throughput achieved.

The maximum HYDRAstor configuration consists of 11 racks and is capable of 25GB per second, or 90TB per hour. This works out at roughly 2PB in a 24-hour period, an astounding amount of data throughput; surely enough to deal with even the most demanding backup or archiving use case.
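Those headline figures are easy to sanity-check with a quick back-of-the-envelope calculation (decimal units, my own sketch):

```python
# Back-of-the-envelope check of the quoted HYDRAstor throughput figures
# (decimal units: 1TB = 1000GB, 1PB = 1000TB).

def hourly_tb(gb_per_sec):
    """Sustained ingest in TB per hour."""
    return gb_per_sec * 3600 / 1000

def daily_pb(gb_per_sec):
    """Sustained ingest in PB per 24 hours."""
    return hourly_tb(gb_per_sec) * 24 / 1000

print(hourly_tb(25))  # 90.0 -> 90TB per hour, as quoted
print(daily_pb(25))   # 2.16 -> roughly 2PB in a 24-hour period
```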


There were a few negative aspects that I picked up on during our visit, thankfully all ones I feel NEC can address over time.

User Interface

I felt the user interface was a little dated (see screenshot above); it served its basic purpose but wasn't going to win any awards. It was a stark contrast with the very nice, easy-to-use GUIs we saw from Nimble Storage and Compellent. That said, if the HYDRAstor is only being used for backup and archive storage and not primary storage, does it actually need the world's best GUI? Possibly not.

Solution Size

The HYDRAstor came across as a large solution, though I'm not sure why; when I think about it, any storage solution that provides 10GB/sec throughput and 480TB of raw storage is likely to take up 4 racks, and in some instances probably a lot more. Maybe it was the sheer number of network interconnects; perhaps some consolidation onto 10Gb Ethernet could make the solution appear smaller. NEC could also look at shrinking the server sizes, though probably only for the accelerator nodes, as the storage nodes need 12 x 1TB disks, so there's not a lot of scope for size reduction there.


A general consensus among delegates was: why have NEC marketing not been pushing this harder? Why had so many of us in the room not heard about it? I suppose that was one of the reasons we were there: to hear about it, discuss it and ultimately blog about it, as I'm doing now. There are some specific target markets that NEC maybe need to look at for this product, possibly using worldwide data retention regulations as a means of identifying potential markets and clients. More noise also needs to be made by NEC about their efficient de-dupe integration with enterprise backup products such as CommVault Simpana, Symantec NetBackup, TSM and EMC NetWorker. More comments such as the one below wouldn't hurt.

"With the application-aware de-duplication for CommVault we've optimized storage efficiency with a four times improvement in space reduction."
– Pete Chiccino, Chief Information Officer, Bancorp Bank

EMEA availability

NEC told us that this product is not being actively pushed in the EMEA region; currently it is only available for purchase in North America and Japan. One of the points I made to NEC was that the HYDRAstor appeared to me to be a product with a lot of applications in the European market place, possibly more so in the UK. I made specific reference to FSA regulation changes under which financial companies are now required to keep all electronic communications for up to 7 years. NEC’s HYDRAstor, with its high tolerance for failure, global de-duplication across all nodes and grid-like extensibility, is perfect for storing this kind of business-critical compliance data. That is a very specific example; others include insurance companies with longer retention requirements and museums digitising historical documents and books with a “keep forever” retention requirement.

NEC contacted me via Twitter after the event to say that although the HYDRAstor is not on sale in EMEA, a company with a presence in the US will be able to explore purchasing it through NEC America.


I had no idea what to expect when we arrived at NEC’s offices, sure I knew who they were but I had no idea what they were doing in the storage space. Gideon Senderov at NEC certainly saw to it that we had all the information needed to form an opinion, his knowledge of his product was simply outstanding.

NEC HYDRAstor is quite a unique product. It’s easy to scale up and scale out, has high levels of redundancy without the usual capacity penalty and, of course, offers exceptional levels of performance. It strikes me as a product that any IT professional responsible for backup, archiving and long-term data retention would be very, very interested in.

Note : Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend, however all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.


Events, Gestalt-IT, Storage

Gestalt IT Seattle Tech Field Day – Day 2 Summary

July 16th, 2010

It’s now been a couple of days since the second day of the Gestalt IT Tech Field Day, I’m actually taking the opportunity to write this on the plane on the way back from Seattle. So once again I thought I would do a summary post until I get the chance to write up a detailed post on each vendor.


Compellent were one of the main sponsors for the Seattle Tech Field Day and were responsible for us getting access to the Microsoft Campus. So a big thank you to Compellent for their support of Tech Field Day.

Compellent are a company I have had dealings with before, I looked at buying one of their storage devices back in 2008 and was very impressed by the product they had on offer at the time.  This was a great chance for me to revisit Compellent two years on and see how things had changed.

Compellent in general still appears to be much the same product that I liked so much back in 2008. Their pooled storage model, software controlled RAID write down, space efficient snapshots and WAN optimised thin replication are all superb features. Their main differentiator back in 2008 was their ability to do automated storage tiering (Data Progression™), something that others in the industry are starting to catch up on (EMC FAST). Compellent’s Data Progression technology is one that many customers actively use with good results; I was slightly disappointed, though, to learn that their data movement engine only executes once every 24 hours and cannot be made more frequent. I’m not sure how that compares to EMC FAST, but it is something I’ll include in a more expansive post.

A feature I had heard of but didn’t quite understand previously was Compellent’s Live Volume. It’s another unique feature for Compellent, and one of my fellow delegates even described it as “EMC vPlex that you could actually afford”. Compellent implement the Live Volume feature at the software level, as opposed to a hardware-based implementation like EMC vPlex. Compellent are able to present the same volume, with the same identity, in two different locations, which they do using their underlying WAN optimised asynchronous replication. One point of note was that this is not an active/active DR-like setup; it is intended for use in a controlled maintenance scenario, such as SAN fabric maintenance or a DC power-down test.

Compellent also took the opportunity to share some roadmap information. Highlights included the release of the 64-bit Series 40 controller based on Intel Nehalem, an encrypted USB device for seeding replication, a move to smaller 2.5” drives and 256-bit full disk encryption, among others.

Although we were situated on Microsoft’s campus for a large part of Tech Field Day, we were never presented to by Microsoft, which was a shame. We did, however, get the chance to visit the Microsoft store, which is for employees only. It gave us all a chance to buy some discounted Microsoft software and souvenirs of our visit to Redmond, which we all took advantage of.


Tech Field Day delegates Kevin Houston, Stephen Foskett and Jason Boche using their iPhones and iPads in the heart of the Microsoft campus. Note Jason Boche using an iPad and wearing his VMware VCDX shirt, brilliant!


Our afternoon session was spent a short bus ride away from Microsoft at NEC America’s Seattle office.  We were here to hear about NEC’s storage offering (I had no idea they even did storage) and more specifically the NEC HYDRAstor range. We had a very in depth session on this fascinating product with Gideon Senderov, Director of Product Management for the HYDRAstor range.

NEC have taken an innovative approach with this product, one I was not expecting. They utilise full-blown NEC servers to provide a two-tier architecture made up of front-end accelerator nodes and back-end storage nodes. On top of this they don’t use the traditional RAID model, instead using something known as erasure coding to provide improved data protection. I will deep-dive this particular data protection method in another article, but it was a very interesting and different approach from what I’m used to.
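To give a flavour of the idea before the deep-dive: erasure coding splits data into fragments plus redundant fragments, so the original can be rebuilt even if some fragments are lost. The sketch below is deliberately minimal, using a single XOR parity fragment that tolerates one loss; real erasure codes like the ones HYDRAstor uses tolerate multiple simultaneous failures, and the fragment counts here are illustrative, not NEC’s.

```python
from functools import reduce

def xor_all(fragments):
    """XOR a list of equal-length byte strings together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments))

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal fragments plus one XOR parity fragment."""
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)   # pad to a multiple of k
    size = len(data) // k
    fragments = [data[i * size:(i + 1) * size] for i in range(k)]
    return fragments + [xor_all(fragments)]     # k data fragments + 1 parity

def rebuild(fragments: list[bytes], lost: int) -> list[bytes]:
    """Recover one lost fragment by XOR-ing all the survivors."""
    survivors = [f for i, f in enumerate(fragments) if i != lost]
    restored = list(fragments)
    restored[lost] = xor_all(survivors)
    return restored

pieces = encode(b"archive data that must survive a disk loss!!", k=4)
recovered = rebuild(pieces, lost=2)     # pretend fragment 2's disk died
assert recovered[2] == pieces[2]        # rebuilt byte-for-byte
```

The appeal over RAID-style mirroring is the same data durability for far less capacity overhead: here 5 fragments protect 4 fragments’ worth of data, and stronger codes extend the same trick to multiple parity fragments.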

The HYDRAstor grid is marketed as “Storage for the next 100 years”, and with its grid architecture it’s reasonably easy to see how that statement could be realised. You can add additional nodes into the grid and it will automatically redistribute data to take advantage of the new capacity. You can also mark nodes for removal, with the system evacuating their data so they can be taken out of the grid. This, combined with the ability for old and new HYDRAstor nodes to co-exist, shows why it’s a good storage location for data with a very long-term retention requirement.
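Why does adding a node not trigger a wholesale reshuffle? NEC didn’t detail their placement algorithm, but consistent hashing is the standard way grids achieve this, and a toy version shows the property: growing from three storage nodes to four relocates only roughly a quarter of the blocks. Node names and counts here are made up for illustration.

```python
import hashlib
from bisect import bisect

class HashRing:
    """Toy consistent-hash ring: each node owns many points on the ring,
    and a key belongs to the first node point clockwise of its hash."""
    def __init__(self, nodes, vnodes=100):
        self.ring = sorted(
            (int(hashlib.md5(f"{n}:{v}".encode()).hexdigest(), 16), n)
            for n in nodes for v in range(vnodes)
        )
        self.points = [p for p, _ in self.ring]

    def owner(self, key: str) -> str:
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.ring[bisect(self.points, h) % len(self.ring)][1]

keys = [f"block-{i}" for i in range(10_000)]
before = HashRing(["sn1", "sn2", "sn3"])
after = HashRing(["sn1", "sn2", "sn3", "sn4"])   # grid grows by one node
moved = sum(before.owner(k) != after.owner(k) for k in keys)
print(f"{moved / len(keys):.0%} of blocks relocated")   # roughly a quarter
```

Removal works the same way in reverse: delete a node’s points from the ring and only the blocks it owned need evacuating, which is what makes marking nodes for removal practical on a live grid.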

It appeared to me that HYDRAstor was designed specifically as a location for the output of archive or backup data rather than as a primary storage solution. The reason I say this is that when we discussed in-line de-duplication, the product was already integrated with major backup vendors (Symantec NetBackup, CommVault Simpana, Tivoli Storage Manager and EMC Networker). NEC were being very clever by stripping out metadata from these backup products to improve the level of de-dupe that could be achieved when storing backup data.
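The core of in-line de-duplication is simple to sketch: hash each chunk of incoming data and only store chunks you haven’t seen before. The vendor metadata stripping matters because backup formats interleave their own headers with the data, shifting chunk contents so identical files no longer hash identically. This is a simplified fixed-block illustration, not NEC’s implementation, which will use variable-length chunking and far more engineering.

```python
import hashlib

class DedupeStore:
    """Toy in-line dedupe: store each unique chunk once, keyed by its hash."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}                            # sha256 digest -> chunk bytes

    def write(self, data: bytes) -> list[str]:
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # duplicate chunks cost nothing
            recipe.append(digest)
        return recipe                               # digest list rebuilds the stream

    def read(self, recipe: list[str]) -> bytes:
        return b"".join(self.chunks[d] for d in recipe)

store = DedupeStore()
backup = b"A" * 8192 + b"B" * 4096      # contains two identical 4 KB chunks
recipe = store.write(backup)
assert store.read(recipe) == backup
assert len(store.chunks) == 2           # 3 chunks written, only 2 stored
```

Repeated full backups are the best case for this: most chunks match last night’s run, so each new backup consumes only the space of what actually changed.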

I will revisit the HYDRAstor, once I have had a chance to go over my notes I fully intend to dedicate a full article to it as I was very impressed.


Rodney Haywood and Gideon Senderov white boarding the configuration of the NEC HYDRAstor

Note : Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend, however all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.


Events, Gestalt-IT, Storage

Gestalt IT Seattle Tech Field Day – Day 1 Summary

July 15th, 2010

So that is Day 1 of the Seattle Tech Field Day out of the way and what a day it has been.  We’ve been out to Microsoft Redmond HQ, or “the temple” as John Obeto calls it.  We saw some new products from Veeam and were privileged enough to be the first port of call for a new and very exciting storage start-up, Nimble Storage.

There has been a lot of information flowing about today, an awful lot. My plan is to spend some time assimilating all the information and doing more detailed posts on everyone we’ve seen, so for now I think a summary will suffice.


Veeam are a company that needs very little introduction.  They’ve not been around long (3 years to be exact) but they are a well known and well respected brand in the virtualisation space.  Today Veeam were announcing a new product / concept that they have at the development stage, one that got delegates quite excited.

Veeam were introducing vPower, a new offering made up of 3 products: SureBackup, Instant Restore and CDP (a much debated point). What stood out most for Tech Field Day delegates was some of the Instant Restore functionality; the ability to run your VM directly from a backup image was well received. My personal thought at the time was: who wouldn’t want a mechanism to test that your backups actually work? The added bonus was that Veeam also provide network isolation and an almost Lab Manager-like ability to create groups of machines that should be recovered together. The idea of verifying your backups by running them from the backup storage was one thing; Veeam had, however, written their own NFS server in order to do this. This means that technically, in the event of an outage, you can run your machine directly from the Veeam backup server’s NFS datastore. It isn’t going to be fast, but it’s running, which is the main thing you should be concerned about. It was all good stuff, and the general consensus was that it was a step in the right direction and quite a shift in the VM backup space.


Our surprise for the day was a new tech start-up launching themselves and their product for the very first time. Nimble Storage is a new company made up of a number of high-pedigree employees with proven track records at companies such as NetApp and Data Domain. This is further backed up by an experienced board of directors, top venture capital investment and, last but not least, a pretty good product at a good price point.

Without going into too much detail, Nimble Storage have produced a new array that could reshape the way people think about primary and backup storage, as well as the use of flash storage within an array. Right at the outset they stated that their aim was to introduce flash storage to the mid-size enterprise while also utilising a lot of the features being pioneered by other vendors. Nimble’s approach is different in that it provides a converged appliance, one that does primary and secondary storage within the same device while also introducing flash caching to provide high performance. Through the use of inline compression, flash cache, sequential write-down to disk, efficient snapshots and replication, as well as zero-space cloning, Nimble is packing a lot into their product. At the top end you are paying a list price of $99,000 + $6,000 annual maintenance. For this you are looking at 18TB of primary storage (not including flash cache) and 15,000 IOPS from a SATA/flash mix. They were also claiming 216TB of backup capacity within that same device, driven primarily by their use of space-efficient snapshots. I have a lot of notes on this particular presentation and will be expanding upon it in the coming weeks.
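The trick that makes cheap SATA deliver those IOPS figures is keeping the hot working set in flash. Nimble didn’t share their caching internals with us, so the sketch below is just a generic read-path illustration: an LRU cache standing in for the flash tier in front of a dict standing in for the SATA tier.

```python
from collections import OrderedDict

class FlashCache:
    """Toy read path: serve hot blocks from a small LRU 'flash' tier,
    fall back to the slow 'SATA' tier on a miss and populate the cache."""
    def __init__(self, backing: dict, capacity: int):
        self.backing = backing              # stands in for the SATA tier
        self.capacity = capacity            # flash is small relative to disk
        self.cache = OrderedDict()          # stands in for the flash tier
        self.hits = self.misses = 0

    def read(self, block: int) -> bytes:
        if block in self.cache:
            self.cache.move_to_end(block)   # refresh LRU position
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # the expensive disk read
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the coldest block
        return data

disk = {i: bytes([i % 256]) * 512 for i in range(1000)}
cache = FlashCache(disk, capacity=100)
for _ in range(10):                         # a hot working set of 50 blocks
    for b in range(50):
        cache.read(b)
print(cache.hits, cache.misses)             # prints: 450 50
```

As long as the working set fits in flash, only the first touch of each block pays the SATA latency; every re-read is served at flash speed, which is how a converged SATA/flash box can quote IOPS numbers pure spinning disk never could.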


Now F5 was a company I was really interested to see, primarily because I wasn’t entirely sure what they offered. Sure, I knew they were into networking, but even then, what did they do in the networking space? I had no idea. We were treated to 4 different presentations that covered the following.

  • WAN optimised geographical vMotion
  • Coding of iRules and iControl for the BIG-IP appliances
  • Intelligent client VPN connectivity via BIG-IP’s Edge gateway module.
  • Data Management and Routing using F5’s ARX appliance, file system virtualisation.


All were very impressive and I will definitely be looking to dig a little deeper and examine in full some of the technology presented and discussed. I was particularly impressed with F5’s vision for data management / file-level virtualisation, as they seem to be one of the only companies in this space that I am aware of. This vision was demonstrated to us as a mix of onsite primary tier 1 storage and off-site cloud storage. The ARX appliance would sit as a director presenting a unified view of the storage to the end user, while internally keeping a routing table of up to a billion files. This allows IT departments to place files across multiple types of storage, whether that be differing internal storage devices or storage in the cloud. The concept sits well with the current cloud strategies being developed by most major IT companies; what’s surprising is that nobody else is doing it. There is a lot more to be said about F5, and I plan to delve a little deeper and write some more.
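The routing-table idea is easier to grasp with a sketch. This is not F5’s implementation, just a minimal illustration of the principle: clients see one stable namespace, while a mapping underneath records which backend actually holds each file, so policy can move files between tiers without clients noticing. All paths and backend names are invented.

```python
class FileRouter:
    """Toy file-virtualisation layer: one logical namespace, many backends.
    The routing table maps each logical path to wherever the file lives."""
    def __init__(self):
        self.routes = {}                    # logical path -> backend name

    def place(self, path: str, backend: str):
        self.routes[path] = backend

    def migrate(self, path: str, backend: str):
        # Policy moves the file behind the scenes; clients keep the same path.
        self.routes[path] = backend

    def resolve(self, path: str) -> str:
        """Called on every client access to direct the I/O."""
        return self.routes[path]

router = FileRouter()
router.place("/finance/q2-report.xls", "tier1-nas")
# An age-based policy later demotes the cold file to cheaper storage:
router.migrate("/finance/q2-report.xls", "cloud-archive")
assert router.resolve("/finance/q2-report.xls") == "cloud-archive"
```

Scaling that dict to a billion entries, keeping it consistent during migrations and surviving appliance failure is where the real engineering lives, but the client-facing contract is exactly this: the path never changes, only the route behind it.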


It’s been a very busy day,  one however that has been exceptionally rewarding. Tech Field Day has been everything I expected it to be so far,  there has been a wealth of information shared and a lot of feedback given. The biggest win for me though is getting the time to learn more about vendors and their product offerings, that and hearing the comments of my fellow delegates.  There is a good mix of intelligent people from varied backgrounds and that has only added to the experience so far.

We ended the night with a tour of the Boeing Museum of Flight and a couple of drinks with dinner. It’s now midnight, and after just 6 hours’ sleep last night and a busy schedule ahead for tomorrow, I am going to call it a night there.

Note : Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend, however all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.


Events, General, Gestalt-IT, Tech Field Day