HP Server Power Advisor Tool

August 18th, 2010

I recently purchased three new DL380 G7s for a new ESX deployment, and as part of that purchase I had multiple options when it came to choosing power supplies. I tend to default to the larger power supplies when purchasing servers, as this usually means the server can support its maximum configuration, i.e. built-in scalability.

Recently we had a service provider change their cost model from a flat fee to a metered power cost model. As a result I decided to take a closer look at the actual power consumption of the DL380 G7s I was buying. That's when I discovered another really useful HP online tool, the HP Power Advisor.

This tool still appears to be a work in progress, as some features are not yet available when you click on them, but in general there is a lot of very useful functionality in there. You can build a single server, configure it and get your power specifications based on the components within. This is what I did with my DL380 G7s, as you can see by clicking the thumbnail below.

HP Power Advisor DL 380 G7 configuration

 

If your configuration is a little more advanced, you can drop in a rack and then configure it with all manner of HP Server goodness.  You can even drop in blade enclosures and then configure the interconnects and individual blades as you can see by clicking the thumbnails below.

HP Power Advisor Rack configuration

I'm quite impressed with some of the extra tools I've been finding on HP's website. I blogged recently about the HP Server DDR3 Memory Configuration Tool, which helped me out when a reseller was trying to give me the wrong memory configuration. I'm just wondering what I'll find next!


Servers

Scottish VMUG – September 23rd – Glasgow

August 18th, 2010


So it's that time again: another Scottish VMware User Group meeting is upon us. This time there is a slight difference, as I'm actually presenting, which I have to say is not daunting in the slightest(!).

I've been asked to present on my experience at VMworld 2010 in San Francisco, an event I am looking forward to immensely. I'll be aiming to tell you all about VMworld, its format, what I did there and why you should invest both the time and the money to attend future events.

If you are interested in attending this VMUG meeting, please register soon; I have it on good authority that 50% of the places were snapped up within 24 hours of the invite going out. Hopefully I'll see you there.

Register Now

Please join us for the upcoming Scottish VMware User Group meeting on Thursday, September 23rd.

This is a great opportunity to meet with your peers to discuss virtualization trends, best practices and the latest technology.

Agenda

13:00 – Registration

13:30 – Introduction (Scott Walkingshaw)

13:45 – VNews (Alistair Sutherland)

14:00 – VMware Speaker

15:00 – Coffee/Networking

15:30 – VMworld 2010 Customer Presentation (Craig Stewart/Martin Currie)

16:30 – Close

The theme of the meeting will be "VMworld 2010 – Looking Back/Looking Forward." Come to network and share ideas with an innovative group of VMware users as you learn how to get the most out of your attendance at VMworld EMEA, and to get a brief overview of the highlights that will by then have been covered at VMworld US.


Events, VMware

Symantec Application HA for VMware – VMworld 2010

August 16th, 2010

I was lucky enough last week to be involved in a Gestalt IT conference call with Symantec.  The conference call was designed to give us all a sneak preview of what Symantec were planning to announce at VMworld 2010 in a couple of weeks.  Unfortunately it was under embargo, that is until today!

There were a couple of announcements being made: Symantec introduced a new NFS storage product called VirtualStore and made some further announcements about NetBackup 7 and new VMware-specific features. However, the most interesting announcement on the call for me was the release of Symantec Application HA for VMware.

Symantec have been looking at why customers are not going “the last mile” with virtualisation. Why are customers not deploying their Tier 1 applications on their virtual platforms? Symantec's view is that customers still have issues with application-level failure within guest VMs. This product has been designed to fill that void, and at present it has no real competitors.

As the call progressed the current HA options were described by Symantec and discussed by the group. The obvious one is VMware HA, which covers a physical host failure. Within the VMware HA product there is also VM monitoring, which covers you in the event of an OS-level failure such as a blue screen. Then you can of course employ other technologies such as OS-level clustering; however, you then have to take heed of caveats that hinder the ability to use features such as vMotion and DRS.

I'm always sceptical when I see new virtualisation products; one of my fears is that companies are simply attempting to jump on the crest of the virtualisation wave. Symantec are obviously a bit more established than your average company, but as always the jury is out until we see a final product doing the business for real. It transpired during the call that the product is actually based on Symantec Veritas Cluster Server, a product with a long history in application availability.

Veritas Cluster Server has a lot of built-in trigger scenarios for common products such as Microsoft SQL Server, Exchange Server and IIS. On top of this out-of-the-box support, Symantec also have a VCS development kit allowing custom scenarios to be written. I like this approach; it reminds me of F5 Networks' use of the customer community to support the writing of custom rules and features for their products. If a custom rule or feature has enough demand, they spend the time developing it into their product range. Perhaps Symantec could look at leveraging their customer base and community in this way to improve the support around VCS trigger scenarios. One other potential use of the VCS SDK that springs to mind is for application vendors making specialist software: CRM, ERP, finance systems, etc. They could build Application HA into pre-configured virtual appliances, which would be a great selling point for any software vendor.

The product itself is deployed as an in-guest agent. Technical deep-dive information on the exact integration between the Symantec product and VMware was thin on the ground. However, there was mention of Symantec's integration with the VMware HA API, something that I don't think has been announced by VMware just yet. The description given to us during the call was that if Symantec Application HA failed to restart the application, it could send a downstream API call to VMware HA and ask it to restart the VM's operating system. An interesting concept, and something I am sure we'll hear more about at VMworld.

Licensing for this new product is quite competitive at $350 per virtual machine, a small price to pay for ensuring your Tier 1 application recovery is automated. Symantec have promised full integration with vCenter Server, and the screenshot below shows Symantec Application HA in action monitoring a SQL 2008 server; click on the thumbnail to see a full-size image.

If you would like to learn more about Application HA, then get along to VMware and Symantec's breakout session at VMworld: http://www.vmworld.com/docs/DOC-4658

Alternatively you can listen to a podcast from Symantec's Niraj Zaveri discussing the new product: http://www.symantec.com/podcasts/detail.jsp?podid=ent_application_ha


General, Gestalt-IT, New Products, VMware

Win a free trip to VMworld 2010

August 10th, 2010

VMworld 2010 in San Francisco is fast approaching and promises to be a fantastic event. I have two questions for you.

        – If you are not attending already would you like to?

        – Would you like someone else to pay for you to go?

I think the obvious answer to both those questions is a resounding YES! So how do you take advantage of this very, very special offer? Well, over at Gestalt IT my friend Stephen Foskett has arranged a spectacular competition supported exclusively by the four vendors shown below.

                                  

How To Enter

Get yourself over to the VMworld contest extension page on the Gestalt IT web site, and get over there quickly. You will need to fill out the entry form by close of play on Thursday the 12th of August; most importantly, you need to tell us how you are going to “pay it forward”. What does this mean? Well, the following extract explains what we expect from competition entrants.

There’s always a catch, right? Inspired by winner Greg Stuart‘s desire to contribute to the community, we’re not just going to pick a winner at random. We’re going to pick the person who presents the best case for themselves.

Entrants must explain how they plan to “pay it forward” if they get to go to VMworld. Will you start a blog? Write some tutorials? Contribute to a forum or online community? Present to your local VMUG? Get creative and spread the wealth of knowledge you get from the event!

Our panel of judges is made up of none other than the most-excellent roster of past Tech Field Day delegates! They’ve proven themselves to be independent-minded and knowledgeable, and we’re sure that they will pick the best entries!

It's pretty simple, and at present there are not quite as many entries as you'd think. Get involved, people; it's not going to take you long and could end in a nice trip to San Francisco for one of the highlights of the IT calendar. I look forward to reading the entries and helping decide the winner!


Events, General, VMware

HP Server DDR3 Memory Configuration Tool

July 26th, 2010

I’ve recently been lucky enough to be looking at purchasing a few additional ESX servers. I have spent a good few years now working with HP servers so have been looking at the HP Proliant DL380 G7 model for my particular needs.

I tend to spend a bit of time ensuring that the servers are specified correctly using the HP server quick specs. It can take a bit of time, but I want to make sure that I am getting the right configuration for my virtualisation solution. As an example, follow the link to the current quick specs for the HP ProLiant DL380 G7 server.


While reviewing the quick specs for the DL380 G7, I noticed the following section, which I'd not seen before when buying other HP ProLiant servers.

NOTE: Depending on the memory configuration and processor model, the memory speed may run at 1333MHz, 1066MHz, or 800MHz. Please see the Online Memory Configuration Tool at: www.hp.com/go/ddr3memory-configurator.

Now I usually buy memory in the usual increments: 4GB, 8GB, 16GB, 32GB, 64GB, 128GB, as most people probably do. However, it turns out that the new Nehalem (Xeon 5500) and Westmere (Xeon 5600) processors have some specific rules that need to be abided by to ensure optimal performance. This is where this particular HP tool comes in handy, whether you are configuring a new server or attempting to upgrade a current one.

As it turned out, 32GB is not an optimal configuration, something that certain resellers never picked up on. I actually needed either 24GB or 36GB to ensure my server memory runs at 1333MHz and performs at its best.
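
The gist of the rule, as I understand it, is that the total memory should spread evenly across all of the CPUs' memory channels using identical DIMMs. Below is a minimal PowerShell sketch of that balance check; the two-socket, three-channels-per-CPU layout and the DIMM sizes are my own assumptions for illustration, and HP's Online Memory Configuration Tool remains the authority on which configurations actually run at 1333MHz.

# A rough sketch of the channel-balance rule, not a replacement for HP's tool.
# Assumptions for illustration: two populated sockets, three memory channels
# per CPU (Xeon 5500/5600), and identical DIMMs in every channel.
$channelsPerCpu = 3
$sockets        = 2
$totalChannels  = $channelsPerCpu * $sockets      # 6 channels in a DL380 G7
$dimmSizesGB    = 2, 4, 8                         # common DDR3 DIMM sizes

foreach ($targetGB in 24, 32, 36) {
    # A config is balanced if the target fills every channel with equal-sized DIMMs
    $fits = $dimmSizesGB | Where-Object { ($targetGB % ($_ * $totalChannels)) -eq 0 }
    if ($fits) {
        $sizes = ($fits | ForEach-Object { "$($_)GB" }) -join ', '
        Write-Output "${targetGB}GB spreads evenly across $totalChannels channels using $sizes DIMMs"
    }
    else {
        Write-Output "${targetGB}GB cannot be split evenly across $totalChannels channels - likely sub-optimal"
    }
}

Run against 24, 32 and 36, only 32 fails the even-split test, which matches what the HP tool reported; the real rules (DIMMs per channel, ranks, voltage) are more involved, hence the tool.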


If you are looking at buying a new HP server, or simply looking to add memory to an existing one, I strongly suggest taking a look at this tool to ensure you get it right.

http://h18000.www1.hp.com/products/servers/options/tool/hp_memtool.html


General, Servers

Gestalt IT Tech Field Day Seattle – NEC HYDRAstor

July 16th, 2010

Following my return from my first Tech Field Day I have been reading through my notes and reflecting on the vendors I saw in Seattle. The one that surprised me most was NEC: everyone has heard of them, but not everyone actually knows what they do or what products they make. As we found out during our visit, NEC have a broad technology portfolio and quite an interesting offering in the storage space.

Here are some basic facts about NEC that you may or may not know:

- Founded in 1899
- Fortune 200 company with over 143,000 staff
- Revenues of $43 Billion in 2009
- $3 Billion spent in R&D each year across 12 R&D global labs
- 48,000 patents worldwide.
- Have been in storage since 1950 

So with that little history lesson over, the main focus of our visit was NEC's HYDRAstor. This is their modular grid storage offering, aimed at customers with backup and archive storage in mind. It's marketed as “Grid storage for the next 100 years”, which may sound a little far-fetched, but data growth and data retention periods are ever increasing. From what I saw and heard, the HYDRAstor could very well live up to this bold claim.

There was a lot of content delivered on the day and the session went on for 4 hours, so I’ve tried to wrap up some of the key features below. I have expanded on the key elements of the HYDRAstor that really caught my attention as I think they are worth exploring in more detail.

Key Features

- 2 tier architecture based entirely on best of breed Intel Xeon 5500 based servers

- 2 tier architecture consists of front end accelerator nodes and back end storage nodes

- Shipped as a turnkey solution, though entry level can be bought for self racking.

- Supports a maximum of 165 nodes: 55 accelerator nodes and 110 storage nodes

- All interconnects based on 1Gb Ethernet networking (NEC network switches included)

- Supports old and new node modules in the same Grid for easy node upgrade and retirement.

- Supports volume presentation with NFS and CIFS (SMB Version 1)

- Non-disruptive auto reallocation of data across any additional grid capacity – DynamicStor

- Higher levels of resilience than RAID with a reduced capacity overhead (see DRD below)

- WAN optimised grid to grid replication minimises network bandwidth requirements – RepliGrid

- WORM Support for secure retention / compliance governance – HYDRAlock

- Efficient drive rebuilds – only the actual data is rebuilt, not the whole drive

- Global inline de-duplication across the entire grid – DataRedux™

- Tight backup vendor integration – strips out backup metadata to improve de-dupe ratios

- Mini HYDRAstor appliance available for remote offices or offsite DR replication.

Data Protection – Distributed Resilient Data™ (DRD)  

The resilience provided by HYDRAstor really caught my eye, primarily because it was so different from anything I had ever seen before.  Distributed Resilient Data (DRD) uses something known as erasure coding to provide extremely high levels of resilience. Now you may think that this would come with a considerable storage and performance overhead, but you’d be wrong.

The HYDRAstor provides 6 levels of protection (1–6), all with differing levels of protection and capacity overhead. With the default level 3 selected, NEC's implementation of erasure coding splits each data chunk into 12 fragments: 9 data and 3 parity. Erasure coding means that only 9 of those fragments are ever needed to reconstruct a complete data chunk. So if a chunk is spread over 12 disks in a single storage node, it can withstand 3 disk failures; if those 12 fragments are spread over 12 storage nodes, then you can withstand 3 complete node failures.

This default level 3 protection requires a 25% capacity overhead, much like RAID 5. However, by tolerating 3 disk failures it provides 300% more protection than RAID 5 and 150% more protection than RAID 6. If you want to go to the highest level of protection (level 6) then there is a 50% capacity overhead, as with RAID 1, but you can withstand the failure of 6 disks or 6 nodes.
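
The arithmetic behind those overhead figures is easy to check. The short PowerShell sketch below assumes, purely for illustration, that each protection level k keeps 12 fragments per chunk with k of them being parity, which matches the level 3 and level 6 figures quoted above; NEC didn't spell out the fragment counts for the other levels.

# Illustrative only - assumes level k means k parity fragments out of 12 per chunk.
foreach ($parity in 1..6) {
    $total    = 12
    $data     = $total - $parity
    $overhead = [math]::Round(($parity / $total) * 100, 1)   # parity's share of raw capacity
    "Level {0}: {1} data + {0} parity fragments, survives {0} disk/node failures, {2}% overhead" -f $parity, $data, $overhead
}

Level 3 works out at 25% and level 6 at 50%, exactly the figures NEC quoted.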

The following video describes Distributed Resilient Data™ (DRD) at the default level 3

 

High Performing

The demonstration NEC gave us was based on their lab setup of 20 accelerator nodes and 40 storage nodes. This was a 4-rack setup which, as you can see from the photo below, is not small. What it is, though, is a very high-performing storage solution.

NEC's 4-rack HYDRAstor lab setup

NEC demonstrated a data copy that achieved a full 10GB per second of throughput, which worked out at about 540MB per second per front-end accelerator node. The screenshot from the management GUI below shows the total throughput achieved.

The maximum HYDRAstor configuration consists of 11 racks and is capable of 25GB per second, or 90TB per hour. This works out at roughly 2PB in a 24-hour period, which is an astounding amount of data throughput; surely enough to deal with even the most demanding backup or archiving use case.
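
For anyone who wants to check that maths, the conversion is straightforward (assuming decimal units throughout):

# Sanity check of the quoted figures, assuming decimal (base-10) units.
$gbPerSec  = 25
$tbPerHour = $gbPerSec * 3600 / 1000            # 90 TB per hour
$pbPerDay  = $tbPerHour * 24 / 1000             # roughly 2.16 PB in 24 hours
"{0} GB/s = {1} TB/hour = {2} PB/day" -f $gbPerSec, $tbPerHour, [math]::Round($pbPerDay, 2)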
 

HYDRAstor management GUI showing the total throughput achieved

There were a few negative aspects that I picked up on during our visit, thankfully all ones I feel can be addressed by NEC over time.

User Interface

I felt the user interface was a little dated (see screenshot above); it served its basic purpose but wasn't going to win any awards. It was a stark contrast when compared with the very nice and easy-to-use GUIs we saw from Nimble Storage and Compellent. That said, if the HYDRAstor is only being used for backup and archive storage and not primary storage, does it actually need to have the world's best GUI? Possibly not.

Solution Size

The HYDRAstor came across as a large solution, though I'm not sure why. When I think about it, any storage solution that provides 10GB/sec of throughput and 480TB of raw storage is likely to take up 4 racks, and in some instances probably a lot more. Maybe it was the sheer number of network interconnects; perhaps some consolidation with 10Gb Ethernet could help make the solution appear smaller. NEC could also look at shrinking down the server sizes, though that is probably only possible with the accelerator nodes, as the storage nodes need 12 x 1TB disks, so there is not a lot of scope for size reduction there.

Marketing

A general consensus among delegates was: why have NEC marketing not been pushing this harder, and why had so many of us in the room not heard about it? I suppose that was one of the reasons we were there, to hear about it, discuss it and ultimately blog about it, as I'm doing now. There are some specific target markets that NEC maybe need to look at for this product, possibly using worldwide data retention regulations as a means of identifying potential markets and clients. More noise needs to be made by NEC about their efficient de-dupe integration with enterprise backup products such as CommVault Simpana, Symantec NetBackup, TSM and EMC Networker. More comments such as the one below wouldn't hurt.

with the application aware de-duplication for CommVault we’ve optimized storage efficiency with a four times improvement in space reduction.
Pete Chiccino, Chief Information Officer, Bancorp Bank

EMEA availability

NEC told us that this product is not being actively pushed in the EMEA region. Currently it is only available for purchase in North America and Japan. One of the points I made to NEC was that the HYDRAstor appeared to be a product with a lot of applications in the European marketplace, possibly more so in the UK. I made specific reference to FSA regulation changes under which financial companies are now required to keep all electronic communications for up to 7 years. NEC's HYDRAstor, with its high tolerance for failure, global de-duplication across all nodes and grid-like extensibility, is perfect for storing this kind of key compliance data. That is a very specific example; others include insurance companies, who have longer retention requirements, and museums digitising historical documents and books with a “keep forever” retention requirement.

NEC contacted me via Twitter after the event to say that, although the HYDRAstor is not on sale in EMEA, a company with a presence in the US will be able to explore purchasing it through NEC America.

Summary

I had no idea what to expect when we arrived at NEC's offices; sure, I knew who they were, but I had no idea what they were doing in the storage space. Gideon Senderov at NEC certainly saw to it that we had all the information needed to form an opinion, and his knowledge of his product was simply outstanding.

NEC HYDRAstor is quite a unique product. It's easy to scale up and scale out, has high levels of redundancy without the normal capacity penalty and, of course, exceptional levels of performance. It strikes me as a product that any IT professional responsible for backup, archiving and long-term data retention would be very, very interested in.

Note: Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend; however, all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.


Events, Gestalt-IT, Storage

Gestalt IT Seattle Tech Field Day – Day 2 Summary

July 16th, 2010

It's now been a couple of days since the second day of the Gestalt IT Tech Field Day; I'm actually taking the opportunity to write this on the plane on the way back from Seattle. So once again I thought I would do a summary post until I get the chance to write up a detailed post on each vendor.


Compellent were one of the main sponsors for the Seattle Tech Field Day and were responsible for us getting access to the Microsoft Campus. So a big thank you to Compellent for their support of Tech Field Day.

Compellent are a company I have had dealings with before; I looked at buying one of their storage devices back in 2008 and was very impressed by the product they had on offer at the time.

Compellent in general still appears to be much the same product that I liked so much back in 2008. Their pooled storage model, software-controlled RAID write down, space-efficient snapshots and WAN-optimised thin replication are all superb features. Their main differentiator back in 2008 was their ability to do automated storage tiering (Data Progression™), something that others in the industry are starting to catch up to (EMC FAST). Compellent's Data Progression technology is one that many customers actively use with good results; I was slightly disappointed, though, to learn that their data movement engine only executes once every 24 hours and cannot be made more frequent. I'm not sure how that compares to EMC FAST, but it is something I'll include in a more expansive post.

A feature I had heard of but didn't quite understand previously was Compellent's Live Volume. It's another unique feature for Compellent, and one of my fellow delegates even described it as “EMC vPlex that you could actually afford”. Compellent implement the Live Volume feature at the software level, as opposed to a hardware-based implementation like EMC vPlex. Compellent are able to present the same volume, with the same identity, in two different locations; they do this using the underlying WAN-optimised asynchronous replication. One point of note was that this is not an active/active DR-like setup; it is intended for use in a controlled maintenance scenario, such as SAN fabric maintenance or a DC power-down test.

Compellent also took the opportunity to share some roadmap information. Highlights included the release of the 64-bit Series 40 controller based on the Intel Nehalem, an encrypted USB device for seeding replication, a move to smaller 2.5” drives and 256-bit full disk encryption, among others.

Although we were situated on Microsoft's campus for a large part of Tech Field Day, we were never presented to by Microsoft, which was a shame. We did, however, get the chance to visit the Microsoft store, which is for employees only. It gave us all a chance to buy some discounted Microsoft software and souvenirs of our visit to Redmond, which we all took advantage of.


Tech Field Day delegates Kevin Houston, Stephen Foskett and Jason Boche using their iPhones and iPads in the heart of the Microsoft campus. Note Jason Boche using an iPad and wearing his VMware VCDX shirt, brilliant!


Our afternoon session was spent a short bus ride away from Microsoft at NEC America's Seattle office. We were there to hear about NEC's storage offering (I had no idea they even did storage) and more specifically the NEC HYDRAstor range. We had a very in-depth session on this fascinating product with Gideon Senderov, Director of Product Management for the HYDRAstor range.

NEC have taken an innovative approach with this product, one I was not expecting. They utilise full-blown NEC servers to provide a two-tier architecture made up of front-end accelerator nodes and back-end storage nodes. On top of this they don't use the traditional RAID model, instead using something known as erasure coding to provide improved data protection. I will deep-dive into this particular data protection method in another article, but it was a very interesting and different approach to what I'm used to.

The HYDRAstor grid is marketed as “Storage for the next 100 years”, and with its grid architecture it's reasonably easy to see how that statement could be realised. You can add additional nodes into the grid and it will automatically redistribute itself to take advantage of the capacity. You can also mark nodes for removal, with the system evacuating the data to enable nodes to be removed from the grid. This, combined with the ability for old and new HYDRAstor nodes to co-exist, shows why it's a good storage location for data with a very long-term retention requirement.

It appeared to me that HYDRAstor was designed specifically as a location for the output of archive or backup data, and not as a primary data storage solution. The reason I say this is that when we discussed in-line de-duplication, the product was already integrated with the major backup vendors (Symantec NetBackup, CommVault Simpana, Tivoli Storage Manager and EMC Networker). NEC were getting very clever by stripping out metadata from these backup products to improve the level of de-dupe that could be achieved when storing backup data.

I will revisit the HYDRAstor, once I have had a chance to go over my notes I fully intend to dedicate a full article to it as I was very impressed.


Rodney Haywood and Gideon Senderov white boarding the configuration of the NEC HYDRAstor

Note: Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend; however, all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.


Events, Gestalt-IT, Storage

Gestalt IT Seattle Tech Field Day – Day 1 Summary

July 15th, 2010

So that is Day 1 of the Seattle Tech Field Day out of the way and what a day it has been.  We’ve been out to Microsoft Redmond HQ, or “the temple” as John Obeto calls it.  We saw some new products from Veeam and were privileged enough to be the first port of call for a new and very exciting storage start-up, Nimble Storage.

There has been a lot of information flowing about today, an awful lot. My plan is to spend some time assimilating all the information and doing more detailed posts on everyone we’ve seen, so for now I think a summary will suffice.


Veeam are a company that needs very little introduction. They've not been around long (3 years to be exact), but they are a well-known and well-respected brand in the virtualisation space. Today Veeam were announcing a new product/concept that they have at the development stage, one that got delegates quite excited.

Veeam were introducing vPower, a new offering made up of three products: SureBackup, Instant Restore and CDP (a much debated point). What stood out most for Tech Field Day delegates was some of the Instant Restore functionality; the ability to run your VM directly from the backup image was well received. My personal thought at the time was: who wouldn't want a mechanism available to test that your backups actually work? An added bonus is that Veeam also provide network isolation and an almost Lab Manager-like ability to create groups of machines that should be recovered together. The idea of verifying your backups by running them from the backup storage was one thing; Veeam had, however, written their own NFS implementation in order to do this. This means that technically, in the event of an outage, you can run your machine directly from the Veeam backup server's NFS datastore. It isn't going to be fast, but it's running, which is the main thing you should be concerned about. It was all good stuff, and the general consensus was that it was a step in the right direction and quite a shift in the VM backup space.


Our surprise for the day was a new tech start-up launching themselves and their product for the very first time. Nimble Storage is a new company consisting of a number of high-pedigree employees with a proven track record at companies such as NetApp and DataDomain. This is further backed up by an experienced board of directors, top venture capital investment and, last but not least, a pretty good product at a good price point.

Without going into too much detail, Nimble Storage have produced a new array that probably reshapes the way people think about primary and backup storage, as well as the use of flash storage within an array. Right at the outset they stated that their aim was to introduce flash storage to the mid-size enterprise while also utilising a lot of the features being pioneered by other vendors. Nimble's approach is different in that it provides a converged appliance, one that does primary and secondary storage within the same device while also introducing flash caching to provide high performance. Through the use of inline compression, flash cache, sequential write-down to disk, efficient snapshots and replication, as well as zero-space cloning, Nimble is packing a lot into their product. At the top end you are paying a list price of $99,000 + $6,000 annual maintenance. For this you are looking at 18TB of primary storage (not including flash cache) plus 15,000 IOPS from a SATA/flash mix. They were also talking about 216TB of backup capacity within that same device, driven primarily by their use of space-efficient snapshots. I have a lot of notes on this particular presentation and will be expanding upon it in the coming weeks.


Now, F5 was a company I was really interested to see, primarily because I wasn't entirely sure what they offered. Sure, I knew they were into networking, but even then, what did they do in the networking space? I had no idea. We were treated to 4 different presentations that covered the following:

  • WAN-optimised geographical vMotion
  • Coding of iRules and iControl for the BIG-IP appliances
  • Intelligent client VPN connectivity via BIG-IP's Edge Gateway module
  • Data management and routing using F5's ARX appliance, i.e. file system virtualisation

 

All were very impressive, and I will definitely be looking to dig a little deeper and examine in full some of the technology presented and discussed. I was particularly impressed with F5's vision for data management/file-level virtualisation, as they seem to be one of the only companies in this space that I am aware of. This vision was demonstrated to us as a mix of on-site primary tier 1 storage and off-site cloud storage. The ARX appliance sits as a director presenting a unified view of the storage to the end user, while internally keeping a routing table of up to a billion files. This allows IT departments to place files across multiple types of storage, whether that be differing internal storage devices or storage in the cloud. The concept sits well with the current cloud strategies being developed by most major IT companies; what's surprising is that nobody else is doing it. There is a lot more to be said about F5, and I plan to delve a little deeper and write some more.

Summary

It’s been a very busy day,  one however that has been exceptionally rewarding. Tech Field Day has been everything I expected it to be so far,  there has been a wealth of information shared and a lot of feedback given. The biggest win for me though is getting the time to learn more about vendors and their product offerings, that and hearing the comments of my fellow delegates.  There is a good mix of intelligent people from varied backgrounds and that has only added to the experience so far.

We ended the night with a tour of the Boeing Museum of Flight and a couple of drinks with dinner. It's now midnight, and with just 6 hours' sleep last night and a busy schedule ahead tomorrow, I am going to call it a night there.

Note: Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend; however, all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.


Events, General, Gestalt-IT, Tech Field Day

Windows Virtual Desktop Access Licensing – What is it?

June 24th, 2010

I try to avoid licensing at all costs; it's a horrible subject and one that strikes fear into many. When you add virtualisation into the mix it tends to get a little more complicated, and you often find that the rules change on a reasonably regular basis. I was involved in a discussion today about Citrix XenDesktop, and an interesting point came up when discussing licensing virtual PCs. Someone mentioned something called the Microsoft VDA; I hadn't a clue what they were talking about, so I did a little digging around to find out more.

In summary, this is what I found, and it's not pretty reading. As of the 1st of July 2010, Microsoft is changing the way it licenses the Windows OS in VDI environments. The following changes will take place:

Windows® Virtual Enterprise Centralized Desktop (Windows VECD) and Windows VECD for Software Assurance (SA) will no longer appear on the price list.

Virtual desktop access rights will become a Windows Client Software Assurance benefit. Customers who intend on using PCs covered under SA will now be able to access their Virtual Desktop Infrastructure (VDI) desktops at no additional charge.

Customers who want to use devices such as thin clients that do not qualify for Windows Client SA would need to license those devices with a new license called Windows Virtual Desktop Access (Windows VDA) to be able to access a Windows VDI desktop. Windows VDA is also applicable to third party devices, such as contractor or employee-owned PCs.

What does it all mean?

In its simplest terms, you don't license the Windows virtual machine itself; you instead license the endpoint it's being accessed from. To break this down further, there are two distinct endpoint categories to consider.

1. The endpoint is running a Windows OS covered by Software Assurance (SA)

2. The endpoint is a non-Windows device, or is a Windows device without SA

In the first category you are covered to access a Windows virtual machine, as Virtual Desktop Access (VDA) is included as a Software Assurance benefit. In the second category, however, you need to purchase a VDA subscription for each endpoint device. Unfortunately this is not a one-off purchase either; it is a $100 per year, per device subscription cost.

As an example, say you have a salesperson who uses a company laptop and a company smartphone to access their VDI virtual machine. You would need the laptop installed with a Software Assured copy of Windows and a VDA subscription for the smartphone. Alternatively, if you have a non-SA copy of Windows on the laptop, you need 2 VDA subscription licences to cover both devices. This latter example would obviously be the same if the laptop was Mac OS or Linux based.
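
To make the counting concrete, here is a small, purely illustrative PowerShell sketch of that example. The device list is hypothetical, the $100 per device per year figure comes from the pricing above, and none of this should be taken as licensing advice.

# Illustrative only - not licensing advice. Assumption: an endpoint needs a
# Windows VDA subscription unless it is a Windows device covered by SA.
$vdaPerDevicePerYear = 100

$endpoints = @(
    @{ Name = 'Company laptop (Windows with SA)'; WindowsWithSA = $true  },
    @{ Name = 'Company laptop (non-SA Windows)';  WindowsWithSA = $false },
    @{ Name = 'Company smartphone';               WindowsWithSA = $false }
)

# Every endpoint not covered by Windows Client SA needs its own VDA subscription
$needingVda = @($endpoints | Where-Object { -not $_.WindowsWithSA })
$needingVda | ForEach-Object { "VDA subscription required: $($_.Name)" }

$annualCost = $needingVda.Count * $vdaPerDevicePerYear
"Total: $($needingVda.Count) VDA subscriptions, `$$annualCost per year"

Swap the SA flags around and the count changes accordingly, which is really all the new model boils down to at the endpoint level.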

There is some good news, though, in that Microsoft have something called extended roaming rights with the Windows VDA licence. In short, the primary user of a VDA-licensed device can access their VDI desktop from any device that is not owned by the user's company. Examples would be a user's home PC, an airport kiosk or a hotel business centre.

There is a lot to take in with licensing, especially in the VDI space. I suggest everyone running or planning to deploy VDI takes a look at the recent changes and considers how they affect existing or planned deployments. Some people will see this as Microsoft stifling the growth of Virtual Desktop Infrastructure; others will argue that it may actually act as an enabler. In truth I'm just not sure. I'm still digesting what it all means and playing through the various scenarios and combinations of VDI access. On the surface I can see it hindering, as opposed to helping, this growing virtualisation sector.

For additional information I'd recommend checking out the following Microsoft FAQ article, and for those of you who are Gartner customers, the linked article below breaks it down quite nicely into simple terms.

Microsoft VDI suites & Windows VDA Frequently Asked Questions PDF

Gartner – Q&A for understanding Microsoft Licensing Requirements before deploying HVDs


General, Gestalt-IT, Microsoft

VMware Snapshot Alerting and Reporting

June 21st, 2010

I spotted an issue in my vSphere infrastructure this weekend just past. I noticed that one of the main development boxes was showing the dreaded question: redo log out of space, retry or abort?


As it turned out, VMware Data Recovery Manager had taken a snapshot as part of its backup routine and had failed when trying to remove it. This, coupled with a scheduled SQL maintenance plan, caused the delta files for the snapshot to grow to over 250GB in little over 12 hours.

I eventually overcame the issue by adding an extent to the out-of-space VMFS datastore; this gave me an extra 160GB with which to play the logs back in. I then used the very handy SnapVMX utility to tell me how much space was required to replay the delta files. Luckily for me it only required 20GB, as sometimes it can require as much as the size of the original disk. After the snapshot was merged I did a bit of Storage vMotion and reworked the datastore to get rid of the extent (I'm not a fan of using them).

This particular incident was unfortunately unavoidable: it happened at a weekend, was due to VMDR's failure to remove a snapshot it had created, and clashed with a disk-intensive operation. It did get me thinking, though. Although I am careful with snapshots and their usage, who else in the organisation is not? How do we mitigate this potential risk?

Snapshots are a handy feature; I generally only use them for short periods of time, usually to provide a rollback when patching or changing configurations. Misuse or mismanagement of snapshots can quite quickly lead to problems, something that a recent blog article from VMware Support deals with quite effectively. Entitled ESX Anatomy 101, it's a must-read for anyone trying to gain a good basic understanding of how VMware snapshots work.

I myself have taken a two-pronged approach to preventing snapshots from causing problems. The first is to schedule a very basic PowerShell script that I found on the blog site of Axiom Dynamics. This simple little script queries your vCenter server for all current snapshots and then sends an email detailing them. A simple but effective means of keeping an eye on snapshots across the virtual infrastructure.
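
For anyone who can't track down that script, the PowerCLI sketch below does roughly the same job. It is my own rough equivalent rather than the Axiom Dynamics original, and the vCenter name, SMTP server and email addresses are placeholders you would need to change.

# A rough equivalent of the scheduled snapshot report described above - not the
# original Axiom Dynamics script. Requires VMware PowerCLI; server names and
# addresses below are placeholders.
Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
Connect-VIServer -Server 'vcenter.example.local' | Out-Null

# Gather every snapshot in the environment with its age and size
$snapshots = @(Get-VM | Get-Snapshot |
    Select-Object VM, Name, Created,
        @{Name = 'SizeGB'; Expression = { [math]::Round($_.SizeMB / 1024, 2) }})

if ($snapshots.Count -gt 0) {
    $body = $snapshots | Format-Table -AutoSize | Out-String
    Send-MailMessage -SmtpServer 'smtp.example.local' `
        -From 'vcenter@example.local' -To 'vi-admins@example.local' `
        -Subject "vCenter snapshot report - $($snapshots.Count) snapshot(s) found" `
        -Body $body
}

Disconnect-VIServer -Confirm:$false

Scheduled daily through Task Scheduler, that gives you the same simple visibility described above.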

The second, more proactive, approach is to use a vCenter alarm at the datacentre level to alert when a VM is running from a snapshot. This alarm simply emails a warning when any snapshot is larger than 2GB. This handy video, taken from VMware Knowledge Base article 1018029, describes in detail how to set this up; the KB itself also provides step-by-step instructions.


Alternatives
 

There are a number of alternatives available for reporting on snapshots.

Alan Renouf's Snapshot Reminder – A PowerShell script that integrates with AD to send the creator of a snapshot a little reminder when the snapshot is over 2 weeks old.

Alan Renouf's vCheck Daily Report – Another PowerShell script that reports on a large number of areas within the virtual infrastructure, one of which is snapshots.

RVTools – A very handy .NET application by Rob de Veij that can be used to query your virtual infrastructure for just about everything. In the screenshot below you will notice the vSnapshot tab, which should help you identify those rogue snapshots.

The RVTools vSnapshot tab

In summary, everyone who works with snapshots should have an understanding of their usage and limitations. Obviously you can't always rely on people to do things right; we are only human after all. As a safeguard, ensure you have some level of reporting and alerting in place to help prevent those annoying and time-consuming out-of-space issues from occurring.


VI Toolkit / PowerShell, VMware