Archive

Archive for the ‘General’ Category

Containers, Containers, Containers…

December 18th, 2014

Perhaps the title of this blog post should actually read Docker, Docker, Docker… After all, when anyone mentions container technology these days the immediate image that comes to mind is that smiley Docker whale!

Docker Whale Long

However I would argue that it should just be containers, containers, containers… The reason I say that is that I feel the ecosystem around containers is significantly broader than just Docker and appears to be diversifying on a weekly basis. It would be silly, however, not to acknowledge that the current landscape is dominated by Docker, as it is the most commonly recognised implementation of containers today. In fact, as I go through this article you’ll find it ironic that most of the examples I use to explain containers are Docker related, but bear with me, I’ll get to the point eventually.

What are containers?

I’m conscious that not everyone reading this will be aware of what a container is… Where have you been for the last year! ;-) It’s worth setting out some of the basics and briefly comparing containers with a subject people do understand well: virtualisation. So how do they differ? I’ve tried to give a very quick overview below.

Docker vs Virtualisation_v3
In simple terms, containers are lighter weight and carry less memory and compute overhead than virtual machines, and they make it easy to support applications that consist of hundreds or thousands of small, isolated moving parts; think of the trend we’re seeing around microservices here. A properly containerised application is easy to scale and maintain, so it’s understandable why this would appeal to both developers and operations teams alike.
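To make that a little more concrete, here’s a rough sketch of what spinning up an isolated application looks like with Docker (assuming the Docker Engine is already installed; the image name and port numbers are just examples):

# pull a small pre-built web server image from the Docker Hub
docker pull nginx

# start it as an isolated container, mapping port 8080 on the host to port 80 in the container
docker run -d --name web -p 8080:80 nginx

# list running containers; start-up takes seconds, not minutes
docker ps

Because the container shares the host’s kernel there’s no guest OS to boot, which is where that lightweight feel comes from.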

Where did they come from? / Where are they going?

Containers themselves are not a new invention; in fact they have been around for years, consider AIX workload partitions and Solaris Containers alongside today’s use of Linux Container technology (LXC) and cgroups. However, it’s thanks to Docker that containers have become mainstream in the last year. The Docker Engine makes containers easy to consume through its simple abstraction, with the ingenious idea of pre-built, transportable containers in the form of multi-layered Docker images and a registry mechanism for sharing those images with the world via the Docker Hub.
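To illustrate that image and registry workflow, here’s a simplified sketch of the pull / change / push cycle (the “myaccount” repository name is purely illustrative):

# grab a pre-built image from the Docker Hub
docker pull ubuntu:14.04

# run a container from it and make a change
docker run --name demo ubuntu:14.04 bash -c "apt-get update && apt-get install -y curl"

# capture that change as a new image layer and share it via the Docker Hub
docker commit demo myaccount/ubuntu-curl
docker push myaccount/ubuntu-curl

Each image is built up of layers, so pulling or pushing only transfers the layers the other side doesn’t already have.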

There are of course other approaches to containers appearing, such as Canonical’s LXD and Flockport, both building on top of the native Linux LXC technology. Just last week we’ve also seen CoreOS cause a bit of a storm by deviating from the Docker path they’ve been treading for a while and setting out to create their own container technology called Rocket.

Container_Banner
Following their last round of VC funding, it appears that Docker are now moving up the stack and looking to create more of a platform. With the introduction of Docker Machine (remote management), Docker Swarm (clustering) and Docker Compose (multi-app configs), Docker has signalled its intent to consolidate its position as the container technology of choice, with smarts wrapped around it. In my opinion this is very similar to VMware’s move from just selling ESX to selling an SDDC platform with all the bells and whistles around automation, monitoring, SDN, SDS, etc.

It’s interesting, however, that CoreOS feel container technology still has some fundamental work required (improved security, tooling and openness) and through Rocket are looking to build on the work that Docker have already done. As CoreOS is a very popular choice for running Docker containers, I think it’s a great example of coopetition in action. I also read with interest that Pivotal have expressed a serious interest in collaborating with CoreOS on the next generation of open containers. Commoditisation of containers anyone!?

Additional Reading

For those that want to dig deeper and do some additional reading on container technology, I would suggest having a little look at the following links for the basics.

What is Docker? – https://www.docker.com/whatisdocker/

Intro to Docker Slide Deck – http://www.slideshare.net/dotCloud/intro-docker-october-2013

Understanding Docker – https://docs.docker.com/introduction/understanding-docker/

Flockport – LXC vs Docker – http://www.flockport.com/lxc-vs-docker/

Wider Ecosystem Momentum

One of the other reasons I think it’s containers, containers, containers and not just Docker is the sheer movement in the wider ecosystem over the last month alone.

I’ve mentioned the various flavours of containers springing up out there (CoreOS, FlockPort, LXD, etc). On top of that there have been a number of interesting announcements in the platform and orchestration space.

Google announced the alpha release of Google Container Engine, managed by Google’s open source Kubernetes orchestration engine and supporting Docker containers. Google are definitely one of the world’s biggest consumers of containers, spinning up a claimed 2 billion containers per week, and now they’re getting into the Container as a Service game. It’ll be interesting to see how they broaden support for other container providers like CoreOS, both within the platform and within Kubernetes as an orchestration tool. https://cloud.google.com/container-engine/

Google Container Engine Banner

Amazon are obviously keen not to miss out on the buzz around containers; at AWS re:Invent they launched their EC2 Container Service with support for Docker. Again, though, Docker is the only container technology mentioned on day one; in theory this service could potentially be expanded to interact with other container technologies. It will be interesting to see what Amazon do next in this space to broaden support and build out their CaaS offering – http://aws.amazon.com/blogs/aws/cloud-container-management/

Microsoft announced support for Linux containers on Azure back in June 2014 (20% of Azure workloads are now running Linux, can you believe that!). Microsoft have since announced plans to support Docker on top of Windows Server and to integrate the Docker Hub with the Azure Gallery service catalogue. So there are interesting moves afoot from all the big cloud players in the market – http://azure.microsoft.com/blog/2014/10/15/new-windows-server-containers-and-azure-support-for-docker/

DockerAndAzureEcosystem

 

All in all the ecosystem looks to be making healthy progress, and the big players definitely appear to be getting caught up in the wave of container mania, focused primarily around Docker. The key thing for me is the openness around orchestration engines like Kubernetes, which will ensure that these tools don’t just apply to Docker but will be capable of embracing new flavours of containers as they appear.

Who is using containers?

I’ve heard of lots of examples: Google, Netflix, Twitter, Hailo and many, many more. However, as I work for EMC and have an interest in Pivotal, I thought I would concentrate on those specific examples. This blog post started following a Docker meet-up in Edinburgh a while back, where I was asked if we at EMC were using Docker in production. My answer was…

“Well, yes, sort of; we are using various container technologies within products at EMC.”

The guy looked visibly shocked at this point. He told me he didn’t know anyone taking that kind of approach to using containers in production and had only ever seen containers used in the realms of application development and testing. In EMC’s case it’s about changing how we deliver software on commodity hardware and driving our Software Defined Storage strategy.

The above might actually come as a surprise to some; however, at EMC and Pivotal we’re actively adopting and using container technology to change both how we deliver data services and Platform as a Service (PaaS). In EMC we actively use Linux-based containers within the VNXe storage array and our Elastic Cloud Storage (ECS) product; my global boss Chad Sakac alluded to this some time back when the VNXe was launched.

Chad Tweet

VNXe / Project Liberty

The release of the VNXe was quite key for EMC and for the future of the VNX array architecture. This was the first time we had managed to consolidate the VNX file and block code bases on a single x86 platform. Previously the file part of a VNX ran on separate hardware (Data Movers with X-Blades) in front of the block part of a VNX (x86-based storage processors).

VNX_Architecture

Following “Project Liberty”, those code bases were re-architected onto a single Linux-based kernel. Now the VNXe runs both the file and block code on the storage processors (i.e. an x86 commodity server) without the need for dedicated X-Blades in front of the block part of the VNX. We’re still using the FLARE and DART code bases to provide the data services; however, in this instance they are run within CSX containers, which are part of EMC’s C4 code base used for VNXe arrays. The picture below gives some insight into that architecture, where you can see data path code being run in the CSX containers. This is a great example of successful use of containers to deliver simplicity: for EMC internally, around how we develop code and build products, and for EMC customers, around simpler consumption of our data services on VNX.

VNXe_Linux_Containers_v2

We’re now seeing this same concept being applied in the new VMAX 3, which boasts our HyperMax OS. Just last week we launched the latest version of the Enginuity code, and with that we can now run the VNX file code as Embedded NAS on the VMAX 3 instead of having to put a physical VNX gateway in front of it. This is a direct result of the work done to abstract the data services in the VNX, freeing them from the confines of dedicated X-Blade hardware. I’m not 100% sure what the secret sauce is behind this implementation in the HyperMax OS, but in one of his most recent blog posts Chad alluded to it being similar to what we do in VNXe; I’m reading that as C4 & CSX :-)

Credit for most of the above goes to Chad, who blogged about these things here (Project Liberty), here (C4 / CSX containers) and here (VMAX 3).

Elastic Cloud Storage

For those not familiar with EMC’s Elastic Cloud Storage (ECS), it’s a combination of commodity servers, Ethernet networking, disk shelves, disks and EMC ViPR advanced data services. The ViPR data services allow us to layer a mixed combination of block, object and HDFS on top of that same commodity hardware, and to do so at huge scale. These advanced data services of course need to run somewhere; within ECS this is on top of x86 Intel-based servers known within EMC as “Phoenix”. The number of actual servers / racks depends on your configuration; an example one-rack config with the Phoenix servers deployed can be seen below.

ECS_COMPLETED

ViPR itself is a distributed scale-out platform, and ECS utilises ViPR on Docker at the core of the product. The components of ViPR, such as the ViPR Controller, ViPR object storage engine and ViPR block services, are basically individual scale-out applications, so Docker makes a lot of sense as a means of deploying them within the ECS appliance.

In the example below we’re looking at ScaleIO, which is used to deliver the block data services of the ECS appliance. Running a docker ps -a command returns the containers currently running on the Docker Engine, which runs on a single Linux host on the Phoenix hardware. As you can see, there are components of ScaleIO as well as management components such as ZooKeeper, the ECS fabric and the ECS registry.

ScaleIO_Containers
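For anyone who wants to poke around a Docker host in the same way, the commands are nothing exotic (generic Docker commands, nothing ECS-specific here):

# list all containers on the host, running or stopped
docker ps -a

# drill into the configuration and state of a particular container
docker inspect <container-id-or-name>

# follow what a container is writing to stdout
docker logs -f <container-id-or-name>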

From a higher-level architecture perspective, what does this look like? Well, below you can see we have 3 ECS appliances operating in a fabric, with each Linux server running multiple containers and therefore multiple applications. So again, a great use of containers to build out a scale-out architecture which can deliver exabytes of storage.

Credit for this info & pics goes to my EMC colleague Michael Wehle – ECS Appliance Blog Post

ECS_Fabric 

Pivotal use of Containers

So I’d best not forget the guys over at Pivotal, who are also using containers within the Pivotal Cloud Foundry PaaS platform. Today, in Pivotal CF 1.3.0.0, containers are used within what are known as Droplet Execution Agents, or DEAs for short; this is where application instances run within Pivotal CF. These DEAs are based on an Ubuntu Linux distribution and run what are known as Warden containers. The container technology is based on Linux containers, utilising namespaces and cgroups for isolation. Warden itself is a service for managing these containers and determining how clients interact with them, e.g. routing network traffic to an app within a container.
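To give a feel for the Linux primitives that Warden (and LXC and Docker, for that matter) builds on, here’s a rough, hand-rolled illustration of namespaces and cgroups. This is not how Warden itself is implemented, just the kernel features underneath it:

# give a shell its own PID and mount namespaces (requires root)
sudo unshare --pid --fork --mount-proc /bin/bash
# inside that shell, 'ps' only sees processes in the new namespace

# create a memory cgroup, cap it at 256MB and place the current shell in it
sudo mkdir /sys/fs/cgroup/memory/demo
echo 256M | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/tasks

Warden layers a management API on top of this kind of isolation, which is what the DEA uses to carve a machine up into application containers.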

Warden is currently being rewritten in Go and will go by the name Garden; this is being done as part of Project Diego. Project Diego is a rewrite of the Cloud Foundry elastic runtime, designed to make changes to the runtime easier to develop while also allowing it to be more platform agnostic, i.e. supporting execution endpoints such as Docker, Linux containers, Windows and whatever else is just round the corner.

For those interested in the current implementation of Warden containers and how Pivotal are changing this to be more platform agnostic, I recommend watching the following video. It explains the vision behind rewriting the CF runtime and how it will support that wider ecosystem of endpoints, including containers. Pretty cool stuff if you ask me!

Summary

If I very quickly summarise my thoughts after writing this article (which has evolved over a number of weeks), they would be as follows:

  • Containers are becoming increasingly popular for running 12-factor apps.
  • There is a healthy container ecosystem with good buy-in from the big tech firms.
  • There is a great collaborative open source community bought into containers.
  • The service-oriented / microservice approach plays well with container technology.
  • Tech companies are already finding novel uses for containers, e.g. EMC and Pivotal.
  • Netflix, Twitter and Google are using containers effectively, at scale and in production!

 

I also saw the following tweet recently, which resonated quite well with me, as ultimately it’s about choice and mixing and matching to meet your needs. Containers on VMs seems a logical implementation to me and is in fact how I’ve always viewed deployment of Pivotal CF containers: Warden containers on DEA virtual machines.

Summary_Tweet

So what next for containers? I’m really interested to see how the different flavours of containers evolve; the CoreOS Rocket announcement I mentioned earlier caused some big waves in the industry but makes sense to me. After that, how the various toolsets evolve to account for these new container technologies, and not just Docker, and how they’re adopted into CaaS offerings will drive container adoption. The fact that widespread production use of containers still appears to be in the propeller-head realm (think serious DevOps shops plus the likes of Netflix) means that how ISVs embrace and utilise containers, as they have virtualisation, will also be very important to widespread adoption.

To give an EMC software example, I attended a meeting last week on EMC’s Captiva, XCP and Documentum products; these are products related to image capture, business workflow and records management. They are made up of multiple traditional application components all integrating together to provide different capabilities. After writing this article, reading a lot around loosely coupled service architectures and looking at what Netflix have been releasing to the open source community, all I could think was the following:

        • Re-develop these components as many smaller microservices (i.e. 12-factor apps)
        • Run them in containers
        • Run them within Pivotal CF
        • Develop and make changes quickly
        • Scale up and down quickly and on demand
        • Subscribe to external services that aid the apps, i.e. messaging and data services
        • RabbitMQ, GemFire, Hadoop, Object Stores (ECS) to name but a few

 

Sounds simple, doesn’t it? It’ll be interesting to see how software evolves to embrace these concepts, within ISVs and EMC alike!

EMC, General

EMC World vLabs Sessions – Virtual and in the Cloud

April 19th, 2011

vLabs_Logo

I got an email last night from my very, very busy EMC colleague Simon Seagrave. He’s been working hard with the rest of the EMC tech enablement team to prepare the vSpecialist vLab sessions for EMC World in Las Vegas.

Let me just say, they have worked wonders and have a superb floor show prepared for all of you who will be attending from May 9th onwards. They have created a 200-seat hands-on lab covering the EMC products shown below; something for everyone, I’m sure you’d agree.

vLabs_Sessions

vLab_Pic

All labs, with the exception of the VMAX lab, have been virtualised and are hosted in EMC’s cloud; that’s us eating our own dog food, as Chad would put it.

To attend one of the vLabs, simply register at the console just outside the vLab room. Sign up for a lab and, when it’s your turn, your name will flash up on the big screen and a vSpecialist will take you to your seat. It’s a nice simple process, and you may well find that it’ll be me escorting you to your seat. C’mon people, get involved!

Events, General

New beginnings–Customer buy side to Vendor sell side

March 11th, 2011

Some may have picked up on this recently and others may not have even noticed, but at the start of this week I started working for EMC. As of last Monday I am a technical vSpecialist in Chad’s army, covering the UK north region as part of the wider EMEA team of vSpecialists.

So let me start off by saying this is a huge move for me, one I am so excited about, and a fantastic opportunity that I fully intend to grab with both hands. I was previously working on the customer side of the fence, and although I enjoyed it immensely and it taught me a lot, the lack of bleeding / cutting edge exposure often left me a little underwhelmed. To put that in context, I worked in the investment industry, where risk awareness and being risk averse are part and parcel of the job. As a result, bleeding / cutting edge technologies are eyed with suspicion and only implemented when fully proven or when half the industry has implemented them. It always made sense to me from my previous employer’s perspective, but it wasn’t fulfilling my desire to work with the latest and greatest kit.

Cloud_Journey

This is where EMC, and more specifically the vSpecialist team, comes in! I now find myself embarking on my own personal journey, moving from the buy side to the sell side, which I’m sure will be an eye opener. I’m looking forward to working hard to learn all about EMC and its plethora of solutions, while also working to support and promote the “journey to the cloud” for EMC customers both new and existing.

What does this mean for the VirtualPro blog? Well, of course I fully intend to keep it going, and I hope to have more to write about in the future now that my focus is primarily technology based. Although I will no longer be viewed as independent, I am a technology lover first and foremost. If it’s worth writing about I will write about it; after all, this blog is my own and has been for a number of years, it is not something EMC asks me to do. I look forward to sharing some new and exciting content with you as my time in the industry progresses.

General

A year of blogging–Top 10 posts of 2010

December 30th, 2010

So 2010 is almost over, and now that I have a little spare festive holiday time I thought it would be a good chance to reflect on my year of blogging. I have to say I was also inspired by Eric Gray over at vCritical, who had written a top 10 posts article; I thought that might be quite interesting to add in as well.

It’s been a great year for me. I maybe haven’t blogged as much as I would have liked to due to work commitments and the small matter of a month-long trip to Alaska to indulge in my love of snowboarding. However, I have had some tremendous experiences along the way this year while blogging when I could.

Highlights for me included my invite to the GestaltIT Tech Field Day in Seattle, where I got to meet a number of other bloggers from across the world and was given the chance to see and comment on some very interesting existing and new technologies.

My trip to VMworld 2010 in San Francisco was another great chance to finally meet up with other bloggers. The tweet-up was a superb chance to meet the likes of John Troyer, Simon Seagrave, Eric Gray and many others. VMworld itself was once again a superb educational event; I took part in two great group sessions, one with Chad Sakac and one with Scott Drummonds. It was a great new format for 2010 and one that I will definitely be revisiting if I get the chance to attend again.

On the back of my VMworld trip I was also invited to talk at the Scotland VMUG, quite a daunting experience if I’m honest.  However it turned out to be something I really enjoyed and would be happy to do again.  I even enjoyed the difficult questions thrown at me by Mike Laverick, all of them great conversation starters.

Top 10 posts – 2010

1. vSphere ESX4 on a USB key / Pen Drive (6,524 views)
2. Virtualisation Visio Stencils – Microsoft, VMware, Citrix (4,707 views)
3. EMC Navisphere Simulator Download (4,689 views)
4. VMware Remote Console – VI3, VMware Server 2 (3,702 views)
5. Virtualisation Visio stencils (3,395 views)
6. How to run Citrix XenServer 5.5 on VMware vSphere (3,379 views)
7. vSphere 4.0 – What’s new in vSphere Storage (1,854 views)
8. Where to start with your VMware ESX Whitebox (1,720 views)
9. VMware Visio Pack available – VIOPS (1,672 views)
10. vSphere vMotion Processor Compatibility and EVC Clusters (1,647 views)

It’s interesting that people prefer the practical how-to or information-linking posts as opposed to the review or opinion pieces. I will keep that in mind for my 2011 blog postings.

A massive thank you to everyone who has been a visitor to this site over 2010. All that remains to be said is have a great New Year and all the best for 2011.

General

Symantec Application HA for VMware – VMworld 2010

August 16th, 2010

I was lucky enough last week to be involved in a Gestalt IT conference call with Symantec.  The conference call was designed to give us all a sneak preview of what Symantec were planning to announce at VMworld 2010 in a couple of weeks.  Unfortunately it was under embargo, that is until today!

There were a couple of announcements being made: Symantec introduced a new NFS storage product called VirtualStore and made some further announcements about NetBackup 7 and new VMware-specific features. However, the most interesting announcement on the call for me was the release of Symantec Application HA for VMware.

Symantec_Virt_Adopt

Symantec have been looking at why customers are not going “the last mile” with virtualisation. Why are customers not deploying their Tier 1 applications on their virtual platforms? Symantec’s view was that customers still have issues with application-level failure within guest VMs. This product has been designed to fill that void and at present has no real competitors.

As the call progressed, the current HA options were described by Symantec and discussed by the group. The obvious one is VMware HA, which covers a physical host failure event. Within the VMware HA product there is also VM monitoring, which covers you in the event of an OS-level failure such as a blue screen. Then you can of course employ other technologies such as OS-level clustering; however, you then have to take heed of caveats that hinder the ability to use features such as vMotion and DRS.

I’m always sceptical when I see new virtualisation products; one of my fears is that companies are just attempting to jump on the crest of the wave that is virtualisation. Symantec are obviously a bit more established than your average company, but as always the jury is out until we see a final product doing the business for real. It transpired during the call that the product is actually based on Symantec Veritas Cluster Server, a product with a long history in application availability.

Veritas Cluster Server has a lot of built-in trigger scenarios for common products such as Microsoft SQL Server, Exchange Server and IIS. On top of this out-of-the-box support, Symantec also have a VCS development kit allowing custom scenarios to be written. I like this approach; it reminds me of F5 Networks’ use of the customer community to support the writing of custom rules and features for their product range. If a custom rule or feature has enough demand, they spend the time developing it into their products. Perhaps Symantec could look at leveraging their customer base and community in this way to improve the support around VCS trigger scenarios. One other potential use of the VCS SDK that springs to mind is for application vendors making specialist software: CRM, ERP, finance systems, etc. They could look to build Application HA into pre-configured virtual appliances; that would be a great selling point for any software vendor.

The deployment of the product itself takes the form of a guest deployment / agent. Technical deep-dive information on the exact integration between the Symantec product and VMware was thin on the ground. However, there was mention of Symantec’s integration with the VMware HA API, something that I don’t think has been announced by VMware just yet. The description given to us during the call was that if Symantec Application HA failed to restart the application, it could send a downstream API call to VMware HA and ask it to restart the VM’s operating system. An interesting concept, and something I am sure we’ll hear more about at VMworld.

Licensing for this new product is quite competitive at $350 per virtual machine, a small price to pay for ensuring your Tier 1 application recovery is automated. Symantec have promised full integration with vCenter Server, and the screenshot below shows Symantec Application HA in action monitoring a SQL 2008 server; click on the thumbnail to see a full-size image.

If you would like to learn more about Application HA, then get along to VMware and Symantec’s breakout session at VMworld – http://www.vmworld.com/docs/DOC-4658

Alternatively you can listen to a Podcast from Symantec’s Niraj Zaveri discussing the new product.  – http://www.symantec.com/podcasts/detail.jsp?podid=ent_application_ha

General, Gestalt-IT, New Products, VMware

Win a free trip to VMworld 2010

August 10th, 2010

VMworld 2010 in San Francisco is fast approaching and promises to be a fantastic event. I have two questions for you.

        – If you are not attending already would you like to?

        – Would you like someone else to pay for you to go?

I think the obvious answer to both those questions is a resounding YES! So how do you take advantage of this very, very special offer? Well, over at Gestalt IT my friend Stephen Foskett has arranged a spectacular competition, supported exclusively by the four vendors shown below.

                                  

How To Enter

Get yourself over to the VMworld contest extension page on the Gestalt IT web site, and get over there quickly. You will need to fill out the entry form by close of play on Thursday the 12th of August; most importantly, you need to tell us how you are going to “pay it forward”. What does this mean? Well, the following extract explains what we expect from competition entrants.

There’s always a catch, right? Inspired by winner Greg Stuart‘s desire to contribute to the community, we’re not just going to pick a winner at random. We’re going to pick the person who presents the best case for themselves.

Entrants must explain how they plan to “pay it forward” if they get to go to VMworld. Will you start a blog? Write some tutorials? Contribute to a forum or online community? Present to your local VMUG? Get creative and spread the wealth of knowledge you get from the event!

Our panel of judges is made up of none other than the most-excellent roster of past Tech Field Day delegates! They’ve proven themselves to be independent-minded and knowledgeable, and we’re sure that they will pick the best entries!

It’s pretty simple, and at present there are not quite as many entries as you’d think. Get involved people; it’s not going to take you long and could end in a nice trip to San Francisco for one of the highlights of the IT calendar. I look forward to reading the entries and helping decide the winner!

Events, General, VMware

HP Server DDR3 Memory Configuration Tool

July 26th, 2010

I’ve recently been lucky enough to be looking at purchasing a few additional ESX servers. I have spent a good few years now working with HP servers, so I have been looking at the HP ProLiant DL380 G7 model for my particular needs.

I tend to spend a bit of time ensuring that the servers are specified correctly using the HP server quick specs. It can take a bit of time, but I want to make sure that I am getting the right configuration for my virtualisation solution. Follow the link to see the current quick specs for the HP ProLiant DL380 G7 server.

image

While reviewing the quick specs for the DL380 G7 I noticed the following section, which I’d not seen before when buying other HP ProLiant servers.

NOTE: Depending on the memory configuration and processor model, the memory speed may run at 1333MHz, 1066MHz, or 800MHz. Please see the Online Memory Configuration Tool at: www.hp.com/go/ddr3memory-configurator.

Now I usually buy memory as follows: 4GB, 8GB, 16GB, 32GB, 64GB, 128GB; most people probably do, I imagine. However, it turns out that the new Nehalem (Xeon 5500) and Westmere (Xeon 5600) processors have some specific rules that need to be abided by to ensure optimal performance. This is where this particular HP tool comes in handy, whether you are configuring a new server or attempting to upgrade a current one.
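To put some very rough numbers on it (illustrative only, based on the Xeon 5500/5600 having three memory channels per socket; always check the tool against your exact CPU and DIMM choices):

3 channels per socket x 2 sockets = 6 memory channels
6 x 4GB (one DIMM per channel) = 24GB, balanced
6 x 4GB + 6 x 2GB (two DIMMs per channel) = 36GB, balanced
32GB (e.g. 8 x 4GB) cannot be spread evenly across 6 channels

An unbalanced population like that last one is typically what forces the memory to clock down from 1333MHz.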

image

 image 
As it turned out, 32GB is not an optimal configuration, something that certain resellers never picked up on. I needed either 24GB or 36GB to ensure my server memory runs at 1333MHz and performs at its best.

image

If you are looking at buying a new HP server, or simply looking to add memory to an existing server I strongly suggest taking a look at this tool to ensure you get it right.

http://h18000.www1.hp.com/products/servers/options/tool/hp_memtool.html

General, Servers

Gestalt IT Seattle Tech Field Day – Day 1 Summary

July 15th, 2010

So that is day 1 of the Seattle Tech Field Day out of the way, and what a day it has been. We’ve been out to Microsoft’s Redmond HQ, or “the temple” as John Obeto calls it. We saw some new products from Veeam and were privileged enough to be the first port of call for a new and very exciting storage start-up, Nimble Storage.

There has been a lot of information flowing about today, an awful lot. My plan is to spend some time assimilating all the information and doing more detailed posts on everyone we’ve seen, so for now I think a summary will suffice.

Veeam

Veeam are a company that needs very little introduction. They’ve not been around long (3 years to be exact) but they are a well-known and well-respected brand in the virtualisation space. Today Veeam were announcing a new product / concept that they have at the development stage, one that got delegates quite excited.

Veeam were introducing vPower, a new offering made up of 3 products: SureBackup, Instant Restore and CDP (a much debated point). What stood out most for Tech Field Day delegates was some of the Instant Restore functionality; the ability to run your VM directly from the backup image was well received. My personal thought at the time was: who wouldn’t want a mechanism available to test that your backups actually work? The added bonus is that Veeam also provide network isolation and an almost Lab Manager-like ability to create groups of machines that should be recovered together. The idea of verifying your backups by running them from the backup storage was one thing; Veeam had, however, written their own NFS server in order to do this. This means that technically, in the event of an outage, you can run your machine directly from the Veeam backup server’s NFS datastore. It isn’t going to be fast, but it’s running, which is the main thing you should be concerned about. It was all good stuff, and the general consensus was that it was a step in the right direction and quite a shift in the VM backup space.

Nimble Storage

Our surprise for the day was a new tech start-up launching themselves and their product for the very first time. Nimble Storage is a new company consisting of a number of high-pedigree employees with a proven track record at companies such as NetApp and Data Domain. This is further backed up by an experienced board of directors, top venture capital investment and, last but not least, a pretty good product at a good price point.

Without going into too much detail, Nimble Storage have produced a new array that probably reshapes the way people think about primary and backup storage, as well as the use of flash storage within an array. Right at the outset they stated that their aim was to introduce flash storage to the mid-size enterprise while also utilising a lot of the features being pioneered by other vendors. Nimble’s approach is different in that it provides a converged appliance, one that does primary and secondary storage within the same device while also introducing flash caching to provide high performance. Through the use of inline compression, flash cache, sequential writes down to disk, efficient snapshots and replication, as well as zero-space cloning, Nimble are packing a lot into their product. At the top end you are paying a list price of $99,000 + $6,000 annual maintenance. For this you are looking at 18TB of primary storage (not including flash cache) plus 15,000 IOPS from a SATA / flash mix. They were also quoting 216TB of backup capacity within that same device, driven primarily by their use of space-efficient snapshots. I have a lot of notes on this particular presentation and will be expanding upon it in the coming weeks.

F5 Networks

Now, F5 was a company I was really interested to see, primarily because I wasn’t entirely sure what they offered. Sure, I knew they were into networking, but even then, what did they do in the networking space? I had no idea. We were treated to 4 different presentations that covered the following.

  • WAN-optimised geographical vMotion
  • Coding of iRules and iControl for the BIG-IP appliances
  • Intelligent client VPN connectivity via BIG-IP’s Edge Gateway module
  • Data management and routing using F5’s ARX appliance, i.e. file system virtualisation

 

All were very impressive and I will definitely be looking to dig a little deeper and examine in full some of the technology presented and discussed. I was particularly impressed with F5’s vision for data management / file-level virtualisation, as they seem to be one of the only companies in this space that I am aware of. This vision was demonstrated to us as a mix of on-site primary Tier 1 storage and off-site cloud storage. The ARX appliance would sit as a director presenting a unified view of the storage to the end user, while internally keeping a routing table of up to a billion files. This allows IT departments to place files across multiple types of storage, whether that be differing internal storage devices or storage in the cloud. The concept sits well with the current cloud strategies being developed by most major IT companies; what’s surprising is that nobody else is doing it. There is a lot more to be said about F5, and I plan to delve a little deeper and write some more.

Summary

It’s been a very busy day, but one that has been exceptionally rewarding. Tech Field Day has been everything I expected it to be so far; there has been a wealth of information shared and a lot of feedback given. The biggest win for me, though, is getting the time to learn more about vendors and their product offerings, that and hearing the comments of my fellow delegates. There is a good mix of intelligent people from varied backgrounds, and that has only added to the experience so far.

We ended the night with a tour of the Boeing Museum of Flight and a couple of drinks with dinner. It’s now midnight, and after just 6 hours’ sleep last night and with a busy schedule ahead tomorrow, I am going to call it a night there.

Note: Tech Field Day is a sponsored event. I receive no direct compensation and take personal leave to attend; however, all event expenses are paid by the sponsors via Gestalt IT Media LLC. The views and content expressed here are my own and are in no way influenced by the sponsors of this event.

Events, General, Gestalt-IT, Tech Field Day

Windows Virtual Desktop Access Licensing – What is it?

June 24th, 2010

I try to avoid licensing at all costs; it’s a horrible subject and one that strikes fear into many. When you add virtualisation into the mix it tends to get a little more complicated, and you often find that the rules change on a reasonably regular basis. I was involved in a discussion today about Citrix XenDesktop, and an interesting point came up when discussing licensing virtual PCs. Someone mentioned something called the Microsoft VDA; I hadn’t a clue what they were talking about, so I did a little digging around to find out more.

In summary, this is what I found, and it’s not pretty reading. As of the 1st of July 2010, Microsoft is changing the way it licenses the Windows OS in VDI environments. The following changes will take place:

Windows® Virtual Enterprise Centralized Desktop (Windows VECD) and Windows VECD for Software Assurance (SA) will no longer appear on the price list.

Virtual desktop access rights will become a Windows Client Software Assurance benefit. Customers who intend on using PCs covered under SA will now be able to access their Virtual Desktop Infrastructure (VDI) desktops at no additional charge.

Customers who want to use devices such as thin clients that do not qualify for Windows Client SA would need to license those devices with a new license called Windows Virtual Desktop Access (Windows VDA) to be able to access a Windows VDI desktop. Windows VDA is also applicable to third party devices, such as contractor or employee-owned PCs.

What does it all mean?

In its simplest terms, you don’t license the Windows virtual machine itself; you instead license the endpoint it’s being accessed from. To further break this down, there are two distinct endpoint categories to consider.

1. The endpoint is a Windows OS covered by Software Assurance (SA)

2. The endpoint is a non-Windows device, or a Windows device without SA

In the first category you are covered to access a Windows virtual machine, as Virtual Desktop Access (VDA) is included as a Software Assurance benefit. In the second category, however, you need to purchase a VDA subscription for each endpoint device. Unfortunately this is not a one-off purchase either; it is a $100 per year, per device subscription cost.

As an example, say you have a sales person who uses a company laptop and a company smart phone to access their VDI virtual machine. You would need the laptop to be installed with a Software Assured copy of Windows and buy a VDA subscription for the smart phone. Alternatively, if you have a non-SA copy of Windows on the laptop, you need 2 VDA subscription licences to cover both devices. This latter example would obviously be the same if the laptop were Mac OS or Linux based.
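Putting rough numbers on that example at the $100 per device, per year list price (illustrative only, and ignoring the cost of the SA itself):

Laptop with an SA copy of Windows + smart phone = 1 x VDA = $100 per year
Laptop without SA (or Mac OS / Linux) + smart phone = 2 x VDA = $200 per year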

There is some good news, though, in that Microsoft have something called extended roaming rights with the Windows VDA licence. In short, the primary user of a VDA-licensed device can access their VDI desktop from any device that is not owned by the user’s company. Examples would be a user’s home PC, an airport kiosk or a hotel business centre.

There is a lot to take in with licensing, especially in the VDI space. I suggest everyone running or planning to deploy VDI takes a look at the recent changes and considers how they affect existing or planned deployments. Some people will see this as Microsoft stifling the growth of Virtual Desktop Infrastructure; others will argue that it may actually act as an enabler. In truth I’m just not sure. I’m still digesting what it all means and playing through the various scenarios and combinations of VDI access. On the surface I can see it hindering as opposed to helping this growing virtualisation sector.

For additional information I’d recommend checking out the following Microsoft FAQ article and for those of you who are Gartner customers the linked article below breaks it down quite nicely into simple terms.

Microsoft VDI suites & Windows VDA Frequently Asked Questions PDF

Gartner – Q&A for understanding Microsoft Licensing Requirements before deploying HVDs

General, Gestalt-IT, Microsoft

Windows 2008 VMs losing default gateway

June 17th, 2010

I am currently in the process of deploying a number of new Windows 2008 virtual machines and came across an interesting little issue. Every time I reboot the server, the default gateway blanks itself, meaning I cannot connect to it via RDP as it is in a remote data centre. Thank goodness for vCenter console access; if this was a physical server it would be a real pain in the you know what.

So how do you fix it? Well, simply open a command prompt and type the following:

netsh int ip reset c:\resetlog.txt

This is basically a reset of the TCP/IP components, and as a result all IP information for your network connections will be wiped back to the default dynamic DHCP setting. So now you need to re-enter the static IP information for the server and restart it. It’s worthwhile doing a couple of reboots just so you can be sure that the problem has actually disappeared.
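If you’d rather re-enter the static details from that same command prompt, something along these lines does the job (the interface name and addresses below are just examples, substitute your own):

netsh interface ip set address "Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1

netsh interface ip set dns "Local Area Connection" static 192.168.1.10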

Further information can be found in Microsoft KB article 299357

General, Microsoft