Perhaps the title of this blog post should actually read Docker, Docker, Docker… After all, when anyone mentions container technology these days, the immediate image that comes to mind is that smiley Docker whale!
However I would argue that it should just be containers, containers, containers… The reason I say that is that the ecosystem around containers looks significantly broader than just Docker and appears to be diversifying on a weekly basis. It would be silly, however, not to acknowledge the fact that the current landscape is dominated by Docker, as it is the most commonly recognised implementation of containers today. In fact, as I go through this article you’ll find it ironic that most of the examples I use to explain containers are Docker related, but bear with me, I’ll get to the point eventually.
What are containers?
I’m conscious that not everyone reading this will be aware of what a container is… Where have you been for the last year?! 😉 It’s worth setting out some of the basics and briefly comparing containers with a subject people do understand well: virtualisation. So how do they differ? I’ve tried to give a very quick overview below.
In simple terms, containers are lighter in weight and have less memory and compute overhead than virtual machines. They make it easy to support applications that consist of hundreds or thousands of small, isolated moving parts; think of the trend we’re seeing around microservices here. A properly containerised application is easy to scale and maintain, so it’s understandable why this would appeal to both developers and operations teams alike.
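To make that lightweight claim concrete, here’s a minimal sketch using the Docker CLI (assuming Docker is installed, and using the tiny busybox image from the Docker Hub). A container like this typically starts in well under a second, because it shares the host kernel rather than booting a whole OS the way a VM does:

# Start a throwaway container, run one command, then remove it.
# No OS boot is involved; the container shares the host kernel.
docker run --rm busybox echo "hello from a container"

# Compare the footprint: the busybox image is only a few megabytes.
docker images busybox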
Where did they come from? / Where are they going?
Containers themselves are not a new invention; in fact they have been around for years, consider AIX workload partitions and Solaris Containers alongside today’s use of Linux Container technology (LXC) and cgroups. However, it’s thanks to Docker that containers have become mainstream in the last year. The Docker Engine makes containers easy to consume through its simple abstraction, coupled with the ingenious idea of pre-built, transportable containers in the form of multi-layered Docker images and a registry mechanism for sharing those images with the world via the Docker Hub.
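To illustrate that image and registry workflow, here’s a hedged sketch of the typical Docker CLI commands (the repository name “myuser/myapp” is hypothetical):

# Pull a pre-built, multi-layered image down from the Docker Hub registry
docker pull ubuntu:14.04

# Inspect the layers that make up the image
docker history ubuntu:14.04

# Share an image of your own with the world via the Docker Hub
# ("myuser/myapp" is a hypothetical repository name)
docker push myuser/myapp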
There are of course other approaches to containers appearing, such as Canonical’s LXD and Flockport, both building on top of the native Linux LXC technology. Just last week we’ve also seen CoreOS cause a bit of a storm by deviating from the Docker path they’ve been treading for a while and setting out to create their own container technology, called Rocket.
Following their last round of VC funding, it appears that Docker are now moving up the stack and looking to create more of a platform. With the introduction of Docker Machine (remote management), Docker Swarm (clustering) and Docker Compose (multi-app configs), Docker has signalled its intent to consolidate its position as the container technology of choice, with smarts wrapped around it. In my opinion this is very similar to VMware’s move from just selling ESX to selling an SDDC platform with all the bells and whistles around automation, monitoring, SDN, SDS, etc.
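As a flavour of that platform play, here’s a hedged sketch of a Compose-style multi-container config. Compose has only just been announced at the time of writing, so treat the exact format as indicative; the “myorg/web-app” image is hypothetical:

# docker-compose.yml (indicative only)
web:
  image: myorg/web-app   # hypothetical application image
  ports:
    - "80:5000"
  links:
    - redis              # wire the web app up to the redis container
redis:
  image: redis           # official redis image from the Docker Hub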
It’s interesting, however, that CoreOS feel container technology still has some fundamental work required (improving security, tooling and openness), and through Rocket they are looking to build on the work that Docker have already done. As CoreOS is a very popular choice for running Docker containers, I think it’s a great example of coopetition in action. I also read with interest that Pivotal have expressed a serious interest in collaborating with CoreOS on the next generation of open containers. Commoditisation of containers, anyone?!
Additional Reading
For those that want to dig deeper and do some additional reading on container technology, I would suggest having a little look at the following links for the basics.
What is Docker? – https://www.docker.com/whatisdocker/
Intro to Docker Slide Deck – http://www.slideshare.net/dotCloud/intro-docker-october-2013
Understanding Docker – https://docs.docker.com/introduction/understanding-docker/
Flockport – LXC vs Docker – http://www.flockport.com/lxc-vs-docker/
Wider Ecosystem Momentum
One of the other reasons I think it’s containers, containers, containers and not just Docker is the sheer movement in the wider ecosystem over the last month alone.
I’ve mentioned the various flavours of containers springing up out there (CoreOS’s Rocket, Flockport, LXD, etc.). On top of that there have been a number of interesting announcements in the platform and orchestration space.
Google announced their Google Container Engine alpha release, managed by Google’s open source Kubernetes orchestration engine and supporting Docker containers. Google are definitely one of the world’s biggest consumers of containers, spinning up a claimed 2 billion containers per week, and now they’re getting into the Container as a Service game. It’ll be interesting to see how they broaden support for other container providers like CoreOS, both within the platform and within Kubernetes as an orchestration tool. https://cloud.google.com/container-engine/
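For a feel of what Kubernetes actually orchestrates, here’s a minimal pod definition. The API was evolving rapidly at the time of writing, so this sketch is written against the later stable v1 schema and is indicative rather than definitive:

# pod.yaml - a single-container pod running a Docker image
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: nginx          # a Docker image pulled from a registry
      ports:
        - containerPort: 80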
Amazon are obviously keen not to miss out on the buzz around containers; at AWS re:Invent they launched their EC2 Container Service with support for Docker. Though Docker is the only technology mentioned on day one, in theory this service could be expanded to interact with other container technologies. Again it will be interesting to see what Amazon do next in this space to broaden support and build out their CaaS offering – http://aws.amazon.com/blogs/aws/cloud-container-management/
Microsoft announced support for Linux Containers on Azure back in June 2014 (20% of Azure workloads are now running Linux, can you believe that!). Microsoft have since announced plans to support Docker on top of Windows Server and integrate the Docker Hub with the Azure Gallery service catalogue. So interesting moves afoot from all the big cloud players in the market – http://azure.microsoft.com/blog/2014/10/15/new-windows-server-containers-and-azure-support-for-docker/
All in all the ecosystem looks to be making healthy progress; the big players definitely appear to be getting caught up in the wave of container mania, focused primarily around Docker. The key thing for me is the openness around orchestration engines like Kubernetes, which will ensure that these tools don’t just apply to Docker but will be capable of embracing new flavours of containers as they appear.
Who is using containers?
I’ve heard of lots of examples: Google, Netflix, Twitter, Hailo and many, many more. However, as I work for EMC and have an interest in Pivotal, I thought I would concentrate on those specific examples. This blog post started following a Docker meet-up in Edinburgh a while back, where I was asked if we at EMC were using Docker in production. My answer was…
“Well, yes, sort of; we are using various container technologies within products at EMC.”
The guy looked visibly shocked at this point. He told me he didn’t know anyone taking that kind of approach to using containers in production and had only ever seen containers used in the realms of application development and testing. In EMC’s case it’s about changing how we deliver software on commodity hardware and driving our Software-Defined Storage strategy.
The above might actually come as a surprise to some; however, at EMC and Pivotal we’re actively adopting and using container technology to change both how we deliver data services and Platform as a Service (PaaS). At EMC we actively use Linux-based containers within the VNXe storage array and our Elastic Cloud Storage (ECS) product; my global boss Chad Sakac alluded to this some time back when the VNXe was launched.
VNXe / Project Liberty
The release of the VNXe was quite key for EMC and the future of the VNX array architecture. This was the first time we had managed to consolidate the VNX file and block code bases on a single x86 platform. Previously the file part of a VNX ran on separate hardware (Data Movers with X-Blades) in front of the block part of a VNX (x86-based storage processors).
Following “Project Liberty”, those code bases were re-architected onto a single Linux-based kernel. Now the VNXe runs both the file and block code on the storage processors (i.e. an x86 commodity server) without the need for dedicated X-Blades in front of the block part of the VNX. We’re still using the FLARE and DART code bases to provide the data services; however, in this instance they run within CSX containers, which are part of EMC’s C4 code base used in the VNXe arrays. The picture below gives some insight into that architecture, where you can see data path code running in CSX containers. This is a great example of successful utilisation of containers to deliver simplicity: for EMC internally, around how we develop code and build products, and for EMC customers, around simpler consumption of our data services on VNX.
We’re now seeing this same concept applied in the new VMAX 3, which boasts our HyperMax OS. Just last week we launched the latest version of the Enginuity code, and with that we can now run the VNX file code as Embedded NAS on the VMAX 3 instead of having to put a physical VNX gateway in front of it. This is a direct result of the work done to abstract the data services in the VNX, freeing them from the confines of dedicated X-Blade hardware. I’m not 100% sure what the secret sauce is behind this implementation in the HyperMax OS, but in one of his most recent blog posts Chad alluded to it being similar to what we do in VNXe; I’m reading that as C4 & CSX 🙂
Credit for most of the above goes to Chad, who blogged about these things here (Project Liberty), here (C4 / CSX containers) and here (VMAX 3).
Elastic Cloud Storage
For those not familiar with EMC’s Elastic Cloud Storage (ECS), it’s a combination of commodity servers, Ethernet networking, disk shelves, disks and EMC ViPR advanced data services. The ViPR data services allow us to layer a mixed combination of block, object and HDFS on top of that same commodity hardware, and to do so at huge scale. These advanced data services of course need to run somewhere; within ECS this is on top of Intel x86-based servers known within EMC as “Phoenix”. The number of actual servers/racks depends on your configuration; an example one-rack config can be seen below with the Phoenix servers deployed.
ViPR itself is a distributed, scale-out platform, and ECS utilises ViPR on Docker at the core of the product. The components of ViPR, such as the ViPR Controller, the ViPR object storage engine and the ViPR block services, are basically individual scale-out applications, so Docker makes a lot of sense as a means of deploying them within the ECS appliance.
In the example below we’re seeing ScaleIO, which is used to deliver the block data services of the ECS appliance. Running a docker ps -a command lists the containers currently running on the Docker Engine on a single Linux host, which in turn runs on the Phoenix hardware. As you can see, there are components of ScaleIO as well as management components such as ZooKeeper, the ECS fabric and the ECS registry.
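For illustration only, the output looks something along these lines; the actual image names, IDs and versions on an ECS appliance will differ, so treat every row below as hypothetical:

$ docker ps -a
CONTAINER ID  IMAGE              COMMAND         STATUS       NAMES
3f2a1b9c0d4e  scaleio-sds:1.30   "/start.sh"     Up 3 weeks   scaleio-sds
9e8d7c6b5a4f  scaleio-mdm:1.30   "/start.sh"     Up 3 weeks   scaleio-mdm
1a2b3c4d5e6f  zookeeper:3.4      "/run.sh"       Up 3 weeks   zookeeper
7f8e9d0c1b2a  ecs-fabric:1.2     "/fabric.sh"    Up 3 weeks   ecs-fabric
0a1b2c3d4e5f  ecs-registry:1.2   "/registry.sh"  Up 3 weeks   ecs-registry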
From a higher-level architecture perspective, what does this look like? Well, below you can see we have three ECS appliances operating in a fabric, with each Linux server running multiple containers and therefore multiple applications. So again, a great use of containers to build out a scale-out architecture which can deliver exabytes of storage.
Credit for this info & pics goes to my EMC colleague Michael Wehle – ECS Appliance Blog Post
Pivotal’s use of Containers
So I’d best not forget the guys over at Pivotal, who are also using containers within the Pivotal Cloud Foundry PaaS platform. Today, in Pivotal CF 1.3.0.0, containers are used within what are known as Droplet Execution Agents, or DEAs for short; this is where application instances run within Pivotal CF. These DEAs are based on an Ubuntu Linux distribution and run what are known as Warden containers. The container technology is based on Linux containers, utilising namespaces and cgroups for isolation. Warden itself is a service for managing these containers and determining how clients interact with them, e.g. routing network traffic to an app within a container.
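Warden’s isolation model is easier to picture with the underlying kernel primitives in hand. This is not Warden’s own code, just a minimal sketch of the same Linux building blocks it relies on: namespaces via unshare, and resource limits via the (v1) cgroup filesystem:

# Run a shell in new PID, mount and network namespaces (needs root).
# Inside, "ps" only sees this process tree: the essence of a container.
sudo unshare --pid --fork --mount-proc --net /bin/bash

# Cap memory for a group of processes via the cgroup filesystem.
sudo mkdir /sys/fs/cgroup/memory/demo
echo $((256 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs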
Warden is currently being rewritten in Go and will go by the name Garden; this is being done as part of Project Diego. Project Diego is a rewrite of the Cloud Foundry elastic runtime, designed to facilitate easier development changes to the runtime while also allowing it to be more platform agnostic, i.e. supporting execution endpoints such as Docker, Linux containers, Windows and whatever else is just round the corner.
For those interested in the current implementation of Warden containers and how Pivotal are changing this to be more platform agnostic, I recommend watching the following video. It explains the vision behind rewriting the CF runtime and how it will support that wider ecosystem of endpoints, including containers. Pretty cool stuff if you ask me!
Summary
If I very quickly summarise my thoughts after writing this article (which has evolved over a number of weeks), they would be as follows:
- Containers are becoming increasingly popular for running ‘12-factor apps’.
- There is a healthy container ecosystem with good buy-in from the big tech firms.
- There is a great collaborative open source community bought into containers.
- The service-oriented / microservice approach plays well with container technology.
- Tech companies are already finding novel uses for containers, e.g. EMC and Pivotal.
- Netflix, Twitter and Google are using containers, effectively, at scale and in production!
I also saw the following tweet recently, which resonated quite well with me, as ultimately it’s about choice and, again, about mixing and matching to meet your needs. Containers on VMs seem a logical implementation to me, and in fact that’s how I’ve always viewed the deployment of Pivotal CF: Warden containers on DEA virtual machines.
So what next for containers? I’m really interested to see how the different flavours of containers evolve; the CoreOS Rocket announcement I mentioned earlier caused some big waves in the industry but makes sense to me. After that, I think the key is how the various toolsets evolve to account for these new container technologies and not just Docker; how they’re adopted into CaaS offerings will drive container adoption. The fact that widespread production use of containers still appears to be in the propeller-head realm (think serious DevOps shops plus the likes of Netflix) means that how ISVs embrace and utilise containers, as they have virtualisation, will also be very important to widespread adoption.
To give an EMC software example, I attended a meeting last week on EMC’s Captiva, XCP and Documentum products; these are products related to image capture, business workflow and records management. They are made up of multiple traditional application components, all integrating together to provide different capabilities. After writing this article, reading a lot of stuff around loosely coupled service architectures and looking at what Netflix have been releasing to the open source community, all I could think was the following:
- Re-develop these components as many smaller microservices (i.e. 12-factor apps)
- Run them in containers
- Run them within Pivotal CF (see the sketch after this list)
- Develop and make changes quickly
- Scale up and down quickly and on demand
- Subscribe to external services that aid the apps, i.e. messaging and data services: RabbitMQ, GemFire, Hadoop, object stores (ECS) to name but a few
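As a hedged sketch of what that developer experience could look like on Pivotal CF (the app, service and plan names here are all hypothetical):

# Push a 12-factor app into Pivotal CF; it lands in a Warden
# container on a DEA, as described earlier.
cf push inbox-service -i 2 -m 512M

# Scale out (or back) on demand
cf scale inbox-service -i 10

# Create and bind an external messaging service to the app
cf create-service p-rabbitmq standard inbox-queue
cf bind-service inbox-service inbox-queue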
Sounds simple, doesn’t it? It will be interesting to see how software evolves to embrace these concepts, within ISVs and EMC alike!