DevOps, Microservices and Containers – a High Level Overview

February 8, 2016

A little while ago I put together a short presentation intended to provide a high-level overview of the wild and wacky world of DevOps, Microservices and Containers.  I present this deck both internally and externally to give folks an overview of what is happening in IT today.

For your reference, I have added the speaker notes after the deck.  I’m sure everyone has a different take on the concepts and explanations here.

Feel free to add your thoughts.


Speaker notes

1) Cover

2) Digital Players

  • Digital pioneers have reset customer expectations and disrupted industries, resulting in the need for organizations to digitally transform in order to be competitive and ultimately survive (witness Kodak, Borders, Blockbuster, the taxi industry, etc.).  Additionally, there is no time to waste: five years after the financial crisis, companies that have been in cost-cutting mode are all waking up at the same time, realizing that they have a lackluster product portfolio and need to innovate.

3) Digital Business = Software (and it has a shelf life)

  • The key enabler for digital businesses is software, and that software has a shelf life.  To be competitive, that software needs to reach customers as soon as possible.  To help drive this speed and customer focus, the Agile Manifesto was created in 2001.  The manifesto was a reaction to the long development cycles driven by the “waterfall” method of software development.  Agile turned its focus to the customer and quick, iterative turns of development.

4) But that’s only “half” of the equation

  • While agile has sped up software development and made it more responsive to customer needs, unless it’s paired with greater cooperation with operations, the overall speed of software delivery to customers remains the same.
  • In the past, developers have kept their distance from operations.  It is not surprising that these groups have stood apart, given how vastly different their goals and objectives have been.
    • Developers are tasked with driving innovation and reinvention in order to constantly improve the user experience and deliver new features, staying one step ahead of the competition.
    • Operations, on the other hand, is focused on providing rock-solid stability, never letting the site go down, while at the same time being able to scale at a moment’s notice.

5) Dev + Ops: A Methodology

  • And this is where DevOps comes in.  DevOps is a methodology intended to get developers and operations working together to decrease friction and increase velocity.  You want to get your “product” to customers as quickly as you can and shorten this time frame as much as possible; you also want to be able to continuously improve your product via feedback.
  • The gap between developers and operations is often referred to as “the wall of confusion,” where code that often isn’t designed for production is lobbed over the wall.  Beyond the silos, the tools on each side do not fit together and there isn’t a common “tool chain.”  When the site goes down, finger-pointing results: ops accuses dev of writing bad code, and dev accuses ops of not implementing it correctly.  This friction is obviously not productive in a world where “slow is the new down.”
  • By tearing down the wall, the former delineation of responsibilities blurs:
    • Developers are asked to put “skin in the game” and for example carry a pager to be notified when an application goes down.
    • Conversely operations will need to learn some basic coding.
  • In this new world order, developers and ops folks who understand and can work with “the other side” are in high demand.

6) DevOps: What it’s all about

  • Double-clicking on DevOps, here is how it flows from tenets to requirements and then benefits.   I should say that there are a lot of different interpretations of which components make up the key characteristics of DevOps, but in the true spirit of the methodology, you need to move forward with “good enough” (“always ready, never done”).   One factor that is widely agreed upon is that culture is the most important characteristic of DevOps.  Without it, you can have all the great processes and tools you want, but they will languish.  All of this is underpinned by the foundation of cloud and open source software (of which the majority of the tools and platforms are composed), as well as microservices, which I will expand on in a second.

7 & 8) Tool chain

  • Now, while I said tools are not as important as culture, the concept of a tool chain provides a good illustration of the connected nature of DevOps.  DevOps demands a linked tool chain of technologies to facilitate collaborative change.   Interchangeability is key to the success of the DevOps tool chain (tools loosely coupled via APIs).   Open source tool adoption and appetite remain strong; however, large-enterprise clients prefer commercially supported open source distributions.   You will see tool chains depicted many different ways with different players and buckets, but this example gives a decent overview of the high-level linkage of processes/components.  There are many different tools out in the market that fit into these buckets, but I have picked just a couple for each to act as illustrations.
  • It all starts with new code
  • Continuous integration (CI) is the practice in software engineering of merging all developer working copies to a shared mainline several times a day.   Changes are immediately tested and reported on when they are added to the larger code base (see the test sketch after this list).
  • Version control: These changes to the code are tracked in a central repository – “one source of truth”
  • Code deployment: installs the code across 100s/1000s of servers
  • Measurement and monitoring: continuously measures and monitors the environment to identify bottlenecks.  This information is then fed back to the front of the chain to drive improvements.
  • Across this chain the code travels in the form of Microservices that are conveyed in containers.
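
To make the CI step concrete, here is a minimal sketch of the kind of automated check a CI server would run on every merge to the shared mainline. The service and function names are purely hypothetical:

```python
# test_orders.py -- a minimal CI smoke test (hypothetical example).
# A CI server runs a suite like this on every merge to the mainline
# and reports failures back to the developer within minutes.

def calculate_order_total(prices_cents, tax_rate_pct=10):
    """Hypothetical function under test: totals an order in integer cents."""
    subtotal = sum(prices_cents)
    return subtotal + subtotal * tax_rate_pct // 100

def test_empty_order():
    assert calculate_order_total([]) == 0

def test_order_with_tax():
    # $10.00 + $2.50 = $12.50 subtotal, plus 10% tax = $13.75
    assert calculate_order_total([1000, 250]) == 1375

if __name__ == "__main__":
    # A real CI job would invoke a runner such as pytest; calling the
    # tests directly keeps this sketch self-contained.
    test_empty_order()
    test_order_with_tax()
    print("all checks passed")
```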

9) Microservices: essential to iterate, scale and speed

  • Let’s take a closer look at microservices, which, although they support DevOps, have developed independently over the last few years as a grassroots, developer-driven effort.   Microservices is the concept of decomposing software applications into loosely coupled and recombinable bite-sized processes, e.g., breaking a “store” component into order processing, fulfillment, and tracking services (a minimal sketch of one such service follows).  This decomposition greatly increases the ability to iterate and scale, and it increases speed, thereby enabling continuous delivery.  Microservices and cloud go hand in hand, where autoscaling can help ensure no service becomes a bottleneck by adding horsepower where needed.  Docker and microservices are a perfect fit.
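
To make that decomposition concrete, here is a minimal sketch of the order-processing piece as a standalone service, written with Python’s Flask. All names and endpoints are hypothetical; fulfillment and tracking would be their own, separately deployed services:

```python
# Minimal sketch of one microservice carved out of a monolithic
# "store" (hypothetical names; order processing only).
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = {}  # in-memory store; a real service would use a database

@app.route("/orders", methods=["POST"])
def create_order():
    order = request.get_json()
    order_id = len(orders) + 1
    orders[order_id] = order
    # Hand-off to the separate fulfillment service would happen here,
    # e.g. via a message queue or an HTTP call.
    return jsonify({"id": order_id}), 201

@app.route("/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    return jsonify(orders[order_id])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Because each piece stands alone behind its API, the order-processing service can be scaled out to many instances without touching fulfillment or tracking, which is exactly what makes autoscaling effective.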

10) Enter the modern container:

  • As I mentioned previously, containers fit well as the conduit to deliver microservices.  Containers have been around for a decade in the form of Solaris Zones and BSD jails, as well as at Google, where they have been used to run the company’s infrastructure (creating and blowing away two billion containers a week).  But it has only been in the last year or two that they have come to the fore, thanks to Docker, who evolved Linux containers in the context of modern applications and made containers easy to use for the general dev/ops person (Docker expertise is currently the second most sought-after skill in the tech world).
  • Containers serve perfectly as vehicles to convey microservices and applications across the tool chain from development through testing, staging and production, much the same way goods in shipping containers can be packaged at the warehouse, sent on a truck, loaded on a ship and then put on the truck waiting on the other side (see the sketch below).  Additionally, they can be used on public and private clouds as well as bare-metal servers.
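
As an illustration of a container acting as that vehicle, here is a sketch using the Docker SDK for Python (docker-py); the image name and build path are hypothetical, and a local Docker daemon is assumed:

```python
# Sketch: packaging a microservice once and running it anywhere the
# tool chain needs it. Assumes a directory ./order-service containing
# a Dockerfile (hypothetical names throughout).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build the image once; the same sealed artifact then travels
# unchanged through dev, test, staging and production.
image, build_logs = client.images.build(path="./order-service",
                                        tag="order-service:1.0")

# Run it, mapping the service port to the host.
container = client.containers.run("order-service:1.0",
                                  detach=True,
                                  name="order-service",
                                  ports={"5000/tcp": 5000})
print(container.name, container.status)
```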

11) Containers vs VMs.

  • Architecturally, VMs and containers differ in that VMs sit on top of a hypervisor, and each VM contains both a guest OS and an app.  Containers, on the other hand, package an app or service by itself and sit directly on top of the OS.  Given the maturity of VMs, they are more secure than containers, but they also take much longer to spin up.   Containers don’t currently have the security of a VM but spin up in milliseconds versus seconds or minutes (see the timing sketch after this list).  To address security concerns, in most cases today organizations are running containers within virtual machines.
  • As with all new technology, containers are still rough around the edges, and if you aren’t an early-adopter kind of organization, you may want to play with or pilot them but not implement them on a large scale just yet.
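
If you do pilot containers and want to see the spin-up speed for yourself, here is a small sketch that times a throwaway container with the Docker SDK for Python (it assumes a local Docker daemon and that the small alpine image has already been pulled, so no network download skews the number):

```python
# Sketch: timing container start-up with the Docker SDK for Python.
import time

import docker

client = docker.from_env()

start = time.monotonic()
container = client.containers.run("alpine", ["echo", "hello"], detach=True)
container.wait()  # block until the container's process exits
elapsed = time.monotonic() - start

print("start-to-exit took {:.3f} seconds".format(elapsed))
container.remove()
```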

12) The landscape: 

  • At this point the container landscape is an ever-changing field populated by small and large players.  This space is dominated by open source offerings.
  • Container engines: At the center of gravity of the landscape are the container engines themselves, made up of the 800-pound gorilla, Docker, as well as Rocket, which was created by CoreOS in response to what CoreOS felt was a lack of security in the Docker container.  This summer the Open Container Initiative was kicked off to bring the two sides together and create a common spec.
  • Micro OSes: Sitting beneath the containers are the micro OSes, basically the size of 25 pictures on your cell phone (100 MB), or 1/20th the size of a typical OS.   What makes these so small is that they have been stripped down to the bare necessities, e.g., no fax software included.  These began with CoreOS, and now there are offerings from Red Hat (Atomic), Microsoft (Nano), VMware (Photon) and Rancher (others include Intel’s Clear Linux and Ubuntu’s Snappy).
  • Container orchestration: Just as you can have VM or server sprawl, you can have container sprawl and need to be able to manage it.  The offering that sits at the center is Google’s Kubernetes, built on Google’s own container management platform, and it can be combined with the other orchestration offerings.   The others include Rancher, Docker Swarm, CoreOS, Mesosphere (based on the Apache Mesos project) and Flocker, a container data volume manager (see the sketch after this list).
  • Clouds with Docker support: Most clouds are now building in Docker support, from OpenStack to Joyent’s Triton, Google’s Container Engine, EC2 and Microsoft Azure.
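
To give a feel for what managing that sprawl looks like in practice, here is a minimal sketch using the official Kubernetes Python client. It assumes you already have a running cluster and a kubeconfig file pointing at it:

```python
# Sketch: taking inventory of everything the orchestrator is running.
# Assumes a reachable Kubernetes cluster and credentials in
# ~/.kube/config (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()   # read cluster credentials from kubeconfig
v1 = client.CoreV1Api()

# List every pod (a pod is a group of one or more containers),
# wherever in the cluster the scheduler happened to place it.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```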

13) The DevOps equine continuum

  • Now if we zoom back out and take a look at the implementation of DevOps, it can be illustrated by the analogy of an “equine continuum.”  Here is a model for classifying companies into three buckets illustrating their position on the DevOps journey.
  • In the upper right you have the “unicorns” (not the billion-dollar-valued unicorns of the valley) such as AWS, Google and Uber, who have employed the DevOps methodology since their beginnings or soon thereafter.  These tend to be cloud-based companies.
  • Next on the continuum are the “race horses,” oftentimes banks like Goldman Sachs or JP Morgan Chase, who are starting to implement DevOps to increase their agility and gain a competitive edge.
  • In the lower left are the “work horses,” who have just started looking into how they can improve their competitiveness via digital transformation and what role DevOps may play.

14) Where do I start

  • If you fit into the workhorse classification and you’re looking to get started, we are not advocating that you dump all your existing infrastructure and start implementing DevOps; for one thing, you would have a mutiny on your hands.   The best place to focus is on those fast-changing, customer-facing applications and services on the front end.  You would want to leave stable, transaction-oriented systems on the back end as they are.

15) What Dell is doing in this space


  • Professional services: Dell’s professional services organization has an array of offerings to enable organizations to implement DevOps practices:
    • Agile/DevOps Advisory services; Agile Delivery Services
    • CI/CD consulting and implementation services
    • DevOps Migration/managed services
    • DevOps-focused test automation and performance-testing services
  • OpenShift: Working with our partner Red Hat, Dell is making the OpenShift Platform as a Service available to our customers.
  • Dell XPS 13 developer edition: This is an Ubuntu Linux-based developer laptop that allows developers to create applications/microservices within Docker containers on their laptops and then deploy these containers directly to the cloud.
  • Open Networking OS 10:  This switch OS works with Kubernetes, which coordinates the hardware pieces; OS 10 programs the hardware as containers come and go.


  • Flocker plugin: Code that allows ClusterHQ’s Flocker to integrate with the Dell Storage SC Series has been made available on GitHub.  What this does is allow developer and operations teams to use existing storage to create portable container-level storage for Docker.  Rather than coming from an internal planning process or committee, the idea for a Flocker plugin came from Dell storage coder Sean McGinnis, who was looking for ways to make Dell Storage an infrastructure component in an open source environment.
  • Containerizing an old-school application: There are also several projects going on within the company to develop a greater understanding of containers and their advantages.  About a year ago, senior Linux engineer Jose De la Rosa had heard so much Docker and container mania that he thought he’d find out what the fuss was all about.  Jose started looking around for an app within Dell that he could containerize and came across Dell’s OpenManage Server Administrator (OMSA).  In case you’re wondering, OMSA is an in-house application used to manage and monitor Dell’s PowerEdge servers.  Rather than being a microservice-based application, OMSA is an old-school legacy app.  Jose succeeded in containerizing the application and learned quite a bit in the process.
  • CTO Lab: Dell’s CTO team has set up Joyent’s elastic container infrastructure, Triton, in our lab running Docker. The idea is to learn from this platform and then work with the Active System Manager team to decompose ASM into microservices and run it on the Triton platform.

Industry Consortia and Internal use of DevOps

  • Open Container Initiative: Dell is a member of the Open Container Initiative, which is hosted by the Linux Foundation and is chartered to create common specifications for containers to allow for interoperability and increased security.
  • Dell IT:  Within Dell itself, DevOps is being used to support internal IT.  Dell’s Active System Manager team employs the DevOps methodology in its product development process.

Extra-credit reading

Pau for now…


Dell launches Debian-based Open Networking OS

February 8, 2016

A couple of weeks ago, when Silicon Valley-based Darius Goodall and Cliff Wichmann made the pilgrimage out to Austin, I grabbed some time with them to learn about the recently announced OS 10.  Darius heads up the DevOps and tech partner ecosystem in Dell’s networking group, while Cliff is the software architect for OS 10.

Take a listen as they take us through the new OS and where it’s going.

OS10 overview

Some of the ground Darius and Cliff cover

  • A couple of years ago Dell disaggregated the switch hardware from the software, and now we’re disaggregating the software itself
  • Think of the switch itself as a Debian-based server with a bunch of ethernet ports
  • It will allow you to orchestrate, automate and integrate Linux-based apps into your switching environment
  • Timeline: The base version is coming out in March – a DevOps-friendly server environment
  • Timeline: In June/July the premium applications will be released, which will be the switching packages to use on top of the Linux base, plus a fancy routing suite (if you want to get going beforehand you can use Quagga on top of the base)
  • CPS: a programmatic interface we’ve added into the base in order to enable developers

Extra-credit reading

  • Dell serves up its own disaggregated OS – NetworkWorld
  • Dell drops next network OS on the waiting world – The Register
  • Dell’s OS10 aims to open up networks, then whole data centers – PCWorld

Pau for now…

Working on Triton in the lab, what’s on the horizon

January 27, 2016

As we’ve talked about before, a few of us in Dell’s CTO group have recently been working with our friends at Joyent.   This effort is part of our evaluation of platforms capable of intelligently deploying workloads to all major infrastructure flavors – bare metal, virtual machine, and container.

Today’s post on this topic comes to us compliments of Glen Campbell — no, not that one, this one:

Glen has recently come from the field to join our merry band in the Office of the CTO.  He will be a part of the Open Source Cloud team looking at viable upstream OSS technologies across infrastructure, OS, applications, and operations.

Here is what Glen had to say:

What’s a Triton?

Joyent’s Triton Elastic Container Infrastructure, a Private Cloud variant of the Joyent Elastic Container Service Public Cloud, allows customers to take advantage of the technologies and scale Joyent leverages in their Public Cloud.

On the Triton Elastic Container Infrastructure (which I’ll call “Triton” from now on), bare-metal workloads are intelligently sequestered via the “Zones” capabilities of SmartOS.   Virtual machines are deployed via the KVM hypervisor in SmartOS, and Docker containers are deployed via the Docker Remote API implementation for Triton and the use of the Docker or Docker Compose CLIs.

What’s the Dell/Joyent team doing?

As part of interacting with Triton we are working to deploy a Dell application, our Active System Manager (ASM), as a series of connected containers.

The work with Triton will encompass both Administrative and Operational efforts:


  • Investigate user password-based authentication via LDAP/Active Directory
    • in conjunction with SSH key-based authentication for CLI work


  • Use of:
    • Admin web UI and User Portal to deploy single/multi-tier applications
    • Joyent Smart Data Center (SDC) node.js client to deploy from remote CLI
      • Newer Triton node client to see next-gen of “sdc-X” tools
  • Docker Compose
    • build a multi-tier Docker application via Docker Compose, deploy on Triton via its Docker Remote API endpoint (see the sketch after this list)
  • Triton Trident…
    • deploy a 3-tier application composed of:
      • Zone-controlled bare-metal tier (db – MySQL)
      • Docker-controlled container tier (app – Tomcat)
      • VM-based tier (presentation – nginx)
    • Dell Active System Manager — a work in progress
      • aligning with Dell’s internal development and product group to establish a container architecture for the application
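
For a flavor of what deploying against that endpoint might look like, here is a hedged sketch using the Docker SDK for Python. The endpoint URL is a placeholder, and in practice the three tiers would be defined in a docker-compose.yml and wired together over a network rather than started by hand:

```python
# Sketch: starting the three tiers against a remote Docker API
# endpoint such as the one Triton exposes. The URL and names are
# placeholders, not real values from our lab.
import docker

# Triton presents the whole datacenter as one large Docker host.
client = docker.DockerClient(
    base_url="tcp://docker.triton.example.com:2376", tls=True)

# Database tier (the Trident plan above would actually run this in a
# bare-metal zone; it is shown as a container here for simplicity).
client.containers.run("mysql:5.6", detach=True, name="db",
                      environment={"MYSQL_ROOT_PASSWORD": "example"})

# Application tier.
client.containers.run("tomcat:8", detach=True, name="app")

# Presentation tier.
client.containers.run("nginx", detach=True, name="web")

print([c.name for c in client.containers.list()])
```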

Stay tuned

Our test environment has been created and the Triton platform has been deployed.  Follow-on blog posts will cover basic architecture of the environment and the work to accomplish the Admin and Ops tasks above.  Stay tuned!

Extra-credit reading

Pau for now…

Mark Shuttleworth talks 16.04 LTS, Snaps & Charms

January 26, 2016

Last week I flew out to sunny California to participate in SCaLE 14x and the UbuCon summit.  As the name implies this was the 14th annual SCaLE (Southern California Linux Expo) and, as always, it didn’t disappoint.  Within SCaLE was the UbuCon summit which focused on what’s going on within the Ubuntu community and how to better the community.

While there I got to deliver a talk on Project Sputnik, “The Sputnik story: innovation at a large company.”  I also got to hang out with some of the key folks within the Ubuntu and Linux communities.  One such person is Mark Shuttleworth, Ubuntu and Canonical founder.  I grabbed some time with Mark between sessions and got to learn about the upcoming 16.04 LTS release (aka Xenial Xerus) due out on April 21st.

Take a gander:

Some of the ground Mark covers

The big stories for 16.04 LTS

  • LXD — ultralight VMs that operate like containers and give you the ability to run 100s of VMs on a laptop.   Mark’s belief is that this will fundamentally change the way people use their laptops to do distributed development for the cloud.
  • Snappy — a very tight packaging format for Ubuntu desktop and server distros.  It provides a much better way of sharing packages than PPAs, and snaps provide a cleaner, faster way of creating packages.

Juju and charms

  • Where do Juju charms and Snappy intersect? (hint: they’re orthogonal but work well together; charms can use snaps)

OS and services

  • The idea is to have the operating system fade into the background so that users can focus instead on services in the cloud, e.g., “give me this service in the cloud” (which Juju will allow) or “deliver this set of bits to a whole set of machines” à la Snappy

Pau for now…

Installation details for Joyent’s Triton — Dell CTO lab

January 20, 2016

Here is our third and final post walking through the setting up of the Joyent Triton platform in the Dell CTO lab.  In the first post, Don Walker of the CTO office gave an overview of what we were doing and why.  The second laid out the actual components and configuration of the platform.

Today’s video is a walk-through of the installation process where Don shares his experience in setting up the Triton Platform.

When we pick this series up again it will focus on containerizing Dell’s Active System Manager and then loading it on Triton.  Not sure how long this work will take so stay tuned!

Some of the ground Don covers:

  • Before installing Triton, you need networking set up and working.  Don double-clicks on the network configuration and what we did to make sure it was working.
  • Step one in installing Triton is to create a bootable USB key and install the head node.  There is a scripted setup which is dead simple; it lays down SmartOS and the Triton services
  • The compute node install is also scripted and reuses a lot of the info you entered during the head node configuration.  After this you run acceptance tests
  • We got great support from Joyent on a couple of small issues we had
    • An unacceptable character in the password.  This info was fed back to the devs and is now fixed.
    • We forgot to disable the SATA port and kept getting error messages.  Once we disabled it, it worked.
  • Reference: Installing Triton Elastic Container Infrastructure — Joyent website

Extra credit reading

Pau for now…

The platform supporting Joyent’s Triton — Dell CTO lab

January 19, 2016

Continuing from the previous post, here is a more detailed explanation of the Joyent Triton platform we set up in the CTO lab.  Triton is Joyent’s elastic container infrastructure that runs on their cloud, a private cloud or both.

The idea behind setting up this instance is, working with Joyent, to learn about the platform.  The next step is to work with the Dell Active System Manager (ASM) team to decompose ASM into microservices and then run it on the Triton platform.

Take a listen as Don walks through the actual layout of the instance.

Some of the ground Don covers

  • Our minimalist set-up featuring two Dell R730 servers (the schematic only shows one for simplicity; an R730 contains two 520s).  Don explains how they are configured and how ZFS affects the set-up.
  • The two Dell Force 10 S6000 switches.
  • A double-click on the networking set up
  • The roles of the compute and head nodes (the head node acts as the admin entry point into the system).
  • Reference: Installing Triton Elastic Container Infrastructure — Joyent website

Extra credit reading

Pau for now…

Intro: Setting up Joyent’s Triton in Dell’s CTO lab

January 18, 2016

A while back I tweeted that we had begun setting up a mini-instance of Joyent’s Triton in our Dell CTO lab.  Triton is Joyent’s elastic container infrastructure that runs on their cloud, a private cloud or both.  This cloud platform includes OS and machine virtualization (e.g., Docker for the former and typical VMs under KVM for the latter).

About a week ago we got the platform set up, and I grabbed some time with Don Walker of Dell’s enterprise CTO office to tell us about it.

In this first of three videos, Don gives an overview of the work Dell is doing with Joyent.  He describes what we’ve set up in the lab and talks about where we hope to take it.

Some of the ground Don covers

  • Don’s focus on open source cloud, e.g., OpenStack, containers, and cloud networking and storage solutions
  • What the enterprise CTO office does
  • What we’re doing with Joyent: evaluating Triton and the process of taking existing products and putting them into microservices and containers.
  • Looking at Dell’s ASM (Active System Manager) and what it means to refactor for microservices and containers
  • Overview of what was set up in the lab: a minimalist two-node instance consisting of head and compute nodes.

Extra credit reading

Pau for now…

