April 26, 2016
Yesterday at the OpenStack Summit here in Austin I caught a few of the sessions in the track that Canonical was hosting. One of the sessions dealt with Canonical’s LXD and where it fits into the whole virtualization/container space.
The talk was given by Dustin Kirkland and after he had finished, I grabbed him to explain the basics of LXD and the landscape it fits within.
Have a listen
Some of the ground Dustin covers:
- What is LXD and how is it different from virtual machines and containers
- How LXD acts like a hypervisor but is fundamentally a container
- Application containers vs Machine containers
- Application containers like Docker host a single process on a filesystem
- Machine containers from LXD boot a full OS on their filesystems
- Where do microservices fit in this model
- How Docker and LXD are complementary
- Ubuntu 16.04 LTS ships with LXD
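To make the application-vs-machine distinction above concrete, here is a rough CLI sketch. It assumes Docker and the LXD client are installed; the container names (`web`, `xenial-machine`) are illustrative:

```shell
# Application container: Docker runs a single process (here, nginx)
# on top of an image filesystem
docker run -d --name web nginx

# Machine container: LXD boots a full Ubuntu userspace, init system and all
lxc launch ubuntu:16.04 xenial-machine

# Inside the LXD container you get a whole OS worth of processes,
# not just one -- which is why LXD feels more like a hypervisor
lxc exec xenial-machine -- ps aux
```

This is also why the two are complementary: you can run Docker application containers inside an LXD machine container.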
Pau for now…
January 8, 2012
Here is the last in a series of three short videos around cloud computing put together by Dell and Intel. As I mentioned in the last two entries, these videos are part of larger series around key topics like IT reinvention, the consumerization of IT, social media etc.
This last video features me, along with Dell’s former CIO Robin Johnson; Praveen Asthana, VP of Dell’s Enterprise Solutions and Strategy; and Donna Troy, VP and GM of Solutions Marketing and Sales at Dell.
Some of the ground we cover:
- How we define cloud computing
- How quickly can you evolve to cloud?
- How do you balance your current environment with cloud
- Starting your cloud building from a basis of virtualization
Extra credit reading
Pau for now…
February 4, 2011
Dell’s Data Center Solutions (DCS) group has some pretty colorful folks. One of the more interesting members is Jimmy Pike, the man IDG News’ James Niccolai referred to as the “Willy Wonka of servers.” Jimmy, the self-proclaimed “chief geek” of the DCS team, is the consummate tinkerer, whether that involves constructing a data center in a briefcase or thinking of new ways to drive down data center power consumption by leveraging alternative forms of energy.
Last Spring I visited Jimmy’s home to check out what he was working on in his “free time.” Here’s what I saw (he keeps telling me he’s got much cooler stuff since I shot this so I may have to do a “geekquel”)
Some of the things Jimmy shows us:
- The low-power chips he’s playing with
- His experimentation with user interfaces and superman glasses
- His mini rack of servers
- The various forms of desktop virtualization and OS’s he uses
- Laying out and designing boards by mail
- His micro recording studio
Pau for now…
January 17, 2011
Earlier this month an interview I did with Robert Duffner, Director of Product Management for Windows Azure, went live on the Windows Azure team blog. Robert asked me a variety of questions about cloud security, how I see the cloud evolving, the pitfalls of the cloud, where Dell plays, etc.
I was pleasantly surprised to see that my ramblings actually turned out coherent :) Here is a section from the interview (you can check out the whole piece here):
Cloud computing is a very exciting place to be right now, whether you’re a customer, an IT organization, or a vendor. As I mentioned before, we are in the very early days of this technology, and we’re going to see a lot happening going forward.
In much the same way that we really focused on distinctions between Internet, intranet, and extranet in the early days of those technologies, there is perhaps an artificial level of distinction between virtualization, private cloud, and public cloud. As we move forward, these differences are going to melt away, to a large extent.
That doesn’t mean that we’re not going to still have private cloud or public cloud, but we will think of them as less distinct from one another. It’s similar to the way that today, we keep certain things inside our firewalls on the Internet, but we don’t make a huge deal of it or regard those resources inside or outside as being all that distinct from each other.
I think that in general, as the principles of cloud grab hold, the whole concept of cloud computing as a separate and distinct entity is going to go away, and it will just become computing as we know it.
Pau for now…
September 10, 2010
Lightweight servers have been gathering steam recently. Targeted at focused markets like hosting and Web 2.0, they feature the old-school architecture of one CPU per server, running one OS/application on that server. The new twist is that they can pack up to 12 servers into a single 3U enclosure.
Below, Dell Data Center Solutions chief architect Jimmy Pike takes us through a short whiteboard discussion on how Moore’s law has driven us to multi-core architectures and virtualization and how, in the case of very focused applications, that same law is bringing us back to the future.
Some of the points Jimmy makes:
- Given Moore’s law, it’s implausible to continue to drive higher and higher clock rates. This has given rise to multi-core architectures.
- The native demand of applications on servers hasn’t kept pace with Moore’s law. This has resulted in virtualization, which in effect lets you run multiple servers on a single system.
- This same law is also driving us in the opposite direction, toward lightweight servers, which feature a simple one server/one OS architecture in a very energy-efficient, cost-effective manner, targeted at focused applications.
Extra-credit reading (more Jimmy Pike):
Pau for now…
March 22, 2010
Whether you believe in the cloud or not, it’s coming. That being said, it’s not a phenomenon that will fill the skies of IT departments tomorrow; rather, it is starting out as another tool in IT’s bag of tricks. As time passes, cloud computing will increasingly become a greater part of the portfolio of compute models that IT departments manage, sitting alongside traditional computing and virtualization.
Cloud Computing Today
If you were to graph the distribution of compute models being used today by IT departments in large enterprises, it would look something like the chart below. Today, traditional computing and virtualization are where most of the distribution lies, with a little bit of flirting with the public cloud in the case of SaaS applications for areas like HR, CRM, email, etc. Private cloud is presently negligible.
Over the next three to five years
Over the next three to five years the above distribution will flatten out and shift to the right and will resemble the graph below. Private cloud will represent the largest compute model utilized but it will be equally flanked by virtualization and public cloud. You’ll notice there will still be a decent amount of resources that remain in the traditional compute bucket representing applications that are not worth the effort of rewriting or converting to a cloud platform.
Evolutionary Vs. Revolutionary
One of the things to note with this new distribution is that the lines between Virtualization and Private Cloud will start to blur (there will also be a blurring between Private and Public clouds as hybrid clouds become more of a reality in the future, but that’s another story for another time). There are two ways to go about setting up private clouds, evolutionary and revolutionary.
Tune in tomorrow and learn more about these two approaches and how they differ.
Pau for now…
January 7, 2010
Here is the second in my three-part series on virtualization and the cloud. Today’s entry focuses on the 800-pound gorilla in the virtualization space, VMware.
At last month’s Gartner Data Center conference, right after his standing-room-only presentation, I grabbed some time with VMware’s Mr. Cloud, Dan Chu. Hear what he had to say:
Some of the topics Dan tackles:
- What VMware is seeing customers actually doing to take advantage of the cloud today both with regards to public and private clouds.
- Some polling data he collected during his talk based on the ~300 folks who attended: 90-95% were virtualizing, 15% had an active private cloud project, 5-10% had a public cloud project. (This is pretty representative of what Dan’s generally seeing.)
- The three phases of cloud:
- Phase I: Standardizing and virtualizing an environment.
- Phase II: Adopting private cloud from a management standpoint: getting to self-service and automation in provisioning a new service, collapsing the time it takes to get a new image out to an end user or developer from weeks to minutes, and implementing chargeback, dynamic capacity planning, and management.
- Phase III: Thinking about or planning how to leverage the public cloud in a fully compatible way.
- A short history of VMware: how they’ve moved from desktop and server virtualization to VM management and optimization to enabling their platform for private clouds and public cloud providers.
- Their “recent” acquisition of SpringSource and how it fits in.
Stay tuned next time for a summary of Gartner’s virtualization presentation from their data center conference.
Pau for now…