XPS 13 Developer Edition launches in US, Ubuntu-based Workstations available worldwide

March 10, 2016

[Update April 7: i5 config now available]

[Update March 28: Precision 7510 and 7710 now available]

Today I am excited to announce the worldwide launch of the Precision line of Ubuntu-based workstations along with the US launch of the 5th generation of the XPS 13 developer edition.

Part of Project Sputnik, these systems began as an open-ended exploratory project to identify what developers wanted in an ideal laptop. With the community’s input, Project Sputnik became an official product and continues to evolve.  For more of the Sputnik story, including why this has become the perfect platform for Docker, see below.

Before getting into today’s details I would like to thank the entire community for their patience and support as we’ve made our way to launch.

The 5th gen XPS 13 developer edition

  • Preloaded with Ubuntu 14.04 LTS
  • Augmented with the necessary hardware drivers, tools and utilities
  • 6th Generation Intel® Core™ Processors
  • InfinityEdge™ display, FHD and QHD+ versions available
  • Availability: United States (Canada and Europe are being readied for launch as we speak; stay tuned for more details)

Configurations: We are starting out with three i7 configs and plan to add an i5 option.  The i5 configuration will come with 8GB RAM, a 256GB SSD and an FHD non-touch display.  The timing of the i5 config depends on when the current inventory on hand is depleted. — Update: i5 config available as of April 7

All of these XPS 13 developer edition configurations come with the Intel wireless card.

i7/8GB

  • 256GB, QHD+T, Intel 8260

i7/16GB

  • 512GB, QHD+T, Intel 8260
  • 1TB, QHD+T, Intel 8260

i5/8GB

  • 256GB, FHD NT, Intel 8260

The Ubuntu-based Precision mobile workstation line

The Precision mobile workstation lineup is composed of four systems.  Joining the Precision 5510, successor to the M3800, are the new Precision 3510, 7510 and 7710 mobile workstations.

This represents our complete Precision mobile workstation portfolio.  All of the systems below are fully configurable.

Dell™ Precision 5510, mobile workstation

  • Preloaded with Ubuntu 14.04 LTS
  • Next generation of the world’s thinnest and lightest true 15” mobile workstation
  • PremierColor™ 4K InfinityEdge™ display
  • Starting weight of just 3.93lbs (1.78kg) and a form factor that is less than 0.44” (11.1mm) thick
  • Up to: 6th generation Intel Xeon mobile quad-core processor, professional grade NVIDIA Quadro M1000M graphics, and 32GB of memory
  • Thunderbolt 3
  • Availability: worldwide

Dell™ Precision 3510, mobile workstation

  • Preloaded with Ubuntu 14.04 LTS
  • Affordable, fully configurable 15” mobile workstation
  • Up to: 6th generation Intel Xeon mobile quad-core processor, professional grade graphics, and up to 32GB of memory
  • FullHD (1920×1080) anti-glare matte screen option plus optional touchscreen
  • Availability: worldwide

Dell™ Precision 7510, mobile workstation 

  • Preloaded with Ubuntu 14.04 LTS
  • World’s most powerful 15” mobile workstation
  • Up to: 6th generation Intel Xeon mobile quad-core processor, professional grade graphics, 3TB of storage and 64GB of memory
  • PremierColor™ UltraSharp™ 4K UltraHD (3840×2160) screen option
  • Availability: worldwide

Dell™ Precision 7710, mobile workstation 

  • Preloaded with Ubuntu 14.04 LTS
  • World’s most powerful 17” mobile workstation
  • Up to: 6th generation Intel Xeon mobile quad-core processor, professional grade graphics, 4TB of storage and 64GB of memory
  • PremierColor™ UltraSharp™ 4K UltraHD (3840×2160) anti-glare screen option
  • Availability: worldwide

Ordering a Precision:  To get to the Ubuntu option, click on the “Customize & Buy” button on the system landing page.  Select Ubuntu Linux in the Operating System section and away you go!

Towers and racks too:  In case you didn’t know, we also have a portfolio of fixed Precision workstations — tower and rack — that are available with Ubuntu.

OTA (Over-The-Air) Fixes

There were several minor fixes that were not available in time for launch but have since been made available as over-the-air updates, so make sure to run all Ubuntu updates.  These fixes pertain to both the XPS 13 and the Precisions.
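For reference, a minimal sketch of pulling everything current from the command line (the graphical Software Updater works too), written here as a small script to run with sudo:

```shell
# update-ota.sh -- fetch and apply all pending Ubuntu updates, which is
# how the OTA fixes mentioned above arrive on the system.
cat > update-ota.sh <<'EOF'
#!/bin/sh
set -e
apt-get update             # refresh the package lists
apt-get dist-upgrade -y    # apply every pending update, OTA fixes included
EOF
chmod +x update-ota.sh
```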

In addition to the OTA fixes, there is a wireless out-of-box (OOBE) issue that will be fixed in the factory in the coming weeks.  Until then, please follow the directions at http://www.dell.com/support/article/SLN301251

16.04 LTS

As for updates, although 16.04 LTS will be shipping next month, we don’t have a date for when factory installation will become available.  That being said, we do plan to support 16.04 LTS for those who choose to upgrade.

To upgrade to the latest LTS, please follow the instructions at http://www.ubuntu.com/download/desktop/upgrade
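From a terminal, the upgrade can be sketched roughly as follows (do-release-upgrade ships with Ubuntu by default; as with any release upgrade, back up first):

```shell
# upgrade-lts.sh -- step a 14.04 LTS install up to the next LTS release.
cat > upgrade-lts.sh <<'EOF'
#!/bin/sh
set -e
apt-get update && apt-get dist-upgrade -y   # be fully current first
do-release-upgrade                          # interactive upgrade to the new LTS
EOF
chmod +x upgrade-lts.sh
```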

Project Sputnik — A quick history

How we turned a $40K investment into tens of millions of dollars in revenue by focusing on developers.

From humble beginnings

As many of you may know, Project Sputnik, as this effort is called, originated with a pitch made to an internal innovation fund four years ago.  The fund provided a small pot of money (the $40K mentioned above) and six months to see if the idea of a developer laptop would fly.  A couple of months after we were given the green light, on May 7, 2012, we announced the project publicly, asking the community what they would like to see in a developer laptop.

A rough ISO was provided for people to kick the tires, and folks were told that if we got enough interest we might be able to turn this project into a product.  Ten weeks later, thanks to the amazing interest in our beta program, we got the OK to turn Project Sputnik into an actual product, and in November of 2012 the XPS 13 developer edition became available.

You say you want an evolution

As the project has evolved we have continued to solicit and incorporate feedback.  Thanks to your support the XPS 13 developer edition has gone from one, to multiple configs.

On the higher end, we kept getting requests to add a larger system to the lineup.  OS architect Jared Dominguez took note and spent a bunch of late nights putting together instructions on how to get Ubuntu running on the Precision M3800.  From there interest kept mounting, and a year later the Ubuntu-based M3800 became an official product.  As of today, this original workstation offering has expanded to four systems.

DevOps, Cloud launcher and Docker

One of the big ideas we had when we first kicked off Project Sputnik was that it would be a DevOps platform.  A key piece of this platform would be a “cloud launcher” that would allow developers to create apps within “micro clouds” on their laptops and then deploy those apps to a public or private cloud.  Unfortunately this turned out to be a lot more difficult than we had hoped, and we put it on hold.

As luck would have it, Docker came along a couple of years later.  Docker containers provide the functionality of our envisioned cloud launcher, allowing applications created locally to be pushed, as is, to the cloud.  And because Docker containers run on Linux, developers on our Ubuntu-based systems can run containers natively rather than within a virtualized environment, as they would on other platforms.
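To make the “push as is” idea concrete, here is a minimal sketch of the flow on one of these laptops; the image name and registry address are placeholders, not real endpoints:

```shell
# A trivial containerized "app" built and run natively on the laptop,
# then pushed unchanged to a registry for the cloud side to pull.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
CMD ["echo", "hello from a container"]
EOF
cat > ship-it.sh <<'EOF'
#!/bin/sh
set -e
docker build -t myapp .                  # build locally
docker run --rm myapp                    # runs directly on the host kernel -- no VM layer
docker tag myapp registry.example.com/myapp
docker push registry.example.com/myapp   # the same artifact, as is, to the cloud
EOF
chmod +x ship-it.sh
```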

Forward march

Stay tuned to see how, with your support and input, project Sputnik will continue to evolve.  Once again, thanks for all the support and stay tuned for the Canadian and European roll outs!

Extra-Credit reading

Pau for now

 


The Project Sputnik story: Innovation at a large company?

February 23, 2016

As we get ready to launch the 5th generation of the XPS 13 developer edition and our expanded line of Ubuntu-based Precision workstations, I wanted to provide a look back.

Below is a video of a presentation I delivered last month at the UbuCon summit.  UbuCon was a part of the Southern California Linux Expo (SCaLE), and the presentation covers the genesis of Project Sputnik and the lessons learned along the way.

Enjoy!

[Note there is a minute or two of dead air at the beginning before we start.  The audio kicks in at 2:48:13]

Reference: The 5 lessons we learned

  1. Get a champion, be a champion – You need someone high up to go to bat for you. You must be ever vigilant
  2. Leverage, execute – Doesn’t matter if it’s not your idea, delivery is what counts
  3. Start small – Don’t over promise, err on the side of caution
  4. Be human/humble – Speak directly and be transparent.  Don’t write anyone off too soon
  5. No one is perfect – It’s not if you’re going to screw up, it’s how you recover when you do

Pau for now…


Pivotal Labs: Teaching Clients to Fish Agilely

February 11, 2016

I’ve been in New York the last couple of days, and this morning, before I left, I was able to check out Pivotal Labs’ NYC offices.  Pivotal’s New York office is one of 17 labs around the world, a number that will grow to roughly 25 by the end of the year.

At the Labs, rather than simply developing software for clients, Pivotal works with the client on a small project in order to teach them new methods of development.  East Coast managing director Graham Siener showed me around and gave me the lowdown on what Pivotal Labs is all about.

Some of the ground Graham covers

  • Helping folks with early-stage product development through XP (extreme programming) and Agile, working hand in hand with clients so that they build the skills they need to carry on once they leave Pivotal’s offices.
  • Kicked off back in ’89 with the idea of helping change the way people write software.
  • Working with clients from a wide range of verticals: “clients bring the domain expertise, Pivotal supplies the process expertise.”
  • Beyond the labs, what else makes up Pivotal: their big data suite and Cloud Foundry (and how Cloud Foundry fits well to support the skills and methodologies clients pick up from working with Pivotal).
  • Where Graham sees Pivotal Labs going over the next year.

Pau for now…


DevOps, Microservices and Containers – a High Level Overview

February 8, 2016

A little while ago I put together a short presentation intended to provide a high-level overview of the wild and wacky world of DevOps, Microservices and Containers.  I present this deck both internally and externally to give folks an overview of what is happening in IT today.

For your reference, I have added the speaker notes after the deck.  I’m sure everyone has a different take on the concepts and explanations here.

Feel free to add your thoughts.

DevOps, Microservices and containers – a high level overview from Barton George

 

Speaker notes

1) Cover

2) Digital Players

  • Digital pioneers have reset customer expectations and disrupted industries, creating the need for organizations to digitally transform in order to stay competitive and ultimately survive (witness Kodak, Borders, Blockbuster, the taxi industry, etc.).  And there is no time to waste: five years after the financial crisis, companies that have been in cost-cutting mode are all waking up at the same time, realizing they have a lackluster product portfolio and need to innovate.

3) Digital Business = Software (and it has a shelf life)

  • The key enabler for digital businesses is software, and that software has a shelf life: to be competitive, it needs to reach customers as soon as possible.  The Agile Manifesto of 2001 grew out of this need for speed and customer focus.  It was a reaction to the long development cycles of the “waterfall” method of software development; Agile turned the focus to the customer and to quick, iterative turns of development.

4) But that’s only “half” of the equation

  • While Agile has sped up software development and made it more responsive to customer needs, unless it is paired with greater cooperation with operations, the overall delivery of software to customers remains the same.
  • In the past, Developers have kept their distance from operations.  It is not surprising that these groups have stood apart in light of how vastly different their goals and objectives have been.
    • Developers are tasked with driving innovation and reinvention in order to constantly improve the user experience and deliver new features that stay one step ahead of the competition.
    • Operations, on the other hand, is focused on providing rock-solid stability, never letting the site go down, while at the same time being able to scale at a moment’s notice.

5) Dev + Ops: A Methodology

  • And this is where DevOps comes in.  DevOps is a methodology intended to get developers and operations working together to decrease friction and increase velocity: you want to get your “product” to customers as quickly as possible, and you want to be able to continuously improve it via feedback.
  • The gap between developers and operations is often referred to as “the wall of confusion,” over which code that often isn’t designed for production is lobbed.  Besides the silos, the tools on each side don’t fit together and there isn’t a common “tool chain.”  When the site goes down, finger-pointing ensues: ops accuses devs of writing bad code, and devs accuse ops of not implementing it correctly.  This friction is obviously not productive in a world where “slow is the new down.”
  • By tearing down the wall, the former delineation of responsibilities blurs:
    • Developers are asked to put “skin in the game” and for example carry a pager to be notified when an application goes down.
    • Conversely operations will need to learn some basic coding.
  • In this new world order, developers and ops folks who understand and can work with “the other side” are in high demand.

6) DevOps: What it’s all about

  • Double-clicking on DevOps, here is how it flows from tenets to requirements and then benefits.  There are a lot of different interpretations of which components make up the key characteristics of DevOps, but in the true spirit of the methodology you need to move forward with “good enough” (“always ready, never done”).  One factor that is widely agreed upon is that culture is the most important characteristic of DevOps; without it, you can have all the great processes and tools you want, but they will languish.  All of this is underpinned by a foundation of cloud, open source software (which the majority of the tools and platforms are composed of) and microservices, which I will expand on in a second.

7 & 8) Tool chain

  • Now, while I said tools are not as important as culture, the concept of a tool chain provides a good illustration of the connected nature of DevOps.  DevOps demands a linked tool chain of technologies, loosely coupled via APIs, to facilitate collaborative change; interchangeability is key to the tool chain’s success.  Open source tool adoption and appetite remain strong; however, large-enterprise clients prefer commercially supported open source distributions.  You will see tool chains depicted many different ways, with different players and buckets, but this example gives a decent overview of the high-level linkage of processes and components.  There are many tools on the market that fit into these buckets; I have picked just a couple for each to act as illustrations.
  • It all starts with new code
  • Continuous integration (CI) is the practice in software engineering of merging all developer working copies to a shared mainline several times a day.  Changes are immediately tested and reported on when they are added to the larger code base.
  • Version control: these changes to the code are tracked in a central repository, “one source of truth”
  • Code deployment: installs the code across 100s/1000s of servers
  • Measurement and monitoring: continuously measures and monitors the environment to identify bottlenecks.  This data is then fed back to the front of the chain to drive improvements.
  • Across this chain the code travels in the form of Microservices that are conveyed in containers.
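As a toy illustration of the front of that chain — new code entering version control, where a CI server would pick it up and test it on every merge:

```shell
# Create a repo and commit a change: the "one source of truth" that a
# CI system watches, tests, and reports on.
mkdir -p demo-repo
git -C demo-repo init -q
echo 'print("hello")' > demo-repo/app.py
git -C demo-repo add app.py
git -C demo-repo -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "new code enters the chain"
git -C demo-repo log --oneline    # every change tracked centrally
```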

9) Microservices: essential to iterate, scale and speed

  • Let’s take a closer look at microservices, which, although they support DevOps, have developed independently over the last few years as a grassroots, developer-driven effort.  Microservices is the concept of decomposing software applications into loosely coupled and recombinable bite-sized processes, e.g. breaking a “store” component into order processing, fulfillment and tracking services.  This decomposition greatly increases the ability to iterate and scale, and it increases speed, thereby enabling continuous delivery.  Microservices and cloud go hand in hand, where autoscaling can help ensure no service becomes a bottleneck by adding horsepower where needed.  Docker and microservices are a perfect fit.
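Sketched in the Compose format of the day, the “store” decomposition above might look like this; the image names are purely illustrative:

```shell
cat > store-compose.yml <<'EOF'
# Three independently deployable, independently scalable services
orders:                  # order processing
  image: shop/orders
fulfillment:             # fulfillment
  image: shop/fulfillment
tracking:                # shipment tracking
  image: shop/tracking
  links:
    - fulfillment        # tracking queries fulfillment for status
EOF
# docker-compose -f store-compose.yml up -d            # start all three
# docker-compose -f store-compose.yml scale orders=3   # scale just the hot spot
```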

10) Enter the modern container:

  • As I mentioned previously, containers fit well as the conduit to deliver microservices.  Containers have been around for a decade in the form of Solaris Zones and BSD jails, as well as at Google, which uses them to run its infrastructure (creating and blowing away two billion containers a week).  But it is only in the last year or two that they have come to the fore, thanks to Docker, which evolved Linux containers in the context of modern applications and made them easy to use for the general dev/ops person (Docker expertise is currently the second most sought-after skill in the tech world).
  • Containers serve perfectly as vehicles to convey microservices and applications across the tool chain from development through testing, staging and production, much the same way goods in shipping containers can be packaged and sent on a truck from the warehouse, loaded on a ship, and then put on a truck waiting on the other side.  Additionally, they can be used on public and private clouds as well as bare-metal servers.

11) Containers vs VMs.

  • Architecturally, VMs and containers differ in that VMs sit on top of a hypervisor, and each VM contains both a guest OS and an app.  Containers, on the other hand, package an app or service by itself and sit directly on top of the OS.  Given their maturity, VMs are more secure than containers, but they also take much longer to spin up.  Containers don’t yet have the security of a VM, but they spin up in milliseconds versus seconds or minutes.  To address security concerns, most organizations today run containers within virtual machines.
  • As with all new technology, containers are still rough around the edges, and if you aren’t an early-adopter kind of organization, you may want to play with or pilot them, but not implement them on a large scale just yet.

12) The landscape: 

  • At this point the container landscape is an ever changing field populated by small and large players.  This space is dominated by open source offerings.
  • Container engines: At the center of gravity of the landscape are the container engines themselves, made up of the 800-pound gorilla, Docker, as well as Rocket, which was created by CoreOS in response to what CoreOS felt was a lack of security in the Docker container.  This summer the Open Container Initiative was kicked off to bring the two sides together and create a common spec.
  • Micro OSes: Sitting beneath the containers are the micro OSes, basically the size of 25 pictures on your cell phone (100 MB), or 1/20th the size of a typical OS.  What makes these so small is that they have been stripped down to the bare necessities, e.g. no fax software included.  These began with CoreOS, and now there are offerings from Red Hat (Atomic), Microsoft (Nano), VMware (Photon) and Rancher, among others (including Intel’s Clear Linux and Ubuntu’s Snappy).
  • Container orchestration: Just as you can have VM or server sprawl, you can have container sprawl, and you need to be able to manage all those containers.  The offering that sits at the center is Google’s Kubernetes, built on Google’s own container management platform, which can be combined with the other orchestration offerings.  The others include Rancher, Docker Swarm, CoreOS, Mesosphere (based on the Apache Mesos project) and Flocker, a container data volume manager.
  • Clouds with Docker support: Most clouds are now building in Docker support, from OpenStack to Joyent’s Triton, Google Container Engine, Amazon EC2 and Microsoft Azure.

13) The DevOps equine continuum

  • Now, if we zoom back out and look at the implementation of DevOps, it can be illustrated by the analogy of an “equine continuum”: a model for classifying companies into three buckets according to where they are on the DevOps journey.
  • In the upper right you have the “Unicorns” (not the billion-dollar-valuation unicorns of the Valley) such as AWS, Google and Uber, who have employed the DevOps methodology since their beginnings or soon thereafter.  These tend to be cloud-based companies.
  • Next on the continuum are the “Race Horses,” oftentimes banks like Goldman Sachs or JP Morgan Chase, who are starting to implement DevOps to increase their agility and gain a competitive edge.
  • In the lower left are the “Work Horses,” who have just started looking into how they can improve their competitiveness via digital transformation and what role DevOps might play.

14) Where do I start

  • If you fit into the work-horse classification and are looking to get started, we are not advocating that you dump all your existing infrastructure and implement DevOps wholesale; for one thing, you would have a mutiny on your hands.  The best place to focus is on the fast-changing, customer-facing applications and services on the front end; leave the stable, transaction-oriented systems on the back end as they are.

15) What Dell is doing in this space

Offerings

  • Professional services: Dell’s professional services organization has an array of offerings to enable organizations to implement DevOps practices:
    • Agile/DevOps Advisory services; Agile Delivery Services
    • CI/CD consulting and implementation services
    • DevOps Migration/managed services
    • DevOps-focused test automation and performance-testing services
  • OpenShift: Working with our partner Red Hat, Dell is making the OpenShift Platform as a Service available to our customers.
  • Dell XPS 13 developer edition:  This is an Ubuntu Linux-based developer laptop  that allows developers to create applications/microservices within Docker containers on their laptops and then deploy these containers directly to the cloud.
  • Open Networking OS 10:  This switch OS works with Kubernetes which coordinates the hardware pieces.  OS 10 programs the hardware as containers come and go.

Projects

  • Flocker plugin: Code that allows ClusterHQ’s Flocker to integrate with the Dell Storage SC Series has been made available on GitHub.  This allows developer and operations teams to use existing storage to create portable container-level storage for Docker.  Rather than coming from an internal planning process or committee, the idea for a Flocker plugin came from Dell storage coder Sean McGinnis, who was looking for ways to make Dell Storage an infrastructure component in an open source environment.
  • Containerizing an old-school application: There are also several projects going on within the company to develop a greater understanding of containers and their advantages.  About a year ago, senior Linux engineer Jose De la Rosa had heard so much Docker and container-mania that he thought he’d find out what the fuss was all about.  Jose started looking for an app within Dell that he could containerize and came across Dell’s OpenManage Server Administrator (OMSA), an in-house application used to manage and monitor Dell’s PowerEdge servers.  Rather than being a microservice-based application, OMSA is an old-school legacy app.  Jose succeeded in containerizing the application and learned quite a bit in the process.
  • CTO Lab: Dell’s CTO team has set up Joyent’s elastic container infrastructure, Triton, in our lab running Docker.  The idea is to learn from this platform and then work with the Active System Manager team to decompose ASM into microservices and run it on the Triton platform.
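For a sense of what the Flocker plugin enables, here is a hedged, illustrative sketch: with Flocker configured as Docker’s volume driver, a stateful container’s data can live on shared SC Series storage and follow the container between hosts.  The volume and image names are made up:

```shell
cat > flocker-demo.sh <<'EOF'
#!/bin/sh
# Ask Docker for a Flocker-managed volume; the data lands on shared
# SC Series storage rather than the host's local disk.
docker run -d --name db \
    --volume-driver=flocker \
    -v pgdata:/var/lib/postgresql/data \
    postgres
# If the container is rescheduled to another host, Flocker re-attaches
# the same pgdata volume there.
EOF
chmod +x flocker-demo.sh
```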

Industry Consortia and Internal use of DevOps

  • Open Container Initiative: Dell is a member of the Open Container Initiative, which is hosted by the Linux Foundation and chartered to create common specifications for containers to allow for interoperability and increased security.
  • Dell IT: Within Dell itself, DevOps is being used to support Dell.com and internal IT.  Dell’s Active System Manager team employs the DevOps methodology in its product development process.

Extra-credit reading

Pau for now…

 


Dell launches Debian-based Open Networking OS

February 8, 2016

A couple weeks ago when Silicon Valley-based Darius Goodall and Cliff Wichmann made the pilgrimage out to Austin I grabbed some time with them to learn about the recently announced OS 10.  Darius heads up the DevOps and tech partner ecosystem in Dell’s networking group while Cliff is the software architect for OS 10.

Take a listen as they take us through the new OS and where it’s going.

OS10 overview

Some of the ground Darius and Cliff cover

  • A couple of years ago Dell disaggregated the switch hardware from the software, and now we’re disaggregating the software itself
  • Think of the switch itself as a Debian-based server with a bunch of ethernet ports
  • It will allow you to orchestrate, automate and integrate Linux-based apps into your switching environment
  • Timeline: Base version coming out in March – a DevOps-friendly server environment
  • Timeline: In June/July the premium applications will be released: the switching packages that run on top of the Linux base, plus a fancy routing suite (if you want to get going beforehand, you can use Quagga on top of the base)
  • CPS: a programmatic interface we’ve added to the base in order to enable developers

Extra-credit reading

  • Dell serves up its own disaggregated OS – NetworkWorld
  • Dell drops next network OS on the waiting world – The Register
  • Dell’s OS10 aims to open up networks, then whole data centers – PCWorld

Pau for now…


Working on Triton in the lab, what’s on the horizon

January 27, 2016

As we’ve talked about before, a few of us in Dell’s CTO group have recently been working with our friends at Joyent.  This effort is part of our evaluation of platforms capable of intelligently deploying workloads to all major infrastructure flavors – bare metal, virtual machine, and container.

Today’s post on this topic comes to us courtesy of Glen Campbell — no, not that one, this one:

Glen has recently come from the field to join our merry band in the Office of the CTO.  He will be a part of the Open Source Cloud team looking at viable upstream OSS technologies across infrastructure, OS, applications, and operations.

Here is what Glen had to say:

What’s a Triton?

Joyent’s Triton Elastic Container Infrastructure, a private cloud variant of the Joyent Elastic Container Service public cloud, allows customers to take advantage of the technologies and scale Joyent leverages in its public cloud.

On the Triton Elastic Container Infrastructure (which I’ll call “Triton” from now on), bare-metal workloads are intelligently sequestered via the “Zones” capabilities of SmartOS.  Virtual machines are deployed via the KVM hypervisor in SmartOS, and Docker containers are deployed via the Docker Remote API implementation for Triton and the Docker or Docker Compose CLIs.
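In practice, that Remote API support means the stock Docker client, pointed at Triton rather than a local daemon; the endpoint address below is a placeholder for whatever a given Triton install exposes:

```shell
cat > triton-env.sh <<'EOF'
#!/bin/sh
# Point the ordinary Docker client at the Triton Docker Remote API
# endpoint; the datacenter then behaves like one large Docker host.
export DOCKER_HOST=tcp://triton.example.com:2376
export DOCKER_TLS_VERIFY=1
docker run -d nginx   # lands in a container on bare metal, not in a local VM
EOF
chmod +x triton-env.sh
```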

What’s the Dell/Joyent team doing?

As part of interacting with Triton we are working to deploy a Dell application, our Active System Manager (ASM), as a series of connected containers.

The work with Triton will encompass both Administrative and Operative efforts:

Administrative

  • Investigate user password-based authentication via LDAP/Active Directory
    • in conjunction with SSH key-based authentication for CLI work

Operative

  • Use of:
    • Admin web UI and User Portal to deploy single/multi-tier applications
    • Joyent Smart Data Center (SDC) node.js client to deploy from remote CLI
      • Newer Triton node client to see next-gen of “sdc-X” tools
  • Docker Compose
    • build a multi-tier Docker application via Docker Compose, deploy on Triton via its Docker Remote API endpoint
  • Triton Trident…
    • deploy a 3-tier application composed of:
      • Zone-controlled bare-metal tier (db – MySQL)
      • Docker-controlled container tier (app – Tomcat)
      • VM-based tier (presentation – nginx)
    • Dell Active System Manager — a work in progress
      • aligning with Dell’s internal development and product group to establish a container architecture for the application

Stay tuned

Our test environment has been created and the Triton platform has been deployed.  Follow-on blog posts will cover basic architecture of the environment and the work to accomplish the Admin and Ops tasks above.  Stay tuned!

Extra-credit reading

Pau for now…


Mark Shuttleworth talks 16.04 LTS, Snaps & Charms

January 26, 2016

Last week I flew out to sunny California to participate in SCaLE 14x and the UbuCon summit.  As the name implies this was the 14th annual SCaLE (Southern California Linux Expo) and, as always, it didn’t disappoint.  Within SCaLE was the UbuCon summit which focused on what’s going on within the Ubuntu community and how to better the community.

While there I got to deliver a talk on Project Sputnik, “The Sputnik story: innovation at a large company.”  I also got to hang out with some of the key folks within the Ubuntu and Linux communities.  One such person is Mark Shuttleworth, founder of Ubuntu and Canonical.  I grabbed some time with Mark between sessions and got to learn about the upcoming 16.04 LTS release (aka Xenial Xerus), due out on April 21st.

Take a gander:

Some of the ground Mark covers

The big stories for 16.04 LTS

  • LXD — ultralight VMs that operate like containers and give you the ability to run 100s of VMs on a laptop.   Mark’s belief is that this will fundamentally change the way people use their laptops to do distributed development for the cloud.
  • Snappy — a very tight packaging format for Ubuntu desktop and server distros.  It provides a much better way of sharing packages than PPAs, and snaps offer a cleaner, faster way of creating packages.
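As a sketch of the LXD workflow Mark describes (command names per the lxc client; the image alias may vary by release):

```shell
cat > lxd-demo.sh <<'EOF'
#!/bin/sh
lxc launch ubuntu:16.04 dev1   # an ultralight "VM" boots in seconds
lxc exec dev1 -- hostname      # run commands inside it like a full machine
lxc list                       # dozens of these fit comfortably on a laptop
EOF
chmod +x lxd-demo.sh
```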

Juju and charms

  • Where do Juju charms and snappy intersect? (hint: They’re orthogonal but work well together, charms can use snaps)

OS and services

  • The idea is to have the operating system fade into the background so that users can focus instead on services in the cloud, e.g. “give me this service in the cloud” (which Juju will allow) or “deliver this set of bits to a whole set of machines” à la Snappy
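“Give me this service in the cloud” with Juju looks roughly like the following; the charm names are common charm-store examples, used here purely as illustration:

```shell
cat > juju-demo.sh <<'EOF'
#!/bin/sh
juju deploy wordpress              # the charm encapsulates the service
juju deploy mysql
juju add-relation wordpress mysql  # juju wires the services together
EOF
chmod +x juju-demo.sh
```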

Pau for now…

