Dell EMC’s Ceph Performance and Sizing Guide for the PowerEdge R730XD

September 23, 2016

If you’re not familiar with it, Red Hat’s Ceph storage is a distributed object store and file system.  To support its deployment on the Dell EMC PowerEdge R730XD, a team from Dell EMC recently put together a white paper that acts as a performance and sizing guide.
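If you want to poke at a cluster of your own while you read, a couple of commands go a long way.  Here's a minimal sketch, assuming an already-configured cluster with CLI access; the pool name is made up for the example:

$ ceph status                                    # overall cluster health at a glance
$ ceph osd pool create testpool 128              # create a pool with 128 placement groups
$ rados bench -p testpool 10 write --no-cleanup  # 10-second write benchmark
$ rados bench -p testpool 10 seq                 # sequential-read benchmark against the same objects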

In the first video below, Amit Bhutani of Dell EMC’s Linux and open source group explains Ceph and takes us through the test environment that was used to create the deployment guide.  Video number two stars Valerie Padilla from Dell EMC’s server solution CTO team.  Valerie gives a high-level view of the white paper and the five categories of results.

Take a listen

Extra credit reading

Pau for now…


Introducing the Webilicious PowerEdge C8000

September 19, 2012

Today Dell is announcing our new PowerEdge C8000 shared infrastructure chassis, which allows you to mix and match compute, GPU/coprocessor and storage sleds all within the same enclosure.  This gives Web companies one common building block that can be used across the front-, mid- and back-end tiers that make up a web company’s architecture.

To give you a better feel for the C8000 check out the three videos below.

  1. Why — Product walk thru:  Product manager for the C8000, Armando Acosta, takes you through the system and explains how this chassis and the accompanying sleds better serve our Web customers.
  2. Evolving — How we got here:  Drew Schulke, marketing director for Dell Data Center solutions explains the evolution of our shared infrastructure systems and what led us to develop the C8000.
  3. Super Computing — Customer Example:  Dr. Dan Stanzione, deputy director at the Texas Advanced Computing Center talks about the Stampede supercomputer and the role the C8000 plays.

Extra Credit reading

  • Case Study: The Texas Advanced Computing Center
  • Press Release:  Dell Unveils First Shared Infrastructure Solution to Provide Hyperscale Customers with New Modular Computational and Storage Capabilities
  • Web page: PowerEdge C8000 — Optimize data center space and performance

Pau for now…


Savtira streams media and apps from the cloud with beefy PowerEdge C combo

April 18, 2011

Savtira Corporation, which provides outsourced Cloud Commerce solutions, has chosen Dell DCS’s PowerEdge C line of servers and solutions to deliver streamed media and apps from the cloud.  Dell’s gear will help power the Savtira Cloud Commerce platform and Entertainment Distribution Network (EDN).

With a little help from PowerEdge C, businesses will now be able to use EDN to stream all digital media (business apps, games, music, movies, audio/ebooks) from the cloud to any device.  One particularly cool feature: since state and configuration are cloud-based, consumers can switch between devices and pick up exactly where they pushed pause on the last device.

Talk about supercharging

To power Savtira’s EDN data center, the company picked PowerEdge C410xs packed with NVIDIA Tesla M2070 GPUs and driven by PowerEdge C6145s.  If you think GPUs are just for rendering first-person shooters, think again.  GPUs can also cost-effectively supercharge your compute-intensive solution by offloading a lot of the processing from the main CPUs.  According to NVIDIA, for 1/10 the cost and with only 1/20 of the power consumption, GPUs deliver the same performance as CPUs.

To help you get an idea of the muscle behind this solution, the PowerEdge C410x PCIe expansion chassis holds up to 16 Tesla M2070 GPUs, each of which packs over 400 cores.  Two fully populated C410xs are in turn driven by one PowerEdge C6145 for a combined total of 33 Teraflops in just 7U (that's 32 GPUs at roughly one teraflop of single-precision performance apiece).

Talk about a lot of power in a little space 🙂

Extra-credit reading

  • PowerEdge C6145 — Dell DCS unveils its 4th HPC offering in 12 months, and its a beefy one
  • PowerEdge C410x — Say hello to my little friend — packing up to 16 GPGPUs
  • NVIDIA: from gaming graphics to High Performance Computing

Pau for now…


Live from World Hosting Days – AMD’s John Fruehe talks about the AMD-based PowerEdge C systems

March 23, 2011

This week, outside of Frankfurt, WorldHostingDays is taking place.  A whole delegation of folks from the Data Center Solutions group is there to support the announcement of our new microserver line.   A lot of our key partners are there as well.  One such partner is AMD.

Earlier today, AMD director of product marketing John Fruehe held a session entitled “Core Scalability in a cloud environment.”  Above is a three-minute section where John talks about the three AMD-based systems that are part of the PowerEdge C line:

  • The PowerEdge C5125 microserver, which we announced yesterday
  • The PowerEdge C6105, optimized for performance per watt per dollar
  • The PowerEdge C6145, our HPC monster machine

Take a listen as John walks you through the products and their use cases.

Extra-credit reading

Pau for now…


DCS brings its experience to a wider Web Hosting audience — announcing PowerEdge C microservers

March 22, 2011

Over the past three years Dell’s Data Center Solutions group has been designing custom microservers for a select group of web hosters.  The first generation allowed one of France’s largest hosters, Online.net, to enter a new market and gain double-digit market share.  The second generation brought additional capabilities to the original design along with greater performance.

Today we are announcing that we are taking our microserver designs beyond our custom clients and are making these systems available to a wider audience through our PowerEdge C line of systems.  The PowerEdge C5125 and C5220 are ultra-dense 3U systems that pack up to twelve individual servers into one enclosure.  The C5125, which is AMD based, will be available next month and the Intel-based C5220 will be available in May.

The PowerEdge C5125 with one of the 12 server sleds pulled out.

So what the heck is a “microserver”?

Microservers are a new class of systems specifically designed for those use cases where multi-core CPU architectures and extensive virtualization are overkill.  What they provide instead is multiple low-cost dedicated servers, each featuring a single-socket CPU that is perfectly suited to running a single application.

The general idea behind these lighter weight systems is that they are right-sized for a particular set of applications such as serving up Web pages, streaming video and certain online gaming services.

DCS’s third generation of microservers

One of the most important attributes of the PowerEdge C5125 and C5220 is their density.  By packing 12 one-socket servers into a 3U form factor, these systems deliver four times the density of more conventional 1U servers.  This translates to a quarter of the floor space, cabling and racks, all of which means greater revenue per square foot for web hosters and data center operators.

These systems further save on power and cooling by leveraging shared infrastructure.  The server nodes in the chassis share mechanicals, high-efficiency fans and redundant power supplies all of which helps it save up to 75% in cooling costs compared to typical 1U servers.

One of the server sleds from the C5125.  This is a four 2.5-inch HDD version; there is also a two 3.5-inch HDD version.

So if power, cooling and revenue per square foot are things you are concerned with, or you are looking to provide your customers with dedicated hosting for lighter-weight applications, you just might find the PowerEdge C microserver systems worth a closer look :).

Extra-credit reading

Pau for now…


PowerEdge C powers OpenStack Install Fest

November 10, 2010

Yesterday morning I made the drive down to San Antonio for OpenStack’s second design summit (and the first open to the public).  If you’re not familiar with OpenStack, it’s an open source cloud platform founded on contributed code from Rackspace and NASA’s Nebula cloud.  The project was kicked off back in July at an inaugural design summit held in Austin.

The project has picked up quite a bit of momentum in its first four months.  Attending this week’s 4-day conference are close to 300 people, representing 90 companies, from 12 countries.  The event is broken into a business track and design track (where actual design decisions are being made and code is being written).

Powering the Install Fest

For the project, Dell has sent down a bunch of PowerEdge C servers, which have been set up upstairs on the 5th floor.  OpenStack compute has been installed on the two racks of servers and is up and running.  Tomorrow, coders will get access to these systems during the install fest, where each attendee will be given a virtual machine to use to test and learn about installing and deploying OpenStack.

I got Bret Piatt, who handles Technical Alliances for OpenStack, to take me on a quick tour of the set-up.  Check it out:

Featuring: Bret Piatt, PowerEdge C1100, C2100, C6100 and C6105

Extra-Credit reading:

Pau for now…


PowerEdge C410x — Whiteboard topology

August 5, 2010

In the last of my GPGPU/PowerEdge C410x trilogy I offer up a whiteboard session with the system’s architect, Joe Sekel.

Some of the topics Joe walks through:

  • How does having remote GPGPUs connected via cable back to a server compare in performance to having the GPGPUs embedded in the server?
  • The topology of the PCI Express x16 (16 lanes per link) plumbing: from the chipset in the host server through to the GPGPU.
  • The data transfer bandwidth that x16 Gen 2 gives you. 

Extra-credit reading:

Pau for now…


Deep dive tour(s) of the PowerEdge C410x

August 5, 2010

In my last entry I talked about the wild and wacky world of GPGPUs and provided an overview of the PowerEdge C410x expansion chassis that we announced today. For those of you who want to go deeper and see how to set up and install this 3U wonder you’ll want to take a look at the three videos below.

  1. Card installation: How to install/replace an NVIDIA Tesla M1060 GPU card in the PowerEdge C410x “taco.”
  2. Setting up the system: How to set up the PowerEdge C410x PCIe expansion chassis in a rack, power it up and pull out cards.  Also addresses port numbering.
  3. BMC card mapping: How to map the PCIe cards in the PowerEdge C410x via the BMC web interface.  Also covered are how to monitor power usage, fans and more.

Happy viewing!  (BTW, the C410x’s code name was “titanium” so when you hear Chris refer to it as that don’t be thrown)

Extra-credit reading:

Pau for now…


PowerEdge C1100 – Skinny & Dense

April 13, 2010

Here is the third in my series of four videos exploring the new Dell PowerEdge C server line.  Today’s feature, the PowerEdge C1100.

If you’re wondering about the funky game show-like setting, I shot this after hours on the day of our launch in the whisper suite.  Your guide, as before, is the incomparable Dell Solutions Architect, Rafael Zamora.

A few highlights

  • The C1100 is a high-memory, cluster-optimized compute node
  • Don’t let its slim pizza-box looks fool you; up front you can pack either four 3.5-inch drives or ten 2.5-inch drives.
  • For high-memory optimized compute you can get 18 DIMM sticks for 144GB of RAM.
  • Comes with your choice of either Intel’s Nehalem or Westmere processors.
  • Raf also gives a couple of examples of recent customers and how they’ve decided to configure their units.
  • The C1100 will also serve as the cloud management server for the upcoming Joyent solution and the Ubuntu Enterprise Cloud.

Tune in next week when Rafael will take us through the PowerEdge C2100.

Pau for now…


PowerEdge C6100 – HPC & Cloud machine

April 8, 2010

As a follow-on to last week’s PowerEdge C line overview, here is the first individual system overview: the C6100.  Click below and let Dell Solutions Architect Rafael Zamora guide you thru the design and features of this densely packed machine targeted at HPC and cloud workloads.

Some of the highlights:

  • The PowerEdge C6100 holds the equivalent of 4 systems, which have been packaged into “sleds,” each containing boards, RAM and microprocessors.
  • Up front you can put a ton o’ disk drives, either 24 x 2.5″ drives or 12 x 3.5″ drives.
  • Great for markets like HPC clustering and search engines where compute density is key.  (This is not intended for running general purpose apps like Exchange, SQL or Oracle).
  • It will serve as the compute node in the Ubuntu Enterprise Cloud solution from our partner Canonical.

Still to come, overviews of the C2100 and C1100.

Extra-Credit Reading:

Pau for now…


Dell at DockerCon — Config guides, developer laptops, plugins and more

June 29, 2016

Today you would have to be under an IT rock if you haven’t at least heard of containers.  Containers, which have recently been made easily usable by a wide audience, allow applications to flow in a uniform package from development, to test, to production.  Containers also allow applications to be moved between public and private clouds as well as bare metal environments.  All of this increases agility and reduces friction in the overall development to deployment cycle, increasing the speed that organizations can deliver services to customers.
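If you have never touched a container, the uniform-package idea is easiest to see by building one.  Here's a minimal sketch, assuming Docker is installed and using a made-up Python web app (app.py, listening on port 8080) as the payload:

$ cat > Dockerfile <<'EOF'
# everything the app needs travels with the image
FROM python:2.7
COPY app.py /app.py
CMD ["python", "/app.py"]
EOF
$ docker build -t myshop/web .            # "myshop/web" is a hypothetical image name
$ docker run -d -p 8080:8080 myshop/web   # run it, mapping the app's port to the host

The same image, unchanged, runs on a developer laptop, a test rig, a public or private cloud, or bare metal, which is exactly the development-to-production flow described above.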

The 800-pound gorilla in the space is Docker, which makes the most widely used container format and is building out additional offerings in the greater container ecosystem.

DockerCon

Last week in Seattle Docker held DockerCon 2016, its fourth conference promoting the general container ecosystem.  The event featured dozens of participating companies as well as a plethora of talks.  There was a ton of energy and the event even included a “full on kitty laser death match” on the main stage:

Laser cats

I was at the show, and while there I sat in on sessions, conducted a bunch of interviews (see below) and spent time working the Dell booth.

Dell Booth

At our booth we showed off four major offerings/projects in the Docker and container space (here’s a video I did giving a brief overview of what we were featuring).

We showed:

 

  • BlueData configuration guide:  BlueData’s platform provides customers with Big Data as a service, giving them the ability to leverage one pool of storage across multiple versions and distributions of big data tools.  The platform leverages Docker to deliver bare-metal performance with the flexibility of virtualization.  The guide details the set-up of BlueData’s Big-Data-as-a-Service (BDaaS) platform on Dell PowerEdge servers.
  • Developer laptops: In the booth we showed off our line of Ubuntu-based developer laptops (Project Sputnik). These Linux-based systems provide a native platform for Docker-based development and allows developers to push their container-based apps to the cloud.  On the second day we gave away a “Sputnik” laptop (see below for the crowd on hand for the drawing).
  • Flocker plugin: This plugin allows ClusterHQ’s Flocker to integrate with the Dell Storage SC Series, letting developer and operations teams use existing storage to create portable container-level storage for Docker (see the sketch after this list).
  • Docker Swarm plugin: This plugin, which is in the proof-of-concept phase, connects Docker Swarm with Dell’s next-gen networking operating system, OS10.  The plugin automates the configuration of VLANs and routers for Docker’s Macvlan/Ipvlan driver, orchestrated using Docker Swarm.
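To give a feel for what the Flocker plugin enables, here is roughly what using it looks like from the Docker CLI.  This is a hedged sketch: the volume and image names are invented, and the exact driver name and flags depend on the plugin and Docker version you install.

$ docker volume create --driver flocker --name mydata   # hypothetical volume backed by SC Series storage
$ docker run -d -v mydata:/var/lib/data myorg/app       # the data lives outside the host...

Because the volume is managed by Flocker rather than the local host, the same named volume can be re-attached to a replacement container on another node.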

All in all a great show, helping to raise Dell’s presence in the space and providing us with greater insight into customer needs and the ecosystem evolution.

Waiting in front of the Dell booth for the Sputnik drawing

Video interviews:

 

Extra-credit reading

Pau for now…


DevOps, Microservices and Containers – a High Level Overview

February 8, 2016

A little while ago I put together a short presentation intended to provide a high-level overview of the wild and wacky world of DevOps, Microservices and Containers.  I present this deck both internally and externally to give folks an overview of what is happening in IT today.

For your reference, I have added the speaker notes after the deck.  I’m sure everyone has a different take on the concepts and explanations here.

Feel free to add your thoughts.

DevOps, Microservices and containers – a high level overview from Barton George

 

Speaker notes

1) Cover

2) Digital Players

  • Digital pioneers have reset customer expectations and disrupted industries, resulting in the need for organizations to digitally transform in order to be competitive and ultimately survive (witness Kodak, Borders, Blockbuster, the taxi industry etc).  Additionally, there is no time to waste: five years after the financial crisis, companies that have been in cost-cutting mode are all waking up at the same time, realizing that they have lackluster product portfolios and need to innovate.

3) Digital Business = Software (and it has a shelf life)

  • The key enabler for digital businesses is software, and that software has a shelf-life.  To be competitive, that software needs to reach customers as soon as possible.  To help drive this speed and customer focus, the Agile Manifesto was created in 2001.  The manifesto was a reaction to the long development cycles driven by the “waterfall” method of software development.  Agile turned the focus to the customer and quick iterative turns of development.

4) But that’s only “half” of the equation

  • While agile has sped up software development and made it more responsive to customer needs, unless it’s paired with greater cooperation with operations, the overall delivery of software to customers remains just as slow.
  • In the past, developers have kept their distance from operations.  It is not surprising that these groups have stood apart in light of how vastly different their goals and objectives have been.
    • Developers are goaled to drive innovation and reinvention in order to constantly improve on user experience and deliver new features to stay one step ahead of the competition.
    • Operations on the other hand is focused on providing rock solid stability, never letting the site go down, while at the same time being able to scale at a moment’s notice.

5) Dev + Ops: A Methodology

  • And this is where DevOps comes in.  DevOps is a methodology intended to get developers and operations working together to decrease friction and increase velocity.  You want to be able to get your “product” to customers as quickly as you can and shorten that time frame as much as possible; you also want to be able to continuously improve your product via feedback.
  • The gap between developers and operations is often referred to as “the wall of confusion,” where code that often isn’t designed for production is lobbed over the wall.  Besides the silos, the tools on each side do not fit together and there isn’t a common “tool chain.”  When the site goes down, finger pointing results: ops accuses devs of writing bad code and devs accuse ops of not implementing it correctly.  This friction is obviously not productive in a world where “slow is the new down.”
  • By tearing down the wall, the former delineation of responsibilities blurs:
    • Developers are asked to put “skin in the game” and for example carry a pager to be notified when an application goes down.
    • Conversely operations will need to learn some basic coding.
  • In this new world order, developers and ops folks who understand and can work with “the other side” are in high demand.

6) DevOps: What it’s all about

  • Double-clicking on DevOps, here is how it flows from tenets to requirements and then benefits.  I should say that there are a lot of different interpretations of which components make up the key characteristics of DevOps, but in the true spirit of the methodology, you need to move forward with “good enough” (“always ready, never done”).  One factor that is widely agreed upon is that culture is the most important characteristic of DevOps.  Without it, you can have all the great processes and tools you want but they will languish.  All of this is underpinned by a foundation of cloud, open source software (of which the majority of the tools and platforms are composed) as well as microservices – which I will expand on in a second.

7 & 8) Tool chain

  • Now while I said tools are not as important as culture, the concept of a tool chain provides a good illustration of the connected nature of DevOps.  DevOps demands a linked tool chain of technologies to facilitate collaborative change.  Interchangeability is key to the success of the DevOps toolchain (components are loosely coupled via APIs).  Open source tool adoption and appetite remain strong; however, large-enterprise clients prefer commercially supported open source distributions.  You will see tool chains depicted many different ways with different players and buckets, but this example gives a decent overview of the high-level linkage of processes/components.  There are many different tools out in the market that fit into these buckets, but I have picked just a couple for each to act as illustrations.
  • It all starts with new code
  • Continuous integration (CI) is the practice in software engineering of merging all developer working copies to a shared mainline several times a day.  Changes are immediately tested and reported on when they are added to the larger code base.
  • Version control: these changes to the code are tracked in a central repository – “one source of truth.”
  • Code deployment: installs the code across hundreds or thousands of servers
  • Measurement and monitoring: continuously measures and monitors the environment to identify bottlenecks.  This information is then fed back to the front of the chain to drive improvements.
  • Across this chain the code travels in the form of microservices that are conveyed in containers (a minimal sketch of such a chain follows below).
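To make the chain concrete, here is an illustrative sketch of the kind of script a CI server might run on every commit.  The repository URL, test script and registry are hypothetical stand-ins, not a prescription of any particular tool:

$ git clone https://github.com/example/webapp.git && cd webapp   # new code lands in version control
$ ./run_tests.sh                                                 # CI: every merge is tested immediately
$ docker build -t registry.example.com/webapp:$(git rev-parse --short HEAD) .  # package the service as a container
$ docker push registry.example.com/webapp:$(git rev-parse --short HEAD)       # hand the artifact to code deployment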

9) Microservices: essential to iterate, scale and speed

  • Let’s take a closer look at microservices, which although they support DevOps, have developed independently over the last few years as a grassroots, developer-driven effort.  Microservices is the concept of decomposing software applications into loosely coupled and recombinable bite-sized processes, e.g. breaking a “store” component into order processing, fulfillment, and tracking services (sketched below).  This decomposition greatly increases the ability to iterate and scale, and it increases speed, thereby enabling continuous delivery.  Microservices and cloud go hand-in-hand, where autoscaling can help ensure no service becomes a bottleneck by adding horsepower where needed.  Docker and microservices are a perfect fit.
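As an illustration, here is how that hypothetical store might run as three independently deployable services.  The image and container names are made up for the example; the point is that each piece ships, scales and fails on its own:

$ docker run -d --name orders      store/order-processing   # each service is its own small process...
$ docker run -d --name fulfillment store/fulfillment        # ...that can be updated without touching the others
$ docker run -d --name tracking    store/tracking
$ docker ps    # three bite-sized services instead of one monolith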

10) Enter the modern container:

  • As I mentioned previously, containers fit well as the conduit to deliver microservices.  Containers have been around for a decade in the form of Solaris Zones and BSD jails, as well as at Google, which uses them to run its infrastructure (creating and blowing away 2 billion containers a week).  But it has only been in the last year or two that they have come to the fore, thanks to Docker, who evolved Linux containers in the context of modern applications and made containers easy to use for the general dev/ops person (Docker expertise is currently the second most sought-after skill in the tech world).
  • Containers serve perfectly as vehicles to convey microservices and applications across the tool chain from development through testing, staging and production, much the same way goods in shipping containers can be packaged in a warehouse, sent on a truck, loaded onto a ship and then put on a truck waiting on the other side.  Additionally, they can be used on public and private clouds as well as bare-metal servers.

11) Containers vs VMs.

  • Architecturally, VMs and containers differ in that VMs sit on top of a hypervisor and each VM contains both a guest OS and an app.  A container, on the other hand, packages an app or service by itself and sits directly on top of the OS.  Given the maturity of VMs, they are more secure than containers, but they also take much longer to spin up.  Containers don’t currently have the security of a VM, but they spin up in milliseconds vs. seconds or minutes (see the one-liner after this list).  To address security concerns, in most cases today organizations are running containers within virtual machines.
  • As with all new technology, containers are still rough around the edges, and if you aren’t an early-adopter kind of organization, you may want to play with/pilot them but not implement them on a large scale just yet.
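If you want to see the start-up speed for yourself, and assuming Docker and the small alpine image are already on your machine, a one-liner makes the point:

$ time docker run --rm alpine true   # launches, runs and removes a full container, typically in well under a second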

12) The landscape: 

  • At this point the container landscape is an ever changing field populated by small and large players.  This space is dominated by open source offerings.
  • Container engines: At the center of gravity of the landscape are the container engines themselves, made up of the 800-pound gorilla, Docker, as well as Rocket, which was created by CoreOS in response to what CoreOS felt was a lack of security in the Docker container.  This summer the Open Container Initiative was kicked off to bring the two sides together and create a common spec.
  • MicroOSs: Sitting beneath the containers are the micro OSs, basically the size of 25 pictures on your cell phone (100 MB), or 1/20th the size of a typical OS.  What makes these so small is that they have been stripped down to the bare necessities – no fax software included, for example.  These began with CoreOS, and now there are offerings from Red Hat (Atomic), Microsoft (Nano), VMware (Photon) and Rancher, among others (including Intel’s Clear Linux and Ubuntu’s Snappy).
  • Container orchestration: Just as you can have VM or server sprawl, you can have container sprawl, and you need to be able to manage all those containers.  The offering that sits at the center is Google’s Kubernetes, built on Google’s own container management platform, which can be combined with the other orchestration offerings.  The others include Rancher, Docker Swarm, CoreOS, Mesosphere (based on the Apache Mesos project) and Flocker, a container data volume manager.
  • Clouds with Docker support: Most clouds are now building in Docker support, from OpenStack to Joyent’s Triton, Google Container Engine, Amazon EC2 and Microsoft Azure.

13) The DevOps equine continuum

  • Now if we zoom back out and take a look at the implementation of DevOps, it can be illustrated by the analogy of an “equine continuum,” a model for classifying companies into three buckets according to their position on the DevOps journey.
  • In the upper right you have the “unicorns” (not the billion-dollar-valued unicorns of the valley) such as AWS, Google and Uber, who have employed the DevOps methodology since their beginnings or soon thereafter.  These tend to be cloud-based companies.
  • Next on the continuum are the “race horses,” oftentimes banks like Goldman Sachs or JP Morgan Chase, which are starting to implement DevOps to increase their agility and gain a competitive edge.
  • In the lower left are the “work horses,” who have just started looking into how they can improve their competitiveness via digital transformation and what role DevOps might play.

14) Where do I start

  • If you fit into the workhorse classification and you’re looking to get started, we are not advocating that you dump all your existing infrastructure and implement DevOps everywhere; for one thing, you would have a mutiny on your hands.  The best place to focus is on the fast-changing, customer-facing applications and services on the front end.  You would want to leave stable, transaction-oriented systems on the back end as they are.

15) What Dell is doing in this space

Offerings

  • Professional services: Dell’s professional services organization has an array of offerings to enable organizations to implement DevOps practices:
    • Agile/DevOps Advisory services; Agile Delivery Services
    • CI/CD consulting and implementation services
    • DevOps Migration/managed services
    • DevOps-focused test automation and performance testing services
  • OpenShift: Working with our partner Red Hat, Dell is making the OpenShift Platform as a Service available to our customers.
  • Dell XPS 13 developer edition:  This is an Ubuntu Linux-based developer laptop that allows developers to create applications/microservices within Docker containers on their laptops and then deploy those containers directly to the cloud.
  • Open Networking OS10:  This switch OS works with Kubernetes, which coordinates the hardware pieces.  OS10 programs the hardware as containers come and go.

Projects

  • Flocker plugin: Code that allows ClusterHQ’s Flocker to integrate with the Dell Storage SC Series has been made available on GitHub.  What this does is allow developer and operations teams to use existing storage to create portable container-level storage for Docker.  Rather than coming from an internal planning process or committee, the idea for a Flocker plugin came from Dell storage coder Sean McGinnis, who was looking for ways to make Dell Storage an infrastructure component in an open source environment.
  • Containerizing an old-school application: There are also several projects going on within the company to develop a greater understanding of containers and their advantages.  About a year ago, Senior Linux engineer Jose De la Rosa had heard so much Docker and container mania that he thought he’d find out what the fuss was all about.  Jose started looking around for an app within Dell that he could containerize and came across Dell’s OpenManage Server Administrator (OMSA).  In case you’re wondering, OMSA is an in-house application used to manage and monitor Dell’s PowerEdge servers.  Rather than being a microservice-based application, OMSA is an old-school legacy app.  Jose succeeded in containerizing the application and learned quite a bit in the process.
  • CTO Lab: Dell’s CTO team has set up Joyent’s elastic container infrastructure, Triton, in our lab running Docker.  The idea is to learn from this platform and then work with the Active System Manager team to decompose ASM into microservices and run it on the Triton platform.

Industry Consortia and Internal use of DevOps

  • Open Container Initiative: Dell is a member of the Open Container Initiative, which is hosted by the Linux Foundation and is chartered to create common specifications for containers to allow for interoperability and increased security.
  • Dell IT:  Within Dell itself, DevOps is being used to support Dell.com and internal IT.  Dell’s Active System Manager employs the DevOps methodology in its product development process.

Extra-credit reading

Pau for now…

 


Containerizing an old school Dell application

November 24, 2015

About a year ago, Senior Linux engineer Jose De la Rosa had heard so much Docker and container mania that he thought he’d find out what the fuss was all about.  Jose started looking around for an app within Dell that he could containerize and came across Dell’s OpenManage Server Administrator (OMSA).  In case you’re wondering, OMSA is an in-house application used to manage and monitor Dell’s PowerEdge servers.  Rather than being a microservice-based application, OMSA is an old-school legacy app.

To hear how Jose tackled the task, why, and what he learned, check out the following video (also take a look at the deck below that he presented at the Austin Docker meet up).

Here’s the deck Jose presented at the Austin Docker Meetup back in September.

For more info about what Jose and the Dell Linux engineering team are doing in this space, check out linux.dell.com/docker

Extra-credit reading

Pau for now…


Project Sputnik now comes with 3 month free trial on Joyent cloud

July 23, 2013

As of today we are making available three months of free use of the Joyent Cloud to owners of the XPS 13 developer edition.

The idea behind Project Sputnik has always been to provide a client-to-cloud platform for developers, and today we are offering access to the Joyent Cloud to complete the solution.

What you get and how you get it

With the trial you get either two g3-standard-0.625-kvm instances running Ubuntu for 3 months or one g3-standard-1.75-kvm instance running Ubuntu for 3 months.

We will be setting up a landing page in the next day or two to provide easy access to the Joyent Cloud, but for those who want to get started right away, you can simply follow the “How do I get started” instructions below.  We are kicking this off with 500 free accounts, first come, first served.

Profile Tool and Cloud Launcher

Also available now are the Project Sputnik Cloud Launcher and profile tool.   The profile tool is designed to provide access to a library of community-created profiles, and to configure and quickly set up development environments and tool chains.  Today we have three sample profiles available: Emacs, Ruby and JavaScript.  Documentation on how to create a profile will be coming soon so stay tuned.

The cloud launcher creates a seamless link from the client to the cloud to facilitate ongoing development of application environments.  There is a Juju version of the launcher that currently comes with Sputnik, and today we are announcing a version that Opscode has developed which uses spiceweasel as its underlying library.  You can check out a demo of it here.  We are also working to connect the Chef version of the cloud launcher to the Joyent trial; more to come on that soon.

But wait, there’s more

In related Dell Open Source news we’ve got a whole lot of momentum going on.  You can check out all the news in today’s press release but here are the highlights:

Dell OpenStack-Powered Cloud Solution

Now available with: OpenStack Grizzly support, support for Dell Multi-Cloud Manager (formerly Enstratius), and extended reference architecture support, including the Dell PowerEdge C8000

Dell Cloud Transformation Services

The new consulting services provide assistance with assessing, building, operating and running cloud environments, and enable and accelerate enterprise OpenStack adoption.

Dell Cloudera Hadoop Solution

Now supports the newest version of Cloudera Enterprise. Updates allow customers to perform real-time SQL interactive queries and Hadoop-based batch processing, simplifying the process of querying data in Hadoop environments.

Intel Distribution for Apache Hadoop

Dell has tested and certified the Intel Distribution for Apache Hadoop on Dell PowerEdge servers.  Additionally, Dell Solution Centers validated the reference architecture and developed a technical whitepaper that simplifies the deployment of the Intel Distribution on the Dell platform.

Crowbar

Dell has released RAID and BIOS configuration capabilities to the Crowbar open source community.  SUSE has integrated Crowbar functionality as part of SUSE Cloud to make OpenStack-based private cloud deployments seamless.

Dasein open source project

Dell confirmed its commitment to further develop and support the Dasein open source project, as pioneered by recently acquired Enstratius.

Phew, a whole lot of shaking going on! 🙂

===========================================

How do I get started with the Joyent Cloud trial

Step 1:

Open a terminal window (press Ctrl + Alt + T).

1.1. $ wget https://us-east.manta.joyent.com/jens/public/sputnik.tar

1.2. $ sudo tar -C / -xvf sputnik.tar

Step 2:

Find and run “Install Joyent Public Cloud” in the launcher.

Look for the big Joyent LOGO.

Step 3:

Sign up for a free trial account on the Joyent Public Cloud.

Open Firefox and go to http://www.joyent.com

Step 4:

Back in the terminal window, type the following command:

$ /usr/share/applications/joyentInstaller.sh

Step 5:

5.1. $ wget -O key-generator.sh https://us-east.manta.joyent.com/jens/public/key-generator.sh

5.2. $ chmod 755 key-generator.sh

5.3. $ ./key-generator.sh (enter your username and password for the Joyent Public Cloud)

To source your new environment variables, run the following command:

5.4. $ source ~/.bash_profile

Step 6:

6.1   To confirm that the Joyent cloud SDK is installed:   $ sdc-listdatacenters

6.2   To confirm that the Joyent Manta SDK is installed:   $ mls /manta/public/sdks

How do I provision a new instance?

Sign in to the Joyent portal and click the button in the upper right portion of the screen.  Once you’re there, the tool will walk you through the choice of datacenters, images, and instance types and sizes.  You’ll have a chance to review the hourly and monthly cost of the instance and provide a memorable name for it.  Once you’ve decided on the type of instance that fits your project, click the button and the system will ask you to confirm your request.  The provisioning will start immediately but may take a few seconds to complete.  Clicking on the new named instance will show its assigned public IP address when provisioning is complete.  You may SSH into the instance with ssh -l root <ip address>.
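If you prefer to stay in the terminal, the same SDK from Step 6 can provision an instance.  A rough sketch: the command names and flags below are from the node-smartdc tooling as best I recall and may differ by version, and the image/package IDs and machine name are placeholders, so check the SDK documentation:

$ sdc-listimages        # find an image (e.g. an Ubuntu image) and note its id
$ sdc-listpackages      # find a package (instance size) and note its id
$ sdc-createmachine --image <image-id> --package <package-id> --name mydevbox   # hypothetical name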

How do I stop, resize or reboot instances?

Shutting down, resizing or rebooting your instance can all be executed through the customer portal of Joyent. In addition, we’ve provided a script you can use to perform these steps within your instance.

How do I install software on my instance?

To install or update software on your instance, you’ll need to run commands as either the administrative or root user of your instance. For tips on how to run commands and installation processes, check out the pages on how to install software on your instance.

How do I secure my instance?

Joyent takes cloud security very seriously, and we have refined many processes to reduce risk and preserve the integrity of the data managed in your instance.  For a full list of security checks and processes, please visit the security center in our documentation.

How do you manage your instance resource usage?

One of the strengths of Joyent is the ability to have full and detailed transparency into every aspect of your infrastructure and application.  You can use Cloud Analytics to get real-time, diagnostic heatmaps of system behavior.  In addition, the tips here can give you better control over optimizing the performance of your instance.

How do you manage a database on your instance?

Instances on Joyent can be pre-configured to run a wide range of databases and database services.  Joyent supports MySQL, Percona, Riak and MongoDB, as well as integration with database services from companies like Cloudant or MongoLab.  For big data projects, Joyent is an ideal platform for configuring and running a Hadoop cluster.  Check out these guides on how to set up a database or configure your Hadoop cluster.

How do I analyze the performance of my instances?

Joyent is the best cloud in the industry for monitoring the entire health of your stack. Using Cloud Analytics, you have the ability to examine, in real-time, the performance characteristics of every level of your application, and network. If you just want to perform server level monitoring, we’ve built integration with leading monitoring tools from New Relic and Nodefly as well.

Where can I learn more?

Our documentation center and engineering blogs are terrific resources for you to learn more about Joyent and participate in the Joyent community. The Dev Center resources we’ve built for you will hopefully get you started on a path to success with Joyent. For additional help or training, please visit:

Pau for now…


Game on! Ubuntu comes to Alienware

April 5, 2013

Ubuntu has been available on Dell business laptops for quite a while, including the recently introduced XPS 13 developer edition.  A few weeks ago we announced that we were expanding our Ubuntu certification beyond our cloud servers to include Dell’s 12G servers.

Today we are announcing that Ubuntu is coming to another member of the Dell family, the Alienware X51 gaming desktop.

Alienware+Ubuntu

You can easily install Steam onto the X51, and although there aren’t tons of games supported yet, the list is continuing to grow and now includes classics such as Team Fortress 2 and Serious Sam.

To learn more and get a first-person account of using Ubuntu on the X51 check out the Direct2Dell blog post.

Update: corrected Ubuntu logo on above screenshot

Extra-credit reading

Pau for now…


On beyond North America — Dell’s OpenStack solution now available in Europe and Asia

March 21, 2012

Last summer at OSCON, Dell announced the availability of our OpenStack solution in the US and Canada.  Today at World Hosting Days in Rust, Germany, we are announcing that our OpenStack-Powered Cloud Solution is available in Europe and Asia.

If you’re not familiar with it, OpenStack is an open source cloud project built on a foundation of code initially donated by NASA and Rackspace.  The project kicked off a little over a year and a half ago here in Austin and it has gained amazing traction since then.

Dell’s offering

Dell’s OpenStack cloud offering is an open source, on-premise cloud solution based on the OpenStack platform running on Ubuntu.  It’s composed of:

  • The OpenStack cloud operating system
  • PowerEdge C servers: C6100, C6105, C2100 and, coming soon, Dell’s new C6220 and R720
  • The Crowbar deployment and management software framework – developed and coded by Dell 🙂
  • Dell’s OpenStack reference architecture
  • Dell Services

Crowbar software framework

To give a little more background on the Crowbar software framework: it’s an open source project developed initially at Dell, and you can grab it off GitHub.  The framework, which is under the Apache 2.0 license, manages the OpenStack deployment from the initial server boot to the configuration of the primary OpenStack components, allowing users to complete bare-metal deployment of multi-node OpenStack clouds in hours, as opposed to days.

Once the initial deployment is complete, you can use Crowbar to maintain, expand, and architect the complete solution, including BIOS configuration, network discovery, status monitoring, performance data gathering, and alerting.  Beyond Dell, companies like VMware, Dreamhost and Zenoss have built “barclamps” that utilize Crowbar’s modular design.  Additionally, customers who buy the Dell OpenStack-Powered Cloud Solution get training, deployment, and support on Crowbar.
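If you just want to kick the tires on the code itself, grabbing it is a one-liner.  The repository path below is illustrative, so check the project’s GitHub site for the current location:

$ git clone https://github.com/dellcloudedge/crowbar.git   # illustrative path; see the project's GitHub site
$ cd crowbar && ls                                         # browse the barclamps, build scripts and docs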

So as of today, customers in the UK, Germany and China can purchase the Dell OpenStack-Powered Cloud Solution.  As customer demand grows in other regions we will be adding more countries, so stay tuned.  If the first 18 months of the project are any indication of the pace to come, we are all in for a lot more excitement.

For more info, email: OpenStack@Dell.com

Extra-credit reading

Pau for now…


Now available: Dell | Cloudera solution for Apache Hadoop

September 12, 2011

A few weeks ago we announced that Dell, with a little help from Cloudera, was delivering a complete Apache Hadoop solution.  Well, as of last week it’s now officially available!

As a refresher:

The solution is comprised of Cloudera’s distribution of Hadoop, running on optimized Dell PowerEdge C2100 servers with the Dell PowerConnect 6248 switch, delivered with joint service and support from both companies.  You can buy it either pre-integrated and good-to-go, or you can take the DIY route and set it up yourself with the help of the included reference architecture and deployment guide.

Learn more at the Dell | Cloudera page.

Extra-credit reading

Pau for now…


Introducing the Dell | Cloudera solution for Apache Hadoop — Harnessing the power of big data

August 4, 2011

Data continues to grow at an exponential rate, and no place is this more obvious than in the Web space.  Not only is the amount exploding but so is the form the data takes, whether that’s transactional, documents, IT/OT, images, audio, text, video etc.  Additionally, much of this new data is unstructured or semi-structured, which traditional relational databases were not built to deal with.

Enter Hadoop, an Apache open source project which, when combined with MapReduce, allows the analysis of entire data sets, rather than sample sizes, across structured and unstructured data types (for a feel of the MapReduce model itself, see the sketch after the list below).  Hadoop lets you chomp thru mountains of data faster and get to insights that drive business advantage quicker.  It can provide near “real-time” data analytics for click-stream data, location data, logs, rich data, marketing analytics, image processing, social media association, text processing etc.  More specifically, Hadoop is particularly suited for applications such as:

  • Search quality — search attempts vs. structured data analysis; pattern recognition
  • Recommendation engine — batch processing; filtering and prediction (i.e. using information to predict what similar users like)
  • Ad targeting — batch processing; linear scalability
  • Threat analysis for spam fighting and detecting click fraud — batch processing of huge datasets; pattern recognition
  • Data “sandbox” — “dump” all data in Hadoop; batch processing (i.e. analysis, filtering, aggregations etc); pattern recognition
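To make the MapReduce model concrete, the classic word-count example can be sketched with nothing but shell tools.  The pipeline below mimics the map, shuffle and reduce phases that Hadoop runs in parallel across a whole cluster; the log file name is made up for the example:

$ cat weblog.txt | tr -s ' ' '\n' |   # "map": emit one word per line
    sort |                            # "shuffle": group identical keys together
    uniq -c | sort -rn | head         # "reduce": count each key and show the top hits

The difference, of course, is that Hadoop distributes each phase across many machines and mountains of data rather than one file on one box.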

The Dell | Cloudera solution

Although Hadoop is a very powerful tool, it can be a bit daunting to implement and use.  This fact wasn’t lost on the founders of Cloudera, who set up the company to make Hadoop easier to use by packaging it and offering support.  Dell has joined with this Hadoop pioneer to provide the industry’s first complete Hadoop solution (aptly named “the Dell | Cloudera solution for Apache Hadoop”).

The solution is comprised of Cloudera’s distribution of Hadoop, running on optimized Dell PowerEdge C2100 servers with the Dell PowerConnect 6248 switch, delivered with joint service and support.  Dell offers two flavors of this big data solution: one with Cloudera’s free-to-download distribution of Hadoop, and one with Cloudera’s paid enterprise version of Hadoop.

It comes with its own “crowbar” and DIY option

The Dell | Cloudera solution for Apache Hadoop also comes with Crowbar, the recently open-sourced Dell-developed software, which provides the necessary tools and automation to manage the complete lifecycle of Hadoop environments.  Crowbar manages the Hadoop deployment from the initial server boot to the configuration of the main Hadoop components, allowing users to complete bare-metal deployment of multi-node Hadoop environments in a matter of hours, as opposed to days.  Once the initial deployment is complete, Crowbar can be used to maintain, expand, and architect a complete data analytics solution, including BIOS configuration, network discovery, status monitoring, performance data gathering, and alerting.

The solution also comes with a reference architecture and deployment guide, so you can assemble it yourself, or Dell can build and deploy the solution for you, including rack and stack, delivery and implementation.

Some of the coverage (added Aug 12)

Extra-credit reading

 

Pau for now…


Dell announces availability of OpenStack solution; Open sources “Crowbar” software framework

July 26, 2011

Today at OSCON we are announcing the availability of the Dell OpenStack Cloud Solution along with the open sourcing of the code behind our Crowbar software framework.

The Solution

Dell has been a part of the OpenStack community since day one, a little over a year ago, and today’s news represents the first available cloud solution based on the OpenStack platform.  This Infrastructure-as-a-Service solution includes a reference architecture based on Dell PowerEdge C servers, OpenStack open source software, the Dell-developed Crowbar software and services from Dell and Rackspace Cloud Builders.

Crowbar, keeping things short and sweet

Bringing up a cloud is no mean feat, so a couple of our guys began working on a software framework that could be used to quickly (typically before coffee break!) bring up a multi-node OpenStack cloud on bare metal.  That framework became Crowbar.  What Crowbar does is manage the OpenStack deployment from the initial server boot to the configuration of the primary OpenStack components, allowing users to complete bare-metal deployment of multi-node OpenStack clouds in a matter of hours (or even minutes) instead of days.

Once the initial deployment is complete, Crowbar can be used to maintain, expand, and architect the complete solution, including BIOS configuration, network discovery, status monitoring, performance data gathering, and alerting.

Code to the Community

As mentioned above, today Dell has released Crowbar to the community as open source code (you can get access to it at the project’s GitHub site).  The idea is to allow users to build functionality to address their specific system needs.  Additionally, we are working with the community to submit Crowbar as a core project in the OpenStack initiative.

Included in the Crowbar code contribution are the barclamp list, the UI and remote APIs, automated testing scripts, build scripts, switch discovery and the open source Chef server.  We are currently working with our legal team to determine how to release the BIOS and RAID pieces, which leverage third-party components.  In the meantime, since that software is free (as in beer), although Dell cannot distribute it, users can go directly to the vendors and download the components for free to get that functionality.

More Crowbar detail

For those who want some more detail, here are some bullets I’ve grabbed from Rob “Mr. Crowbar” Hirschfeld’s blog:

Important notes:

  • Crowbar uses Chef as its database and relies on cookbooks for node deployments
  • Crowbar has a modular architecture so individual components can be removed, extended, and added. These components are known individually as “barclamps.”
  • Each barclamp has its own Chef configuration, UI subcomponent, deployment configuration, and documentation.

On the roadmap:

  • Hadoop support
  • Additional operating system support
  • Barclamp version repository
  • Network configuration
  • We’d like suggestions!  Please comment on Rob’s blog!

Extra-credit reading

Pau for now…


Intel version of Dell’s third gen Microserver now available

July 19, 2011

Over the past three years Dell’s Data Center Solutions group has been designing custom microservers for a select group of web hosters.  The first generation allowed one of France’s largest hosters, Online.net, to enter a new market and gain double-digit market share.  The second generation brought additional capabilities to the original design along with greater performance.

A few months ago we announced that we were taking our microserver designs beyond our custom clients and making these systems available to a wider audience.  Last month the AMD-based PowerEdge C5125 microserver became available and yesterday the Intel-based PowerEdge C5220 microserver made its debut.   Both are ultra-dense 3U systems that pack up to twelve individual servers into one enclosure.

To get a great overview of both the 12-sled and 8-sled versions of the new C5220 system, let product manager Deania Davidson take you on a quick tour:

Target use-cases and environments

  • Hosting applications such as dedicated, virtualized, shared, static content, and cloud hosting
  • Web 2.0 applications such as front-end web servers
  • Power-, space-, weight- and performance-constrained data center environments such as co-los and large public organizations such as universities and government agencies

Extra-credit reading

Pau for now…