App Think Tank: Cloud vs. hyperscale

May 7, 2014

This is the final video clip from the Dell Services Application think tank held earlier this year.  Today’s clip features the always enlightening and entertaining Jimmy Pike.  Jimmy, who is a Senior Fellow at Dell and was once called the Willy Wonka of servers, was one of the 10 panelists at the Think Tank where we discussed the challenges of the new app-centric world.

In this clip, Jimmy talks about the fundamental differences between “purpose-built hyperscale” and the cloud environments that most organizations use.

As Jimmy points out, when moving to the cloud it is important to first understand your business requirements and what your SLAs need to be.

If you’re interested in hearing what else Jimmy has to say, check out this other clip from the Think Tank: “The persistently, ubiquitously connected to the network era.”

The Think Tank, Sessions one and two

Extra-credit reading (previous videos)

Pau for now…


Dell and Sputnik go to OSCON

July 18, 2013

Next week, Michael Cote, a whole bunch of other Dell folks and I will be heading out to Portland for the 15th annual OSCON-ana-polooza.  We will have two talks that you might want to check out:

Cote and I will be giving the first, and the second will be led by Joseph George and James Urquhart.

Sputnik Shirt

And speaking of Project Sputnik, we will be giving away three of our XPS 13 developer editions: one as a door prize at the OpenStack birthday party, one in a drawing at our booth and one at James and Joseph’s talk listed above.

We will also have a limited number of the shirts shown to the right, so stop by the booth.

But wait, there’s more….

To learn firsthand about Dell’s open source solutions, be sure to swing by booth #719, where we will have experts on hand to talk to you about our wide array of solutions:

  • OpenStack cloud solutions
  • Hadoop big data solutions
  • Crowbar
  • Project Sputnik (the client to cloud developer platform)
  • Dell Multi-Cloud Manager (the platform formerly known as “Enstratius”)
  • Hyperscale computing systems

Hope to see you there.

Pau for now…


Introducing the Webilicious PowerEdge C8000

September 19, 2012

Today Dell is announcing our new PowerEdge C8000 shared infrastructure chassis, which allows you to mix and match compute, GPU/coprocessor and storage sleds within the same enclosure.  This gives Web companies one common building block that can support the front-, mid- and back-end tiers that make up their architecture.

To give you a better feel for the C8000 check out the three videos below.

  1. Why — Product walk thru:  Armando Acosta, product manager for the C8000, takes you through the system and explains how this chassis and the accompanying sleds better serve our Web customers.
  2. Evolving — How we got here:  Drew Schulke, marketing director for Dell Data Center Solutions, explains the evolution of our shared infrastructure systems and what led us to develop the C8000.
  3. Super Computing — Customer example:  Dr. Dan Stanzione, deputy director at the Texas Advanced Computing Center, talks about the Stampede supercomputer and the role the C8000 plays.

Extra Credit reading

  • Case Study: The Texas Advanced Computing Center
  • Press Release:  Dell Unveils First Shared Infrastructure Solution to Provide Hyperscale Customers with New Modular Computational and Storage Capabilities
  • Web page: PowerEdge C8000 — Optimize data center space and performance

Pau for now…


All the best ideas begin on a cocktail napkin — DCS turns 5

April 11, 2012

A little over a week ago, Dell’s Data Center Solutions (DCS) group marked its fifth birthday.  As Timothy Prickett Morgan explains in his article subtitled “Five years old, and growing like a weed”:

DCS was founded originally to chase the world’s top 20 hyperscale data center operators, and creates stripped-down, super-dense, and energy-efficient machines that can mean the difference between a profit and a loss for those data center operators.

This team, which now represents a business of more than $1 billion and has expanded beyond just custom systems to include standard systems built for the “next 1000,” all started on a simple napkin.

The origin of DCS: Ty’s Sonic sketch, November 2, 2006

From napkin to “Frankenserver,” to today

Ty Schmitt, who was one of the original team members and is now the executive director of Dell’s modular infrastructure team within DCS, explains:

This was a sketch I made over drinks with Jimmy Pike late one night after visiting a big customer on the west coast.  We were working on a concept for a 1U system for them based on their requirements.  As you can see by the date (Nov 2006) it was actually before DCS became official … we were a skunk works team called “Sonic” consisting of a handful of people.  We wanted to take an existing chassis and overhaul it to fit 4 HDs, a specific MB, and a SATA controller.  When we got back to Austin, I modified the chassis in the RR5 machine shop (took parts from several different systems and attached them together) and Jimmy outfitted it with electronics, tested it, and it was sent to the customer as a sample unit.

This first proto was described by the customer as “Frankenserver” and was the beginning of the relationship we have with one of our biggest customers.

A little over five years later, Dell’s DCS team has gone from Frankenserver to commanding 45.2 percent revenue share in a market that IDC estimates at $458 million in sales last quarter.  Pretty cool.

Extra-credit reading:

Pau for now…


IDC starts tracking the hyperscale server market

March 26, 2012

In a recent post that highlighted the demise of the midrange  server market, Timothy Prickett Morgan talked about the new server classification that IDC has just started tracking, “Density-optimized”:

These are minimalist server designs that resemble blades in that they have skinny form factors but they take out all the extra stuff that hyperscale Web companies like Google and Amazon don’t want in their infrastructure machines because they have resiliency and scale built into their software stack and have redundant hardware and data throughout their clusters….These density-optimized machines usually put four server nodes in a 2U rack chassis or sometimes up to a dozen nodes in a 4U chassis and have processors, memory, a few disks, and some network ports and nothing else per node.

Source: IDC, Q3 2011 Worldwide Quarterly Server Tracker

Here are the stats that Prickett Morgan calls out (I particularly like the last bullet :-):

  • By IDC’s reckoning, these dense servers accounted for $458 million in sales, up 33.8 percent, in a global server market that fell 7.2 percent to $14.2 billion in the quarter.
  • Density-optimized machines accounted for 132,876 server shipments in the quarter, exploding 51.5 percent against an overall market that comprised 2.2 million shipments and rose 2 percent.
  • Dell, by the way, owns this segment with 45.2 percent of the revenue share, followed by Hewlett-Packard with 15.5 percent of the density-optimized server pie.

Extra-credit reading

Pau for now…


Whitepaper: 5 points to consider when choosing a Server Vendor for Hyperscale Data Centers

April 15, 2011

A whitepaper came out a little while ago from the management consulting firm PRTM that gives a perspective on the server industry.  The paper, to which Dell was one of the contributors, focuses on something near and dear to our hearts: hyperscale data centers.

The paper, entitled Hyperscale Data Centers: Value of a Server Brand, talks about what organizations that are looking to build out these ginormous data centers should consider when selecting a system vendor.

In particular, PRTM offers its points to consider in light of the decision to work with a system OEM (Original Equipment Manufacturer) like Dell or HP, or to go directly to an ODM (Original Design Manufacturer) like Foxconn or Quanta.

The five main areas PRTM recommends focusing on when choosing a server vendor are:

  1. Providing total solution reliability
  2. Ability to accommodate future capacity swings
  3. Ability to guarantee supply of components and sub-systems
  4. Accountability
  5. Ability to manage the entire spectrum of a large-scale deployment

Check out the whitepaper and see where you land (I know which I would choose :))

Update:

Dave Ohara of Green Data Center blog fame did a post, building on this entry, about choosing between OEMs and ODMs.  He provides a lot of great detail and factoids; check it out:

Extra-credit reading

  • PRTM blog: Hyperscale Data Centers—5 Points to Consider When Choosing a Server Vendor

Pau for now…


Dell’s Data Center Solutions group turns four!

March 28, 2011

Dell’s Data Center Solutions group (DCS) is no longer a toddler.  Over the weekend we turned four!

Four years ago, on March 27, 2007, Dell announced the formation of the Data Center Solutions group, a special crack team designed to serve the needs of hyperscale customers.  On that day eWeek covered the event in its article Dell Takes On Data Centers with New Services Unit, and within the first week Forrest Norrod, founding DCS GM and currently GM of Dell’s server platform division, spelled out to the world our goals and mission (re-watching the video, it’s amazing to see how true to that mission we have been):

The DCS Story

If you’re not familiar with the DCS story, here is how it all began.  Four years ago Dell’s Data Center Solutions team was formed to directly address a new segment that was beginning to develop in the marketplace: the “hyperscale” segment.  This segment was characterized by customers who were deploying thousands, if not tens of thousands, of servers at a time.

These customers saw their data center as their factory and technology as a competitive weapon.  Along with the huge scale they were deploying at, they had a unique architecture and approach: specifically, resiliency and availability were built into the software rather than the hardware.  As a result, they were looking for system designs that focused less on redundancy and availability and more on TCO, density and energy efficiency.  DCS was formed to address these needs.

Working directly with a small group of customers

From the very beginning, DCS took the Dell direct customer model and drove it even closer to the customer.  Before talking about system specs, DCS architects and engineers sit down with the customer to learn about their environment, the problem they are looking to solve and the type of application(s) they will be running.  From there the DCS team designs and creates a system to match the customer’s needs.

In addition to major internet players, DCS’s customers include financial services organizations, national government agencies, universities, laboratory environments and energy producers.  Given the extremely high-touch nature of this segment, the DCS group handles only 20-30 customers worldwide, but these customers, such as Facebook, Lawrence Livermore National Labs and Microsoft Azure, are buying at such volumes that the system numbers are ginormous.

Expanding to the “next 1000”

Ironically, because it was so high-touch, Dell’s scale-out business didn’t scale beyond our group of 20-30 custom customers.  This left considerable pent-up demand from organizations one tier below.  After thinking about it for a while, we came up with a different model to address their needs.  Leveraging the knowledge and experience we had gained working with the largest hyperscale players, a year ago we launched a portfolio of specialized products and solutions to address “the next 1000.”

The foundation of this portfolio is a line of specialized PowerEdge C systems derived from the custom systems we have been designing for the “biggest of the big.”  Along with these systems, we have launched a set of complete solutions put together with the help of key partners:

  • Dell Cloud Solution for Web Applications: A turnkey platform-as-a-service offering targeted at IT service providers, hosting companies and telcos.  This private cloud offering combines Dell’s specialized cloud servers with fully integrated software from Joyent.
  • Dell Cloud Solution for Data Analytics: A combination of Dell’s PowerEdge C servers with Aster Data’s nCluster, a massively parallel processing database with an integrated analytics engine.
  • Dell | Canonical Enterprise Cloud, Standard Edition: A “cloud-in-a-box” that lets you set up affordable Infrastructure-as-a-Service (IaaS)-style private clouds in computer labs or data centers.
  • OpenStack: We are working with Rackspace to deliver an OpenStack solution later this year.  OpenStack is the open source cloud platform built on top of code donated by Rackspace and NASA and is now being further developed by the community.

These first four years have been a wild ride.  Here’s hoping the next four will be just as crazy!

Extra-credit reading

Articles

DCS Whitepapers

Case studies


A Walk-thru of our new Hyperscale-inspired Microserver

March 22, 2011

Earlier this morning at WorldHostingDays outside Frankfurt, we announced our new line of PowerEdge C microservers.  While this is our third generation of microservers, it’s the first to be available beyond the custom-designed systems we’ve been building for a small group of hyperscale web hosters.

If you’re not familiar with microservers, their big appeal is that they are right-sized for many dedicated hosting applications and provide extreme density and efficiency, all of which drive up a data center’s revenue per square foot.  As an example, our first generation allowed one of France’s largest hosters, Online.net, to efficiently enter a new market and gain double-digit market share.

To see exactly what these systems are all about, check out this short walk thru by Product Manager Deania Davidson.  The system Deania is showing off is the AMD-based PowerEdge C5125, which will be available next month.  Also announced today is the Intel-based PowerEdge C5220, which will be out in May.

Extra-credit reading

Pau for now…


Rob on Hyperscale Cloud Architecture

February 21, 2011

Earlier this month, when the Bexar release of OpenStack went live, a meetup was held in Santa Clara.  As part of the event, a series of lightning talks were given by various OpenStack community members.  One of the speakers was Dell’s very own Rob Hirschfeld, a senior cloud solutions architect who has been actively involved with the OpenStack project from the get-go.

Here is the short presentation that Rob gave where he talks about some of the key characteristics of a hyperscale environment and how it differs from a traditional enterprise data center.

Some of the topics Rob touches on:

  • “Nested centralized” vs “Flat Edges”
  • Fully redundant vs. cloud non-redundant
  • Fault zones across applications
  • Cloud-ready hardware

Extra-credit reading

Pau for now…


Low voltage DIMMs can mean huge savings in Hyperscale environments

January 16, 2011

Dell’s Data Center Solutions (DCS) group focuses on customers operating huge scaled-out environments.  Given the number of systems deployed in these environments, we are always looking for ways to take energy out of our systems.  Half a watt here and half a watt there mean big energy savings when multiplied across a hyperscale environment, and that translates into lower costs to the environment and to our customers’ operating budgets.

Recently we have adopted Samsung’s low-voltage DIMMs (“Green DDR3”) in our efforts to drive efficiencies.  Take a listen to DCS’s executive director of engineering and architecture, Reuben Martinez, in the video below as he walks you through how a seemingly small decrease in DIMM voltage can translate into millions of dollars of savings in hyperscale environments.

Some of the ground Reuben covers:

  • How much energy US data centers consume and how this has grown.
  • What is happening to the cost of energy (hint: it’s going up :).
  • How our PowerEdge C6105 is designed for power efficiency, including its use of Samsung’s low-voltage memory. (BTW, Samsung’s Green DDR3 DIMMs are also available in our C1100, C2100 and C6100.)
  • The amount of power consumed by memory compared to the CPU (you may be surprised).
  • [2:35] The TCO calculation that shows the savings low-voltage DIMMs can provide in a typical data center environment (a back-of-envelope version is sketched after this list).
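
To make the arithmetic concrete, here is a back-of-envelope sketch of the kind of calculation Reuben walks through in the video.  Every number in it (watts saved per DIMM, DIMMs per server, fleet size, PUE, energy price) is a hypothetical placeholder rather than a figure from the video, so plug in your own environment’s values.

```python
# Back-of-envelope savings estimate for low-voltage DIMMs.
# All inputs are hypothetical placeholders -- replace them with
# the numbers for your own environment.

WATTS_SAVED_PER_DIMM = 0.5   # assumed delta per DIMM at the lower voltage
DIMMS_PER_SERVER = 12        # assumed memory configuration
SERVERS = 50_000             # assumed hyperscale fleet size
PUE = 1.5                    # assumed facility overhead (cooling, distribution)
PRICE_PER_KWH = 0.10         # assumed energy price, dollars
HOURS_PER_YEAR = 24 * 365

def annual_savings_usd() -> float:
    """Annual energy-cost savings across the fleet, in dollars."""
    it_watts = WATTS_SAVED_PER_DIMM * DIMMS_PER_SERVER * SERVERS
    facility_watts = it_watts * PUE          # fold in cooling/overhead via PUE
    kwh = facility_watts * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

if __name__ == "__main__":
    print(f"Estimated annual savings: ${annual_savings_usd():,.0f}")
```

With these placeholder inputs, half a watt per DIMM works out to roughly $400K a year; over a multi-year server life, or across a larger fleet, that is where the millions come from.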

Extra-credit reading:

Pau for now…


Enter the Viking: Lightweight server for Hosting and Web 2.0

September 12, 2010

Over the last few years, we have been working with some of the world’s biggest hyperscale data center operators, folks who are deploying thousands to tens of thousands of servers at a time. Within this select group, the theme that keeps coming up over and over is uber-efficiency.

The customers we’ve been working with in areas like Web 2.0 and hosting require solutions that are not only extremely dense but also dramatically drive down costs.  When operating at the scale these organizations do, ultra-efficiency is not a nice-to-have; it’s one of the most important tools the organization has to drive profitability.

It is with these customers and their need for ultra-efficiency in mind that we designed the newest addition to our custom lightweight server line-up: Viking, designed to “pillage” inefficiency  :)

Some of the points Ed touches on:

  • Viking can hold eight or 12 server nodes in a 3U chassis
  • Each node is a single socket server with up to 4 hard drives & 16GB of RAM along with two gigabit ethernet ports
  • It supports Intel’s Lynnfield or Clarkdale processors, which means two to four cores per processor
  • The chassis also features an integrated switch and includes shared power and cooling infrastructure
  • The system is cold-aisle serviceable, which means everything you need to get to is right in the front.

Related Reading:

Pau for now…


5 lessons from the Cloud about Efficient Environments

August 2, 2010

The week before last, our team decided to divide and conquer to cover two simultaneous events.  Half of us headed to Portland, Oregon for OSCON and the other half stayed here in Austin to participate in HostingCon.

The HostingCon keynote

Among those participating in HostingCon was my boss Andy Rhodes, who gave the keynote on Tuesday.  Here are the slides Andy delivered:

(If the presentation doesn’t appear above, click here to view it.)

The idea of the keynote was to share with hosters the five major lessons we have learned over the last several years working with a unique set of customers operating at hyperscale.  Those five lessons are:

  1. TCO models are not one-size-fits-all.  Build a unique model that represents your specific environment and make sure you get every dollar of cost in there.  Additionally, make sure that your model is flexible enough to accommodate new information and market changes (a minimal sketch of such a model follows this list).
  2. Don’t let the status quo hold you back.  Not adapting soon enough and delays in rolling out solutions can cost you dearly.
  3. The most expensive server/storage node is the one that isn’t used (sits idle for 6-12 weeks) or the one you don’t have when you need it most.
  4. Don’t let Bad Code dictate your hardware architecture.
  5. Don’t waste time on “Cloud Washing.”  Talk to your customers about real pain points and how to solve them.
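
Lesson 1 is the one we get asked about most, so below is a minimal sketch of what “build a TCO model for your specific environment” can look like in practice.  The cost categories and every number in it are hypothetical illustrations, not anything from Andy’s slides; the point is simply that the inputs are parameters you set for your own environment and then vary to compare scenarios.

```python
from dataclasses import dataclass

@dataclass
class TcoInputs:
    """Hypothetical inputs -- replace with your own environment's numbers."""
    server_count: int
    server_price: float                 # capex per server, dollars
    lifespan_years: float
    watts_per_server: float             # average draw, not nameplate
    pue: float                          # facility overhead multiplier
    price_per_kwh: float
    admin_cost_per_server_year: float   # people, tooling, support contracts
    space_cost_per_server_year: float   # rack/floor space allocation

def annual_tco(i: TcoInputs) -> float:
    """Total annual cost of ownership for the fleet, in dollars."""
    capex = i.server_count * i.server_price / i.lifespan_years
    energy_kwh = i.server_count * i.watts_per_server * i.pue * 24 * 365 / 1000
    power = energy_kwh * i.price_per_kwh
    ops = i.server_count * (i.admin_cost_per_server_year
                            + i.space_cost_per_server_year)
    return capex + power + ops

# Example with made-up numbers; the value is in comparing scenarios
# (denser nodes vs. cheaper nodes, longer vs. shorter refresh), not
# in any single total.
baseline = TcoInputs(server_count=1000, server_price=4000, lifespan_years=3,
                     watts_per_server=250, pue=1.6, price_per_kwh=0.10,
                     admin_cost_per_server_year=300,
                     space_cost_per_server_year=150)
print(f"Annual TCO: ${annual_tco(baseline):,.0f}")
```

Lesson 1’s other point, keeping the model flexible, falls out naturally: fold in new information or market changes by updating a single input and re-running the comparison.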

The WHIR’s take

The WHIR did a good write-up of the keynote; here is the concluding paragraph:

So, it seems that cloud best practices will help companies reduce their physical infrastructure, which seems to be a bit counter-intuitive, given that Rhodes is representing a hardware provider. But it makes sense. Given the never-ending list of projects for IT staff, and as they drive down costs, their business will grow, and they’ll be able to increase their IT spend for innovative efforts. “What we’re hoping to do is let you do more with less.”

Extra-credit reading:

Pau for now…

