App Think Tank: Cloud vs. hyperscale

May 7, 2014

This is the final video clip from the Dell Services Application Think Tank held earlier this year.  Today’s clip features the always enlightening and entertaining Jimmy Pike.  Jimmy, a Senior Fellow at Dell who was once called the Willy Wonka of servers, was one of the 10 panelists at the Think Tank, where we discussed the challenges of the new app-centric world.

In this clip, Jimmy talks about the fundamental differences between “purpose-built hyperscale” and the cloud environments that most organizations use.

As Jimmy points out, when moving to the cloud it is important to first understand your business requirements and what your SLAs need to be.

If you’re interested in hearing what else Jimmy has to say, check out this other clip from the think tank, The persistently, ubiquitously connected to the network era.

The Think Tank, Sessions one and two

Extra-credit reading (previous videos)

Pau for now…


Dell and Sputnik go to OSCON

July 18, 2013

Next week, Michael Cote, a whole bunch of other Dell folks, and I will be heading out to Portland for the 15th annual OSCON-ana-polooza.  We will have two talks that you might want to check out:

Cote and I will be giving the first, and the second will be led by Joseph George and James Urquhart.

Sputnik Shirt

And speaking of Project Sputnik, we will be giving away three of our XPS 13 developer editions: one as a door prize at the OpenStack birthday party, one in a drawing at our booth, and one at James and Joseph’s talk listed above.

We will also have a limited number of the shirts shown to the right, so stop by the booth.

But wait, there’s more….

To learn firsthand about Dell’s open source solutions be sure to swing by booth #719 where we will have experts on hand to talk to you about our wide array of solutions:

  • OpenStack cloud solutions
  • Hadoop big data solutions
  • Crowbar
  • Project Sputnik (the client to cloud developer platform)
  • Dell Multi-Cloud Manager (the platform formerly known as “Enstratius”)
  • Hyperscale computing systems

Hope to see you there.

Pau for now…


Introducing the Webilicious PowerEdge C8000

September 19, 2012

Today Dell is announcing our new PowerEdge C8000 shared infrastructure chassis, which allows you to mix and match compute, GPU/coprocessor, and storage sleds within the same enclosure.  This gives Web companies one common building block that can support the front-, mid-, and back-end tiers that make up a web company’s architecture.

To give you a better feel for the C8000 check out the three videos below.

  1. Why — Product walk thru:  Armando Acosta, product manager for the C8000, takes you through the system and explains how this chassis and the accompanying sleds better serve our Web customers.
  2. Evolving — How we got here:  Drew Schulke, marketing director for Dell Data Center solutions explains the evolution of our shared infrastructure systems and what led us to develop the C8000.
  3. Super Computing — Customer Example:  Dr. Dan Stanzione, deputy director at the Texas Advanced Computing Center talks about the Stampede supercomputer and the role the C8000 plays.

Extra Credit reading

  • Case Study: The Texas Advanced Computing Center
  • Press Release:  Dell Unveils First Shared Infrastructure Solution to Provide Hyperscale Customers with New Modular Computational and Storage Capabilities
  • Web page: PowerEdge C8000 — Optimize data center space and performance

Pau for now…


All the best ideas begin on a cocktail napkin — DCS turns 5

April 11, 2012

A little over a week ago, Dell’s Data Center Solutions (DCS) group marked its fifth birthday.  As Timothy Prickett Morgan explains in his article subtitled, “Five years old, and growing like a weed”:

DCS was founded originally to chase the world’s top 20 hyperscale data center operators, and creates stripped-down, super-dense, and energy-efficient machines that can mean the difference between a profit and a loss for those data center operators.

This team, which now represents a business of more than $1 billion and has expanded beyond just custom systems to include standard systems built for the “next 1000,” all started on a simple napkin.

The origin of DCS: Ty’s Sonic sketch, November 2, 2006

From napkin to “Frankenserver,” to today

Ty Schmitt, who was one of the original team and is now the executive director of Dell’s modular infrastructure team within DCS, explains:

This was a sketch I made over drinks with Jimmy Pike late one night after visiting a big customer on the west coast.  We were working on a concept for a 1U system for them based on their requirements.   As you can see by the date (Nov 2006) it was actually before DCS became official … we were a skunk works team called “Sonic” consisting of a handful of people.   We wanted to take an existing chassis and overhaul it to fit 4 HDs, a specific MB, and SATA controller.  When we got back to Austin, I modified the chassis in the RR5 machine shop (took parts from several different systems and attached them together) and Jimmy outfitted it with electronics, tested it, and it was sent to the customer as a sample unit.

This first proto was described by the customer as “Frankenserver” and was the beginning of the relationship we have with one of our biggest customers.

A little over five years later, Dell’s DCS team has gone from Frankenserver to commanding 45.2 percent revenue share in a market that IDC estimates at $458 million in sales last quarter.  Pretty cool.

Extra-credit reading:

Pau for now…


IDC starts tracking the hyperscale server market

March 26, 2012

In a recent post that highlighted the demise of the midrange server market, Timothy Prickett Morgan talked about the new server classification that IDC has just started tracking, “Density-optimized”:

These are minimalist server designs that resemble blades in that they have skinny form factors but they take out all the extra stuff that hyperscale Web companies like Google and Amazon don’t want in their infrastructure machines because they have resiliency and scale built into their software stack and have redundant hardware and data throughout their clusters….These density-optimized machines usually put four server nodes in a 2U rack chassis or sometimes up to a dozen nodes in a 4U chassis and have processors, memory, a few disks, and some network ports and nothing else per node.

Source: IDC -- Q3 2011 Worldwide Quarterly Server Tracker

Here are the stats that Prickett Morgan calls out (I particularly like the last bullet :-):

  • By IDC’s reckoning these dense servers accounted for $458 million in sales, up 33.8 percent in a global server market that fell by 7.2 percent to $14.2 billion in the quarter.
  • Density optimized machines accounted for 132,876 servers in the quarter, exploding 51.5 percent, against the overall market, which comprised 2.2 million shipments and rose 2 percent.
  • Dell, by the way, owns this segment, with 45.2 percent of the revenue share, followed up by Hewlett-Packard with 15.5 percent of that density-optimized server pie.
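The dollar figures implied by those share percentages are easy to work out from the numbers quoted above (a back-of-the-envelope sketch; the rounded results are my arithmetic, not IDC’s):

```python
# Figures quoted from IDC's Q3 2011 Worldwide Quarterly Server Tracker
segment_revenue_m = 458.0  # density-optimized server sales, in $M
total_market_b = 14.2      # overall server market for the quarter, in $B
dell_share = 0.452         # Dell's revenue share of the segment
hp_share = 0.155           # HP's revenue share of the segment

# Implied quarterly revenue, rounded to the nearest $M
dell_revenue_m = round(segment_revenue_m * dell_share)  # ~207
hp_revenue_m = round(segment_revenue_m * hp_share)      # ~71

# The segment as a slice of the whole server market, in percent
segment_pct = round(segment_revenue_m / (total_market_b * 1000) * 100, 1)  # ~3.2

print(dell_revenue_m, hp_revenue_m, segment_pct)
```

So by these numbers, Dell’s density-optimized business was roughly a $207 million-a-quarter business, while the segment as a whole was still only about 3.2 percent of the overall server market.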

Extra-credit reading

Pau for now…


Whitepaper: 5 points to consider when choosing a Server Vendor for Hyperscale Data Centers

April 15, 2011

A whitepaper came out a little while ago from the management consulting firm PRTM that gives a perspective on the server industry.  The paper, to which Dell was one of the contributors, specifically focuses on something near and dear to our hearts: hyperscale data centers.

The paper, entitled Hyperscale Data Centers: Value of a Server Brand, talks about what organizations who are looking to build out these ginormous data centers should consider when selecting a system vendor.

In particular, PRTM offers its points to consider in light of the decision of working with a system OEM (Original Equipment Manufacturer) like a Dell or HP, or going directly to an ODM (Original Design Manufacturer) like a Foxconn or Quanta.

The five main areas PRTM recommends focusing on when choosing a server vendor are:

  1. Providing total solution reliability
  2. Ability to accommodate future capacity swings
  3. Ability to guarantee supply of components and sub-systems
  4. Accountability
  5. Ability to manage the entire spectrum of a large-scale deployment

Check out the whitepaper and see where you land (I know which I would choose :))

Update:

Building on this entry, Dave Ohara of Green Data Center blog fame did a post about choosing between OEMs and ODMs.  He provides a lot of great detail and factoids; check it out:

Extra-credit reading

  • PRTM blog: Hyperscale Data Centers—5 Points to Consider When Choosing a Server Vendor

Pau for now…


Dell’s Data Center Solutions group turns four!

March 28, 2011

Dell’s Data Center Solutions group (DCS) is no longer a toddler.  Over the weekend we turned four!

Four years ago, on March 27, 2007, Dell announced the formation of the Data Center Solutions group, a special crack team designed to serve the needs of hyperscale customers.  On that day eWeek announced the event in their article Dell Takes On Data Centers with New Services Unit, and within the first week Forrest Norrod, founding DCS GM and currently the GM of Dell’s server platform division, spelled out to the world our goals and mission (in re-watching the video it’s amazing to see how true to that mission we have stayed):

The DCS Story

If you’re not familiar with the DCS story, here is how it all began.  Four years ago Dell’s Data Center Solutions team was formed to directly address a new segment that began developing in the marketplace: the “hyperscale” segment.  This segment was characterized by customers who were deploying thousands, if not tens of thousands, of servers at a time.

These customers saw their data centers as their factories and technology as a competitive weapon.  Along with the huge scale they were deploying at, they had a unique architecture and approach: specifically, resiliency and availability were built into the software rather than the hardware.  As a result, they were looking for system designs that focused less on redundancy and availability and more on TCO, density, and energy efficiency.  DCS was formed to address these needs.

Working directly with a small group of customers

From the very beginning, DCS took the Dell direct customer model and drove it even closer to the customer.  DCS architects and engineers sit down with the customer and, before talking about system specs, learn about the customer’s environment, what problem they are looking to solve, and what type of application(s) they will be running.  From there the DCS team designs and creates a system to match the customer’s needs.

In addition to major internet players, DCS’s customers include financial services organizations, national government agencies, universities, laboratories, and energy producers.  Given the extremely high-touch nature of this segment, the DCS group handles only 20-30 customers worldwide, but customers such as Facebook, Lawrence Livermore National Labs, and Microsoft Azure are buying at such volumes that the system numbers are ginormous.

Expanding to the “next 1000”

Ironically, because it was so high-touch, Dell’s scale-out business didn’t scale beyond our group of 20-30 custom customers.   This meant considerable pent-up demand from organizations one tier below.   After thinking about it for a while, we came up with a different model to address their needs.  Leveraging the knowledge and experience we had gained working with the largest hyperscale players, a year ago we launched a portfolio of specialized products and solutions to address “the next 1000.”

The foundation for this portfolio is a line of specialized PowerEdge C systems derived from the custom systems we have been designing for the “biggest of the big.”  Along with these systems we have launched a set of complete solutions that we have put together with the help of a set of key partners:

  • Dell Cloud Solution for Web Applications: A turnkey platform-as-a-service offering targeted at IT service providers, hosting companies and telcos.  This private cloud offering combines Dell’s specialized cloud servers with fully integrated software from Joyent.
  • Dell Cloud Solution for Data Analytics: A combination of Dell’s PowerEdge C servers with Aster Data’s nCluster, a massively parallel processing database with an integrated analytics engine.
  • Dell | Canonical Enterprise Cloud, Standard Edition: A “cloud-in-a-box” that allows the setup of affordable Infrastructure-as-a-Service (IaaS)-style private clouds in computer labs or data centers.
  • OpenStack: We are working with Rackspace to deliver an OpenStack solution later this year.  OpenStack is the open source cloud platform built on top of code donated by Rackspace and NASA and is now being further developed by the community.

These first four years have been a wild ride.  Here’s hoping the next four will be just as crazy!

Extra-credit reading

Articles

DCS Whitepapers

Case studies

