Introducing the Webilicious PowerEdge C8000

September 19, 2012

Today Dell is announcing our new PowerEdge C8000 shared infrastructure chassis, which allows you to mix and match compute, GPU/coprocessor and storage sleds within the same enclosure.  This gives Web companies one common building block that can support the front-, mid- and back-end tiers that make up their architecture.

To give you a better feel for the C8000, check out the three videos below.

  1. Why — Product walk thru:  Product manager for the C8000, Armando Acosta, takes you through the system and explains how this chassis and the accompanying sleds better serve our Web customers.
  2. Evolving — How we got here:  Drew Schulke, marketing director for Dell Data Center solutions explains the evolution of our shared infrastructure systems and what led us to develop the C8000.
  3. Super Computing — Customer Example:  Dr. Dan Stanzione, deputy director at the Texas Advanced Computing Center talks about the Stampede supercomputer and the role the C8000 plays.

Extra Credit reading

  • Case Study: The Texas Advanced Computing Center
  • Press Release:  Dell Unveils First Shared Infrastructure Solution to Provide Hyperscale Customers with New Modular Computational and Storage Capabilities
  • Web page: PowerEdge C8000 — Optimize data center space and performance

Pau for now…


All the best ideas begin on a cocktail napkin — DCS turns 5

April 11, 2012

A little over a week ago, Dell’s Data Center Solutions (DCS) group marked its fifth birthday.  As Timothy Prickett Morgan explains in his article subtitled, “Five years old, and growing like a weed”:

DCS was founded originally to chase the world’s top 20 hyperscale data center operators, and creates stripped-down, super-dense, and energy-efficient machines that can mean the difference between a profit and a loss for those data center operators.

This team, which now represents a more than $1 billion business and has expanded beyond custom systems to include standard systems built for the “next 1000,” all started on a simple napkin.

The origin of DCS: Ty’s Sonic sketch, November 2, 2006

From napkin to “Frankenserver,” to today

Ty Schmitt, one of the original team members and now executive director of Dell’s modular infrastructure team within DCS, explains:

This was a sketch I made over drinks with Jimmy Pike late one night after visiting a big customer on the west coast.  We were working on a concept for a 1U system for them based on their requirements.  As you can see by the date (Nov 2006), it was actually before DCS became official … we were a skunkworks team called “Sonic” consisting of a handful of people.  We wanted to take an existing chassis and overhaul it to fit four hard drives, a specific motherboard and a SATA controller.  When we got back to Austin, I modified the chassis in the RR5 machine shop (took parts from several different systems and attached them together) and Jimmy outfitted it with electronics, tested it, and sent it to the customer as a sample unit.

This first prototype was described by the customer as “Frankenserver” and was the beginning of the relationship we have with one of our biggest customers.

A little over five years later, Dell’s DCS team has gone from Frankenserver to commanding 45.2 percent revenue share in a market that IDC estimates at $458 million in sales last quarter.  Pretty cool.


Pau for now…


Dell’s Modular Data Center powers Bing Maps

August 1, 2011

Late last week we announced that Dell’s Data Center Solutions group had outfitted Bing Maps’ uber-efficient, uber-compact data center (or, as Microsoft calls it, a “microsite”), located in Longmont, Colorado.  The facility is a dedicated imagery processing site to support the Streetside, Bird’s Eye, aerial and satellite image types provided by Bing Maps.  The site’s key components are Dell’s Modular Data Centers and Mellanox InfiniBand networking.

Brad Clark, Group Program Manager of Bing Maps Imagery Technologies, described their goal for the project: “Our goal was to push technological boundaries, to build a cost-effective and efficient microsite.  We ended up with a no-frills, high-performance microsite to deliver complicated geospatial applications that can in effect ‘quilt’ different pieces of imagery into a cohesive mosaic that everyone can access.”

Keeping things cool

The challenge when building out the Longmont site was to design a modular outdoor solution that was optimized for power, space, network connectivity and workload performance.

The modules that Dell delivered use a unique blend of free-air and evaporative cooling technology, helping to deliver world-class efficiency and a Power Usage Effectiveness (PUE) as low as 1.03.
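For context on what that 1.03 figure means: PUE is simply total facility power divided by the power that actually reaches the IT equipment, so 1.0 is the theoretical ideal. A minimal sketch (the wattage figures below are hypothetical, chosen only to illustrate a 1.03 ratio):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A perfect score is 1.0, meaning every watt goes to the IT gear and
    none is lost to cooling, power conversion or other overhead.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical site drawing 515 kW total to feed 500 kW of servers:
# only ~3% overhead for cooling and power delivery.
print(pue(515.0, 500.0))  # → 1.03
```

A traditional data center typically lands somewhere around 1.5 or higher, which is why a free-air/evaporative design hitting 1.03 is notable.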

To watch the whole site being built in time-lapse, check this out:



A walk thru Facebook’s HQ on Open Compute day

April 12, 2011

Last Thursday a group of us from Dell attended and participated in the unveiling of Facebook’s Open Compute project.

Much the way open source software shares the code behind the software, the Open Compute project has been created to provide the specifications behind the servers and the data center.    By releasing these specs, Facebook is looking to promote the sharing of data center and server technology best practices across the industry.

Pre-Event

The unassuming entrance to Facebook's Palo Alto headquarters.

The Facebook wall.

Facebook headquarters at 8am. (nice monitors! :)

Words of wisdom on the wall.

The Event

Founder and CEO Mark Zuckerberg kicks off the Open Compute event.

The panel moderated by Om Malik that closed the event. Left to right: Om, Graham Weston of Rackspace, Frank Frankovsky of Facebook, Michael Locatis of the DOE, Allan Leinwand of Zynga, Forrest Norrod of Dell (with the mic) and Jason Waxman of Intel.

Post-event show & tell: Drew Schulke of Dell's DCS team being interviewed for the nightly news and showing off a Dell DCS server that incorporates elements of Open Compute.

Extra credit reading

  • GigaOM: Bringing Facebook’s Open Compute Project Down to Earth
  • The Register:  Facebook’s open hardware: Does it compute?

Pau for now…


Forrest Norrod of Dell on Open Compute

April 7, 2011

This morning, at Facebook’s headquarters in Palo Alto, the company unveiled the Open Compute project.  Also on hand to support the announcement were partners such as Dell and Intel, who served on a panel alongside representatives from Rackspace, the Department of Energy, Zynga and Facebook.  Forrest Norrod, GM of Dell’s server platform division, represented Dell on the panel.

I caught up with Forrest after the event to get his take on the Open Compute project and what it means for Dell.


Pau for now…


Facebook, OpenCompute and Dell

April 7, 2011

Today at its headquarters in Palo Alto, Facebook and a collection of partners such as Dell, Intel and AMD — as well as kindred spirits like the founder of Rackspace (the company behind OpenStack) and the CIO of the Department of Energy — are on hand to reveal the details behind Facebook’s first custom-built data center and to announce the Open Compute project.

Efficiency: saving energy and cost

The big message behind Facebook’s new data center, located in Prineville, Oregon, is one of efficiency and openness.  The facility will use servers and technology that deliver a 38 percent gain in energy efficiency.  To bring the knowledge that the company and its partners have gained in constructing this hyper-efficient, hyper-scale data center to the wider industry, Facebook is announcing the Open Compute project.

Much the way open source software shares the code behind the software, the Open Compute project has been created to provide the specifications behind the hardware.  As a result, Facebook will be publishing the specs for the technology used in their data center’s servers, power supplies, racks, battery backup systems and building design.  By releasing these specs, Facebook is looking to promote the sharing of data center and server technology best practices across the industry.

How does Dell fit in?

Dell, which has a long relationship with Facebook, has been collaborating on the Open Compute project.  Dell’s Data Center Solutions group has designed and built a data center solution using components from the Open Compute project, and the server portion of that solution will be on display today at Facebook’s event.  Additionally, Forrest Norrod, Dell’s GM of server platforms, will be a member of the panel at the event, talking about the two companies’ common goal of designing the next generation of hyper-efficient data centers.

A bit of history

Dell first started working with Facebook back in 2008, when the company had a “mere” 62 million active users.  At that time the three primary areas of focus with regard to Facebook’s IT infrastructure were:

  1. Decreasing power usage
  2. Creating purpose-built servers to match Facebook’s tiered infrastructure needs
  3. Having tier 1 dedicated engineering resources to meet custom product and service needs

Over the last three-plus years, as Facebook has grown to over 500 million active users, Dell has helped address these challenges by:

  • Building custom solutions to meet Facebook’s evolving needs, from custom-designed servers for their web cache, to memcache systems to systems supporting their database tiers.
  • Delivering these unique servers quickly and cost effectively via Dell’s global supply chain.  Our motto is “arrive and live in five”, so within five hours of the racks of servers arriving at the dock doors, they’re live and helping to support Facebook’s 500 million users.
  • Achieving the greatest performance with the highest possible efficiency. Within one year, as a result of Dell’s turnkey rack integration and deployment services, we were able to save Facebook 84,000 pounds of corrugated cardboard and 39,000 pounds of polystyrene.

Congratulations Facebook! And thank you for focusing on both open sharing and on energy efficiency from the very beginning!

Pau for now…


Dell’s Data Center Solutions group turns four!

March 28, 2011

Dell’s Data Center Solutions group (DCS) is no longer a toddler.  Over the weekend we turned four!

Four years ago, on March 27, 2007, Dell announced the formation of the Data Center Solutions group, a special crack team designed to serve the needs of hyperscale customers.  On that day eWeek announced the event in their article Dell Takes On Data Centers with New Services Unit, and within the first week Forrest Norrod, founding DCS GM and currently the GM of Dell’s server platform division, spelled out to the world our goals and mission (re-watching the video, it’s amazing to see how true to that mission we have been):

The DCS Story

If you’re not familiar with the DCS story, here is how it all began.  Four years ago Dell’s Data Center Solutions team was formed to directly address a new segment that began developing in the marketplace: the “hyperscale” segment.  This segment was characterized by customers who were deploying thousands, if not tens of thousands, of servers at a time.

These customers saw their data center as their factory and technology as a competitive weapon.  Along with the huge scale at which they were deploying, they had a unique architecture and approach: specifically, resiliency and availability were built into the software rather than the hardware.  As a result they were looking for system designs that focused less on redundancy and availability and more on TCO, density and energy efficiency.  DCS was formed to address these needs.

Working directly with a small group of customers

From the very beginning DCS took the Dell direct customer model and drove it even closer to the customer.  DCS architects and engineers sit down with the customer and, before talking about system specs, learn about the customer’s environment, the problem they are looking to solve and the type of application(s) they will be running.  From there the DCS team designs and creates a system to match the customer’s needs.

In addition to major internet players, DCS’s customers include financial services organizations, national government agencies, universities, research laboratories and energy producers.  Given the extremely high-touch nature of this segment, the DCS group handles only 20-30 customers worldwide, but customers such as Facebook, Lawrence Livermore National Labs and Microsoft Azure are buying at such volumes that the system numbers are ginormous.

Expanding to the “next 1000”

Ironically, because it was so high-touch, Dell’s scale-out business didn’t scale beyond our group of 20-30 custom customers.  This meant considerable pent-up demand from organizations one tier below.  After thinking about it for a while, we came up with a different model to address their needs.  Leveraging the knowledge and experience we had gained working with the largest hyperscale players, a year ago we launched a portfolio of specialized products and solutions to address “the next 1000.”

The foundation for this portfolio is a line of specialized PowerEdge C systems derived from the custom systems we have been designing for the “biggest of the big.”  Along with these systems we have launched a set of complete solutions that we have put together with the help of a set of key partners:

  • Dell Cloud Solution for Web Applications: A turnkey platform-as-a-service offering targeted at IT service providers, hosting companies and telcos.  This private cloud offering combines Dell’s specialized cloud servers with fully integrated software from Joyent.
  • Dell Cloud Solution for Data Analytics: A combination of Dell’s PowerEdge C servers with Aster Data’s nCluster, a massively parallel processing database with an integrated analytics engine.
  • Dell | Canonical Enterprise Cloud, Standard Edition: A “cloud-in-a-box” that allows the setting up of affordable Infrastructure-as-a-Service (IaaS)-style private clouds in computer labs or data centers.
  • OpenStack: We are working with Rackspace to deliver an OpenStack solution later this year.  OpenStack is the open source cloud platform built on top of code donated by Rackspace and NASA and is now being further developed by the community.

These first four years have been a wild ride.  Here’s hoping the next four will be just as crazy!


