Yesterday we launched the Open Compute Project, an effort to foster industry collaboration on best practices for building power- and cost-efficient compute infrastructure. At the heart of the project lies the Open Compute server, a highly optimized server design developed by Facebook engineers and industry partners.

Facebook began with a clean slate and engineered a custom motherboard, power supply, chassis, rack, battery backup cabinet, and thermal solution. You can view and download the detailed specifications and CAD drawings at http://opencompute.org/servers/.

Motherboard

We designed two different motherboards for the Open Compute Project: one for AMD CPUs and one for Intel CPUs. Both have similar feature sets, including a direct interface with the power supply. The boards are bare-bones designs; we removed many common features we didn't need, like multiple expansion slots. In some cases we designed workarounds, like reboot over LAN, to preserve the functionality of removed components without adding cost.
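We haven't detailed the reboot-over-LAN mechanism here, but to illustrate the general class of technique, here's a minimal sketch of a Wake-on-LAN-style "magic packet" sender in Python. This is an illustration, not our actual implementation, and the MAC address is a placeholder:

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN-style magic packet: 6 bytes of 0xFF followed
    by the target MAC address repeated 16 times, broadcast over UDP."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

# Placeholder MAC address for illustration only.
send_magic_packet("00:11:22:33:44:55")
```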

Power Supply

One of the most challenging pieces of hardware to develop for the Open Compute server was the power supply. We decided to push the limits of efficiency and, at the same time, create a novel backup system to replace the traditional data center UPS. By working closely with our partners, we achieved a very high efficiency of 94.5%.
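To put that number in perspective, here's a quick sketch of the waste heat at 94.5% versus a more typical 90% efficient supply; the 300W server load is an assumed figure for illustration:

```python
# Waste heat in the power supply at different efficiencies.
# The 300 W server load is an illustrative assumption.

LOAD_W = 300.0

def psu_loss_watts(efficiency: float) -> float:
    """Watts dissipated in the supply while delivering LOAD_W to the server."""
    return LOAD_W / efficiency - LOAD_W

print(f"90.0% efficient supply: {psu_loss_watts(0.900):.1f} W of waste heat")  # ~33.3 W
print(f"94.5% efficient supply: {psu_loss_watts(0.945):.1f} W of waste heat")  # ~17.5 W
# Nearly half the waste heat per server, heat the cooling system
# would otherwise also have to remove.
```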

Closer inspection of the power supply reveals that it has two input connectors. The first accepts 277VAC, the primary input; 277VAC is the line-to-neutral voltage of a standard 480VAC three-phase service (480 / √3 ≈ 277), so distributing it directly lets us skip a transformation stage and operate at a higher, more efficient voltage than traditional 208VAC systems. The second connector accepts 48VDC, which supplies the server with power in the event of a utility outage.

One question that has come up: why bring AC power to each server at all? The utility provides AC voltage, which we convert to DC in the power supply, very close to the motherboard. Our goal was to carry the high voltage as close as possible to the load to minimize resistive (I²R) losses. We could instead do an efficient AC-DC conversion at the rack level and distribute DC to the individual servers, but that would mean carrying low-voltage, high-current power over several feet of large copper bus bars, which incurs higher resistive losses.
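A back-of-the-envelope sketch makes the trade-off concrete. The load and conductor resistance below are illustrative assumptions, not values from our design:

```python
# Back-of-the-envelope I^2*R comparison for power distribution.
# The load and conductor resistance are illustrative assumptions.

def conduction_loss_watts(power_w: float, voltage_v: float, resistance_ohms: float) -> float:
    """Resistive loss in a conductor delivering power_w at voltage_v."""
    current_a = power_w / voltage_v          # I = P / V
    return current_a ** 2 * resistance_ohms  # P_loss = I^2 * R

LOAD_W = 4500.0     # assumed load on one distribution run
RESISTANCE = 0.002  # assumed 2 milliohm conductor path

print(f"277 V run: {conduction_loss_watts(LOAD_W, 277.0, RESISTANCE):.1f} W lost")  # ~0.5 W
print(f" 48 V run: {conduction_loss_watts(LOAD_W, 48.0, RESISTANCE):.1f} W lost")   # ~17.6 W
# Same power, same conductor: dropping from 277 V to 48 V raises the
# current ~5.8x and the resistive loss ~33x, which is why we keep the
# high voltage as close to the load as possible.
```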

Chassis

Our chassis is beautiful… functionally beautiful. In fact, we like to call it “vanity free.” It was designed with utility in mind. We didn't use plastic bezels, paint, or even mounting screws, which lowered both cost and weight. Our key customers, our data center technicians, provided a lot of input to the chassis design. The result was an easy-to-service chassis that is almost entirely screw-free; a server can actually be assembled by hand in less than eight and a half minutes!

The emphasis we placed on including only necessary components reduced the weight of an Open Compute server by six pounds compared to a standard 1U server. Multiplied across our whole fleet, those six pounds make a real difference in shipping and fuel consumption, not to mention servicing: data center techs lift less weight every time they pick up a server.

Rack

We fondly refer to our rack as the “triplet.” It's built to house 90 servers installed in three columns, which made it fast and easy to deploy servers in our brand-new data center. Networking equipment becomes a cost burden when underutilized, and we didn't want to pay for extra network ports, so the number of servers in each rack was specifically tailored to fully utilize the ports on our switches.

Serviceability was a key part of the rack design. Rather than using traditional rails, we created sheet metal walls with punched-out shelves to mount the servers. Each server is then held in place by a spring-loaded plunger. This reduced the time and effort required to rack a server.

Battery Cabinet

The battery cabinet supplies the energy required to keep the servers online in the event of a power failure. Every battery cabinet feeds two triplet racks via a simple system of cables and power strips. In the event of a power outage, five strings of four 12V batteries each (48VDC per string) discharge into the servers. The discharge can be maintained for up to 45 seconds, although the building's generators will typically come online much sooner.
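As a rough sanity check on what a 45-second ride-through demands of one cabinet, here's the sizing arithmetic; the per-server power draw is an assumed figure, not a published spec:

```python
# Rough ride-through arithmetic for one battery cabinet.
# The per-server power draw is an assumed figure.

SERVERS = 90 * 2          # one cabinet feeds two triplet racks
WATTS_PER_SERVER = 300.0  # illustrative assumption
RIDE_THROUGH_S = 45.0     # maximum discharge window
STRINGS = 5
STRING_VOLTAGE = 48.0     # four 12 V batteries in series

load_w = SERVERS * WATTS_PER_SERVER
energy_wh = load_w * RIDE_THROUGH_S / 3600.0
amps_per_string = load_w / (STRINGS * STRING_VOLTAGE)

print(f"Cabinet load:       {load_w / 1000:.1f} kW")   # 54.0 kW
print(f"45 s ride-through:  {energy_wh:.0f} Wh")       # 675 Wh
print(f"Current per string: {amps_per_string:.0f} A")  # 225 A
# The energy needed is small; the batteries are really sized for the
# large discharge current during the seconds before generators start.
```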

Each cabinet contains an AC/DC rectifier to charge the batteries and a battery impedance testing circuit to monitor the health of each 12V battery. When a battery needs to be replaced, an alert is sent over the network and a technician is dispatched. Because the battery cabinet sits so close to the servers, we can deploy fewer batteries than a traditional system would require: we don't need to build in extra capacity to overcome electrical losses along a long distribution path.

This alternative battery system is 99.5% efficient (the only overhead is the energy used to charge the batteries), compared to the 90-95% efficiency of industry-standard UPS systems. Including the additional backup circuit in the power supply, it costs one-eighth as much as a building-wide UPS.
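To put those percentages in perspective, here's the arithmetic for a hypothetical 1MW critical load; the load figure and the electricity price are assumptions for illustration:

```python
# Annual energy overhead of backup power at different efficiencies.
# The 1 MW load and $0.07/kWh price are illustrative assumptions.

LOAD_KW = 1000.0
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.07

def overhead_kwh_per_year(efficiency: float) -> float:
    """kWh drawn from the utility beyond what the servers consume."""
    return (LOAD_KW / efficiency - LOAD_KW) * HOURS_PER_YEAR

ups = overhead_kwh_per_year(0.925)      # midpoint of the 90-95% range
cabinet = overhead_kwh_per_year(0.995)

print(f"Conventional UPS overhead: {ups:,.0f} kWh/yr")      # ~710,000
print(f"Battery cabinet overhead:  {cabinet:,.0f} kWh/yr")  # ~44,000
print(f"Savings: {ups - cabinet:,.0f} kWh/yr"
      f" (~${(ups - cabinet) * PRICE_PER_KWH:,.0f}/yr)")
```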

Thermal Design

The energy required to cool a server can represent a significant portion of its overall power budget, so we set out to reduce the amount of energy that server fans consume. The Open Compute server chassis is 1.5U (2.63 inches) tall, compared with a standard 1U (1.75-inch) chassis. The extra height provides room for larger heat sinks, which have more surface area and remove heat from components more efficiently. It also allowed us to use 60mm fans instead of 40mm fans; a larger fan uses less energy to move the same volume of air.
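The ideal fan affinity laws (airflow scales with speed × diameter³, power with speed³ × diameter⁵) show why the bigger fan wins. This sketch applies them to the 40mm-to-60mm change under the simplifying assumption of geometrically similar fans, so treat the numbers as rough:

```python
# Ideal fan affinity laws: airflow Q ~ n * d^3, power P ~ n^3 * d^5,
# where n is rotational speed and d is diameter. Assumes geometrically
# similar fans, so the result is a rough guide, not a measurement.

D_SMALL, D_LARGE = 40.0, 60.0  # fan diameters in mm

# Speed at which the 60 mm fan matches the 40 mm fan's airflow:
speed_ratio = (D_SMALL / D_LARGE) ** 3            # ~0.30

# Power draw at that matched airflow:
power_ratio = speed_ratio ** 3 * (D_LARGE / D_SMALL) ** 5

print(f"60 mm fan runs at {speed_ratio:.0%} of the 40 mm fan's speed")
print(f"...and draws about {power_ratio:.0%} of its power")  # ~20%
```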

These optimizations create a data center environment that is surprisingly quiet. The fans in an Open Compute server rotate slowly and consume very little energy: only 2-4% of total server power, compared to 10-20% for a standard server.

The Open Compute servers represent a significant improvement in energy efficiency and a substantial reduction in server cost. Developing these technologies allows Facebook to continue deploying innovative products economically and efficiently. Facebook was built on many open source software projects, and we hope the Open Compute Project will enable the next generation of innovators to deploy highly efficient compute infrastructure using open hardware.

Amir is the engineering manager for the Open Compute Project.
