
Figure 1. Data center monthly-amortized costs (source: James Hamilton’s blog)

As the industry faces pressure to scale data centers, one of the most constrained resources is power. The power capacity of an existing data center is often exhausted well before it runs out of storage or processing capacity. Two main factors drive this limitation: the need to provide supply redundancy and the way power is partitioned within the data center, both of which take up significant space and, more importantly, leave untapped power sources sitting idle. This is despite the fact that current server designs are far more power efficient than previous generations and have significantly lower idle power consumption. Providing additional power capacity within a data center is also time consuming and expensive, even assuming that the local utility can supply the additional load, a demand that IDC forecasts could double from 48 GW in 2015 to 96 GW by 2021.

From a capital expenditure standpoint, as shown in Figure 1, the power and cooling infrastructure of a typical data center is second in cost only to its servers. The nature of cloud services also means that demand can fluctuate dramatically, with a significant difference between the peak and average power consumed by a server rack. Consequently, providing enough power to meet peak-load requirements results in underutilization of the installed power capacity at other times. Lightly loaded power supplies are also less efficient than those operating under full-load conditions. Clearly, any measure that evens out power loading and frees up surplus supply capacity is welcome, since it allows data center operators to service additional customer demand without having to install extra power capacity.
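As a rough, hypothetical illustration of the peak-versus-average argument (the figures below are assumptions, not measurements from the article), a few lines of Python show how much provisioned capacity sits stranded when a rack circuit is sized for its peak draw:

# Illustrative, assumed per-rack figures in watts
peak_power_w = 12_000     # capacity provisioned to cover peak demand
average_power_w = 6_500   # typical sustained draw

utilization = average_power_w / peak_power_w
stranded_w = peak_power_w - average_power_w
print(f"Average utilization of provisioned capacity: {utilization:.0%}")
print(f"Stranded capacity per rack: {stranded_w} W")

# Across a row of racks, the stranded watts represent load the operator
# could host without adding utility feed or UPS capacity.
racks = 40
print(f"Stranded capacity across {racks} racks: {racks * stranded_w / 1000:.1f} kW")

With these assumed numbers the installed capacity is only about 54 percent utilized on average, leaving some 220 kW stranded across forty racks.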

With regard to efficiency, servers and server racks use distributed power architectures in which the conversion from ac to dc is undertaken at several levels. For example, a rack may be powered by a front-end ac-dc supply that provides an initial 48 Vdc power rail. At the individual server or board level, an intermediate bus converter (IBC) would typically step this down to 12 Vdc, leaving the final conversion to the lower voltages required by CPUs and other devices to the actual point-of-load (POL) converters. Distributing power at higher voltages helps efficiency by minimizing down-conversion losses and by avoiding resistive power losses in cables and circuit board traces, which are proportional to the square of the current.
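To make the voltage argument concrete, here is a minimal sketch (with assumed, illustrative resistance and load values, not figures from the article) of how the I²R loss in the distribution path falls as the bus voltage rises for the same delivered power:

def distribution_loss_w(load_w, bus_v, path_resistance_ohm):
    """I^2 * R loss in the distribution path for a given load and bus voltage."""
    current_a = load_w / bus_v            # I = P / V
    return current_a ** 2 * path_resistance_ohm

load_w = 600.0               # assumed board load
path_resistance_ohm = 0.02   # assumed cable plus trace resistance

for bus_v in (48.0, 12.0):
    loss = distribution_loss_w(load_w, bus_v, path_resistance_ohm)
    print(f"{bus_v:.0f} V bus: {loss:.2f} W lost ({loss / load_w:.1%} of load)")

Quadrupling the bus voltage from 12 V to 48 V cuts the current by a factor of four and the resistive loss by a factor of sixteen, which is why the rack-level rail runs at 48 Vdc and the IBC step-down to 12 Vdc happens at the server or board level, close to the point-of-load converters.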
