I was recently in Salt Lake City and visited the new datacenter that opened back in May.
Quick facts:
- ~$300M investment
- 240,000 square foot building; inside, three rooms, each with 20,000 sq ft of rack-worthy raised floor
- fault-tolerant Tier IV datacenter
- design PUE of 1.4 (see the back-of-envelope sketch after this list)
- 7.2 MW of total server load
- 400V/230V power distribution (230V to servers)
- Outside air handles cooling for at least half the year (via a water-side economizer)
- Full hot-aisle/cold-aisle containment
- Deploy a rack anywhere, anytime, thanks to top-of-rack (ToR) switches plus optional in-row adaptive cooling
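As a back-of-envelope check on the PUE and load figures above (a sketch of the arithmetic only, not the facility's actual engineering numbers): PUE is total facility power divided by IT equipment power, so a design PUE of 1.4 at 7.2 MW of server load implies roughly 10 MW drawn at the utility feed.

```python
# Back-of-envelope PUE arithmetic, using the figures from the fact list above.
# PUE = total facility power / IT equipment power.

design_pue = 1.4   # design PUE from the fact sheet
it_load_mw = 7.2   # total server (IT) load, in MW

facility_mw = design_pue * it_load_mw   # power drawn at the utility feed
overhead_mw = facility_mw - it_load_mw  # cooling, distribution losses, etc.

print(f"Facility power: {facility_mw:.2f} MW")  # 10.08 MW
print(f"Overhead:       {overhead_mw:.2f} MW")  # 2.88 MW
```

Put differently, at that PUE every watt delivered to a server carries another ~0.4 W of cooling and power-distribution overhead.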
Its opening was covered quite well in the press and blogosphere; see, for instance, 1, 2, 3, 4 for details, pics, and color.
Some observations:
- Internet-scale Maestro James Hamilton is right on when he says that there are significant economies of scale in building and operating an Internet-scale datacenter, albeit with a very high cost of entry;
- How low can you go in the layers? I developed the system, server, and network aspects for this datacenter and thought of myself as covering the low layers of the infrastructure. Move 800 miles east from the office, and these same aspects now look like the tip of an iceberg: the topmost layer in a deep stack of power distribution, cooling, and backups of backups, terminating at the power substation and the high-voltage lines;
- It’s hard to manage all dependencies, especially when the overall system of systems is mission-critical. Kudos to the hardware folks who are so much better at this than us software types;
- Then there’s Cloud Computing… Given the cost of entry and the long-term commitment to an Internet-scale datacenter, it’s no surprise that Clouds are becoming increasingly competitive against the traditional options (e.g., leasing colo space or rolling your own).