This is more for tech geeks than for the usual audience I work for, but here are a few observations.
- There are lots of ways to save energy. Raised floors and air conditioning are the industry standard for captive and hosted data centers. But that wouldn’t work at Weta, where equipment tends to run full time at full capacity during a big project. Through engineering that includes water-cooled radiators, closed rack space, and passive rooftop heat exchangers, the data center stays cool for all but a few weeks a year with zero air conditioning! Running a data center for the cost of a few electric water pumps is kind of like the feeling you get running your hybrid to the market on zero gasoline.
- Virtualization isn’t the only path to parallel computing efficiency. Purpose-driven job work at Weta is different from that in an MPP supercomputing environment targeted at a single huge task. Rather than coordinating processors through virtualization, Weta uses advanced queueing software from Pixar to fire millions of tasks in a specific order at idle processors, each of which runs at full CPU load for the duration of a task. The job work is then reassembled in the order required to create a visual sequence. Says Gunn, “virtualization would just slow us down,” not something you hear every day from a data center administrator.
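The dispatch model Gunn describes -- ordered tasks fired at idle processors, results reassembled afterward -- can be sketched as a dependency-ordered queue. This is a toy illustration only; the task names and graph below are invented, not Weta's actual pipeline or Pixar's software:

```python
from collections import deque

# Toy model of dependency-ordered dispatch: each task lists its
# prerequisites, and a task is fired only once all of them have
# completed -- keeping idle processors fed in the required order.
# (Task names and structure are invented for illustration.)
tasks = {
    "sim_water":   [],
    "sim_foliage": [],
    "render_bg":   ["sim_foliage"],
    "render_fg":   ["sim_water"],
    "composite":   ["render_bg", "render_fg"],
}

def dispatch_order(tasks):
    """Return one valid firing order (Kahn's topological sort)."""
    remaining = {t: set(deps) for t, deps in tasks.items()}
    ready = deque(t for t, deps in remaining.items() if not deps)
    order = []
    while ready:
        t = ready.popleft()  # hand this task to an idle processor
        order.append(t)
        for other, deps in remaining.items():
            if t in deps:
                deps.discard(t)
                if not deps:  # all prerequisites done: task is ready
                    ready.append(other)
    return order

print(dispatch_order(tasks))
```

Each popped task stands in for work handed to a free blade; the final "composite" step can only fire after every render it depends on has come back, which is the reassembly Gunn describes.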
- DC power is more efficient but not necessarily the first priority. Gunn said he was aware of the efficiency of using distributed transformerless DC power in a data center rather than the usual alternating-current bus. While he's looking into configuring DC for modified power supplies down the road, his investment in the cooling plant delivered a much faster and greater ROI.
- Sometimes you have to own it. Though colocation and managed hosting are growing rapidly, data transfer rates ruled out any sort of distributed environment for Weta. So did the proprietary content managed in an industry that is fiercely protective of material that might fall prey to illegal copying and redistribution. Mainly, though, the function of the data center required tight stacking of the processing and SAN to keep the networking runs as short as possible. Not one, but multiple 10-gigabit links are needed to manage the data transfer for a big project like AVATAR.
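To get a feel for why one 10-gigabit link wouldn't cut it, here's a rough back-of-envelope calculation. Every number below is my own illustrative assumption, not a figure from Weta:

```python
# Back-of-envelope: sustained bandwidth for moving rendered frames.
# ALL numbers are illustrative assumptions, not Weta's figures.
frame_mb = 24        # assume ~24 MB per rendered frame element
layers = 8           # assume several element layers per frame
frames_per_sec = 10  # assume farm-wide throughput target

bits_per_sec = frame_mb * 1e6 * 8 * layers * frames_per_sec
links_needed = bits_per_sec / 10e9  # capacity of one 10 Gb/s link

print(f"{bits_per_sec / 1e9:.1f} Gb/s, "
      f"or about {links_needed:.1f} ten-gigabit links")
```

Even with these modest made-up numbers the sustained demand overflows a single 10 Gb/s pipe, which is consistent with the multiple links (and the short network runs between processing and SAN) described above.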
Running Weta has a lot in common with running any other data center. Whether you're working for big-shot movie executives or a roster of colocation clients, preventing breakdowns and building in redundancy are high priorities. While it does have the usual battery banks and generators, Weta probably has less spare processing in its 4000-plus blades than a lot of data centers when things are running flat out. That's how intense it gets, so the machines are throttled to be fast -- but not too fast.
That's the pressure of a big project, and while there's daily drudgework, I got the feeling these guys get to kick back and have a good time between big jobs. Just imagine the kinds of video games the folks who made AVATAR can spool up on their down time.