Here is a little gallery showing pictures of the cluster box still in our office, plus two from its place in our datacenter co-location:
We built Wolke36 as a scale-out datacenter hardware platform for hosting the cloud data storage and synchronization services of Appzdata, our upcoming Linked-Data application engine. The goal was to hit a sweet spot of power and cost efficiency as well as compute and storage density.
When we started looking for datacenter hardware to fit our requirements, we could not find a suitable, readily available product that hit that sweet spot: reasonably cheap, very power efficient, with moderate CPU power and high storage density.
Then there is the Open Compute Project, which has set out to build "one of the most efficient computing infrastructures at the lowest possible cost". It is incredible work, released as open source.
However, Open Compute hardware is intended for deployment at datacenter scale (or a minimum of 7 racks). It is very power and cost efficient, but deployment is not simple and can hardly be done on a bootstrapping budget.
So we decided to just build it ourselves, from off-the-shelf PC components, with only the chassis and cabling as custom items.
Each node boots from a local 4GB USB memory stick containing the root filesystem of a stock FreeBSD 8.2-RELEASE (64-bit x86).
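As a rough, hypothetical sketch of how such a boot stick can be prepared (not necessarily our exact procedure; the image filename and the da0 device name are placeholders), it comes down to writing a prebuilt root image onto the stick:

```sh
# Write a prebuilt FreeBSD root image onto the USB stick.
# On FreeBSD, a USB stick typically attaches as /dev/da0.
# WARNING: this overwrites the target device; all names here are placeholders.
dd if=wolke36-root-freebsd82.img of=/dev/da0 bs=1m
```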
There are two low-power 2.5" 1TB hard drives per node (Samsung SpinPoint MT2 HM100UI, 5400RPM, 8MB Cache, SATA 3Gbps). The hard drives are used exclusively for user data storage and are managed by the ZFS filesystem.
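As a hedged sketch of what putting the two data drives under ZFS can look like (the pool name and the ada1/ada2 device names are assumptions, not necessarily what we use):

```sh
# Stripe the two 1TB drives into a single pool for user data.
# A mirror would halve the capacity but survive a single drive failure:
#   zpool create data mirror /dev/ada1 /dev/ada2
zpool create data /dev/ada1 /dev/ada2

# Carve out a dataset for user data and enable lightweight compression.
zfs create data/users
zfs set compression=on data
```

Whether to stripe or mirror within a node is a trade-off against whatever redundancy the storage service provides across nodes.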
The cluster consists of three independent units, where each unit is powered by a high-efficiency SFX standard PC power supply (be quiet! SFX Power 350W 80+ efficiency) and connected to a fully switched 8-port Gbit Ethernet switch. The three switches in turn wire to the uplink rack switch.
Mechanically, the 5U rack box is custom built from extruded aluminium profiles. The power cabling is also custom made (and took considerable time to complete).
When we planned the cluster chassis, we wanted a passively cooled yet thermally stable design, suitable for stacking multiple boxes in a standard datacenter rack.
We also wanted a design that places the boards' large cooling elements, which carry heat away from the CPU and chipset, in an optimal orientation. We ended up mounting the boards vertically, so that the cooling elements' long sides run vertically:
The desired result is a chimney effect: cold air enters the rack from the cooling floor beneath the racks, flows vertically through the boxes from the rack bottom to the top, and leaves the rack at the top:
This differs from a standard datacenter rack cooling configuration, where cold air enters the rack at the front, flows through the equipment from front to back, and leaves as hot air at the rear. That configuration requires active ventilation to maintain the front-to-back airflow.
With a vertically oriented airflow, no active ventilation is needed: hot air naturally rises, so the vertical configuration exploits the chimney effect, which drives the airflow on its own.
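To put a number on that intuition, the driving pressure of such a passive stack can be estimated with the generic stack-effect relation (a textbook approximation, not a measurement of our box):

$$\Delta P \approx g\,h\,(\rho_{\mathrm{cold}} - \rho_{\mathrm{hot}}) \approx \rho_{\mathrm{cold}}\,g\,h\left(1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}\right)$$

where h is the height of the heated air column, g is gravitational acceleration, and the temperatures are absolute (kelvin). Even a modest temperature rise across the height of a rack yields a small but steady pressure difference that keeps the air moving upward.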
For monitoring and administration, we have started building a monitoring infrastructure and a web-based console UI. Here is a screenshot showing hard drive temperatures:
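The values in that screenshot are per-drive temperatures. As a hedged illustration of how such values can be collected on a FreeBSD node (not necessarily what our console does), smartmontools exposes the reading via S.M.A.R.T.; the device name is a placeholder:

```sh
# Print the current temperature (degrees Celsius) of one data drive.
# Requires sysutils/smartmontools; /dev/ada1 is a placeholder device name.
smartctl -A /dev/ada1 | awk '/Temperature_Celsius/ { print $10 }'
```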
The first box is now at our datacenter co-location (see the last two pictures above). We are using it for development and testing, and so far it has met our expectations.
We are currently planning version 2 of the cluster box, named Wolke40, which, as you may have guessed, is going to feature 40 Intel Atom cores, run on 12V-only power supplies, and add MCU-based remote power/reset control and sliding rack rails.