120PB monster 'for simulation'. Nuke labs?

Flash may be one cutting edge of storage action, but big data is driving developments on the other side of the storage pond, with IBM building a 120 petabyte, 200,000-disk array.

The mighty array is being built for a secret supercomputing customer "for detailed simulations of real-world phenomena", according to MIT's Technology Review, and it takes current large-array technology trends a step or two further.

IBM Almaden storage systems research director Bruce Hillsberg says that 200,000 SAS disk drives are involved, rather than SATA ones, because performance is a concern. A back-of-the-envelope calculation suggests 600GB drives are being used; Seagate's 2.5-inch Savvios come to mind.
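
For those who like to check the sums, here is that back-of-the-envelope arithmetic in a few lines of Python - the 120PB and 200,000-drive figures are from the report; the rest is just division:

```python
# Back-of-the-envelope: capacity per drive in the reported array.
TOTAL_CAPACITY_PB = 120    # reported array capacity
DRIVE_COUNT = 200_000      # reported number of SAS drives

# Using decimal units, as drive vendors do: 1PB = 1,000TB, 1TB = 1,000GB.
per_drive_gb = TOTAL_CAPACITY_PB * 1_000 * 1_000 / DRIVE_COUNT
print(f"{per_drive_gb:.0f}GB per drive")   # -> 600GB, hence the Savvio guess
```
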
We're told that wider-than-normal racks are being used to squeeze the drives into less floorspace than standard racks would require. These racks are also water-cooled rather than fan-cooled, which would seem logical if wide drawers crammed full of small form factor (SFF) drives were being used.

Some 2TB of capacity may be needed to hold the metadata for the billions of files in the array. The GPFS parallel file system is being used, with a hint that flash memory storage is used to speed up its operations. This would indicate that the 120PB array includes, say, some Violin Memory arrays to hold the metadata, and would scan 10 billion files in about 43 minutes.
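
A quick sanity check on those numbers - our arithmetic, not IBM's - works out to roughly 200 bytes of metadata per file and a scan rate of nearly four million files a second:

```python
# Sanity check on the metadata and scan figures quoted above (our arithmetic, not IBM's).
FILES = 10_000_000_000     # ten billion files
METADATA_TB = 2            # capacity suggested above for the file metadata
SCAN_MINUTES = 43          # reported GPFS scan time for 10 billion files

bytes_per_file = METADATA_TB * 1e12 / FILES
files_per_second = FILES / (SCAN_MINUTES * 60)

print(f"~{bytes_per_file:.0f} bytes of metadata per file")         # -> ~200 bytes
print(f"~{files_per_second / 1e6:.1f} million files scanned/sec")  # -> ~3.9 million
```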

RAID 6, which can protect against two drive failures, is not enough - not with 200,000 drives to look after - and so a multi-speed RAID set-up is being developed. Multiple copies of data would be written and striped so that a single drive failure could be tolerated easily: a failed drive would be rebuilt slowly in the background, and the rebuild would barely slow the accessing supercomputer, if at all. A dual-drive failure would get a faster rebuild, and a triple-drive failure a faster rebuild still, with, we assume, the compute side of the supercomputer slowing down somewhat because of the lower array I/O rate.
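
To make the idea concrete, here's a toy sketch of that kind of multi-speed rebuild policy. It is purely illustrative - not IBM's GPFS code - and the speed tiers and thresholds are our own invention:

```python
# Illustrative sketch of a multi-speed rebuild policy: the more drives a
# protection group has lost, the more aggressively it is rebuilt. This is
# our own toy model, not IBM's code; the tiers are invented.

def rebuild_priority(failed_drives: int) -> str:
    """Map the number of concurrent failures in a group to a rebuild speed."""
    if failed_drives == 0:
        return "idle"               # nothing to do
    if failed_drives == 1:
        return "slow background"    # trickle rebuild, negligible impact on the supercomputer
    if failed_drives == 2:
        return "fast"               # take a bigger slice of array I/O
    return "flat out"               # three or more down: rebuild at full tilt, host I/O suffers

for failures in range(4):
    print(failures, "failed ->", rebuild_priority(failures))
```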

Hillsberg doesn't say how many simultaneous drive failures could be survived. The MIT article says the array will be "a system that should not lose any data for a million years without making any compromises on performance". Really, give it a rest: this is marketing BS. Having it work and not lose data for 15 years will be good enough.
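
For a sense of scale, here's a crude mean-time-to-data-loss estimate for plain mirroring at this drive count - nothing like IBM's actual RAID-plus-replication scheme, and with invented failure and rebuild figures - which lands a very long way short of a million years:

```python
# Crude mean-time-to-data-loss estimate for simple mirroring at this scale.
# This is NOT IBM's protection scheme (they describe RAID 6 plus multi-copy
# striping); it's a toy model with invented parameters, just to show how
# sensitive "million year" reliability figures are to the assumptions.

DRIVES = 200_000
PAIRS = DRIVES // 2          # pretend the array were just mirrored pairs
MTTF_HOURS = 1.6e6           # assumed per-drive mean time to failure (vendor-spec-ish)
MTTR_HOURS = 24              # assumed rebuild window after a failure

# Standard RAID-1 approximation: data is lost if a drive's mirror also dies
# during the rebuild window. System MTTDL ~= MTTF^2 / (2 * pairs * MTTR).
mttdl_hours = MTTF_HOURS ** 2 / (2 * PAIRS * MTTR_HOURS)
print(f"~{mttdl_hours / 8760:.0f} years mean time to data loss")  # -> roughly 60 years, not a million
```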

We're intrigued that homogeneous disk drives are being used - presumably all the data on the array will be classed as primary data, apart from the file metadata, which gets its flash speed-up. That means no tiering software is needed.

There will be lessons here for other big data drive array suppliers, such as EMC's Isilon unit, DataDirect and Panasas. It will be interesting to see if they abandon standard racks in favour of wider units, SFF drives, water-cooling and improved RAID algorithms too.

Bootnote

Storage-heavy supercomputer simulations are used for such tasks as weather forecasting, seismic surveying and complex molecular science - but there would seem to be no reason to keep any such customer's identity secret. Another area in which supercomputer simulations are important is nuclear weapons: without live tests, predicting whether a warhead will still work after a given period of time is difficult. As a result, the US nuclear weaponry labs are leaders in the supercomputing field.