For many core apps, flash-based storage arrays are too much of a good thing.
Every few days, it seems a new or improved flash-memory array is announced. We see IOPS in the millions, and just as noticeably, prices in the high tens to low hundreds of dollars per terabyte. On the surface, anyway, these arrays look like the answer to our performance prayers, but there are a couple of questions you should ask before you buy.
First, can your applications really benefit from millions of IOPS? You may say more is better, but the capacity of these arrays is typically small. For example, an array from Pure Storage provides just 5TB of capacity. A good rule of thumb is that capacity should go up when IOPS go up. The relationship isn't linear, since we're coming from an era when we were starved for IOPS, but consider that if faster storage means you can do more computing, the results will need to be stored somewhere.
This limited capacity means you have to look at a way of tiering storage, with HDD arrays below the flash-array tier. That means array-to-array traffic will eat into bandwidth and complexity will rise. Tiering software, moreover, will need to be integrated and managed.
Tiering also raises the issue of object/file tiering versus block-level tiering (a.k.a. caching). In many cases, a few files generate the bulk of the I/O -- key files in databases, for instance. If your app has the ability to relegate old, rarely accessed data to a separate file space, even better. Putting high-activity files on flash achieves most of the speedup you are after, without needing huge and costly flash-array spaces.
In many cases, there are quite viable alternatives to flash arrays. Converting just a few HDDs to SSDs, for instance, may be enough to handle near-term needs, and it's relatively inexpensive to do. Let's say that 2TB of key files make up 60 percent of I/O. Adding a set of six 0.5-TB SSDs to store those files will push the available IOPS just for those files to a whopping 240,000, versus a mere 1,000 IOPS from HDD. It also releases the disk IOPS those files were consuming back to the system, so the HDD array effectively doubles in usable performance.
(SSDs are just like HDDs insofar as they can be RAIDed together. To keep the RAID controller from becoming the bottleneck, I recommend a RAID 1 or 10 configuration; mirroring adds a bit of cost in capacity, but it avoids the parity calculations that can bog down the controller.)
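To run those numbers against your own workload, here's a minimal back-of-the-envelope sketch in Python. The device figures in it (roughly 40,000 random-read IOPS per SATA SSD, the 1,000-IOPS HDD array, the 60 percent hot-file share) are assumptions taken from the example above, not measurements of any particular product.

    # Back-of-the-envelope sizing for moving hot files from HDD to a small SSD group.
    # Every device figure below is an illustrative assumption, not a vendor spec.

    ssd_count = 6            # six 0.5-TB SSDs, as in the example above
    iops_per_ssd = 40_000    # assumed random-read IOPS per SATA SSD
    hdd_array_iops = 1_000   # assumed IOPS of the existing HDD array
    hot_io_fraction = 0.60   # share of total I/O generated by the hot files

    ssd_tier_iops = ssd_count * iops_per_ssd       # 240,000 IOPS for the hot files
    remaining_hdd_load = 1.0 - hot_io_fraction     # 40% of the old load stays on HDD
    hdd_headroom_gain = 1.0 / remaining_hdd_load   # ~2.5x headroom for what remains

    print(f"SSD tier: ~{ssd_tier_iops:,} IOPS for hot files (vs. ~{hdd_array_iops:,} on HDD)")
    print(f"HDD array keeps {remaining_hdd_load:.0%} of its old load, "
          f"about a {hdd_headroom_gain:.1f}x headroom gain")

Swap in your own figures; the point is simply that even a small SSD group dwarfs the HDD array on the files that matter.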
Which type of SSD to use? In high-traffic environments, SLC-based drives wear better than MLC-based drives and will last longer. SLC drives cost substantially more, but they deliver as much as 10x the wear life. Future memory technologies may change this calculation, but not before 2014 at the earliest.
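To put "wear life" in concrete terms, here's a rough endurance calculation. The P/E cycle ratings (about 100,000 for SLC versus 10,000 for MLC), the daily write volume, and the write-amplification factor are illustrative assumptions, not figures from any drive's data sheet.

    # Rough SSD wear-life comparison from P/E (program/erase) cycle ratings.
    # Cycle counts, write volume, and write amplification are illustrative assumptions.

    def wear_life_years(capacity_gb, pe_cycles, writes_gb_per_day, write_amplification=2.0):
        """Years until the rated P/E cycles are consumed at a steady write rate."""
        total_writable_gb = capacity_gb * pe_cycles / write_amplification
        return total_writable_gb / writes_gb_per_day / 365

    capacity_gb = 500          # one 0.5-TB drive from the example above
    daily_writes_gb = 2_000    # assumed heavy write traffic: four full-drive writes per day

    slc = wear_life_years(capacity_gb, pe_cycles=100_000, writes_gb_per_day=daily_writes_gb)
    mlc = wear_life_years(capacity_gb, pe_cycles=10_000, writes_gb_per_day=daily_writes_gb)

    print(f"SLC: ~{slc:.0f} years   MLC: ~{mlc:.1f} years   ratio: {slc/mlc:.0f}x")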
Many companies are looking at VDI deployments. Unfortunately, VDI "in the raw" suffers from I/O peaks that can be orders of magnitude above the average load. Notably, the initial booting of desktops tends to occur on a great number of systems at the same moment, just as employees start work in the morning. These "boot storms" cause IOPS nightmares.
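A quick estimate shows why the peak hurts. The desktop count and per-desktop figures below are assumptions for illustration only.

    # Illustrative boot-storm estimate; the per-desktop figures are assumptions.

    desktops = 2_000
    boot_iops_per_desktop = 50     # assumed demand while a desktop boots
    steady_iops_per_desktop = 5    # assumed average demand once users are working

    boot_storm_peak = desktops * boot_iops_per_desktop   # 100,000 IOPS if everyone boots at once
    steady_load = desktops * steady_iops_per_desktop     # 10,000 IOPS during the workday

    print(f"Simultaneous boot: ~{boot_storm_peak:,} IOPS vs. ~{steady_load:,} IOPS steady state")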
The good news is that most of the data is the same for every one of those virtual desktops. If everyone is using Windows 7, that's only a few gigabytes of storage. In most companies the apps are standardized and company-approved, so a few more gigabytes cover the full spectrum of code. That leaves only a small amount of per-user data.
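A quick sizing sketch shows how much the shared image saves. The image and per-user sizes are assumptions chosen for illustration, not measurements.

    # Storage math for full per-desktop copies vs. one shared golden image.
    # Image and per-user sizes are illustrative assumptions.

    desktops = 2_000
    image_gb = 15          # assumed Windows 7 image plus the standard app set
    user_delta_gb = 2      # assumed per-user writable data

    full_copies_tb = desktops * (image_gb + user_delta_gb) / 1_000
    shared_image_tb = (image_gb + desktops * user_delta_gb) / 1_000

    print(f"Full copies: ~{full_copies_tb:.0f} TB   Shared golden image: ~{shared_image_tb:.1f} TB")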
The best way to serve up (read-only) boot images is to clone them from a single copy. Ideally, this is done first by the hypervisor, which uses a copy in its DRAM that is shared by all the VMs on that server, saving both DRAM space and the time needed to load it. Some configurations use a flash-memory card in the server to hold a copy of the boot image. The protocol here is to boot the single copy from the card, which doesn't need to be large, and then clone access pointers to it. Doing this cuts boot time and networked IOPS even further.
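Conceptually, each cloned desktop is just a thin copy-on-write layer over the shared image: reads fall through to the golden copy, writes land in a small per-desktop delta. The toy Python class below sketches the idea; it's an illustration of the technique, not how any particular hypervisor implements it.

    # Minimal copy-on-write sketch: many desktops share one read-only golden image,
    # and each desktop stores only the blocks it has changed. Illustrative only; real
    # hypervisors do this at the block/driver level, not in Python.

    class LinkedCloneDisk:
        def __init__(self, base_image):
            self.base = base_image     # shared, read-only golden image (block -> data)
            self.delta = {}            # this desktop's writes land here, never in the base

        def read(self, block):
            # Serve modified blocks from the delta, everything else from the shared base.
            return self.delta.get(block, self.base.get(block))

        def write(self, block, data):
            self.delta[block] = data   # the golden image is never touched

    golden = {0: "bootloader", 1: "kernel", 2: "standard apps"}
    desktops = [LinkedCloneDisk(golden) for _ in range(3)]

    desktops[0].write(2, "user-installed tweak")
    print(desktops[0].read(2))   # "user-installed tweak" (from its own delta)
    print(desktops[1].read(2))   # "standard apps" (still the shared base copy)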
I can attest from experience that expensive flash arrays can be overkill. I've managed an 8,000-VDI boot from a single NetApp filer equipped with a flash accelerator, for instance, with all of the desktops operational in 5 minutes.
To sum up, the new flash arrays offer huge performance, but for many users right now, going from an I/O famine to more IOPS than they could dream of may be an unnecessary and overly expensive jump. I'm not saying there is no market for the flash array; anyone working with video, surveillance, movie editing, HPC, or large-scale financial modeling will find these arrays particularly interesting. But many users don't need, and cannot fully utilize, the huge performance these arrays make available. They may well be served best by a hybrid SSD/HDD array approach.