Don't be impressed by synthetic loads created with IO-generation benchmark tools. Running these tools tells you nothing meaningful about what your application owners, end users, and customers will experience. If you want to see how well your applications will perform, run the applications themselves, not a tool designed to mimic them (which will not necessarily be representative of your environment). If you can't test your actual applications during AFA evaluation (which is highly recommended), don't blindly trust vendor-crafted benchmark tools. Dissect them and understand how they relate (or don't) to your actual applications. Then, if synthetic testing is your only option, craft a better synthetic load:
- blended IO sizes (abandon fixed IO sizes)
- reducible datasets and data streams (these are data-reduction arrays, and the majority of customer datasets and data streams are reducible)
- misaligned IO (don't allow vendors to align their test scripts to their fixed block sizes)
- IO banding which incorporates temporal and spatial locality
- appropriate queue depths
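As one illustration, the criteria above can be sketched as an fio job file. The fio options used here (bssplit, blockalign, buffer_compress_percentage, dedupe_percentage, random_distribution, iodepth) are real, but every percentage, size, depth, and the target device below are illustrative assumptions, not recommendations; derive real values by profiling your own applications.

```ini
; Hypothetical fio job sketching a more realistic synthetic load.
; All mixes, sizes, and depths are assumed values for illustration.
[global]
ioengine=libaio
direct=1
time_based
runtime=600
norandommap

[blended-workload]
rw=randrw
rwmixread=70                        ; assumed read/write mix
; blended IO sizes instead of a single fixed block size
bssplit=4k/25:8k/35:32k/25:256k/15
; misaligned IO: 512-byte alignment defeats fixed-block-size tuning
blockalign=512
; reducible data: partially compressible and dedupable buffers
buffer_compress_percentage=50
dedupe_percentage=30
; zipf distribution approximates IO banding (hot/cold locality)
random_distribution=zipf:1.1
; an appropriate, not inflated, queue depth
iodepth=16
filename=/dev/nvme0n1               ; assumed test device
size=100g
```

Even a sketch like this is only a starting point; the parameters should come from tracing your actual workloads, not from a vendor's script.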
That way, you're at least closer to the real world through synthetic means. Focus on what matters to the business and how flash can help make your company money, save your company money, increase productivity, differentiate you from your competition, or get your products to market quicker. Those outcomes are far more meaningful than some unrealistic, uniquely crafted IOPS test. Yes, 1M IOPS is possible (even from an array that doesn't scale out), but does it really matter to you? Don't be impressed.