Storage analyst firm Evaluator Group has announced a new storage-specific benchmark for VDI that takes an interesting and innovative approach to the inherent complexity of benchmarking the storage infrastructure needed to support VDI workloads.

I have been championing the cause of transparent benchmarking of desktop virtualization workloads ever since the days of the almost weekly spats between Citrix and VMware arguing over the performance of XenDesktop and View, until I was eventually able to announce that Citrix had adopted Login VSI as its “standard” benchmarking tool for XenDesktop. This was, at the time, the smartest thing that Citrix could have done. Not because it was good (although it was), nor because it was independently developed (although that helped), but because it was well documented, transparent, and freely available; as a result, it had already been adopted by the majority of independent desktop virtualization subject matter experts as the de facto standard for judging the performance of Remote Desktop Services and VDI platforms. Citrix’s support of Login VSI ensured that the results Citrix published could be independently verified, and it was hoped by many that VMware would follow Citrix’s lead and adopt it as well.

Disappointingly, VMware chose instead to stick with its own internally developed benchmark, Reference Architecture Workload Code (RAWC), which it followed up with the more accessible if less well-known VMware View Planner. From a technical perspective RAWC had some advantages over Login VSI, most notably in the way it could randomize workloads to more realistically simulate random variations in user activity. Regardless, it would have been good to see VMware engage the broader community by partnering with the Virtual Reality Check team and contributing its expertise to the development of Login VSI. Unfortunately, competitive concerns meant that VMware was unwilling to abandon its own solution at the time, and the opportunity was lost.

Now, 18 months later, Evaluator Group is looking to introduce a new standard. So what is it, and what should you make of it?

Announced as VDI-IOmark, the new benchmark does away with much of the infrastructure needed by Login VSI and RAWC to drive VDI workloads (the virtual desktops themselves, along with supporting Active Directory, e-mail, Web servers, etc.) in favor of a far simpler model that bypasses the virtual desktops altogether and instead replays “pre-recorded” I/O transactions directly against the storage infrastructure under test. The attractions of this approach are readily apparent: the cost and complexity of setting up a test environment are dramatically reduced. Evaluator Group claims that it is possible to perform a comparable test using one-tenth of the resources that a conventional test would require, making it accessible to organizations with even the most limited resources. But does it compare favorably with either Login VSI or RAWC? Well, it really depends on what you want to do.

Before going any further it is worthwhile looking more closely at the different types of benchmarking solutions.

In general, there are three different types of benchmarks that might be used in this context:

  • Application Benchmarks – Generate load using real-world applications driven by an automated functional testing tool. Examples of application benchmarking tools include Login VSI, RAWC, and View Planner.
  • Synthetic Benchmarks – Generate load by combining basic computer functions in proportions the developers believe represent an indicative measure of the performance capabilities of a system under test.
  • Workload Replay – This approach combines the best of both worlds, capturing the essential characteristics of a real application workload and then reproducing them on demand in the same way that a synthetic benchmark generates load.
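To make the workload-replay idea concrete, here is a minimal sketch of the general technique: a recorded sequence of I/O operations (each with a timing gap, operation type, offset, and size) is replayed against a target, preserving the inter-operation delays. The trace format and values here are hypothetical, invented for illustration; real capture tools and VDI-IOmark's own trace format will differ.

```python
import os
import tempfile
import time

# Hypothetical trace format: (delay_before_op_seconds, op, offset, size).
# A real tool would capture traces from a live desktop (e.g. at the
# hypervisor or block layer); this hard-coded list stands in for one.
TRACE = [
    (0.00, "write", 0,    4096),
    (0.01, "write", 4096, 4096),
    (0.01, "read",  0,    4096),
]

def replay(trace, fd):
    """Replay recorded I/O operations against fd, preserving timing gaps."""
    for delay, op, offset, size in trace:
        time.sleep(delay)  # reproduce the recorded gap between operations
        if op == "write":
            os.pwrite(fd, b"\x5a" * size, offset)
        else:
            os.pread(fd, size, offset)

if __name__ == "__main__":
    # Replay against a scratch file; a storage benchmark would instead
    # target the device or datastore under test.
    with tempfile.NamedTemporaryFile() as f:
        replay(TRACE, f.fileno())
        print(f"replayed {len(TRACE)} ops, file size {os.path.getsize(f.name)}")
```

Because only the I/O pattern is reproduced, no desktops, applications, or supporting servers are needed, which is exactly where the resource savings claimed for this style of benchmark come from.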
