Measures disk (block storage) IOPS and IO consistency relative to a baremetal (non-virtualized) baseline. The metric is derived 50% from IOPS and 50% from IO consistency.
- Baremetal Dell PowerEdge M610 blade server
- 2x 2.5" Seagate SAS 2.0 6 Gb/s 10K RPM hard drives (model ST9146803SS); the OS is installed on one disk and testing is conducted on the other
- Dual hex-core Intel Xeon (Westmere) X5650 @ 2.66GHz
- 48GB DDR3-1066 RAM
This summary metric is derived from the following 36 IO workloads (mixed read/write workloads use a ratio of 80% reads to 20% writes):
fio Runtime Settings
Prior to execution of the test workloads, a 100% fill operation is performed to clear out caches, using the following runtime settings:
After the fill operation, each of the 36 workloads is executed in an abbreviated test at incrementing iodepths. This process determines the optimal iodepth to use for full testing: the optimal value is the highest iodepth at which IOPS still increased relative to the prior depth.
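The iodepth-selection rule above can be sketched as follows. This is a minimal illustration, not the harness's actual code: the function name, the dict-shaped input, and the choice to stop at the first depth where IOPS no longer increase are all assumptions.

```python
def optimal_iodepth(iops_by_depth):
    """Pick the highest iodepth at which IOPS still increased
    relative to the previous (lower) depth.

    iops_by_depth: dict mapping iodepth -> measured IOPS from the
    abbreviated ramp test, e.g. {1: 180, 2: 340, 4: 610, 8: 640, 16: 630}.
    (Hypothetical helper; the data shape is an assumption.)
    """
    depths = sorted(iops_by_depth)
    best = depths[0]
    for prev, cur in zip(depths, depths[1:]):
        if iops_by_depth[cur] > iops_by_depth[prev]:
            best = cur
        else:
            break  # IOPS stopped increasing; keep the prior depth
    return best

# With the sample ramp above, IOPS last increase at depth 8 (640 > 610),
# then fall at depth 16, so 8 is selected:
print(optimal_iodepth({1: 180, 2: 340, 4: 610, 8: 640, 16: 630}))  # → 8
```

Stopping at the first non-increasing step is one reading of "highest iodepth where iops have increased from the prior depth"; a harness could instead scan the whole ramp, but the early stop avoids rewarding noisy late-ramp fluctuations.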
Benchmarking is conducted on dedicated test volumes a minimum of 100GB in size.
A disk_performance metric of 100 signifies near-parity in IOPS with the baseline; values greater than 100 indicate better performance, and values less than 100 indicate worse.