This week we’ve seen an update on NFS benchmarks, with Hitachi announcing their latest results (via Hu’s blog) using HUS File and FMDs (Flash Module Drives).  Of course, Hitachi wouldn’t be announcing if they hadn’t beaten EMC with the figures; no-one releases data that makes them look worse, after all.  But what’s behind those numbers, and can they really be taken at face value?

The SPECsfs2008_nfs.v3 benchmark isn’t a simple IOPS measure but rather an NFS workload comparison, using a blend of common NFS operations.  At a high level, results show a “throughput” figure and an overall response time (ORT) value in milliseconds.  In their latest published results, Hitachi achieved an overall score of 607,647 at 0.89ms with a four-node system and a score of 298,648 at 0.59ms, representing the lowest response time for any system tested so far.  At first glance, Hitachi beat EMC’s VNX8000, submitted the previous quarter.  I’ve charted the figures over the last two years and added some additional metrics for comparison.  As you can see from Figures 1 & 2, a Hitachi system did indeed beat the EMC array on both throughput and ORT; Hitachi should rightly be pleased.

But wait – surely on pure throughput, the Huawei OceanStor and Avere FXT 3800 are better, with much higher figures?  Yes, it’s true, but we have to question the configurations.  Huawei used a 24-node system in their test; Avere used a 32-node system.  So, let’s review the node counts and redo the figures based on throughput per node (Figure 3).  Now we see Hitachi much further ahead of EMC, who had to use eight X-blades to achieve the throughput in their system, putting them way down the list.  Using this metric, Avere and Huawei don’t look anywhere near as good and Hitachi have three of the top four spots.  This calculation also shows that Oracle’s ZFS appliance stands out.  It may not be the fastest in absolute terms, but based on the number of nodes, the Oracle solution is a clear winner.
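
As a rough illustration of that per-node normalisation, the sketch below simply divides a SPECsfs2008 throughput score by the number of file-serving nodes.  Only the Hitachi figure (607,647 ops/sec over four nodes) comes from the results quoted above; the second call uses an entirely hypothetical 24-node score for comparison.

```python
def ops_per_node(ops_per_sec: float, nodes: int) -> float:
    """Normalise a SPECsfs2008 throughput score by the number of file-serving nodes."""
    return ops_per_sec / nodes

# Hitachi's published four-node result, quoted above:
print(f"{ops_per_node(607_647, 4):,.0f} ops/sec per node")   # ~151,912

# A hypothetical 24-node submission scoring 900,000 ops/sec overall:
print(f"{ops_per_node(900_000, 24):,.0f} ops/sec per node")  # 37,500
```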

However, again we have to look at the specification.  Figure 4 shows a graph of the presented system storage (TB) versus memory per system (GB).  Oracle’s submitted solutions deploy masses of memory (as do Avere and Huawei), which gives a distinct advantage in throughput for this test.  What does this mean?  Well, having to deploy additional memory or many nodes means additional cost.  Some other solutions have hundreds of hard drives deployed, all of which take up space, power and cooling.

For the sake of completeness, one graph where EMC does outperform Hitachi is throughput per TB of capacity deployed (Figure 5).  This does need to be considered alongside the number of controllers, however, as raw capacity alone can’t drive throughput.
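
The same kind of normalisation applies here, dividing the throughput score by the exported capacity.  The published reports list the file system capacity for each submission; the figures in this sketch are purely illustrative, as I haven’t reproduced the capacity data in this post.

```python
def ops_per_tb(ops_per_sec: float, exported_tb: float) -> float:
    """Normalise a SPECsfs2008 throughput score by exported capacity in TB."""
    return ops_per_sec / exported_tb

# Illustrative only: a 607,647 ops/sec score exported from 200 TB of capacity
print(f"{ops_per_tb(607_647, 200):,.0f} ops/sec per TB")  # ~3,038
```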

The Architect’s View

Benchmark comparisons aren’t about the raw numbers.  We need to be very careful to ensure the hardware comparisons are fair and genuine.  They should be about a number of things:

  • What the customer’s requirements are – whether that’s low latency or high throughput
  • What the customer’s efficiency concerns are – power, space, cooling
  • TCO – ultimately it’s all about how much these systems cost per TB and per IO
Unfortunately costs, even list prices, aren’t included in these calculations, unlike the Storage Performance Council’s price-performance figures.  This means it’s easy to deploy a system with huge capacity that wins the performance test but would be a poor choice financially.  Getting back to HDS, their systems performed well across all the tests.  It would be good to see them and other vendors insisting that cost is included in the calculations, as ultimately the price paid has a direct impact on customer behaviour.
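
If pricing were published, the calculation would be straightforward, along the lines of the SPC’s price-performance metric.  The sketch below shows the idea; the list price and usable capacity are entirely hypothetical, since SPECsfs submissions don’t include them.

```python
def price_per_op(list_price_usd: float, ops_per_sec: float) -> float:
    """SPC-style price-performance: dollars per unit of benchmark throughput."""
    return list_price_usd / ops_per_sec

def price_per_tb(list_price_usd: float, usable_tb: float) -> float:
    """Dollars per TB of usable capacity."""
    return list_price_usd / usable_tb

# Hypothetical $2m list price against Hitachi's published four-node score and 200 TB usable:
print(f"${price_per_op(2_000_000, 607_647):.2f} per ops/sec")  # ~$3.29
print(f"${price_per_tb(2_000_000, 200):,.0f} per TB")          # $10,000
```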

Related Links

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.

Copyright (c) 2007-2018 – Brookend Ltd, first published on http://architecting.it, do not reproduce without permission.

Written by Chris Evans

With 30+ years in IT, Chris has worked on everything from mainframe to open platforms, Windows and more. During that time, he has focused on storage, developed software and even co-founded a music company in the late 1990s. These days it's all about analysis, advice and consultancy.