SPEC SFS Results comparison (courtesy E8 Storage)

This week E8 Storage announced that the company had set a new record on the SPEC SFS 2014 benchmark, with a test based on an E8 appliance and IBM Spectrum Scale (GPFS).  The workload chosen was software build, which generates a large volume of I/O activity across many files.  As the press release from E8 says, this workload emulates the type of profile seen with graphics rendering and genomic sequencing.  There’s no doubt that beating the existing record so convincingly is an impressive achievement, and as the graph shows, the E8 solution was way ahead of the competition.  However, as (stable) patches for Spectre/Meltdown eventually reach the market, will there be an impact on benchmark testing like this?

Syscalls

Without going over the details again – see Further Reading if you need to – the initial patches for Spectre/Meltdown have been reported to cost anywhere from 30-50% in system performance for I/O-intensive workloads.  The overhead comes from additional code implementing features like Kernel Page Table Isolation (KPTI), so any application doing lots of context switches in and out of the kernel is likely to be affected.  Julie Herd, Director of Technical Marketing, told me that she believes the E8 software itself isn’t affected by Spectre/Meltdown, but couldn’t speak to any impact on the GPFS part of the test.
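To make the context-switch point concrete, the cost of KPTI shows up in the per-syscall round-trip time, which is easy to measure yourself.  Here’s a minimal, Linux-specific sketch (not part of the SPEC test – the function name and sample counts are purely illustrative) that times a cheap read() syscall against /dev/zero; running it on a patched versus unpatched kernel would show the per-call overhead directly:

```python
import os
import time

def avg_syscall_ns(iterations=200_000):
    """Average the cost of a minimal read() syscall, in nanoseconds.

    Reading 1 byte from /dev/zero forces a genuine user/kernel
    transition on every iteration - exactly the path that KPTI
    makes more expensive.
    """
    fd = os.open("/dev/zero", os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.read(fd, 1)  # each call crosses the user/kernel boundary
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return (elapsed / iterations) * 1e9

if __name__ == "__main__":
    print(f"average syscall cost: {avg_syscall_ns():.0f} ns")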

File Systems

From what we’ve seen reported elsewhere (here and here), file systems are heavily impacted, so it seems reasonable to think GPFS will be too.  So presumably, running the same SPEC test against a patched set of clients may produce different results.  I say “may” because, at this point, we have no idea what the impact would be unless someone actually runs a test to show the differences.

Outside of file system impact, vendors are likely to patch SDS solutions and appliances able to run external code.  So-called “closed” appliances are unlikely to get patched, which theoretically means there isn’t a level playing field going forward.

The Architect’s View

How should we view this and future benchmarks in context?  First of all, in an absolute sense, the results of the E8 benchmark still stand as a successful first-place result.  Every vendor ensures that their test results represent their product in the best light, so you can bet that when NetApp, WekaIO and others ran their tests, they made sure the results would be as good as possible.  E8 are top here, which is a job well done.

Going forward, initial patching may cause a blip in the results if vendors were tested again.  But over time this doesn’t matter.  First, the performance effect of patching is likely to shrink, as the code is reviewed and optimised.  Second, application code is constantly being optimised: GPFS version 5.5 or 6.0 could be rewritten to mitigate any Spectre/Meltdown impacts.  Third, every vendor here may see some impact or other – again, without testing we won’t know.

So with all of this, should we care?  For me, the point of benchmarks is to demonstrate credibility in a solution.  If you claim your solution can hit 10 million IOPS, then put it to independent test and prove it.  However, benchmarks are something of a moveable feast.  Vendors are continuously improving their solutions, optimising code and bringing new designs to market.  This means any specific test result could be beaten the next day, week or month.  The value here is seeing a progression in the performance figures over time.  Although the testing can be expensive to do, if you’re selling your solution on the merits of low latency and high throughput, then it comes with the territory.

Further Reading

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2018 – Post #15A8 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.


Written by Chris Evans