With a significant share of annual IT budgets being spent on storage (25% by some estimates), the focus on making storage meet the needs of the business has never been greater. However, despite the advances in flash technology, larger disk drives, and the move to commodity and software-defined storage, the problems faced by customers remain largely the same as they were 20 years ago.
In a recent report, Tintri Inc, a vendor of hybrid and all-flash storage appliances, highlighted the major issues confronting storage administrators today. The top four continue to be (in order) performance, capital expense, scale and manageability. Infinidat Inc, another hybrid array vendor, found cost, reliability, operational complexity and performance to be the main headaches for storage administrators in its survey.
From these studies, one factor clearly stands out: delivering high-performing storage at a reasonable cost (both capital and operational) is an ongoing issue for pretty much every IT organisation.
Unfortunately, these problems (particularly performance) persist because storage is complex. The consolidation of storage onto external appliances that started around 20 years ago has taken us from managing a few hundred terabytes of data to systems that can store multiple petabytes of information. In parallel, I/O density (IOPS per TB of storage) has risen in step with server processor power. The move to server and desktop virtualisation means workloads are becoming more random and less predictable in nature.
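To make the I/O density idea concrete, here's a minimal sketch of the calculation; the array figures used are purely hypothetical:

```python
def io_density(iops: float, capacity_tb: float) -> float:
    """I/O density: IOPS delivered per TB of usable capacity."""
    return iops / capacity_tb

# Hypothetical array: 50,000 IOPS served from 200 TB of usable capacity
print(io_density(50_000, 200))  # → 250.0 IOPS/TB
```

The point of the metric is that as servers get faster, the IOPS demanded per terabyte rises, so capacity alone is a poor guide to how much performance an array must deliver.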
The availability and variety of storage products has also never been greater. The traditional vendors are being attacked on all sides by new start-ups, open-source products and software-defined solutions (including hyper-convergence). Each of these platforms has strengths and weaknesses when it comes to delivering storage I/O, because by their nature they are architected in different ways using multiple media types (DRAM, NVDIMM, flash and HDD). Storage appliances are essentially black boxes; their behaviour can only be fully determined through testing and observation. The only way to truly know that you are choosing the right storage solution for your workloads is to simulate them in a test lab and see how they perform under a range of relevant loading conditions.
The Issue With Synthetics
One comparison method is to use synthetic workload generators, such as IOMETER and FIO, to test storage performance. These tools are capable, but they have limited functionality and scalability, and they require a reasonable investment of time both to get working and to process and interpret the results.
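To illustrate the configuration effort involved, here's a sketch that generates a minimal FIO job file for a mixed random workload. Every parameter value below is illustrative, not a recommendation; a realistic test would need many more options tuned to the target system:

```python
# Sketch: build a minimal fio job file describing a mixed random workload.
# All values are hypothetical examples, not tuning advice.
job = {
    "rw": "randrw",      # mixed random reads and writes
    "rwmixread": "70",   # 70% reads, 30% writes
    "bs": "8k",          # 8 KiB block size
    "iodepth": "16",     # outstanding I/Os (queue depth)
    "runtime": "300",    # run for 300 seconds
    "time_based": "1",   # stop on time, not on data volume
    "size": "10g",       # working set size
}

lines = ["[mixed-random]"] + [f"{k}={v}" for k, v in job.items()]
jobfile = "\n".join(lines)
print(jobfile)
```

Even this trivial example shows the problem: each of these knobs has to be chosen by hand, and none of them is derived from what production applications actually do.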
Vendors themselves do a lot of work on performance, submitting their hardware for testing by organisations such as the Storage Performance Council (SPC). SPC benchmarks do give a flavour of how different storage systems compare; SPC Benchmark 1 (SPC-1) uses a mix of workload types, such as database, OLTP and mail server. However, there are gaps in these measuring techniques. SPC-1, for example, doesn't allow the testing of systems where space-reduction features like de-duplication and compression can't be turned off. Vendors can also bias a test by over-provisioning capacity to distribute the workload across more physical devices (although the price/performance figures help to counter that), which means any potential customer looking at SPC-1 results needs to know how to interpret them. SPC benchmarks can be useful in helping to create a vendor shortlist, but they really don't represent your specific application workloads.
Ultimately though, synthetic performance tests are just that: testing methodologies that use workloads that only approximate actual I/O profiles. Synthetic testing goes only so far in validating hardware configurations against application workload requirements. This is because today's IT organisations can deploy hundreds, if not thousands, of applications against a single storage appliance, making it unlikely that any two IT departments will see the same workload profile from their systems.
A more practical approach is to look at profiling existing systems and use this information as the basis for testing potential target products. Working with a profile of the data that more accurately reflects installed systems means results are likely to be closer to the real experience of using that system. But what specifically does profiling allow us to do that wasn’t practical with basic synthetic testing?
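As a sketch of what "profiling" means in practice, suppose we have per-I/O samples captured from a production host, each recording a block size and whether it was a read. A profile summarises those raw samples into reusable parameters. The sample format and field names here are hypothetical; real capture tooling records far richer data (latency, offsets, timestamps):

```python
from collections import Counter

def build_profile(samples):
    """Summarise captured I/O samples into a simple workload profile.

    Each sample is a (block_size_bytes, is_read) tuple. This is a toy
    model of what commercial profiling tools extract automatically.
    """
    total = len(samples)
    reads = sum(1 for _, is_read in samples if is_read)
    sizes = Counter(bs for bs, _ in samples)
    return {
        "read_pct": round(100 * reads / total, 1),
        "avg_block_size": sum(bs * n for bs, n in sizes.items()) / total,
        "block_size_mix": {bs: n / total for bs, n in sizes.items()},
    }

# Toy capture: 70% 4 KiB reads, 30% 64 KiB writes
samples = [(4096, True)] * 7 + [(65536, False)] * 3
profile = build_profile(samples)
print(profile["read_pct"])  # → 70.0
```

The resulting dictionary (read percentage, average block size, block-size mix) is exactly the kind of parameter set that can then drive a load generator, which is the link between observation and testing.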
From the perspective of the user, profiling provides a number of key advantages:
- When testing new products, an equivalent test can be run against multiple vendor hardware solutions with more confidence that the results accurately reflect the outcome of moving those applications over.
- Testing can be varied based on metrics like scale – keeping the same profile while increasing the volume of transactions. This allows "what if" testing that pushes hardware to the point where performance drops to unacceptable levels.
- Specific metrics of the test can be varied to see the outcome of changes that affect the way storage appliances operate; for example, increasing or decreasing average block size, increasing queue depths or changing the read/write mix. This allows testing to show what would happen if the workload profile was operating out of normal ranges.
- Testing under failure scenarios; most performance testing is done with systems that are operating normally. However, storage systems do fail and it’s important to be able to test how performance changes when (for example) data is being rebuilt after a drive failure.
- Testing can be done on different configurations based on media, validating exactly how different media capacities will perform. This can help prevent overbuying of solutions that may be recommended by the vendor. For example, many flash vendors are adding lower cost and lower performance TLC to their systems. You will need to evaluate whether TLC-based products can meet your performance requirements.
- Testing can be done with time-based measurement, ensuring that the different profiles that exist over varying times of day, days of the week or days of the month can all be catered for.
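The "what if" variations described in the list above amount to generating a matrix of test configurations around a captured baseline, varying block size, queue depth and read/write mix. A minimal sketch, with entirely hypothetical baseline figures:

```python
from itertools import product

# Hypothetical baseline profile captured from production
baseline = {"bs": "8k", "iodepth": 16, "read_pct": 70}

# Dimensions to vary around the baseline for "what if" testing
block_sizes = ["4k", "8k", "64k"]
queue_depths = [8, 16, 32]
read_mixes = [50, 70, 90]

matrix = [
    {"bs": bs, "iodepth": qd, "read_pct": rm}
    for bs, qd, rm in product(block_sizes, queue_depths, read_mixes)
]
print(len(matrix))  # → 27 test configurations
```

Each entry in the matrix would become one test run, letting you see where the appliance's behaviour changes once the workload drifts outside its normal operating range.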
Most of this testing might be thought of as validating new hardware as part of a storage refresh. There are other scenarios where the operation of a storage array might change and so testing would be required:
- Microcode/software updates – vendors continually tweak the algorithms in their code to make storage run faster. Most of the time this is done to optimise the performance of writing to slow media like hard disk drives (HDDs). It's not unknown for a vendor to change the code and have an adverse effect on performance. Profiling against test hardware (in a lab) running the new code can help expose these kinds of regressions.
- Tiering changes – related to the previous point, changing tiering algorithms or settings can result in a positive or negative effect on overall I/O performance. Having a workload profile allows what-if scenario testing with tiering settings.
- New media – media is also dependent on microcode running on the device controller. Firmware updates can introduce changes to device behaviour that can be detected as part of a profiling process. In some cases, this can highlight where nominally the same type of media (e.g. consumer versus enterprise-grade HDDs) produces similar or markedly different test results.
- New features – as new features are introduced into existing products (the most obvious being data de-duplication and compression), then testing can be performed using lab equipment that shows the effect of having those optimisation features turned on or off. Exactly how data services work depends on the implementation by the vendor (getting us back to the black box scenario again) and may even have a positive rather than negative effect.
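Testing data-reduction features in particular depends on the test data itself: fully random data defeats compression, while zero-filled data flatters it. One common approach is to generate buffers with a controlled proportion of compressible content. A sketch, using zlib purely as a rough yardstick (the 50% target is arbitrary):

```python
import os
import zlib

def make_buffer(size: int, compress_pct: int) -> bytes:
    """Build a buffer where roughly compress_pct percent of the content
    is zero-filled (highly compressible) and the rest is random."""
    zeros = int(size * compress_pct / 100)
    return b"\x00" * zeros + os.urandom(size - zeros)

buf = make_buffer(1 << 20, 50)  # 1 MiB, ~50% compressible
ratio = len(zlib.compress(buf)) / len(buf)
print(round(ratio, 2))  # close to 0.5 for this mix
```

An array's actual reduction results will differ from zlib's, since each vendor implements compression and de-duplication differently (the black box again), but controlling the input data makes on/off comparisons meaningful.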
We have quickly highlighted some of the benefits of building I/O profiles that are based on real-world workloads and applications. When a workload profile can be measured and quantified, it can be re-used to test equipment as part of a new purchase or upgrade. This is the difference between using synthetic tests and profiling-based performance tools like Load DynamiX: a profile can quickly be recreated from the actual production environment at regular intervals. By comparison, tools like FIO and IOMETER are much more manual in their configuration process.
Finally, remember that the testing scenarios we've been discussing don't always mean having to run an in-house lab; once a profile is captured, it could be provided to a vendor or your local reseller for pre-testing prior to a POC, for example.
- Fixing Storage Performance with Software
- Key Storage performance metrics for virtual environments (Computer Weekly, July 2014)
- Load DynamiX Product Suite (Load DynamiX website)
This post is the first of a series of independently written technical briefings on storage performance and is commissioned by Load DynamiX. For more information on Load DynamiX and the company’s storage performance management tools, please visit their site at: http://www.loaddynamix.com/
Copyright (c) 2009-2016 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.