Windows Server, Release Cycles and HCI

The discussion on Storage Spaces Direct (S2D) seems to have generated a lot of interest and a lot of defensive posts.  The authors of these posts seem to believe that what has been written is either wrong or misinformation.  Actually, the issue is poor communication on Microsoft’s part.  However, what’s subsequently been said has left more questions than… Read more »

What Happened to Storage Spaces Direct?

Can this article on The Register be true?  Has Microsoft removed Storage Spaces Direct from the latest Windows Server build? To be honest, I hadn’t initially given this article much thought until I decided to have a look at the latest Windows Server build myself.  I was interested to find that I can’t actually download… Read more »

Block is Not the Solution for Persistent Container Storage

It appears we’re reaching a consensus that persistent storage is needed for containers.  Despite early resistance, based on an assumption that containers and their data should be transient, the logic of data persistence is starting to take hold.  To be honest, it simply makes sense. Yes, I could keep my data in sync between multiple nodes… Read more »

Can Violin Systems Successfully Rise from the Ashes of Violin Memory?

I recently met up with the CEO of Violin Systems, Ebrahim Abbasi, for a chat about the industry and the future of the company.  Those who knew Violin Memory will be familiar with this company as it is essentially the assets and IP of the failed Violin, acquired by the Soros Group and relaunched as… Read more »

That 100TB Drive Is Closer Than You Think

Another day goes by and we have another story about hard drive technology.  Yesterday, Western Digital held a media event (I wasn’t invited) to announce MAMR, the latest technology for improving hard drive density.  MAMR stands for Microwave Assisted Magnetic Recording and is a process to increase the stability of recording with very small grain… Read more »

Fixing the Problem of I/O With Parallel Computing – DataCore MaxParallel

The idea of improving the performance of applications through the use of parallel computing isn’t a new concept.  Look back 50 years and we can see that Amdahl’s Law shows how to predict the performance improvement that can be achieved by using multiple processors.  You can find details of Amdahl’s Law on Wikipedia, but essentially… Read more »
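Amdahl’s Law itself is simple enough to express in a few lines. The sketch below is my own illustration (not taken from the post): it computes the theoretical speedup for a workload in which a fraction p of the work can be parallelised across N processors.

```python
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Theoretical speedup from Amdahl's Law: S = 1 / ((1 - p) + p / N)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# Example: 90% of the workload parallelises; note the diminishing returns.
for n in (2, 4, 8, 16, 1024):
    print(f"{n:>5} processors -> {amdahl_speedup(0.9, n):.2f}x speedup")
```

Even with an effectively unlimited number of processors, the speedup can never exceed 1/(1−p) – 10x in this example – which is why attacking the serial portion of the I/O path matters so much.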

It’s Time for Hard Drives to Join Tape In The Archive Tier

This week we’ve seen the announcement of 14TB 3.5” hard drives from one vendor and 12TB from another. It’s hard to imagine 14TB in a single hard drive when ten years ago we were looking at only 1TB devices. However, these devices have a problem. The throughput and latency have remained relatively… Read more »
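To put the throughput problem into context, here’s a rough back-of-the-envelope calculation of my own (assuming around 250MB/s of sustained sequential throughput, roughly what a current 7200 RPM nearline drive delivers) showing how long it takes simply to read a drive end to end:

```python
def full_read_hours(capacity_tb: float, throughput_mb_s: float = 250.0) -> float:
    """Hours needed to stream an entire drive at a sustained sequential rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB to MB, as drive vendors quote capacity
    return capacity_mb / throughput_mb_s / 3600

for tb in (1, 12, 14):
    print(f"{tb:>2}TB drive: ~{full_read_hours(tb):.1f} hours for a full sequential read")
```

Capacity has grown fourteen-fold in a decade, while sequential throughput has improved by only a small multiple, so the time just to read (or rebuild) a full drive now runs to more than half a working day – exactly the access profile of an archive tier rather than a primary one.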

Azure Enterprise NFS by NetApp – Initial Thoughts

One of the big announcements coming out of NetApp Insight in Las Vegas is the release of Microsoft Azure Enterprise NFS Service using Data ONTAP technology.  The idea that a cloud hyperscaler could use a mainstream technology for storage instead of building its own is unique enough, but the more interesting scenario is the ability… Read more »

Scale Computing Debuts HC3 in Google Cloud Platform

Hyper-converged infrastructure is a great technology for running a range of mixed workloads.  With HCI, how do you manage DR?  The standard solution might be to run up another cluster in a separate data centre and run both at 50% active/active, or have one purely for standby.  What about running in the public cloud?  With… Read more »

Qumulo Releases QF2 – A Cross-Cloud Scale-Out File System

Over the last few weeks, I’ve talked a lot about data mobility and exactly how we should get data into the public cloud from on-premises locations.  You can find links to the posts at the end of this blog entry.  Much of what I’ve talked about revolves around moving data in a relatively static way,… Read more »

Latest
  • Cloud Data Migration – Shipping Virtual Machines

    Previously we discussed the use of content migration tools to get data into public cloud services.  An alternative is to ship entire virtual machines into the cloud, including the application and data. Why VMs? Why would it be more practical to ship an entire VM than just the data?  In some instances and with some… Read more »

  • QLC NAND – how real is it and what can we expect from the technology?

    Since NAND flash storage was first introduced into enterprise computing, we’ve seen a rapid explosion in the types and capabilities of flash products that can now be deployed in servers, HCI solutions and storage arrays.  QLC is the next evolution of cost-reducing, space-increasing flash technology.  What is it and what can we expect… Read more »

  • Flash Diversity: High Capacity Drives from Nimbus and Micron

    The desire to provide cheaper, higher-capacity storage devices knows no bounds and has been a focus of the industry for the last 60 years.  Following on from Seagate’s 60TB “concept” flash drive, we now have two vendors selling 50TB OEM flash drives from Nimbus Data.  Let’s put that into context – the largest… Read more »

  • Scale Computing Moves Deeper Into the Enterprise With All-Flash HCI Nodes

    Sometimes a technology captures your imagination and shows real promise of developing into something of genuine value for the customer.  Scale Computing is one such company that has continued to demonstrate technical innovation and a continuing evolution of their platform.  The most recent announcements bring to the market higher performance and capacity appliances, including an… Read more »

  • Cloud Data Migration – Data Transfer Using Physical Shipping & Appliances

    Updated 19 September 2017 with details of IBM’s new shipping offering. Updated 10 October 2017 with details of Microsoft Azure’s new shipping offering. Updated 17 October 2017 with details of Backblaze Fireball. This is one of a series of posts on migrating data into the cloud.  Other posts in the series: Cloud Data Migration –… Read more »

  • Dude, Where’s My 100TB Hard Drive?

    It wasn’t that long ago we were being promised hard drives of unimaginable capacities.  I’m sure that at some stage triple-digit numbers were being talked about.  However, today we’re stuck with a “mere” 12TB of capacity per drive.  So what went wrong, or are we just being a little too hopeful? I did some… Read more »

  • Scality Introduces Zenko – A Multi-Cloud Data Controller

    When I met with Scality CEO Jérôme Lecat last week, he was at pains to ensure that Zenko – the company’s latest software product – wasn’t described as a storage gateway.  Jérôme sees gateways as protocol conversion devices, which, as we will discuss, is definitely not what Zenko is about.  So what is it exactly? … Read more »

  • Tintri IPO: What Next?

    Despite a little hiccup and a delay of a day, Tintri Inc (NASDAQ:TNTR) finally executed a successful Initial Public Offering of shares on 30 June 2017.  It looks like the extra day was used to rethink the opening share price, which was reduced from $10 to $7 a share, netting the company around $60m.  So far,… Read more »

  • Pure1 META – Analytics for Pure Storage Arrays

    It seems storage array analytics solutions are like opinions – everybody has one!  Pure Storage is the most recent entrant into the platform analytics game, with the introduction of Pure1 META at Pure Accelerate last month.  META claims to analyse the data from thousands of storage arrays, with over 7 petabytes of stored data and… Read more »