
NTFS Update


I did some more work on my NTFS issue on Friday. As previously mentioned, I was seeing NTFS filesystems with large levels of fragmentation even after drives were compressed.

The answer turns out to be quite simple: Windows doesn’t consolidate the free space blocks which accumulate as files are created and deleted. As a test, I started with a blank 10GB volume and created a large file on it. Sure enough, the allocation occurred in a small number of extents (2 or 3). I then deleted the large file, created 10,000 small (5KB) files and deleted those too. When I re-created the large file, it was immediately allocated in hundreds of small fragments and needed defragmentation straight away. The large file had been created using the free space blocks freed up by the small files.
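
For anyone wanting to reproduce the test, here’s a minimal sketch in Python. The drive letter, directory and file sizes are illustrative assumptions, and it leans on Sysinternals’ Contig (contig.exe, assumed to be on the PATH), whose -a switch analyses a file’s fragmentation without moving any data.

    import os
    import subprocess

    # Hypothetical test location - adjust for your own environment.
    TEST_DIR = r"T:\fragtest"
    LARGE_FILE = os.path.join(TEST_DIR, "large.bin")
    LARGE_SIZE = 2 * 1024 ** 3       # 2GB "large" file
    SMALL_SIZE = 5 * 1024            # 5KB small files
    SMALL_COUNT = 10_000

    def write_file(path, size):
        """Write `size` bytes of zeros in 1MB chunks."""
        chunk = b"\0" * (1024 * 1024)
        with open(path, "wb") as f:
            remaining = size
            while remaining > 0:
                n = min(len(chunk), remaining)
                f.write(chunk[:n])
                remaining -= n

    def show_fragments(path):
        # Sysinternals Contig's -a switch reports fragment counts.
        subprocess.run(["contig.exe", "-a", path], check=True)

    os.makedirs(TEST_DIR, exist_ok=True)

    # 1. Large file on a clean volume: expect a handful of extents.
    write_file(LARGE_FILE, LARGE_SIZE)
    show_fragments(LARGE_FILE)
    os.remove(LARGE_FILE)

    # 2. Churn the free space map with thousands of small files, then delete them.
    for i in range(SMALL_COUNT):
        write_file(os.path.join(TEST_DIR, f"small_{i}.bin"), SMALL_SIZE)
    for i in range(SMALL_COUNT):
        os.remove(os.path.join(TEST_DIR, f"small_{i}.bin"))

    # 3. Re-create the large file: now expect hundreds of fragments.
    write_file(LARGE_FILE, LARGE_SIZE)
    show_fragments(LARGE_FILE)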

What’s not clear from the standard fragmentation tool provided with Windows is that the free space created by deleting files is added to a chain of free space blocks. These blocks are never consolidated, even when they are contiguous (as in this instance, where I deleted every file on the disk). So even if you *delete* everything on a volume, the free space remains fragmented and new files will be created instantly fragmented. Note also that the standard Windows defragmenter doesn’t attempt to consolidate those free space segments when a drive is defragmented; it simply ensures that files are re-allocated contiguously, and it doesn’t report that it has left the free space fragmented.

I’m currently downloading Diskeeper, which allegedly does consolidate free space. I’m going to trial this and see how it affects my fragmentation problem.

Incidentally, I used one of Sysinternals’ free tools to look at a map of my test drive. Sysinternals was bought by Microsoft in the summer of 2006, but the free tools are still available for download. I used DiskView to map the drive and understand what was happening as I created and deleted files. What I would like, however, is a tool which displays the status of free space fragments; I haven’t found one of those yet.
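
Lacking such a tool, here’s a rough sketch of one in Python with ctypes. NTFS will hand back its cluster allocation bitmap via the documented FSCTL_GET_VOLUME_BITMAP control code, so counting runs of clear bits gives the number of free space fragments. The drive letter and buffer size are assumptions for a small test volume, and opening the volume handle needs administrator rights.

    import ctypes
    from ctypes import wintypes

    FSCTL_GET_VOLUME_BITMAP = 0x0009006F
    GENERIC_READ = 0x80000000
    FILE_SHARE_READ = 0x00000001
    FILE_SHARE_WRITE = 0x00000002
    OPEN_EXISTING = 3

    VOLUME = r"\\.\T:"   # hypothetical test volume; needs admin rights

    kernel32 = ctypes.windll.kernel32

    handle = kernel32.CreateFileW(VOLUME, GENERIC_READ,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE,
                                  None, OPEN_EXISTING, 0, None)
    if handle == -1:
        raise ctypes.WinError()

    # Input: the starting LCN (cluster 0).
    # Output: StartingLcn (8 bytes), BitmapSize in bits (8 bytes), bitmap.
    start_lcn = ctypes.c_longlong(0)
    out_buf = ctypes.create_string_buffer(8 * 1024 * 1024)  # plenty for 10GB
    returned = wintypes.DWORD(0)

    # On large volumes this can fail with ERROR_MORE_DATA and partial data;
    # a production tool would loop with a new starting LCN. This sketch doesn't.
    kernel32.DeviceIoControl(handle, FSCTL_GET_VOLUME_BITMAP,
                             ctypes.byref(start_lcn), ctypes.sizeof(start_lcn),
                             out_buf, len(out_buf),
                             ctypes.byref(returned), None)

    bitmap_bits = int.from_bytes(out_buf.raw[8:16], "little")  # clusters covered
    bitmap = out_buf.raw[16:16 + (bitmap_bits + 7) // 8]

    # One bit per cluster, least significant bit first: 1 = allocated, 0 = free.
    runs, in_free_run = 0, False
    for i in range(bitmap_bits):
        free = not (bitmap[i // 8] >> (i % 8)) & 1
        if free and not in_free_run:
            runs += 1
        in_free_run = free

    kernel32.CloseHandle(handle)
    print(f"{runs} free space fragment(s) on {VOLUME}")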

So, now I have an answer; I just have to determine whether fragmentation causes any kind of performance issue on SAN-presented disks!

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Erin

    Chris,

    Interesting post. Yes, free space is a critical component of complete disk defragmentation. That is why it is a high priority for PerfectDisk (www.raxco.com).

    Some white papers that may be of interest:
    http://www.raxco.com/products/perfectdisk2k/wp.cfm

    And some other information:
    http://www.raxco.com/products/perfectdisk2k/PerfectDisk_Comparisons.cfm

    Thanks,
    Sherry Murray
    Raxco Software, Inc.
    http://www.perfectdiskblog.com

  • Sim

    Hi Chris,

    Did Diskeeper end up consolidating the free space?

    Cheers,
    Sim

  • Sim

So in a thin provisioning environment, for NTFS volumes which do require defragmenting, a defrag tool that consolidates free space seems essential.

From my reading, the thin provisioning pool will probably expand out initially, and then the free space will be reclaimed back into the pool, depending on the sophistication of the thin provisioning system. Does that sound right?

  • Chris Evans

    Sim

    Yes, it did!

  • Chris Evans

    Sim

Actually, there are two things required. First, a free space consolidator; second, where blocks are freed, those blocks would have to be written with binary zeros, or the host would need a mechanism to tell the array that a block could be released (which is how I think the latest version of Veritas Volume Manager works). You also need an array which can reclaim zeroed-out blocks (like Hitachi with zero page reclaim, or 3PAR). Otherwise those blocks would be logically free, but if a file system had spread across the entire volume, there may be few or no blocks at the array level which haven’t been touched. I’ve a post coming out on this soon, where I’ll expand on it in more detail.

    Chris
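
    As a rough illustration of the host side of that, the sketch below (essentially what Sysinternals’ SDelete does with its -z switch) fills a volume’s free space with a zeroed file and then deletes it, leaving zeroed blocks that a zero-detecting array could reclaim. The volume path is hypothetical.

        import os

        # Hypothetical mount point. Filling free space with zeros lets an
        # array capable of zero detection (e.g. zero page reclaim) release
        # the underlying thin-provisioned pages. Sysinternals' SDelete -z
        # does this job properly; this is just the idea in miniature.
        ZERO_FILE = r"T:\zerofill.bin"
        CHUNK = b"\0" * (4 * 1024 * 1024)   # 4MB of zeros per write

        try:
            with open(ZERO_FILE, "wb") as f:
                while True:
                    f.write(CHUNK)   # keep writing until the volume is full
        except OSError:              # "disk full" ends the loop
            pass
        finally:
            if os.path.exists(ZERO_FILE):
                os.remove(ZERO_FILE)  # hand the (now zeroed) space back to NTFS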
