Disk Defragmentation – Background and Engineering the Windows 7 Improvements


One of the features that you’ve been pretty clear about (I’ve received over 100 emails on this topic!) is the desire to improve the disk defrag utility in Windows 7. We did. And from your blog comments we saw that a few of you noticed, which is great. This is not as straightforward as it may appear. We know there’s a lot of history in defrag and how “back in the day” it was a very significant performance issue and also a big mystery to most people. Many folks came to believe that if your machine was slow, you had to go through the top-secret defrag process. In Windows Vista we decided to just put the process on autopilot with the intent that you’d never have to worry about it. In practice this turns out to be true, at least to the limits of automatically running a process (that is, if you turn your machine off every night, it will never run). We received a lot of feedback from knowledgeable folks wanting more information on defrag status, especially during execution, as well as more flexibility in terms of the overall management of the process. This post will detail the changes we made based on that feedback. In reading the mail and comments we received, we also thought it would be valuable to go into a little bit more detail about the process, the perceptions and reality of performance gains, and the specific improvements. This post is by Rajeev Nagar and Matt Garson, both Program Managers on our File System feature team. –Steven


In this post, we focus on disk defragmentation in Windows 7. Before we discuss the changes introduced in Windows 7, let’s chat a bit about what fragmentation is and when it matters.


Within the storage and memory hierarchy that forms the hardware pipeline between the hard disk and the CPU, hard disks are comparatively slow and have comparatively high latency. Read/write times from and to a hard disk are measured in milliseconds (typically 2-5 ms) – which sounds quite fast until compared to a 2GHz CPU that can process data in less than 10 nanoseconds (on average), once the data is in its L1 cache.
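
To put that gap in perspective, here is a quick back-of-the-envelope calculation in Python – a sketch only, using the rough figures quoted above rather than measured values:

    # Rough comparison of one random hard disk access vs. one L1 cache access,
    # using the approximate figures quoted above (not measurements).
    disk_access_s = 3e-3     # ~2-5 ms per random hard disk access; use 3 ms
    l1_access_s = 10e-9      # ~10 ns once the data is in the CPU's L1 cache

    print(f"One disk access costs roughly {disk_access_s / l1_access_s:,.0f} L1 accesses")
    # -> One disk access costs roughly 300,000 L1 accesses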


This performance gap has only been increasing over the past 2 decades – the figures below are noteworthy.


Graph of Historical Trends of CPU and IOPS Performance


Chart of Performance Improvements of Various Technologies


In short, the figures illustrate that while disk capacities are increasing, their ability to transfer data or write new data is not increasing at an equivalent rate – so disks contain more data that takes longer to read or write. Consequently, fast CPUs are relatively idle, waiting for data to do work on.


Significant research in Computer Science has focused on improving overall system I/O performance, which has led to two principles that the operating system tries to follow:



  1. Perform less I/O, i.e. try and minimize the number of times a disk read or write request is issued.

  2. When I/O is issued, transfer data in relatively large chunks, i.e. read or write in bulk.

Both rules have a reasonably simple rationale:



  1. Each time an I/O is issued by the CPU, multiple software and hardware components have to do work to satisfy the request. This contributes toward increased latency, i.e., the amount of time until the request is satisfied. This latency is often directly experienced by users when reading data and leads to increased user frustration if expectations are not met.

  2. Movement of mechanical parts contributes substantially to incurred latency. For hard disks, the “rotational time” (time taken for the disk platter to rotate in order to get the right portion of the disk positioned under the disk head) and the “seek time” (time taken by the head to move so that it is positioned to be able to read/write the targeted track) are the two major culprits. By reading or writing in large chunks, the incurred costs are amortized over the larger amount of data that is transferred – in other words, the “per unit” data transfer costs decrease.
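
To make the second rule concrete, here is a small, illustrative model in Python. The numbers are assumptions picked for round figures (~12 ms combined seek plus rotational delay, ~80 MB/s sequential transfer rate), not measurements of any particular drive:

    # Toy model: effective throughput when every chunk read pays one full
    # seek + rotational delay. Figures are illustrative assumptions only.
    SEEK_S = 0.012          # assumed seek + rotational delay per I/O
    TRANSFER_MBPS = 80.0    # assumed sequential transfer rate

    def effective_throughput(chunk_mb):
        """MB/s achieved if each chunk_mb transfer is preceded by one seek."""
        return chunk_mb / (SEEK_S + chunk_mb / TRANSFER_MBPS)

    for chunk in (0.004, 0.064, 1.0, 8.0, 64.0):
        print(f"{chunk:7.3f} MB per I/O -> {effective_throughput(chunk):5.1f} MB/s")
    # 4 KB requests achieve well under 1 MB/s; 64 MB requests approach the
    # disk's sequential rate - which is why fewer, larger I/Os win.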

File systems such as NTFS work quite hard to satisfy the above rules. As an example, consider what happens when I listen to the song “Hotel California” by the Eagles (one of my all-time favorite bands). When I first save the 5MB file to my NTFS volume, the file system will try to find enough contiguous free space to place the 5MB of data “together” on the disk. It does so because logically related data (e.g. the contents of the same file or directory) is likely to be read or written around the same time. For example, I would typically play the entire song “Hotel California” and not just a portion of it. During the 3 minutes that the song is playing, the computer would be fetching portions of this “related content” (i.e. sub-portions of the file) from the disk until the entire file is consumed. By making sure the data is placed together, the system can issue read requests in larger chunks (often pre-reading data in anticipation that it will soon be used), which, in turn, minimizes mechanical movement of hard disk drive components and also ensures fewer issued I/Os.


Given that the file system tries to place data contiguously, when does fragmentation occur? Modifications to stored data (e.g. adding, changing, or deleting content) cause changes in the on-disk data layout and can result in fragmentation. For example, file deletion naturally causes space de-allocation and resultant “holes” in the allocated space map – a condition we will refer to as “fragmentation of available free space”. Over time, contiguous free space becomes harder to find leading to fragmentation of newly stored content. Obviously, deletion is not the only cause of fragmentation – as mentioned above, other file operations such as modifying content in place or appending data to an existing file can eventually lead to the same condition.


So how does defragmentation help? In essence, defragmentation helps by moving data around so that it is once again placed more optimally on the hard disk, providing the following benefits:



  1. Any logically related content that was fragmented can be placed adjacently

  2. Free space can be coalesced so that new content can be written to the disk efficiently

The following diagram will help illustrate what we’re discussing. The first illustration represents an ideal state of a disk – there are 3 files, A, B, and C, and all are stored in contiguous locations; there is no fragmentation. The second illustration represents a fragmented disk – a portion of the data associated with File A is now located in a non-contiguous location (due to growth of the file). The third illustration shows how data on the disk would look once the disk was defragmented.


Example of disk blocks being defragmented.
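
The same idea can be sketched with a toy block map in Python (purely illustrative – real allocators and defragmenters are far more sophisticated): files A, B, and C start out contiguous, file A grows after B and C were written, and a naive “defrag” pass rebuilds a contiguous layout.

    # Toy block map: '.' is a free block, letters are blocks of files A, B, C.
    def show(disk):
        print("".join(block or "." for block in disk))

    disk = ["A"] * 4 + ["B"] * 3 + ["C"] * 3 + [None] * 6
    show(disk)                    # AAAABBBCCC......  (no fragmentation)

    # File A grows by 3 blocks; the only free space is after C, so A fragments.
    for i in range(10, 13):
        disk[i] = "A"
    show(disk)                    # AAAABBBCCCAAA...  (A is now in two pieces)

    # Naive defragmentation: lay each file's blocks out contiguously again.
    counts = {}
    for block in disk:
        if block:
            counts[block] = counts.get(block, 0) + 1
    packed = [name for name, n in counts.items() for _ in range(n)]
    disk = packed + [None] * (len(disk) - len(packed))
    show(disk)                    # AAAAAAABBBCCC...  (contiguous again)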


Nearly all modern file systems support defragmentation – the differences generally are in the defragmentation mechanism: whether, as in Windows, it’s a separate, schedulable task or whether the mechanism is more implicitly managed and internal to the file system. The design decisions simply reflect the particular design goals of the system and the necessary tradeoffs. Furthermore, it’s unlikely that a general-purpose file system could be designed such that fragmentation never occurred.


Over the years, defragmentation has been given a lot of emphasis because, historically, fragmentation was a problem that could have a more significant impact. In the early days of personal computing, when disk capacities were measured in megabytes, disks got full faster and fragmentation occurred more often. Further, memory caches were significantly limited and system responsiveness was increasingly predicated on disk I/O performance. This got to the point where some users ran their defrag tool weekly or even more often! Today, very large disk drives are available cheaply and disk utilization for the average consumer is likely to be lower, causing relatively less fragmentation. Further, computers can utilize more RAM cheaply (often, enough to cache the data set actively in use). That, together with improvements in file system allocation strategies as well as caching and pre-fetching algorithms, further helps improve overall responsiveness. Therefore, while the performance gap between the CPU and disks continues to grow and fragmentation does occur, combined hardware and software advances in other areas allow Windows to mitigate the impact of fragmentation and deliver better responsiveness.


So, how would we evaluate fragmentation given today’s software and hardware? A first question might be: how often does fragmentation actually occur, and to what extent? After all, 500GB of data with 1% fragmentation is significantly different than 500GB with 50% fragmentation. Secondly, what is the actual performance penalty of fragmentation, given today’s hardware and software? Quite a few of you likely remember products introduced over the past two decades offering various performance enhancements (e.g. RAM defragmentation, disk compression, etc.), many of which have since become obsolete due to hardware and software advances.


The incidence and extent of fragmentation in average home computers varies quite a bit depending on available disk capacity, disk consumption, and usage patterns. In other words, there is no general answer. The actual performance impact of fragmentation is the more interesting question but even more complex to accurately quantify. A meaningful evaluation of the performance penalty of fragmentation would require the following:



  • Availability of a system that has been “aged” to create fragmentation in a typical or representative manner. But, as noted above, there is no single, representative behavior. For example, the frequency and extent of fragmentation on a computer used primarily for web browsing will be different than a computer used as a file server.

  • Selection of meaningful disk-bound metrics, e.g. boot time and first-time application launch after boot.

  • Repeated measurements to ensure the results are statistically relevant (a rough sketch of such a timing harness follows below)
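
As a minimal sketch of that last point, the kind of harness below can be used to time a disk-bound command several times and report the spread. The command shown is a hypothetical placeholder, and a rigorous test would also control cache state (e.g. by rebooting between runs):

    import statistics
    import subprocess
    import time

    def time_once(cmd):
        """Time one run of a disk-bound command, in seconds."""
        start = time.perf_counter()
        subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
        return time.perf_counter() - start

    def measure(cmd, runs=10):
        samples = [time_once(cmd) for _ in range(runs)]
        print(f"mean {statistics.mean(samples):.3f}s, "
              f"stdev {statistics.stdev(samples):.3f}s over {runs} runs")

    # Hypothetical example: time copying a large test file.
    # measure(["cmd", "/c", "copy", "/y", "big_test_file.dat", "copy.dat"])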

Let’s walk through an example that helps illustrate the complexity in directly correlating extent of fragmentation with user-visible performance.


In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. So, which one is correct? Well, before the question can be answered we must understand why defrag in Vista was changed. In Vista, we analyzed the impact of defragmentation and determined that the most significant performance gains from defrag come when pieces of files are combined into sufficiently large chunks such that the impact of disk-seek latency is not significant relative to the latency associated with sequentially reading the file. This means that there is a point after which combining fragmented pieces of files has no discernible benefit. In fact, there are actually negative consequences of doing so. For example, for defrag to combine fragments that are 64MB or larger requires significant amounts of disk I/O, which is against the principle of minimizing I/O that we discussed earlier (since it decreases total available disk bandwidth for user-initiated I/O), and puts more pressure on the system to find large, contiguous blocks of free space. Here is a scenario where a certain amount of fragmentation of data is just fine – doing nothing to decrease this fragmentation turns out to be the right answer!
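
The reasoning behind the 64MB cut-off can be sketched with the same toy model used earlier – again with assumed round numbers (~12 ms per seek, ~80 MB/s sequential throughput, a hypothetical 512MB file) rather than the figures actually measured for Vista:

    SEEK_S = 0.012          # assumed seek + rotational delay per fragment
    TRANSFER_MBPS = 80.0    # assumed sequential transfer rate
    FILE_MB = 512           # hypothetical file size

    def read_time(fragment_mb):
        """Seconds to read the whole file if it is split into equal-sized fragments."""
        fragments = FILE_MB / fragment_mb
        return fragments * SEEK_S + FILE_MB / TRANSFER_MBPS

    contiguous = read_time(FILE_MB)
    for frag_mb in (1, 8, 64, 256):
        overhead = (read_time(frag_mb) / contiguous - 1) * 100
        print(f"{frag_mb:4d} MB fragments: {overhead:5.1f}% slower than contiguous")
    # 1 MB fragments add ~95% overhead; 64 MB fragments add only ~1%, so
    # stitching them together costs a lot of defrag I/O for no visible gain.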


Note that a concept that is relatively simple to understand, such as the amount of fragmentation and its impact, is in reality much more complex; accurately assessing its real impact requires a comprehensive evaluation of the entire system. The different design decisions across Windows XP and Vista reflect this evaluation of the typical hardware and software environment used by customers. Ultimately, when thinking about defragmentation, it is important to realize that there are many additional factors contributing to system responsiveness that must be considered beyond a simple count of existing fragments.


The defragmentation engine and experience in Windows 7 have been revamped based on continuous and holistic analysis of the impact on system responsiveness:


In Windows Vista, we had removed all of the UI that would provide detailed defragmentation status. We received feedback that you didn’t like this decision, so we listened, evaluated the various tradeoffs, and have built a new GUI for defrag! As a result, in Windows 7, you can monitor status more easily and intuitively. Further, defragmentation can be safely terminated at any time during the process, on any or all volumes, very simply (if required). The two screenshots below illustrate the ease of monitoring:


New Windows 7 Defrag User Interface


New Windows 7 Defrag User Interface


 


In Windows XP, defragmentation had to be a user-initiated (manual) activity, i.e. it could not be scheduled. Windows Vista added the capability to schedule defragmentation – however, only one volume could be defragmented at any given time. Windows 7 removes this restriction – multiple volumes can now be defragmented in parallel, with no more waiting for one volume to finish before initiating the same operation on another volume! The screenshots below show how defragmentation can be concurrently scheduled on multiple volumes:


Windows 7 Defrag Schedule


Windows 7 Defrag Disk Selection


Among the other changes under the hood in Windows 7 are the following:



  • Defragmentation in Windows 7 is more comprehensive – many files that could not be re-located in Windows Vista or earlier versions can now be optimally relocated. In particular, a lot of work was done to make various NTFS metadata files movable. This ability to relocate NTFS metadata files also benefits volume shrink, since it enables the system to pack all files and file system metadata more closely and free up space “at the end,” which can be reclaimed if required.

  • If solid-state media is detected, Windows disables defragmentation on that disk. The physical nature of solid-state media is such that defragmentation is not needed and in fact, could decrease overall media lifetime in certain cases.

  • By default, defragmentation is disabled on Windows Server 2008 R2 (the Windows 7 server release). Given the variability of server workloads, defragmentation should be enabled and scheduled only by an administrator who understands those workloads.

Best practices for using defragmentation in Windows 7 are simple – you do not need to do anything! Defragmentation is scheduled to automatically run periodically and in the background with minimal impact to foreground activity. This ensures that data on your hard disk drives is efficiently placed so the system can provide optimal responsiveness and I can continue to enjoy glitch-free listening to the Eagles :-).


Rajeev and Matt

Comments (92)

  1. ari9910 says:

    Has it ever occurred to anyone on the team that the best time to run things like the defragmenter, updater, and backup programs is when we AREN’T using the system? As said in the blog, it won’t run if we turn the system off every night. So if those things ran when we weren’t using the computer yet it was still on, those operations would run and we wouldn’t have to worry about it. I find it increasingly annoying that the updater wants to update (and annoy me about it) while I’m using the machine, because I shut the computer off at the 3:00AM time it is set to run. The same goes for the defragmenter. I install 3rd party programs because of that; I would appreciate it if I didn’t have to do that because I personally love Microsoft and hope that one day it will actually think more about customer satisfaction and less about profits (to a reasonable degree of course).

  2. Asesh says:

    What about a new file system that doesn’t need defragmenting? When will we see such a file system in Windows? Linux already has one. We had high hopes for WinFS and hopefully when it’s done it will remove the need to defragment our hard disks from time to time.

    Please don’t release the next version of Windows (after Windows 7) until WinFS is done and a UNIX based kernel or a better one is implemented because Windows is still prone to viruses unlike Mac OS X and Linux which are more secure than Windows.

    I hope all the Windows coders out there will see this because it’s high time we had such a file system and kernel. Thanks

  3. mark_ms says:

    I’ve had a keen interest in defrag products since the days of the 286 and 40Mb hard drives. I find it particularly refreshing to learn about where the line crosses between useful defragging and wasted effort and how much newer technology has been minimizing the effect of fragmentation. Now that I have a better understanding of how Windows 7 defrags, I am now even less concerned about managing it myself.

    I really pity the 3rd party defragmenters out there. They now have a higher bar to clear to convince me that the default defragger isn’t good enough.

    I don’t really mind the defragger kicking in when I’m doing low I/O work e.g. browsing the web but from the screenshots, it seems that this has to be scheduled. I can’t always know what kind of work I would be doing and am reluctant to specify a time when I know the computer would be on and I am working on it. So, I would prefer that there is an option for the defragger to kick in during low I/O if it hasn’t been getting a chance to run at the normal scheduled time.

    One thing I do is recompress tv recordings. Since my recompressor is currently single-threaded, I am very much tempted to run two at the same time on my dual-core machine. Would this create much more fragmented files than if I run them one at a time?

  4. obsidience says:

    First off, I’m delighted that the disk defragmentation tool was deemed important enough to be blogged about.  Kudos!

    But if you want to improve the speed and efficiency of this new OS then you need to think about:

    1) A defrag tool that can be scheduled to run outside of Windows when no files are locked by the OS.  This would allow for a fully defragged disk.

    2) A defrag tool that can profile a standard computer session and defrag files accordingly.  Moving heavily accessed files to the outer areas of the disk platter where data density to rotational speed is highest

    3) A defrag tool that analyzes the boot up pattern and can place files sequentially along the zone allowing for less access and more reads.

    If you investigated these ideas I’m sure you could further reduce your boot up times by a substantial amount.

  5. SouthPaw42 says:

    Wouldn’t a file system that included a process to reduce fragmentation during file creation and update be better? I have an OS HDD that is 60% empty space but after 30 days is 50% fragmented.  A drive that empty should not be fragmented.

  6. Vyacheslav Lanovets says:

    I believe that scheduling logic must be changed too.  People who do not understand what defrag is tend to turn off their PCs early. Windows MUST detect that defrag did not have a chance to run for a month or two and advise changing scheduled time accordingly or even do it silently according to user behaviour pattern. But this will be too user-friendly for Microsoft 🙂

    For advanced users Vista defrag can be scheduled to run at boot time, and in this mode it seems to defrag more than during user session. And reboot can be scheduled too. And sometimes even hardware wakeup can be scheduled 🙂

    Like Vista itself, the Vista defrag is already good, but it seems like the scheduling logic did not change in Windows 7. Kind of: it won’t work for my mom, who really needs defrag and does not leave her PC turned on at 1:00 AM!

    Windows 8 or 9?

  7. II ARROWS says:

    Asesh, "file system that doesn’t need defragmenting" simply doesn’t exist.

    You wrote "Unix"; those file systems defragment while writing new files or expanding them. The result is slower writing time, and in most cases if a file is being written you are working, so this slows your work. Windows performs this while you are not working.

    Think about converting a video: this operation requires a large number of I/O operations in RAM and on disk, generating a lot of files that have to be deleted at the end of the process. If the file system had to defrag while writing, conversion time would be increased.

    Also, you wrote "WinFS". WinFS was not a file system, but an added layer that works on top of NTFS, a heavy one.

    An idea on which WinFS was built is splitting up the metadata from the data, and in this article I see another step closer to WinFS.

  8. sirus says:

    @ Asesh

    A file system which has the characteristic you mention doesn’t exist. If you take Linux and its most used file system, which is EXT3, the fragmentation percentage is indeed low; however, it’s calculated only on big chunks, just like on Windows Vista. If you try another file system such as XFS, which is certainly faster than EXT3, you’ll notice that it suffers greater fragmentation, and due to that XFS is provided with a defragmentation tool.

    Ending the off-topic: I’d like an "intelligent" defragmentation process that runs when I’m not heavily using my HD, and possibly an "intelligent" scheduler which automatically modifies its configuration based on users’ habits.

  9. Eiki says:

    What about SSD? How does this defragmentation process change if there is an SSD in place?

  10. RobertWrayUK says:

    @Eiki

    One of the bullet points at the end of the entry:

    "If solid-state media is detected, Windows disables defragmentation on that disk. The physical nature of solid-state media is such that defragmentation is not needed and in fact, could decrease overall media lifetime in certain cases."

  11. dovella says:

    My computer is always on with Vista or Windows 7.

    Automatic defrag for me is very interesting and works fine;

    in the Beta of Windows 7, Defrag is even better.

  12. d_e says:

    @Asesh: It seems you have no clue about kernels and filesystems. Especially the windows kernel.

    1. WinFS is not a filesystem. It’s a database. NTFS runs underneath.

    2. There is no filesystem which doesn’t fragment. Such a thing is impossible.

    3. "UNIX or better". What do you mean by this? Unix-compatible systems aren’t inherently more secure than Windows. The design of Unix is very old (and IMHO outdated). I’m confident that Microsoft is wise enough to keep their current kernel.

    4. A secure kernel doesn’t give you a secure computer. Because most users aren’t computer literate and will gladly install anything if an email tells them to do so.

    To the article: I believe defragmentation is misunderstood by most users. Many think this is some sort of magic powder that will make their machine much faster. It won’t. Users shouldn’t have to worry about defragmentation. The decision to hide all this in Vista was the right one. You guys get the 100+ emails from self-proclaimed experts who (mostly) have no clue what they’re writing about.

    The only thing I’m wondering is why my disk (according to the HDD LED) works only 50% when resuming from hibernate…

  13. II ARROWS says:

    I’ve a question about my test configuration:

    I’ve an HDD with Vista x64 and 7 x64, another disk used for data.

    I always move Document(and music, picture and video) folder on the second disk to preserve them if I need to format or simply change machine.

    The structure is like XP:

    My Documents

    |—-My Music

    |—-My Pictures

    |—-My Videos

    After setting the folders in the library and setting them as the default save directories, if I boot Vista and try to access the Documents subfolders (only music, pictures and videos) it alerts me that I need to be the owner. Only for those folders; files in My Documents can be opened and modified.

    Might it be because of the new "intelligent defrag" splitting metadata, and 7 doesn’t want Vista to "undo" 7’s work?

    Or is it just a bug that 7 wants those rights?

  14. Asesh says:

    @d_e

    Read it carefully, I didn’t say WinFS is a file system. And a Unix-based kernel is more secure than the Windows kernel 😛 Yes, I do use both Linux and Mac OS X so I can say that.

  15. tgrand says:

    "The physical nature of solid-state media is such that defragmentation is not needed…"

    Did you guys check this or just take some SSD manufacturers word for it?  It’s really disappointing to hear this from you.  I was skeptical about this claim so I tested it myself.  It wasn’t hard at all to get a 30% performance hit reading a fragmented file from an SSD.  Sure, that’s not nearly as bad as it would’ve been on a spinning disk, but 30% is still significant – especially when you’re paying a huge price premium to use one of these drives for performance reasons!

    I argued this point with the manufacturer and they eventually conceded, but said the performance benefit of defragging was outweighed by the shortening of the drive’s lifespan.  I said "OK great, you should claim this, instead of claiming that it’s ‘not needed’, which is a half-truth at best."  As you might imagine, they weren’t very receptive to that idea.  And I only got to do some limited testing.  I’m sure things get much worse when you have a nearly full system drive used over a long time.

    By the way, here’s why fragmentation hurts on an SSD despite the minimal "seek time" penalty: most SSDs only get their high throughput when the individual I/O requests are for sufficiently large amounts of data.  Fragmentation can easily turn a file read operation from a handful of large fast reads into hundreds of tiny slow ones.

  16. SvenC says:

    @asesh: how would you interpret your sentence "What about a new file system that doesn’t need defragmenting? When will we see such a file system in Windows? Linux already has one. We had high hopes for WinFS and hopefully when it’s done it will remove the need to defragment our hard disks from time to time." other than that you call WinFS a file system?

    Can you give some or at least one technical example to show us how Unix based systems are more secure than Windows systems which are based on an NT kernel?

    I don’t see how using an OS gives you clues on the security level of an OS.

    SvenC

  17. LocTeam says:

    I wish the file system could treat expandable files differently than static files.  Much of the fragmentation on an aged system is from files that have been fragmented due to them being expanded.  Files that are frequently expanded need to have a lot of extra room at the end of the file, where static files (ie: dlls) are never expanded and can be packed tight.  I believe that Raxco recognized this more than 15 years ago on VMS.

    Shan

  18. Vyacheslav Lanovets says:

    @sirus It’s a good point. For a typical user defrag should happen during disk idle times and maybe share that idle time with desktop search service. Defrag should happen only when on AC power.

    Also there should be less HDD thrashing during idle time, because HDDs seem to be TOO loud when building search indexes. That happens because Windows architects think that they can put 100% load on the disk during disk idle time. They did not think about all the noise they create with this.  Especially at night!!

    The silly idea that most users should know how to tweak their systems should go away. I liked to watch defragmentation status screens since Norton Speedisk for DOS but that time has gone. I don’t care anymore.

    I know many professional C++ devs who don’t care about installing updated drivers in spite of having occasional BSODs. And they would not run defrag, they would just use Macs at home.

    I am sure that PMs for the Defrag feature understand all that well but are not brave enough to say it out loud and then do something to implement. (Yes, I am lazy too).

    @Asesh I am sorry about that but statistically Windows Vista seems to be more secure and reliable than Unix systems. Reliable – in terms of OS. Not hardware.

    Apple has full control on the hardware and it does a lot of testing of only that specific combination of hardware. MS will never achieve that in a PC. For instance, my Nvidia _Business_ Platform motherboard had Vista stickers all over it but it has faulty Ethernet driver that hangs system when I unplug network cable (independently of what the cable is connected to from the other side).

  19. smartpatrol says:

    "The design of Unix is very old (and IMHO outdated)."

    I hear this a lot. Being familiar with Unix, Unix-like OSes and Windows, I just have to LOL. I laugh the same at the old Unix battle axes that think Windows is not a serious server OS and is unstable.  From my experience 99% of Windows instability comes from 3rd party poor software design/coding; companies not following clear guidelines on how to develop for Microsoft OSes.

    In regards to defrag, I still miss the Norton Disk Doctor defrag interface from the DOS days or even Win98 (please consider bringing this view back, MS!) – it was mesmerizing watching it stack the blocks.

    Anyway thanks for the laugh.

  20. nicbot says:

    First, great article and nice work!

    Now, my issue here does not have so much to do with the way Windows handles fragmented files and I/O; it’s the implied state the end user is asked to leave their computer in: on.

    Leaving your computer on 24/7 is an absurd waste of power/energy and is, in my opinion, plain irresponsible in this day and age.

    I feel it would be a huge step forward and gesture on Microsoft’s behalf if they were to either move to a more efficient way to concurrently handle defragging in the background while performing common computing (ie- while machine is idle) to eliminate the need for scheduled defragmenting in the first place.  Or if they promoted responsible use by NOT having the schedule default to 1 am and explain to the user in some way so they can be educated as to why.

    I realize it’s a stretch, but a top down approach to helping to solve a global crisis would be HUGE.  And this seems like such a relatively easy thing to implement imo.

    Regards.

  21. Hairs says:

    I’ve noticed much the same thing having spent the past month or two experimenting with defragging – much of the fragmentation comes from files that it wouldn’t be difficult for Windows to "know" are going to become fragmented – Web browser caches are particularly bad for this. Maybe having a look at the layout rules for NTFS again might not be a bad idea.

  22. gss4w says:

    The defragment option prior to Vista was a great tool for incompetent IT help desk techs to use for problems that could not be solved with their number one solution of rebooting the computer.

    Disk defragmentation would take hours to complete, and had pretty pictures to show that the computer was doing something.  With any luck the user would give up asking for help by the time the defrag was complete and the ticket could be closed.

  23. mgarson says:

    Thanks everyone for your comments so far, we’ve noticed several common questions that we’d like to answer:

    @ari9910/Vyacheslav Lanovets/nicbot: The default scheduled run time was picked to avoid interfering with interactive usage. These defaults, of course, can be changed. If defrag is unable to run at the scheduled time because the computer is not on, it will then automatically be scheduled to run next when the computer is idle.

    @Southpaw42: The drawback to defragmenting at creation or update time is that it adds potentially significant latency and I/O overhead to the operation, especially if the create/update is blocked from completing until the defrag is complete. For Windows, that would not be an appropriate design as it would trade off system responsiveness, which is very important, in favor of decreasing fragmentation, a relatively less valuable objective.

    @tgrand: There are several reasons for disabling defrag on SSDs – keep in mind that SSDs are a relatively recent technology. Our internal evaluation of SSDs demonstrated that there’s a significant amount of variability in delivered performance. While there are possible benefits to defragmenting SSDs (such as coalescing free space and being able to issue I/O in larger chunks), we were concerned about the potential of decreasing the life time of the flash media from additional I/O. We determined that prioritizing media lifetime and ensuring reliability of data was the correct choice. As expected, we will continue to actively monitor and test this new media type to ensure we optimize our behavior appropriately.

    Hope you’ve had a chance to try out the Win7 Beta!

    Matt Garson

    File and Block Storage Team

    Microsoft

  24. martin_mine@hotmail.com says:

    The defrag tool is much improved in Windows 7 compared to Vista! There are some functions that I miss in the defrag tool:

    – Defrag on boot (also defrags system files and the registry)

    – A diagram that shows the current filestructure on the HDD we are defragmenting

    – A progressbar

    – A field which shows what files that are being defragmented

    I like the new design, but it could be improved.

    Martin

  25. SvenC says:

    Hi Matt,

    when you say

    "The default scheduled run time was picked to avoid interfering with interactive usage. These defaults, of course, can be changed. If defrag is unable to run at the scheduled time because the computer is not on, it will then automatically be scheduled to run next when the computer is idle"

    do you say that this is the default behavior of the defrag task or do you say that the user can (must) reconfigure it to work like that?

    I just checked my defrag task. The last time should have been 28.1. on 1 am. The system (Windows 7 beta 1 x86) was off at that time. Yesterday and today it was at least idle for 40-60 minutes when I went to launch. But defragging was not started. The last time shown in the task scheduler was my manual defrag action last week.

    What should I expect here? Does the defrag task not update the "last run time" when it is started but does not find a drive worth defragging? Or is this a scheduling bug?

  26. tgrand says:

    Thanks for the reply, Matt!  That’s exactly the kind of explanation that I would hope to see when someone talks about defrag and SSDs.  I understand and agree with your approach.  I just think that sooner or later, people will find out that SSDs are not magically immune to fragmentation, and they’ll appreciate having a better understanding of the situation.  Admittedly, it’s a pretty complex picture.

    It will be very interesting to see how this issue evolves over time.

  27. Neken says:

    Alright, seriously, this 1AM auto-scheduler is simply a flaw at all levels. First, because almost nobody leaves their computer on all night. Secondly, because EVEN if it’s on at 1AM, it’s probably because I’m working on it. Third, because EVEN if it reschedules automatically for another day, it will still have problem #2.

    I think, like most people said here, the best way would be to automatically start it when the I/O load has been low for some time (like while I’m browsing for 2 hours) and make the defrag able to start and stop quite rapidly.

    The same thing applies to windows updates.

  28. sokolum says:

    It would be nice if the system would consider the file type and place files in a pre-reserved place on the hard disk.

    To make my point:

    A .txt or .log is usually a lot smaller than an .MP3, and an .MP3 in turn is usually a lot smaller than an .AVI.

    System files would never grow, until they got replaced/upgraded.

    I believe for some file types you could define a place on the hard drive so they don’t end up all over the place.

  29. Surt says:

    It seems like the algorithm for defragmentation itself must be very poor.  Even on an otherwise idle system, with >50% free disk space, it takes far longer to defragment than 2 full reads and writes of the data would explain.  An order of magnitude more at least.  Is any attention being given to actually trying to make defragmentation faster?

  30. mgarson says:

    @SvenC/neken: Let me explain further how the scheduler works. Defrag won’t actually run at 1AM, unless the machine is idle at that time and, if you’re using a laptop, not on battery power. (Conserving battery power is an important goal for Win7 and we’ve made changes to support this throughout the system.) In addition, if defrag is running and you start to use the system, defrag will stop until the system becomes idle again at which point it will resume.

    Quite a bit of work went into intelligently detecting idle time and interactive use. You can learn more at the following links:

    PDC – http://channel9.msdn.com/pdc2008/PC19/

    Paper – Designing Efficient Background Processes: http://www.microsoft.com/whdc/system/pnppwr/powermgmt/BackgroundProcs.mspx

    Matt Garson

    File and Block Storage Team

    Microsoft

  31. wtroost says:

    Just writing to agree with sokolum: "It would be nice if the system would consider the file type and place files in a pre-reserved place on the hard disk."

    On a side note, much of the performance loss people complain about is related to Explorer add-ons (not disk defrag).  Any chance for an Explorer add-on manager of some kind? Run them in a different process, please!

  32. Anonymuos says:

    Can you make it so that when multiple volumes are selected, they can be defragged one after the other (not in parallel) from the GUI? Otherwise I have to use defrag.exe and give up on the nice GUI. Defragmenting multiple volumes simultaneously takes a performance hit if I’m doing something else on my PC at the same time.

  33. graham.lv says:

    I didn’t see this – guess I’ve been busy using 7.  Anyway I just recently sent feedback that it’s useless.  Got a Samsung 160GB Sata II small HD on my test machine and have used about 52 GB.

    Ran Defrag and it just puts ‘pass 1  0.5%’ and up to 100%, then it puts ‘pass 2  35%’, etc. – apparently there are 10 passes – I didn’t stop to watch, I watered the garden, did the washing and cooked a meal and it was still going… and going… after 4 hours it was on about ‘pass 10  .05%’.

    There are commercial defragers that do a little bit in the background – or  – they may be bull..

    Whichever way – over 4 hours of not touching the computer to Defrag 52 GB means that Defrag is totally USELESS and will never work and/or run.

    160 GB is the smallest HD I could buy – most people will be buying 2 TB.  3 months to Defrag????

  34. Mattisdada says:

    I was under the impression that the performance boost of defragging an SSD is only about 0.5%. As the access time is virtually 0 on an SSD, since it uses flash memory rather than a spinning platter, it didn’t matter as much if the files were a bit muddled…. It could find them very quickly. A small overhead at most….. not worth the loss of lifetime and reliability.

    And the whole "the Windows kernel (vs. Unix) is insecure" argument going on…. is just insane. Obviously the Windows kernel is more secure….

    I’ll try to explain it metaphorically. There are two towns in ye olde medieval days. One town is a large town of millions (Windows users); it’s heavily fortified, and there are always attackers trying to take over its lands, get inside and kill the people inside. Every now and then one does get in and gives a few people the plague, but they have pretty good doctors and they kill that type of plague.

    Now town 2. Well, it’s a small little forest village. It only has about 500 people. A very community-like village. They have no riches. They have nothing. No one attacks them. But the big rich city always has people trying to attack it…

    So although at times Windows may seem insecure, it’s EXTREMELY secure, relatively. It’s just a matter of perspective. Oh, and sorry if my story sucked at trying to explain it to people who blatantly don’t understand :). And most instability is indeed 3rd party. When Vista came out, something like 70% of crashes were caused by nVidia, 5% ATI, 22% others, 3% Microsoft. Or something like that, I can’t remember the specifics. But it was a mostly-nVidia’s-fault scenario.

  35. tgrand says:

    Mattisdada:

    That is a very common misconception, propagated by false claims from SSD manufacturers, software vendors, reviewers, etc.  People use the relatively low access time of SSDs to convince themselves and others that fragmentation is a non-issue.  But really, there are two main factors that determine how long it takes to read a file from start to finish: 1) access time (a time cost paid per I/O request) and 2) throughput (how much data can be read per unit of time).

    On a traditional spinning disk, the relatively high access time is what causes the most slowdown when trying to read a fragmented file.  If you need to read a file which is in 200 fragments, and your seek time is an average of 9ms, that’s 1.8 seconds spent just seeking around.  On an SSD, that might be more like 200 * 0.1ms = 20ms.  A 90x improvement.

    But access time is only part of the picture.  The other part is throughput, and it’s very important.  Let’s say your SSD is capable of reading at 200MB/sec.  Well guess what – the throughput of an SSD is actually quite variable depending on I/O size.  The size of a single I/O request affects the throughput you get.  Look at the graphs here:

    http://www.guru3d.com/article/gskill-ssd-solid-state-disk-64-gb-review/6

    If you have to issue a bunch of I/O requests for say, 64KB and under, you’re looking at 2-10x decrease in throughput for those requests.  The more fragmented a file is, the more small I/O requests are going to be needed to read it.  It will not be as fast as reading a contiguous file using larger I/O requests, and the difference could easily be much more than 0.5%.

    As I said before, I measured a 30% hit myself.  In that particular test, I used XP, an SATA2 128GB MLC SSD, a 180MB file in 200 fragments (downloaded by Firefox), SysInternals contig to measure and remove fragmentation, filemon to check actual I/O sizes, and a program I wrote to do read timing using different APIs, flags, and requested I/O sizes.
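
    A rough sketch of that kind of experiment in Python (the file name is a placeholder; without a cache-bypassing flag, such as opening the file unbuffered through the Win32 API as described above, the OS file cache will hide the effect after the first pass):

        import time

        def read_throughput(path, request_size):
            """MB/s achieved reading the file sequentially with a given request size."""
            total = 0
            start = time.perf_counter()
            with open(path, "rb", buffering=0) as f:
                while True:
                    chunk = f.read(request_size)
                    if not chunk:
                        break
                    total += len(chunk)
            return total / (time.perf_counter() - start) / (1024 * 1024)

        for size in (4 * 1024, 64 * 1024, 1024 * 1024):   # 4 KB, 64 KB, 1 MB requests
            mbps = read_throughput("big_test_file.bin", size)   # placeholder file name
            print(f"{size // 1024:5d} KB requests: {mbps:7.1f} MB/s")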

  36. adir1 says:

    Nice summary of graphs and charts in the beginning, but I feel like a big "feature" is missing here.

    Perhaps it is not directly related to disk fragmentation (though I thought I heard it was back in the early Vista days), but what about "aligning" software for faster loads?

    A great example is during system boot – the OS knows it will need a lot of drivers, registry, and executable files (DLLs, etc). Aligning all those for one or very few contiguous loads would significantly improve startup.

    Granted this is something that can be "prepared" during the initial OS setup, but over time as you add hardware and update kernel pieces (security, anyone?), what will re-align these pieces for fast load?

    Plus, what about Icons and Background graphics and other such "nonsense" – this is all part of "creating" the user desktop, and the faster it happens the better the experience!

  37. adir1 says:

    One more comment, to add my 2 cents to what tgrand mentions above:

    1. SSD drives are in their infancy, and manufacturers (especially Intel, I hear) are making huge leaps forward with the way data is internally organized, to provide huge seek/throughput improvements.

    2. Due to the nature of Flash memory, writing and re-writing to the same memory regions degrades the media at an accelerated rate. To me, that says that the proper place to "defragment" a file is inside the internal firmware of the SSD drive, and not externally by the OS. Anyway, I understand most firmware already makes these decisions of where to physically write data, separate from the "logical" OS positioning, based in part on media degradation optimizations.

    On a last note, I’d like to see a post about what Windows is doing to optimize the OS for SSD drives. There is a world of functionality and improvements that can be gained, way beyond the silly "Boost" or whatever that thing is called in Vista. I am talking about scenarios where the boot partition is an SSD, or other kinds of "mix" where the system contains both SSD and old-school drives. Another post, Win 7 team, perhaps?

  38. tgrand says:

    adir1:

    You said "the proper place to ‘defragment’ a file is inside internal firmware of the SSD drive."  But how could this be done?  The kind of fragmentation we’re talking about here occurs at the filesystem level – the "logical" OS positioning as you called it.  Only the OS can manage the filesystem.  A storage device can’t possibly do it.

    It sounds like you’re either mixing the concepts of filesystem fragmentation and wear leveling (they’re really completely separate), or you’re suggesting there should be some kind of new and very different interaction between OS and storage device…?

  39. Rudi Larno says:

    I must agree with adir1, sokolum, hairs and shan.

    How windows organizes the disk is far more relevant than the old-school ‘defragmentation’ routine. The first graph shows that disks are too slow compared to the CPU. So why not use the cpu power to determine the most optimal place for a file when it is written? And use a continuously running service to optimize the disk when not in use. (like diskeeper)

    I’ve written more of how I’d like to see windows organize the disk @ http://www.larud.net/subtext/archive/2009/02/10/46.aspx

  40. Mr32bit says:

    I would add this to the Disk Defrag option.

    Have an option of allowing disk defrag during a screen saver. This would help out since it is during an idle time. Make it a default and allow it to be turned off.

    I like the other suggestions provided too, but I think it might hinder performance if the OS is continually monitoring when "idle" time is available.

    Disk defrag during a screen saver is a good option. Although not everyone turns their screen saver on, it would be helpful.

  41. DWalker says:

    Mr32bit, I think the OS always knows when the computer is idle.  It’s not that hard to detect.

    And, screen savers (and screen power-off) on laptops that are running on battery power are there to save the battery; defragging while running on battery may not be ideal.

  42. RikDederly says:

    I have mixed feelings about the following suggestions, but these are ideas I had while reading the post.

    1) Schedule the hard drive to turn on at a specified time and turn off once the defrag is complete. The drawback would be the additional power used in the middle of the night. The benefit would be a defragmented drive at little impact to the user. Of course, there would be no connection to a network or the internet when this occurs.

    2) Create an automatic defrag that occurs when the system has been idle for 3-4 hours. Under normal use, this would only occur when your system has been left on overnight. Therefore, the user would only need to leave the system on during any given night. The current schedule listed above appears to require the user to remember to leave the system on on a given day (such as Wednesday). Many users would likely forget to leave their system on during their scheduled defrag time.

  43. Victor Dubina says:

    To schedule automatic defragmentation on idle, simply go to Control Panel > Administrative Tools > Task Scheduler. Expand Task Scheduler Library > Microsoft > Windows. Find the "Defrag" subfolder, right click on the "ScheduledDefrag" task, and choose "Properties" from the drop-down menu. Click the "Triggers" tab.

    Here you may add a new trigger. The only limit is your imagination 🙂 For instance, choose "On idle" from the "Begin a task:" list. Press OK or fine-tune with the advanced settings.

    As simple as that.

  44. theophoretos says:

    Something good that I just read about windows 7 is that "termination of defragmentation would not damage the system". But does this mean that it would on Vista? I mean, today I started defragmenting my vista HD for the first time, when it has only 10 gb left on a 250 gb HD. After more than an hour, I noticed that defrag.exe was not even consuming any CPU at all on the task manager and assumed that it was all done, and so I killed the process. Only then did I notice it was still going on before I killed it. Would this have damaged any data on my drive?

    Having a status report while defrag is in process now has another reason: so you can actually tell that it is still going on!

  45. tgrand says:

    I believe the defrag API and implementation in modern Windows is set up so that it shouldn’t be possible to have data loss or corruption as a result of interrupting the defrag process – whether the interruption is you killing a process, a driver causing a BSOD, power to the system being cut, etc.  I highly doubt there was any fundamental change here between Vista and Windows 7.  But it would’ve been nice if you’d named your source.

  46. graziano says:

    I read somewhere that the defragmenter in Vista can’t defragment files over 64MB and doesn’t include these files in the list of top fragmented files when you run the analysis from the DOS prompt. Is this true and does this limitation still exist in the Windows 7 defragmenter?

  47. derosnec says:

    Not everything gets defragged.  Some system files are immovable.  See:

    http://support.microsoft.com/kb/227350

    (Files Excluded by the Disk Defragmenter Tool)

    see also:

    http://support.microsoft.com/kb/961095

    http://support.microsoft.com/kb/174619

  48. Bertrand2 says:

    @obsidience:

    Your "zones" proposal does sound like a good idea, but I think it is probably bad both in theory and in practice.

    In theory, it falls foul of a few principles:

    1. Code optimization is an empirical science. You have not measured the cost of not using the zone system. Of course, this is overcome with a fair bit of work. But as far as we know, your proposal may not deliver much in the way of savings, but will create a number of complications.

    2. If you don’t want a radical redesign of either the filesystem or the Windows API, you will need to have the defragmenter make assumptions about the semantic content of the files. This is all kinds of bad. First of all, it will probably be wrong. Even if you get it right, it will become wrong with changes to the API. This sort of thing was the number 1 cause of bugs in earlier Windows OSes and their applications. See Raymond Chen’s ‘The Old New Thing’ blog for various rants. It used to be a little more acceptable, because without those optimizations Windows would have run so slowly as to be unusable, but today it is Really Wrong.

    3. Radically changing the API to optimize a single application is probably not a great idea. It passes on the (economic, not clock) cost to other developers.

    4. Radically changing the filesystem to optimize a single application is also probably not a great idea. You are liable to break third party tools, windows tools, etc. that will corrupt the filesystem if they are allowed to run.

    In practice:

    Much of the stuff that belongs in the different zones resides in a single file. You usually store resources like icons within your executable file.

    1. If you do decide to change the API, so that applications have to store their icons separately from their executable, and that sort of thing, many existing applications will not do this. You cannot just break all these applications, or you will get "Windows 8 is broken" from users, so you will need to keep the old ‘deprecated’ functions. There is absolutely no benefit to the developer of using the ‘new and improved’ interface, since from their perspective there is no improvement, so they will continue to use the deprecated functions. Even if it does start getting used, it will take 5+ years for the majority of applications to start using it. By that time, NTFS may be replaced, hard drives will have mostly become SSD, etc.

    2. If you change the filesystem, say to add predefined tags, so that the filesystem knows which zone a file belongs in, the same argument holds. Also, you are liable to make the filesystem unreadable by earlier versions of Windows, third party tools, etc. You will also need to ensure that the OS is backwards compatible with the old filesystem: Samba will take a while to incorporate this as best it can, but if you don’t do this you will break any number of file servers. People will not say "Oh. Samba is broken. It doesn’t follow the new NTFS filesystem properly." They will say "Windows 8 is fscked. Don’t let it near a corporate network."

    3. If you avoid making these two changes, apart from the theoretical problem, you still need to face the problem of embedded resources. Should you split the file into two zones? This will slow down copying and writing the file. Or should you cache a copy of the resource in the correct zone. Caching is a useful tool, but apart from the space cost, it runs the danger of becoming dirty, eg. when your application crashes. Usually you can avoid this by cleaning the cache when your application starts. We are talking about an operating system, though. "When the application starts" means when the OS boots up. The cache would then actually serve to slow down the boot sequence. If you don’t do this, you *will* get a dirty cache when you have a power failure, etc. But let’s put that aside. The system will need to look in your home directory to see what’s on the desktop and what’s in your start menu. It will then need to see what resources the applications have, presumably by reading them. Then it will go to the cache to load the resource, rather than just getting it from the file.

    I cannot see any obvious benefits here. What you can do, and this is what many other OSes do by default, is put the system and applications in one partition and the data in another. You can do this yourself. Just mount a data partition to the home directory. Obviously it’s not a total solution because you probably have a bunch of applications sitting in your desktop or in your "Documents" folder within your home directory, but it may go some way towards your "zone" system.

  49. stewartjazmin says:

    One thing I was expecting from XP onward is a better, extended disk map, showing the files or groups of files in each "cluster" or "square" at that zoom level – one that tells me where each file is.

    Instead of that, and even when they say they’ve listened -"In Windows Vista, we had removed all of the UI that would provide detailed defragmentation status. We received feedback that you didn’t like this decision, so we listened, evaluated the various tradeoffs, and have built a new GUI for defrag!"

    Say, a new one, with no info! (not even the XP disk map)

    I remember Norton (where you took your defrag from) getting more and more intelligent and user-configurable.

    It supported both automatic and user-defined features.

    It could place files according to filter conditions at the beginning/end/middle of the drive.

    This meant I could move the fixed-size pagefile to the end, the boot files to the beginning (at that time there were no software updates), and the rest in the middle of the disk.

    A complete disk map

    And a set of other features I don’t recall right now. But that was the Norton Utilities, before it was Symantec.

  50. stewartjazmin says:

    SSDs have internal "wear-leveling" logic that intends to use ALL the cells in the drive the same number of times. That means you modify an existing file, and it will intentionally save the new blocks to a different place on a chip, or onto a different chip, no matter where the file system thinks the file is really located.

    So, even if you defrag them, or use a tool to image the drive and restore it (some of them restore the files in a contiguous fashion), it will fragment big time internally.

  51. Haxxer Jax says:

    RE.

    "Nice summary of graphs and charts in the beginning, but I feel like big "feature" is missing here.

    Perhaps it is not directly related to disk fragmentation (though I though I heard it was back in early Vista days), but what about "aligning" of software for faster loads.

    Great example is during system boot – the OS knows it will need a lot of drivers, registry, executable files (dlls, etc). Aligning all those for one or very few contiguous load would significantly improve startup.

    Granted, this is something that can be "prepared" during the initial OS setup, but over time as you add hardware and update kernel pieces (security, anyone?), what will re-align these pieces for fast loading?

    Plus, what about icons and background graphics and other such "nonsense" – this is all part of "creating" the user desktop, and the faster it happens the better the experience!"

    Tuesday, February 03, 2009 1:14 PM by adir1

    TRY :

    Start a command prompt: right-click and select "Run as administrator".

    Type defrag c: /b

    Command prompt shows :

    C:\Windows\system32>defrag c: /b

    Microsoft Disk Defragmenter

    Copyright (c) 2007 Microsoft Corp.

    Invoking boot optimization on (C:)…

    Pre-Defragmentation Report:

           Volume Information:

                   Volume size                 = 1,81 TB

                   Free space                  = 1,58 TB

                   Total fragmented space      = 0%

                   Largest free space size     = 920,24 GB

           Note: File fragments larger than 64MB are not included in the fragmentation statistics.

    The operation completed successfully.

    Post Defragmentation Report:

           Volume Information:

                   Volume size                 = 1,81 TB

                   Free space                  = 1,58 TB

                   Total fragmented space      = 0%

                   Largest free space size     = 20,00 MB

           Note: File fragments larger than 64MB are not included in the fragmentation statistics.

    C:\Windows\system32>

    Wheeee !!!
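
    For anyone curious but not wanting to move any data around, the Windows 7 defrag.exe can also be run in analysis-only mode; a minimal example from an elevated prompt (switch names as listed by defrag /? on Windows 7 – older versions use different switches). The first line only analyzes and prints a verbose report; the second actually defragments, showing progress and a verbose report:

        defrag C: /A /V
        defrag C: /U /V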

  52. Matt Klein says:

    How stupid are you people!?!? So, go figure, I just found out that my Vista Ultimate machine cannot Analyze a Drive to even see if it needs to be defragged, and after some searching, I am led here to find that this was a purposeful decision on the part of someone who must have at least a little intelligence.

    I DO NOT KEEP MY COMPUTER ON 24/7, MORONS! Most home users don’t. I knew enough to check whether I needed to Defrag every month or so. Now I have no choice and no way of telling the progress. And when I turn my laptop on, on a Wednesday evening apparently, and my Defrag is running… I don’t even know it. Maybe that’s why my system sometimes lags, but it sure would have been nice to have been clued in on why! And you wonder why people make the jump to Apple!? You alienate your own users.

  53. pccure says:

    It’s very useful information for everyone. I’m glad about your work. Thanks for sharing; please keep updating. It helps a lot.

  54. Cristiano C says:

    I wonder why Microsoft’s engineers took so much time to get it, and I am not sure they completely get why other defrag programs are more effective.

    Speeding up disk access (i.e., reducing total seek latency by reducing disk seeks) is not related only to the fragmentation of individual files but also to the location of multiple files. Where these files are located on the disk (the beginning is usually faster than the end) and how far apart the files used by a process are from each other make all the difference. Some years ago I clearly posted here how it should work: http://www.mydefrag.com/forum/index.php?topic=117.0

    Jeroen, the author of MyDefrag (formerly JkDefrag) got it. Other people got it too (see the post).

    Once the previous problem is solved, reducing the defragmentation time is another beast. The best solution should aim to figure out where all the files should end up on the volume before the defragmentation starts (do it algorithmically in RAM), and then find the quickest way to get each file there.

    Strategies that plan the disk zones so the volume does not quickly become fragmented again are another thing to consider.

    Finally, incremental solutions vs. optimizing everything at once are also viable, but they complicate the scenario. Personally, I would prefer to have the computer run the defrag every other month if I can be sure that at the end my computer will really be faster and will not get fragmented more quickly than before running the defrag.

    I hope this helps.

    Best,

      Chris

    When will we have such a defrag?

  55. Ed B says:

    You can schedule a defrag in Windows XP:

    Scheduled Tasks –> Browse for a new task and create –> C:\Windows\system32\defrag.exe c: -f

    Yay.
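
    Roughly the same task can be created from a command prompt instead of clicking through the wizard. A sketch for XP – schtasks ships with XP and later, but the /st time format differs between versions (XP expects HH:MM:SS, newer releases expect HH:MM), and the task name "Weekly Defrag" is only an example, so check schtasks /? on your own system before copying this:

        schtasks /create /tn "Weekly Defrag" /tr "C:\Windows\system32\defrag.exe c: -f" /sc weekly /d WED /st 01:00:00

    Depending on the account used, schtasks may prompt for a password; running the task as the SYSTEM account (/ru SYSTEM) avoids that.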

  56. PaoloG says:

    I understand from this post that the defrag doesn’t just run on a Wednesday at 1am but also whenever the system is idle. This I have found to be true on my system – but I have an issue that the defrag runs non-stop 24/7 every day (unless I use the machine – in which case it stops). I find the noise from the computer irritating, which led me to look into this. Question – why would it be constantly running when it reports only 3% fragmentation? Could this be to do with the computer having a number of VERY LARGE files, 20GB+ (virtual machines of all sorts)? I am tempted to turn off defrag as I fear it will burn out my disks with all this activity. Any advice on this would be appreciated. (PS. please delete all the angry people’s comments from this page – they get in the way of those who are professional and trying to be constructive). Good post.

  57. OB says:

    So in W7 defrag

    -you can watch status.

    -move some more files

    -do defrag on multiple disks in parallel

    What about placing files used more often on the fast part of the disk?

  58. KP says:

    I couldn’t help but laugh at this ..

    One thing I was expecting from XP and forward is that I could get a better disk map, extended, showing the files or group of files on each "cluster" or "square" at that zoom level. That tells me where each file is.

    WHY? .. what possible use is this sort of information to the end user??

    I’m happy to say that I have far better things to do with my time than watch silly pointless graphs and worry about this sort of thing!

  59. Keatah says:

    I can sit for hours and hours playing with defragging operations. Totally engrossing! Too bad work has stopped on Ultimate Defrag; it seems to have been one of the more innovative ones out there recently.

  60. Keatah says:

    It featured such things as file-order placement and a rough graphical representation of the disk surface, more or less, and it showed exactly where specific files were. And it let you put specific files at the front or back o’ the disk for archiving or high performance. Like, man, you put all your Windows and apps up front, and all your old PST and data and ZIP files at the back, for either super fast access or slow-and-lame archive performance. ’Twas a good thing!

  61. spenser says:

    A suspicious mind might come to the conclusion that the decision to omit the graphical fragmentation map found in earlier versions of defrag might have to do with the admitted fact that fragments greater than 64MB are now ignored.

    Bad choice. It should have been an option flag.

    Choosing to make a function *less* powerful in order to *improve* the user experience is not really progress.

    BTW, in much earlier times, files were actually laid out on memory drums such that the next required piece of stored data was just reaching the read heads at the time the head was ready for the next sector. This permitted the hardware to mimic, as much as possible, a continuously available data stream.

    Now, *that* is optimisation.

  62. DerekG says:

    If I understand correctly, it sounds as if you are eliminating defrag of files above 64MB in size completely.

    I find this troubling since I manipulate database files as large as 2-3GB in size frequently. I used O&O to defrag these files in XP and it made a noticeable difference in access times.

    I don’t understand the logic of eliminating defrag completely for large files.

  63. Matt says:

    Well, the new features of Diskeeper 2010 mean 85% of file fragmentation is prevented before it happens. It also has the smarts to monitor and defrag only when resources are idle. Seems pretty sexy to me!

  64. Oto says:

    I thought that under background defrag there was more than a simple scheduled task 🙁 I have used it since XP, when the command-line "defrag" utility was introduced; a simple batch file worked great. Nevertheless, defrag in W7 is now what it should have been a long time ago.

  65. Clint says:

    I still want a graphical representation of the data on the disk! In XP you could see if the pagefile was in 1 contiguous chunk because it was a green chunk (unmovable) (yes, I know there are a couple of other files that can also be part of the green space, but those can be turned off and removed). I like my pagefile to have no holes in it, for obvious reasons. So, I disabled virtual memory (and hibernation, system restore, etc.), then checked defrag to make sure there were no green blocks – good, now defrag. Then set the page file to a fixed size (same min and max, NOT system managed) and reboot – open defrag to verify that it was written in 1 chunk. Good, now turn system restore or whatever else back on.

    So, how can I ensure the page file is contiguous in 7???
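
    Not an official answer, but one commonly suggested way to script the "fixed-size pagefile" part of that procedure on Windows 7 is via WMI from an elevated command prompt; the 4096 MB figures below are only example values:

        wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
        wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096

    If the second command reports no instances, a reboot after the first command (or setting a size once through the GUI) may be needed first. A reboot is still required either way, and whether the recreated pagefile ends up contiguous still depends on there being a large enough block of free space, so the defrag-first step remains the important one.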

  66. Selvan says:

    Can't it be written so it optionally runs at Shutdown?

  67. Brandon says:

    For those of you bashing Linux or Unix: several people said that Linux/Unix de-fragments as it writes. That is incorrect. Linux/Unix keeps track of what is written on the file system. Afterwards, when it writes, it knows where to put everything so it does not have to fragment files. Secondly, Linux is more secure than Windows. User privilege restrictions in Linux are strongly enforced. In Windows, last I knew, there were serious bugs in DLL files that allowed non-privileged users to do privileged things, placing the system at risk.

    Next, Linux does not identify executables by extension. Windows does, and this is a big flaw.

    Everyone likes to pick on the concept of "everything is a file" and say that it’s so old.

    Well, those device files do not really exist on the hard drive; they exist in RAM as mapped files with backing in RAM.

    Windows devices are also mapped into RAM. The two are different abstractions, but both are in RAM, making them equally fast. By letting you treat devices as files, Linux/UNIX allows simple read and write commands to manipulate a device. Windows may have separate API calls for manipulating devices, which may number in the hundreds.

    For those of you who say UNIX is so old… most operating systems have barely changed.

    All operating systems and processors use the same seven logic gates – AND, OR, XOR, NOT, NAND, NOR, XNOR – for manipulating things… every operating system and processor uses them… So technically Windows is as old as dirt.

  68. Brandon says:

    One more comment.

    When Windows Update servers were being attacked, why did Microsoft choose to hide behind Linux servers?

  69. AL says:

    As far as I understood the article:

    The Win7 defragmenter is improved and it can even fix MFT fragmentation.

    In my PC I have an XP-SP3 NTFS boot disk with 4 fragments in the MFT and XP can't fix it; no other files are reported as fragmented by the XP defragmenter, but I can see red marks in the defragmenter GUI on XP.

    I took this disk to a Win7 PC and ran defragmentation there.

    After that I moved it back to the XP machine and I see: the MFT fragmentation remains, and I also got 10+ fragmented files reported. After defragmentation on XP these fragmented files were defragmented OK.

    Questions:

    1. Is this OK?

    2. Can I be sure that the Win7 defragmenter always keeps an XP-compatible NTFS disk XP-compatible?

  70. Mc says:

    Really enjoyed that, thanks. I need to start leaving my PC on overnight tho!

  71. Rajiv says:

    Why does my Disk Defrag ALWAYS show the Last Run status as '0% fragmented'… does that mean it never needs it?

  72. Martin says:

    Windows 7, Asus U80: the defrag command at a Safe Mode command prompt did not work; it only shows information about the command. Is this normal?

  73. Zorsha says:

    I hate not SEEING the process, like we used to! I want to be able to see the difference between the mess I had and the nice tidy disk I'm getting. Now it's just a number ticking. Nowhere near as satisfying. Can this be changed?!   =(

  74. A very interesting read.

    Running Vista, I find that turning OFF scheduled defrag and running it "as and when" from a command prompt using cmd.exe and entering defrag c: -v -w is by far and away the best. You get a detailed text report on the level of fragmentation, all file sizes are defragmented, and once you have run the routine once, subsequent runs are very quick, usually around 2 to 5 minutes maximum.

    I do this every week or so.

    Also, as a user of incremental backups it is of course best not to have defrag making changes, no matter how small, to the file system, as that greatly increases backup time and space.

    The auto defrag is great in theory; in practice it's not quite so good, for all the various reasons outlined in these pages.

  75. crokusek says:

    You can tell this is a non-optimal solution from the sheer number of defrag "competitors".

    The defrag-while-running approach leaves numerous unmovable files at the capacity end of the volume. Case in point: try to shrink a volume right after a factory install. I've got a >200GB drive, using only 40GB, and can't get it to shrink below 149GB. Locked files included restore points, the Windows Search service, the hibernate file, the memory cache, index.dat under the user account, Windows Update files under Windows\SoftwareDistribution, and the UsnJrnl (created by chkdsk). And those are the ones I was able to manually delete/move/recreate.

    I also attempted to run the defrag service from Safe Mode with all services disabled except disk defrag, but it has dependencies and failed to start.

    What is needed is an offline defrag that can run without services or pre/during-boot.

    Anyway, if you ever try to shrink a volume you'll see what the fuss is all about. Event Viewer lists the last file that is "in the way" and you have to go from there.
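
    For anyone hitting the same wall, diskpart can at least report how far a shrink can go before you start hunting for the blocking files. A minimal sketch typed at the DISKPART> prompt after launching diskpart from an elevated command prompt – the volume number 1 and the 100000 MB target are only examples; take the real number from the list volume output on your own machine:

        diskpart
        DISKPART> list volume
        DISKPART> select volume 1
        DISKPART> shrink querymax
        DISKPART> shrink desired=100000

    shrink querymax shows the maximum reclaimable space without moving the unmovable files the comment describes; shrink desired= (in MB) performs the actual shrink.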

  76. Jess says:

    Great read about defrag there, very interesting to have a little elaboration on some of the technical details. I think what ari9910 says is a good idea though. Obviously you've put a lot of work in to ensure that defrag can be easily turned off even mid-process without any damage to the system. Given the work that has been done on that, wouldn't it make sense to make more thorough use of it? I personally have the power options set to never spin down the drives, never go into sleep and never hibernate. It switches the monitors off after 15 minutes if the computer hasn't been in use. I noticed that the indexing process automatically kicked in when the monitors were switched off and that was great. An example of the system being intelligent and carrying out processes associated with improving my experience while I'm not using it. It would make sense to be able to set up a rule for running defrag in a similar way. To be able to set it to automatically run defrag when the computer switches the monitors off contingent on defrag not having been run for a certain length of time or perhaps contingent on disk fragmentation reaching a particular level (or some combination of the two) would be a really smart feature. If this option was present it would also make sense to have the system 'remember' that it had started a defrag process which ought to be finished in case the user 'interrupts' it. $0.02

  77. FishNChips says:

    The question is: if my company uses a server to store data – and therefore data is NOT stored on individual machines – is defragging even necessary on the individual machines? Or would it just be needed on the server? Or would the machines still need defragging but way less often?

  78. Tall_Todd says:

    I set virus scanning and defragmenting to occur every night at 2am. Usually I turn the computer off at night. Once a week I just leave it on at night and the maintenance happens automagically.

  79. Cynthia Hubbard says:

    I am using Windows XP. I have Disk Defragmenter and run it once a week. There is white for free space, green for unmovable files, blue for contiguous files, and red for fragmented files. Every time I ran the disk defragmenter to analyze, it would say I did not need to defragment this volume – up until last night. So I did the defragment, but I have about 80% blue contiguous files and about 30% unmovable. How do I delete these files to free up more space on my computer?

  80. issue says:

    After scheduling the defragmentation, will the disk defragmentation take place at the scheduled time without analyzing the fragmentation percentage? Will it take place no matter what the fragmentation percentage is?

  81. Where in the registry? says:

    Two things –

    1. Where in the registry are the settings that control the default automatic defrag – specifically, that it is enabled, and the day of the week and the time when it runs? (See the note below.)

    2. Any Eagles fan knows that "Hotel California" is 6 1/2 minutes long, not 3, unless you only listen to some Top 40 station that probably never plays the long version of Light My Fire either.
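
    On the first question: in Windows Vista and Windows 7 the automatic defrag schedule lives in Task Scheduler rather than in a simple registry value, so the most reliable place to look or make changes is the scheduled task itself. A rough sketch from an elevated command prompt – the task path below is the commonly used one, so verify it in Task Scheduler on your own system first:

        schtasks /query /tn "\Microsoft\Windows\Defrag\ScheduledDefrag" /v
        schtasks /change /tn "\Microsoft\Windows\Defrag\ScheduledDefrag" /disable

    The first line prints whether the task is enabled and when it runs; the second disables the automatic run entirely.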

  82. The genuine source of the music quoted says:

    Dear Friends,

    Thank you for your text, and since it is helpful for me, the only way to repay you is from my own area of expertise: search YouTube for "Beethoven Pathetique Sonata – 2nd Movement". Besides one blonde, you can hear the genuine source of the Hotel California you quoted… The cats would buy Whiskas – but what are we buying?

    Pavel Kozák, CZ, EU

    pavel_kozak@seznam.cz

  83. ZumrickoverWalt says:

    It would sure be nice if the desktop had a status window which would show each and every program running in the background. Like, here are the programs loaded into RAM; these are running right now and these are not. And make it so no program can avoid being shown, so nothing can hide from the administrator or machine owner.

  84. G says:

    Why can't it just defrag in the background, using a minimal amount of processing power, every time something is deleted? Why can't Windows use its fancy index to copy and move files into fragmented space? The guys at Apple figured this out a long time ago; why does Microsoft resist doing the same?

  85. micahel says:

    My problem is that my program is not running at all!! I've tried everything, but nothing!

  86. The Windows Defrag is Diskeeper says:

    All versions of Microsoft Windows include a tool for disk defragmentation. The Windows Disk Defragmenter tool is a limited version of the Diskeeper program from Diskeeper Corporation. Disk Defragmenter does not include all the features available in the full version of Diskeeper.

    Microsoft Diskeeper partnership: Microsoft Partner Relationship: support.microsoft.com/…/en-us

  87. Bryan says:

    Could you please tell me what "0% consolidated" means during defrag?

  88. mood3rd says:

    Firstly, thank you for a well-thought-out explanation of defragmentation on different operating systems.

    I am a gamer who also uses a game editor, a video editor, and generally programs which need a lot of system resources.

    So I stop a lot of things running in the background to achieve this.

    E.g. defragging.

    The scheduled time is no good to me, as I do not use the PC at set times, and I turn it off when not in use.

    Doing backups has the same problem.

    So I turn them off and run them manually as I am about to leave the PC for a while.

    But I can only do this one task at a time.

    Suggestion: it would be good to add a utility to queue as many different programs as we like, to run in the order of our choice.

    Then lock the PC, go out for the day, and come home to all the maintenance tasks completed.

    With a report of the tasks completed.

    When putting the tasks into the utility, an estimated time to completion would also be good.

    So if I will be gone for 1hr 45 mins, I could pick a task or tasks to run in that timeframe.
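
    One low-tech way to approximate that queue today is a plain batch file that runs each maintenance task in order and then locks the machine. A rough sketch – the antivirus and backup paths are placeholders, not real product names, and the defrag switches are the Windows 7 ones:

        rem run_maintenance.cmd - run each task in sequence, then lock the PC
        "C:\Program Files\ExampleAV\scan.exe" /full
        defrag C: /U /V
        "C:\Program Files\ExampleBackup\backup.exe" /job nightly
        rem Lock the workstation when everything has finished
        rundll32.exe user32.dll,LockWorkStation

    It gives no time estimate, of course, which is exactly the gap the suggested utility would fill.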

    One of the problems with backups is files moving while the backup is happening.

    So continuous backups become a problem if I forget the defrag is about to happen.

    It would be good to have a "save as" backup (when copying large files).

    as files windows are happy being on my pc, it stops backing them up, because some of the files, share the same name.

    Well, coming back to my PC hours later to find it stopped seconds after I left it is not good.

    I miss the graphical defrag of Windows XP.

    Sorry for getting a bit off topic, but all maintenance tasks should be connected, regardless of vendor.

    E.g. my security software updates and scans, followed by defrag, etc., with the backup last.

    So it does a backup of a very healthy PC.

  89. Sherman says:

    Never say never.

    SSD drives can usefully be defragged for empty-space consolidation in order to create an image to transfer to a smaller drive/partition by truncating it – we want to make sure only empty space is truncated.

    Also, why doesn't Microsoft listen to its customers? We WANT to see progress graphically; it is compelling, it is soothing, it is reassuring.

  90. Bob says:

    You said – "automatically run periodically and in the background with minimal impact to foreground activity". That's not true. When my Windows 7 laptop is running Defrag and I then start to use the laptop, I see that it is consuming 20 to 40% of the CPU for many, many minutes. The applications that I try to run are very slow until Defrag ends.

  91. thebowmaster says:

    I used an antivirus registry defragmentation tool while my Deep Freeze was off, then cancelled the restart (chose "restart later") because I was doing something else. Then I forgot about that process and turned Deep Freeze back on, and now I'm stuck in the registry defragmentation and startup process over and over. Please help me with this problem.

  92. Sharoon says:

    The GUI for showing the progress of the defragmentation process (in the older versions of Windows) was a better idea.