The history of change-packing tools at Microsoft (so far)


Self-proclaimed "Author of unremarkable code" Jay Bazuzi posted a brief history of change-packing tools at Microsoft.

First of all, what is change-packing?

After you make some changes to code, you may want to save those changes without checking them in. Some source control systems have native support for this operation, variously called shelving or stashing. But first, let's look at the requirements of change-packing.

First, a packed change needs to take the form of a single file. This allows the file to be added as an attachment to a bug, or saved away in a "Things I abandoned, but which I still want to keep around for reference" directory somewhere.

For example, you might make a change and then realize that it doesn't solve the problem. On the other hand, there was some good stuff in that change, like a new technique you discovered, and you don't want to lose the record of that change, in case you need to use that technique later. Keeping it in a single file makes it easier to manage. (A variant of this is where the bug you're trying to fix was rejected by Ship Room, so you want to save the fix away so you can bring it back at a more propitious time.)

Other use cases for change-packing are to get another developer's opinion on code you've written, either informally as a sanity check before getting in too deep, or formally as a pre-checkin code review. Or you can send the packaged change for pre-checkin validation: In the old days, you would send the changes to your buddy to verify that the code compiles, a practice known as a buddy build. (The most common reason it wouldn't compile is that you forgot to include all the affected files in the package.) Your buddy would unpack the change into their local repository and try to build it. In somewhat newer days, you would send the changes to a dedicated change verification system.

Second, the contents of a packed change must be viewable on a non-developer machine. This means that you cannot assume that there is a local repository, but you are allowed to assume network connectivity. This requirement permits the packed change that you attached to a bug to be opened in Ship Room so everybody can view the change and review it. (For bonus points, your packed change could remove the requirement that it have network access, so that it is usable even when offline.)

In the early days, Microsoft used a homemade source control system formally called Source Library Manager, but which was generally abbreviated SLM, and pronounced slime. It was a simple system that did not support branching. Files were centrally stored, and clients cooperated in updating the files.¹ The Windows team didn't have a change-packing tool for SLM. You typically sent around changes by just copying them to a shared location. If you wanted a code review, you brought someone to your office, and you reviewed the code in person.

Shortly after Windows 2000 shipped, the Windows source code transitioned to a source control system known as Source Depot, which was an authorized fork of Perforce. Microsoft made changes to the code base to do things like improve scalability and add new features.

It was during this era that I wrote a change-packing tool for Source Depot, which I named bbpack because its primary purpose was to pack changes for the purpose of a buddy build. (Its secondary purpose was to facilitate code reviews.) The package itself was a batch file that did the necessary work of applying the changes to the source code on the machine that ran the script, or if running in code review mode, to unpack the files into a temporary directory and then run a diff tool. I chose a batch file because I didn't want to solve the problem by creating a bigger problem: If I wanted somebody to run a buddy build for me, I wanted it to be as simple as possible: "Run this batch file, and then build this directory." Not "First, install this tool..."

Since both purposes of the bbpack tool cared only about the contents of the change, I didn't pay attention to replaying the pending operations with full fidelity. As long as the source code after unpacking matched the source code that was packed, that was good enough. That means that if the original package was created via an integration (which is the Source Depot name for what everybody else calls a merge), the unpacked file was reported as an edit, not a merge. Similarly for other exotic file states like "undo".

Meanwhile, the Office team were also in the process of transitioning from SLM to Source Depot, and when they learned about this bbpack thing, Office tools magician² Stephan Mueller exchanged email with me to learn about bbpack, its design, its strengths and weaknesses, and based on this and his own powerful brain wrote a comparable change-packing tool for SLM, which he called slmpack. And then when Office completed their transition to Source Depot, he wrote it again, called sdpack. The Office team used these packages for the same thing that Windows did: Sharing changes between developers and viewing proposed changes in the Ship Room meeting.

Okay, here's where Jay Bazuzi enters the story.

One sign that you wrote a useful tool is when people start using it for things that you didn't originally consider. Jay was using my bbpack tool to submit jobs into a gated build system, which was totally not the intended use. More than once, I received reports from people saying, "We're using your bbpack tool to transport changes between machines, and we find that it doesn't work reliably when there are more than 50,000 files in the package," or some other ridiculous thing. Dude, that is nowhere near what bbpack is for. No developer is going to change 50,000 files and send it out for code review. Jay went on to write his own change-packing tool for Source Depot, which he called jjpack.

There was no rivalry among the three of us over whose change-packing tool was best. Indeed, I considered jjpack and sdpack to be "next generation" packing tools and did what I could to deprecate bbpack and steer people toward jjpack and sdpack.

What probably settled the not-really-a-battle between jjpack and sdpack was when another team wrote a code review tool that became wildly popular all over Microsoft. That tool used sdpack files as its interchange format. From then on, sdpack was the change-packing tool of choice, and the other tools were also-rans.

Now that Windows is using git as its source control system,³ combined with Visual Studio Team Services for its online presence, the question arises of how to package changes in git.

  • Sending changes to another developer to get their opinion: Push your changes to a branch on VSTS (a minimal sketch of this step appears after this list). The other developer can view the branch online, or fetch it to their local machine for a closer look.
  • Sending changes to another developer to get a buddy build: Push your changes to a branch on VSTS. The other developer can fetch it to their machine to build it. Or even better, let continuous integration do the work of the buddy build.
  • Sending changes to another developer to get a code review: Push your changes to a branch on VSTS, and then create a pull request.
  • Packing changes and attaching them to a bug: Push your changes to a branch on VSTS, create a pull request, and put a link to the pull request in the bug. Ship Room can open the link and view the pull request online. From there, they can add comments to the PR, or approve it via the Web page.
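
For illustration, here is what that common first step might look like from the command line. This is only a sketch: the branch name dev/raymond/fix-widget is made up, and it assumes the VSTS remote is named origin. The pull request itself is created from the VSTS web page afterward.

    # Publish the local topic branch to the server (branch name is illustrative).
    git push -u origin dev/raymond/fix-widget

    # Another developer can take a quick look without disturbing their own work:
    git fetch origin dev/raymond/fix-widget
    git checkout FETCH_HEAD    # detached checkout, just for browsing and building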

The one case that I still don't have a good solution for is the one where you have some changes that you decided not to commit, but you want to keep them in case they come in handy later.

Option 1: Put a commit at the tip of the branch summarizing what this branch is about, and then abandon the branch but don't delete it. You can always come back to it later. The downside is that the names of all your "things that didn't work" sit there cluttering your git branch output. You can mitigate this by using a naming pattern like w/raymond/archive/blahblah: push the branch to the server, and then delete the local version. But the branches still show up in your daily workflow.
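
Concretely, that mitigation might look something like this. It's just a sketch; the branch name my-experiment and the remote name origin are placeholders.

    # Leave a note-to-self at the tip describing what the branch was about.
    git commit --allow-empty -m "Abandoned: caching approach for X; too slow, kept for reference"

    # Park it on the server under the archive naming pattern, then drop the local branch.
    git push origin my-experiment:refs/heads/w/raymond/archive/my-experiment
    git branch -D my-experiment

    # Bringing it back later:
    git fetch origin w/raymond/archive/my-experiment
    git checkout -b my-experiment FETCH_HEAD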

Option 2: Create a patch and save that. The downside of this is that patches decay as the code changes, and eventually one of the hunks will fail to apply, and now you're stuck. Since patches record only a few lines of context on either side of the change, there isn't enough information to figure out where the hunk moved to. And even though the patch is the diff itself, you often want to see more context around the changes than was recorded in the patch. Patches also don't record the base revision from which they were generated; you'll have to add that information to the patch file.
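
For what it's worth, two of those gaps can be narrowed when the patch is created, at the cost of a bigger file; the patch still decays, though. A sketch, with placeholder branch names:

    # One patch file per commit, with extra context and a base-commit footer
    # recording the revision the patches were generated from.
    git format-patch --base=origin/master -U20 origin/master..my-experiment

    # Or a single plain diff against the branch point, also with extra context.
    git diff -U20 origin/master...my-experiment > my-experiment.patch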

Option 3: Create a bundle. The bundle is a compact representation of the objects related to your branch that aren't in the main branch. However, looking at the contents of a bundle is cumbersome because you have to fetch it into a live repo and then remember the starting point for the diff. You cannot view the contents of a bundle without a live repo.
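
A rough sketch of the bundle round trip, with placeholder names. Note that unpacking still needs a live repository that already contains the prerequisite commits (here, master).

    # Pack everything on the branch that isn't already in master into one file.
    git bundle create my-experiment.bundle master..my-experiment

    # Later, in a clone that already has master:
    git bundle verify my-experiment.bundle
    git fetch my-experiment.bundle my-experiment:refs/heads/my-experiment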

Option 4: Push the changes to a branch on VSTS, create a pull request, and immediately reject it. Save a link to the pull request. You can delete the branch; the pull request will remain, with your commits. Bonus feature: You can annotate your changes by commenting on the pull request. ("This was my attempt to do X. It basically works, but there's a bug when a Y occurs at the same time. See additional discussion at line 123.")

I haven't found a good answer to this yet. As a stopgap, I have a script that generates an sdpack from a commit (or commit range), and I use the sdpack as my unit of cold storage.

If you have any other ideas, please share.

¹ This source control system somehow got turned into a product, named Microsoft Delta. It was very short-lived.

² If you're lucky, your project has a tools magician. This is the person who somehow has their hands in everything that makes your life better, whether it's improving the build system, managing the gated check-in system, setting up the automated static analysis tool, or fixing build breaks in the middle of the night so your team has a working build in the morning.

³ Another consequence of Windows moving to git as its source control system is that I no longer get email from people asking for help with bbpack. Finally I succeeded in getting people to stop using it!

(Stephan hasn't been so lucky, though. He tells me that he has a customer who is packing and reapplying gigabytes of changes and cares more about speed than package size. So he's making still more changes to sdpack to support that scenario. "The customer is used to jjpack's speed, so it's fun to have that bar to shoot for.")

Comments (57)
  1. Michael Quinlan says:

    Did Microsoft ever use Team Foundation Server internally?

    1. Yes, there are some teams who chose TFS as their source control system. (I don’t know if they’re still on it, or whether they have since switched to git.)

  2. Paul says:

    Re: “I still don’t have a good solution for […] changes that you decided not to commit, but you want to keep […]”

    When using Git, I’ve seen people use personal remotes for this (you have multiple remotes). You push your branch, but it doesn’t show up for anyone else who is using the shared remote in their, as you say, “daily workflow”.

    Personally, I try to avoid keeping anything temporary at all for any reason. I know it’s going to get stale and fill up secondary storage that no one ever cleans up.

  3. Ben Voigt says:

    “Patches also don’t record the base revision from which they were generated; you’ll have to add that information to the patch file.”

    They should, if you use the patch-generation function of your source control tool, instead of a standalone diff program.

    In my experience, at least TortoiseSVN and TortoiseGit include base revision information in generated patch files… and I believe this capability comes from the underlying library rather than being specific to the particular client. I just double-checked using the Git for Windows command-line client, and yes, the base revision information is there too.

    1. Neil says:

      But not using `git diff`; you have to use something clunkier like `git format-patch` instead.

    2. alegr1 says:

      To apply a patch, git doesn’t really need its commit ID, or its parent’s commit ID. It only needs a blob ID of the base version of the file(s) being patched, and that ID is actually recorded in diff output.

      1. But “git am” doesn’t take that base into account when merging, nor does it show you the standard merge conflict markers. Furthermore, there is no tool (that I see) which takes a patch file and produces the before and after files, suitable for display in your favorite diff tool.

        1. alegr1 says:

          git-am is just a combination of clever parsing of the mail-patch file, and running a sequence of “git apply” commands and “git commit” commands.
          >But “git am” doesn’t take that base into account when merging
          The base blob ID? oh yes, it does, this is how git-apply works.
          >nor does it show you the standard merge conflict markers
          This is what the --3way option of git-am (and git-apply) (and git config am.threeWay) is for.

  4. Thanks for the history. I had wondered for a very long time if Source Depot was actually Perforce. Cool story!

  5. alegr1 says:

    Well, once you grok Git, it’s the best thing since sliced bread.
    I’ve lived through a few version control systems. The first one I used was PVCS. It was a terrible kludge, which didn’t even support directories in the project directory (later versions did, though). SourceSafe was quite usable, though it could be fragile and terribly slow on non-Windows file servers. Perforce used that strange concept of workspaces, which made collaboration quite awkward. The concept of branches in P4 was so strange that I never mastered it.

    1. So how do you pack changes?

      1. alegr1 says:

        >So how do you pack changes?
        It depends. For backup or archiving, I can push as a personal branch to common repo. For collaboration – push as a draft to Gerrit. Useful personal patches I keep in the local repository clone as branches. Don’t use stash – it’s not as manageable as branches. Stashes really are one-commit branches, anyway.

        1. Is the repo you push to the same as the one you regularly work from? How do you keep these “cold storage” changes from cluttering your normal workflow? For the local-only changes, do you worry about a hard drive crash wiping out your archived changes? Pushing to a separate repo is not practical for the Windows repo, seeing as the Windows repo is ridiculously huge.

          1. alegr1 says:

            No. Ridiculously huge is my product repo, which is 80000+ files, a 10 GB worktree checked out, and a 10+ GB .git directory (70000+ commits). The Windows repo, though, is humongously huge.

            We push to a master server with Gerrit code review, and a few replica servers (fetch only). Personal branches have a refs/heads/personal/<username>/ path prefix. I push my work as a personal branch as a backup against a hard drive crash. After it’s all debugged, I do some “rebase -i” commit surgery to combine (and split when necessary) changes into clean patches, which I then push to the codebase.

            Anyway, I’m spearheading an effort to split it (by using a custom variant of the git-filter-branch command) into smaller self-contained repos per buildable binary, while preserving all history for each deliverable. Microsoft needs to do the same. If you don’t want to renormalize files (normalize all EOLs to LF), it’s pretty fast. It can be done incrementally per deliverable, not necessarily in one big shot.
            The filter-branch command can convert 1-3 commits per second, fewer when re-normalization is done. Full renormalization of this 70000 commit repository takes some 50 hours, only because it cannot be parallelized much. Straight filter-branch for a subdirectory is pretty fast. I’m modifying filter-branch to allow incremental conversion.

          2. alegr1 says:

            Also, all my work machines now have SSDs, which are less likely to crash.

          3. voo says:

            10 GB, while not exactly small, is anything but “ridiculously huge”. The Windows kernel repo is 300 GB and Google’s repo is several TB (yup, I know) in size.

        2. Jon Chick says:

          >So how do you pack changes?

          Maybe I missed something in the discussion, but aren’t you creating a diff, storing the diff (a text file), and applying the patch later?

          Example:
          git diff commit_a..commit_b > change.diff
          git apply change.diff

          Better write up here: https://www.thegeekstuff.com/2014/03/git-patch-create-and-apply

          Cherry-picking is effective if you have network access to the buddy branch. No pull request required. Or use git to generate the pull request text and send it by email; no need to put up a pull request on VSTS and cancel it.

          Binary artifacts are a plague on source code control systems. How are binary artifacts, e.g. icons, handled in bbpack?

          1. Diffs decay over time. Take a diff generated two years ago to a high-churn file and try to apply it. You’ll likely get a bunch of patch failures, and no way to fix them.

          2. alegr1 says:

            To apply a text patch you need to have the base versions of the changed files, as well. Not necessarily the parent commits, though.

      2. alegr1 says:

        Oh, and I wrote a script to use a Git repository as a source server for WinDbg. The script runs on every build of a binary. It also installs the PDB to the local store. Very handy when you need to juggle multiple SUTs which can be running different builds of the binary.

  6. prshaw says:

    I actually used Delta, my first Microsoft source control package. And yes, it was very short-lived. I would be interested in hearing the history of the product. Before Delta I used a product called TLib, I think; it had the advantage that it worked with WORM drives.

    1. GerryC says:

      I remember “evaluating” MS Delta for a smallish FoxPro project circa 1992, and deciding its UI was terrible. Instead we went for a tool from a small company called One Tree Software. The product was called SourceSafe. We tried to get some additional licenses and found it had been acquired by MS. The local rep had no idea what we were talking about. Then they acquired FoxPro as well.
      I think Delta may have been a bit of overkill!

    2. Randy Orrison says:

      “Some drunk guy at Microsoft”? Sigh.

      Delta was an attempt to productize SLM. SLM being good enough for large scale production use in Microsoft, it was thought that it would be good enough to sell. Of course it was command-line only, and it needed to have a GUI to be marketable. Rather than re-implementing the SLM functionality in Windows, we developed device driver I/O redirection hooks between a hidden DOS box and the GUI so that the commands could be run from the Windows interface, and the returned text output interpreted. Until recently I had the SLM source around, and I still have a shrink-wrapped copy of Delta on my bookshelf.

      Ever hear of Microsoft Tutor? That was an attempt at productizing DOT (Daughter of TED); unlike Delta it didn’t even make it out the door. It was close; I did see pre-production proofs of the manual. That ended when the LSD and THC groups got re-organized.

      Randy

      1. alegr1 says:

        > That ended when the LSD and THC groups got re-organized.
        That was a bad trip, dude.

  7. Mantas says:

    Option 1a would be to use an annotated tag (“git tag -a …”).

    Option 1b, avoid creating a local branch entirely: start with a detached head (e.g. “git checkout --detach” or “git checkout @^{}” if you’re into punctuation), do your closing commit, and finish with “git push origin @:w/raymond/archive/blahblah”.

    Option 1b-prime, if you only want a local copy of that branch, finish with “git branch refs/archive/blahblah” instead. Refs not under “refs/heads/” won’t be considered branches, and won’t show up in the list.

    (“@” being a relatively recent alias for “HEAD”)

    Option 2a, record the original commit (patch base) with --base.

  8. Piotr says:

    I remember sending the whole source code to my buddy and telling him to “windiff it yourself”.

  9. Andy Schott says:

    Have you considered using git stash (https://git-scm.com/docs/git-stash)? The main downside is that stash changes are local only. They are standard commits in git, so they do keep track of the commit they were based off of.

    1. Stashes being local-only means that you can’t attach them to bugs for review in Ship Room.

  10. Sam Harrington says:

    My pack of choice these days is git diff -U99999. It doesn’t have the base commit, but has the benefit of having all the file context, at least. I mostly used sd pack as a poor man’s git branch, though, so I don’t reach for a pack equivalent very often anymore.

  11. thals1992 says:

    I never would have thought Ray would reference WinWorldPC here.
    The details about the application there are great, haha.
    “Vendor – Some drunk guy at Microsoft”

  12. Emil M says:

    VSTS PRs can be opened as CodeFlow reviews and I always assumed that it uses the same format as sdpack – just like with sd repos.
    Btw Raymond: I will be forever grateful for some tips you authored about how to clone Sd repos faster – I got back a few years from that :)

  13. roeland says:

    “The downside of this is that patches decay as the code changes”

    But that’s a downside in general of storing a packed change, not specific to diffs or any other format. Or what did I miss?

    1. All of the change-packing tools discussed in this article create packages that do not decay over time, so it’s not a problem inherent to packing.

  14. cheong00 says:

    “Shelving” was one of my favorite features in TFS back at one of my previous companies. It allowed me to share sample code on how to solve a blocking issue quickly without checking in.

    And it also allowed me to conveniently and temporarily shelve all the changes I was working on while I needed to deploy the application. (Yup, we didn’t have a central build machine, and we published code to the production server via *horror* a development machine.) This, together with XSLT transformations for config files, kind of worked for us without experiencing any problems.

    1. Ian Yates says:

      TFS source control isn’t bad for the style of system it is – centralised. Shelving is handy. It’s also dead simple, which for a small team is nice.
      I admire Git, and think it’s awesome for the open source software I follow and for larger teams, but TFS is doing pretty well for us at the moment. Had I started our repo today I probably would have picked Git, but I don’t regret (yet) choosing TFS several years ago.

    2. Voo says:

      As far as I can see (and I sadly had to suffer with TFSVC myself in the past) shelving is nothing but an incredibly, incredibly limited version of git’s branch system. You can do anything you want with branches that you can do with shelvesets.

    3. Jaloopa says:

      Shelving is one of the main reasons my team and I are reluctant to move off TFS onto Git as some people are proposing. Every supposed benefit of Git is either irrelevant to us (offline working means nothing when your development machine is permanently connected via a network cable) or is a slightly different way to do something we already do fine. Shelving, though, appears to have no real equivalent in Git and we use it a lot.

      1. voo says:

        I don’t understand why you wouldn’t just use branches for sharing code between people.

        Branches in git are:
        – as fast to create as shelvesets with TFSVC
        – can have more than a single commit
        – applying them to another branch is much easier than doing so with shelvesets (go and try to apply a shelveset if you’ve moved a file, good luck)

        The only reason I can see why TFSVC has shelvesets and branches is because creating new branches for TFSVC takes ages while shelvesets can be created with a single button click. But that’s just applying a bandaid – just make creating new branches as fast as creating a shelveset!

    4. cheong00 says:

      So the “git” equivalent of “shelving” is “stash” ( https://docs.microsoft.com/en-us/vsts/git/tutorial/howto#stash )

      But it doesn’t provide a convenient way to share it with others without committing. The closest thing you can get is to push it to a remote branch. ( https://superuser.com/questions/409228/how-can-i-share-a-git-stash )

    5. Medinoc says:

      One feature I feel is really missing from TFS’s shelving system is unshelve-and-merge-changes (specifically for the case where you “store some changes for later”, then some other changes are made to the file, and then you decide to unshelve your changes to try and integrate them).

  15. Yuri Khan says:

    When generating patches, I sometimes find it useful to pass a -U9999 option. This preserves all the context. (For ridiculously big files, add a single 9. For humongous files, add two.)

  16. Neil says:

    I’ve used both Mercurial and git. Mercurial has two diff output formats, patch (which doesn’t support binary files etc.) and git. The patch format includes the base revision, but the git format doesn’t (“because git doesn’t include it in its diff output”, sadly). The alternative is the export format, but that requires you to commit first, which then means having to uncommit later, if you don’t want it to clutter up your clone. Meanwhile with git you’re basically forced to commit to do anything useful, but at least uncommitting is easier.

    1. Neil says:

      Sorry for the duplicate post; it looked as if the original attempt didn’t go through.

      1. There’s a problem with the blog system right now where all comments are being flagged as spam, and I have to go fish them out of the spam bin. Sorry for the delay.

  18. Alexander Groß says:

    Did you consider git merge --ours experimental/branch to merge experiments into some kind of storage branch but immediately undo all the effects of experimental/branch? This would allow you to trace back the experiment, but not clutter your (remote) branches with more and more experiments being stashed away.

    git merge --ours works like a normal merge that also includes the git revert of the merged branch’s changes, so these changes actually won’t end up in your integration branch. You can write a custom commit message making it easier to find things later.

    1. That’s an interesting idea. Each commit in the experimental branch is an archived branch. It does make searching for stuff harder, and you can’t delete an archived branch. This would mess up a workflow where the archived branches are “Bug fixes I want to take once the tree opens back up”, because you would have to have some other way of keeping track of which bug fixes have been taken and which are still waiting to be done.

  19. Voo says:

    Huh, I think I might just be able to help Raymond for once :-)

    Haven’t tried this myself, because all my tools support a folder structure for branches (so having everything in private/name/archive/ works just fine – I just keep the archive folder minimized).

    Git generally works with refs which are pointers to specific commits. Everything under refs/heads is by default considered to be a branch. Simple tags for example are pretty much the same, they’re just listed under refs/tags.

    So one solution might just be to tag your feature branches and then delete the branch – you can easily create a branch from the tag later on. That’s not perfect either if you use tags for real purposes, so instead just create your own folder structure!

    git update-ref refs/hidden/foo foo # create a new ref that won’t show up as a branch from foo branch
    git branch -D foo # delete the foo branch, since we don’t want it any more

    You can see what refs exist with git show-ref.

    You’ll want to push the refs up to the server to make sure they won’t get lost if your computer crashes. To get them back you’ll have to explicitly tell git to fetch them – so it’s not like you’ll create unnecessary garbage for everyone else either.

    https://git-scm.com/book/en/v2/Git-Internals-The-Refspec should be a good starting point for more details.

    1. Even though the branches can be collapsed in the UI, they still show up in “git branch -a”. And the tags trick assumes that your repo allows users to create their own tags. (Often, tag-creation is restricted to release teams.)

  20. Petteri Aimonen says:

    While reading all of this, I was thinking “what’s wrong with a simple diff”. So I guess I don’t understand everything about the problem.

    But as for the reasons mentioned why saving a patch is not enough, it seems to boil down to “not enough context”. So just use “git diff -U20” to get more context?

    Also, if it later turns out the patch doesn’t apply and you can’t figure out what the code used to look like, it will have the git hash in the header, so you can find the old revision it was based on.

  21. Evan says:

    > Option 1: Put a commit at the tip of the branch summarizing what this branch is about, and then abandon the branch but don’t delete it.

    Option 1b is to do the same thing with a tag.

    To me that seems to fit the use cases of the two things better in a generic sense, but whether it fits *your* use of those features I can’t answer.

  22. SoonerRoadie says:

    I’m very curious about bbpack. Were the source files embedded within the batch file? How would one go about doing that? It seems like a very useful thing to be able to do.

  23. Evan says:

    [I thought I submitted this yesterday, but maybe I didn’t.]

    “Option 1: Put a commit at the tip of the branch summarizing what this branch is about, and then abandon the branch but don’t delete it.”

    Option 1b, very similar, is to use a tag instead of a branch.

    As generic advice that seems to fit the use cases of each better, but whether it does in any individual case is kind of up to the conventions you already have for using them.

  24. Steve says:

    For less complicated things, I just use `git add -p` to walk around my broken junk or cut-save-commit-undo to check in good stuff while keeping broken stuff local.

    For more complex features, since VS2017’s refactor support is smart enough to read comments, I generally leave in/commit my half-baked broken code commented out in my dev branch. I usually have one or more “cleanup” or “remove stale code” commits before merging to master or doing a PR (and, if needed, will squash those commits in a fast-forwarded/cherry-picked “implementfeature” branch* so as not to advertise my stupid idea), so I’ll remove it all then. In the end, my commented-out code, in whatever branch, will get matched in VS refactoring and won’t go “out of sight, out of mind” unless I decide to stop using the branch with the commented code in it, and that branch need never be seen by anyone working off master.

    * As opposed to my usual “implement_feature” name scheme.

Comments are closed.
