I’m in the middle of a massive refactoring and it’s going quite painfully. It’s turning into a huge change that touches a large number of files in our code base. The issue is that for a while we were passing raw BYTE*’s around, and I’m trying to replace that with a proper object model because of all the problems it has caused us. The reasons we had this originally are somewhat historical, and relate to how metadata is read out into byte streams with the IMetadataImport API. But it’s really been far more hassle than it’s worth, especially with the advent of generics and the need to store strongly typed, structured data.
Unfortunately, I just couldn’t figure out a simple way to stage this change. I was introducing an entirely new object model into a system that had none, replacing raw reads and writes on the byte stream with safe calls on those objects, and also moving a lot of static helper methods scattered all over the place into appropriate virtual methods on the new types.
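To make the shape of the change concrete, here is a minimal sketch of the kind of before-and-after involved. The names (`ReadCompressedUInt`, `SigReader`) and the encoding shown are my own illustration, assuming the ECMA-335-style compressed integers that metadata signature blobs use; the actual code base will differ.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using BYTE = std::uint8_t;

// Before: free helper functions, and every caller walked the raw
// buffer itself, advancing a bare BYTE* cursor by hand.
std::uint32_t ReadCompressedUInt(const BYTE*& p) {
    // ECMA-335-style compressed unsigned integer: 1, 2, or 4 bytes,
    // distinguished by the high bits of the first byte.
    if ((*p & 0x80) == 0) {
        return *p++;
    }
    if ((*p & 0xC0) == 0x80) {
        std::uint32_t v = ((p[0] & 0x3Fu) << 8) | p[1];
        p += 2;
        return v;
    }
    std::uint32_t v = ((p[0] & 0x1Fu) << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
    p += 4;
    return v;
}

// After: a small reader object owns the cursor and the bounds, so
// callers make safe, typed calls instead of doing pointer arithmetic.
class SigReader {
public:
    explicit SigReader(const std::vector<BYTE>& buf)
        : cur_(buf.data()), end_(buf.data() + buf.size()) {}

    std::uint32_t ReadUInt() {
        assert(cur_ < end_ && "read past end of signature blob");
        return ReadCompressedUInt(cur_);
    }

private:
    const BYTE* cur_;
    const BYTE* end_;
};
```

The point of the object model is that the cursor and its bounds travel together, so a misplaced read fails loudly in one place instead of silently corrupting every downstream consumer of the stream.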
I think I bit off more than I could comfortably chew. But I’ve done a ton of the work now, and I’m stuck in a hard place: start over and try to make smaller steps (which I’m not even sure how to do), or press on with this huge change and then spend a long time fixing the regressions that inevitably follow a change this large.