A while back I did an experiment where it turned out that allocating objects was better than pooling them. Since then I have run into a few cases where allocating actually turned out to be a bad thing. I have never seen this be a problem in a client application, but in a server allocating a lot of objects can be a problem even if the objects are very short-lived. What happened to me was that so many objects were created that the garbage collector wanted to run several times a second to clean things up. Each time, a few of these short-lived objects would escape from generation zero to generation one, making the following garbage collections more expensive. This time I was lucky because the object in question was a byte array buffer that could be both reduced in size and reused with some simple logic.
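The article does not show the actual logic, but a minimal sketch of the idea of reusing a byte array buffer instead of allocating a fresh one each time could look like this. The class name, the per-thread caching strategy and the size limit are my assumptions for illustration, not the original code:

```csharp
using System;

// Hypothetical sketch: reuse a byte array buffer instead of allocating
// a new one per operation, to reduce garbage collection pressure.
static class BufferCache
{
    // Cache one buffer per thread so concurrent callers do not share state.
    [ThreadStatic]
    private static byte[] _buffer;

    // Do not keep very large buffers around; this is the "reduced" part,
    // keeping the cached memory bounded.
    private const int MaxCachedSize = 64 * 1024;

    public static byte[] Rent(int minimumSize)
    {
        var buffer = _buffer;
        if (buffer != null && buffer.Length >= minimumSize)
        {
            _buffer = null;          // hand out the cached buffer
            return buffer;
        }
        return new byte[minimumSize]; // fall back to a fresh allocation
    }

    public static void Return(byte[] buffer)
    {
        if (buffer.Length <= MaxCachedSize)
            _buffer = buffer;         // keep it for the next caller
    }
}
```

In modern .NET you would probably reach for the built-in `ArrayPool<byte>.Shared` instead of rolling your own, but the principle is the same: keep short-lived buffers out of the garbage collector's way.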
Things get harder when you add async/await and tasks into the mix. Async/await is very good at making asynchronous code easy to understand, but at the same time it can create a lot of objects if you go with the naive implementation of what you want to do. I still think the naive approach should be your starting point, since it reduces the risk of creating something that does not behave correctly under all circumstances. But it is always good to know what is actually happening, which is why you should read this article explaining how to use the memory profiler, with tasks as the example.
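To make the allocation difference concrete, here is a small sketch of one common pattern: a method that usually completes synchronously. The class and method names are hypothetical, not from the article. The `async` version makes the compiler generate a state machine and a `Task<bool>` object per call, while the second version returns a cached completed task when no awaiting is needed:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class KeyCache
{
    // Cached completed tasks, allocated once and reused for every call.
    private static readonly Task<bool> TrueTask = Task.FromResult(true);
    private static readonly Task<bool> FalseTask = Task.FromResult(false);

    private readonly HashSet<string> _known = new HashSet<string>();

    // Naive version: the async keyword makes the compiler build a state
    // machine and typically allocate a new Task<bool> on every call,
    // even though nothing is ever awaited here.
    public async Task<bool> ContainsNaiveAsync(string key)
    {
        return _known.Contains(key);
    }

    // Allocation-aware version: hand back one of the two cached tasks,
    // so the synchronous hot path allocates nothing.
    public Task<bool> ContainsAsync(string key)
    {
        return _known.Contains(key) ? TrueTask : FalseTask;
    }
}
```

Callers can still `await` either version, so the optimization is invisible at the call site. This is exactly the kind of difference a memory profiler makes visible: the naive method shows up as a steady stream of short-lived task and state machine objects, the cached one does not.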