I recently attended a talk advocating combining the management of test and development. Part of the reasoning was to force quality decisions to fall onto one person. This makes a lot of sense, and it is important to understand why. In large software development projects, we often compartmentalize the various roles. This is true not just of test and development but also of roles like performance, security, customer service, etc. This compartmentalization has a direct effect on how people operate. When someone is responsible for only one aspect of a product, they will often make the right choices for their aspect but the wrong ones for the product overall.
In the traditional software model, test and development are two distinct silos. The disciplines report to different people and perform different jobs. This creates tension between the roles, arising not just from the variance of roles but also from the variance of purpose: development wants to add features, and test wants to constrain them. To ship a quality product, you need to strike a balance. Too many features and the quality will be too low; people won't tolerate it. Too much quality and there won't be enough features to attract customers. Imagine a field. On one side is quality; on the other, features. Now imagine a line drawn between them. To increase quality, you must decrease features. Each decision is a tradeoff.
Security is also an area that is fraught with tradeoffs. The Windows of old shows what can happen if not enough attention is paid to security. Trusted Solaris shows what happens if you pay too much. Not enough attention and the system becomes a haven for viruses, bots, etc. Too much and the system is years behind, runs slow, and is very hard to use.
Performance can be similar. Many changes that increase performance are intrusive, and making them means trading stability for performance. Other times the performance wins are not visible to the end user. Are they still worth making? Finally, if you are not allowed to degrade performance, it is very hard to add new features. Assuming your previous implementation was not poorly designed, it can be nigh unto impossible to add functionality without increasing CPU usage.
In each of these cases--and many others--a person or team tasked with improving only one side of the coin will make decisions that are bad for the product. Recall that good engineering is about making the right tradeoffs. To make them, one must consider both what is to be gained and what is to be lost. When we give someone a role of focusing solely on security or performance or adding features, we skew their decisions. We implicitly make one side of the coin trump the other. If a person's review is based only on the performance improvements they made in the product, that person will be disinclined to care about how important the new functionality is. If they are tasked solely with securing a product, they will tend not to consider the functionality they break when plugging a potential hole.
The right decisions can only be made at the place where a person is accountable for both sides of the tradeoff. If the different silos (test, dev, security, performance) are too far separated, that place becomes upper management. This is dangerous because upper management often does not have the time to become involved, nor the understanding to make the right decision. Instead, it is better to drive that responsibility lower in the chain. Having engineering leads (not dev leads and test leads), as the talk advocated, is one way to accomplish this: one person is responsible for the location of the quality line. Another way is to increase interaction between silos. Personal bonds can overcome a lot of process. Sharing responsibility can work wonders. Consider dividing the silos into virtual teams that cut horizontally across disciplines, and make those people responsible as a group for some part of the product. As is often the case, measuring the right metrics is half of success.