Service Fabric Customer Profile: D+H

Authored by Senthuran Sivananthan and Ed Price from Microsoft, in conjunction with Scott Wadel, Adam Gritt, and Joseph Boschert from D+H.

This post is part of a series focused on customers who’ve worked closely with Microsoft on Service Fabric over the last year. We look at why they chose Service Fabric, and we take a closer look at the design of their application.

In this installment, we profile D+H, their lending application running on Azure, and how they designed the architecture using Service Fabric, Azure SQL Database, Azure Redis Cache, Azure DocumentDB, and Azure API Management.


D+H is a leading financial technology provider that the world's financial institutions rely on every day to help them grow and succeed. Their global payments, lending, and financial solutions are trusted by nearly 8,000 banks, specialty lenders, community banks, credit unions, governments, and corporations.

Partnering with Microsoft Azure was a strategic decision to stay ahead of the curve in today’s ever-changing economy and business landscape. By using the Azure platform as a service, D+H can optimize existing business processes and functions while continuing to identify new market opportunities—all without the overhead of managing infrastructure.

“Our experience with Service Fabric has been very positive. Its features and functionality have provided the core foundation for our next-generation microservice architecture. Feature highlights include the ability to easily scale and roll out changes.”

—Scott Wadel, Group Leader—Product Technology, Digital Lending Platform

Faster to market using an agile, DevOps, microservices approach

An engineering company at the core, D+H has been on the forefront of innovation with its client-server products, web applications, and recently, microservices-based solutions. As technologies have evolved, so have the underlying architectures and the processes and policies used to develop, deploy, and support D+H products.

Over the years, D+H has developed many unique products that run on the Windows and Linux ecosystems. However, as teams and product choices grew, so did the duplication of functionality. Changing regulations and customer demands also brought challenges. As a result, their technical debt increased along with maintenance costs.

As part of their transformational journey, D+H strategically chose three core tenets to guide how they deliver value to customers: create reuse within the product suite, improve team dynamics, and deliver software through an automated, repeatable process.

With these three tenets in hand, D+H embarked on developing a new product focused on small business lending. The market sentiment is that traditional methods of requesting and securing small business loans are long and cumbersome. A digital solution lets financial institutions spend less time processing small business loans and empowers borrowers to apply for a loan online at any time, from anywhere, using any device.


Figure 1. Small business loan process.

The vision was to deliver value to customers early and then evaluate assumptions so that features could evolve organically.

“Historically, scaling of systems to accommodate growing business needs has been challenging. This is not the case with Service Fabric. Components of our application can be scaled up as needed via a few configuration changes.”

—Scott Wadel, Group Leader–Product Technology, Digital Lending Platform

Agile mindset, DevOps processes

Using an agile methodology and mindset, D+H can now pivot faster in response to market demands, test new products and capabilities, and focus on features that deliver tailored business value, getting them into production quickly.

To speed the evolution of tools and processes, D+H made DevOps a core practice of their software development process. DevOps practices enable engineers to deliver infrastructure as part of their code, document and manage changes to the infrastructure, and create an environment where all teams drive toward repeatable processes. D+H can now deploy identical environments in hours instead of weeks, manage disaster recovery, and reduce effort by reusing the same artifacts across teams and environments.

DevOps is an integral part of D+H’s software release process today. D+H’s DevOps team is small but nimble, quickly scripting new capabilities and creating proofs of concept. After they establish the technology needs, they can automate a deployment in a logical way that doesn’t require deep infrastructure knowledge.

“Service Fabric provides the ability for our teams to remain agile and focus on innovation and delighting our customers through quality and frequent feature releases.”

—Scott Wadel, Group Leader–Product Technology, Digital Lending Platform

Using microservices

For the new product, D+H wanted to use microservices to help them move capabilities to market faster and meet all their development goals:

  • Build once and reuse many times with different use cases and applications.
  • Support autonomy so teams can make changes faster.
  • Build solutions faster and more flexibly through composition.

Logical architecture

The new, microservices-based lending application has two points of entry:

  • External traffic originating from customer websites through REST API calls. All external traffic is fronted with an API Management Gateway that allows D+H to gain insights into usage via analytics and to apply policies such as throttling and token validation.
  • Internal traffic originating within the Azure virtual network or from the on-premises datacenter through Azure ExpressRoute.

D+H used Service Fabric to orchestrate the microservices and manage scaling of the application in the cloud.


Figure 2. Logical architecture of the D+H lending application.

Advantages of Service Fabric

Service Fabric offers several key benefits that D+H leveraged for this application:

  • Service density: D+H’s products were often deployed on a group of servers, each server dedicated to a component or role in the architecture. This led to uneven use of compute resources, leaving some servers underutilized and others overutilized. To stabilize the applications, D+H would scale out the overutilized components to additional servers, increasing capital and operational costs. With Service Fabric, D+H can maximize server utilization by co-locating multiple services on the same servers. In traditional hosting, the burden was on D+H to migrate and manage each service so that all servers operated at optimal efficiency; Service Fabric removes this need.
  • Horizontal scaling: Without a crystal ball, it’s impossible to predict server demand, so servers were over-provisioned to allow for growth. Over-provisioning not only increased the initial cost of the product, but also proved difficult to manage when products became popular and additional capacity was needed to support the demand. Service Fabric’s ability to transparently scale out through Virtual Machine Scale Sets is a huge improvement over adding servers, changing load balancers, changing application configurations, and so on. D+H can now start with a small footprint (typically 5 to 10 servers) and scale out as demand for their products grows. Very few application changes are needed, since all services use the naming service to resolve each other.
  • Self-healing: Server failures are inevitable, and rapid recovery from them is critical to the overall health of the service. Traditional operations try to increase the mean time between failures by buying best-in-class hardware, but this approach isn’t foolproof and failures still happen. With Service Fabric on Azure, new VMs can be brought online to replace failed servers. Services are automatically migrated to new servers, and persistent data is replicated to healthy nodes without any interruption.
  • Upgrade management: Multiple teams at D+H develop and deploy new services at different times, so the engineers needed a simpler way to upgrade existing services without impacting these teams or their products. Service Fabric’s incremental upgrades simplified the release management workflow at D+H. The long-term vision is to provide a unified environment where multiple products can coexist and reuse each other’s service capabilities.
  • Cost savings: D+H’s goal is to build a single Service Fabric cluster per Azure region that can be used to deploy multiple products (and their own services). Then the organization can lower costs through a shared environment model while encouraging teams to reuse existing services.
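Because services resolve each other through the naming service, scaling out doesn’t ripple into application configuration. Below is a minimal sketch of that resolve-and-retry pattern; the registry dict and the service name are illustrative stand-ins, not D+H’s actual services or the Service Fabric API:

```python
import random

# Illustrative stand-in for the Service Fabric naming service:
# maps a service name to the endpoints currently hosting it.
# As nodes are added or removed, only this registry changes;
# callers keep resolving by name.
registry = {
    "fabric:/Lending/LoanApplication": [
        "http://node1:8080", "http://node4:8080",
    ],
}

def resolve(service_name):
    """Return one endpoint currently registered for the named service."""
    endpoints = registry.get(service_name)
    if not endpoints:
        raise LookupError(f"no endpoints for {service_name}")
    return random.choice(endpoints)

def call_with_retry(service_name, send, attempts=3):
    """Resolve an endpoint, call it, and re-resolve on failure.

    `send` is the caller-supplied function that performs the
    actual request against a resolved endpoint.
    """
    last_error = None
    for _ in range(attempts):
        endpoint = resolve(service_name)
        try:
            return send(endpoint)
        except ConnectionError as err:
            last_error = err  # endpoint may be stale; resolve again
    raise last_error
```

In a real cluster the naming service plays the role of the registry: as nodes join or leave during a scale-out, callers keep resolving by name and only the endpoint list changes.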

Investing in stateless services

D+H chose to develop the majority of their services as stateless services. The decision was driven by:

  • The need to simplify the onboarding process for its developers. Teams knew the principles of stateless services, and they could easily map their experience to Service Fabric and the Open Web Interface for .NET (OWIN).
  • The ability to leverage existing operational support from database administrators and IT teams. By using data services such as SQL Database, their database administrators can help define, manage, and tune the databases, and IT teams can monitor, back up, and build recovery procedures to support production. Using stateful services increases the complexity of managing the data, and it was too steep a learning curve for the teams to take on in the initial stages.
  • Following on the agile theme, the team decided to focus on delivering the minimum viable product (MVP) to the market using the existing experience and knowledge of .NET, Azure, and Service Fabric.

Now that the product is in production, the team is evaluating their architecture and making structural changes based on the data they’ve collected from the MVP. Part of the exercise is to revisit the original rationale for using stateless services.

Service Fabric is a data-aware environment, and co-locating data closer to the services that need it means lower latency and higher throughput. For the next evolution in their microservices journey, D+H is looking at ways to increase the performance of their application by bringing their data from Redis and SQL Database closer to the services that need it.
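As a concrete picture of the stateless approach the team started from: each request re-reads its state from external stores, typically via a read-through cache. The sketch below uses a dict as a stand-in for Redis and a function as a stand-in for a SQL Database query; the names and data are illustrative, not D+H’s code:

```python
# Minimal read-through cache sketch: a stateless service keeps no
# state of its own and consults the cache, then the database, on
# each request.

cache = {}          # stand-in for Redis
db_reads = []       # records which keys actually hit the database

def load_from_db(loan_id):
    """Stand-in for a SQL Database lookup (illustrative data)."""
    db_reads.append(loan_id)
    return {"loan_id": loan_id, "status": "under-review"}

def get_loan(loan_id):
    """Read-through: serve from cache, fall back to the database."""
    if loan_id in cache:
        return cache[loan_id]
    record = load_from_db(loan_id)
    cache[loan_id] = record   # easily rebuilt if the cache is lost
    return record
```

Every cache miss still crosses the network to a remote store; co-locating that data with the service in a stateful model removes the hop, which is the performance gain the team is evaluating.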

Implementing security-first engineering

As a processor of financial data, D+H needs to put the security of their users and customers first. Azure provides a platform with the security and compliance features that D+H can build on. To further protect their data, D+H uses Azure Key Vault to store certificates and encryption keys.


Figure 3. Overview of security-related services.

In this flow, Service Fabric is the secure transactions hub:

  • All application and management traffic is controlled by its respective load balancer. Internal traffic is accepted only through known routes, including the on-premises datacenter, via ExpressRoute. The external load balancer is configured to only accept traffic from the API Management Gateways and route it directly to the VMs deployed in the Service Fabric cluster. No other access is allowed through the external load balancer.
  • All management operations and node-to-node communications are secured through certificates that are managed by Azure Key Vault.

Integrating Azure API Management

As more microservices came online, the engineering team needed a centralized, policy-based repository to manage and deliver APIs to their own teams. They chose Azure API Management because it provided immediate benefits:

  • The ability to scale out the API Management Gateway to regions that serve their customers. All APIs can be managed through a single portal but deployed to multiple regions, so D+H can react faster to changing markets and support a robust disaster recovery strategy.
  • Virtual network support for running API Management Gateways within a virtual network, securing APIs by avoiding exposure to direct internet traffic. The team can quickly surface REST APIs through API Management’s console and restrict access to the Service Fabric cluster at the same time.
  • Service throttling and policy management capabilities allow D+H to roll out its services in a controlled and phased manner.
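The throttling the gateway applies can be pictured as a per-caller call budget over a time window. Below is a minimal fixed-window sketch of that behavior; it illustrates the idea only and is not API Management’s implementation:

```python
import time

class FixedWindowThrottle:
    """Sketch of per-caller throttling: at most `limit` calls per
    `window` seconds. Illustrates the behavior of a gateway
    rate-limit policy; not API Management's implementation."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.counters = {}   # caller -> (window_start, count)

    def allow(self, caller, now=None):
        """Return True if the call is within the caller's budget."""
        now = time.monotonic() if now is None else now
        start, count = self.counters.get(caller, (now, 0))
        if now - start >= self.window:
            start, count = now, 0    # window rolled over; reset
        if count >= self.limit:
            return False             # reject: over the limit
        self.counters[caller] = (start, count + 1)
        return True
```

In API Management this logic is expressed declaratively as a policy on the inbound pipeline rather than written in application code, which is what lets D+H phase in services without touching them.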

Supporting business continuity and disaster recovery

D+H engineers considered how best to test the resiliency of their platform. They already had Azure Resource Manager (ARM) templates that had been deployed many times over in nonproduction environments. The engineering team had a choice: create an active/active environment with Traffic Manager routing traffic based on geography; set up a warm standby environment with essential configurations already deployed; or build everything from scratch in a secondary region in response to an outage.

Keeping cost in mind, the engineering team decided to deploy a warm standby environment in a separate region for the pilot launch. To do this, they needed to deploy:

  • All necessary network functionality.
  • Azure Key Vault.
  • Azure SQL Database with Active Geo-Replication enabled.
  • DocumentDB with the session consistency level.

After the essential services were configured, D+H’s engineers performed deployment testing, identified their mean time to recovery, and verified that it fell within the recovery time objective they gave to their customers.
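The recovery-time check itself reduces to simple arithmetic: sum the measured duration of each failover phase and compare the total against the recovery time objective. The phase names, timings, and objective below are hypothetical, not D+H’s measured figures:

```python
# Sketch of the recovery-time verification. All numbers are
# hypothetical stand-ins for measured failover timings.

RTO_MINUTES = 240   # hypothetical customer-facing objective

def within_rto(phases, rto_minutes=RTO_MINUTES):
    """Return (total, ok): total recovery minutes and whether the
    measured mean time to recovery fits the objective."""
    total = sum(phases.values())
    return total, total <= rto_minutes

# Hypothetical per-phase timings, in minutes.
measured_phases = {
    "detect-outage": 10,
    "fail-over-sql-geo-replica": 15,
    "deploy-service-fabric-apps": 90,
    "smoke-test-and-cut-over": 30,
}
```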


By making the most of Service Fabric and a microservices architecture, D+H can respond to technology and market changes faster while also reusing its investments across multiple product offerings. Service Fabric gave D+H both a reliable platform for orchestrating stateless services and a persistent, consistent storage technology.

Comments (2)
  1. Shubhra says:

    In Figure 2, what micro services are present? Can you please clarify? There are different data stores but it is not clear what data stores ties back to which business capability.

    1. Senthuran Sivananthan [MSFT] says:

      Hi Shubhra,

The majority of the microservices are stateless, but we also have some stateful services to improve application performance. The first release focused on fine-grained microservices; however, we are now consolidating services based on use cases and data access patterns.

The services use Azure SQL as the primary data store for all transactional data, Redis for caching (easily rebuilt), and DocumentDB for user-defined, application-specific configuration.

      Hope this helps.


