Automating the data matching process in SQL Server Data Quality Services (DQS)

OH22 data has just released a free SSIS transform task for SQL Server Data Quality Services (DQS) matching. The solution is published on CodePlex. With this custom transform task, you can use a Knowledge Base (KB) created in DQS to automate data matching through SSIS.


In this article, I will walk through using the new SSIS transform to automate the matching project described in my previous article: Matching related and duplicate Customer Records using SQL Server Data Quality Services (DQS).


To summarize, we have the following data set, in which we would like to identify duplicate and related records:



In the previous article, we created a DQS Knowledge Base, MyCustomerKB, with the following matching rules:

  1. Match By Customer Name: matches records with similar CustomerName values.
  2. Match By City State: matches records with similar City (50% weight) and State (50% weight) values.

In the previous article, we used the Data Quality Client tool to (manually) execute a matching project using MyCustomerKB against the data input above. In this article, we will automate the process using the new SSIS transform task.


The following are the steps to automate:


A. Install the DQS Matching CodePlex project

B. Create SSIS project and configure DQS matching transform task

C. Execute the SSIS project and review the results


A. Install the DQS Matching CodePlex project


  1. Run the Windows Installer package (.msi) on the SQL Server 2012 server (with Integration Services installed)


B. Create SSIS project and configure DQS matching transform task


  1. Create a new Integration Services project using SQL Server Data Tools
  2. On the Data Flow tab, create a new Data Flow task
  3. Drag and drop the Source Assistant and configure it to connect to the customer data you would like to de-duplicate
  4. Drag and drop the DQS Matching transform task


  5. Connect the output line from the source to the DQS Matching transform task
  6. Double-click DQS Matching to open the configuration editor window
    • Under the Connection Manager tab, connect to the SQL Server Data Quality Services server by clicking New:

    • Select the Data Quality Knowledge Base containing the matching policy; the available matching rules and domains are displayed for you to review
    • Click the Mapping tab to map the columns in your data source to the Knowledge Base domains

      • Note:

        • You must map all domains used by the matching rules in your Knowledge Base. The following error message appears if you click OK without mapping all domains: "Not all domains have been assigned or has been assigned twice. Please assign all domains properly"

        • When you use a composite domain as part of your matching rule, you can map either the composite domain or all of the single domains that make up the composite domain. If you map a column to a composite domain, the single domains are removed from the drop-down options. Conversely, if you select one of the single domains that are part of the composite domain, the composite domain disappears from the drop-down options. You can redo the selection by clearing the column checkbox in the top section.

        • Optionally, you can go to the Advanced tab to specify the minimum matching score threshold. By default, DQS matches only records with a matching score of 80% or above. You can raise the minimum score on the Advanced tab.


  7. Create the destination tables. The SSIS DQS Matching transform produces two outputs: Matched Output and Unmatched Output. The Unmatched Output has the same schema as your input data source. The Matched Output adds the following columns:

    • [RecordId] nvarchar(255)

    • [ClusterId] nvarchar(255)

    • [ClusterRecordRelationId] nvarchar(255)

    • [MatchingScore] nvarchar(255)

    • [RuleId] nvarchar(255)

    • [IsPivot] nvarchar(255)

    • [Status] nvarchar(255)

    • [PairId] nvarchar(255)

    • [SiblingId] nvarchar(255)

    • [PivotId] nvarchar(255)

You do not need to include all the metadata columns above in your matched output table. I suggest including at least the following:

    • RecordId: unique record identifier for the data set
    • ClusterId: identifier for the matched group; any records with the same ClusterId value are considered matched and may be potential duplicates
    • RuleId: identifier of the rule used for the matching
    • SiblingId: identifier of the other record this record was compared against
    • MatchingScore: the comparison score between the record and its sibling

The following scripts create the matched and unmatched tables based on the data source we used previously:

CREATE TABLE MyCustomers_Matched (
 CustomerID INT,
 CustomerName NVARCHAR(255),
 City NVARCHAR(32),
 Province NVARCHAR(32),
 LastUpdate DATETIME,
 [RecordId] nvarchar(255),
 [ClusterId] nvarchar(255),
 [RuleId] nvarchar(255),
 [SiblingId] nvarchar(255),
 [MatchingScore] nvarchar(255)
);

CREATE TABLE MyCustomers_UnMatched (
 CustomerID INT,
 CustomerName NVARCHAR(255),
 City NVARCHAR(32),
 Province NVARCHAR(32),
 LastUpdate DATETIME
);

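Once the package has run, a quick sanity check against the MyCustomers_Matched table above can list the groups of potential duplicates. This is only a sketch: it relies on the ClusterId semantics described above (records sharing a ClusterId were considered matched), and it assumes every cluster of interest contains at least two records.

```sql
-- List each cluster of potential duplicates and how many records it contains.
-- Records sharing a ClusterId were considered matched by DQS.
SELECT
    ClusterId,
    COUNT(*)          AS RecordsInCluster,
    MIN(CustomerName) AS SampleCustomerName
FROM MyCustomers_Matched
GROUP BY ClusterId
HAVING COUNT(*) > 1
ORDER BY RecordsInCluster DESC;
```

Clusters at the top of this list are the ones worth reviewing first, since they contain the most potential duplicates.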
The final data flow task should look like the following:



C. Execute the SSIS project and review the results


  1. Execute the SSIS project
  2. Open SQL Server Management Studio and review the MyCustomers_Matched table:



  • The ClusterId indicates a group of matched records. In our example, CustomerID=1 and CustomerID=2 have the same ClusterId of 1000000
  • SiblingId indicates the corresponding record being compared for a given MatchingScore. For example, CustomerID=6 has a SiblingId of 1000004 (corresponding to CustomerID=5), so the matching score of 87.5 is between CustomerID=6 and CustomerID=5
  • The output table also includes RuleId. To get the rule name, first run a query to get the ID of the Knowledge Base you used, then use that ID to query for the matching rule:
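Because each matched row carries a SiblingId pointing at the RecordId of the record it was compared against, you can also read the matched output as explicit record pairs. The self-join below is a sketch against the MyCustomers_Matched table created earlier; note that the ID and score columns are nvarchar, so the IDs join as strings and the score is cast for numeric ordering. Rows without a sibling value (if any) simply drop out of the inner join.

```sql
-- Show each compared pair side by side with its matching score.
SELECT
    m.ClusterId,
    m.CustomerName AS CustomerName,
    s.CustomerName AS SiblingCustomerName,
    m.MatchingScore,
    m.RuleId
FROM MyCustomers_Matched AS m
JOIN MyCustomers_Matched AS s
    ON m.SiblingId = s.RecordId
-- MatchingScore is stored as nvarchar; cast it for numeric ordering.
ORDER BY m.ClusterId, CAST(m.MatchingScore AS FLOAT) DESC;
```

In the example above, this would show CustomerID=6 paired with CustomerID=5 on one row, with 87.5 as the score.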



In this article, I described how to use the SSIS DQS Matching transform task developed by OH22. Using this transform task, you can automate DQS record matching with a Knowledge Base created in the DQS client. You can download the CodePlex project from


Comments

  1. This is a great walk-through. Good job!

  2. Arthur says:

    This component potentially saves a lot of programming effort especially when matching may not be done using database facilities or RegEx. I only wish it could have appeared long ago.

    A big thank you!

  3. Phil says:

    How can we update the KB with the results of the match or is that pointless as they are all auto-approved?

  4. Welly.Lee says:

    DQS does not update KB from matching project, you can only update KB from cleansing project

  5. Lalitha says:

    Its very nice article.

    while i tried to add data matching to data flow i am getting below error

    The component has detected potential metadata corruption during validation.

    Error at relate [DQS Matching [181]]: System.MissingMethodException: Method not found: 'Void Microsoft.Ssdqs.Component.Common.Utilities.ComponentUtility.FireError(Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSComponentMetaData100, Microsoft.Ssdqs.Component.Common.Messages.ComponentMessage, System.Object[])'.

      at oh22is.SqlServer.DQS.Matching.Validate()

      at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostValidate(IDTSManagedComponentWrapper100 wrapper)

    It may not be possible to recover to a valid state using a component-specific editor. Do you want to use the Advanced Editor dialog box for editing this component?

  6. Scheidl says:

    Is this implementation or a similar one to be included in SQL Server 2014?

  7. Jagannathan Santhanam says:

    Excellent illustration! Thanks a bunch! Makes it a lot easier to map an example to a real life situation.

  8. Denis says:

    I have big problem with component. I send to input 509980 records, matched out 176061 and unmatched 315976.

    Sum of matched and unmatched 492037, not 509980!

  9. Not adequate says:

    The DQS Matching transform is inadequate and unstable (while the columns are mapped). Why do we need to specify minimum Matching Score on the advanced tab when Matching policy, with multiple matching rules, can be already defined in DQS? Why can't this component leverage the matching rules already defined in DQS? This component does not support multiple matching rules. How can Microsoft ignore the automated matching through an SSIS transformation and instead something untrustworthy from CodePlex is being pushed?
