Hello fellow Data Enthusiasts and welcome to my Blog,

recently I worked with a client who had about 50 Linked Services, all of which were pretty similar. A Linked Service in ADF and Synapse serves as a connection configuration to external data sources or services. It includes details like endpoint URLs, connection strings, and authentication credentials. Linked Services support various data stores and platforms, such as databases, file systems, and more.

I’m excited to share the improvements I made to these Linked Services to make them more user-friendly. These enhancements not only simplified my setup but can also greatly benefit your CI/CD workflows.

First of all, some things that I think are barely documented:

  1. Every Linked Service supports parameters (some Linked Services also show them in the GUI, but even when they aren’t displayed, they are still supported).
  2. Linked Services allow you to retrieve information from the Key Vault.

If you haven’t set up a Key Vault yet, you’ll need to create one and assign the ‘Key Vault Secrets User’ RBAC role to the system-assigned managed identity of the Synapse Workspace or Azure Data Factory.
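If you prefer infrastructure as code, the role assignment can also be declared in an ARM template. The following is only a minimal sketch, not a definitive implementation: the vault name kv0linkedservice and workspace name syn0workspace are hypothetical placeholders, and the GUID used in roleDefinitionId is the built-in role definition ID of ‘Key Vault Secrets User’:

{
    "type": "Microsoft.Authorization/roleAssignments",
    "apiVersion": "2022-04-01",
    "scope": "Microsoft.KeyVault/vaults/kv0linkedservice",
    "name": "[guid(resourceGroup().id, 'kv0linkedservice', '4633458b-17de-408a-b874-0445c86b69e6')]",
    "properties": {
        "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '4633458b-17de-408a-b874-0445c86b69e6')]",
        "principalId": "[reference(resourceId('Microsoft.Synapse/workspaces', 'syn0workspace'), '2021-06-01', 'full').identity.principalId]",
        "principalType": "ServicePrincipal"
    }
}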

Then create a Linked Service to the Key Vault.
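Its JSON is quite small. A minimal sketch, assuming a hypothetical vault named kv0linkedservice (the Linked Service name AzureKeyVault is the one referenced by the snippets later in this post):

{
    "name": "AzureKeyVault",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "annotations": [],
        "type": "AzureKeyVault",
        "typeProperties": {
            "baseUrl": "https://kv0linkedservice.vault.azure.net/"
        }
    }
}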

Afterwards, create a Linked Service for each type of source system (e.g. Storage Account, SQL Server, Oracle, SAP) via the GUI.

Check the JSON code of the Linked Service, which will look something like this:

{
    "name": "AzureDataLakeStorage",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "annotations": [],
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://st0linkedservice.dfs.core.windows.net/"
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Then add the “parameters” property and create as many parameters as you need (since the Storage Account Linked Service has only one relevant property, a single parameter is enough):

{
    "name": "AzureDataLakeStorage",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "parameters": {
            "P_StorageAccountURLSecretName": {
                "type": "string"
            }
        },
        "annotations": [],
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://st0linkedservice.dfs.core.windows.net/"
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Now the trick is to set the URL based on a Key Vault secret. To do so, create a Key Vault secret which contains the URL (StorageAccount1URL = https://st0linkedservice.dfs.core.windows.net/), and create another secret with the URL of each additional storage account you want to connect to (StorageAccount2URL = https://st0linkedservice2.dfs.core.windows.net/).
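If you manage your secrets as code as well, such a secret can be declared as an ARM template resource. A minimal sketch, again assuming the hypothetical vault kv0linkedservice; note that putting the value into source control is only acceptable here because a storage account URL is configuration, not a credential:

{
    "type": "Microsoft.KeyVault/vaults/secrets",
    "apiVersion": "2022-07-01",
    "name": "kv0linkedservice/StorageAccount1URL",
    "properties": {
        "value": "https://st0linkedservice.dfs.core.windows.net/"
    }
}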

Go back to the Linked Service JSON and insert this snippet in each place where the content differs depending on the source system:

{
    "type": "AzureKeyVaultSecret",
    "store": {
        "referenceName": "AzureKeyVault",
        "type": "LinkedServiceReference"
    },
    "secretName": "@{linkedService().<ParameterName>}"
}

So in my example it looks like this:

{
    "name": "AzureDataLakeStorage",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "parameters": {
            "P_StorageAccountURLSecretName": {
                "type": "string"
            }
        },
        "annotations": [],
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "AzureKeyVault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "@{linkedService().P_StorageAccountURLSecretName}"
            }
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Now you can design Datasets, Pipelines, and Mapping Data Flows that intelligently adapt to different source systems based on the parameter values you pass in, as shown in the sketch below.
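To illustrate, here is a minimal sketch of such a generic dataset; the name DS_DataLake_Generic and its parameter are hypothetical, everything else matches the Linked Service above. The dataset simply forwards the secret name to the Linked Service:

{
    "name": "DS_DataLake_Generic",
    "properties": {
        "parameters": {
            "P_StorageAccountURLSecretName": {
                "type": "string"
            }
        },
        "linkedServiceName": {
            "referenceName": "AzureDataLakeStorage",
            "type": "LinkedServiceReference",
            "parameters": {
                "P_StorageAccountURLSecretName": {
                    "value": "@dataset().P_StorageAccountURLSecretName",
                    "type": "Expression"
                }
            }
        },
        "annotations": [],
        "type": "Binary",
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation"
            }
        }
    }
}

A pipeline can then pass StorageAccount1URL or StorageAccount2URL to this parameter at runtime, and the very same dataset will read from different storage accounts.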

For a seamless CI/CD experience, make sure the same Key Vault secret names are set up in each environment, each with its environment-specific value (e.g. StorageAccount1URL points to the development storage account in DEV and to the production one in PROD). This ensures smooth coordination, especially when dependent systems operate in different environments.

This post introduced you to the basic concept of dynamic Linked Services in Azure Data Factory (ADF) and Azure Synapse Analytics. I hope it helps you drastically reduce the number of Linked Services and integration datasets in your future projects.