The Mysterious Case of Managed Identities for Triggers in Azure Government

Out of the box, an Azure Function sets up its connection to its storage account using a shared access key. If you create a new queue trigger, it will also default to that same shared access key. That is not ideal; we should be using some form of Azure AD authentication instead. Let's take a look at managed identities and how to actually make this work in Azure Government.

Straight out of Microsoft's documentation: "If you're configuring AzureWebJobsStorage using a storage account that uses the default DNS suffix and service name for global Azure, following the https://<accountName>.blob/queue/file/table.core.windows.net format, you can instead set AzureWebJobsStorage__accountName to the name of your storage account. The endpoints for each storage service will be inferred for this account. This won't work if the storage account is in a sovereign cloud or has a custom DNS."

Well, Azure Government is left out of this neat feature, so how do we use an identity-based connection? We need to add a new setting to the function configuration that tells the blob/queue trigger to use a managed identity. In my example, I want to use a storage account named sajimstorage with a queue called my-queue. Now, pay attention, because here are the undocumented tidbits for Azure Gov. The setting name needs to be in the format AzureWebJobs[storageAccountName]__queueServiceUri: AzureWebJobs is static and must always be there, then comes the name of your storage account, then two underscores, then queueServiceUri. The value should be the endpoint of your queue with no trailing slash!
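If you prefer to script that rather than use the portal, a minimal sketch with Az PowerShell (requires the Az.Functions module) would look like the following. The resource group and function app names are placeholders; only the setting name and queue endpoint come from this example.

# Sketch: add the identity-based queue connection setting to the function app
# 'rg-example' and 'func-example' are placeholder names
Update-AzFunctionAppSetting -ResourceGroupName 'rg-example' -Name 'func-example' `
    -AppSetting @{ 'AzureWebJobssajimstorage__queueServiceUri' = 'https://sajimstorage.queue.core.usgovcloudapi.net' }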

Next, we must ensure the connection name in function.json is set to the name of your storage account. This ties back to the setting above, which the Functions host will then resolve correctly.
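Put together, the trigger binding in function.json for this example ends up looking roughly like this. The binding name here is the PowerShell template default; the queue name is the my-queue example used later in this post.

{
  "bindings": [
    {
      "name": "QueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "my-queue",
      "connection": "sajimstorage"
    }
  ]
}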

There are more steps, but I figured it would be easier to actually give an example.

  1. Create a new Azure Functions app on the Consumption plan. I am using the PowerShell Core runtime for this example.
  2. Create the storage account you want to use with the function's managed identity.
  3. In your function, enable the system-assigned managed identity.
  4. On that storage account, assign the RBAC roles Storage Queue Data Reader and Storage Queue Data Message Processor to the function's identity (see the role-assignment sketch after this list).
  5. Open up Kudu for your function and, in the debug console, navigate to site/wwwroot.
  6. Edit requirements.psd1 and add 'Az.Accounts' = '2.12.0' (see the requirements.psd1 sketch after this list).
    Don't uncomment 'Az' = '10.*' because it will pull in every Az module and take forever; just use what you need. Also, do not use 2.* for Az.Accounts because it breaks the managed identity. That is a longer discussion, but visit https://github.com/Azure/azure-powershell/issues/21647 to see the ongoing issues. I was explicitly told to use 2.12.0 by Microsoft as well.
  7. Edit profile.ps1 and add -Environment AzureUSGovernment to your Connect-AzAccount call. It should look like this: Connect-AzAccount -Identity -Environment AzureUSGovernment
  8. Back in the Azure Function's Configuration blade, add a new application setting following the format AzureWebJobs[storageAccountName]__queueServiceUri, e.g. AzureWebJobssajimstorage__queueServiceUri (two underscores), and set the value to the endpoint of your queue: https://sajimstorage.queue.core.usgovcloudapi.net
  9. Create your queue trigger function. It will default to using the shared access key, but let's change that. Open Code + Test, select function.json from the run.ps1 dropdown, and change "connection": "AzureWebJobsStorage" to "connection": "sajimstorage".
    Change the queue name if you want. Out of the box it is ps-queue-items, but I am going to change it to my-queue. Save your changes.
  10. Restart your function from the overview.
  11. Add a test message to your queue and view the output in your function's invocation traces.
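For step 4, if you would rather script the role assignments than click through the portal, here is a rough sketch with Az PowerShell. The resource group and function app names are placeholders; only the role names and the storage account come from this example.

# Sketch for step 4: grant the function's system-assigned identity the two queue roles on the storage account
# 'rg-example' and 'func-example' are placeholder names
$principalId = (Get-AzWebApp -ResourceGroupName 'rg-example' -Name 'func-example').Identity.PrincipalId
$scope       = (Get-AzStorageAccount -ResourceGroupName 'rg-example' -Name 'sajimstorage').Id

foreach ($role in 'Storage Queue Data Reader', 'Storage Queue Data Message Processor') {
    New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName $role -Scope $scope
}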
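And here is a sketch of what requirements.psd1 from step 6 ends up looking like, with only Az.Accounts pinned:

# requirements.psd1 - only pull in what the function actually needs
@{
    # 'Az' = '10.*'   # intentionally left commented out; pulling the full Az module takes forever
    'Az.Accounts' = '2.12.0'
}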


    Something to note: if you navigate to Integration and select your trigger, the storage account connection will show as empty. This is normal.


    It is not as painful as it looks. I mean, maybe it was while I was trying to figure this all out, but hopefully this saves you some time! Cheers.

Azure Windows Security Baseline

I was designing a deployment around Azure Virtual Desktop utilizing Azure Active Directory (not AADDS or ADDS), and when checking a test deployment for compliance against the NIST 800-171 Azure Policy, it showed the Azure security baseline was not being met. In a domain I wouldn't worry, since group policy would fix this right up, but what about machines that are not domain joined? What about custom images? Sure, we could manually set everything and then capture an image, but I prefer a clean base and then applying configuration during the image build. Let's take a look at how to hit this compliance checkbox.

I recalled that Microsoft released STIG templates and found the blog post Announcing Azure STIG solution templates to accelerate compliance for DoD – Azure Government (microsoft.com). I was hoping their efforts would make my life a little bit easier, but after a test deploy, I saw 33 items still not in compliance.

Looking at their workflow, it is basically how I'd like my image process to look in my pipeline.

Deploy a baseline image, apply some scripts, and then generate a custom image in a shared gallery for use. I didn't want to reinvent the wheel, so I started researching whether anyone had done this already. I found the repo https://github.com/Cloudneeti/os-harderning-scripts/ which looked promising, but it was a year old and I noticed some things wrong with the script, such as incorrect registry paths, commented-out DSC snippets, etc. It did handle a good bulk of the work, but it needed to be cleaned up and have things added. Looking at the commented-out code, it was all around user rights assignments. Now, the DSC module for user rights assignments is old and I haven't seen a commit in there for years. Playing around, it seems that some settings simply cannot be set with it. I didn't want to hack something together using secedit, so I found a neat script, https://blakedrumm.com/blog/set-and-check-user-rights-assignment/, where I could just pass in the required rights and move on. Everything worked except for SeDenyRemoteInteractiveLogonRight: when the right doesn't already exist in the exported config, the script couldn't add it. So, I wrote the snippet below to add that last right.


# Export the current local security policy to a temp folder
$tempFolderPath = Join-Path $Env:Temp $(New-Guid)
New-Item -Type Directory -Path $tempFolderPath | Out-Null
secedit.exe /export /cfg "$tempFolderPath\security-policy.inf"

# Find the line number of the [Privilege Rights] section header
$line = Select-String -LiteralPath "$tempFolderPath\security-policy.inf" -Pattern "Privilege Rights" |
    Select-Object -ExpandProperty LineNumber

# Append the missing right (deny remote interactive logon to the Guests group) right after the section header
$fileContent = Get-Content "$tempFolderPath\security-policy.inf"
$fileContent[$line - 1] += "`nSeDenyRemoteInteractiveLogonRight = *S-1-5-32-546"
$fileContent | Out-File "$tempFolderPath\security-policy.inf" -Encoding unicode

# Apply the updated policy and clean up the exported file
secedit.exe /configure /db c:\windows\security\local.sdb /cfg "$tempFolderPath\security-policy.inf"
Remove-Item -Force "$tempFolderPath\security-policy.inf" -Confirm:$false

After running the PowerShell DSC configuration and this script, the Azure baseline comes back fully compliant. I have tested this on Windows Server 2019 and Windows 10.

You can grab the files in my repo https://github.com/jrudley/azurewindowsbaseline

AKS Double Encryption

I have been living in a world of compliance these past few weeks, specifically NIST 800-171. Azure provides a policy initiative for NIST, and one of the checks is to make sure your disks are encrypted with both a platform-managed and a customer-managed key. I recently ran into a scenario with an application deployed as a StatefulSet in Azure Kubernetes Service. Let's talk a bit more about this and NIST.

The Azure Policy flagged my disks as non-compliant because they were encrypted with only a platform-managed key. Researching the AKS docs, I found an article on using a customer-managed key, but that still isn't what I need, since I need double encryption to meet compliance. After some research in the Kubernetes SIGs repo, I found the Azure Disk CSI driver doc, and check it out:

It looks like this document was modified back in May to add support for this feature, so it is fairly new. Upgrade the driver to 1.18 or above and double encryption support should be there.

To implement it, create a new storage class that references your disk encryption set ID along with the double encryption type.

kind: StorageClass
apiVersion: storage.k8s.io/v1  
metadata:
  name: byok-double-encryption
provisioner: disk.csi.azure.com 
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  skuname: Premium_LRS
  kind: managed
  diskEncryptionType: EncryptionAtRestWithPlatformAndCustomerKeys
  diskEncryptionSetID: "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/diskEncryptionSets/<dek-name>"

Apply the snippet above and reference the new storage class from your deployment YAML to get double encryption. This will tick that NIST compliance checkbox for AKS disks.
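As a rough sketch, a persistent volume claim referencing it could look like this; the claim name and size are placeholders. For a StatefulSet, you would reference the storage class the same way from volumeClaimTemplates.

# Sketch: PVC using the double-encryption storage class above (name and size are placeholders)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-byok-double-encryption
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: byok-double-encryption
  resources:
    requests:
      storage: 10Gi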