Azure Policy Mystery: Compute Baseline Applies to Windows 11 Multi-Session, Not to Windows 11 Enterprise

It’s been a busy few months over here. With CMMC preparation in full swing, it’s been all about making sure our controls are defensible and our evidence holds up. I typically start from a NIST 800-171 rev.2 baseline so I’ve got a strong foundation to build on for compliance.

While reviewing my Azure Policy posture, I noticed something odd:

  • My AVD Windows 11 multi-session deployments were coming back Compliant.
  • But some test Windows 11 Enterprise VMs showed Not applicable for the guest configuration results.
  • Even more confusing: Azure Policy still appeared to report those Windows 11 Enterprise VMs as Compliant at the policy level.

That mismatch (“Compliant” vs “Not applicable”) is exactly the kind of thing that can cause confusion, or worse, show up during an audit.

What the baseline content says (MOF filters)

My first gut reaction was to look at what the baseline was actually doing. The Windows baseline content uses filters to decide whether a given rule should be evaluated. In the MOF you’ll see both a ServerTypeFilter and an OSFilter, for example:

	ServerTypeFilter = "ServerType = [Domain Controller, Domain Member, Workgroup Member]";
	OSFilter = "OSVersion = [WS2008, WS2008R2, WS2012, WS2012R2, WS2016]";

At face value, that OS filter reads like “Windows Server only” targeting (the WS* values).
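
If you want to pull those filter lines yourself, a quick search over whatever content the Guest Configuration agent has downloaded works. The path below is the agent's default data folder; treat it as an assumption and adjust if your agent stores content elsewhere:

# Find the ServerTypeFilter / OSFilter lines in the downloaded baseline MOFs
Get-ChildItem 'C:\ProgramData\GuestConfig' -Recurse -Filter *.mof -ErrorAction SilentlyContinue |
  Select-String -Pattern 'ServerTypeFilter|OSFilter' |
  Select-Object -ExpandProperty Line -Unique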

What the Guest Configuration agent logs show

Next I went to the Guest Configuration agent logs:

C:\ProgramData\GuestConfig\gc_agent_logs
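
If you would rather not scroll the raw files, a simple string search surfaces the relevant warnings (log file names vary a bit by agent version, so the wildcard is an assumption):

# Surface any OS-version filtering warnings from the agent logs
Select-String -Path 'C:\ProgramData\GuestConfig\gc_agent_logs\*.log' -Pattern 'filtered due to OS version'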

On the Windows 11 Enterprise VM, the logs clearly show the engine skipping rules due to OS filtering:

Message : [win11ent]: [Audit Other Object Access Events] Not evaluating rule because it was filtered due to OS version
[2025-12-26 17:23:21.749] [PID 7840] [TID 9292] [DSCEngine] [WARNING] ...

On the Windows 11 multi-session VM, the same type of check was actually being processed:

ResourceID: Audit Other Object Access Events
Message : [win11multi]: LCM:  [ Start  Get ]  [Audit Other Object Access Events]
[2025-12-26 17:21:03.877] ... Invoking resource method 'GetTargetResource' ... class name 'ASM_AuditPolicy'

So in my case:

  • Win11 Enterprise: rules get filtered out and report “Not applicable”
  • Win11 multi-session: rules run and produce compliance results

Compare OS SKU signals (including ProductType)

To compare what Windows reports about each OS, you can pull basic OS info like this:

Get-CimInstance Win32_OperatingSystem |
  Select-Object Caption, Version, BuildNumber, ProductType

Microsoft documents Win32_OperatingSystem.ProductType (on Microsoft Learn) as:

  • Work Station (1)
  • Domain Controller (2)
  • Server (3)

This is useful context when you’re trying to understand how a configuration engine might be classifying a machine at evaluation time. (It doesn’t prove which internal mapping the baseline uses, but it’s an easy, consistent signal to capture as evidence.)

The documentation clue: this baseline isn’t intended for Windows 10/11

The big “aha” for me was in Microsoft’s reference documentation for the Windows guest configuration baseline, which explicitly states:

“Azure Policy guest configuration only applies to Windows Server SKU and Azure Stack SKU. It does not apply to end user compute like Windows 10 and Windows 11 SKUs.” (Microsoft Learn)

That statement lines up perfectly with why Windows 11 Enterprise would return Not applicable.

What didn’t line up (and what prompted the deeper dive) was why Windows 11 multi-session was still producing evaluated results in my environment.

To drive the point of confusion home even further, let’s take a look at the guest configuration assignment itself. The multi-session OS is clearly being evaluated, while the Enterprise image still shows Compliant even though the baseline is not applicable to that OS.
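
To capture that state as evidence rather than relying on a portal screenshot, the Az.GuestConfiguration module can list the assignments per VM. A minimal sketch, with placeholder resource group and VM names:

# Requires the Az.GuestConfiguration module and a connected Az context
# (rg-avd, win11ent and win11multi are placeholder names)
Get-AzGuestConfigurationAssignment -ResourceGroupName 'rg-avd' -VMName 'win11ent'
Get-AzGuestConfigurationAssignment -ResourceGroupName 'rg-avd' -VMName 'win11multi'

# For per-rule detail on a specific assignment, Get-AzGuestConfigurationAssignmentReport
# returns what the agent actually evaluated on that machine.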

Closing the loop with Microsoft support

To close the case, I opened a ticket with Microsoft and shared:

  • the MOF filter behavior,
  • the Guest Configuration agent logs showing OS version filtering on Win11 Enterprise,
  • and the fact that Win11 multi-session was still evaluating rules.

Support escalated to the product group and confirmed (for my scenario) that the baseline behavior I was seeing was expected, and that documentation updates were planned to make it clearer that the baseline evaluates the multi-session OS but not the Enterprise OS.

Takeaway: don’t assume “Compliant” means “evaluated.” For audit prep, verify applicability and keep a record of what the agent actually assessed, especially when you’re mixing Windows 11 Enterprise and Windows 11 multi-session in the same compliance scope.

The Mysterious Case of Managed Identities for Triggers in Azure Government

Out of the box, an Azure Function will set up its connections using an access key to talk to its storage account. If you create a new queue trigger, it’ll just set up the connection to use that same shared access key. That is not ideal; we should be using some form of AAD to authenticate. Let’s take a look at managed identities and how to actually make this work in Azure Government.

Straight out of Microsoft’s documentation: “If you’re configuring AzureWebJobsStorage using a storage account that uses the default DNS suffix and service name for global Azure, following the https://<accountName>.blob/queue/file/table.core.windows.net format, you can instead set AzureWebJobsStorage__accountName to the name of your storage account. The endpoints for each storage service will be inferred for this account. This won’t work if the storage account is in a sovereign cloud or has a custom DNS.”

Well, Azure Government is nixed from this neat feature, so how do we use an identity-based connection? We need to add a new setting to the function configuration that says to use a managed identity when the trigger is a blob/queue. In my example, I want to use a storage account named sajimstorage with a queue called my-queue. Now, pay attention, because here are the undocumented tidbits for Azure Gov. The setting name needs to be in the format AzureWebJobs[storageAccountName]__queueServiceUri: AzureWebJobs is static and must always be there, next comes the name of your storage account, then two underscores and queueServiceUri. The value should be the endpoint of your queue with no trailing slash!
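
If you would rather script that setting than click through the portal, something like this should do it. It assumes the Az.Functions module and my example names (func-jim and rg-demo are placeholders); swap in your own:

# Add the identity-based queue connection setting to the function app
Update-AzFunctionAppSetting -Name 'func-jim' -ResourceGroupName 'rg-demo' -AppSetting @{
    'AzureWebJobssajimstorage__queueServiceUri' = 'https://sajimstorage.queue.core.usgovcloudapi.net'
}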

Next, we must ensure the function.json connection name is set to the name of your storage account. This ties back to the setting above, which the function will parse out correctly.

There are more steps, but I figured it would be easier to actually give an example.

  1. Create a new Azure consumption plan function app. I am using the PowerShell Core runtime for this example.
  2. Create the storage account you want to use the function’s managed identity with.
  3. In your function, enable the system-assigned managed identity.
  4. On the storage account you want to use the managed identity with, assign the RBAC roles Storage Queue Data Reader and Storage Queue Data Message Processor (there is a scripted example after this list).
  5. Open up Kudu for your function and navigate in the debug console to site/wwwroot.
  6. Edit requirements.psd1 and add 'Az.Accounts' = '2.12.0'
    Don’t uncomment 'Az' = '10.*' because it’ll pull in every module and take forever. Just use what you need. Also, do not use 2.* for Az.Accounts because it breaks the managed identity. This is a longer discussion, but visit https://github.com/Azure/azure-powershell/issues/21647 and see all the ongoing issues. I was explicitly told 2.12.0 by Microsoft as well.
  7. Edit profile.ps1 and add -Environment AzureUSGovernment to your Connect-AzAccount call. It should look like this: Connect-AzAccount -Identity -Environment AzureUSGovernment
  8. Back in the Azure Function Configuration, add a new application setting following the format of AzureWebJobs[storageAccountName]__queueServiceUri, e.g. AzureWebJobssajimstorage__queueServiceUri (2 underscores), and set the value to the endpoint of your queue: https://sajimstorage.queue.core.usgovcloudapi.net
  9. Create your queue trigger function. It will default to using the shared access key, but let’s change that. Open Code + Test, select the dropdown from run.ps1 and select function.json. Change "connection": "AzureWebJobsStorage" to "connection": "sajimstorage"
    Change the queue name if you want. Out of the box, it is ps-queue-items, but I am going to change it to my-queue. Save your changes.
  10. Restart your function from the overview.
  11. Add a test message onto your queue and view the output from your function invocations traces.
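
For step 4, here is roughly what the role assignments look like scripted with Az PowerShell. This is a sketch with placeholder names (func-jim and rg-demo are assumptions); the roles and storage account come from the steps above:

# Grant the function's system-assigned identity the two queue roles on the storage account
$principalId = (Get-AzWebApp -ResourceGroupName 'rg-demo' -Name 'func-jim').Identity.PrincipalId
$scope       = (Get-AzStorageAccount -ResourceGroupName 'rg-demo' -Name 'sajimstorage').Id

foreach ($role in 'Storage Queue Data Reader', 'Storage Queue Data Message Processor') {
    New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName $role -Scope $scope
}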


    Something to note: if you navigate to Integration and select your trigger, the storage account connection will show as empty. This is normal.


    It is not as painful as it looks. I mean, maybe it was while I was trying to figure this all out, but hopefully this saves you some time! Cheers.

Azure Windows Security Baseline

I was designing a deployment around Azure Virtual Desktop utilizing Azure Active Directory (not AADDS or ADDS), and when checking a test deploy for compliance against the NIST 800-171 Azure Policy, it showed the Azure baseline was not being met. In a domain I wouldn’t worry, since group policy will fix this right up, but what about non-domain-joined machines? What about custom images? Yeah, I guess we could manually set everything and then image it, but I prefer a clean base and then applying configuration during my image build. Let’s take a look at how to hit this compliance checkbox.

I recalled that Microsoft released STIG templates and found the blog post Announcing Azure STIG solution templates to accelerate compliance for DoD – Azure Government (microsoft.com). I was hoping their efforts would make my life a little bit easier, but after a test deploy, I saw 33 items still not in compliance.

Looking at the workflow, it is how I’d ideally like my image process to look in my pipeline.

Deploy a baseline image, apply some scripts, and then I can generate a custom image to a shared gallery for use. I didn’t want to reinvent the wheel, so I started researching whether anyone had done this already. I found a repo https://github.com/Cloudneeti/os-harderning-scripts/ that looked promising, but it was a year old and I noticed some things incorrect in the script, such as incorrect registry paths, commented-out DSC snippets, etc. It did cover a good bulk of the settings, but it needed to be cleaned up and have things added.

The commented-out code was around user rights assignments. Now, the DSC module for user rights assignments is old and I haven’t seen a commit in there for years. Playing around, it seems that some settings cannot be set. I didn’t want to hack together stuff using secedit, so I found a neat script, https://blakedrumm.com/blog/set-and-check-user-rights-assignment/, to which I could just pass the required rights and move on. Everything worked except SeDenyRemoteInteractiveLogonRight: when the right doesn’t already exist in the exported config, the script couldn’t add it. So I just wrote the snippet below to add that last right.


# Export the current local security policy to a temp folder
$tempFolderPath = Join-Path $Env:Temp $(New-Guid)
New-Item -Type Directory -Path $tempFolderPath | Out-Null
secedit.exe /export /cfg "$tempFolderPath\security-policy.inf"

# Get the line number of the [Privilege Rights] section header
$line = Select-String -LiteralPath "$tempFolderPath\security-policy.inf" -Pattern "Privilege Rights" |
    Select-Object -ExpandProperty LineNumber

# Append the missing right under that header, denying remote interactive logon to Guests (S-1-5-32-546)
$fileContent = Get-Content "$tempFolderPath\security-policy.inf"
$fileContent[$line - 1] += "`nSeDenyRemoteInteractiveLogonRight = *S-1-5-32-546"
$fileContent | Out-File "$tempFolderPath\security-policy.inf" -Encoding unicode

# Apply the modified template to the local security database, then clean up
secedit.exe /configure /db c:\windows\security\local.sdb /cfg "$tempFolderPath\security-policy.inf"
Remove-Item -Force "$tempFolderPath\security-policy.inf" -Confirm:$false

After running the PowerShell DSC configuration and this script, the Azure baseline comes back fully compliant. I have tested this on Windows Server 2019 and Windows 10.

You can grab the files in my repo https://github.com/jrudley/azurewindowsbaseline

AKS Double Encryption

I have been living in a world of compliance these past few weeks, specifically NIST 800-171. Azure provides an initiative for NIST, and one of the checks is to make sure your disks are encrypted with both a platform-managed and a customer-managed key. I recently ran into a scenario where an application runs as a StatefulSet in Azure Kubernetes Service. Let’s talk a bit more about this and NIST.

The Azure Policy was reporting my disks as non-compliant because they only had a platform-managed key. Researching the AKS docs, I found an article on using a customer-managed key, but that still is not what I need, as I need double encryption to meet compliance. After some research in the Kubernetes SIGs repo, I found the Azure Disk CSI driver doc, and check it out:

It looks like this document was modified back in May to add support for this feature, so it’s fairly new. Upgrade the driver to 1.18 or above and double encryption support should be there.

To implement, create a new storage class that references your disk encryption set ID with double encryption.

kind: StorageClass
apiVersion: storage.k8s.io/v1  
metadata:
  name: byok-double-encryption
provisioner: disk.csi.azure.com 
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  skuname: Premium_LRS
  kind: managed
  diskEncryptionType: EncryptionAtRestWithPlatformAndCustomerKeys
  diskEncryptionSetID: "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/diskEncryptionSets/<dek-name>"

Apply the snippet above and reference the storage class in your deployment YAML to get double encryption (a quick usage sketch follows below). This will tick that NIST compliance checkbox for AKS disks.
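
As a usage sketch (the PVC name and size are just examples, and a StatefulSet would reference the class through volumeClaimTemplates instead of a standalone claim), you can bind a claim to that class and then confirm the backing disk’s encryption type:

# Create a claim against the double-encryption storage class
# (run from PowerShell with kubectl pointed at the cluster and az logged in)
@"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-byok
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: byok-double-encryption
  resources:
    requests:
      storage: 10Gi
"@ | kubectl apply -f -

# Once the claim is bound, the PV's volumeHandle is the managed disk's resource ID,
# so the disk should report EncryptionAtRestWithPlatformAndCustomerKeys
$pvName = kubectl get pvc data-byok -o jsonpath='{.spec.volumeName}'
$diskId = kubectl get pv $pvName -o jsonpath='{.spec.csi.volumeHandle}'
az disk show --ids $diskId --query encryption.type -o tsv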