Azure Windows Security Baseline

I was designing a deployment around Azure Virtual Desktop using Azure Active Directory (not AADDS or ADDS), and when checking a test deployment for compliance against the NIST 800-171 Azure Policy, it showed the Azure baseline was not being met. In a domain, I wouldn’t worry, since group policy would fix this right up, but what about machines that aren’t domain joined? What about custom images? Sure, we could manually set everything and then image it, but I prefer a clean base with configuration applied during my image build. Let’s take a look at how to hit this compliance checkbox.

I recalled that Microsoft released STIG templates and found the blog post Announcing Azure STIG solution templates to accelerate compliance for DoD – Azure Government (microsoft.com). I was hoping their efforts would make my life a little bit easier, but after a test deploy, I saw 33 items still not in compliance.

Looking at the workflow, it is ideally how I’d like my image process to look in my pipeline.

Deploy a baseline image, apply some scripts, and then generate a custom image to a shared gallery for use. I didn’t want to reinvent the wheel, so I researched whether anyone had done this already. I found a repo https://github.com/Cloudneeti/os-harderning-scripts/ that looked promising, but it was a year old and I noticed some things wrong with the script, such as incorrect registry paths and commented-out DSC snippets. It did a good bulk of the work, but it needed to be cleaned up and extended. The commented-out code was around user rights assignments. The DSC module for user rights assignments is old, and I haven’t seen a commit there in years. Playing around, it seems some settings cannot be set with it. I didn’t want to hack something together using secedit, so I found a neat script, https://blakedrumm.com/blog/set-and-check-user-rights-assignment/, where I could just pass in the required rights and move on. Everything worked except for SeDenyRemoteInteractiveLogonRight: when a right doesn’t exist in the exported config, the script can’t add it. So I wrote the snippet below to add that last right.


# Export the current local security policy to a temp folder
$tempFolderPath = Join-Path $Env:Temp $(New-Guid)
New-Item -Type Directory -Path $tempFolderPath | Out-Null
secedit.exe /export /cfg "$tempFolderPath\security-policy.inf"

# Find the line number of the [Privilege Rights] section header
$line = (Select-String -LiteralPath "$tempFolderPath\security-policy.inf" -Pattern "Privilege Rights").LineNumber

# Append the missing right directly under the section header
# *S-1-5-32-546 is the well-known SID for the BUILTIN\Guests group
$fileContent = Get-Content "$tempFolderPath\security-policy.inf"
$fileContent[$line - 1] += "`nSeDenyRemoteInteractiveLogonRight = *S-1-5-32-546"
$fileContent | Out-File "$tempFolderPath\security-policy.inf" -Encoding unicode

# Apply the updated policy and clean up
secedit.exe /configure /db c:\windows\security\local.sdb /cfg "$tempFolderPath\security-policy.inf"
Remove-Item -Force "$tempFolderPath\security-policy.inf" -Confirm:$false

After running the PowerShell DSC configuration and this script, the Azure baseline comes back fully compliant. I have tested this on Windows Server 2019 and Windows 10.
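
If you want to spot-check the result before imaging, a quick export works; a minimal sketch:

# export the effective policy and confirm the deny right is present
secedit.exe /export /cfg "$env:TEMP\verify.inf" | Out-Null
Select-String -Path "$env:TEMP\verify.inf" -Pattern 'SeDenyRemoteInteractiveLogonRight'
Remove-Item "$env:TEMP\verify.inf" -Force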

You can grab the files in my repo https://github.com/jrudley/azurewindowsbaseline

AKS Double Encryption

I have been living in a world of compliance these past few weeks, specifically NIST 800-171. Azure provides an initiative for NIST, and one of the checks is to make sure your disks are encrypted with both a platform-managed and a customer-managed key. I recently ran into a scenario with an application deployed as a StatefulSet in Azure Kubernetes Service. Let’s talk a bit more about this and NIST.

The Azure Policy flagged my disks as non-compliant because they were encrypted with just a platform-managed key. Researching the AKS docs, I found an article on using a customer-managed key, but that still isn’t what I need, since I need double encryption to meet compliance. After some digging in the Kubernetes SIGs repo, I found the Azure Disk CSI driver doc, and check it out:

It looks like this document was modified back in May to add support for this feature, so it’s fairly new. Upgrade the driver to 1.18 or above and double encryption support should be there.

To implement, create a new storage class that references your disk encryption set id with double encryption.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: byok-double-encryption
provisioner: disk.csi.azure.com
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  skuname: Premium_LRS
  kind: managed
  diskEncryptionType: EncryptionAtRestWithPlatformAndCustomerKeys
  diskEncryptionSetID: "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/diskEncryptionSets/<dek-name>"

Apply the snippet above and reference the storage class in your deployment YAML to get double encryption. This ticks that NIST compliance checkbox for AKS disks.
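
For reference, a PersistentVolumeClaim using this class could look like the sketch below (the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-byok-double
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: byok-double-encryption
  resources:
    requests:
      storage: 64Gi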

AAD Conditional Access What If bug

I wanted to do a quick post about a bug I discovered in my GCC High tenant. I was recently testing an access policy to enforce a terms of use prompt. I targeted the policy at a test group, and when using the What If tool, it kept showing that none of the users in the group were getting the policy applied.

I was going absolutely nuts trying to figure out what I did wrong configuring this policy. In disbelief, I tried logging in with the user against the specific cloud app, and sure enough, the TOS came up. I went back to the What If tool and it kept saying the policy would not be applied. I thought maybe it had something to do with the TOS and switched the CA policy over to MFA. Same issue 😦 The only thing left I could think of was the group. I assigned the user directly on the CA policy instead of the group and bingo, the What If tool worked perfectly.

I started googling and searching GitHub for this specific issue, but could not find anything. A quick CSS ticket and some emails back and forth confirmed this is a bug and will be fixed, but with no hard ETA other than this year. So, if you want to use What If, make sure to assign the specific user and not depend on the group for your testing. I hope Google indexes this page to save you the frustration and wasted time it cost me 🙂

Missing Microsoft Applications in GCC High

An awesome feature that brings some sanity to Azure VM authentication and authorization is the Microsoft Azure Windows and Linux Virtual Machine Sign-in functionality. You can quickly test this by selecting the Login with Azure AD checkbox during provisioning.

I wanted to add MFA and user sign-in risk checks using conditional access before a user can actually log into the VM. When setting up my policy, I could not find the Microsoft Azure Windows Virtual Machine Sign-in or Microsoft Azure Linux Virtual Machine Sign-in apps. I was puzzled, so I quickly checked my commercial tenant and sure enough they existed. I initially thought it was one of those not-in-gov-cloud, commercial-only situations. I opened a support ticket, and they came back noting they have seen Microsoft applications missing in GCC High tenants. The quick fix is to manually add the missing applications. Since they confirmed the application IDs are the same across clouds, we can just create the service principals ourselves.

New-AzureADServicePrincipal -AppId '372140e0-b3b7-4226-8ef9-d57986796201' #Microsoft Azure Windows Virtual Machine Sign-in
New-AzureADServicePrincipal -AppId 'ce6ff14a-7fdc-4685-bbe0-f6afdfcfa8e0' #Microsoft Azure Linux Virtual Machine Sign-In

After running those PowerShell cmdlets in my cloud shell, I can now successfully see the apps during conditional access policy creation.
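
If you want to confirm the service principals landed, a quick check with the same module works; a minimal sketch:

# verify the service principals now exist in the tenant
Get-AzureADServicePrincipal -Filter "appId eq '372140e0-b3b7-4226-8ef9-d57986796201'"
Get-AzureADServicePrincipal -Filter "appId eq 'ce6ff14a-7fdc-4685-bbe0-f6afdfcfa8e0'"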

Azure VM Applications

Azure has Template Specs, which let you create a self-service infrastructure-as-code model for your end users. You can use RBAC, and they can deploy versioned templates. Microsoft introduced VM Applications, which lets your end users do something very similar to Template Specs, but with applications installed inside the VM. Let’s look at a quick demo and some things to watch out for.

Assuming you have an Azure compute gallery deployed, you need to create an application then a version of that application. I pasted a snippet below to get us started.

# assumes $rgName, $galleryName and $location are already set, and $sasVscode is a
# readable SAS URL to the uploaded handler script blob
$applicationName = 'visualStudioCode-linux'
New-AzGalleryApplication `
  -ResourceGroupName $rgName `
  -GalleryName $galleryName `
  -Location $location `
  -Name $applicationName `
  -SupportedOSType Linux `
  -Description "Installs Visual Studio Code on Linux."

$version = '1.0.0'
New-AzGalleryApplicationVersion `
   -ResourceGroupName $rgName `
   -GalleryName $galleryName `
   -GalleryApplicationName $applicationName `
   -Name $version `
   -PackageFileLink $sasVscode `
   -Location $location `
   -Install "mv visualStudioCode-linux vscode.sh && bash vscode.sh install" `
   -Remove "bash vscode.sh remove" `
   -Update "mv visualStudioCode-linux vscode.sh && bash vscode.sh update"

The cmdlet I want to focus on is New-AzGalleryApplicationVersion. The PackageFileLink parameter is required. Not only is it required, it must be a readable storage page blob, meaning you cannot use a raw GitHub link to a file. I tried using a public repo for an install script, but when running this cmdlet, it just hangs. I will get to a workaround for that, but let’s continue. The Install and Remove parameters are required; Update is optional. With that, I thought a simple framework could be used.

if [ "$1" == "install" ];
then
    echo "Installing...";
    <code>
elif [ "$1" == "remove" ];
then
    echo "Removing...";
    <code>
elif [ "$1" == "update" ];
then
    echo "Updating...";
    <code>
else
    echo "Incorrect argument passed. Please use install, remove or update";
fi

Now I can easily call the script with an install, remove, or update argument. Reading about VS Code, we can use snap to handle our application installation.

if [ "$1" == "install" ];
then
    echo "Installing...";
    sudo snap install --classic code
elif [ "$1" == "remove" ];
then
    echo "Removing...";
    sudo snap remove code
elif [ "$1" == "update" ];
then
    echo "Updating...";
    sudo snap refresh --classic code
else
    echo "Incorrect argument passed. Please use install, remove or update";
fi
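
If you want to test the handler locally before publishing it to the gallery, it is just a matter of calling each argument; a quick sketch:

# local smoke test of the handler script before wiring it into the gallery
bash vscode.sh install
bash vscode.sh update
bash vscode.sh remove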

Now, back to where I said there is a workaround for referencing a public repo. This is partially true. What you can do is reference a valid URL to a dummy file in the PackageFileLink parameter, then execute your commands directly in the Install, Update, and Remove parameters.

   -Install "apt-get update && apt-get install ubuntu-gnome-desktop xrdp gnome-shell-extensions -y && reboot" `
   -Remove "apt-get --purge remove ubuntu-gnome-desktop xrdp gnome-shell-extensions -y && reboot" 
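
Putting the workaround together, the version call might look like the sketch below; $sasDummy (a readable page blob URL whose contents are ignored) and the application name are assumptions for illustration:

# sketch: splatting the same cmdlet with a dummy package file;
# the Install/Remove command strings do the real work
$params = @{
    ResourceGroupName      = $rgName
    GalleryName            = $galleryName
    GalleryApplicationName = 'gnomeDesktop-linux'
    Name                   = '1.0.0'
    PackageFileLink        = $sasDummy
    Location               = $location
    Install                = 'apt-get update && apt-get install ubuntu-gnome-desktop xrdp gnome-shell-extensions -y && reboot'
    Remove                 = 'apt-get --purge remove ubuntu-gnome-desktop xrdp gnome-shell-extensions -y && reboot'
}
New-AzGalleryApplicationVersion @params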

A couple of other things to note. Once a user starts an application install, it executes fast, but the status reporting could use some work. If I am an end user in the portal and click install for a published application, I have to click back into the extension to see the status; it isn’t shown in the VM Applications tab. Development is also somewhat painful: if the Install, Update, or Remove command fails, it seems to go into an endless loop that is a giant pain to stop. I couldn’t figure it out. The documentation says to uninstall the extension, which I did, but the application stays on the VM. It is still in preview, so I can’t complain too much. Lastly, unlike Template Specs, which let you select a spec in another subscription in the portal, this does not exist for VM Apps. You need to make sure a compute gallery is in the same subscription for it to show for the end user. This limitation does not exist when using the Azure CLI, REST API, or Az PowerShell, as long as you have the correct permissions on the compute gallery in the other subscription.

I think VM Apps has a lot of potential to make the end user experience better. Think of the apps typically installed for developers to test with. We can now ensure approved and validated applications are installed, which I think is a great win!

Guest Configuration Extension Broke in Azure Gov for RHEL 8.x+

UPDATE 4/11/2022 This has been fixed!

UPDATE 4/3/2022 Still broken… waiting on the product group to fix it.

UPDATE 3/4/2022 The Microsoft product group will be pushing a fix out to Azure Gov in two weeks. I asked what the cause was, but nothing yet.

One of the great features of Azure Policy is the capability to audit OS settings for security baselines and compliance checking. I was deploying RHEL 8.4 and noticed the Guest Assignment was always hung in the pending state. I had no issues with Ubuntu, so it had to be something happening on the RHEL VM.

I navigated to /var/lib and saw the GuestConfig folder created, but inside, it was empty. Hrm, this should be populated with folders and MOF files.

[root@rhel84 GuestConfig]# pwd
/var/lib/GuestConfig
[root@rhel84 GuestConfig]# ls -al
total 4
drwxr--r--.  2 root root    6 Feb 26 22:13 .
drwxr-xr-x. 41 root root 4096 Feb 26 22:13 ..

The next step was to tail the messages log to see if anything could pinpoint what was actually happening.

[root@rhel84 GuestConfig]# tail -f /var/log/messages | grep -i GuestConfiguration
Feb 26 22:27:36 rhel84 systemd[7442]: gcd.service: Failed at step EXEC spawning /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.25.5/GCAgent/GC/gc_linux_service: Permission denied
Feb 26 22:27:46 rhel84 systemd[7458]: gcd.service: Failed at step EXEC spawning /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.25.5/GCAgent/GC/gc_linux_service: Permission denied

Alright, a permission denied error. It was something to start looking into, but I was confused about why it was happening. I headed over to Azure commercial and spun up a RHEL 8.4 VM with the same Azure Policy executing my security baseline. To my surprise, everything worked just fine. /var/lib/GuestConfig contained the Configuration folder with MOF files, and the Guest Assignment showed NonCompliant, so I knew the machinery itself was working there. I did notice the Guest Configuration extension in commercial is 1.26.24 while Gov is 1.25.5. I tried deploying 1.26.24 with auto-upgrade disabled in Gov, but got the same error.

After some research, I set SELinux to permissive mode, and instantly the Configuration folder was created and started pulling the MOF files down. OK, now I was really puzzled. Working with Azure support, they were able to reproduce this same issue in Gov, but not in commercial. I was shocked no other cases had been opened. I am not sure when this problem started, but it means security baselines on RHEL 8.x+ are not working.
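
For reference, the permissive-mode test was nothing fancy; note that setenforce does not persist across reboots:

# flip SELinux to permissive and watch the agent come alive
setenforce 0
getenforce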

While I waited for Microsoft to investigate why this is happening, I tried to find a workaround. Knowing SELinux was causing the issue, I thought I could just create a policy allowing the execution of gc_linux_service.

I tested first by making sure SELinux was set to Enforcing, then used chcon to set the SELinux context:

[root@rhel84 GuestConfig]# getenforce
Enforcing
chcon -t bin_t /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.25.5/GCAgent/GC/gc_linux_service

We’re all good, no errors in the messages log. Since this could be reverted by a restorecon command being run later, I made it permanent in the SELinux policy by running:

semanage fcontext -a -t bin_t /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.25.5/GCAgent/GC/gc_linux_service
restorecon -v /var/lib/waagent/Microsoft.GuestConfiguration.ConfigurationforLinux-1.25.5/GCAgent/GC/gc_linux_service
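
An alternative I did not end up using would be to generate a local policy module from the audit denials instead of relabeling the binary; a sketch using the standard policycoreutils tooling (audit2allow comes from policycoreutils-python-utils on RHEL 8):

# build and load a local SELinux policy module from the recorded denials
ausearch -c 'gc_linux_service' --raw | audit2allow -M gc_linux_local
semodule -i gc_linux_local.pp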

I will update my post once Microsoft comes back with the reason this is only happening in Azure Gov and their proposed solution. For now, I’d not depend on the Guest Configuration extension to perform your compliance checking for RHEL 8.x until a fix has been pushed.

Azure CycleCloud Slurm Scheduler CentOS Fix

Azure CycleCloud is one of those products that shines but is slow to get the care it needs. I was deploying a Slurm scheduler and left the defaults for the scheduler, HPC, and HTC operating system selection. The default is CentOS 8, which has been EOL since Dec 31st, 2021. Ubuntu is an option, but if you want to keep using CentOS 8, keep reading.

When starting the cluster, it eventually errored out trying to install the perl-Switch RPM. It looks like this package has moved.

The great thing about CycleCloud is how flexible it is. Edit the cluster, select advanced settings, and paste the following into the cloud-init section to use a valid repo.

#cloud-config
runcmd:
    - cd /tmp
    - wget https://repo.almalinux.org/almalinux/8.4/PowerTools/x86_64/os/Packages/perl-Switch-2.17-10.el8.noarch.rpm
    - yum -y install perl-Switch-2.17-10.el8.noarch.rpm

Success! The scheduler created 🙂
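
If you want to double-check the fix took on the scheduler node, a one-liner will do:

# confirm the RPM landed
rpm -q perl-Switch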

Now, I am sure the question you’re asking is “why has Microsoft not updated CycleCloud?” I have no idea. Competing priorities? Hopefully the next release will fix this. Until then, use Ubuntu in the drop-down, or create your own CycleCloud template for a scheduler and select it during deployment with whatever OS image you prefer.

Azure Bastion Standard SKU Autoscale?

The Standard SKU of Azure Bastion fixed a lot of the pain points of the Basic SKU, like adding multiple instances and setting the port to use for Linux. The one thing I did not see was autoscale. The Microsoft docs state: “Each instance can support 10 concurrent RDP connections and 50 concurrent SSH connections. The number of connections per instance depends on what actions you are taking when connected to the client VM. For example, if you are doing something data intensive, it creates a larger load for the instance to process. Once the concurrent sessions are exceeded, an additional scale unit (instance) is required.” Imagine a hub-and-spoke topology with a Bastion sitting in our hub. We would need to set up monitoring around concurrent sessions and alert when the session count got close to the limit, but why not autoscale it?

I was curious why this setting was missing, so I spun up a test environment with 2 RDP sessions. Remember that the default deployment has 2 instances. Looking at the session count metric, we can see the following:

Now, I was totally confused why it kept showing 1 to .44ish every few minutes. I understand the 1 for the average, since it’s 2 sessions across 2 instances, but I couldn’t understand why it kept dipping.

Here is the graph using sum as my aggregation. Same thing! At this point, I tried splitting the graph on instance:

If I had to guess, Bastion runs internally on a scale set, and that 0 on vm000000 was screwing my metric count up! Now that I understood the metrics, how could I scale this automatically? I could set up an alert rule that fires a webhook when the session count is above X or below Y, but I didn’t feel comfortable with these metrics, since those 0 readings could trigger scaling actions off bad data and I wouldn’t know. I did some research and found the getActiveSessions API call, https://docs.microsoft.com/en-us/rest/api/virtualnetwork/get-active-sessions/get-active-sessions, which returns my session count. This is ideally what I wanted, so I went down this path. I figured I could create an Azure Function or runbook that runs every so often and scales the Bastion out or in by one based on a switch statement.

# assumes $authHeader contains a valid Bearer token and the $bastion* variables are set
$restUri = "https://management.azure.com/subscriptions/$((Get-AzContext).Subscription.Id)/resourceGroups/$bastionResourceGroupName/providers/Microsoft.Network/bastionHosts/$bastionHostName/getActiveSessions?api-version=2021-03-01"
$getStatus = Invoke-WebRequest -UseBasicParsing -Uri $restUri -Headers $authHeader -Method Post

# getActiveSessions is async: poll the operation results endpoint until it returns data
$asyncUri = "https://management.azure.com/subscriptions/$((Get-AzContext).Subscription.Id)/providers/Microsoft.Network/locations/$bastionResourceGroupLocation/operationResults/$($getStatus.Headers['x-ms-request-id'])?api-version=2020-11-01"
$sessions = Invoke-RestMethod -Uri $asyncUri -Headers $authHeader
while ($sessions -eq 'null' ) {
    Start-Sleep -s 2
    $sessions = Invoke-RestMethod -Uri $asyncUri -Headers $authHeader
}

Write-Output "Current session count is: $($sessions.count)"

The docs made it seem like this was a synchronous call, but it is actually async. You need to query the operation results endpoint to pull back the session count. For more information, check out this article: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/async-operations

Now that I have my session count, I can use a simple switch statement to set my Bastion instance count. I started with the numbers below:

$bastionObj = Get-AzBastion -ResourceGroupName $bastionResourceGroupName -Name $bastionHostName
switch ($sessions.count)
{
    #2 instances by default. Each can hold up to 12 sessions
    {0..22 -contains $_} {Set-AzBastion -InputObject $bastionObj -Sku "Standard" -ScaleUnit 2 -Force}
    {23..34 -contains $_} {Set-AzBastion -InputObject $bastionObj -Sku "Standard" -ScaleUnit 3 -Force}
    {35..45 -contains $_} {Set-AzBastion -InputObject $bastionObj -Sku "Standard" -ScaleUnit 4 -Force}
    {46..58 -contains $_} {Set-AzBastion -InputObject $bastionObj -Sku "Standard" -ScaleUnit 5 -Force}
    Default {Set-AzBastion -InputObject $bastionObj -Sku "Standard" -ScaleUnit 2 -Force}
}

When I started to test the autoscaling, I noticed one big problem! Setting the scale unit count disconnects all sessions. That is a horrible end user experience. I’m thinking this is why Microsoft did not implement autoscale 🙂

Well, the next best scenario is resizing at the end of the working day to keep costs low. Add the code to authenticate into Azure via runbook or function and set it to run on a schedule: maybe at 8pm we resize down based on the user session count, and before the work day starts we resize back up to an instance count that fits our requirements. I’d imagine Microsoft will implement autoscale eventually, but they need to figure out how to move existing sessions gracefully to another Bastion host first.
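
Here is a minimal sketch of that evening resize, reusing $sessions from the earlier snippet; the divisor assumes roughly 10 RDP sessions per instance per the docs, and remember that resizing drops active sessions:

# sketch: scale the Bastion to fit current load (assumes $sessions from the snippet above)
# note: changing the scale unit count disconnects existing sessions
$bastionObj = Get-AzBastion -ResourceGroupName $bastionResourceGroupName -Name $bastionHostName
$target = [int][math]::Max(2, [math]::Ceiling($sessions.count / 10))   # 2 is the Standard SKU minimum
Set-AzBastion -InputObject $bastionObj -Sku 'Standard' -ScaleUnit $target -Force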

Can’t add an Azure budget after a new subscription?

Create a budget automatically after provisioning a new subscription

I am sure you have run into the situation where you create a new subscription and want to add an Azure budget to help monitor and control spend. As you may know, it can take some time for the subscription to sync with the EA portal. Here is a snippet from https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/tutorial-acm-create-budgets: “If you have a new subscription, you can’t immediately create a budget or use other Cost Management features. It might take up to 48 hours before you can use all Cost Management features.” I don’t want to wait around or try to remember to add a budget the next day. Let’s use Azure tools to automatically create the budget for us.

When I first read that statement, I was thinking about how to keep track of the new subscription details and have the budget created automatically. I thought, why not use an Azure storage queue? A runbook creates the subscription and pops a message onto the queue, and another job tries every so often to create the budget. If successful, it removes the message from the queue; if not, the message stays on the queue for a retry a few hours later. Let’s take a look at the relevant code below.


$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccountName
$ctx = $storageAccount.Context

# Retrieve the specific queue
$queue = Get-AzStorageQueue -Name $queueName -Context $ctx

# Create a new message using a constructor of the CloudQueueMessage class
$queueMessage = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("$subName;$ownerupn")

# Add the message to the queue
$queue.CloudQueue.AddMessage($queueMessage,$null)

The code above is pretty self-explanatory: get the queue and push a message containing the subscription name and owner. We can then create another runbook that runs every few hours to process messages on the queue.

 
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccountName

$ctx = $storageAccount.Context
$invisibleTimeout = [System.TimeSpan]::FromSeconds(60)
$queue = Get-AzStorageQueue -Name $queueName -Context $ctx

if ($queue.QueueProperties.ApproximateMessagesCount -gt 0) {

    # Pop a message; it stays invisible on the queue for 60 seconds unless deleted
    $queueMessage = $queue.CloudQueue.GetMessageAsync($invisibleTimeout, $null, $null)
    $msg = $queueMessage.Result.AsString
    Select-AzSubscription $msg.Split(';')[0]

    New-AzConsumptionBudget -ErrorAction SilentlyContinue -ErrorVariable cmdletError -Amount 1000 -Name "$($msg.Split(';')[0])-budget" -Category Cost -TimeGrain Monthly -StartDate (Get-Date -Format yyyy-MM).ToString() -ContactEmail 'IT@contoso.com', $($msg.Split(';')[1]) -NotificationKey Key1 -NotificationThreshold 90 -NotificationEnabled

    if ($cmdletError) {
        $cmdletError
        Write-Warning "Subscription $($msg.Split(';')[0]) might still be provisioning to the EA portal. Will try again in a couple of hours..."
    }
    else {
        # Budget created; remove the message for good
        $queue.CloudQueue.DeleteMessageAsync($queueMessage.Result.Id, $queueMessage.Result.popReceipt)
    }
}

The runbook checks whether the queue has a message, processes it, selects into the newly created Azure subscription, and creates a budget. If the cmdlet errors, it writes a warning and leaves the message on the queue to try again later. If the budget is created, we can safely delete the message.

It’s simple and does the job. There are 10 ways to solve a challenge and this is just one of them. Hope it helps!

Reactivate an Azure Subscription via API – Gov Cloud Edition

I recently had to reactivate an Azure subscription that was cancelled, but I noticed the instructions at https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/subscription-disabled#the-subscription-was-accidentally-canceled do not work in Azure Gov cloud. There is no button to reactivate, so I was forced to submit a ticket to Microsoft, and they fixed me up. Typically, if a subscription was cancelled, it was done by mistake and the end user needs access ASAP. I didn’t want to wait hours on a ticket in the future, so I started figuring out how to do this self-service style in Azure Gov.

I started researching the Azure CLI and PowerShell cmdlets, but nothing came up. As a last resort, I looked at the API documentation and, to my surprise, found the POST call to enable a subscription: https://docs.microsoft.com/en-us/rest/api/subscription/2019-03-01-preview/subscriptions/enable If you noticed, I linked to a preview API version; the latest version, 2020-09-01, was not working against management.usgovcloudapi.net. Code snippet below:

# Build a Bearer token from the current Az context
$azContext = Get-AzContext
$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
$token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)

$authHeader = @{
    'Content-Type'='application/json'
    'Authorization'='Bearer ' + $token.AccessToken
}

# commercial uri: management.azure.com
# gov uri: management.usgovcloudapi.net
$restUri = "https://management.usgovcloudapi.net/subscriptions/$($subscriptionId)/providers/Microsoft.Subscription/enable?api-version=2019-10-01-preview"
Invoke-RestMethod -Uri $restUri -Method POST -Headers $authHeader
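
To confirm it worked, the subscription state should flip back shortly after the call; a quick check:

# the state can take a few minutes to reflect the change
Get-AzSubscription -SubscriptionId $subscriptionId | Select-Object Name, State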

In larger organizations, this code could be wired into ServiceNow automation, Azure Automation, Azure Functions, etc., to get the client up and running faster. I hope this helps you on your Azure journey. 🙂