
Updating Azure VM Data Disk Sizes


Summary

Hope you are doing great. This time I came up with a simple Azure DevOps solution for updating VM disk sizes. The project I'm currently on has a large number of virtual machines, so we needed a way to update VM disks with minimal administrative effort and change.

In this scenario, we have used:

  1. Bicep as the IaC language
  2. Azure DevOps pipelines
  3. YAML variable files

Here is the high-level workflow for a particular VM in the solution:

[Diagram: the VM build flow and the disk update flow, both driven by the same Bicep module and variables]

The YAML pipeline file contains two workflows: the first is the VM build pipeline, and the second is the disk update.

If you focus on the green and purple arrows, I'm essentially modifying the same Bicep module file and passing the same set of variables. You may wonder why we cannot use the same flow to build the VM and update the disks later. That's because the VM needs to be in a shutdown (deallocated) state for disk updates, while other components in the first flow, especially the extension modules, need the VM up and running.

YAML Variable Files

I decided to use YAML variable files for my Bicep modules mainly for one reason: a given parameter needs to be defined only once. By referencing the variable files in the YAML pipeline file, we can reuse the variable values anywhere.
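For context, variable files are pulled into the pipeline as variable templates; a minimal sketch, assuming an illustrative file path:

variables:
  - template: variables/ap-app-dmz.yml   # illustrative path; defines this VM's variables once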

So, our related variable is the disk definition, shown below.





Disk definition details are separated using pipes, and multiple disks are separated using commas.
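For illustration, such a variable file entry might look like this (the variable name apAppDMZVmDisks is the one the pipeline reads later; the values are examples following the format above):

# variables/ap-app-dmz.yml (illustrative)
variables:
  # diskType|diskSizeGB|caching|createOption, one block per disk
  apAppDMZVmDisks: 'Premium_LRS|128|ReadOnly|Empty,StandardSSD_LRS|256|None|Empty'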




PowerShell Task

One of my colleagues came up with the PowerShell script below to convert this YAML variable to JSON format. We call it as part of the pipeline:


          - powershell: |
              $dataDiskDefinitions = scripts\Init-DiskArray.ps1 -vmDisks "$(apAppDMZVmDisks)"
              Write-Host "##vso[task.setvariable variable=dataDiskDefinitions]$dataDiskDefinitions"

And here is the script itself (scripts\Init-DiskArray.ps1):



param (
    # format: comma-separated list of: diskType|diskSizeGB|caching|createOption
    # ex: 'Premium_LRS|128|ReadOnly|Empty,Premium_LRS|3128|ReadOnly|Empty'
    [string] $vmDisks
)

$diskArray = New-Object System.Collections.ArrayList

foreach ($disk in $vmDisks.Split(',')) {
    $diskConfig = $disk.Split('|')
    [void]$diskArray.Add(
        [PSCustomObject]@{
            diskType     = $diskConfig[0]
            diskSize     = [int]$diskConfig[1]   # cast so diskSizeGB lands in the JSON as a number
            caching      = $diskConfig[2]
            createOption = $diskConfig[3]
        }
    )
}

# Compress to a single line, then escape the quotes so the JSON survives
# the ##vso[task.setvariable] round trip and the az CLI parameter.
$json = ConvertTo-Json -Compress -InputObject @($diskArray)
$json = $json -replace "`t|`n|`r", '' -replace '"', '\"'

return $json
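Called locally with an illustrative input, the script emits compressed JSON with the quotes pre-escaped, which is what lets the value survive the pipeline variable round trip:

PS> scripts\Init-DiskArray.ps1 -vmDisks 'Premium_LRS|128|ReadOnly|Empty'
[{\"diskType\":\"Premium_LRS\",\"diskSize\":128,\"caching\":\"ReadOnly\",\"createOption\":\"Empty\"}]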


YAML Pipeline

The YAML pipeline contains both flows, separated by a runtime parameter. When running the pipeline, we specify whether the run is a disk update or not. If it is, only a dedicated deployment job runs, which carries out the steps below (a sketch of the parameter follows the list):
  1. Shut down the VM
  2. Update the disks using the Bicep template
  3. Start the VM back up
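The switch itself can be declared as a runtime parameter; a minimal sketch, assuming a boolean (the name matches the condition used in the deployment job below):

parameters:
  - name: isVMDiskUpdate
    displayName: 'Run the disk update flow'
    type: boolean
    default: false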

  - deployment: 'Deploy_VM_Disk_Updates'
    displayName: 'Deploy_VM_Disk_Updates'
    environment: Azure-IAC
    condition: eq('${{ parameters.isVMDiskUpdate }}', 'true')
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self

          - powershell: |
              $dataDiskDefinitions = scripts\Init-DiskArray.ps1 -vmDisks "$(apAppDMZVmDisks)"
              Write-Host "##vso[task.setvariable variable=dataDiskDefinitions]$dataDiskDefinitions"

              # $tags = scripts\Init-Tags.ps1 -tags "$(apAppDmzTags)"
              # Write-Host "##vso[task.setvariable variable=tags]$tags"
     
          - task: AzureCLI@2
            inputs:
              azureSubscription: ${{ variables.azureServiceConnection }}
              scriptType: ps
              scriptLocation: inlineScript
              inlineScript: |
                az --version
                az account set -s ${{ variables.apSubscriptionId }}
                az vm stop --name $(apprefix) --resource-group $(apRgName)
                az vm deallocate --name $(apprefix) --resource-group $(apRgName)

                az deployment group create -g $(apRgName) `
                --template-file 'bicep/modules/v2/virtual-machine/virtual-machine-datadisk-update.bicep' `
                --parameters `
                vmNameSuffix=$(apprefix) `
                dataDisksDefinition='$(dataDiskDefinitions)'

                az vm start --name $(apprefix) --resource-group $(apRgName)



Bicep Module Template

The next thing is the Bicep module. The key highlight is that I'm using the same Bicep module to update the disks, passing the same set of parameters as the main build Bicep file. One thing worth noting: each disk is named by its position in the array (vmNameSuffix_datadisk_index in the resource below), so the order of entries in the variable must stay stable for an update to resize the existing disks instead of creating new ones.

Below is the disk module:

@description('Virtual machine name. Do not include numerical identifier.')
@maxLength(14)
param vmNameSuffix string

@description('Virtual machine location.')
param location string = resourceGroup().location


@description('Array of objects defining data disks, including diskType and size')
@metadata({
  note: 'Sample input'
  dataDisksDefinition: [
    {
      diskType: 'StandardSSD_LRS'
      diskSize: 64
      caching: 'None'
      createOption: 'Empty'
    }
  ]
})
param dataDisksDefinition array


resource dataDisk 'Microsoft.Compute/disks@2020-12-01' = [for (item, j) in dataDisksDefinition: {
  name: '${vmNameSuffix}_datadisk_${j}'
  location: location
  properties: {
    creationData: {
      createOption: item.createOption
    }
    diskSizeGB: item.diskSize
  }
  sku: {
    name: item.diskType
  }
}]


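As an optional check (a sketch, not part of the original pipeline), the same template and parameters can be previewed with the what-if operation before the real deployment; the placeholders stand in for the actual values:

az deployment group what-if -g <resource-group> `
  --template-file 'bicep/modules/v2/virtual-machine/virtual-machine-datadisk-update.bicep' `
  --parameters vmNameSuffix=<vm-prefix> dataDisksDefinition='<escaped-json>'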



Conclusion

The reasons for this approach are:

I needed a way to update VM disks using the same variables I used for the VM build and other updates; otherwise, I wouldn't be able to run the build/update workflow after a disk change without manual modifications. Now I can run the VM build pipeline at any point in time after a disk update. As you can see, this becomes pretty much seamless because the same variables drive both workflows. It also means we can read the VM configuration straight out of the variable files at any given point in time.

I also wanted a way to do this through a pipeline, which is good news for the administrators too: they only need to update a single location to change the disk configuration (less administrative effort).

As always, there may be many ways to do this, but I think this particular method suits this environment and scenario best.






Howdy Folks, I was working on an application modernization project. And there was a requirement to migrate application deployments from one project to another in Azure DevOps. deployment pipelines were heavily dependent on variable groups. So, we wanted to migrate these variables group to the new project. Couldn't find any solutions in internet for this, so came up with the below scripts. You can grab the scripts from the below GitHub URL. DaniduWeerasinghe911/Migrate-Azure-DevOps-Variable-Groups: This Repo Include PowerShell Scripts relating to Migrating Azure DevOps Variable Groups (github.com) Azure DevOps Variable Groups Azure DevOps Variable Groups are a way to store and manage sets of variables that can be used across multiple pipelines in Azure DevOps. These variables can include secrets, connection strings, and other sensitive information that is needed for builds and releases. Variable Groups provide a centralized way to manage these variables and ensure that they are cons...