When you use azure-cli (2.x), you know it uses Python 2 by default. However, Python 2.7 will not be maintained past January 1, 2020, which means Python 2 will retire soon. You can find some details and a countdown for this special day here: https://pythonclock.org/
Don't worry, Python 3 is here with the standard libraries and higher speed. I am sure you will like it. Now you may be wondering how you can rewrite your existing Python 2 code. Luckily there are a lot of useful tools for this task:
This spring I wrote a post about CI/CD with Git, Azure and Jenkins, where I showed you an easy configuration of a CI/CD process. That post contained a chapter where we created a Service Principal in Azure for automation. Since then I have received many requests from your side to make a demo about Service Principal creation, because it would be useful for you.
Today is a milestone because I will provide you a "real-life" solution for a "real-life" scenario with Ansible in Azure. Why is it so important? Since I started to learn Ansible I have found several examples for different scenarios, but as far as I can tell nobody has provided a really good solution for the situation when you would like to deploy a multi-NIC environment to Azure with Ansible. Therefore I did it, and I would like to share it with you.
A multi-NIC environment is an architecture where you can manage and use your services in a secure way.
Azure architecture for this solution
As you can see there is another ingredient in this architecture. The Virtual Network and the NSGs are in a separate resource group inside your subscription. Why? Because they are "shared" resources, and in this way we can use them for different services. Additionally, our architecture stays easy to understand and manage.
Virtual Machines in this scenario
According to the diagram above, we will create a simple architecture with the following VMs and roles:
Web servers: 2
DB server: 1
Notes:
Web servers have 2 NICs
DB server has only 1 NIC (in BackEnd subnet)
DB server does not have Public IP
Ansible package
This architecture is easily deployed with Ansible. Nevertheless, you have to be sure you use the right version of Ansible: although Ansible has supported Azure since version 2.4, most of the required functionality is quite new. The two main features became available thanks to my requests, because I was facing some issues during the development of the solution. You can find these bugs here:
Once you have installed the right package on your computer, you can pull the required code from git (201_multi_nic_vm).
# Navigate to git directory
cd /data/git
# Clone azansible from git
git clone https://github.com/the1bit/azansible.git
# Go to 201_multi_nic_vm solution directory
cd 201_multi_nic_vm
Configure azansible
Before you start the deployment you have to prepare it.
This week I would like to show you my latest automation solution for Azure, which is able to start a VM in Azure.
Everybody knows the automatic VM shutdown feature (Microsoft.DevTestLab/schedules) in Azure. It debuted in 2016. I love to use it for Azure developer servers because it saves cost and time for me. Nevertheless, there is a small gap here. Why can't Azure start my developer machine when I arrive at the office?
Now I have made a "fix" for this gap.
My Azure VM Manager solution helps you to start your VM in Azure at the time you schedule. So when you arrive at your workplace, your VM is up and running every time.
Prerequisites
At the moment v18.6.0 supports only Linux machines; specifically, I have only tested it on CentOS 7.
{
    "vmName": "<name of your vm>",
    "vmResourceGroup": "<vm resource group>",
    "azure": {
        "cloudName": "AzureCloud",
        "clientID": "<Service Principal ID>",
        "clientSecret": "<Service Principal Secret>",
        "tenant": "<Tenant ID>",
        "subscriptionID": "<Subscription ID>"
    }
}
Save configuration file
Configure crontab according to your update requirement
# Edit crontab settings
vim /etc/crontab
### Configure to start your vm at 9AM every weekdays
0 9 * * mon,tue,wed,thu,fri root cd /root/scripts/azvmmanager;bash azvmmanager.sh;
Wait until the scheduled time, then check the execution logs in the /var/log/azvmmanager directory
less /var/log/azvmmanager/azvmmanager20180614090001.log
Part of the log:
Thu Jun 14 09:00:29 CEST 2018 : # Login success
Thu Jun 14 09:00:29 CEST 2018 : # Set default subscription
Thu Jun 14 09:00:38 CEST 2018 : # Default subscription has been set
Thu Jun 14 09:00:38 CEST 2018 : # Start VM: xxxxxxxxx
Let's check your VM status in Azure. You can see it is up and running…
Please do not hesitate to contact me if you have any questions or feedback about this solution or Azure.
In the world of clouds there are some home servers and on-premise servers which work hard on their daily tasks for their owners. The people who own these machines often try to reach them through the internet. Because internet providers usually assign dynamic IPs to their customers, this is sometimes a real challenge. Luckily there are several good and free dynamic DNS sites where we can register our home server, so we can reach it by a DNS name through the Internet. Here is a quite fresh list of the most popular ones: 17 Popular Sites Like No-ip
I used No-ip, but I did not like the 30-day confirmation of my host there. I know, this is not a big deal. Additionally, I am interested in Azure, so the solution I would like to show you is a simple step on this path.
I decided to make an Azure-based solution which can replace the No-ip client on my home server. It is now ready and stable enough for "PROD" usage.
And now… I would like to introduce an alternative for dynamic DNS which works with an Azure DNS zone. Sounds good? Let's see…
This solution helps you to update your home server's public IP dynamically. It is not 100% free: the monthly cost in case of a "Pay-As-You-Go" subscription is about 1 EUR/month. Additionally, you have to register a domain which you can use in Azure (you can do this in Azure as well).
Configure crontab according to your update requirement
# Edit crontab settings
vim /etc/crontab
### Configure to execute at 7AM and 7PM every day
0 7 * * * root cd /root/scripts/azdns;bash azdns.sh;
0 19 * * * root cd /root/scripts/azdns;bash azdns.sh;
Wait until the scheduled time, then check the execution logs in the /var/log/azdns directory
less /var/log/azdns/azdns20180614070001.log
Part of the log file:
...
Thu Jun 14 07:00:29 CEST 2018 : # Login success
Thu Jun 14 07:00:29 CEST 2018 : # Set default subscription
Thu Jun 14 07:00:38 CEST 2018 : # Default subscription has been set
Thu Jun 14 07:00:38 CEST 2018 : # Get current Public IP from Internet
...
This means you have your own dynamic DNS solution with an Azure DNS Zone. I think this is quite cool…
Please do not hesitate to contact me if you have any questions or feedback about this solution or Azure.
In March I started a series about Ansible. Now I would like to show you the first real code: a solution for creating Azure resources with Ansible. I know this is only the second part of the series, therefore I will show a simple and easy-to-understand example which can work in a live environment as well. Let's start…
I hope you have read the previous article and now have a basic knowledge of Ansible.
Scenario
Our example for today is a solution which creates the following:
Resource Group
Virtual Network
FrontEnd subnet for Virtual Network
Network Security Group for FrontEnd subnet
Some Network Cards which connect to FrontEnd subnet
Simple, but it covers some real-life requirements.
Prepare our Ansible computer
Before we start the real scripting we have to install some packages on our system. We will use Ansible 2.5.x for our example.
Here we create a Service Principal in Azure and the credential file for Azure access.
Create Service Principal
Login to Azure (and set the default subscription)
# Make sure the right cloud (AzureCloud, AzureGermanCloud, ...) is selected
az cloud update
az cloud set -n AzureCloud
# Login with your account
az login -u <your username>
# Set the required subscription
az account set --subscription <subscriptionID>
Create the service principal for automation
az ad sp create-for-rbac --name Automation_ResourceManager --query '{"client_id": appId, "secret": password, "tenant": tenant}'
Please write down the "secret" and store it in a safe place, because you cannot retrieve it from the system again.
Required permission for this SP on the subscription: Contributor
You need owner or co-administrator privileges on Azure to be able to create a Service Principal.
Make credential file
When you execute a playbook, Ansible requires the Azure login data. For this we have to create a credentials file.
# Create directory
mkdir ~/.azure
# Create the azure file for credential
vim ~/.azure/credentials
[default]
subscription_id=53455...
client_id=7f37...
secret=ft56...
tenant=987d...
Ansible solution
Ansible files
Regarding Ansible there are two very important groups of files.
Inventory files: Contain all parameters for our solution or Azure environment. (more information)
hosts: this is the main file which contains the basic parameters and the basic groups. This is an INI-like file.
all.yml: Contains the global variables for plays
other yaml files which match the group names in the hosts file. These contain the group related information.
Playbooks: Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want to enforce. (more information)
Create Inventory files
Create these files in the inventory directory.
hosts
As you read above, this is an INI-like file, and this is the file where we define the groups we would like to deploy in Azure.
[vnet]: In our example we will deploy the Virtual Network related resources (VNET, Subnet, NSG), therefore we have to put a [vnet] group here. Because all of this information will be used later, we can manage the Virtual Network parameters as global variables (so we will define them later). Here we just put a vnet value which is a suffix for our Virtual Network name.
[vms]: As I mentioned above, we will deploy some NICs, so this group could be named [nics], but in the near future I will show you a whole VM deployment, so this group name is fine for us. In this group we define the names of the VMs (which will be part of the NICs' names) and the NIC related FrontEnd IPv4 addresses. A sketch of such a hosts file follows below.
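Here is a minimal sketch of such a hosts file; the VM names and the frontend_ip variable name are my own illustrative choices, not fixed by the solution:
[vnet]
vnet01

[vms]
cust-01 frontend_ip=10.0.1.4
cust-02 frontend_ip=10.0.1.5
cust-03 frontend_ip=10.0.1.6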
group_vars/vms.yml
This is the [vms] group specific file, where we could define variables for [vms] group related activities. In our example this is an empty file:
---
group_vars/all.yml
This is the file which contains the global variables for plays and playbooks. Therefore we define here all the global variables and the [vnet] group related parameters.
Of course you can create some other files and variables…
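As a sketch, group_vars/all.yml could look like this; apart from location, which the playbooks below reference, the variable names are illustrative assumptions:
---
# Global variables for all plays
location: westeurope
resource_group: ansible-demo-rg
# [vnet] group related parameters
vnet_name: demo-vnet
vnet_address_prefix: "10.0.0.0/16"
frontend_subnet_name: FrontEnd
frontend_subnet_prefix: "10.0.1.0/24"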
Create Playbooks
And now we will create the playbooks for the different steps. For this scenario a single file would be enough; nevertheless, this way it will be easier to understand.
The parameters from the inventory files are used similarly to an MVC app. I mean, when you would like to use the location variable from all.yml, you can do it this way: "{{location}}"
Then, if you would like to use a value from the first column of the hosts file (where there is no variable name), such as "cust-03" from the [vms] group, you can do it this way: "{{inventory_hostname}}".
connection: where we would like to execute this play. We will do it on this machine, so we use local
vars: here you can create custom variables, or build new variables by concatenating existing ones
tasks: tasks in this play (we have 1 task in this play, but this task will be executed multiple times according to the number of lines in [vms] group in hosts file)
Create virtual network interface card for FrontEnd subnet
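Here is a minimal sketch of such a play; the module parameters follow the Ansible 2.5 azure_rm_networkinterface module, and the variable names (resource_group, vnet_name, frontend_subnet_name, frontend_ip) match the illustrative inventory sketches above:
---
- name: Create NICs for FrontEnd subnet
  hosts: vms
  connection: local
  vars:
    # NIC name built by concatenating the VM name from the hosts file
    nic_name: "{{inventory_hostname}}-nic-frontend"
  tasks:
    - name: Create virtual network interface card for FrontEnd subnet
      azure_rm_networkinterface:
        resource_group: "{{resource_group}}"
        name: "{{nic_name}}"
        virtual_network_name: "{{vnet_name}}"
        subnet_name: "{{frontend_subnet_name}}"
        ip_configurations:
          - name: ipconfig1
            private_ip_address: "{{frontend_ip}}"
            private_ip_allocation_method: Static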
We are ready for execution.
Execute playbooks
Just only one step left: execute playbooks.
Please be sure you are outside of the inventory and playbooks directories. Then you have to execute the following command:
after -i you have to put the root directory of the inventory files
then you put the path of the network playbook
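Assuming the inventory files live in the inventory directory and the network playbook is saved as playbooks/create_network.yml (both names are my assumptions), the execution looks like this:
# -i points to the root directory of the inventory files, followed by the playbook path
ansible-playbook -i inventory playbooks/create_network.yml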
Result:
Final result in Azure:
As you can see, with some simple configuration you can make some quite cool things. Nevertheless, this is merely a foundation for your future with Ansible.
About a month ago I wrote a post about a bug in azure-cli 2.0.30. That bug affects some amaretto related functions and features. As I forecasted, and as Microsoft promised me, the fix is here. This is really good news. Today I will show you how this fix works, then I provide a collection of materials affected by the fix.
Last week I showed you how you can integrate Git and Jenkins. In that post I did not provide the script part for the Azure related operations. Today I would like to show it.
In Step 4.4.5 we configured a file which is located in our Git (pipeline/Jenkinsfile). This file is the "link" which can call an upload-to-azure script. I know you ask: how?
First, I have good news: AMArETTo supports these operations from v0.0.2.9. AMArETTo is available on Git and on PyPi.
This is the best position for you to create a cool automation solution at your company.
And now let's see how we can implement the Azure functionality in our Jenkins pipeline.
Step 1: Install AMArETTo
This step is quite easy because we merely have to follow the installation steps for AMArETTo.
# Install from bash
sudo pip install amaretto
Step 2: Create Python script which calls AMArETTo
In this step we will create a small python script which executes the upload function from AMArETTo.
Create uploadtoazure.py file into pipeline directory under your GitLab project’s root.
pipeline/uploadtoazure.py
Write a short script which gets some external parameters:
#!/usr/bin/python
# import amaretto
import amaretto
from amaretto import amarettostorage
# import some important packages
import sys
import json
# Get arguments
fileVersion = str(sys.argv[1])
storageaccountName = str(sys.argv[2])
sasToken = str(sys.argv[3])
filePath = str(sys.argv[4])
modificationLimitMin = str(sys.argv[5])
print "--- Upload ---"
uploadFiles = amaretto.amarettostorage.uploadAllFiles(fileVersion = fileVersion, storageaccountName = storageaccountName, sasToken = sasToken, filePath = filePath, modificationLimitMin = modificationLimitMin)
try:
result = json.loads(uploadFiles)
print "--- Upload files' result: '{0}' with following message: {1}".format(result["status"], result["result"])
except:
print "--- Something went wrong during uploading files."
print "-----------------------------"
Create the Jenkinsfile in the pipeline directory under your GitLab project's root.
pipeline/Jenkinsfile
Write a valid and lightweight Jenkinsfile for Python which calls our uploadtoazure.py with the right parameters.
pipeline {
    agent any
    environment {
        FILE_VERSION = "1.0.0.0"
        AZURE_SA_NAME = "thisismystorage"
        AZURE_SA_SAS = "?sv=..."
        FILE_PATH = "./upload/"
        MODIFICATION_LIMIT_IN_MINUTES = "30"
    }
    stages {
        stage('Build') {
            steps {
                withCredentials([azureServicePrincipal('c66gbz87-aabb-4096-8192-55d554565fff')]) {
                    sh '''
                    # Login to Azure with ServicePrincipal
                    az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET --tenant $AZURE_TENANT_ID
                    # Set default subscription
                    az account set --subscription $AZURE_SUBSCRIPTION_ID
                    # Execute upload to Azure
                    python pipeline/uploadtoazure.py "$FILE_VERSION" "$AZURE_SA_NAME" "$AZURE_SA_SAS" "$FILE_PATH" "$MODIFICATION_LIMIT_IN_MINUTES"
                    # Logout from Azure
                    az logout --verbose
                    '''
                }
            }
        }
    }
}
Let me explain the Jenkinsfile. As you can see, there is an unfamiliar part above the bash code: withCredentials(). This comes from Jenkins, and it provides the Azure Service Principal related data for our Storage Account (this was configured in Step 2 of last week's post). When you use this credential, you get well configured variables which contain the related values, such as AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID and AZURE_SUBSCRIPTION_ID. These are fully enough to log in to Azure.
Step 3: Push files to Git
Finally, we have to push these files to our Git
Then press the "Build Now" button in Jenkins
And check the result
I hope that, together with the previous post, this helps you improve your own pipeline and provide a cool solution to your management.
"How to prepare our CI/CD process?": this could be the subtitle of this article. Why? Because I will show you how you can start to build a fully automated CI/CD process.
What is CI/CD? You can read about it on Wikipedia. Nevertheless, it is a very important and useful thing nowadays, when we work in a DevOps model.
Scenario
In my scenario I would like to copy files from Git to Azure with Jenkins when a commit/push happens in my GitLab. As you can see, this is quite complex, therefore it's a good practice example.
It is important to know that the purpose of this post is to show you how you can integrate your GitLab and your Jenkins within some minutes. (So we will use our personal git account to configure the connection, and we will create the connection between Jenkins and Git over HTTPS, not SSH.) This means that, for testing purposes, we won't create a very secure integration.
Integration
Assumptions and prerequisites
You have a Jenkins environment for automation, which is a generally used tool. Installation steps are here.
You have a configured GitLab environment
You configure your environment only for testing purposes. Otherwise, from a security point of view, you have to use different parameters or SSH keys during the configuration.
You store your application in Git.
You have configured the pipeline solution in your Git.
You have an Azure subscription with owner privileges.
Step 1: Configure Jenkins
Here you have to install some plugins to Jenkins.
Login to your Jenkins server
Navigate to “Manage Jenkins > Manage Plugins”
From the “Available” tab, find and select the following plugins:
GitLab Plugin
Azure CLI Plugin
Azure Credentials
Click the “Download now and install after restart” button to download it.
Once the plugins have been downloaded, tick the "Restart Jenkins…" checkbox and wait for Jenkins to restart.
When Jenkins has restarted, navigate to "Manage Jenkins > Configure System"
Find the “Git plugin” section and configure Git basic values
Save configuration
Step 2: Service Principal in Azure
To be able to upload our files to Azure, we have to create a Service Principal which has enough privileges to do it.
Login to a computer where the Azure-Cli 2.0 is installed
Login to the subscription where you would like to create the Service Principal
# check relevant cloud infra where you want to login (i.e. AzureGermanCloud, AzureCloud, AzureChinaCloud, ...)
az cloud set --name <name of Cloud>
# Please login to your azure account
az login -u <useraccount>
# Select your subscription
az account set --subscription <subscription ID>
Create a Service Principal with the following command
az ad sp create-for-rbac --name <Service Principal name in Azure. eg. JenkinsGitAzure-the1bithu> --query '{"client_id": appId, "secret": password, "tenant": tenant}'
Copy these values to a safe place because we will use them in Jenkins!
Step 3: Credentials in Jenkins
Now we set some credentials, such as Git and Azure.
Navigate to “Credentials > System”
Choose the “Global credentials” domain
Click on “Add credentials” button
Create GitLab user credential
Kind: Username with password
Username: <your git username>
Password: <your git password>
ID: <An internal unique ID by which these credentials are identified from jobs and other configuration. Normally left blank, in which case an ID will be generated, which is fine for jobs created using visual forms. Useful to specify explicitly when using credentials from scripted configuration. >
Then click OK button
Create Azure Service Principal credential (We need the data from Step 2)
Client Secret: <Azure Service Principal Client Secret>
Tenant ID <Azure Service Principal Tenant ID>
Azure Environment: <choose one according to your subscription location>
ID: <An internal unique ID by which these credentials are identified from jobs and other configuration. Normally left blank, in which case an ID will be generated, which is fine for jobs created using visual forms. Useful to specify explicitly when using credentials from scripted configuration. >
Then click OK button
Create Azure Storage Account SAS token credential (because our pipeline solution requires the SAS token, we have to store it somewhere in a secure way)
Kind: Secret text
Secret: <paste here the SAS token for storage account. It begins with ‘?sv=’>
ID: <eg. sasTokenAzure | An internal unique ID by which these credentials are identified from jobs and other configuration. Normally left blank, in which case an ID will be generated, which is fine for jobs created using visual forms. Useful to specify explicitly when using credentials from scripted configuration. >
Then click OK button
Create Azure Storage Account Account Key credential (because our pipeline solution requires the Account Key, we have to store it somewhere in a secure way)
Kind: Secret text
Secret: <paste here the Account Key for storage account.>
ID: <eg. storageKeyAzure | An internal unique ID by which these credentials are identified from jobs and other configuration. Normally left blank, in which case an ID will be generated, which is fine for jobs created using visual forms. Useful to specify explicitly when using credentials from scripted configuration. >
Then click OK button
Step 4: Create Jenkins project
Click on ‘New Item’
Type a name to “Enter an item name” field and choose “Pipeline project” then click on OK button
Build Triggers
Tick “Build when a change is pushed to GitLab. GitLab webhook URL:”
Click on “Advanced” button
Choose "Filter branches by name" and write your branch name in the include field. If you receive an error, please ignore it; it will be fixed when you integrate your project with Git (in Step 5).
Then click on the "Generate" button to generate a token for this project.
Pipeline
Select "Pipeline script from SCM" at definition
SCM: Git
Repositories
Paste the clonable url into “Repository URL”. (eg. https://gitlab.com/*****/*****.git)
Credentials. Choose the credential which was created in Step 3.4.
Branches to build. Put your branch here instead of master. (eg. */master)
Script Path (where the Jenkins file is stored in Git): pipeline/Jenkinsfile
Click Save
Click “Build Now” to test it from Jenkins
Check Console output
Step 5: Integrate GitLab with Jenkins project
Login to GitLab
Step into your project
Navigate to “Settings > Integrations”
Paste the Jenkins project URL (Step 4.3.1) and Token (Step 4.3.4) into the first two fields
Choose the required Triggers
Uncheck the “Enable SSL verification” (if you use self-signed certificate on Jenkins)
Click “Add webhook” button
Scroll down and find your newly created webhook in the middle of the screen
Click the “Push events” under the “Test” button dropdown menu.
If you receive an HTTP 200 message with a blue background, the integration was successful
Step 6: Test integration
Modify your project and commit that to this branch
Open Jenkins and check the project
Status after start by Git
Changes where you can see the commit message
Check Console
Awesome… As you can see, it works.
Please note that this is a very basic implementation. If you would like to use it in production, you have to configure impersonated accounts for the git connections, and you have to configure the pipeline solution according to your storage account related data. Additionally, an SSH based integration would be better later on.
This week I would like to inform you about a bug in azure-cli 2.0.30 which can cause some inconvenience when you want to copy blobs in Azure storage accounts.
Some days ago I started to create a solution for copying files inside a storage account (this is related to a git pipeline solution), and I was facing an issue when I wanted to use the "az storage blob copy start" command with the "--sas-token" parameter. The command was quite simple:
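It looked something like this (the account, container and blob names are placeholders):
# Copy a blob inside the storage account, authenticating with a SAS token
az storage blob copy start \
    --account-name thisismystorage \
    --sas-token "?sv=..." \
    --source-container sourcecontainer \
    --source-blob myfile.txt \
    --destination-container destcontainer \
    --destination-blob myfile.txt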
The specified resource does not exist.ErrorCode: CannotVerifyCopySource
<?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>The specified resource does not exist.
RequestId:fe725383-701e-0002-5aae-ccdd02000000
Time:2018-04-05T07:20:26.8836489Z</Message></Error>
Traceback (most recent call last):
File "/usr/lib64/az/lib/python2.7/site-packages/knack/cli.py", line 197, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 347, in execute
six.reraise(*sys.exc_info())
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 319, in execute
result = cmd(params)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 180, in __call__
return super(AzCliCommand, self).__call__(*args, **kwargs)
File "/usr/lib64/az/lib/python2.7/site-packages/knack/commands.py", line 109, in __call__
return self.handler(*args, **kwargs)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/__init__.py", line 420, in default_command_handler
result = op(**command_args)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/blob/baseblobservice.py", line 3032, in copy_blob
False)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/blob/baseblobservice.py", line 3102, in _copy_blob
return self._perform_request(request, _parse_properties, [BlobProperties]).copy
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/common/storageclient.py", line 354, in _perform_request
raise ex
AzureMissingResourceHttpError: The specified resource does not exist.ErrorCode: CannotVerifyCopySource
<?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>The specified resource does not exist.
RequestId:fe725383-701e-0002-5aae-ccdd02000000
Time:2018-04-05T07:20:26.8836489Z</Message></Error>
@the1bit Thanks for bringing this to our attention. #6041 will apply the sas token specified by --sas-token for the source as well as the destination and will be available in our next release.
For now, please use --source-sas to apply the same sas towards your source, as --sas-token currently only applies towards the destination.
I tested it again, and then I was sure there is a bug in the code. I used this command:
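This time the same SAS token went to --source-sas as well (placeholders as before):
# Copy a blob, passing the SAS token for both source and destination
az storage blob copy start \
    --account-name thisismystorage \
    --sas-token "?sv=..." \
    --source-container sourcecontainer \
    --source-blob myfile.txt \
    --source-sas "?sv=..." \
    --destination-container destcontainer \
    --destination-blob myfile.txt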
AzureHttpError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.ErrorCode: CannotVerifyCopySource
<?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
@the1bit I’ve raised a new issue for the bug you found: #6073
Thanks for finding this!
Workaround
There is a bug in "az storage blob copy start" with the "--source-sas" parameter in azure-cli 2.0.30. I am sure they will fix this soon. Meanwhile you can apply the following workarounds:
use your sas token in the "--source-sas" parameter without the leading '?' (see the sketch after this list)
use the storage account key instead of the sas token.
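For example, the first workaround means stripping the leading '?' from the token passed to --source-sas (placeholders as before):
# Workaround: --source-sas gets the token without the leading '?'
az storage blob copy start \
    --account-name thisismystorage \
    --sas-token "?sv=..." \
    --source-container sourcecontainer \
    --source-blob myfile.txt \
    --source-sas "sv=..." \
    --destination-container destcontainer \
    --destination-blob myfile.txt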
I hope this helps you avoid some struggling until the fix arrives.
I know… this is another restore related post for unmanaged disks. I promise this is the last one in this period…
The concept for restore is here, and you can find the previous unmanaged disk related restore post with several scripts here (the managed disk related steps are here). Now I would like to show you the amaretto related restore steps for an unmanaged disk based VM.
Some useful information before you start the restore:
Required naming convention
OS disk and Data disks related vhds must be in the following format:
OS disk:
[vmname]-osdisk.vhd
Data disk:
[vmname]-datadisk-[diskid].vhd (where the diskid represents the value of lun)
(example for 1st data disk: myvm-datadisk-0.vhd)
Prerequisites
Linux OS
Azure-Cli 2.x
Python 2.7
amaretto (Azure management tools by the1bit) package for python. You can download it from pypi and git as well.
Amaretto
In amaretto you can also find a restoreUnmanagedDiskFromVhd function in the amarettorestore module which "does your job" regarding the restore procedure.
What does this function do?
Get restore file from restored container
Download deploy file (generally this is a config.json with UTF-16 encoding(!))
Check file encoding – if necessary it converts from UTF-16 to UTF-8
Deallocate VM
Delete VM object (ONLY)
Get os disk information (restored vhd’s url)
Delete old unmanaged disk
Copy os disk to its original location
Get data disk information (restored vhds’ url)
Delete old unmanaged disk one-by-one
Copy data disk to its original location
Check restore result (whether all disks are restored or not)
And now let’s see the steps one-by-one:
1. Restore VM’s VHDs from backup vault
Choose the right restore point from the Recovery Services vault which belongs to the target VM, and restore the OSDisk and DataDisks to your storage account.
2. Configure and execute “restoreUnmanagedDiskFromVhd” function from amaretto
You have to execute the following commands with your VM related parameters from python:
# Import the amaretto package
import amaretto

# Your VM name
vmName = "thisismyserver-1"
# resource group name where the VM is located
resourceGroup = "thisismyrg"
# location where the resources are located. (westeurope, germanycentral, ...)
location = "westeurope"
# storage account name where the VM's restored vhds are stored
sourceStorageAccount = "thisismystorage"
# 1st or 2nd access key for sourceStorageAccount
sourceSecretKey = "d22j/rr+a7br7LW6KDKV8KZkO2wCIe3m0MTKVr3Tt9B9NMZZsYxny8bvWvPwUGgZpDkE8gyAePjWCVu2IZ4LYw=="
# name of container where the restored vhds are stored
sourceContainer = "vhd6bdda0e88c88408299246c468784656546a"
# Execute restore function
amaretto.amarettorestore.restoreUnmanagedDiskFromVhd(vmName, resourceGroup, location, sourceStorageAccount, sourceSecretKey, sourceContainer)
3. Re-create target VM with your ARM Template
In this step you merely redeploy your VM from the ARM template you had created for the original VM creation.
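A minimal sketch of such a redeployment with Azure-Cli 2.x; the template and parameter file names are assumptions:
# Redeploy the VM from its original ARM template
az group deployment create \
    --resource-group thisismyrg \
    --template-file vm-template.json \
    --parameters @vm-parameters.json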
Next week I will provide a new topic to you…
If you need some help regarding ARM Template for restore or other scenarios do not hesitate to contact me.
Last week I provided the unmanaged disk related restore description, which is based on the concept for restore, and now I show the managed disk related restore steps on a technical level.
Some useful information before you start the restore:
Required naming convention
OS disk and Data disks related vhds must be in the following format:
OS disk:
[vmname]-osdisk
Data disk:
[vmname]-datadisk-[diskid] (where the diskid represents the value of lun)
(example for 1st data disk: myvm-datadisk-0)
Prerequisites
Linux OS
Azure-Cli 2.x
Python 2.7
amaretto (Azure management tools by the1bit) package for python. You can download it from pypi and git as well.
For this type of restore we can use the previously introduced amaretto python package. This package contains the restore step related functions one-by-one. Additionally, you can also find a restoreManagedDiskFromVhd function in the amarettorestore module which "does your job" regarding the restore procedure.
What does this function do?
Get restore file from restored container
Download deploy file (generally this is a config.json with UTF-16 encoding(!))
Check file encoding – if necessary it converts from UTF-16 to UTF-8
Deallocate VM
Delete VM object (ONLY)
Get os disk information (size, tags – it is important from billing and categorization point of view)
Delete old managed disk
Convert vhd to managed disk (with tags and sizes from step 6)
Get data disks information (size, tags – it is important from billing and categorization point of view)
Delete old managed disks one-by-one
Convert vhds to managed disks (with tags and sizes from step 9)
Check restore result (whether all disks are restored or not)
And now let’s see the restore steps one-by-one:
1. Restore VM’s VHDs from backup vault
Choose the right restore point from Recovery Services vaults which belongs to target VM and Restore OSDisk and DataDisks to your storage account.
Note: in case of managed disk restore, template based restore for managed disks is planned for June 2018. Until then, we have to identify the restored VHDs ourselves. This means that after the restore you have vhd files instead of managed disks.
2. Configure and execute “restoreManagedDiskFromVhd” function from amaretto
You have to execute the following commands with your VM related parameters from python:
# Import the amaretto package
import amaretto

# Your VM name
vmName = "thisismyserver-2"
# resource group name where the VM is located
resourceGroup = "thisismyrg"
# location where the resources are located. (westeurope, germanycentral, ...)
location = "westeurope"
# storage account name where the VM's restored vhds are stored
sourceStorageAccount = "thisismystorage"
# 1st or 2nd access key for sourceStorageAccount
sourceSecretKey = "d22j/rr+a7br7LW6KDKV8KZkO2wCIe3m0MTKVr3Tt9B9NMZZsYxny8bvWvPwUGgZpDkE8gyAePjWCVu2IZ4LYw=="
# name of container where the restored vhds are stored
sourceContainer = "vhd6bdda0e88c88408299246c468784656546a"
# managedDiskAccountType (optional): sku of disk. Possible values: Standard_LRS or Premium_LRS. Default value: Standard_LRS
# managedDiskAccountType = "Standard_LRS"
# Execute restore function
amaretto.amarettorestore.restoreManagedDiskFromVhd(vmName, resourceGroup, location, sourceStorageAccount, sourceSecretKey, sourceContainer)
Result:
>>> amaretto.amarettorestore.restoreManagedDiskFromVhd(vmName, resourceGroup, location, sourceStorageAccount, sourceSecretKey, sourceContainer)
2018-02-22 13:18:40 - FUNCTION Restore Managed disk based VM's vhds
2018-02-22 13:18:40 - Get restore file from restored container
2018-02-22 13:18:43 - Download deploy file
Finished[#############################################################] 100.0000%
2018-02-22 13:18:45 - Check config.json file encoding
2018-02-22 13:18:45 - Deallocate VM: thisismyserver-2
2018-02-22 13:18:48 - Delete VM object: thisismyserver-2
2018-02-22 13:18:50 - - OS DISK
2018-02-22 13:18:50 - Get os disk information
2018-02-22 13:18:50 - Delete old managed disk: thisismyserver-2-osdisk
2018-02-22 13:18:54 - Convert os vhd to its original location
2018-02-22 13:19:00 - - DATA DISKS
2018-02-22 13:19:00 - Get data disks information
2018-02-22 13:19:00 - Delete old data disk
2018-02-22 13:19:05 - Convert data disk to its original location: thisismyserver-2-datadisk-0
2018-02-22 13:19:11 - OS and Data disks are restored
True
3. Re-create target VM with your ARM Template
In this step you merely redeploy your VM from the ARM template you had created for the original VM creation.
I hope this helps to solve your VM restore problem.
If you need some help regarding ARM Template for restore or other scenarios do not hesitate to contact me.
In the last few weeks I have been working on an Azure tool collection which could be useful for everyone who wants to do cool things in Azure. The result is a python package which will be expanded day-by-day.
In this article I would like to provide it to you for testing or everyday usage.
You can find it on pypi and git as well. Download and documentation links are here:
# Test amaretto from python
import amaretto
print amaretto.showMessage('Your message')
Notes:
Install without cache from the shell: pip install amaretto --no-cache-dir
After the update, please execute the following command from the shell: pip show amaretto. If you see that the latest version is not installed, please execute pip uninstall amaretto to uninstall it.
Please use it, test it, then send me feedback. Thanks!
As I promised last week, here are some materials for unmanaged disk based VM restore from a Recovery Services vault. Last week I posted the concept for restore, and now I show the most important steps on a technical level.
Some useful information before you start the restore:
Required naming convention
OS disk and Data disks related vhds must be in the following format:
OS disk:
[vmname]-osdisk.vhd
Data disk:
[vmname]-datadisk-[diskid].vhd (where the diskid represents the value of lun)
(example for 1st data disk: myvm-datadisk-0.vhd)
Prerequisites
Azure-Cli 2.x
Python 2.7
And now let’s see the steps one-by-one:
1. Stop (deallocate) the target VM
This step is executable from PowerShell, Azure-Cli and Portal as well.
# Stop (deallocated) vm from Azure-Cli 2.x
az vm deallocate --name <vm name> --resource-group <resource group name> --verbose
Be sure the VM is in Stopped (deallocated) status!
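You can verify this from Azure-Cli 2.x as well; the powerState query below should report "VM deallocated":
# Check the VM power state
az vm show --name <vm name> --resource-group <resource group name> --show-details --query powerState --output tsv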
2. Delete necessary objects
In this step we have to delete the target VM object (ONLY the virtual machine object) and the vhd files which belong to the target VM.
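A sketch of these deletions with Azure-Cli 2.x; the container and blob names are placeholders:
# Delete ONLY the VM object (disks and NICs are kept)
az vm delete --name <vm name> --resource-group <resource group name> --yes
# Delete the old vhd blobs which belong to the VM
az storage blob delete --account-name <storage account> --account-key <account key> --container-name vhds --name <vmname>-osdisk.vhd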