Christmas Wishes
Thank you for your attention and support in 2018.
We would like to wish you a Merry Christmas and a very Happy New Year for 2019.
Nowadays automation-related topics are more popular than ever. Just think of artificial intelligence, machine learning, and robotics. Thousands of people learn and work on projects in these areas day by day to make millions of people's lives easier and better. This is cool. 🙂
There are several platforms (especially cloud providers) which have supported AI and ML for a long time. Nevertheless, there was no real (public) platform supporting robotics… until now.
Last Monday (26.11.2018) AWS announced an expansion of its services that brings real robotics support to the public cloud. 🙂 Announcing AWS RoboMaker: A New Cloud Robotics Service
Why is it so important? Starting to build and program a great and useful robot is a big effort in both time and money. You have to learn the basics of robotics, buy a starter kit, create a development and testing environment with tools, etc. Here AWS can help you save time and money. Additionally, you have a good chance of choosing a platform which is widely available to everyone (so you can build highly compatible robots and features). Obviously, this also ties in with all the other fancy cloud services such as IoT.
But let’s see some official information from AWS:
It is really interesting, isn’t it?
You can learn more here:
I suggest reading about these topics if you are interested in robotics or automation. Additionally, I can recommend a great page belonging to the Hungarian Robot Builders Association. This is a club where you can start learning the basics of robotics.
There is nothing left but to check the possibilities and opportunities. 🙂
From time to time every cloud provider improves its services and resources. This means you will face several more or less serious changes in any environment which depends on the cloud provider.
The 1st of December (tomorrow) is one of those days in Azure. This is quite an important deadline for you and your business if your resources are in Azure. You shouldn't miss it. Maybe you received an information mail about some of these changes a few weeks ago, but I think you should know the full list of them. Therefore I would like to share the elements of this list, which Microsoft published over the last month.
My suggestion is to check the list and the links to avoid issues regarding the changes effective from 1st of December 2018.
Effective December 1, 2018, the resource GUIDs and names for Azure Blob Storage will change. Please review complete details.
Effective December 1, 2018, the price and resource GUIDs for Azure SQL Database Managed Instance Business Critical will change. View pricing details and visit the Azure updates webpage for resource GUID changes.
Effective December 1, 2018, the naming for Azure Standard Load Balancer will change. Please review complete details.
Effective December 1, 2018, the price for Linux on App Service Environment will change. Please review complete details.
Effective December 1, 2018, the price for Azure SQL Database Backup Storage will change. Please review complete details.
Effective December 1, 2018, the price for Azure Search Cognitive Search Image Extraction will change. Please review complete details.
Effective December 1, 2018, the resource GUIDs and names for Azure Bandwidth Data Transfer In and Out US Government will change. Please review complete details.
From November 15, 2018, Lv2-series virtual machines have been available in the following regions: Asia Pacific Southeast, Europe West, US East, and US East 2.
As a result, effective January 1, 2019, pricing will change. Please review complete details.
I hope this helps you have a great weekend and Christmas. 🙂
This spring I wrote a post about CI/CD with Git, Azure and Jenkins, where I showed you an easy configuration of a CI/CD process. That post contained a chapter where we created a Service Principal in Azure for automation. Since then I have received many requests from your side for a demo of Service Principal creation, because it would be useful for you.
Today I am happy to share with you a tutorial video from my YouTube channel about how to create a Service Principal in Azure (with azure-cli 2.x).
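If you prefer the command line over the video, the core steps look roughly like this (a minimal sketch; the service principal name is just an example):
# Login with your account
az login -u <your username>
# Select the subscription the service principal will work in
az account set --subscription <subscriptionID>
# Create the service principal and print only the values needed for automation
az ad sp create-for-rbac --name MyAutomationSP --query '{"client_id": appId, "secret": password, "tenant": tenant}'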
I hope this helps you increase the share of automated processes.
This summer, after some long days at the beach, I was looking at the clouds in the sky and thought I should start a new cloud-related adventure. A spark came into my mind: start to learn AWS. Then: "Am I crazy? I have tons of Azure experience and I have never seen AWS before." Finally the answer was easy: "Why not? I have enough motivation to do this; moreover, it is a good opportunity to compare Azure and AWS."
Therefore I set a goal. It was quite simple: become an AWS Certified Person within 2 months.
Then I started to collect the required training sources and materials for this adventure. I decided to use the online training portals most familiar to me: Udemy and Linux Academy. Although you can find several free materials there, you should accept that when you want solid knowledge in a new area, you have to invest money into your improvement.
Luckily I was sponsored a little at Linux Academy thanks to my previous professions, so I could use it almost for free. Otherwise it costs $100 for two months ($49/month). Then you can use all materials there without any limitation.
Registration there is free and there are several free materials. Nevertheless, if you need a really good course, it costs between $10 and $30 per course.
Amazon also gives you the opportunity to buy a practice exam for $20. I know this is not free and not a full exam (merely a slice of one), BUT, in comparison, Microsoft doesn't give you any opportunity to see real exam-related questions.
When you try to find free sources you will see hundreds of dump-related pages. These pages offer free demos, 100% money-back guarantees and other things which look good, but… who knows whether they are reliable or not. My personal suggestion is to skip these sources.
So, first of all, we skip the "free" dumps for reliability reasons.
Then check the others. Although Udemy is a great portal for learning, my personal experience is that its AWS-related trainings are not so reliable. I mean, there are tons of great materials for Azure, Ansible, development, etc., but the quality of the trainings strongly depends on the instructor. The good news is that you can find some Linux Academy related online trainings there as well. Hence I merely chose some practice tests from Udemy.
Accordingly, my main knowledge source was Linux Academy. Luckily, in the middle of this year they updated the materials for "AWS Certified Solutions Architect – Associate". If you are new to AWS, I strongly recommend starting with the AWS Concepts or AWS Essentials course at Linux Academy before you jump into the Architect training.
Finally, there is the AWS training and certification site, where you can schedule the exam and buy a practice test.
Accordingly, I chose the following trainings and materials for learning:
| Name and link | Description | Level | Source | Comment |
|---|---|---|---|---|
| AWS Concepts | Very useful and short course about the concepts of AWS | Beginner | Linux Academy | Recommended if you are new to AWS |
| AWS Essentials | It introduces the AWS resources and services. | Beginner | Linux Academy | Recommended if you are new to AWS |
| AWS Certified Solutions Architect - Associate Level (2018) | | Intermediate | Linux Academy | |
When you have chosen the right trainings, I suggest making a learning plan. This plan is needed because it keeps your progress on track, and learning becomes very efficient if you follow it. 🙂
There are manual ways to do this, such as deciding to learn 3–5 hours per day or creating calendar events to allocate your time for learning AWS. Nevertheless, Linux Academy provides its great Course Scheduler function:
We have trainings and a great plan, so nothing is left… Let's start learning.
The most important things for increasing your efficiency during the learning period:
After you have watched all the videos and completed all the hands-on labs, it's time to check your knowledge and improve your chance of passing the exam. Practice tests help you organize your knowledge of AWS.
Here are some useful practice tests where you can check your knowledge and get a better feel for the real exam questions:
| Name and link | Description | Source | Comment |
|---|---|---|---|
| AWS Solutions Architect – Associate Feb 2018 Practice Exam | Real questions and scenarios with references and explanations. | Udemy | More than 300 questions. Lifetime access. |
| 2018 Practice Test AWS Solutions Architect Associate | Real questions and scenarios with references and explanations. | Udemy | More than 180 questions. Lifetime access. |
| AWS Certified Solutions Architect - Associate Practice Exam | Real questions | AWS training and certification | 25 questions - exam duration: 30 minutes. One-time access! |
I spent more than 2 weeks on practice and on improving my understanding of the exam. I took each practice exam and test more than 3 times. Method: fill in the test – check the answers – fine-tune my knowledge – repeat.
When you know everything about AWS, you merely have to schedule the exam via the AWS training and certification site.
If you need some tips for the exam, here are my suggestions:
You can find some other tips for the preparation and the exam here.
Finally, I hope you will see the following message on the screen when you click the "End exam" button: "Congratulations! You have passed…"
I started to learn AWS 7–8 weeks ago for a few reasons:
During this journey I came across the DevOps Essentials course at Linux Academy, which contains a very fancy and useful overview of DevOps tools. I am sure it will be useful for you as well:
Use it happily. 🙂
In the middle of summer we were informed about three brand new exams for Azure administrators. As I mentioned in that article, this is a good opportunity for IT experts who need a role-based Azure cert. Nevertheless, this role-based approach could seem strange coming from Microsoft. Hence we felt there would be some serious changes around Microsoft certifications, and there you are. Microsoft says: shake it up!
Three days ago a new announcement came out: "The current Azure certification that have been providing the Azure focused core to the MCSA: Cloud Platform and MCSE: Cloud Platform and Infrastructure certification paths are going to be retired December 31, 2018. However, the MCSA and MCSE certifications are not being retired, but rather transformed instead."
This means the following exams will be retired by the end of this year:
Additionally, the existing (old) MCSA and MCSE certifications based on them are also being retired.
Good news or bad news?
Oogway (from Kung Fu Panda): Ah, Shifu. There is just news. There is no good or bad.
I guess this is a new career path for those who would like to choose an Azure exam according to their role.
What's next? We merely have to get used to the new certification model and logo…
…and start preparing for the new role-based exams.
The following 6 job roles will have role-based certification paths soon:
For more details please read the following articles:
In the near future I hope we will have all the information we need for taking these new exams and transition exams. 🙂
"Calling All Azure Administrators!" – This is the title of the Microsoft blog post where they provide a very good opportunity and some coupons for 3 brand new Azure exams.
The target audience is the Azure administrator group. Please hurry, because you only have a few weeks to apply the coupons. "This is NOT a private access code. This code is only valid for exam dates on or before August 9, 2018."
For the details and coupons please read the full article here: Calling All Azure Administrators! Introducing a New Certification Just for You…
Good luck with these exams. 🙂
Today is a milestone, because I will provide you with a real-life solution for a real-life scenario with Ansible in Azure. Why is it so important? Since I started to learn Ansible I have found several examples for different scenarios, but as I realized, nobody had provided a really good solution for the situation when you would like to deploy a multi-NIC environment to Azure with Ansible. Therefore I did it, and I would like to share it with you.
A multi-NIC environment is an architecture where you can manage and use your services in a secure way.
As you can see, there is another ingredient in this architecture: the virtual network and the NSGs live in a separate resource group inside your subscription. Why? Because they are "shared" resources, and this way we can use them for different services. Additionally, our architecture stays easy to understand and manage.
According to the drawing above, we will create a simple architecture with the following VMs and roles:
Notes:
This architecture is easily deployed with Ansible. Nevertheless, you have to be sure you use the right version of Ansible: although Ansible has supported Azure since version 2.4, most of the required functionality is quite new. The 2 main features are available thanks to my requests, because I was facing some issues while developing the solution. You can find these bugs here:
Therefore the required Ansible version for this solution today is 2.7dev. You can install it this way:
sudo pip install git+https://github.com/ansible/ansible.git@devel
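To verify that you actually got a 2.7 development build, a quick check:
# Print the installed Ansible version
ansible --version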
When you have installed the right package on your computer, you can pull the required code from git (201_multi_nic_vm).
# Navigate to the git directory
cd /data/git
# Clone azansible from git
git clone https://github.com/the1bit/azansible.git
# Go to the 201_multi_nic_vm solution directory
cd 201_multi_nic_vm
Before you start the deployment you have to prepare the variables:
---
env_id: "the1bit"
location: "westeurope"
public_key: "ssh-rsa AAAAB3NzaC1yc2EAAAAB..."
vnet_rg: "common"
vnet_address_prefix: "79.0.0.0/23"
vnet_fe_subnet_address_prefix: "79.0.0.0/24"
vnet_be_subnet_address_prefix: "79.0.1.0/24"
Finally, only one thing is left: start the deployment.
# Start the Ansible playbooks
ansible-playbook -i inventory/ -e mainpath="/data/git/azansible/201_multi_nic_vm" playbooks/main.yml
and check the result…
After the successful deployment you can use your environment. Only some additional configuration is required:
In the near future I will expand this solution with a load balancer and other features, so please follow me on Twitter, Facebook or Git.
This week I would like to show you my latest automation solution for Azure. It can start a VM in Azure on a schedule.
Everybody knows the automatic VM shutdown feature (Microsoft.DevTestLab/schedules) in Azure. It debuted in 2016. I love to use it for Azure developer servers because it saves me cost and time. Nevertheless, there is a small gap here: why can't Azure start my developer machine when I arrive at the office?
Now I have made a "fix" for this gap.
My Azure VM Manager solution helps you start your VM in Azure at the time you schedule. So when you arrive at your workplace, your VM is up and running every time. 🙂
At the moment v18.6.0 supports only Linux machines; specifically, I have only tested it on CentOS 7. The configuration file looks like this:
{
"vmName": "<name of your vm>",
"vmResourceGroup": "<vm resource group>",
"azure": {
"cloudName": "AzureCloud",
"clientID": "<Service Principal ID>",
"clientSecret": "<Service Principal Secret>",
"tenant": "<Tenant ID>",
"subscriptionID": "<Subscription ID>"
}
}
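Under the hood the script presumably logs in with this service principal and starts the VM; here is a minimal sketch of that core logic with azure-cli (the real azvmmanager.sh may differ):
# Login with the service principal from the config (values are placeholders)
az login --service-principal -u <Service Principal ID> -p <Service Principal Secret> --tenant <Tenant ID>
# Select the subscription
az account set --subscription <Subscription ID>
# Start the VM
az vm start --name <name of your vm> --resource-group <vm resource group>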
# Edit the crontab settings
vim /etc/crontab
### Configure to start your VM at 9 AM on weekdays
0 9 * * mon,tue,wed,thu,fri root cd /root/scripts/azvmmanager;bash azvmmanager.sh;
less /var/log/azvmmanager/azvmmanager20180614090001.log
Part of the log:
Thu Jun 14 09:00:29 CEST 2018 : # Login success
Thu Jun 14 09:00:29 CEST 2018 : # Set default subscription
Thu Jun 14 09:00:38 CEST 2018 : # Default subscription has been set
Thu Jun 14 09:00:38 CEST 2018 : # Start VM: xxxxxxxxx
Let’s check your VM status in Azure. You can see it is up and running…
Please do not hesitate to contact me if you have any questions or feedback about this solution or Azure. 🙂
In the world of clouds there are still home servers and on-premises servers which work hard on their daily tasks for their owners. The people who own these machines often try to reach them through the internet. Because many internet providers give their customers dynamic IPs, this is sometimes a real challenge. Luckily there are several good and free dynamic DNS sites where we can register a home server and reach it with a DNS name over the internet. Here is a quite fresh list of the most popular ones: 17 Popular Sites Like No-ip
I used No-ip, but I did not like the 30-day confirmation of my host there. I know this is not a big deal. Additionally, I am interested in Azure, so the solution I would like to show you is a simple step along that path.
I decided to build an Azure-based solution which can replace the No-ip client on my home server. Now it is ready and stable enough for "PROD" usage.
And now… I would like to introduce an alternative for dynamic DNS which works with an Azure DNS zone. Sounds good? Let's see…
This solution helps you update your home server's public IP dynamically. It is not 100% free: the monthly cost with a Pay-As-You-Go subscription is about 1 EUR/month. Additionally, you have to register a domain which you can use in Azure (you can do this in Azure as well).
At the moment v18.6.0 supports only Linux machines; specifically, I have only tested it on CentOS 7. The configuration file looks like this:
{
"zoneName": "<domain name>",
"aRecordName": "<subdomain>",
"dnsResourceGroup": "<DNS Zone resource group>",
"azure": {
"cloudName": "AzureCloud",
"clientID": "<Service Principal ID>",
"clientSecret": "<Service Principal Secret>",
"tenant": "<Tenant ID>",
"subscriptionID": "<Subscription ID>"
}
}
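Conceptually the script looks up the current public IP and writes it into the A record of the DNS zone; here is a rough azure-cli sketch under that assumption (the real azdns.sh may differ, and the ipify URL is just one way to look up your IP):
# Login with the service principal from the config (values are placeholders)
az login --service-principal -u <Service Principal ID> -p <Service Principal Secret> --tenant <Tenant ID>
az account set --subscription <Subscription ID>
# Get the current public IP from the internet
CURRENT_IP=$(curl -s https://api.ipify.org)
# Overwrite the A record value in the DNS zone
az network dns record-set a update --resource-group <DNS Zone resource group> --zone-name <domain name> --name <subdomain> --set aRecords[0].ipv4Address=$CURRENT_IP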
# Edit the crontab settings
vim /etc/crontab
### Configure to execute at 7 AM and 7 PM every day
0 7 * * * root cd /root/scripts/azdns;bash azdns.sh;
0 19 * * * root cd /root/scripts/azdns;bash azdns.sh;
less /var/log/azdns/azdns20180614070001.log
Part of the log file:
...
Thu Jun 14 07:00:29 CEST 2018 : # Login success
Thu Jun 14 07:00:29 CEST 2018 : # Set default subscription
Thu Jun 14 07:00:38 CEST 2018 : # Default subscription has been set
Thu Jun 14 07:00:38 CEST 2018 : # Get current Puplic IP from Internet
...
This means you have your own dynamic DNS solution with an Azure DNS zone. I think this is quite cool…
Please do not hesitate to contact me if you have any questions or feedback about this solution or Azure. 🙂
The biggest news in the world right now: "Microsoft to buy GitHub for $7.5 billion". Microsoft confirms it's acquiring GitHub. You can read the official blog posts regarding this breaking news: A bright future for GitHub from Chris Wanstrath and Microsoft + GitHub = Empowering Developers from Satya Nadella.
When I heard this news, thousands of questions came up in my mind. I think this is good news and I am quite excited about the future of GitHub with Microsoft. I am sure there are numerous people who are not so happy about it. (I hope they won't delete their code from GitHub.) 🙂
“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” Microsoft CEO Satya Nadella said in a statement.
“We have been on a journey with open source, and today we are active in the open source ecosystem, we contribute to open source projects, and some of our most vibrant developer tools and frameworks are open source.” Satya Nadella added.
“We both believe GitHub needs to remain an open platform for all developers. No matter your language, stack, platform, cloud, or license, GitHub will continue to be your home—the best place for software creation, collaboration, and discovery.” Wanstrath said in his post.
“I’m extremely proud of what GitHub and our community have accomplished over the past decade, and I can’t wait to see what lies ahead. The future of software development is bright and I’m thrilled to be joining forces with Microsoft to help make it a reality.” Wanstrath wrote.
Nevertheless, in the next few weeks several topics will be clarified, and everyone should get reassuring news and information about GitHub's future.
In March I started a series about Ansible. Now I would like to show you the first real code and a solution for creating Azure resources with Ansible. I know this is only the second part of the series, so I will show a simple and easy-to-understand example which can work in a live environment as well. Let's start…
I hope you have read the previous article and now have a basic knowledge of Ansible.
Our example for today is a solution which creates the following:
Simple but covers some real life requests. 🙂
Before we start the real scripting we have to install some packages on our system. We will use Ansible 2.5.x for our example.
# Update pip on your computer
sudo pip install --upgrade pip
# Install/update azure
sudo pip install azure --upgrade
# Install/update msrestazure
sudo pip install msrestazure --upgrade
# Install/update packaging
sudo pip install packaging --upgrade
# Install/update cryptography
sudo pip install cryptography --upgrade
# Install/update the azure module for Ansible
sudo pip install ansible[azure] --upgrade
Here we create a Service Principal in Azure and the credentials file for Azure access.
az cloud update
az cloud set -n AzureCloud
# Login with your account
az login -u <your username>
# Set the required subscription
az account set --subscription <subscriptionID>
az ad sp create-for-rbac --name Automation_ResourceManager --query '{"client_id": appId, "secret": password, "tenant": tenant}'
When you execute a playbook, Ansible requires the Azure login data. For this we have to create a file.
# Create the directory
mkdir ~/.azure
# Create the azure credentials file
vim ~/.azure/credentials

[default]
subscription_id=53455...
client_id=7f37...
secret=ft56...
tenant=987d...
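Since this file contains a secret, it is worth restricting its permissions as an extra step (my suggestion, not strictly required):
# Make the credentials file readable only by your user
chmod 600 ~/.azure/credentials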
Regarding Ansible, there are 2 very important groups of files.
.
├── inventory
│ ├── group_vars
│ │ ├── all.yml
│ │ └── vms.yml
│ └── hosts
└── playbooks
├── azure_network.yml
└── azure_vnet.yml
Create these files in the inventory directory.
As you read above, this is an INI-like file, and this is where we define the groups we would like to deploy in Azure.
Here is our hosts file:
[vnet]
vnet

[vms]
cust-01 vm_fe_ip="79.0.0.11"
cust-02 vm_fe_ip="79.0.0.12"
cust-03 vm_fe_ip="79.0.0.13"
cust-04 vm_fe_ip="79.0.0.14"
This is a [vms] group specific file where we can define settings for [vms] group related activities. In our example this file is empty.
---
This is the file which contains the global variables for plays and playbooks. Therefore we define here all the global variables and the [vnet] group related parameters.
---
env_id: "the1bit"
location: "westeurope"
resource_group: "custom"
vnet_address_prefix: "79.0.0.0/23"
vnet_fe_subnet_address_prefix: "79.0.0.0/24"
Of course you can create some other files and variables…
And now we create the playbooks for the different steps. For this scenario a single file would be enough; nevertheless, this way it will be easier to understand.
The parameters from the inventory files are used somewhat like in an MVC app. I mean, when you would like to use the location variable from all.yml, you can do it this way: "{{location}}"
Then, if you would like to use a value from the first column of the hosts file (where there is no variable name), such as "cust-03" from the [vms] group, you can do it this way: "{{inventory_hostname}}".
Create these files in the playbooks directory.
This file contains the following creation steps:
- name: Create VirtualNetwork
hosts: vnet
connection: local
vars:
rgName: "{{env_id}}-rg-{{resource_group}}"
vnet_name: "{{env_id}}-{{inventory_hostname}}"
tasks:
- name: Create resource Group - {{env_id}}-rg-{{resource_group}}
shell: az group create -n "{{rgName}}" --location "{{location}}"
- name: Create virtual network - {{vnet_name}}
azure_rm_virtualnetwork:
resource_group: "{{rgName}}"
name: "{{vnet_name}}"
address_prefixes: "{{vnet_address_prefix}}"
- name: Add FE subnet {{vnet_name}}-subnet-fe
azure_rm_subnet:
resource_group: "{{rgName}}"
name: "{{vnet_name}}-subnet-fe"
address_prefix: "{{vnet_fe_subnet_address_prefix}}"
virtual_network: "{{vnet_name}}"
- name: Create Network Security Group - FE - {{vnet_name}}-nsg-fe
azure_rm_securitygroup:
resource_group: "{{rgName}}"
name: "{{vnet_name}}-nsg-fe"
Here we have some exciting sections:
This file contains the following creation steps:
- name: Create Network
hosts: vms
connection: local
vars:
rgName: "{{env_id}}-rg-{{resource_group}}"
vnet_name: "{{env_id}}-vnet"
tasks:
- name: Create virtual network interface card - FE
azure_rm_networkinterface:
resource_group: "{{rgName}}"
name: "{{env_id}}-{{inventory_hostname}}-nic-fe"
virtual_network: "{{vnet_name}}"
subnet: "{{vnet_name}}-subnet-fe"
security_group: "{{vnet_name}}-nsg-fe"
ip_configurations:
- name: ipconfig1
private_ip_allocation_method: Static
private_ip_address: "{{vm_fe_ip}}"
primary: true
Here we have some exciting sections:
We are ready for execution.
Only one step is left: executing the playbooks.
Please be sure you are outside the inventory and playbooks directories. Then execute the following commands.
1. Create VNET related resources
ansible-playbook -i inventory/ playbooks/azure_vnet.yml
Where:
2. Create NETWORK related resources
ansible-playbook -i inventory/ playbooks/azure_network.yml
Where:
Result:
Final result in Azure:
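If you prefer the CLI to the portal, you can list what was created; with the variables above, the rgName pattern resolves to the1bit-rg-custom:
# List every resource in the deployed resource group
az resource list --resource-group the1bit-rg-custom --output table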
As you can see, with some simple configuration you can do some quite cool things. Nevertheless, this is merely a foundation for your future with Ansible.
You can find this and other exciting Ansible solutions in my azansible Git repository.
About a month ago I wrote a post about a bug in azure-cli 2.0.30. That bug affects some amaretto-related functions and features. As I forecasted, and as MS promised me, the fix is here. This is really good news. Today I will show you how this fix works, then I provide a collection of materials affected by the fix.
Last week I showed you how you can integrate Git and Jenkins. In that post I did not provide the script part for the Azure-related operations. Today I would like to show it.
In step 4.4.5 we configured a file which is located in our Git repository (pipeline/Jenkinsfile). This file is the "link" which can call an upload-to-azure script. I know you ask: how?
First, the good news: AMArETTo supports these operations from v0.0.2.9. AMArETTo is available on Git and on PyPI. 🙂
This puts you in the best position to create a cool automation solution at your company.
And now let's see how we can implement the Azure functionality in our Jenkins pipeline.
# Install from bash
sudo pip install amaretto
In this step we create a small Python script which executes the upload function from AMArETTo.
pipeline/uploadtoazure.py
#!/usr/bin/python
# import amaretto
import amaretto
from amaretto import amarettostorage
# import some important packages
import sys
import json
# Get arguments
fileVersion = str(sys.argv[1])
storageaccountName = str(sys.argv[2])
sasToken = str(sys.argv[3])
filePath = str(sys.argv[4])
modificationLimitMin = str(sys.argv[5])
print "--- Upload ---"
uploadFiles = amaretto.amarettostorage.uploadAllFiles(fileVersion = fileVersion, storageaccountName = storageaccountName, sasToken = sasToken, filePath = filePath, modificationLimitMin = modificationLimitMin)
try:
result = json.loads(uploadFiles)
print "--- Upload files' result: '{0}' with following message: {1}".format(result["status"], result["result"])
except:
print "--- Something went wrong during uploading files."
print "-----------------------------"
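Before wiring it into Jenkins, you can test the script locally with the same five arguments (the values here are placeholders matching the pipeline environment below):
# Manual test run: version, storage account, SAS token, path, modification limit in minutes
python pipeline/uploadtoazure.py "1.0.0.0" "thisismystorage" "?sv=..." "./upload/" "30"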
pipeline/Jenkinsfile
pipeline {
agent any
environment {
FILE_VERSION = "1.0.0.0"
AZURE_SA_NAME = "thisismystorage"
AZURE_SA_SAS = "?sv=..."
FILE_PATH = "./upload/"
MODIFICATION_LIMIT_IN_MINUTES = "30"
}
stages {
stage('Build') {
steps {
withCredentials([azureServicePrincipal('c66gbz87-aabb-4096-8192-55d554565fff')]) {
sh '''
# Login to Azure with ServicePrincipal
az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET --tenant $AZURE_TENANT_ID
# Set default subscription
az account set --subscription $AZURE_SUBSCRIPTION_ID
# Execute upload to Azure
python pipeline/uploadtoazure.py "$FILE_VERSION" "$AZURE_SA_NAME" "$AZURE_SA_SAS" "$FILE_PATH" "$MODIFICATION_LIMIT_IN_MINUTES"
# Logout from Azure
az logout --verbose
'''
}
}
}
}
}
Let me explain the Jenkinsfile. As you can see, there is an unfamiliar part above the bash code: withCredentials(). This comes from Jenkins, and it holds the Azure Service Principal related data for our Storage Account (this was configured in Step 2 of last week's post). When you use this credential, you get ready-made variables with the related values: AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID and AZURE_SUBSCRIPTION_ID. These are fully enough to log in to Azure.
I hope that, together with the previous post, this helps you improve your own pipeline and provide a cool solution to your management. 😉
"How to prepare our CI/CD process?" – This could be the subtitle of this article. Why? Because I will show you how to start building a fully automated CI/CD process.
What is CI/CD? You can read about it on Wikipedia. Nevertheless, it is a very important and useful thing nowadays, when we work in a DevOps model.
In my scenario I would like to copy files from Git to Azure with Jenkins whenever a commit/push happens in my GitLab. As you can see, this is quite complex, therefore it's a good practice example.
It is important to know that the purpose of this post is to show you how you can integrate your GitLab and your Jenkins within a few minutes. (So we will use our personal git account to configure the connection, and we will connect Jenkins and Git over HTTPS, not SSH.) This means that, for testing purposes, we won't create a very secure integration. 😉
Here you have to install some plugins in Jenkins.
To be able to upload our files to Azure, we have to create a Service Principal which has enough privileges to do it.
# Check the relevant cloud where you want to log in (i.e. AzureGermanCloud, AzureCloud, AzureChinaCloud, ...)
az cloud set --name <name of Cloud>
# Please log in to your azure account
az login -u <useraccount>
# Select your subscription
az account set --subscription <subscription ID>
az ad sp create-for-rbac --name <Service Principal name in Azure. eg. JenkinsGitAzure-the1bithu> --query '{"client_id": appId, "secret": password, "tenant": tenant}'
Now set some credentials, such as Git and Azure.
(Screenshots: configuring the Git and Azure credentials and running the pipeline.)
Awesome… As you can see, it works. 🙂
Please kindly note that this is a very basic implementation. If you would like to use it in production, you have to configure impersonated accounts for the git connections and adjust the pipeline solution to your storage account related data. Additionally, an SSH-based integration could be better later.
As you can read in the subject, this is a huge step for Azure. For as long as I have been working with Azure, there was a feature I always missed and whose absence caused some inconveniences during VM administration: you had no console access to VMs, so when something happened during boot you were not able to manage it yourself. You could merely cross your fingers and wait for the login prompt.
And now a new era begins, because the Serial Console is here – in preview – for Linux and Windows VMs.
I suggest trying it, and if you have any observations you can share them with me or Microsoft to ensure this great feature reaches production with full functionality. You can leave feedback about this feature by clicking the Feedback button at the top of the screen. (You can see the open bugs there as well.)
When you click the Serial Console (Preview) button, you have to wait 1–2 minutes for initialization, and then it seems to stop. Here I can see a small bug – I think this is acceptable for now. When you hit Enter, it immediately asks for the password.
Of course, because you did not type an account name, you don't know whose password you should type here. So you simply hit another Enter, it says "Login incorrect", and then you can type the username. 🙂
Then, of course, you can log in with the right user and password.
I am sure this is a great step and a useful feature from Microsoft. I hope the Linux gurus can also appreciate this new function. My opinion of the Serial Console is absolutely positive.
I suggest testing it and opening bugs, because this is the best support for Microsoft and for you. 🙂
This week I would like to inform you about a bug in azure-cli 2.0.30 which can cause some inconveniences when you want to copy blobs in Azure storage accounts.
Some days ago I started to create a solution for copying files inside a storage account (this is related to a git pipeline solution), and I was facing an issue when I wanted to use the "az storage blob copy start" command with the "--sas-token" parameter. The command was quite simple:
az storage blob copy start --account-name mystorageacc --sas-token "?sv=2017-07-29&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-04T15:43:14Z&st=2018-04-03T23:43:14Z&spr=https&sig=***************" --destination-container "files" --source-container "files" --source-blob "new/arm-template.json" --destination-blob "archive/arm-template.json"
and I received the following error:
The specified resource does not exist.ErrorCode: CannotVerifyCopySource
<?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>The specified resource does not exist.
RequestId:fe725383-701e-0002-5aae-ccdd02000000
Time:2018-04-05T07:20:26.8836489Z</Message></Error>
Traceback (most recent call last):
File "/usr/lib64/az/lib/python2.7/site-packages/knack/cli.py", line 197, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 347, in execute
six.reraise(*sys.exc_info())
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 319, in execute
result = cmd(params)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 180, in __call__
return super(AzCliCommand, self).__call__(*args, **kwargs)
File "/usr/lib64/az/lib/python2.7/site-packages/knack/commands.py", line 109, in __call__
return self.handler(*args, **kwargs)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/__init__.py", line 420, in default_command_handler
result = op(**command_args)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/blob/baseblobservice.py", line 3032, in copy_blob
False)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/blob/baseblobservice.py", line 3102, in _copy_blob
return self._perform_request(request, _parse_properties, [BlobProperties]).copy
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/common/storageclient.py", line 354, in _perform_request
raise ex
AzureMissingResourceHttpError: The specified resource does not exist.ErrorCode: CannotVerifyCopySource
<?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>The specified resource does not exist.
RequestId:fe725383-701e-0002-5aae-ccdd02000000
Time:2018-04-05T07:20:26.8836489Z</Message></Error>
I started to check the possible reasons, but finally I registered an issue on Azure/azure-cli on Git for MS: az storage blob copy start issue when I use "sas token"
Some days later I received this answer:
@the1bit Thanks for bringing this to our attention.
#6041 will apply the sas token specified by --sas-token for the source as well as the destination and will be available in our next release.
For now, please use --source-sas to apply the same sas towards your source, as --sas-token currently only applies towards the destination.
I tested again, and then I was sure there was a bug in the code. I used this command:
az storage blob copy start --account-name mystorageacc --sas-token "?sv=2017-07-29&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-04T15:43:14Z&st=2018-04-03T23:43:14Z&spr=https&sig=***************" --destination-container "files" --source-container "files" --source-blob "new/arm-template.json" --destination-blob "archive/arm-template.json" --source-account-name mystorageacc --source-sas "?sv=2017-07-29&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-04T15:43:14Z&st=2018-04-03T23:43:14Z&spr=https&sig=***************" --debug
And the error was a new one:
AzureHttpError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.ErrorCode: CannotVerifyCopySource <?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Then I found a strange thing in the debug output:
'x-ms-copy-source': 'https://mystorageacc.blob.core.cloudapi.de/files/new/arm-template.json??sv=2017-07-29&ss=bfqt&srt=sco&sp=rwdlacup&se=2118-04-10T16:23:59Z&st=2018-04-10T08:23:59Z&spr=https&sig=4dCQpoGDZnHZY%2FCk0TXKXtH6I%2BzZP%2BTW2ZkErE1LgjQ%3D',
As you can see, there is a double '?' before the SAS token:
.json??sv=
Then I tested --source-sas without the '?':
--source-sas "sv=2017-07-29&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-04-04T15:43:14Z&st=2018-04-03T23:43:14Z&spr=https&sig=***************"
and the copy worked:
{
"completionTime": null,
"id": "077b06d7-d843-4fc3-ba8b-e48091753869",
"progress": null,
"source": null,
"status": "success",
"statusDescription": null
}
I sent this to MS, and some days later:
@the1bit I’ve raised a new issue for the bug you found: #6073
Thanks for finding this!
There is a bug in "az storage blob copy start" with the "--source-sas" parameter in azure-cli 2.0.30. I am sure they will fix it soon. Meanwhile, you can apply the workaround shown below:
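The main one, demonstrated above, is to pass the token to --source-sas with its leading '?' stripped; as a recap (account name and SAS values are placeholders):
az storage blob copy start --account-name mystorageacc --sas-token "?sv=...&sig=***" --source-account-name mystorageacc --source-sas "sv=...&sig=***" --source-container "files" --source-blob "new/arm-template.json" --destination-container "files" --destination-blob "archive/arm-template.json"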
I hope this helps you avoid some struggling until the fix arrives.
For as long as I have been working with Azure, one of the biggest problems has been connectivity across subscriptions. Although several features can get you there, such as Site-to-Site VPN and VNet-to-VNet peering, they have some serious limitations.
For me the most relevant is VNet-to-VNet peering, whose biggest limitation was the set of regions in which you could connect two subscriptions. I mean, you weren't able to create VNet peering between subscriptions in the US and in Europe without difficulties. Additionally, you cannot create VNet peering between a subscription in AzureCloud and a subscription in the AzureGermanCloud.
Luckily, the good news arrived at the end of this March from Microsoft: Global VNet Peering is now generally available.
This is awesome… 🙂
This was a hugely missed feature, and I feel this is the beginning of a bright future where we do not need to create VPN connections – which are far more expensive than VNet peering – between our worldwide subscriptions.
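For reference, a global peering between two VNets can be created with azure-cli roughly like this (a sketch of one direction only; all names and the remote VNet ID are placeholders):
# Peering must be configured from both sides; this creates one direction
az network vnet peering create --name eu-to-us --resource-group my-eu-rg --vnet-name my-eu-vnet --remote-vnet-id "/subscriptions/<subscription ID>/resourceGroups/my-us-rg/providers/Microsoft.Network/virtualNetworks/my-us-vnet" --allow-vnet-access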
Of course, at the moment this feature is available only in some regions, but I am sure the list will be expanded soon.
You can now peer across the following regions:
For more information and description please read the following article: Global VNet Peering is now generally available
Before you start replacing all of your existing VPN connections with VNet peering, please check the pricing of VNet peering.
Automation. It is a nice topic, and it becomes more important day by day in making our lives easier. There are several very good tools for automation, such as Puppet, Chef, and Ansible.
I would like to start a series covering several topics regarding Azure management with Ansible. This is the first article in the series. Here I provide some external articles as the fundamentals. Then I will add further topics, scenarios, case studies and examples with Ansible. Some of these articles will be published as Technical Thursday related articles, and some as standalone posts. 🙂
According to the official site: “Working in IT, you’re likely doing the same tasks over and over. What if you could solve problems once and then automate your solutions going forward? Ansible is here to help.”
Nevertheless, during my daily tasks I often meet Ansible-related topics and solutions. On the other hand, Azure offers some options for this. For more information you can read the official documentation from Microsoft here.
As I mentioned, you can find some basic articles about Ansible which are great fundamentals for starting to learn.
So all together it’s time to start learning Ansible…