This spring I wrote a post about CI/CD with Git, Azure, and Jenkins, where I showed you a simple CI/CD configuration. That post contained a chapter where we created a Service Principal in Azure for automation. Since then I have received many requests to make a demo about Service Principal creation, because it would be useful for you.
This summer, after some long days at the beach, I was looking at the clouds in the sky and thought I should start a new cloud-related adventure. A spark came into my mind: start learning AWS. Then: “Am I crazy? I have tons of Azure experience and I have never seen AWS before.” Finally, the answer was easy: “Why not? I have enough motivation, and it is a good opportunity to compare Azure and AWS.”
Step 1 – Goals
Therefore I set a goal. My goal was quite simple: become an AWS certified person within two months.
8–9 weeks: if I worked hard and stayed focused, that would be enough.
Then I started to collect the required training sources and materials for this adventure. I decided to use the online training portals most familiar to me: Udemy and Linux Academy. Although you can find several free materials there, you should accept that when you want deep knowledge in a new area, you have to invest money in your improvement.
Linux Academy
Luckily I was sponsored a little bit at Linux Academy due to my previous work, so I could use it almost for free. Normally it costs $100 for two months ($49/month), after which you can use all materials there without any limitation.
Udemy
The registration here is free and there are several free materials. Nevertheless, if you need a really good course, it usually costs between $10 and $30.
AWS training and certification
Amazon also gives you the opportunity to buy a practice exam for $20. I know this is not free and not a full exam (merely a slice of one), but in comparison, Microsoft doesn’t give you any opportunity to see real exam-style questions.
“Free” dumps?
When you try to find some free sources, you will see hundreds of dump-related pages. These pages offer free demos, 100% money-back guarantees, and other things that look good, but… who knows whether they are reliable. My personal suggestion is to skip these sources.
Step 3 – Choose right online trainings
First, we skip the “free” dumps for reliability reasons.
So let’s check the others. Although Udemy is a great learning portal, in my personal experience the AWS-related trainings there are less reliable. There are tons of great materials on Azure, Ansible, development, and so on, but the quality of a training strongly depends on the instructor. The good news is that you can also find some Linux Academy courses there. Hence I merely chose some practice tests from Udemy.
Accordingly, my main knowledge source was Linux Academy. Luckily, in the middle of this year they updated the “AWS Certified Solutions Architect – Associate” materials. If you are new to AWS, I strongly recommend starting with the AWS Essentials or AWS Concepts course at Linux Academy before you jump into the Architect training.
When you have chosen the right trainings, I suggest making a learning plan. The plan keeps your progress on track, and it is very efficient if you actually follow it. 🙂
You can do this manually, for example by deciding to learn 3–5 hours per day or by creating calendar events to allocate time for learning AWS. Alternatively, Linux Academy provides its great Course Scheduler function:
Step 5 – Learn and focus
We have trainings and a great plan, so nothing is left… let’s start learning.
The most important things to increase your efficiency during the learning period:
Get rid of multitasking and focus ONLY on the training during the time you spend with it.
Create an AWS account so you can build the resources and services the instructors show you.
Complete the hands-on labs included in the trainings.
Take notes/flash cards on key terms and services.
Build some services and scenarios in AWS related to your job role or your own ideas.
Step 6 – Practice, practice, practice
After you have watched all the videos and completed all the hands-on lab tasks, it’s time to check your knowledge and improve your chances of passing the exam. Practice tests help you organize your AWS knowledge.
Here you can find some useful practice tests where you can check your knowledge and get a better understanding of the real exam questions:
I spent more than two weeks on practice and on improving my understanding of the exam. I took each practice exam and test more than three times. My method: take the test – check the answers – fine-tune my knowledge – repeat.
I started learning AWS 7–8 weeks ago for several reasons:
Personal reasons
I wanted to compare Azure and AWS (I will publish some articles about this soon)
I wanted to widen my knowledge
AWS is the second biggest cloud provider nowadays
I wanted to know the capabilities of AWS
During this journey I came across the DevOps Essentials course at Linux Academy, which contains very useful material about DevOps tools. I am sure it will be useful for you as well:
In the middle of the summer we were informed about three brand-new exams for Azure administrators. As I mentioned in that article, this is a good opportunity for IT experts who need a role-based Azure certification. Nevertheless, this role-based approach seemed unusual for Microsoft, so we felt there would be some serious changes around Microsoft certifications, and here they are. Microsoft says: shake it up!
Three days ago the news came out: “The current Azure certification that have been providing the Azure focused core to the MCSA: Cloud Platform and MCSE: Cloud Platform and Infrastructure certification paths are going to be retired December 31, 2018. However, the MCSA and MCSE certifications are not being retired, but rather transformed instead.”
Affected certifications
This means the following exams will be retired by the end of this year:
70-532: Developing Microsoft Azure Solutions
70-533: Implementing Microsoft Azure Solutions
70-535: Architecting Microsoft Azure Solutions
Additionally, the existing (old) MCSA and MCSE certifications based on them are also being retired.
Good news or bad news?
Oogway (from Kung Fu Panda): Ah, Shifu. There is just news. There is no good or bad.
I guess this is a new career path for people who would like to choose an Azure exam according to their role.
Future mode of certification
What’s next? We simply have to get used to the new certification model and logo…
…and start preparing for the new role-based exams.
The following six job roles will have role-based certification paths soon:
Azure Administrator
Azure Developer
Azure Solutions Architect
Azure DevOps Engineer
Microsoft 365 Modern Desktop Administrator
Microsoft 365 Enterprise Administrator
For more details please read the following articles:
“Calling All Azure Administrators!” – This is the title of Microsoft’s blog post, where they provide a very good opportunity and a coupon for three brand-new Azure exams.
The target audience is the Azure administrator group. Please hurry up, because you have only a few weeks to apply the coupons. “This is NOT a private access code. This code is only valid for exam dates on or before August 9, 2018.”
Today is a milestone, because I will provide you a “real-life” solution for a “real-life” scenario with Ansible in Azure. Why is it so important? Since I started learning Ansible I have found several examples for different scenarios, but as far as I can tell, nobody has provided a really good solution for deploying a multi-NIC environment to Azure with Ansible. Therefore I did it, and I would like to share it with you.
A multi-NIC environment is an architecture where you can manage and use your services in a secure way.
Azure architecture for this solution
As you can see, there is another ingredient in this architecture: the Virtual Network and the NSGs are in a separate resource group inside your subscription. Why? Because they are “shared” resources, and this way we can use them for different services. Additionally, our architecture stays easy to understand and manage.
Virtual Machines in this scenario
According to the drawing above, we will create a simple architecture with the following VMs and roles:
Web servers: 2
DB server: 1
Notes:
Web servers have 2 NICs
DB server has only 1 NIC (in BackEnd subnet)
DB server does not have Public IP
Ansible package
This architecture is easily deployed with Ansible. Nevertheless, you have to be sure you use the right version of Ansible: although Ansible has supported Azure since version 2.4, most of the required functionality is quite new. The two main features became available thanks to my feature requests, because I faced some issues during the development of this solution. You can find these bugs here:
Once you have installed the right package on your computer, you can pull the required code from Git (201_multi_nic_vm).
# Navigate to git directory
cd /data/git
# Clone azansible from git
git clone https://github.com/the1bit/azansible.git
# Go to 201_multi_nic_vm solution directory
cd 201_multi_nic_vm
Configure azansible
Before you start the deployment you have to prepare it.
This week I would like to show you my latest automation solution for Azure: it can start a VM in Azure on a schedule.
Everybody knows the automatic VM shutdown feature (Microsoft.DevTestLab/schedules) in Azure, which debuted in 2016. I love using it for Azure developer servers because it saves me cost and time. Nevertheless, there is a small gap here: why can’t Azure start my developer machine when I arrive at the office?
Now I have made a “FIX” for this gap.
My Azure VM Manager solution starts your VM in Azure at the time you schedule. So when you arrive at your workplace, your VM is up and running every time. 🙂
Prerequisites
At the moment, v18.6.0 supports only Linux machines; specifically, I have only tested it on CentOS 7.
{
    "vmName": "<name of your vm>",
    "vmResourceGroup": "<vm resource group>",
    "azure": {
        "cloudName": "AzureCloud",
        "clientID": "<Service Principal ID>",
        "clientSecret": "<Service Principal Secret>",
        "tenant": "<Tenant ID>",
        "subscriptionID": "<Subscription ID>"
    }
}
Save configuration file
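Before saving, it can be worth sanity-checking that the file is valid JSON and contains every key the template above uses. A minimal sketch (the concrete values below are placeholders, not real credentials):

```python
import json

# Hypothetical sanity check for an azvmmanager-style config file.
# The key names come from the template above; the values are placeholders.
config_text = """
{
    "vmName": "dev-vm-01",
    "vmResourceGroup": "rg-dev",
    "azure": {
        "cloudName": "AzureCloud",
        "clientID": "00000000-0000-0000-0000-000000000000",
        "clientSecret": "<secret>",
        "tenant": "00000000-0000-0000-0000-000000000000",
        "subscriptionID": "00000000-0000-0000-0000-000000000000"
    }
}
"""

config = json.loads(config_text)  # raises ValueError if the JSON is malformed

required_top = {"vmName", "vmResourceGroup", "azure"}
required_azure = {"cloudName", "clientID", "clientSecret", "tenant", "subscriptionID"}

missing = sorted((required_top - config.keys()) |
                 (required_azure - config["azure"].keys()))
print("config OK" if not missing else "missing keys: %s" % ", ".join(missing))
```

If a key is missing you see it immediately, instead of debugging a failed login in the cron logs later.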
Configure crontab according to your update requirement
# Edit crontab settings
vim /etc/crontab
### Configure to start your vm at 9AM every weekdays
0 9 * * mon,tue,wed,thu,fri root cd /root/scripts/azvmmanager;bash azvmmanager.sh;
Wait for the scheduled time, then check the execution logs in the /var/log/azvmmanager directory
less /var/log/azvmmanager/azvmmanager20180614090001.log
part of log:
Thu Jun 14 09:00:29 CEST 2018 : # Login success
Thu Jun 14 09:00:29 CEST 2018 : # Set default subscription
Thu Jun 14 09:00:38 CEST 2018 : # Default subscription has been set
Thu Jun 14 09:00:38 CEST 2018 : # Start VM: xxxxxxxxx
Let’s check your VM status in Azure. You can see it is up and running…
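The log above shows the whole flow: log in with the Service Principal, set the default subscription, start the VM. That last step boils down to a single Azure CLI call, `az vm start`. A small sketch of assembling that call (the VM name and resource group here are placeholders, not values from my environment):

```python
# Sketch: build the `az vm start` call a scheduled script would issue.
# `az vm start --name ... --resource-group ...` is the real CLI syntax;
# the concrete values are placeholders.
def build_start_command(vm_name, resource_group):
    return ["az", "vm", "start",
            "--name", vm_name,
            "--resource-group", resource_group]

cmd = build_start_command("dev-vm-01", "rg-dev")
print(" ".join(cmd))
```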
Please do not hesitate to contact me if you have any questions or feedback about this solution or Azure. 🙂
In the world of clouds there are still home servers and on-premises servers working hard at their daily tasks for their owners. The people who own these machines often need to reach them over the internet. Because internet providers usually give their customers a dynamic IP, this can be a real challenge. Luckily, there are several good and free dynamic DNS sites where we can register a home server and reach it by DNS name over the internet. Here is a fairly fresh list of the most popular: 17 Popular Sites Like No-ip
I used No-ip, but I did not like the 30-day confirmation of my host there. I know, this is not a big deal. Additionally, I am interested in Azure, so the solution I would like to show you is a simple step in that direction.
I decided to build an Azure-based solution that can replace the No-ip client on my home server. It is now ready and stable enough for “PROD” usage.
And now… I would like to introduce an alternative dynamic DNS that works with an Azure DNS zone. Sounds good? Let’s see…
This solution updates your home server’s public IP dynamically. It is not 100% free: the monthly cost on a “Pay-As-You-Go” subscription is about 1 EUR. Additionally, you have to register a domain that you can use in Azure (you can do this in Azure itself).
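The core logic of such a dynamic DNS updater is simple: fetch the current public IP, compare it with the value last pushed to the Azure DNS zone, and only touch the record when they differ (the actual update would go through the Azure CLI, e.g. `az network dns record-set a add-record`). A minimal sketch of that decision logic; the zone, record, and IP values are purely illustrative:

```python
# Sketch of the update decision a dynamic DNS script makes on every run.
# In the real script, current_ip would come from an external IP lookup and
# recorded_ip from the Azure DNS zone; here they are passed in directly.
def needs_update(current_ip, recorded_ip):
    """Return True when the DNS record no longer matches reality."""
    return current_ip != recorded_ip

def plan_update(current_ip, recorded_ip, zone, record):
    if not needs_update(current_ip, recorded_ip):
        return "no change for %s.%s" % (record, zone)
    # Here the real script would shell out to the Azure CLI, e.g.:
    #   az network dns record-set a add-record -g <rg> -z <zone> -n <record> -a <ip>
    return "update %s.%s -> %s" % (record, zone, current_ip)

print(plan_update("203.0.113.10", "203.0.113.9", "example.com", "home"))
print(plan_update("203.0.113.10", "203.0.113.10", "example.com", "home"))
```

Skipping the CLI call when nothing changed keeps the cron job cheap and the DNS zone’s change history clean.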
Configure crontab according to your update requirement
# Edit crontab settings
vim /etc/crontab
### Configure to execute at 7AM and 7PM every day
0 7 * * * root cd /root/scripts/azdns;bash azdns.sh;
0 19 * * * root cd /root/scripts/azdns;bash azdns.sh;
Wait for the scheduled time, then check the execution logs in the /var/log/azdns directory
less /var/log/azdns/azdns20180614070001.log
part of log file:
...
Thu Jun 14 07:00:29 CEST 2018 : # Login success
Thu Jun 14 07:00:29 CEST 2018 : # Set default subscription
Thu Jun 14 07:00:38 CEST 2018 : # Default subscription has been set
Thu Jun 14 07:00:38 CEST 2018 : # Get current Puplic IP from Internet
...
This means you have your own dynamic DNS solution with Azure DNS Zone. I think this is quite cool…
Please do not hesitate to contact me if you have any questions or feedback about this solution or Azure. 🙂
The biggest news in the world right now: “Microsoft to buy GitHub for $7.5 billion”. Microsoft confirms it’s acquiring GitHub. You can read the official blog posts about this breaking news: A bright future for GitHub from Chris Wanstrath and Microsoft + GitHub = Empowering Developers from Satya Nadella.
When I heard this news, thousands of questions came up in my mind. I think this is good news, and I am quite excited about the future of GitHub with Microsoft. I am sure there are numerous people who are not so happy about it. (I hope they won’t delete their code from GitHub.) 🙂
“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” Microsoft CEO Satya Nadella said in a statement.
“We have been on a journey with open source, and today we are active in the open source ecosystem, we contribute to open source projects, and some of our most vibrant developer tools and frameworks are open source.” Satya Nadella added.
“We both believe GitHub needs to remain an open platform for all developers. No matter your language, stack, platform, cloud, or license, GitHub will continue to be your home—the best place for software creation, collaboration, and discovery.” Wanstrath said in his post.
“I’m extremely proud of what GitHub and our community have accomplished over the past decade, and I can’t wait to see what lies ahead. The future of software development is bright and I’m thrilled to be joining forces with Microsoft to help make it a reality.” Wanstrath wrote.
Nevertheless, in the next few weeks several topics will be clarified, and everyone should get reassuring news and information about GitHub’s future.
In March I started a series about Ansible. Now I would like to show you the first real code: a solution for creating Azure resources with Ansible. Since this is only the second part of the series, I will show a simple, easy-to-understand example that can also work in a live environment. Let’s start…
I hope you have read the previous article and now have a basic knowledge of Ansible.
Scenario
Our example for today is a solution that creates the following:
Resource Group
Virtual Network
FrontEnd subnet for Virtual Network
Network Security Group for FrontEnd subnet
Some network interface cards connected to the FrontEnd subnet
Simple, but it covers some real-life requests. 🙂
Prepare our Ansible computer
Before we start the real scripting, we have to install some packages on our system. We will use Ansible 2.5.x for our example.
Here we create a Service Principal in Azure and the credentials file for Azure access.
Create Service Principal
Login to Azure (and set the default subscription)
az cloud update
az cloud set -n AzureCloud
# Login with your account
az login -u <your username>
# Set the required subscription
az account set --subscription <subscriptionID>
Create the service principal for automation
az ad sp create-for-rbac --name Automation_ResourceManager --query '{"client_id": appId, "secret": password, "tenant": tenant}'
Please write down the “secret” and store it in a safe place, because you cannot retrieve it from the system again.
Required permission for this SP on the subscription: Contributor
You need Owner or Co-Administrator privileges on Azure to be able to create a Service Principal
Make credential file
When you execute a playbook, Ansible requires the Azure login data. For this we have to create a credentials file.
# Create directory
mkdir ~/.azure
# Create the azure file for credential
vim ~/.azure/credentials
[default]
subscription_id=53455...
client_id=7f37...
secret=ft56...
tenant=987d...
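Ansible’s Azure modules read this INI-style file automatically. As a quick sanity check, you can parse it yourself; a sketch (Python 3, with the placeholder values from above inlined as a string):

```python
import configparser

# Sketch: validate the contents of ~/.azure/credentials.
# The truncated values are the placeholders from the example above.
credentials_text = """
[default]
subscription_id=53455...
client_id=7f37...
secret=ft56...
tenant=987d...
"""

parser = configparser.ConfigParser()
parser.read_string(credentials_text)

required = ("subscription_id", "client_id", "secret", "tenant")
missing = [key for key in required if not parser.has_option("default", key)]
print("credentials complete" if not missing else "missing: %s" % ", ".join(missing))
```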
Ansible solution
Ansible files
Regarding Ansible, there are two very important groups of files.
Inventory files: Contain all parameters for our solution or Azure environment. (more information)
hosts: this is the main file which contains the basic parameters and the basic groups. This is an INI-like file.
all.yml: Contains the global variables for plays
other YAML files whose names match the group names in the hosts file; these contain the group-related information.
Playbooks: Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want to enforce. (more information)
Create Inventory files
Create these files in the inventory directory.
hosts
As you read above, this is an INI-like file, and it is where we define the groups we would like to deploy in Azure.
[vnet]: In our example we will deploy the Virtual Network related resources (VNet, subnet, NSG), therefore we put a [vnet] group here. Because all this information will be used later, we manage the Virtual Network parameters as global variables (so we will define them later). Here we just put a vnet value, which is a suffix for our Virtual Network name.
[vms]: As I mentioned above, we will deploy some NICs, so this group could be named [nics], but in the near future I will show you a whole VM deployment, so this group name is fine for us. In this group we define the names of the VMs (which will be part of the NIC names) and the NICs’ FrontEnd IPv4 addresses.
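To make this concrete, a hypothetical hosts file for this scenario could look like the following. The group names match the text above; the VM names, the frontend_ip variable name, and the addresses are purely illustrative, not the repository’s actual file:

```ini
; inventory/hosts - illustrative sketch
[vnet]
vnet

[vms]
cust-01 frontend_ip=10.0.1.4
cust-02 frontend_ip=10.0.1.5
cust-03 frontend_ip=10.0.1.6
```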
group_vars/vms.yml
This is a [vms]-group-specific file where we could define variables for [vms]-related activities. In our example it is empty:
---
group_vars/all.yml
This file contains the global variables for plays and playbooks. Therefore we define here all global variables and the [vnet]-group-related parameters.
Of course you can create some other files and variables…
Create Playbooks
And now we will create the playbooks for the different steps. For this scenario a single file would be enough; nevertheless, splitting them this way keeps things easy to understand.
The parameters from the inventory files are used somewhat like in an MVC app. When you want to use the location variable from all.yml, you can do it this way: "{{location}}"
And if you want to use a value from the first column of the hosts file (where there is no variable name), such as “cust-03” from the [vms] group, you can do it this way: "{{inventory_hostname}}"
connection: where we would like to execute this play. We will do it on this machine, so we use local
vars: here you can create custom variables or create variables from concatenated variables
tasks: tasks in this play (we have 1 task in this play, but this task will be executed multiple times according to the number of lines in [vms] group in hosts file)
Create virtual network interface card for FrontEnd subnet
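As an illustration, a task like the one named above can be written with Ansible’s azure_rm_networkinterface module. This is a hedged sketch, not the repository’s exact playbook; the variable names (resource_group, vnet_name, frontend_ip) follow the inventory conventions described above and are assumptions:

```yaml
# Illustrative sketch: one FrontEnd NIC per host in the [vms] group.
- hosts: vms
  connection: local
  tasks:
    - name: Create virtual network interface card for FrontEnd subnet
      azure_rm_networkinterface:
        resource_group: "{{ resource_group }}"
        name: "{{ inventory_hostname }}-nic-fe"
        virtual_network_name: "{{ vnet_name }}"
        subnet_name: FrontEnd
        private_ip_address: "{{ frontend_ip }}"
        private_ip_allocation_method: Static
```

Because the play runs against the [vms] group, this single task is executed once per host, producing one NIC per VM.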
We are ready for execution.
Execute playbooks
Only one step left: execute the playbooks.
Make sure you are outside the inventory and playbooks directories, then execute the following commands:
after -i you have to put the root directory of the inventory files
then you put the path of the network playbook
Result:
Final result in Azure:
As you can see, with some simple configuration you can do quite cool things. Nevertheless, this is merely a foundation for your future with Ansible.
About a month ago I wrote a post about a bug in azure-cli 2.0.30. That bug affected some AMArETTo-related functions and features. As I forecast, and as Microsoft promised me, the fix is here. This is really good news. Today I will show you how the fix works, then I will provide a collection of materials affected by it.
Last week I showed you how to integrate Git and Jenkins. In that post I did not provide the script part for the Azure-related operations. Today I would like to show it.
In Step 4.4.5 we configured a file located in our Git repository (pipeline/Jenkinsfile). This file is the “link” that calls an upload-to-Azure script. I know what you’re asking: how?
First, I have good news: AMArETTo supports these operations from v0.0.2.9. AMArETTo is available on Git and on PyPI. 🙂
This is the best position for you to create a cool automation solution at your company.
And now let’s see how we can add the Azure functionality to our Jenkins pipeline.
This step is quite easy, because we merely have to follow the installation steps for AMArETTo.
# Install from bash
sudo pip install amaretto
Step 2: Create a Python script which calls AMArETTo
In this step we will create a small Python script which executes the upload function from AMArETTo.
Create the uploadtoazure.py file in the pipeline directory under your GitLab project’s root.
pipeline/uploadtoazure.py
Write a short script which gets some external parameters:
#!/usr/bin/python
# import amaretto
import amaretto
from amaretto import amarettostorage
# import some important packages
import sys
import json
# Get arguments
fileVersion = str(sys.argv[1])
storageaccountName = str(sys.argv[2])
sasToken = str(sys.argv[3])
filePath = str(sys.argv[4])
modificationLimitMin = str(sys.argv[5])
print "--- Upload ---"
uploadFiles = amaretto.amarettostorage.uploadAllFiles(fileVersion = fileVersion, storageaccountName = storageaccountName, sasToken = sasToken, filePath = filePath, modificationLimitMin = modificationLimitMin)
try:
result = json.loads(uploadFiles)
print "--- Upload files' result: '{0}' with following message: {1}".format(result["status"], result["result"])
except Exception:
print "--- Something went wrong during uploading files."
print "-----------------------------"
Create the Jenkinsfile in the pipeline directory under your GitLab project’s root.
pipeline/Jenkinsfile
Write a valid, lightweight Jenkinsfile which calls our uploadtoazure.py with the right parameters.
pipeline {
agent any
environment {
FILE_VERSION = "1.0.0.0"
AZURE_SA_NAME = "thisismystorage"
AZURE_SA_SAS = "?sv=..."
FILE_PATH = "./upload/"
MODIFICATION_LIMIT_IN_MINUTES = "30"
}
stages {
stage('Build') {
steps {
withCredentials([azureServicePrincipal('c66gbz87-aabb-4096-8192-55d554565fff')]) {
sh '''
# Login to Azure with ServicePrincipal
az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET --tenant $AZURE_TENANT_ID
# Set default subscription
az account set --subscription $AZURE_SUBSCRIPTION_ID
# Execute upload to Azure
python pipeline/uploadtoazure.py "$FILE_VERSION" "$AZURE_SA_NAME" "$AZURE_SA_SAS" "$FILE_PATH" "$MODIFICATION_LIMIT_IN_MINUTES"
# Logout from Azure
az logout --verbose
'''
}
}
}
}
}
Let me explain the Jenkinsfile. As you can see, there is an unfamiliar part above the bash code: withCredentials(). This comes from Jenkins, and it provides the Azure Service Principal data for our storage account (this was configured in Step 2 of last week’s post). When you use this credential, you get pre-populated variables containing the related values, such as AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID and AZURE_SUBSCRIPTION_ID. These are fully enough to log in to Azure.
Step 3: Push files to Git
Finally, we have to push these files to our Git repository.
Then press the “Build Now” button in Jenkins
And check the result 🙂
I hope that, together with the previous post, this helps you improve your own pipeline and provide a cool solution to your management. 😉
“How to prepare our CI/CD process?” – This could be the subtitle of this article. Why? Because I will show you how you can start building a fully automated CI/CD process.
What is CI/CD? You can read about it on Wikipedia. It is a very important and useful practice nowadays, when we work in a DevOps model.
Scenario
In my scenario I would like to copy files from Git to Azure with Jenkins whenever a commit/push happens in my GitLab. As you can see, this is reasonably complex, so it’s a good practice example.
Important to know: the purpose of this post is to show you how to integrate your GitLab and your Jenkins within a few minutes. (We will use our personal Git account to configure the connection, and we will connect Jenkins and Git over HTTPS, not SSH.) This means that, for testing purposes, we won’t create a very secure integration. 😉
Integration
Assumptions and prerequisites
You have a Jenkins environment for automation (a generally used tool; installation steps are here).
You have a configured GitLab environment.
You configure your environment only for testing purposes. Otherwise, you should use different parameters and SSH keys during configuration, from a security point of view.
You store your application in Git.
You have configured the pipeline solution in your Git.
You have an Azure subscription with owner privileges.
Step 1: Configure Jenkins
Here you have to install some plugins to Jenkins.
Login to your Jenkins server
Navigate to “Manage Jenkins > Manage Plugins”
From the “Available” tab, find and select the following plugins:
GitLab Plugin
Azure CLI Plugin
Azure Credentials
Click the “Download now and install after restart” button to download it.
Once the plugin has been downloaded, click the “Restart Jenkins…” checkbox and wait for Jenkins to restart.
When Jenkins has restarted, navigate to “Manage Jenkins > Configure System”
Find the “Git plugin” section and configure Git basic values
Save configuration
Step 2: Service Principal in Azure
To be able to upload our files to Azure, we have to create a Service Principal which has enough privileges to do so.
Login to a computer where the Azure-Cli 2.0 is installed
Login to the subscription where you would like to create the Service Principal
# check relevant cloud infra where you want to login (i.e. AzureGermanCloud, AzureCloud, AzureChinaCloud, ...)
az cloud set --name <name of Cloud>
# Please login to your azure account
az login -u <useraccount>
# Select your subscription
az account set --subscription <subscription ID>
Create a Service Principal with the following command
az ad sp create-for-rbac --name <Service Principal name in Azure. eg. JenkinsGitAzure-the1bithu> --query '{"client_id": appId, "secret": password, "tenant": tenant}'
Copy these values to a safe place, because we will use them in Jenkins!
Step 3: Credentials in Jenkins
Now set some credentials such as Git, Azure.
Navigate to “Credentials > System”
Choose the “Global credentials” domain
Click on “Add credentials” button
Create GitLab user credential
Kind: Username with password
Username: <your git username>
Password: <your git password>
ID: <An internal unique ID by which these credentials are identified from jobs and other configuration. Normally left blank, in which case an ID will be generated, which is fine for jobs created using visual forms. Useful to specify explicitly when using credentials from scripted configuration. >
Then click OK button
Create Azure Service Principal credential (we need the data from Step 2)
Client ID: <Azure Service Principal Client ID>
Client Secret: <Azure Service Principal Client Secret>
Tenant ID: <Azure Service Principal Tenant ID>
Azure Environment: <choose one according to your subscription location>
ID: <a unique internal ID by which these credentials are identified from jobs; normally left blank, in which case one is generated>
Then click OK button
Create Azure Storage Account SAS token credential (our pipeline solution requires the SAS token, so we have to store it somewhere securely)
Kind: Secret text
Secret: <paste here the SAS token for storage account. It begins with ‘?sv=’>
ID: <e.g. sasTokenAzure; a unique internal ID by which these credentials are identified from jobs; normally left blank, in which case one is generated>
Then click OK button
Create Azure Storage Account Account Key credential (our pipeline solution requires the account key, so we have to store it somewhere securely)
Kind: Secret text
Secret: <paste here the Account Key for storage account.>
ID: <e.g. storageKeyAzure; a unique internal ID by which these credentials are identified from jobs; normally left blank, in which case one is generated>
Then click OK button
Step 4: Create Jenkins project
Click on ‘New Item’
Type a name into the “Enter an item name” field, choose “Pipeline project”, then click the OK button
Build Triggers
Tick “Build when a change is pushed to GitLab. GitLab webhook URL:”
Click on “Advanced” button
Choose “Filter branches by name” and write your branch name in the include field. If you receive an error, please ignore it; it will be fixed when you integrate your project with Git (in Step 5).
Then click the “Generate” button to generate a token for this project.
Pipeline
Select “Pipeline script from SCM” as the definition
SCM: Git
Repositories
Paste the clone URL into “Repository URL”. (eg. https://gitlab.com/*****/*****.git)
Credentials: choose the credential which was created in Step 3.4.
Branches to build: put your branch here instead of master. (eg. */master)
Script Path (where the Jenkins file is stored in Git): pipeline/Jenkinsfile
Click Save
Click “Build Now” to test it from Jenkins
Check Console output
Step 5: Integrate GitLab with Jenkins project
Login to GitLab
Step into your project
Navigate to “Settings > Integrations”
Paste the Jenkins project URL (Step 4.3.1) and Token (Step 4.3.4) to the first two fields
Choose the required Triggers
Uncheck the “Enable SSL verification” (if you use self-signed certificate on Jenkins)
Click “Add webhook” button
Scroll down and find your newly created webhook in the middle of the screen
Click “Push events” in the dropdown menu under the “Test” button.
If you receive an HTTP 200 message with a blue background, the integration was successful
Step 6: Test integration
Modify your project and commit that to this branch
Open Jenkins and check the project
Status after start by Git
Changes where you can see the commit message
Check Console
Awesome…As you can see it works. 🙂
Please note that this is a very basic implementation. If you would like to use it in production, you have to configure dedicated accounts for the Git connections, and you have to configure the pipeline solution according to your storage account data. Additionally, an SSH-based integration would be better later on.
As you can read in the subject this is a huge step in the last period in Azure. Since I have been working with Azure there was a feature which always missed and caused some inconviniences during VMs administration. You have no console access to VMs so when something happened during the boot you were not able to manage by yourself. Merely you could cross your fingers and wait for the login prompt.
And now a new era begins, because Serial Console is here – in preview – for Linux and Windows VMs.
I suggest trying it, and if you have any observations you can share them with me or Microsoft to ensure this great feature reaches production with full functionality. You can leave feedback about this feature by clicking the Feedback button at the top of the screen. (You can see the open bugs there as well.)
First impression
Username prompt is hidden
When you click on the Serial Console (Preview) button you have to wait 1-2 minutes for initialization, then it seems to stop. Here I can see a small bug – I think it is acceptable for now: when you hit Enter, it immediately asks for the password.
Of course, because you did not type an account name, you don't know which password to type here. So you simply hit Enter again, it says “Login incorrect”, and then you can type the username. 🙂
Then of course you can login with the right user and password.
Notes
The console works correctly. Not the best, but this is only a preview. 🙂
You can do everything you want. Nevertheless, the copy/paste operations are not too comfortable.
The <End> key sometimes works, sometimes not.
The WALinuxAgent sometimes loses the connection with the console.
Summary
I am sure this is a great step and a useful feature from Microsoft. I hope the Linux gurus can also appreciate this new function. My opinion of Serial Console is absolutely positive.
I suggest testing it and opening bugs, because that is the best way to support both Microsoft and yourself. 🙂
This week I would like to inform you about a bug in azure-cli 2.0.30 which can cause some inconveniences when you want to copy blobs in Azure storage accounts.
Some days ago I started to create a solution for copying files inside a storage account (related to a Git pipeline solution) and I faced an issue when I wanted to use the “az storage blob copy start” command with the “--sas-token” parameter. The command was quite simple:
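The original command is not preserved in this post; reconstructed with placeholder account, container and blob names, a copy within one storage account looked roughly like this:

```shell
# Placeholder names; the failing part was passing the SAS via --sas-token
az storage blob copy start \
    --account-name mystorageaccount \
    --destination-container destcontainer \
    --destination-blob myblob \
    --source-container srccontainer \
    --source-blob myblob \
    --sas-token "<sas-token>"
```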
The specified resource does not exist.ErrorCode: CannotVerifyCopySource
<?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>The specified resource does not exist.
RequestId:fe725383-701e-0002-5aae-ccdd02000000
Time:2018-04-05T07:20:26.8836489Z</Message></Error>
Traceback (most recent call last):
File "/usr/lib64/az/lib/python2.7/site-packages/knack/cli.py", line 197, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 347, in execute
six.reraise(*sys.exc_info())
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 319, in execute
result = cmd(params)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 180, in __call__
return super(AzCliCommand, self).__call__(*args, **kwargs)
File "/usr/lib64/az/lib/python2.7/site-packages/knack/commands.py", line 109, in __call__
return self.handler(*args, **kwargs)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/cli/core/__init__.py", line 420, in default_command_handler
result = op(**command_args)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/blob/baseblobservice.py", line 3032, in copy_blob
False)
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/blob/baseblobservice.py", line 3102, in _copy_blob
return self._perform_request(request, _parse_properties, [BlobProperties]).copy
File "/usr/lib64/az/lib/python2.7/site-packages/azure/multiapi/storage/v2017_07_29/common/storageclient.py", line 354, in _perform_request
raise ex
AzureMissingResourceHttpError: The specified resource does not exist.ErrorCode: CannotVerifyCopySource
<?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>The specified resource does not exist.
RequestId:fe725383-701e-0002-5aae-ccdd02000000
Time:2018-04-05T07:20:26.8836489Z</Message></Error>
@the1bit Thanks for bringing this to our attention. #6041 will apply the sas token specified by --sas-token for the source as well as the destination and will be available in our next release.
For now, please use --source-sas to apply the same sas towards your source, as --sas-token currently only applies towards the destination.
I tested again, and then I was sure there was a bug in the code. I used this command:
AzureHttpError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.ErrorCode: CannotVerifyCopySource
<?xml version="1.0" encoding="utf-8"?><Error><Code>CannotVerifyCopySource</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
@the1bit I’ve raised a new issue for the bug you found: #6073
Thanks for finding this!
Workaround
There is a bug in “az storage blob copy start” with the “--source-sas” parameter in azure-cli 2.0.30. I am sure they will fix it soon. Meanwhile you can apply the following workarounds:
use your SAS token in the “--source-sas” parameter without the leading ‘?’
use the storage account key instead of a SAS token.
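Sketched with placeholder resource names, the two workarounds could look like this:

```shell
# Workaround 1: SAS token in --source-sas without the leading '?'
az storage blob copy start \
    --account-name mystorageaccount \
    --destination-container destcontainer \
    --destination-blob myblob \
    --source-container srccontainer \
    --source-blob myblob \
    --source-sas "sv=2017-07-29&ss=b&sig=<signature>"

# Workaround 2: storage account key instead of a SAS token
az storage blob copy start \
    --account-name mystorageaccount \
    --account-key "<storage-account-key>" \
    --destination-container destcontainer \
    --destination-blob myblob \
    --source-container srccontainer \
    --source-blob myblob
```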
I hope this helps you avoid some struggling until the fix arrives.
Since I started working with Azure, one of the biggest problems has been connectivity across subscriptions. Although you can use several features to achieve this, such as Site-to-Site VPN and VNet-to-VNet peering, they have some serious limitations.
For me the most relevant is VNet-to-VNet peering, whose biggest limitation is the set of regions between which you can connect two subscriptions. I mean you were not able to create VNet peering between subscriptions in the US and Europe without difficulties. Additionally, you cannot create VNet peering between a subscription in AzureCloud and a subscription in the AzureGermany cloud.
This was a hugely missed feature, and I feel this is the beginning of a bright future where we do not need to create VPN connections – which are far more expensive than VNet peering – between our worldwide subscriptions.
Of course, at the moment this feature is available only in some regions, but I am sure the list will be expanded soon.
Automation. It is a nice topic, and it becomes more important day by day in making our lives easier. There are several very good tools for automation, such as Puppet, Chef and Ansible.
I would like to start a series which covers several topics regarding Azure management with Ansible. This is the first article in the series. Here I provide some external articles as knowledge fundamentals; later I will provide additional topics, scenarios, case studies and examples with Ansible. Some of these articles will be published as Technical Thursday articles and some as standalone posts. 🙂
Why Ansible?
According to the official site: “Working in IT, you’re likely doing the same tasks over and over. What if you could solve problems once and then automate your solutions going forward? Ansible is here to help.”
Nevertheless, in my daily work I often meet Ansible-related topics and solutions. On the other hand, Azure offers some options for this. For more information you can read the official documentation from Microsoft here.
Ansible articles
As I mentioned, you can find some basic articles on Ansible which are great fundamentals to start learning from.
This week I would like to show you a subtle but important thing in Go(lang) which answers some questions and solves some problems if you have just started developing in Go.
When I started this – in parallel with Python – I was confused and had some concerns about this language. (I still have some concerns…)
First of all, I would like to note that this topic is not a brand new story, and I am sure you can find many articles on the internet in this area. Additionally, there is excellent documentation for Go.
We have a main file which will use our modules. Our modules contain many useful and reusable functions for different goals. We would not like to put every single function into one file, because that is neither professional nor efficient for the future.
2. Import section
This contains the basic packages we would like to use during our implementation.
import (
"encoding/json"
"fmt"
)
Here we use the JSON encoding package and “fmt”.
3. Result section – struct
We have to define a struct for result structure for JSON.
// FResult is a type of function results
type FResult struct {
    Status  string `json:"status"`
    Message string `json:"message"`
}
4. Functions section
Now this solves our business requirement, which is a simple function returning a JSON-based result for a string input.
// GetJSONResult for json management
func GetJSONResult(inputStr string) FResult {
    result, err := json.Marshal(FResult{
        Status:  "success",
        Message: fmt.Sprintf("%s", inputStr),
    })
    if err != nil {
        panic(err)
    }
    var f FResult
    err = json.Unmarshal(result, &f)
    if err != nil {
        panic(err)
    }
    return f
}
This expects a string and returns an “FResult”-typed result. Of course it is not that simple: before returning, you have to fill the FResult struct with the related data, check the errors, then convert the whole object to and from JSON. (I know this is not a real scenario.) The trick here is the following:
Use json.Marshal, because Marshal returns the JSON encoding
Then use json.Unmarshal, because Unmarshal parses the JSON-encoded data and stores the result in the value pointed to by its second argument. If that value is nil or not a pointer, Unmarshal returns an InvalidUnmarshalError.
Check the errors during conversion
Finally, it returns the required result: return f
Now we can use it from main.go
main.go
This is our main file with the main function which will call our core related function.
1. Package section
Package name
package main
2. Import section
This contains the basic packages we would like to use during our implementation.
import (
"firstgo/lib"
"fmt"
)
IMPORTANT: To use an external library or function, you have to import its path relative to the “src” directory. So if your project is in the “firstgo” directory and your module files are inside the “lib” directory, you have to import “firstgo/lib”, even though your main file is also inside the “firstgo” directory.
3. Functions section
You have to define a main function, which is executed when you run your program. Here we put the module call.
Because we imported the “firstgo/lib” directory and our “core.go” module is inside it, we can use our module as in other languages. I mean: <module>.<function>(<parameter list>)
As you can see, I put the function result (which is a JSON object) into a variable, so we can check it immediately without any further conversion.
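For illustration, here is the whole flow collapsed into a single runnable file; in the real project the struct and GetJSONResult live in lib/core.go (as package lib) and main.go imports firstgo/lib, so treat this merging as a sketch:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FResult is a type of function results
type FResult struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

// GetJSONResult builds an FResult, round-trips it through JSON and returns it
func GetJSONResult(inputStr string) FResult {
	result, err := json.Marshal(FResult{
		Status:  "success",
		Message: inputStr,
	})
	if err != nil {
		panic(err)
	}
	var f FResult
	if err := json.Unmarshal(result, &f); err != nil {
		panic(err)
	}
	return f
}

func main() {
	// In the real project this call would be lib.GetJSONResult(...)
	res := GetJSONResult("Hello from core")
	fmt.Println(res.Status, res.Message)
}
```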
Outputs and output formatting… this is a quite interesting topic during your development activities. Whenever you create a function, you want to provide reusable code with excellent outputs… but how can you achieve this?
Maybe you feel this is a simple question with a simple answer. Nevertheless, some weeks later you realize it is a slightly more complex area: when you choose a result type or solution, there are several directions you can take.
True/False
Simple string
Nothing 🙁
Result code (0, 1, …)
JSON
My personal choice is JSON, because it is easy to manage and you can provide a lot of important information for reuse through it. For example, Azure CLI always returns a JSON object which contains the most important information about the current activity. Moreover, a JSON object is easily managed by Python.
And now I will show some useful examples of JSON output management in Python.
0. Use JSON in python
JSON management is very simple in Python. You merely need to import the json module.
# Import JSON module
import json
# Define JSON format string
mystring = '{"name": "Python", "version": "2.7"}'
# Convert string to JSON
myjson = json.loads(mystring)
# Use JSON data
print "{0}".format(myjson["name"])
1. Return from string
When you have a string in JSON format in your function and you would like to use it as a JSON object you need to convert the string to JSON.
# Define JSON format string
mystring = '{"name": "Python", "version": "2.7"}'
# my Function
def myFunction(inputString):
    # Import JSON module
    import json
    # Try to convert to JSON
    try:
        # Convert string to JSON
        myjson = json.loads(inputString)
        # Return with result
        return myjson["name"]
    except ValueError:
        # Error handling
        return False

# Call function
myFunction(mystring)
To convert string to JSON object you can use the “json.loads()” method.
2. Return from JSON
Sometimes you have a JSON object and you would like to return it from your function in string format.
# Define JSON format string
mystring = '{"name": "Python", "version": "2.7"}'
# my Function
def myFunction(inputString):
    # Import JSON module
    import json
    # Try to convert to JSON
    try:
        # Convert string to JSON
        myjson = json.loads(inputString)
        # Return with result as string
        return json.dumps(myjson)
    except ValueError:
        # Error handling
        return False

# Call function
myFunction(mystring)
To convert JSON object to string you can use the “json.dumps()” method.
3. Return custom string
Finally you can create string output which is built by you.
# my Function
def myFunction(inputString):
    # Return with result
    return '{"status": "success", "message": "%s"}' % (inputString)

# Call function
myFunction("This is a string")
Note that here you do not need to import the json module to create JSON-formatted output.
Of course, you can combine them and use other formats… let's try them!
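Combining the approaches, here is a small sketch (the function and field names are my own): it builds a custom JSON string and returns it parsed, so callers get a real JSON object instead of raw text:

```python
import json

# Hypothetical helper (my own naming): build a custom JSON string,
# then return it parsed as a dict so callers get a real JSON object.
def make_result(message):
    # Naive formatting: assumes the message contains no quote characters
    raw = '{"status": "success", "message": "%s"}' % message
    try:
        return json.loads(raw)
    except ValueError:
        return False

result = make_result("This is a string")
print(result["status"])  # prints: success
```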
When you have many Python scripts with several very useful features inside them, maybe you start thinking: can I put a web UI over them? Then you realize you are not a web developer and have never built a web UI. So you feel you either have to spend hundreds of hours learning web development or skip the idea.
I was in this situation, then I found Bottle and created a website over my Python scripts. I know there are other ways to do it, such as Django or Flask, but for the very first usage Bottle is a perfect choice.
In this article I would like to give you a “getting started” guide for Bottle installation and configuration. (More or less this is a link collection.)
Basics
When I started this I had three things in my mind:
I use Visual Studio for development
I would like to use it from Azure Web App
I would like to use it from a standalone CentOS Linux
First of all, we are in a very lucky situation, because there are several articles on the internet about this topic. Please note that I focus on Visual Studio-related usage. 🙂
Then there is a page for Python web application project templates. (That article contains the very useful information that, to install Python on App Service, MS recommends using the site extensions. These extensions are copies of the official releases of Python, optimized and repackaged for Azure App Service.)
Note:
– Azure generally uses Windows machines under App Services, so your web application will be executed by IIS
– You have to install the Python extension under the App Service where you would like to host your UI
– Bottle uses the MVC model
I suggest using different names for the GET- and POST-related functions. For example, for an upload feature use def upload(): for GET and def do_upload(): for POST. Why? This way you can call your upload (GET) function directly from another function.
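A sketch of that naming convention with Bottle (the route path and the form field name are illustrative, not from the original post):

```python
from bottle import route, request, run

@route('/upload', method='GET')
def upload():
    # GET handler: render the upload form; because it is a plain function,
    # other handlers can call upload() directly to re-render the form
    return ('<form method="POST" enctype="multipart/form-data">'
            '<input type="file" name="data" /><input type="submit" /></form>')

@route('/upload', method='POST')
def do_upload():
    # POST handler: process the submitted file
    data = request.files.get('data')
    if data is None:
        return upload()  # no file sent: show the form again
    return "Uploaded: %s" % data.filename

# run(host='localhost', port=8080)  # start the development server
```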