Creating, deployment & customizing Linux VMs with Python & Chef – Part 1


Recent surveys and trends show Python to be one of the fastest-emerging and most preferred scripting languages, especially for DevOps (reference: http://redmonk.com/sogrady/2015/01/14/language-rankings-1-15/).

Through this two-part series we will look at using Python to set up and create Linux virtual machines on Azure, customizing them through Custom Script Extensions (Part 1) and Chef VM Extensions (Part 2).

To cover as many features as possible, we will create a MySQL Percona cluster: we will set up three VMs on a virtual network and, using custom script extensions, install MySQL through a Chef cookbook. But before we begin, we need to ensure the following pre-requisites are in place. There is a lot of information available on MSDN and the Internet in general; I have added some of the references I found useful to get you started quickly.

Pre-requisites

Throughout this post I have referenced various code snippets; the complete code can be found in the GitHub repository (https://github.com/shwetams/python_azure_iaas). I recommend downloading it and referring to the code files as we go through the following steps.

 

Step 1: Setup Azure account

 

You need an Azure account before we begin. If you don't already have one, you can create it at the Azure portal.

Step 2: Create a Virtual Network

The VMs this script creates use a Vnet, since all the nodes in a MySQL cluster need to be on the same virtual network. It is also good practice to have all your VMs connected over a single virtual network.

Please use the steps described in the link to create a Vnet. The link also has references for creating different kinds of networks (site-to-site, point-to-site, etc.); please choose the one appropriate to your requirement.

If you want to use Python to create the Vnet, you can do so by calling the REST API as explained in the MSDN link. The following is the sample virtual network config file I used for setting up the three VMs.

https://github.com/shwetams/python_azure_iaas/blob/master/Vnet_Config_File.xml

It is, however, much quicker to simply create it from the portal as described earlier, since setting up the virtual network is a one-time activity in most cases.
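If you would rather script the Vnet too, the classic "Set Network Configuration" operation is a single PUT of the XML config file above, authenticated with the management certificate. The sketch below is illustrative and not from the repository; the helper names and the API version value are assumptions, so verify them against the MSDN reference before use:

```python
import http.client
import ssl

def build_set_network_request(subscription_id, config_xml):
    """Build host, path and headers for the classic Set Network
    Configuration PUT request (API version is an assumption)."""
    host = "management.core.windows.net"
    path = "/%s/services/networking/media" % subscription_id
    headers = {
        "x-ms-version": "2014-05-01",   # assumed; check the MSDN reference
        "Content-Type": "text/plain",   # this operation takes the raw XML body
    }
    return host, path, headers

def set_network_configuration(subscription_id, config_xml, cert_pem, key_pem):
    """Send the PUT using the management certificate .pem files."""
    host, path, headers = build_set_network_request(subscription_id, config_xml)
    context = ssl.create_default_context()
    context.load_cert_chain(certfile=cert_pem, keyfile=key_pem)
    conn = http.client.HTTPSConnection(host, context=context)
    conn.request("PUT", path, body=config_xml, headers=headers)
    return conn.getresponse()
```

A successful call returns an x-ms-request-id header that can be polled for completion, the same way the VM creation requests are tracked later in this post.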

Step 3: Create storage account & upload files

The shell script to execute must be stored in an Azure blob storage account so that the Azure VM agent can access and download it for execution.

You can create a storage account by following these steps on the portal, or simply run the Python script percona_cluster_setup.py and enter "sa" when prompted. The createstorage() function will create the storage account after taking the required parameters as input. The following is an extract from the script.

    storage_acc_name = ""
    storage_replication = ""
    storage_location_name = ""
    # Standard_LRS, Standard_ZRS, Standard_GRS, Standard_RAGRS
    isStorageAccCreated = False
    if storage_acc_name == "":
        storage_acc_name = str(input("Enter a storage account name. Storage account names must be between 3 and 24 characters in length and use numbers and lower-case letters only... "))
        isBoolStorageAccountNameUnique = False
        while isBoolStorageAccountNameUnique == False:
            res = svc.check_storage_account_name_availability(storage_acc_name)
            if res.result == True:
                isBoolStorageAccountNameUnique = True
                if len(storage_location_name) <= 0:
                    storage_location_name = str(input("Enter storage location... "))
                if len(storage_replication) <= 0:
                    storage_replication = str(input("Enter replication factor (Standard_LRS, Standard_ZRS, Standard_GRS, Standard_RAGRS)... "))
                    isBoolStorageReplicationValid = False
                    while isBoolStorageReplicationValid == False:
                        if storage_replication != "Standard_LRS" and storage_replication != "Standard_ZRS" and storage_replication != "Standard_GRS" and storage_replication != "Standard_RAGRS":
                            storage_replication = str(input("Invalid entry: Please re-enter replication factor (Standard_LRS, Standard_ZRS, Standard_GRS, Standard_RAGRS), press E if you want to exit"))
                            if storage_replication == "E" or storage_replication == "e":
                                isStorageAccCreated = False
                                break
                        else:
                            isBoolStorageReplicationValid = True
                if isBoolStorageReplicationValid == True:
                    svc.create_storage_account(storage_acc_name, storage_acc_name, storage_acc_name, None, storage_location_name, True, None)
                    isStorageAccCreated = True
            else:
                storage_acc_name = str(input("The storage account name entered is already in use, please re-enter another one, press E to exit.."))
                if storage_acc_name == "E" or storage_acc_name == "e":
                    isStorageAccCreated = False
                    break
        if isStorageAccCreated == True:
            print("Storage account..." + storage_acc_name + " has been created")
        else:
            print("Storage account could not be created")

    return True

 

In this sample we use the same storage account for storing the shell scripts, the custom image, and the VM disks. You can create multiple storage accounts based on your requirements.

This link explains the various kinds of storage accounts available on Azure.
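The naming rules the script's prompt quotes (3 to 24 characters, lower-case letters and numbers only) can also be checked locally before calling check_storage_account_name_availability(). The helper below is a hypothetical addition for illustration, not part of the repository:

```python
import re

# Azure storage account names: 3-24 chars, lower-case letters and digits only.
_STORAGE_NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name):
    """Return True if the name satisfies Azure's storage account naming rules."""
    return bool(_STORAGE_NAME_RE.match(name))
```

Rejecting malformed names locally avoids a round trip to the service for input that can never succeed.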

 

Step 4: Upload the Percona Cookbook

 

You can find the Chef cookbook for Percona used here in the GitHub repository https://github.com/phlipper/chef-percona. After uploading it to the Chef server, please create a client.rb and a validation.pem file. Detailed steps for using Chef and Chef cookbooks can be found at the link below.

https://www.digitalocean.com/community/tutorials/how-to-understand-the-chef-configuration-environment-on-a-vps   

Step 5: Modify and upload the script file

Upload the install-sh.sh script file (present in the GitHub repository). In this sample, the shell script downloads the validation.pem and client.rb files, installs the Chef client, and then runs the custom Chef cookbook to install the Percona cluster.

Change the blob storage paths below based on where you have uploaded the client.rb and validation.pem files:

http://<your storage account name>.blob.core.windows.net/demo/client.rb

http://<your storage account name>.blob.core.windows.net/demo/validation.pem

Make sure you modify the install-sh shell script to include your client.rb, validation.pem and the chef cookbook path.

 

Step 6: Create an image (optional)

You can create a VM from a custom image or from the image gallery. Our sample scripts show both: the PowerShell scripts create the VM from a custom image, and the Python scripts create it from the image gallery, as explained in more detail below.

This link will guide you step by step through creating a custom image on Azure.

The Python script on GitHub uses an image from the gallery.

Running the sample Python script

The script is written in Python 3.4.2. The following describes the script and the steps needed to run it.

 

Configuration path

The scripts extensively use a folder at "C:\Python_Files"; please change this to your own location wherever applicable, or create a folder named "Python_Files" on the C: drive.

 

Step 1: Registration of Azure certificate

The scripts use the Azure REST API to set up VMs. In order to access the REST API, the script needs a registered certificate to authenticate itself.

You need a certificate file (.cer) both for use by the script and for uploading to the Azure portal.

If you have a .pem file, you can install openssl and use the following command to create a .cer file:

openssl x509 -inform PEM -in cacert.pem -outform DER -out certificate.cer

 

The following link explains the steps to upload an Azure management certificate.

https://www.simple-talk.com/cloud/security-and-compliance/windows-azure-management-certificates/

Open the Python script "percona_cluster_setup.py" and set the variables. The certificate_path variable is the system registry path that the Azure Python SDK uses to extract the certificate for authentication.

subscription_id = "<subscription id>"

certificate_path = "CURRENT_USER\\my\\AzureCertificate"

If you are running the script on a Linux VM, change certificate_path to the path of the .cer file.

Also update the script “vm_getstatus.py” with the subscription_id variable.

subscription_id = "<subscription id>"

You will also need two .pem files, one with the key and one without, to run some basic HTTP access functions (that do not use the Azure SDK); please update the paths to the certificate key and certificate file in "VMClusterSetupClass.py", as shown below:

cert_path = "C:\Python_Files\AzureCertificate.pem"

cert_key_path = "C:\Python_Files\AzureCertificateKey.pem"

Step 2: Set all script constants

Refer to the following lines of code in "percona_cluster_setup.py" to change or modify features of the VMs:

Defining the user name and password:

 

linux_config = LinuxConfigurationSet(host_name=vm_name,user_name="azureuser",user_password="<yourpassword>",disable_ssh_password_authentication="false")

Defining the endpoints (external and internal ports):

    network = ConfigurationSet()
    network.configuration_set_type = "NetworkConfiguration"
    network.input_endpoints.input_endpoints.append(ConfigurationSetInputEndpoint('ssh', 'tcp', '22', '22'))
    network.input_endpoints.input_endpoints.append(ConfigurationSetInputEndpoint('http', 'tcp', '80', '80'))

    network_02 = ConfigurationSet()
    network_02.configuration_set_type = "NetworkConfiguration"
    network_02.input_endpoints.input_endpoints.append(ConfigurationSetInputEndpoint('ssh', 'tcp', '223', '22'))
    network_02.input_endpoints.input_endpoints.append(ConfigurationSetInputEndpoint('http', 'tcp', '8023', '8023'))

    network_03 = ConfigurationSet()
    network_03.configuration_set_type = "NetworkConfiguration"
    network_03.input_endpoints.input_endpoints.append(ConfigurationSetInputEndpoint('ssh', 'tcp', '224', '22'))
    network_03.input_endpoints.input_endpoints.append(ConfigurationSetInputEndpoint('http', 'tcp', '8024', '8024'))

 

Please note that within a Vnet, only unique external ports are allowed. So if you create multiple VMs, you need to define a different external SSH port for each, all mapping to the same internal SSH port, in this case 22.
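The rule above can be made explicit in code. The following sketch (a hypothetical helper, not part of the scripts) produces the same (name, protocol, external port, internal port) tuples that ConfigurationSetInputEndpoint takes, and asserts that the external ports are unique:

```python
def ssh_endpoint_plan(public_ports, local_port=22):
    """Map each VM's unique external SSH port to the shared internal port.

    Returns (name, protocol, external_port, internal_port) tuples in the
    same order as the ConfigurationSetInputEndpoint arguments above.
    """
    # External ports must not collide; internal port 22 is shared.
    assert len(set(public_ports)) == len(public_ports), "external ports must be unique"
    return [("ssh", "tcp", str(p), str(local_port)) for p in public_ports]
```

For the three VMs in this post the plan would be ssh_endpoint_plan([22, 223, 224]), matching the endpoint definitions in the snippet.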

Enter the storage account name to complete the URL where the VM's OS disk will be created. Please make sure you have a container called "vhds" created, or change the container name in the URL below to whatever you have created.

 

    os_hd = OSVirtualHardDisk(source_image_name="b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB", media_link="https://<storage account name>.blob.core.windows.net/vhds/" + cs_name + "pipedrivevhd010101sgvr01.vhd", os="Linux")
    os_hd_01 = OSVirtualHardDisk(source_image_name="b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB", media_link="https://<storage account name>.blob.core.windows.net/vhds/" + cs_name + "pipedrivevhd020202sgvr02.vhd", os="Linux")
    os_hd_02 = OSVirtualHardDisk(source_image_name="b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20140927-en-us-30GB", media_link="https://<storage account name>.blob.core.windows.net/vhds/" + cs_name + "pipedrivevhd030303sgvr03.vhd", os="Linux")

 

The resource extension parameter is used to instruct the Azure VM agent to download and run the script; the following is the code snippet.

    res_ext_ref = []
    res_exts = ResourceExtensionReferences()
    res = ResourceExtensionReference()

    res.reference_name = "MyCustomScriptExtension"
    res.name = "CustomScriptForLinux"
    res.publisher = "Microsoft.OSTCExtensions"
    res.version = "1.1"
    res.label = "MyCustomScriptExtension"

    res_chef_client_rb = ResourceExtensionParameterValue()

   

Setting up the start-up script details in the resource extension: please update the path where the start-up scripts are located and the storage account details, as highlighted below.

    res_chef_client_rb.key = "CustomScriptExtensionPublicConfigParameter"
    res_chef_client_rb.value = _encode_base64('{"fileUris":["http://<storage account name>.blob.core.windows.net/demo/install-chef.sh"],"commandToExecute":"sh install-chef.sh"}')
    res_chef_client_rb.type = "Public"
    res_chef_client_validation = ResourceExtensionParameterValue()
    res_chef_client_validation.key = "CustomScriptExtensionPrivateConfigParameter"
    res_chef_client_validation.value = _encode_base64('{"storageAccountName":"<storage account name>","storageAccountKey":"<storage account key>"}')
    res_chef_client_validation.type = "Private"

 

Step 3: Create cloud services

The script creates three VMs in three different cloud services. You can modify it to create all three VMs in the same cloud service (in the Create_Virtual_Machine_New() function, change the "add_role" parameter to true in the vm02 and vm03 calls).

However, the script assumes that the cloud service name remains the same, with suffixes of 01, 02, and 03.

For example, if the cloud service name is “mydummycs”, the three cloud services created should be “mydummycs01”,”mydummycs02”,”mydummycs03”.
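For illustration, that naming convention can be generated rather than typed by hand (this helper is hypothetical, not part of the script):

```python
def cloud_service_names(base, count=3):
    """Derive the suffixed cloud service names the script expects,
    e.g. 'mydummycs' -> ['mydummycs01', 'mydummycs02', 'mydummycs03']."""
    return ["%s%02d" % (base, i) for i in range(1, count + 1)]
```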

You can manually create the cloud services from the portal, or use the script and enter "cs" when prompted to create them.

>python percona_cluster_setup.py cs

 

Step 4: Run the scripts to create VMs

 

Once you have set up all the constants, run the script with the "vm" input when prompted.

>python percona_cluster_setup.py vm

The script will print three request IDs for the three VMs created; please save them for checking the request status as explained in the next step.

Step 5: Use the request IDs to get status updates

The vm_getstatus.py script accepts a request ID as a parameter and prints the status response into the "C:\Python_Files" folder. You can run this script and pass the request ID to get the status of the request, as shown in the example below:

>python vm_getstatus.py -r <request id>
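Instead of re-running vm_getstatus.py by hand, you can wrap the status check in a polling loop. The sketch below is a generic illustration: get_status stands in for whatever status lookup you use, and the "InProgress"/"Succeeded"/"Failed" state names follow the classic service management API:

```python
import time

def wait_for_request(get_status, request_id, poll_seconds=10, timeout_seconds=600):
    """Poll a request until it leaves the 'InProgress' state.

    get_status(request_id) must return 'InProgress', 'Succeeded' or
    'Failed'. Returns the terminal status, or raises on timeout.
    """
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = get_status(request_id)
        if status != "InProgress":
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("request %s still in progress" % request_id)
```

Passing the lookup as a callable keeps the loop independent of any particular SDK client, so the same helper works for storage, cloud service, and VM creation requests.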

Troubleshooting

While the vm_getstatus.py script can be used to get the status of the request submission (and the runtime logs when running the PowerShell runbook), the startup script might sometimes fail to run. You can also monitor the VM creation progress on the portal.

Once the VM reaches the running status, you can log on to the VM and view the following logs in case of any errors in VM provisioning.

The following log file records all VM provisioning activity by the Azure VM agent.

/var/log/waagent.log

 

The following log records the execution of the extension script.

 

/var/log/azure/Microsoft.OSTCExtensions.CustomScriptForLinux/1.1/extension.log

 

By the end of this, you will have three VMs running on a virtual network with the MySQL Percona cluster installed through the Chef client. In the next part, we will use the Linux Chef client VM extension, which simplifies this process further.
