Delete old AMIs by filtering with tags using boto3 and Lambda

Hello,

When you build custom AMIs in your AWS account, you need to manage them by deleting the old AMIs and keeping only the few latest images. You can use the Python code below in a Lambda function for this. I took the original code as a reference and modified it so that it deletes only the AMIs that carry the specified tags.

Filtering the images by tags is important because different teams and projects will have their own images, and it avoids accidentally deleting the wrong ones.

Note: Before executing this code, make sure your AMIs are tagged.
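As an aside, describe_images can also filter by tag on the server side, which narrows the result set before any Python-side checks. A minimal sketch, assuming the tag key is Project and the value is Proj1AMI (adjust both to match your own tagging scheme):

import boto3

ec2_client = boto3.client('ec2', region_name='us-east-1')

#Ask EC2 to return only the images owned by this account that carry the tag Project=Proj1AMI
response = ec2_client.describe_images(
    Owners=['self'],
    Filters=[{'Name': 'tag:Project', 'Values': ['Proj1AMI']}]
)

for image in response['Images']:
    print(image['ImageId'], image['CreationDate'])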

Code explanation:

* First, import the required libraries: datetime and boto3.
* Next, create the EC2 client using boto3.
* Assign a variable older_days and set its value in days (all images older than the specified number of days from the present date will be considered for deletion).
* Lambda invokes the handler function lambda_handler, which calls get_ami_list with older_days as a parameter.
* The function get_ami_list uses the EC2 describe_images call to get the details of all images owned by the specified account ID.
* It then invokes get_delete_date, which calculates the cut-off date older_days days before the present date.
* The images are filtered by the specified tag value, and each matching image's creation date is compared with the cut-off date.
* Images older than the cut-off are deregistered by invoking the function delete_ami.

from datetime import datetime, timedelta, timezone
import boto3

ec2_client = boto3.client('ec2', region_name='us-east-1')

#All images older than 5 days from the present date will be deregistered
older_days = 5

def lambda_handler(event, context):
    get_ami_list(older_days)

def get_ami_list(older_days):
    ami_response = ec2_client.describe_images(Owners=['123456789123'])
    today_date = datetime.now().strftime('%d-%m-%Y')
    print("Today's date is " + today_date)
    delete_date = get_delete_date(older_days)
    print("AMI images created before " + str(delete_date) + " will be deregistered")
    for image in ami_response['Images']:
        #Images without any tags are skipped
        for tag in image.get('Tags', []):
            #Filter only the images having the tag value Proj1AMI
            if tag['Value'] == 'Proj1AMI':
                #CreationDate is an ISO 8601 string, e.g. 2018-12-19T12:00:00.000Z
                ami_creation = datetime.strptime(image['CreationDate'], '%Y-%m-%dT%H:%M:%S.%fZ').replace(tzinfo=timezone.utc)
                image_id = image['ImageId']
                print("=================================================")
                print("Image id is " + image_id)
                print("Creation date for the above image is " + str(ami_creation))
                if ami_creation < delete_date:
                    print("This AMI is older than " + str(older_days) + " days")
                    delete_ami(image_id)

def get_delete_date(older_days):
    delete_time = datetime.now(tz=timezone.utc) - timedelta(days=older_days)
    return delete_time

def delete_ami(image_id):
    print("Deregistering Image ID: " + image_id)
    ec2_client.deregister_image(ImageId=image_id)
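One thing to keep in mind: deregistering an AMI does not delete the EBS snapshots that back it, so the storage cost remains. If you also want to clean those up, a minimal sketch along these lines could replace delete_ami in the script above (it reuses the ec2_client defined earlier; that your AMIs are EBS-backed with snapshots listed in the block device mappings is an assumption to verify for your images):

def delete_ami_and_snapshots(image_id):
    #Look up the snapshots referenced by the image before deregistering it
    image = ec2_client.describe_images(ImageIds=[image_id])['Images'][0]
    snapshot_ids = [bdm['Ebs']['SnapshotId'] for bdm in image.get('BlockDeviceMappings', []) if 'Ebs' in bdm]
    print("Deregistering Image ID: " + image_id)
    ec2_client.deregister_image(ImageId=image_id)
    for snapshot_id in snapshot_ids:
        print("Deleting snapshot: " + snapshot_id)
        ec2_client.delete_snapshot(SnapshotId=snapshot_id)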

Update SSM parameter store on another AWS account using AssumeRole

Hi,

In this post we are going to update the SSM Parameter Store in a second AWS account with details from the first AWS account. For this we will create an AWS Lambda function with Python code. The code assumes a role in the other account and uses the temporary STS credentials to connect to the second account and update the SSM parameter there.

Create a Lambda function, select Python 2.7 as the runtime, and add the below code to it.


#!/usr/bin/python

import boto3

account_id = '112211221122'
account_role = 'AssumeRole-SSM'
region_Name = 'us-east-1'

AmiId = 'ami-119c8dc1172b9c8e'

def lambda_handler(event, context):
    print("Assuming role for account: " + account_id)
    credentials = assume_role(account_id, account_role)

    #Call the function to update the SSM parameter with the value
    updateSSM_otherAccount(credentials, region_Name, account_id)

def assume_role(account_id, account_role):
    '''Call the assume_role method of the STS client, passing the role ARN
    and a role session name, and return the temporary credentials that can
    be used to make subsequent API calls in the other account'''
    sts_client = boto3.client('sts')
    role_arn = 'arn:aws:iam::' + account_id + ':role/' + account_role
    print(role_arn)

    assumedRoleObject = sts_client.assume_role(RoleArn=role_arn, RoleSessionName="NewAccountRole")
    return assumedRoleObject['Credentials']

def updateSSM_otherAccount(creds, region_Name, account_id):
    #Create an SSM client for the other account using the temporary credentials
    client1 = boto3.client('ssm',
                           region_name=region_Name,
                           aws_access_key_id=creds['AccessKeyId'],
                           aws_secret_access_key=creds['SecretAccessKey'],
                           aws_session_token=creds['SessionToken'])

    client1.put_parameter(Name='DevAMI',
                          Description='the latest ami id of Dev env',
                          Value=AmiId, Type='String', Overwrite=True)

Steps to configure AssumeRole

Note: Make sure to modify the account IDs in the below JSON policies.

1. Add the below inline policy to the role attached to the Lambda function in the 1st AWS account (556655665566).

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "123",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::112211221122:role/AssumeRole-SSM"
            ]
        }
    ]
}

2. Create a role named AssumeRole-SSM in the 2nd AWS account (112211221122), edit its trust relationship, and add the below policy so that the Lambda role from the 1st account is allowed to assume it.
Also attach SSM permissions to this role (for example, a policy that allows ssm:PutParameter) so that it can update the Parameter Store.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::556655665566:role/lambda-ec2-role"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
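To confirm the cross-account update worked, you can read the parameter back with the same temporary credentials. A minimal sketch that reuses the boto3 import, the assume_role function, and the variables from the Lambda code above:

def verify_parameter(creds, region_Name):
    #Read the parameter back from the 2nd account using the same temporary credentials
    ssm_client = boto3.client('ssm',
                              region_name=region_Name,
                              aws_access_key_id=creds['AccessKeyId'],
                              aws_secret_access_key=creds['SecretAccessKey'],
                              aws_session_token=creds['SessionToken'])
    response = ssm_client.get_parameter(Name='DevAMI')
    print("DevAMI is currently: " + response['Parameter']['Value'])

For example, calling verify_parameter(credentials, region_Name) at the end of lambda_handler prints the value that was just written.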

Golden image creation using Packer and AWS CodePipeline

Hi All, we know that Packer can be used to create golden images for multiple platforms. Here we will use Packer to create a golden image of Amazon Linux in AWS. The created image is called an AMI and appears in the AWS console. Creating a custom image is necessary when we want the OS to come with a pre-installed set of packages that support our application. The custom AMI can then be used to spin up EC2 instances whenever we need to build out large infrastructure frequently to support our applications.

In this tutorial I will be using AWS CodeCommit and CodeBuild, and will create a CodePipeline with them. The pipeline is triggered automatically when a commit happens on the CodeCommit repo. It then runs the CodeBuild project, which executes buildspec.yml and uses the packer build command defined in it to build the golden image (AMI).

I will be committing two files to CodeCommit: buildspec.yml and CreateAMI.json.

Below is the content of buildspec.yml

---
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/0.12.3/packer_0.12.3_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating CreateAMI.json"
      - ./packer validate CreateAMI.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - echo "AWS region set is:" $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, CreateAMI.json"
      - ./packer build CreateAMI.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

Below is the content of CreateAMI.json

{
  "variables": {
    "aws_region": "{{env `AWS_REGION`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `aws_region`}}",
      "instance_type": "t2.micro",
      "source_ami": "ami-0080e4c5bc078760e",
      "ssh_username": "ec2-user",
      "ami_name": "custom-Dev1",
      "ami_description": "Amazon Linux Image OS with pre-installed packages",
      "run_tags": {
        "Name": "custom-Dev1",
        "Env": "dev",
        "Project": "DevOps"
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo yum install java python wget -y",
        "sudo yum install tomcat -y"
      ]
    }
  ]
}

1. Create an AWS CodeCommit repository and add these 2 files to it.
2. Create an AWS CodeBuild project, selecting the CodeCommit repo and the master branch.
3. Create a CodePipeline with the CodeCommit repo and the CodeBuild project as stages.

Skip the Deploy stage and create the pipeline.

4. Select the created CodePipeline and click on Release changes, which will start running the pipeline.

5. After the pipeline finishes successfully, go to the EC2 dashboard, click AMIs in the left-hand menu, and you should see the newly created golden image.
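If you prefer to check this from a script instead of the console, a minimal boto3 sketch like the one below works (the image name custom-Dev1 matches the ami_name in CreateAMI.json; the region is an assumption):

import boto3

#Look up the AMI produced by the Packer build using its name
ec2_client = boto3.client('ec2', region_name='us-east-1')
response = ec2_client.describe_images(
    Owners=['self'],
    Filters=[{'Name': 'name', 'Values': ['custom-Dev1']}]
)

for image in response['Images']:
    print(image['ImageId'], image['Name'], image['CreationDate'])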

Azure Kubernetes Cluster (AKS) – Creation steps

Microsoft Azure is an open, flexible, enterprise-grade cloud computing platform. Azure Kubernetes Service (AKS) brings Azure and Kubernetes together, allowing users to quickly and easily create fully managed Kubernetes clusters.

Here we will create an Azure Kubernetes Service (AKS) cluster using the Azure CLI.
The Kubernetes master is managed by Azure (this is free), and only the worker nodes are created as virtual machines in your subscription. We can use kubectl to interact with the cluster and its pods.

Sign up for a Microsoft Azure account with a subscription, or create a new free account.

The first step is to provision a new Kubernetes cluster using the Microsoft Azure CLI. Follow these steps:

Install the Azure CLI (az) using the below command

curl -L https://aka.ms/InstallAzureCli | bash

Install kubectl using the below commands on RHEL or CentOS

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

sudo yum install -y kubectl

Log in to Microsoft Azure using the below command.
This will display a URL and a unique device code. Open the URL in a browser and enter the code to complete the login.

az login

Create a resource group by specifying the resource group name and location

az group create --name Dev-AKS --location eastus

Create a cluster by specifying the cluster name and a node count of 3

az aks create --resource-group Dev-AKS --name Dev-AKS-1 --node-count 3 --enable-addons monitoring --generate-ssh-keys

Get the credentials for the cluster, which allow kubectl to communicate with the newly created cluster.

az aks get-credentials --name Dev-AKS-1 --resource-group Dev-AKS

Next, use kubectl commands to check the cluster and its resources

kubectl cluster-info
kubectl get nodes
kubectl get services
kubectl get pods
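The same checks can also be scripted with the official Kubernetes Python client if you prefer. A minimal sketch, assuming the client is installed (pip install kubernetes) and that the kubeconfig written by az aks get-credentials is in place:

from kubernetes import client, config

#Load the kubeconfig that "az aks get-credentials" wrote (~/.kube/config by default)
config.load_kube_config()

v1 = client.CoreV1Api()

print("Nodes in the cluster:")
for node in v1.list_node().items:
    print(" - " + node.metadata.name)

print("Pods in all namespaces:")
for pod in v1.list_pod_for_all_namespaces().items:
    print(" - " + pod.metadata.namespace + "/" + pod.metadata.name)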

AWS Cloudformation – cfn-init and UserData

We use AWS CloudFormation to provision resources in AWS. There are lots of examples available on the internet covering different use cases.
CloudFormation templates can be written in YAML or JSON.

Using AWS CloudFormation we can automatically install, configure, and start applications on Amazon EC2 instances. Doing so enables you to easily duplicate deployments and update existing installations without connecting directly to the instance, which can save you a lot of time and effort.

To install packages automatically on an EC2 instance at boot, we need to use cfn-init and metadata in the CloudFormation template.

Below is an example of how cfn-init and metadata are defined in a CloudFormation template and how they work. Understanding this is very important if you want packages installed on the EC2 instance at boot.

The cfn-init helper script reads template metadata from the AWS::CloudFormation::Init key and acts accordingly to:

* Fetch and parse metadata from AWS CloudFormation
* Install packages
* Write files to disk
* Enable/disable and start/stop services

cfn-init does not require credentials, so you do not need to use the --access-key, --secret-key, --role, or --credential-file options.
In this case we are connecting to S3 to download the scripts, so we need to create a role that allows getting objects from the S3 bucket and attach that role to the EC2 instance.

Put the commands and scripts to be run on the EC2 instance in the commands section under AWS::CloudFormation::Init.

This is invoked from the UserData section:
/opt/aws/bin/cfn-init --verbose --stack <stack name> --resource Create_Instance --region us-east-1

We also have to install the cfn helper scripts, as shown in the UserData section, and start the cfn-hup service before cfn-init is invoked.

Note: The below snippet shows only the Resources section of the template.

"Resources": {
  "Create_Instance": {
    "Type": "AWS::EC2::Instance",
    "Metadata": {
      "AWS::CloudFormation::Init": {
        "config": {
          "commands": {
            "01_mkdir_scripts": {
              "command": "if [ ! -d \"/home/ec2-user/scripts\" ] ; then mkdir -p \"/home/ec2-user/scripts\" ; fi;"
            },
            "02_copy_scripts_from_s3": {
              "command": "/usr/bin/aws s3 cp s3://testBucket19/installScripts /home/ec2-user/scripts --recursive"
            },
            "03_Install_java": {
              "command": "/bin/bash -x /home/ec2-user/scripts/installJava.sh",
              "waitAfterCompletion": "50"
            },
            "04_Install_Tomcat": {
              "command": "/bin/bash -x /home/ec2-user/scripts/installTomcat.sh",
              "waitAfterCompletion": "50"
            }
          }
        }
      }
    },
    "Properties": {
      "UserData": {
        "Fn::Base64": {
          "Fn::Join": [
            "",
            [
              "#!/bin/bash\n",
              "yum install -y python-pip\n",
              "pip install awscli\n",
              "/usr/bin/easy_install --script-dir /opt/aws/bin https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
              "cp -v /usr/lib/python2*/site-packages/aws_cfn_bootstrap*/init/redhat/cfn-hup /etc/init.d \n",
              "chmod +x /etc/init.d/cfn-hup \n",
              "/etc/init.d/cfn-hup start \n",
              "/opt/aws/bin/cfn-init --verbose --stack ",
              { "Ref": "AWS::StackId" },
              " --resource Create_Instance --region ",
              { "Ref": "AWS::Region" },
              "\n",
              "/opt/aws/bin/cfn-signal -e 0 --stack ",
              { "Ref": "AWS::StackName" },
              " --resource Create_Instance --region ",
              { "Ref": "AWS::Region" },
              "\n"
            ]
          ]
        }
      }
    }
  }
}
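Once the full template is assembled, you can create the stack from the console, the CLI, or a script. A minimal boto3 sketch, assuming the template has been saved locally as ec2-cfn-init.json (the file name and stack name are only illustrative):

import boto3

cfn_client = boto3.client('cloudformation', region_name='us-east-1')

#Read the template from disk and create the stack
with open('ec2-cfn-init.json') as template_file:
    template_body = template_file.read()

cfn_client.create_stack(StackName='ec2-cfn-init-demo', TemplateBody=template_body)

#Block until the stack has been created (the waiter raises if creation fails)
waiter = cfn_client.get_waiter('stack_create_complete')
waiter.wait(StackName='ec2-cfn-init-demo')
print("Stack created")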

Jenkinsfile example to publish into Artifactory

Here is a Jenkinsfile to upload artifacts to and download artifacts from JFrog Artifactory. I really wanted to share this post, as it took me a few hours to get this working because I couldn't find a proper example on the internet.

You can install a standalone JFrog Artifactory instance on an EC2 Linux VM.

Next you will need the Artifactory plugin installed in Jenkins. Go to Manage Jenkins -> Configure System, set the Server ID field to Artifac_dev_server1, and provide the Artifactory URL, username, and password.

You can fork and use the sample application source code from my github repo – https://github.com/raghuck/ant-Build-Project

Below is the Jenkinsfile, written in Groovy syntax.

pipeline {
    agent any
    environment {
    def uploadSpec = """{
     "files": [
      {
          "pattern": "classes/abc/*",
          "target": "generic-local/"
        }
     ]
    }"""
    def downloadSpec = """{
     "files": [
      {
          "pattern": "generic-local/*",
          "target": "/home/ec2-user/.jenkins/workspace/testpipeline-1/downloads/"
        }
     ]
    }"""
    }
    
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
                sh 'ant -f build.xml run'
            }
        }
        stage('Test') {
            steps {
                echo 'Publish the artifacts..'
                sh 'mkdir -p downloads'
                script {
                    //def server = Artifactory.newServer('http://10.213.243.17:8081/artifactory', 'admin', 'art@123')
                    def server = Artifactory.server 'Artifac_dev_server1'
                    server.bypassProxy = true
                    server.upload(uploadSpec)
                    echo 'Uploaded the file to JFrog Artifactory successfully'
                    server.download(downloadSpec)
                    echo 'Downloaded the file from JFrog Artifactory successfully'
                }
            }
        }
        stage('Notify') {
            steps {
                echo 'Mail Notification...'
                mail body: 'Project build successful for job named testpipeline-1',
                from: 'test1@gmail.com',
                subject: 'project build successful',
                to: 'test2@gmail.com'
            }
        }
    }
}

There are two file specs defined, uploadSpec and downloadSpec, which contain the paths and details of the files/artifacts that need to be uploaded and downloaded. In the 'Test' stage the connection to Artifactory is established and the file gets uploaded using server.upload(uploadSpec). Similarly, we can execute the download by calling server.download(downloadSpec).

The 'Notify' stage contains a simple mail step that sends a notification with the success or failure state of the job.

Deployment into Kubernetes on Google Cloud

Let's deploy an application into Kubernetes on GCP (Google Cloud Platform).

Install the Google Cloud SDK, which includes the gcloud command-line tool.

Install the Kubernetes command-line tool. kubectl is used to communicate with Kubernetes, the cluster orchestration system behind Kubernetes Engine clusters.

Create a project in your GCP console and retrieve its Project ID.

Next, set your project and zone with the below commands

gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b

export PROJECT_ID="$(gcloud config get-value project -q)"

Download the sample application
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples

Switch to the directory

cd kubernetes-engine-samples/hello-app

Install Docker using the below commands

sudo yum install docker -y
docker --version
sudo service docker status
sudo service docker start
sudo docker images
sudo docker ps

The value of PROJECT_ID is used to tag the container image so that it can be pushed to your private Container Registry.

docker build -t gcr.io/${PROJECT_ID}/my-app:v1 .

The gcr.io prefix refers to Google Container Registry, where the image will be stored. Let's push the Docker image to GCR (if you have not enabled GCR yet, enable it from your console).

docker images
gcloud docker -- push gcr.io/${PROJECT_ID}/my-app:v1

Create a container cluster

Now that the container image is stored in a registry, you need to create a container cluster to run the container image. A cluster consists of a pool of Compute Engine VM instances running Kubernetes, the open source cluster orchestration system that powers Kubernetes Engine.

Run the following command to create a four-node cluster named myapp-cluster:

gcloud container clusters create myapp-cluster --num-nodes=4
gcloud compute instances list

Let's deploy the application to Kubernetes

To deploy and manage applications on a Kubernetes Engine cluster, you must communicate with the Kubernetes cluster management system. You typically do this by using the kubectl command-line tool.

The kubectl run command below causes Kubernetes to create a Deployment named myapp-web on your cluster. The Deployment manages multiple copies of your application, called replicas, and schedules them to run on the individual nodes in your cluster.

Run the following command to deploy your application, listening on port 8090:

kubectl run myapp-web --image=gcr.io/${PROJECT_ID}/my-app:v1 --port 8090
kubectl get deployment myapp-web
kubectl get pods

Expose your application to the Internet

kubectl expose deployment myapp-web --type=LoadBalancer --port 80 --target-port 8090

The kubectl expose command above creates a Service resource, which provides networking and IP support to your application’s Pods. Kubernetes Engine creates an external IP and a Load Balancer for your application.

The --port flag specifies the port number configured on the Load Balancer, and the --target-port flag specifies the port number used by the Pods created by the kubectl run command in the previous step.

Get your service's external IP address using the below command

kubectl get service
http://223.0.123.0

Scale up your application using below commands

kubectl scale deployment myapp-web --replicas=2
kubectl get deployment myapp-web
kubectl get pods

To deploy a new version of your app, use the below commands

docker build -t gcr.io/${PROJECT_ID}/my-app:v2 .
gcloud docker -- push gcr.io/${PROJECT_ID}/my-app:v2
kubectl set image deployment/myapp-web myapp-web=gcr.io/${PROJECT_ID}/my-app:v2

Clean up using the below commands

kubectl delete service myapp-web
gcloud compute forwarding-rules list
gcloud container clusters delete myapp-cluster