Deployment into Kubernetes on Google Cloud

Let’s deploy an application into Kubernetes on GCP (Google Cloud Platform).

Install the Google Cloud SDK, which includes the gcloud command-line tool.

Install kubectl, the Kubernetes command-line tool. kubectl is used to communicate with Kubernetes, the cluster orchestration system that powers Kubernetes Engine clusters.

Create a project in your GCP console and note its Project ID.

Next, set your project and compute zone with the commands below:

gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b

export PROJECT_ID="$(gcloud config get-value project -q)"

Download the sample applications:
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples

Switch to the directory

cd kubernetes-engine-samples/hello-app

Install Docker using the commands below:

sudo yum install docker -y
docker --version
sudo service docker status
sudo service docker start
sudo docker images
sudo docker ps

The value of PROJECT_ID will be used to tag the container image so that it can be pushed to your private Container Registry.

docker build -t gcr.io/${PROJECT_ID}/my-app:v1 .

The gcr.io prefix refers to Google Container Registry, where the image will be stored. Let’s push the Docker image to GCR (if you have not enabled GCR yet, enable it from your console):

docker images
gcloud docker -- push gcr.io/${PROJECT_ID}/my-app:v1

Create a container cluster

Now that the container image is stored in a registry, you need to create a container cluster to run the container image. A cluster consists of a pool of Compute Engine VM instances running Kubernetes, the open source cluster orchestration system that powers Kubernetes Engine.

Run the following command to create a four-node cluster named myapp-cluster:

gcloud container clusters create myapp-cluster --num-nodes=4
gcloud compute instances list

Let’s deploy the application to Kubernetes

To deploy and manage applications on a Kubernetes Engine cluster, you must communicate with the Kubernetes cluster management system. You typically do this by using the kubectl command-line tool.

The kubectl run command below causes Kubernetes to create a Deployment named myapp-web on your cluster. The Deployment manages multiple copies of your application, called replicas, and schedules them to run on the individual nodes in your cluster.

Run the following command to deploy your application, listening on port 8090:

kubectl run myapp-web --image=gcr.io/${PROJECT_ID}/my-app:v1 --port 8090
kubectl get deployment myapp-web
kubectl get pods

Expose your application to the Internet

kubectl expose deployment myapp-web --type=LoadBalancer --port 80 --target-port 8090

The kubectl expose command above creates a Service resource, which provides networking and IP support to your application’s Pods. Kubernetes Engine creates an external IP and a Load Balancer for your application.

The --port flag specifies the port number configured on the Load Balancer, and the --target-port flag specifies the port number used by the Pods created by the kubectl run command in the previous step.

Get your service’s external IP address using the command below, then browse to it:

kubectl get service
http://223.0.123.0

Scale up your application using the commands below:

kubectl scale deployment myapp-web --replicas=2
kubectl get deployment myapp-web
kubectl get pods

To deploy a new version of your app, use the commands below:

docker build -t gcr.io/${PROJECT_ID}/my-app:v2 .
gcloud docker -- push gcr.io/${PROJECT_ID}/my-app:v2
kubectl set image deployment/myapp-web myapp-web=gcr.io/${PROJECT_ID}/my-app:v2

Clean up using the commands below:

kubectl delete service myapp-web
gcloud compute forwarding-rules list
gcloud container clusters delete myapp-cluster

Delete file in sub-directory of S3 using Python

Hi All,
We use the boto and boto3 libraries to connect to S3 and perform actions on the objects in a bucket: upload, download, copy, delete. But say you want to download a specific object that sits under a sub-directory in the bucket; this becomes difficult because how to do it is less well known.

Below are a few Python script examples that use the sub-directory prefix with the boto and boto3 libraries.

Example 1: Copy a file/object residing in a sub-directory of Bucket1 to Bucket2

import boto
conn = boto.connect_s3()

srcBucket = conn.get_bucket('testProjBucket-1') #Source Bucket name
dstBucket = conn.get_bucket('testProjBucket-2') #Destination Bucket name
fileName='test.txt'

dstBucket.copy_key('Dir2/subDir2/' + fileName, srcBucket.name, 'Dir1/subDir1/' + fileName)

Example 2: Download test.txt from bucket ‘testProjBucket-1’ to the local system path /home/ec2-user/mydownloads/
Here the downloaded file will be saved as hai.txt

import boto3
s3 = boto3.resource('s3')

fileName="test.txt"
prefix1=('Dir1/subDir1/'+fileName)

s3.meta.client.download_file('testProjBucket-1', prefix1, '/home/ec2-user/mydownloads/hai.txt')

Example 3: Delete a specific object from a specific sub-directory inside a bucket (using the boto library)

import boto
conn = boto.connect_s3(aws_access_key_id='', aws_secret_access_key='')

fileName = "test.py"

srcBucket = conn.get_bucket('testProjBucket-1')
srcBucket.delete_key('Dir1/subDir1/'+fileName)

Example 4: Delete a specific object from a specific sub-directory inside a bucket (using the boto3 library)

import boto3
client = boto3.client('s3', region_name='us-east-1', aws_access_key_id = '', aws_secret_access_key = '')
fileName="test.txt"

prefix1=('Dir1/subDir1/'+fileName)

response = client.delete_object(
    Bucket='testProjBucket-1',
    Key=prefix1
)

Note: There is no move command for objects in the boto3 library; we can only use the copy command. The aws-cli, however, does provide a move (aws s3 mv).
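A move can still be emulated in boto3 by copying the object to its new key and then deleting the original. Below is a minimal sketch; the bucket and key names are hypothetical, and the function takes any boto3-style S3 client as a parameter so it can be exercised without real AWS credentials:

```python
def move_object(s3_client, bucket, src_key, dst_key):
    """Emulate a move: server-side copy to the new key, then delete the original."""
    s3_client.copy_object(Bucket=bucket,
                          CopySource={'Bucket': bucket, 'Key': src_key},
                          Key=dst_key)
    s3_client.delete_object(Bucket=bucket, Key=src_key)

# With a real client this would look like:
#   import boto3
#   move_object(boto3.client('s3'), 'testProjBucket-1',
#               'Dir1/subDir1/test.txt', 'Dir2/subDir2/test.txt')
```

Because the copy is server-side, the object data never passes through your machine, which is also how `aws s3 mv` behaves.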

How to upload and run Nodejs package using AWS Lambda

Hey guys.. If you have tried running Node.js code in AWS Lambda, you know how painful it is to package the node modules with the needed libraries to make them work in a Lambda function. Yes, it is difficult in the beginning, but once you start exploring and understanding it, it becomes really interesting to see what all you can achieve using Node.js.

Here I will use the Node.js uuid module to generate a unique ID which can be used in an application or in a database. The AWS documentation says: “You can create a deployment package yourself or write your code directly in the Lambda console, in which case the console creates the deployment package for you and uploads it, creating your Lambda function.” But there are no step-by-step instructions and screenshots showing how to do it, and you won’t get much information in other blogs either (I tried exploring and ended up without proper steps). So I would like to show you here how to do it.

The best way is to install Node.js and test the code in your local Linux or Windows environment, and then package and upload the zip file to the Lambda function.

Install Node.js using these commands (the OS here is Red Hat):


curl --silent --location https://rpm.nodesource.com/setup_6.x | sudo bash -
sudo yum -y install nodejs

You can verify the installation by checking the node version (node -v) and the npm version (npm -v).

 

Install the uuid module using the command below:


npm install uuid

 

You will get a node_modules directory under /home/ec2-user

Navigate to the directory /home/ec2-user/node_modules/uuid

and zip all the files under it:
zip -r TestnpmLambda1.zip *

Go to AWS Lambda and create a function
Select Node.js as the runtime

Upload the zip file to Lambda function

Next, copy the code below into the inline code editor:


exports.handler = (event, context, callback) => {
    var uuid = require('uuid');
    console.log(uuid.v4());
};

Next, save and test the function by creating a test event.


Then you should see a new random unique ID generated every time you test this function.

 

Fetch the Elastic Beanstalk environment details using python script

Hi there!!

It’s been quite some time, and I have been busy working on multiple technologies. Recently my lead asked me to create a Python script to fetch the minimum and maximum instance counts of all the Elastic Beanstalk environments. It was great to work on this requirement. Below is the Python script.

import csv
import boto3
from botocore.exceptions import ClientError

def get_details():
	row1=['Application Name','Environment Name','Min Count','Max Count']
	with open('EB-instances-count.csv',"a") as csvDataFile:
		writer = csv.writer(csvDataFile)
		writer.writerow(row1)
	try:
		eb = boto3.client('elasticbeanstalk',"us-east-1")
		NameInfo=eb.describe_environments()
		for names in NameInfo['Environments']:
			app_name=(names['ApplicationName'])
			env_name=(names['EnvironmentName'])
			response = eb.describe_configuration_settings(
				EnvironmentName=env_name,
				ApplicationName=app_name
			)
			# Look the autoscaling options up by name rather than by position,
			# since the order of OptionSettings is not guaranteed
			options = response['ConfigurationSettings'][0]['OptionSettings']
			minVal = next(o['Value'] for o in options if o.get('OptionName') == 'MinSize')
			maxVal = next(o['Value'] for o in options if o.get('OptionName') == 'MaxSize')
			print("Gathering count for Environment: " + env_name)
			fields=[app_name,env_name,minVal,maxVal]
			with open('EB-instances-count.csv',"a") as csvDataFile:
				writer = csv.writer(csvDataFile)
				writer.writerow(fields)
	except ClientError as e:
		if e.response['Error']['Code'] == "InvalidParameterValue":
			print(env_name + " Environment not found, so skipping it")

if __name__ == '__main__':
	get_details()

In this script I am getting the application names and environment names of the Elastic Beanstalk environments in a region, iterating through them, and fetching the min and max instance counts. After fetching the counts I write them to a .csv (spreadsheet) file. We can run this script at any time to know the present count of instances being used, and it can be further updated/modified to fetch other information about the Elastic Beanstalk environments.

The interesting part is filtering the required information out of the response. Another thing: if an environment is deleted, it takes some time to disappear from the console, so we may still get the environment name but not its settings (since it is already deleted) and see an error.
In that case we have to capture that particular error in the exception handling and ignore it.

Note: Be careful about the indentation 🙂

Let me know for any questions. Thanks

EBS snapshots deletion by filtering tags

Hey guys!!

Having daily backups of your data is one of the most important things in IT. EBS snapshots are used to back up Amazon EBS volumes along with their data, and taking regular backups of the volumes reduces the risk of disaster in case of failures. For more detail refer to this post here

Here we take EBS snapshots of the Production environment daily, and it is not necessary to keep many snapshots, as the cost will increase. So in such cases we delete each snapshot 10 days after its backup date, which means we end up with 10 snapshots at any given point in time.

The Python script below uses the boto3 library to connect to AWS and fetch the service details. When an EBS snapshot is created for an EC2 instance, the snapshot is tagged with the instance ID and with a DateToDelete key whose value is the date 10 days in the future.
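For context, the tagging done at snapshot-creation time could be sketched as below. The date helper is the concrete part; the create_snapshot/create_tags calls are shown for illustration only, with hypothetical volume and instance IDs (the tag keys match the ones the cleanup script filters on):

```python
import datetime

def delete_date(days=10):
    """Return the deletion date, `days` days from today, as YYYY-MM-DD."""
    return (datetime.date.today() + datetime.timedelta(days=days)).strftime('%Y-%m-%d')

# Tagging a freshly created snapshot would look roughly like this:
#   import boto3
#   ec = boto3.client('ec2')
#   snap = ec.create_snapshot(VolumeId='vol-0123456789abcdef0')
#   ec.create_tags(Resources=[snap['SnapshotId']],
#                  Tags=[{'Key': 'DateToDelete', 'Value': delete_date()},
#                        {'Key': 'ebsSnaphots_clean', 'Value': 'true'},
#                        {'Key': 'snap_InstanceID', 'Value': 'i-0123456789abcdef0'}])
```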

We use two filter lists: one to select snapshots tagged ebsSnaphots_clean:true whose DateToDelete value is today’s date, and another to select EC2 instances tagged Environment:Prod.

We then loop through all the EC2 instances that carry the Environment:Prod tag, and for each of them loop through the filtered snapshots, compare the snapshot’s instance-ID tag with the instance’s ID, and delete the snapshot when they match.

import boto3
import datetime

ec = boto3.client('ec2')

def lambda_handler(event, context):
    Deletion_date = datetime.date.today().strftime('%Y-%m-%d')
    firstFilter = [
        {'Name': 'tag-key', 'Values': ['DateToDelete']},
        {'Name': 'tag-value', 'Values': [Deletion_date]},
        {'Name': 'tag-key', 'Values': ['ebsSnaphots_clean']},
        {'Name': 'tag-value', 'Values': ['true']},
    ]

    secondFilter = [
        {'Name': 'tag-key', 'Values': ['Environment']},
        {'Name': 'tag-value', 'Values': ['Prod']},
    ]

    snapshot_details = ec.describe_snapshots(Filters=firstFilter)
    ec2_details = ec.describe_instances(Filters=secondFilter)

    for myinst in ec2_details['Reservations']:
        for instID in myinst['Instances']:
            print("The instanceID is %s" % instID['InstanceId'])
            Instance_ID = instID['InstanceId']
            for snap in snapshot_details['Snapshots']:
                print("Checking Snapshot %s" % snap['SnapshotId'])
                for tag in snap['Tags']:
                    if tag['Key'] == 'snap_InstanceID':
                        match_instance = tag['Value']
                        if Instance_ID == match_instance:
                            print("The instanceID " + Instance_ID + " matches the assigned instanceID tag " + match_instance + " for snapshot %s" % snap['SnapshotId'])
                            print("Deleting snapshot %s" % snap['SnapshotId'])
                            ec.delete_snapshot(SnapshotId=snap['SnapshotId'])
                        else:
                            print("The instance " + Instance_ID + " is of a different environment and does not match snapshot " + match_instance)
                    else:
                        print("no matches")

Note: Please check and take care of indentation

Thanks!!

CloudFormation template to create CodeDeploy application

The requirement is to use CloudFormation stacks to create a CodeDeploy application and a deployment group with the required configuration. Although the documentation tells you which resources to use, the information is all bits and pieces. It took me a couple of hours to understand it, write the CloudFormation template, and use it to create the CodeDeploy application.

Basically we have to create two CloudFormation stacks:
1. stack1 – creates the CodeDeploy application
2. stack2 – creates the deployment group with its other parameters and associates it with the CodeDeploy application created in stack1

The AWS::CodeDeploy::Application resource creates an AWS CodeDeploy application. You can use the AWS::CodeDeploy::DeploymentGroup resource to associate the application with an AWS CodeDeploy deployment group.

stack1 – codedeploy-appName.json: creates the CodeDeploy application

stack2 – codedeploy-deployGrp.json: associates with the CodeDeploy application, creates the deployment group, sets the deployment configuration name, and adds the EC2 tag filters and ServiceRoleArn

Below are the templates.

codedeploy-appName.json

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "MyCodeDeployApp": {
      "Type": "AWS::CodeDeploy::Application",
      "Properties": {
        "ApplicationName": "App11"
      }
    }
  }
}

codedeploy-deployGrp.json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "DeploymentConfigurationName": {
      "Description": "With predefined configurations, you can deploy application revisions to one instance at a time, half of the instances at a time, or all the instances at once.",
      "Type": "String",
      "Default": "CodeDeployDefault.OneAtATime",
      "ConstraintDescription": "Must be a valid Deployment configuration name"
    }
  },

  "Resources": {
    "MyCodeDeploy": {
      "Type": "AWS::CodeDeploy::DeploymentGroup",
      "Properties": {
        "ApplicationName": "App11",
        "DeploymentConfigName": {
          "Ref": "DeploymentConfigurationName"
        },
        "DeploymentGroupName": "DeployGrp11",
        "Ec2TagFilters": [{
          "Key": "Name",
          "Type": "KEY_AND_VALUE",
          "Value": "App1-env"
        }],
        "ServiceRoleArn": "arn:aws:iam::326840742362:role/deployRole"
      }
    }
  }
}

Access control for S3

Do you want to control the access options for your S3 buckets and the objects in them?

Amazon Simple Storage Service (S3) is storage for the Internet.

There are different types of access control for an S3 bucket and the objects in it.

Below are the possible options:

  1. We can use a bucket policy to
  • Grant access to a bucket (to view/list the objects in it)
  • Grant access to view/access the content of an object in the bucket.
  • Grant access to edit the access control list for the bucket.
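As an illustration, a minimal read-only bucket policy is just a JSON document; the sketch below builds one in Python (the bucket name and the principal account are hypothetical) and shows, in a comment, how it could be attached with boto3’s put_bucket_policy or pasted into the console:

```python
import json

# Hypothetical bucket; the policy grants another account list and read access.
bucket = 'testProjBucket-1'
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowList",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::" + bucket          # bucket-level permission
        },
        {
            "Sid": "AllowRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::" + bucket + "/*"   # object-level permission
        }
    ]
}
policy_json = json.dumps(policy)

# Attaching it would look like:
#   import boto3
#   boto3.client('s3').put_bucket_policy(Bucket=bucket, Policy=policy_json)
```

Note that s3:ListBucket applies to the bucket ARN while s3:GetObject applies to the object ARN (the /* form), which is why the two actions live in separate statements.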

 

  2. We can use an IAM policy to grant access to the S3 console to only view/list the buckets and the objects inside them. (Note: they will not be able to access the data of an object.)
  • AmazonS3FullAccess
  • AmazonS3ReadOnlyAccess

           A custom IAM policy can be used to

  • Grant access to a bucket (to view/list the objects in it)
  • Grant access to view/access the content of an object in a bucket.

 

  3. We can provide public access to
  • View/access the content of an object in a bucket.
  • Edit the access control list for the bucket.

 

  4. Pre-signed URLs can be used to provide a URL that your users can employ to upload files with predefined names, as well as to grant time-limited permission to download objects or list the contents of a bucket.

The pre-signed URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don’t require them to have AWS security credentials or permissions.

This provides your users with limited access to a specific resource, removing the need to grant public access to your bucket.

When you create a pre-signed URL, you must provide your security credentials, specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The pre-signed URLs are valid only for the specified duration.