How to upload and run a Node.js package using AWS Lambda

Hey guys! If you have tried running Node.js code in AWS Lambda, you know how painful it can be to package the node modules and the libraries they need so that they work in a Lambda function. Yes, it is difficult in the beginning, but once you start exploring and understanding it, it becomes much more interesting to see what you can achieve using Node.js.

Here I will be using the Node.js uuid module to generate a unique ID that can be used in an application or in a database. The AWS documentation says “You can create a deployment package yourself or write your code directly in the Lambda console, in which case the console creates the deployment package for you and uploads it, creating your Lambda function.”, but there are no step-by-step instructions or screenshots showing how to do it. You also won’t find much information in other blogs; I tried exploring and ended up without proper steps. So I would like to show you here how to do it.

The best way is to install Node.js and test the code in your local Linux or Windows environment, and then package and upload the zip file to the Lambda function.

Install Node.js using these commands (the OS here is RedHat):


curl --silent --location https://rpm.nodesource.com/setup_6.x | sudo bash -
sudo yum -y install nodejs

You can verify the installation by checking the node version (node -v) and the npm version (npm -v).

 

Install the uuid module using the command below:


npm install uuid

 

You will get a node_modules directory under /home/ec2-user.

Navigate to the directory /home/ec2-user/node_modules/uuid

and zip all the files under it:
zip -r TestnpmLambda1.zip *

Go to AWS Lambda and create a function.
Select Node.js as the runtime.


Upload the zip file to the Lambda function.
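If you prefer to script this step instead of clicking through the console, here is a minimal boto3 sketch. It assumes the zip file created above, an existing Lambda execution role (the ARN below is only a placeholder), and the Node.js 6.10 runtime:

import boto3

lambda_client = boto3.client('lambda', 'us-east-1')

# Read the deployment package created earlier
with open('TestnpmLambda1.zip', 'rb') as f:
    zip_bytes = f.read()

# Create the function; the function name and role ARN are placeholders
lambda_client.create_function(
    FunctionName='TestnpmLambda1',
    Runtime='nodejs6.10',
    Role='arn:aws:iam::123456789012:role/lambda-basic-execution',
    Handler='index.handler',
    Code={'ZipFile': zip_bytes}
)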

Next, copy the code below into the “edit code inline” editor:


exports.handler = (event, context, callback) => {
    // Load the bundled uuid module and log a new random (version 4) UUID
    var uuid = require('uuid');
    var id = uuid.v4();
    console.log(id);
    callback(null, id);
};

Next, save and test the function by creating a test event.


You should then see a new random unique ID generated every time you test this function.
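You can also invoke the function programmatically instead of using the console’s test button; a small boto3 sketch, assuming the function name used above:

import boto3

lambda_client = boto3.client('lambda', 'us-east-1')

# Invoke the function synchronously and print whatever it returns
response = lambda_client.invoke(FunctionName='TestnpmLambda1')
print(response['Payload'].read())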

 


Fetch Elastic Beanstalk environment details using a Python script

Hi there!!

It’s been quite some time, and I have been busy working on multiple technologies. Recently my lead asked me to create a Python script to fetch the minimum and maximum instance counts of all the Elastic Beanstalk environments. It was great to work on this requirement. Below is the Python script.

import csv
import boto3
from botocore.exceptions import ClientError

def get_details():
    # Write the CSV header row
    row1 = ['Application Name', 'Environment Name', 'Min Count', 'Max count']
    with open('EB-instances-count.csv', "a") as csvDataFile:
        writer = csv.writer(csvDataFile)
        writer.writerow(row1)
    try:
        eb = boto3.client('elasticbeanstalk', "us-east-1")
        NameInfo = eb.describe_environments()
        for names in NameInfo['Environments']:
            app_name = names['ApplicationName']
            env_name = names['EnvironmentName']
            response = eb.describe_configuration_settings(
                EnvironmentName=env_name,
                ApplicationName=app_name
            )
            # These indexes assume MinSize/MaxSize sit at fixed positions in
            # OptionSettings; filtering by OptionName is more robust
            minCount = response['ConfigurationSettings'][0]['OptionSettings'][4]
            maxCount = response['ConfigurationSettings'][0]['OptionSettings'][3]
            minVal = minCount['Value']
            maxVal = maxCount['Value']
            print("Gathering count for Environment: " + env_name)
            fields = [app_name, env_name, minVal, maxVal]
            with open('EB-instances-count.csv', "a") as csvDataFile:
                writer = csv.writer(csvDataFile)
                writer.writerow(fields)
    except ClientError as e:
        # An environment can disappear between the two API calls; skip it
        if e.response['Error']['Code'] == "InvalidParameterValue":
            print(env_name + " Environment not found, so skipping it")
        pass

if __name__ == '__main__':
    get_details()

In this script I am getting the application names and environment names of the Elastic Beanstalk environments in a region, looping through them, and fetching the min and max instance counts. After fetching the counts, I write them to a .csv (spreadsheet) file. We can run this script at any time to know the current count of instances being used. This script can be further updated/modified to fetch other information about the Elastic Beanstalk environments.

The interesting part is filtering the required information out of the response. The other thing is that if an environment has just been deleted, it takes some time to disappear from the console, so we might see an error: we can still get the environment name, but not its settings, since it is already deleted.
So in this case we have to capture that particular error in the exception block and ignore it.
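If you do not want to depend on the positions of the entries in OptionSettings, a small sketch of a safer lookup, filtering by the standard aws:autoscaling:asg MinSize/MaxSize option names (the helper function name is just an example), could look like this:

# Pick MinSize/MaxSize out of OptionSettings by name instead of by index
def get_min_max(option_settings):
    min_val = max_val = None
    for option in option_settings:
        if option.get('Namespace') == 'aws:autoscaling:asg':
            if option.get('OptionName') == 'MinSize':
                min_val = option.get('Value')
            elif option.get('OptionName') == 'MaxSize':
                max_val = option.get('Value')
    return min_val, max_val

# Usage: min_val, max_val = get_min_max(response['ConfigurationSettings'][0]['OptionSettings'])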

Note: Be careful about the indentation 🙂

Let me know if you have any questions. Thanks!

EBS snapshot deletion by filtering tags

Hey guys!!

Having daily backups of your data is one of the most important things in the IT industry. EBS snapshots are used to back up Amazon EBS volumes along with their data. Taking regular backups of the volumes decreases the risk of disaster in case of failures. For more detail refer to this post here

Here we are taking EBS snapshots of the production environment daily, and it is not necessary to keep many snapshots since the cost will increase. So in such cases we delete each snapshot 10 days after its backup date, so that we end up having 10 snapshots at any given point in time.

The Python script below uses the boto3 library to connect to AWS and fetch the details of the services. When an EBS snapshot is created for an EC2 instance, a tag is created on the snapshot with the instance ID details, and a DateToDelete key whose value is the date 10 days in the future.
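For reference, here is a minimal sketch of how such a snapshot could be created and tagged. The volume ID and instance ID below are placeholders; the tag names follow the conventions used in the cleanup script that follows:

import datetime
import boto3

ec = boto3.client('ec2')

# Create the snapshot and tag it so that the cleanup script can find it later
delete_on = (datetime.date.today() + datetime.timedelta(days=10)).strftime('%Y-%m-%d')
snap = ec.create_snapshot(VolumeId='vol-0123456789abcdef0', Description='Daily backup')
ec.create_tags(
    Resources=[snap['SnapshotId']],
    Tags=[
        {'Key': 'DateToDelete', 'Value': delete_on},
        {'Key': 'ebsSnaphots_clean', 'Value': 'true'},
        {'Key': 'snap_InstanceID', 'Value': 'i-0123456789abcdef0'},
    ]
)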

We will use two filter lists: one to select snapshots tagged with ebsSnaphots_clean:true and a DateToDelete value equal to today’s date, and another to select instances tagged with Environment:Prod.
Next we use a for loop to parse through all the EC2 instance details that carry the tag key and value Environment:Prod.

Similarly, we parse through the EBS snapshots selected by the snapshot filter.
Next we fetch the tags and compare the snapshot’s instance ID tag with the instance ID of each production EC2 instance, and if they match, that snapshot is deleted.

import datetime
import boto3

ec = boto3.client('ec2')

def lambda_handler(event, context):
    # Snapshots are deleted on the day stored in their DateToDelete tag
    Deletion_date = datetime.date.today().strftime('%Y-%m-%d')
    firstFilter = [
        {'Name': 'tag-key', 'Values': ['DateToDelete']},
        {'Name': 'tag-value', 'Values': [Deletion_date]},
        {'Name': 'tag-key', 'Values': ['ebsSnaphots_clean']},
        {'Name': 'tag-value', 'Values': ['true']},
    ]

    # Only instances tagged as production are considered
    secondFilter = [
        {'Name': 'tag-key', 'Values': ['Environment']},
        {'Name': 'tag-value', 'Values': ['Prod']},
    ]

    snapshot_details = ec.describe_snapshots(Filters=firstFilter)
    ec2_details = ec.describe_instances(Filters=secondFilter)

    for myinst in ec2_details['Reservations']:
        for instID in myinst['Instances']:
            print("The instanceID is %s" % instID['InstanceId'])
            Instance_ID = instID['InstanceId']
            for snap in snapshot_details['Snapshots']:
                print("Checking Snapshot %s" % snap['SnapshotId'])
                for tag in snap['Tags']:
                    if tag['Key'] == 'snap_InstanceID':
                        match_instance = tag['Value']
                        if Instance_ID == match_instance:
                            print("The instanceID " + Instance_ID + " matches the snapshot's assigned instanceID tag " + match_instance + " for snapshot %s" % snap['SnapshotId'])
                            print("Deleting snapshot %s" % snap['SnapshotId'])
                            ec.delete_snapshot(SnapshotId=snap['SnapshotId'])
                        else:
                            print("The instance " + Instance_ID + " is of a different environment and does not match snapshot " + match_instance)
                    else:
                        print("no matches")

Note: Please check and take care of the indentation

Thanks!!

CloudFormation template to create CodeDeploy application

The requirement is to use CloudFormation stacks to create a CodeDeploy application and a deployment group with the required configuration. Although we can find information about which resources to use, it is all in bits and pieces. It took me a couple of hours to understand it, write the CloudFormation templates, and use them to create the CodeDeploy application.

Basically we have to create two CloudFormation stacks.
1. stack1 – to create the CodeDeploy application
2. stack2 – to create the deployment group with the other parameters and associate it with the CodeDeploy application created.
The AWS::CodeDeploy::Application resource creates an AWS CodeDeploy application. You can use the AWS::CodeDeploy::DeploymentGroup resource to associate the application with an AWS CodeDeploy deployment group.
stack1 – codedeploy-appName.json
creates the CodeDeploy application

stack2 – codedeploy-deployGrp.json
associates with the CodeDeploy application and creates the deployment group, deployment config name, EC2 tag filters, and ServiceRoleArn.
Below are the templates.
codedeploy-appName.json

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "MyCodeDeployApp": {
      "Type": "AWS::CodeDeploy::Application",
      "Properties": {
        "ApplicationName": "App11"
      }
    }
  }
}

codedeploy-deployGrp.json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "DeploymentConfigurationName": {
      "Description": "With predefined configurations, you can deploy application revisions to one instance at a time, half of the instances at a time, or all the instances at once.",
      "Type": "String",
      "Default": "CodeDeployDefault.OneAtATime",
      "ConstraintDescription": "Must be a valid Deployment configuration name"
    }
  },

  "Resources": {
    "MyCodeDeploy": {
      "Type": "AWS::CodeDeploy::DeploymentGroup",
      "Properties": {
        "ApplicationName": "App11",
        "DeploymentConfigName": {
          "Ref": "DeploymentConfigurationName"
        },
        "DeploymentGroupName": "DeployGrp11",
        "Ec2TagFilters": [{
          "Key": "Name",
          "Type": "KEY_AND_VALUE",
          "Value": "App1-env"
        }],
        "ServiceRoleArn": "arn:aws:iam::326840742362:role/deployRole"
      }
    }
  }
}
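If you would rather launch these stacks from a script than from the console, a minimal boto3 sketch could look like the one below. The stack names are just examples, and the templates are read from the two files shown above:

import boto3

cfn = boto3.client('cloudformation', 'us-east-1')

# Stack 1: the CodeDeploy application
with open('codedeploy-appName.json') as f:
    cfn.create_stack(StackName='codedeploy-app', TemplateBody=f.read())

# Wait for stack 1, then create stack 2: the deployment group that references the application
cfn.get_waiter('stack_create_complete').wait(StackName='codedeploy-app')

with open('codedeploy-deployGrp.json') as f:
    cfn.create_stack(StackName='codedeploy-deploygrp', TemplateBody=f.read())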

Access control for S3

Do you want to control the access options for your S3 buckets and the objects in them?

Amazon Simple Storage Service (S3) is storage for the Internet.

There are different types of access control for an S3 bucket and the objects in it.

Below are the possible options.

  1. We can use a bucket policy to
  • Grant access to a bucket (to view/list the objects in the bucket)
  • Grant access to view/access the content of an object in the bucket.
  • Grant access to edit the access control list for the bucket.

 

  2. We can use an IAM policy to grant access to the S3 console to only view/list the buckets and the objects inside them. (Note: they will not be able to access the data of an object)
  • AmazonS3FullAccess
  • AmazonS3ReadOnlyAccess

           A custom IAM policy can be used to

  • Grant access to a bucket (to view/list the objects in the bucket)
  • Grant access to view/access the content of an object in the bucket.

 

  3. Provide public access
  • Grant access to view/access the content of an object in the bucket.
  • Grant access to edit the access control list for the bucket.

 

  4. Pre-signed URLs can be used to provide a URL that your users can employ to upload files with predefined names, as well as to grant time-limited permission to download objects or list the contents of a bucket.

Pre-signed URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don’t require them to have AWS security credentials or permissions.

This provides your users with limited access to a specific resource, removing the need to grant public access to your bucket.

When you create a pre-signed URL, you must provide your security credentials and specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. Pre-signed URLs are valid only for the specified duration.
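A minimal boto3 sketch for generating pre-signed URLs (the bucket and key names below are placeholders) could look like this:

import boto3

s3 = boto3.client('s3')

# Generate a URL that allows uploading the given key for the next hour
upload_url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'my-example-bucket', 'Key': 'uploads/report.csv'},
    ExpiresIn=3600
)
print(upload_url)

# Generate a URL that allows downloading the same object for 15 minutes
download_url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-example-bucket', 'Key': 'uploads/report.csv'},
    ExpiresIn=900
)
print(download_url)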

Do you want to set up Multi-Factor Authentication (MFA) for EC2?

Last week I came across enabling Multi-Factor Authentication for EC2 instances, so that an extra verification code has to be provided along with the password to authenticate and log in to the EC2 instances. When I tried to implement this, the information was spread across multiple blogs, and the Google Authenticator download link was not working because the blogs had not been updated in a while.

It took me 2 hours to finally put all the bits together and activate the authentication.

Below are the steps I followed to implement MFA authentication successfully.

My assumption is that you already have an EC2 instance created. If so, log in to it and switch to root.

Then install the pam-devel, make, gcc-c++ and wget packages.

Linux uses PAM (Pluggable Authentication Modules) to integrate the Google Authenticator MFA, so make sure that the PAM and PAM-Devel packages are installed on your system.

[ec2-user@ip-192-26-10-97 ~]$ sudo su -

[root@ip-192-26-10-97 ~]# yum install pam-devel make gcc-c++ wget
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main/latest                                         | 2.1 kB     00:00
amzn-updates/latest                                      | 2.3 kB     00:00
Package 1:make-3.82-21.10.amzn1.x86_64 already installed and latest version
Package wget-1.18-1.18.amzn1.x86_64 already installed and latest version
Resolving Dependencies
Dependency Updated:
glibc.x86_64 0:2.17-157.169.amzn1   glibc-common.x86_64 0:2.17-157.169.amzn1
Complete!
[root@ip-192-26-10-97 ~]#

Download the google authenticator source file using wget and install it

Download location: http://www.filewatcher.com/m/libpam-google-authenticator-1.0-source.tar.bz2.32708-0.html

[root@ip-192-26-10-97 ~]# wget ftp://ftp.netbsd.org/pub/pkgsrc/distfiles/libpam-google-authenticator-1.0-source.tar.bz2

--2017-03-01 10:15:55--  ftp://ftp.netbsd.org/pub/pkgsrc/distfiles/libpam-google-authenticator-1.0-source.tar.bz2
=> ‘libpam-google-authenticator-1.0-source.tar.bz2’
Resolving ftp.netbsd.org (ftp.netbsd.org)... 199.233.217.201, 2001:470:a085:999::21
Connecting to ftp.netbsd.org (ftp.netbsd.org)|199.233.217.201|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD (1) /pub/pkgsrc/distfiles ... done.
==> SIZE libpam-google-authenticator-1.0-source.tar.bz2 ... 32708
==> PASV ... done.    ==> RETR libpam-google-authenticator-1.0-source.tar.bz2 ... done.
Length: 32708 (32K) (unauthoritative)
libpam-google-authenticator-1.0-source. 100%[============================================================================>]  31.94K  37.2KB/s    in 0.9s
2017-03-01 10:16:00 (37.2 KB/s) - ‘libpam-google-authenticator-1.0-source.tar.bz2’ saved [32708]
[root@ip-172-31-24-97 ~]#

Extract the downloaded file and install it using the commands below

tar -xvf libpam-google-authenticator-1.0-source.tar.bz2
cd libpam-google-authenticator-1.0
make
make install

If you want more information about what these commands do, refer to: https://robots.thoughtbot.com/the-magic-behind-configure-make-make-install

Make changes to the PAM and SSH configuration files to enable Multi-Factor Authentication for SSH logins. Add the line below to the file /etc/pam.d/sshd

auth       required     pam_google_authenticator.so

In /etc/ssh/sshd_config, change the two parameters below to yes and save the file

PasswordAuthentication yes

ChallengeResponseAuthentication yes

Restart the sshd service

/etc/init.d/sshd restart

Next, log back in to the EC2 instance

Add a new user (the command below sets the user's primary group to root)

[root@ip-192-26-10-97 ~]# sudo useradd -s /bin/bash -m -d /home/testuser1 -g root testuser1

Change the password of the new user

[ec2-user@ip-192-26-10-97 ~]$ sudo passwd testuser1
Changing password for user testuser1.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[ec2-user@ip-192-26-10-97 ~]$

Add the user to the sudoers file by using the command sudo visudo

testuser1  ALL=(ALL:ALL) ALL

Restart the sshd service

/etc/init.d/sshd restart

So now we have the new user created. You can try to log in using the user's credentials, and you will be prompted for the password only.

Next, switch to the new user and enable MFA for this user by running the command google-authenticator

[root@ip-192-26-10-97 ~]# su - testuser1

[testuser1@ip-192-26-10-97 ~]$ google-authenticator
Do you want authentication tokens to be time-based (y/n) y
https://www.google.com/chart?chs=200x200&chld=M|0&cht=yr&nnhl=otpauth://totp/testuser1@ip-172-31-24-97%3Fsecret%3DXGR5RZHUEE7CIOXN

Your new secret key is: XGR8RXHUHFGQ7CROZN
Your verification code is 533145
Your emergency scratch codes are:

14263591
47294949
18396452
85438789
16308009

Do you want me to update your "/home/testuser1/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

[testuser1@ip-192-26-10-97 ~]$

So now we have MFA enabled for the required user.
Next, download the Google Authenticator app on your Android or iOS mobile and give the account name as

testuser1

Enter the secret key that was generated when you ran the google-authenticator command. Voila!! You should now see a verification code.

Log in to the EC2 instance using the new user's credentials and it will ask for a verification code.