Update SSM parameter store on another AWS account using AssumeRole

Hi,

In this post we are going to update the SSM Parameter Store in a 2nd AWS account with details from the 1st AWS account. For this we will create an AWS Lambda function with Python code. The Python code assumes a role in the other account and uses the temporary STS credentials it receives to connect to and update the SSM parameter in the 2nd AWS account.

Create a Lambda function with the Python 2.7 runtime and add the below code to it:


#!/usr/bin/python

import boto3

# Details of the role to assume in the 2nd (target) AWS account
account_id = '112211221122'
account_role = 'AssumeRole-SSM'
region_Name = 'us-east-1'

# Value to store in the SSM parameter
AmiId = 'ami-119c8dc1172b9c8e'

def lambda_handler(event, context):
    print("Assuming role for account: " + account_id)
    credentials = assume_role(account_id, account_role)

    # Call the function to update the SSM parameter with the value
    updateSSM_otherAccount(credentials, region_Name, account_id)

def assume_role(account_id, account_role):
    sts_client = boto3.client('sts')
    role_arn = 'arn:aws:iam::' + account_id + ':role/' + account_role
    print(role_arn)

    '''Call the assume_role method of the STS client and pass the role
    ARN and a role session name'''
    assumedRoleObject = sts_client.assume_role(RoleArn=role_arn,
                                               RoleSessionName="NewAccountRole")
    print(assumedRoleObject['Credentials'])

    '''From the response that contains the assumed role, return the temporary
    credentials that can be used to make subsequent API calls'''
    return assumedRoleObject['Credentials']

def updateSSM_otherAccount(creds, region_Name, account_id):
    # Create an SSM client for the 2nd account using the temporary credentials
    client1 = boto3.client('ssm',
                           region_name=region_Name,
                           aws_access_key_id=creds['AccessKeyId'],
                           aws_secret_access_key=creds['SecretAccessKey'],
                           aws_session_token=creds['SessionToken'])

    ssmparam_update = client1.put_parameter(Name='DevAMI',
                                            Description='the latest ami id of Dev env',
                                            Value=AmiId, Type='String', Overwrite=True)
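
To confirm the update, the parameter can be read back from the 2nd account. A quick check with the AWS CLI, assuming a local profile (here called account2) that holds credentials for the 2nd account:

# account2 is an assumed profile name for the 2nd AWS account
aws ssm get-parameter --name DevAMI --region us-east-1 --profile account2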

Steps to configure AssumeRole

Note: Make sure to modify the account IDs in the below JSON policies

1. Add the below inline policy to the role attached to the Lambda in the 1st AWS account (556655665566); a CLI alternative is shown right after the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "123",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::112211221122:role/AssumeRole-SSM"
            ]
        }
    ]
}
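
If you prefer the CLI over the console, the same inline policy can be attached with put-role-policy. A sketch, assuming the Lambda's execution role is named lambda-ec2-role (as referenced in the trust policy of step 2) and the JSON above is saved as assume-ssm-role.json:

# lambda-ec2-role and assume-ssm-role.json are assumed names
aws iam put-role-policy --role-name lambda-ec2-role --policy-name AllowAssumeSSMRole --policy-document file://assume-ssm-role.json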

2. Create a role named AssumeRole-SSM in the 2nd AWS account (112211221122), edit its trust relationship, and add the below policy so that the Lambda role from the 1st account is allowed to assume it.
Attach SSM permissions to this role (for example, ssm:PutParameter) so that it can update the Parameter Store; a CLI equivalent is shown after the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::556655665566:role/lambda-ec2-role"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
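
The role in the 2nd account can also be created from the CLI. A sketch, assuming the trust policy above is saved as trust.json and that the AmazonSSMFullAccess managed policy is acceptable for granting Parameter Store access:

# Run with credentials for the 2nd AWS account (112211221122)
aws iam create-role --role-name AssumeRole-SSM --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name AssumeRole-SSM --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess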

Deployment into Kubernetes on Google Cloud

Let's deploy an application to Kubernetes on GCP (Google Cloud Platform).

Install the Google Cloud SDK, which includes the gcloud command-line tool.

Install the Kubernetes command-line tool, kubectl. It is used to communicate with Kubernetes, the cluster orchestration system that powers Kubernetes Engine clusters.
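
A quick way to verify both tools are in place (kubectl can also be installed as a gcloud component):

gcloud version
gcloud components install kubectl
kubectl version --client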

Create a project in your GCP console and note its Project ID.
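
The Project ID can also be looked up from the command line:

gcloud auth login
gcloud projects list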

Next, set your project and zone with the below commands

gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b

export PROJECT_ID="$(gcloud config get-value project -q)"

Download the sample applications
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples

Switch to the directory

cd kubernetes-engine-samples/hello-app

Install Docker using the below commands and verify that it is running

sudo yum install docker -y
docker --version
sudo service docker status
sudo service docker start
sudo docker images
sudo docker ps

The value of PROJECT_ID will be used to tag the container image for pushing it to your private Container Registry.

docker build -t gcr.io/${PROJECT_ID}/my-app:v1 .

The gcr.io prefix refers to Google Container Registry, where the image will be stored. Let's push the Docker image to GCR (if GCR is not enabled yet, enable it from your console or with the commands shown after the push).

docker images
gcloud docker -- push gcr.io/${PROJECT_ID}/my-app:v1
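
If the push fails because GCR is not enabled or Docker is not authenticated to gcr.io, the following commands should help (containerregistry.googleapis.com is the Container Registry API):

gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker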

Create a container cluster

Now that the container image is stored in a registry, you need to create a container cluster to run the container image. A cluster consists of a pool of Compute Engine VM instances running Kubernetes, the open source cluster orchestration system that powers Kubernetes Engine.

Run the following commands to create a four-node cluster named myapp-cluster and list its Compute Engine instances:

gcloud container clusters create myapp-cluster --num-nodes=4
gcloud compute instances list
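
If kubectl is not already pointed at the new cluster, fetch credentials for it (cluster name and zone as set above) and confirm the nodes are up:

gcloud container clusters get-credentials myapp-cluster --zone us-central1-b
kubectl get nodes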

Let's deploy the application to Kubernetes

To deploy and manage applications on a Kubernetes Engine cluster, you must communicate with the Kubernetes cluster management system. You typically do this by using the kubectl command-line tool.

The kubectl run command below causes Kubernetes to create a Deployment named myapp-web on your cluster. The Deployment manages multiple copies of your application, called replicas, and schedules them to run on the individual nodes in your cluster.

Run the following command to deploy your application, listening on port 8090:

kubectl run myapp-web --image=gcr.io/${PROJECT_ID}/my-app:v1 --port 8090
kubectl get deployment myapp-web
kubectl get pods

Expose your application to the Internet

kubectl expose deployment myapp-web --type=LoadBalancer --port 80 --target-port 8090

The kubectl expose command above creates a Service resource, which provides networking and IP support to your application’s Pods. Kubernetes Engine creates an external IP and a Load Balancer for your application.

The --port flag specifies the port number configured on the Load Balancer, and the --target-port flag specifies the port number that is used by the Pods created by the kubectl run command from the previous step.
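
To see how these two ports map onto the Service, you can describe it (the Port and TargetPort fields correspond to the flags above):

kubectl describe service myapp-web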

Get your service's external IP address using the below command and open it in a browser, for example:

kubectl get service
http://223.0.123.0

Scale up your application using the below commands

kubectl scale deployment myapp-web --replicas=2
kubectl get deployment myapp-web
kubectl get pods

To deploy a new version of your app, use the below commands

docker build -t gcr.io/${PROJECT_ID}/my-app:v2 .
gcloud docker -- push gcr.io/${PROJECT_ID}/my-app:v2
kubectl set image deployment/myapp-web myapp-web=gcr.io/${PROJECT_ID}/my-app:v2
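
The progress of the rolling update can be followed with:

kubectl rollout status deployment/myapp-web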

Clean up using the below commands

kubectl delete service myapp-web
gcloud compute forwarding-rules list
gcloud container clusters delete myapp-cluster
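
The images pushed earlier can also be removed from GCR (tags as used above):

gcloud container images delete gcr.io/${PROJECT_ID}/my-app:v1 --force-delete-tags
gcloud container images delete gcr.io/${PROJECT_ID}/my-app:v2 --force-delete-tags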