AWS AUTOMATION USING PYTHON

In this course we will discuss AWS automation with Python, focusing on setting up the environment, creating and configuring S3 buckets, publishing a website to an S3 bucket, and creating and configuring EC2 instances. We will also create an auto-scaling group and a Lambda function for CloudWatch events, and integrate AWS with Slack using the Serverless Framework.

1. SETUP:

For this course I used a t2.micro Amazon Linux 2 AWS instance with 1 GB of RAM and 1 vCPU.

Install python:

$sudo yum install python3 -y

Install nodejs:

$curl -sL https://rpm.nodesource.com/setup_10.x | sudo bash -

$sudo yum install nodejs -y

$node --version

$npm --version

Install awscli (already included on Amazon Linux; install it only on other distributions):

$sudo yum install awscli -y

Install Git:

$sudo yum install git -y

Install OpenSSH (not required if it is already on your machine):

$sudo yum install openssh -y

Install pipenv:

$sudo pip3 install pipenv

Install serverless:

$npm install -g serverless

Install ipython:

$pip3 install ipython

AWS Setup:

  • Create an AWS account.
  • Create a group named admin in IAM and grant it administrator access.
  • Create a user with programmatic access and add it to the admin group.
  • Download the .csv credentials file.

Configure AWS account:

 $aws configure --profile pythonAutomation1
  • Enter the Access Key ID and Secret Access Key from the credentials file.
  • Select the region and set the output format to json.
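The command prompts for each value in turn, for example (the values shown here are placeholders):

AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-2
Default output format [None]: json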

Generate an SSH key pair on the system using ssh-keygen.
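For example, a typical invocation (the key type and size shown are one common choice; accept the default file location when prompted):

$ssh-keygen -t rsa -b 4096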

Setup Git:

  • Create a Git account (for example on GitHub).
  • Create a repository.
  • Go to Settings, add an SSH key, and paste the contents of the .pub key generated by the ssh-keygen command.
  • Clone the Git repository to the local system (see the example command after this list).
  • Start the work by creating a directory for each project.
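A typical clone command over SSH looks like this (the user and repository names are placeholders):

$git clone git@github.com:<your-user>/<your-repo>.git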

Publish a Website to S3:

Change into the cloned repository and create a directory called 01-webotron.

Run command

$pipenv --three

Install boto3, and ipython as a development dependency:

$pipenv install boto3

$pipenv install -d ipython

List the buckets in S3 from the command line:

$aws s3 ls --profile pythonAutomation1

Create a bucket from the command line:

$aws s3 mb s3://automatingawssrini-console

2. List the S3 buckets using ipython scripting and create a new bucket. For this, open a terminal and change into the 01-webotron directory.

Run command

$pipenv --three

Run command

$pipenv run ipython

Then write the following script in ipython:

import boto3
session = boto3.Session(profile_name='pythonAutomation1')
s3 = session.resource('s3')
for bucket in s3.buckets.all():
    print(bucket)
new_bucket = s3.create_bucket(
    Bucket='automatingawssrini1-boto3',
    CreateBucketConfiguration={'LocationConstraint': 'us-east-2'})
new_bucket
for bucket in s3.buckets.all():
    print(bucket)

Next, we want to list the objects in a bucket. For that, write and execute a script like the one below:

import boto3
import click

session = boto3.Session(profile_name='pythonAutomation1')
s3 = session.resource('s3')

@click.group()
def cli():
    "webotron deploys websites to aws"
    pass

@cli.command('list_buckets')
def list_buckets():
    "List all s3 buckets"
    for bucket in s3.buckets.all():
        print(bucket)

@cli.command('list-bucket-objects')
@click.argument('bucket')
def list_bucket_objects(bucket):
    "List objects in an s3 bucket"
    for obj in s3.Bucket(bucket).objects.all():
        print(obj)

if __name__ == '__main__':
    cli()

After testing the script in ipython, copy it into a new file.

Run the script using the command: python filename.py list-bucket-objects automatingawssrini1-boto3

Then you will get the objects present in the bucket.

3. Create and configure a website with S3 using boto3:

The manual process is as follows:

  1. First, create a sample HTML file and push it to the S3 bucket.

  2. Change the bucket policy to allow public read access using a bucket permissions policy.

  3. Then enable static website hosting under the bucket's Properties.

  4. A website endpoint link is generated; copy the link and open it in a new tab, and your website will be displayed.

Sample html file:

<!DOCTYPE html>
<html lang = "en" dir = "ltr">
  <head>
      <meta charset = "utf-8">
     <title>My First Website</title>
  </head>
  <body>
     <p>This is my website. There are many like it, but this is mine.</p>
  </body>
</html>

This time we create the bucket, change the policy, and push the website using a Python script, as follows.

import boto3
import click
from botocore.exceptions import ClientError

session = boto3.Session(profile_name='pythonAutomation1')
s3 = session.resource('s3')
@click.group()
def cli():
    "Webotron deploys websites to AWS"
    pass
@cli.command('list-buckets')
def list_buckets():
    "List all s3 buckets"
    for bucket in s3.buckets.all():
        print(bucket)
@cli.command('list-bucket-objects')
@click.argument('bucket')
def list_bucket_objects(bucket):
    "List objects in an s3 bucket"
    for obj in s3.Bucket(bucket).objects.all():
        print(obj)

@cli.command('setup-bucket')
@click.argument('bucket')
def setup_bucket(bucket):
    "Create and configure S3 bucket"
    s3_bucket = None
    try:
        s3_bucket = s3.create_bucket(
            Bucket=bucket,
            CreateBucketConfiguration={'LocationConstraint': session.region_name}
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'BucketAlreadyOwnedByYou':
            s3_bucket = s3.Bucket(bucket)
        else:
            raise e

    policy = """
    {
      "Version":"2012-10-17",
      "Statement":[{
      "Sid":"PublicReadGetObject",
      "Effect":"Allow",
      "Principal": "*",
          "Action":["s3:GetObject"],
          "Resource":["arn:aws:s3:::%s/*"
          ]
        }
      ]
    }
    """ % s3_bucket.name
    policy = policy.strip()

    pol = s3_bucket.Policy()
    pol.put(Policy=policy)

    ws = s3_bucket.Website()
    ws.put(WebsiteConfiguration={
        'ErrorDocument': {
            'Key': 'error.html'
        },
        'IndexDocument': {
            'Suffix': 'index.html'
        }
    })

    return

if __name__ == '__main__':
    cli()
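Assuming the script is saved as webotron.py (the filename here is an assumption), the bucket can then be created and configured from the command line, for example:

$python webotron.py setup-bucket automatingawssrini-commandline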

4. Syncing a directory to S3 with Python:

  1. Download a website template from the web.

  2. Use the pathlib library (part of the standard library in Python 3) and the mimetypes module.

  3. Sync the website directory to S3.

  4. Make the bucket public, then you can access the website.

The script is as follows:

import boto3
import click
from botocore.exceptions import ClientError
from pathlib import Path
import mimetypes

session = boto3.Session(profile_name='pythonAutomation1')
s3 = session.resource('s3')
@click.group()
def cli():
    "Webotron deploys websites to AWS"
    pass
@cli.command('list-buckets')
def list_buckets():
    "List all s3 buckets"
    for bucket in s3.buckets.all():
        print(bucket)
@cli.command('list-bucket-objects')
@click.argument('bucket')
def list_bucket_objects(bucket):
    "List objects in an s3 bucket"
    for obj in s3.Bucket(bucket).objects.all():
        print(obj)
@cli.command('setup-bucket')
@click.argument('bucket')
def setup_bucket(bucket):
    "Create and configure S3 bucket"
    s3_bucket = None
    try:
        s3_bucket = s3.create_bucket(
            Bucket=bucket,
            CreateBucketConfiguration={'LocationConstraint': session.region_name}
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'BucketAlreadyOwnedByYou':
            s3_bucket = s3.Bucket(bucket)
        else:
            raise e
    policy = """
    {
      "Version":"2012-10-17",
      "Statement":[{
      "Sid":"PublicReadGetObject",
      "Effect":"Allow",
      "Principal": "*",
          "Action":["s3:GetObject"],
          "Resource":["arn:aws:s3::automatingawssrini-commandline/*"
          ]
        }
      ]
    }
    """ % s3_bucket.name
    policy = policy.strip()
    pol = s3_bucket.Policy()
    pol.put(Policy=policy)
    ws = s3_bucket.Website()
    ws.put(WebsiteConfiguration={
        'ErrorDocument': {
            'Key': 'error.html'
        },
        'IndexDocument': {
            'Suffix': 'index.html'
        }
    })

    return

def upload_file(s3_bucket, path, key):
    "Upload a file to S3 with a guessed content type"
    content_type = mimetypes.guess_type(key)[0] or 'text/plain'

    s3_bucket.upload_file(
        path,
        key,
        ExtraArgs={
            'ContentType': content_type
        })

@cli.command('sync')
@click.argument('pathname', type=click.Path(exists=True))
@click.argument('bucket')
def sync(pathname, bucket):
    "Sync contents of PATHNAME to BUCKET"
    s3_bucket = s3.Bucket(bucket)

    root = Path(pathname).expanduser().resolve()

    def handle_directory(target):
        for p in target.iterdir():
            if p.is_dir(): handle_directory(p)
            if p.is_file(): upload_file(s3_bucket, str(p), str(p.relative_to(root)))

    handle_directory(root)
if __name__ == '__main__':
    cli()

After writing the script, run the following command to sync the site:

$python web.py sync /home/ec2-user/code/AUTOMATING-AWS-WITH-PYTHON/webotron/petclinic automatingawssrini-commandline

SETUP NOTIFICATIONS FOR CLOUDWATCH EVENTS:

5. Creating an EC2 instance:

  1. Create a directory named notifon and install boto3.

  2. Run the pipenv --three command and install ipython.

  3. Write the code to create and configure the EC2 instance using ipython, as follows:

$pipenv shell

$pipenv run ipython

import boto3
session = boto3.Session(profile_name='pythonAutomation1')
ec2 = session.resource('ec2')

# Create a key pair and save the private key locally
key_name = 'python_automation_key'
key_path = key_name + '.pem'
key = ec2.create_key_pair(KeyName=key_name)
with open(key_path, 'w') as key_file:
    key_file.write(key.key_material)

# Restrict the permissions on the private key file
import os, stat
os.chmod(key_path, stat.S_IRUSR | stat.S_IWUSR)
get_ipython().run_line_magic('ls', '-l python_automation_key.pem')

# Find the Amazon Linux 2 AMI by name (AMI IDs differ between regions,
# so filtering by name also works in other regions such as ap-southeast-2)
ami_name = 'amzn2-ami-hvm-2.0.20190618-x86_64-gp2'
filters = [{'Name': 'name', 'Values': [ami_name]}]
img = list(ec2.images.filter(Owners=['amazon'], Filters=filters))[0]
img.name

ec2_apse2 = session.resource('ec2', region_name='ap-southeast-2')
list(ec2_apse2.images.filter(Owners=['amazon'], Filters=filters))

# Launch a t2.micro instance using the key pair
instances = ec2.create_instances(ImageId=img.id, MinCount=1, MaxCount=1,
                                 InstanceType='t2.micro', KeyName=key.key_name)
inst = instances[0]
inst.wait_until_running()
inst.reload()
inst.public_dns_name

# Open SSH (port 22) in the instance's security group
sg = ec2.SecurityGroup(inst.security_groups[0]['GroupId'])
sg.authorize_ingress(IpPermissions=[{
    'FromPort': 22,
    'ToPort': 22,
    'IpProtocol': 'tcp',
    'IpRanges': [{'CidrIp': '172.31.85.64/32'}]
}])
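With port 22 open and the private key saved, you can connect to the instance (the DNS name is a placeholder):

$ssh -i python_automation_key.pem ec2-user@<instance-public-dns>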

6. Setup auto-scaling group:

  1. Create an auto-scaling group named Notifon Example.

  2. Set the minimum number of instances to 1 and the maximum to 4.

  3. Create policies for Scale Up and Scale Down.

  4. Create alarms for the policies based on CPU utilization: average load above 50% triggers scale up, below 20% triggers scale down.

  5. Run a stress test on the instance you created.

  6. Install the stress tool: sudo yum install -y stress

  7. Run the command: stress -c 1 -t 600 &

  8. Check whether the number of instances has increased.

  9. After 600 seconds the stress test ends and the group scales back down.

  10. Scaling through the alarms takes a few minutes, so we write code to execute the scaling policies directly, within seconds.

The code for scaling up is:

import boto3

session = boto3.Session(profile_name='pythonAutomation1')

as_client = session.client('autoscaling')

as_client.execute_policy(AutoScalingGroupName='Notifon Example', PolicyName='Scale Up')

In case of scaling down:

import boto3

session = boto3.Session(profile_name='pythonAutomation1')

as_client = session.client('autoscaling')

as_client.execute_policy(AutoScalingGroupName='Notifon Example', PolicyName='Scale Down')
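To confirm that the policy fired, you can inspect the group with the same client; a minimal check (only the group name from above is assumed):

as_client.describe_auto_scaling_groups(AutoScalingGroupNames=['Notifon Example'])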

7. Create an AWS Lambda function to handle CloudWatch events:

  1. Open AWS Lambda and click Create a function.

  2. Name the function handlecloudwatchevent and select Python 3.7 as the runtime.

  3. For the role, select "Create a new role from templates" and name it handlecloudwatcheventRole.

  4. Select the "Simple microservice permissions" policy template.

  5. Create the function.

  6. Write the Lambda function code like this:

import json

def lambda_handler(event, context):
    # TODO implement
    print(event)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

  7. You can check the CloudWatch event logs.

8. Create and set up the Serverless Framework:

We cannot manage every operation from the Lambda console alone, so we use the Serverless Framework for deployment.

  1. Make a directory called notifier.

  2. Change into it and run the command: $serverless create --template aws-python3 --name notifon-notifier

  3. (Or use the short form: $sls create -t aws-python3 -n notifon-notifier)

  4. Edit the serverless.yml file as follows.

Edit the provider section:

provider:

  name: aws

  runtime: python3.7

  profile: pythonAutomation1

  region: us-east-2

Below that is another section to edit as well:

functions:

  hello:

    handler: handler.hello

Then edit the handler.py file as follows:

import json

def hello(event, context):
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event
    }

    return {
        "statusCode": 200,
        "body": json.dumps(body)
    }

Run the command $sls deploy

This deploys the Lambda function to your AWS account.
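You can invoke the deployed function from the command line to verify it:

$sls invoke -f hello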

9. Configure the AWS account with Slack:

  • Create a Slack account and obtain the incoming webhook URL for your channel.

  • Install the requests library: $pipenv install requests

  • Open an ipython session and write the script below.

      import requests

      url = "url of the slack"

      data = {"text": "Hello, world."}

      requests.post(url, json=data)
    

You will get a notification message in Slack.
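To tie this into the notifier service, the same Slack post can be made from the Serverless handler. A minimal sketch, assuming the webhook URL is supplied through an environment variable named SLACK_WEBHOOK_URL (the variable name and this wiring are assumptions, not part of the original notes; the requests package must also be bundled with the deployment, for example via a packaging plugin):

import json
import os

import requests

def hello(event, context):
    # Post the incoming event to Slack via the webhook URL (assumed env var)
    url = os.environ['SLACK_WEBHOOK_URL']
    data = {"text": "CloudWatch event received: " + json.dumps(event)}
    requests.post(url, json=data)

    return {"statusCode": 200, "body": json.dumps(data)}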

10. Rekognition:

Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition on images and video that you provide. You can detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

  • Upload a file for video analysis; the results for the video are returned as labels.

  • Then open the terminal and start writing the code.

  • The code is as follows:

import boto3
from pathlib import Path

session = boto3.Session(profile_name='pythonAutomation1')
s3 = session.resource('s3')

# Create a bucket to hold the video
bucket = s3.create_bucket(
    Bucket='robinvideolyzervideos',
    CreateBucketConfiguration={'LocationConstraint': session.region_name})

# Upload the video file to the bucket
pathname = '/home/ec2-user/watch?v=668nUCeBHyY'
path = Path(pathname).expanduser().resolve()
print(path)
bucket.upload_file(str(path), str(path.name))

# Start an asynchronous label-detection job on the uploaded video
rekognition_client = session.client('rekognition')
response = rekognition_client.start_label_detection(
    Video={'S3Object': {'Bucket': bucket.name, 'Name': path.name}})
job_id = response['JobId']

# Fetch the results and inspect them
result = rekognition_client.get_label_detection(JobId=job_id)
result.keys()
result['JobStatus']
result['ResponseMetadata']
result['VideoMetadata']
result['Labels']
len(result['Labels'])
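start_label_detection is asynchronous, so get_label_detection may report a JobStatus of IN_PROGRESS at first. A minimal polling sketch (the sleep interval is an arbitrary choice):

import time

result = rekognition_client.get_label_detection(JobId=job_id)
while result['JobStatus'] == 'IN_PROGRESS':
    time.sleep(10)  # wait before asking for the job status again
    result = rekognition_client.get_label_detection(JobId=job_id)

print(result['JobStatus'], len(result['Labels']))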