Python 3 Scripting for System Administrators: Part 5

Creating Our First Class
For this course, we haven’t created any custom classes because it’s not something that we’ll do all the time, but in the case of our CLI, we need to. Our idea of having a flag of --driver that takes two distinct values isn’t something that any existing argparse.Action can do. Because of this, we’re going to follow along with the documentation and implement our own custom DriverAction class. We can put our custom class in our cli.py file and use it in our add_argument call.
src/pgbackup/cli.py
from argparse import Action, ArgumentParser

class DriverAction(Action):
    def __call__(self, parser, namespace, values, option_string=None):
        driver, destination = values
        namespace.driver = driver.lower()
        namespace.destination = destination

def create_parser():
    parser = ArgumentParser(description="""
    Back up PostgreSQL databases locally or to AWS S3.
    """)
    parser.add_argument("url", help="URL of database to backup")
    parser.add_argument("--driver",
            help="how & where to store backup",
            nargs=2,
            action=DriverAction,
            required=True)
    return parser
Adding More Tests
Our CLI is coming along, but we probably want to raise an error if the end-user tries to use a driver that we don’t understand. Let’s add a few more tests that do the following:
1. Ensure that you can’t use a driver that is unknown, like azure.
2. Ensure that the drivers for s3 and local don’t cause errors.
tests/test_cli.py (partial)
def test_parser_with_unknown_drivers():
    """
    The parser will exit if the driver name is unknown.
    """
    parser = cli.create_parser()

    with pytest.raises(SystemExit):
        parser.parse_args([url, "--driver", "azure", "destination"])

def test_parser_with_known_drivers():
    """
    The parser will not exit if the driver name is known.
    """
    parser = cli.create_parser()

    for driver in ['local', 's3']:
        assert parser.parse_args([url, "--driver", driver, "destination"])

Adding Driver Type Validation
Since we already have a custom DriverAction, we’re free to customize it to make our CLI a little more intelligent. The only drivers that we are going to support (for now) are s3 and local, so let’s add some logic to our action to ensure that the driver given is one that we can work with:
known_drivers = ['local', 's3']

class DriverAction(Action):
    def __call__(self, parser, namespace, values, option_string=None):
        driver, destination = values
        if driver.lower() not in known_drivers:
            parser.error("Unknown driver. Available drivers are 'local' & 's3'")
        namespace.driver = driver.lower()
        namespace.destination = destination
Removing Test Duplication Using pytest.fixture
Before we consider this unit of our application complete, we should clean up some of the duplication in our tests. We create the parser using create_parser in every test, but using pytest.fixture we can extract that into a separate function and inject the parser value into each test that needs it.
Here’s what our parser function will look like:
tests/test_cli.py (partial)
import pytest

@pytest.fixture
def parser():
    return cli.create_parser()
We haven’t run into this yet, but the @pytest.fixture on top of our function definition is what’s known as a “decorator”. A “decorator” is a function that takes a function and returns a modified version of it. We’ve seen that if we don’t use parentheses, our functions aren’t called, and because of that we’re able to pass functions into other functions as arguments. This particular decorator registers our function in the list of fixtures that can be injected into a pytest test. To inject our fixture, we will add an argument to our test function definition that has the same name as our fixture, in this case, parser.
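As a quick illustration of the mechanics, here’s a minimal sketch of a decorator (the log_calls function is hypothetical, not something pytest provides):

import functools

def log_calls(func):
    # Receives a function and returns a modified version of it.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def greet(name):
    return f"Hello, {name}"

print(greet("World"))  # prints "Calling greet", then "Hello, World"

Here’s the final test file: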
tests/test_cli.py
import pytest

from pgbackup import cli

url = "postgres://bob@example.com:5432/db_one"

@pytest.fixture()
def parser():
    return cli.create_parser()

def test_parser_without_driver(parser):
    """
    Without a specified driver the parser will exit
    """
    with pytest.raises(SystemExit):
        parser.parse_args([url])

def test_parser_with_driver(parser):
    """
    The parser will exit if it receives a driver
    without a destination
    """
    with pytest.raises(SystemExit):
        parser.parse_args([url, "--driver", "local"])

def test_parser_with_driver_and_destination(parser):
    """
    The parser will not exit if it receives a driver
    with a destination
    """
    args = parser.parse_args([url, "--driver", "local", "/some/path"])

    assert args.driver == "local"
    assert args.destination == "/some/path"

def test_parser_with_unknown_drivers(parser):
    """
    The parser will exit if the driver name is unknown.
    """
    with pytest.raises(SystemExit):
        parser.parse_args([url, "--driver", "azure", "destination"])

def test_parser_with_known_drivers(parser):
    """
    The parser will not exit if the driver name is known.
    """
    for driver in ['local', 's3']:
        assert parser.parse_args([url, "--driver", driver, "destination"])
Now, all of our tests should pass, and we’re in a good spot to make a commit.

Install pytest-mock
Before we can learn how to use mocking in our tests, we need to install the pytest-mock package. This will pull in a few packages for us, and mainly provide us with a mocker fixture that we can inject into our tests:
(pgbackup-E7nj_BsO) $ pipenv install --dev pytest-mock
Writing Tests With Mocking
We’re going to put all of the Postgres-related logic into its own module called pgdump, and we’re going to begin by writing our tests. We want this module to do the following:
1. Make a call out to pg_dump using subprocess.Popen.
2. Return the process object so that its STDOUT can be read from.
We know how to use the subprocess module, but we haven’t used subprocess.Popen yet. Behind the scenes, the functions that we already know about use Popen and wait for the process to finish. We’re going to use Popen directly instead of run because we want our code to keep running and only wait when we need to write the contents of proc.stdout to a file or S3.
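As a rough sketch of the difference, using echo as a stand-in for pg_dump:

import subprocess

# subprocess.run blocks until the command finishes and hands back
# the completed result all at once.
result = subprocess.run(['echo', 'hello'], stdout=subprocess.PIPE)
print(result.stdout)  # b'hello\n'

# subprocess.Popen returns immediately with a handle to the still
# running process; we read its output whenever we're ready.
proc = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)
print(proc.stdout.read())  # b'hello\n'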
To ensure that our code runs the proper third-party utilities, we’re going to use mocker.patch on the subprocess.Popen constructor. This will substitute in a different implementation that holds onto information like the number of times the function is called and with what arguments. Let’s see what this looks like in practice:
tests/test_pgdump.py
import pytest
import subprocess

from pgbackup import pgdump

url = "postgres://bob:password@example.com:5432/db_one"

def test_dump_calls_pg_dump(mocker):
    """
    Utilize pg_dump with the database URL
    """
    mocker.patch('subprocess.Popen')
    assert pgdump.dump(url)
    subprocess.Popen.assert_called_with(['pg_dump', url], stdout=subprocess.PIPE)
The arguments that we’re passing to assert_called_with will need to match what is being passed to subprocess.Popen when we exercise pgdump.dump(url).

Initial Implementation
Our first error is from not having a src/pgbackup/pgdump.py file, so let’s be sure to create that. We can guess that we’ll also have an error for the missing function, so let’s skip ahead a little and implement that:
src/pgbackup/pgdump.py
import subprocess

def dump(url):
    return subprocess.Popen(['pg_dump', url], stdout=subprocess.PIPE)
This will get our tests passing, but what happens when the pg_dump utility isn’t installed?
Adding Tests For Missing PostgreSQL Client
Let’s add another test that tells our subprocess.Popen to raise an OSError instead of succeeding. This is the kind of error that we will receive if the end-user of our package doesn’t have the pg_dump utility installed. To cause our stub to raise this error we need to set the side_effect attribute when we call mocker.patch. We’ll pass in an OSError to this attribute. Finally, we’ll want to exit with a status code of 1 if we catch this error and pass the error message through. That means we’ll need to use pytest.raises again to ensure we receive a SystemExit error. Here’s what the final tests look like for our pgdump module:
tests/test_pgdump.py
import pytest
import subprocess

from pgbackup import pgdump

url = "postgres://bob:password@example.com:5432/db_one"

def test_dump_calls_pg_dump(mocker):
    """
    Utilize pg_dump with the database URL
    """
    mocker.patch('subprocess.Popen')
    assert pgdump.dump(url)
    subprocess.Popen.assert_called_with(['pg_dump', url], stdout=subprocess.PIPE)

def test_dump_handles_oserror(mocker):
    """
    pgdump.dump returns a reasonable error if pg_dump isn't installed.
    """
    mocker.patch('subprocess.Popen', side_effect=OSError("no such file"))
    with pytest.raises(SystemExit):
        pgdump.dump(url)
Implementing Error Handling
Since we know that subprocess.Popen can raise an OSError, we’re going to wrap that call in a try block, print the error message, and use sys.exit to set the error code:

src/pgbackup/pgdump.py

import sys
import subprocess

def dump(url):
    try:
        return subprocess.Popen(['pg_dump', url], stdout=subprocess.PIPE)
    except OSError as err:
        print(f"Error: {err}")
        sys.exit(1)
Manual Testing
We can have a certain amount of confidence in our code because we’ve written tests that cover our expected cases, but since we used patching, we haven’t proven that it works against the real pg_dump utility. Let’s manually load our code into the Python REPL to test it out:

(pgbackup-E7nj_BsO) $ PYTHONPATH=./src python

>>> from pgbackup import pgdump

>>> dump = pgdump.dump('postgres://demo:password@example.com:80/sample')

>>> f = open('dump.sql', 'w+b')

>>> f.write(dump.stdout.read())

>>> f.close()

Note: We needed to open our dump.sql file using the w+b mode because we know that the .stdout value from a subprocess will be a bytes object and not a str.
If we exit and take a look at the contents of the file using cat, we should see the SQL output. With the pgdump module implemented, it’s now a great time to commit our code.

Writing Local File Tests
Working with files is something that we already know how to do, and local storage is no different. If we think about what our local storage driver needs to do, it really needs two things:
1. Take in one “readable” object and one, local, “writeable” object.
2. Write the contents of the “readable” object to the “writeable” object.
Notice that we didn’t say files; that’s because we don’t need our inputs to be file objects. They need to implement some of the same methods that a file does, like read and write, but they don’t have to be file objects, as the sketch below shows.
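Here’s a minimal sketch of this idea (the ReadableFake class is hypothetical, purely for illustration):

class ReadableFake:
    # Not a file, but still "readable" because it implements read()
    def read(self):
        return b"some bytes"

def consume(readable):
    return readable.read()

print(consume(ReadableFake()))  # b'some bytes'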
For our testing purposes, we can use the tempfile package to create a TemporaryFile to act as our “readable” and another NamedTemporaryFile to act as our “writeable”. We’ll pass them both into our function, and assert after the fact that the contents of the “writeable” object match what was in the “readable” object:
tests/test_storage.py
import tempfile

from pgbackup import storage

def test_storing_file_locally():
    """
    Writes content from one file-like to another
    """
    infile = tempfile.TemporaryFile('r+b')
    infile.write(b"Testing")
    infile.seek(0)

    outfile = tempfile.NamedTemporaryFile(delete=False)
    storage.local(infile, outfile)
    with open(outfile.name, 'rb') as f:
        assert f.read() == b"Testing"

Implement Local Storage
The requirements we looked at before are close to what we need to do in the code. We want to call close on the “writeable” file to ensure that all of the content gets written (the database backup could be quite large):
src/pgbackup/storage.py
def local(infile, outfile):
    outfile.write(infile.read())
    outfile.close()
    infile.close()

Installing boto3

To interface with AWS (S3 specifically), we’re going to use the wonderful boto3 package. We can install this to our virtualenv using pipenv:
(pgbackup-E7nj_BsO) $ pipenv install boto3
Configuring AWS Client
The boto3 package works off of the same configuration file that you can use with the official aws CLI. To get our configuration right, let’s leave our virtualenv and install the awscli package for our user. From there, we’ll use its configure command to set up our config file:
(pgbackup-E7nj_BsO) $ exit
$ mkdir ~/.aws
$ pip3.6 install --user awscli
$ aws configure
$ exec $SHELL
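The aws configure command prompts for an access key ID, secret access key, default region, and output format, and writes them to files roughly like the following (placeholder values shown, not real credentials):

~/.aws/credentials

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

~/.aws/config

[default]
region = us-west-2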
The exec $SHELL portion reloads the shell to ensure that the configuration changes are picked up. Before moving on, make sure to reactivate our development virtualenv:

$ pipenv shell

Writing S3 Tests

Following the approach that we’ve been using, let’s write tests for our S3 interaction. To limit the explicit dependencies that we have, we’re going to have the following parameters to our storage.s3 function:
• A client object that has an upload_fileobj method. A boto3 client meets this requirement, but in testing we can pass in a “mock” object that implements the same method.
• A file-like object (responds to read).
• An S3 bucket name as a string.
• The name of the file to create in S3.
We need an infile for all of our tests, so let’s extract a fixture for that also.

tests/test_storage.py (partial)

import tempfile
import pytest

from pgbackup import storage

@pytest.fixture
def infile():
    infile = tempfile.TemporaryFile('r+b')
    infile.write(b"Testing")
    infile.seek(0)
    return infile

# ... local storage tests from above ...

def test_storing_file_on_s3(mocker, infile):
    """
    Writes content from one readable to S3
    """
    client = mocker.Mock()

    storage.s3(client,
            infile,
            "bucket",
            "file-name")

    client.upload_fileobj.assert_called_with(
            infile,
            "bucket",
            "file-name")

Implementing S3 Strategy
Our test gives away a little too much information about how we’re going to implement our storage.s3 function, but that makes it pretty simple to implement now:
src/pgbackup/storage.py (partial)
def s3(client, infile, bucket, name):
    client.upload_fileobj(infile, bucket, name)
Manually Testing S3 Integration
Like we did with our PostgreSQL interaction, let’s manually test uploading a file to S3 using our storage.s3 function. First, we’ll create an example.txt file, and then we’ll load a Python REPL with our code available:

(pgbackup-E7nj_BsO) $ echo "UPLOADED" > example.txt

(pgbackup-E7nj_BsO) $ PYTHONPATH=./src python

>>> import boto3

>>> from pgbackup import storage

>>> client = boto3.client('s3')

>>> infile = open('example.txt', 'rb')

>>> storage.s3(client, infile, 'pyscripting-db-backups', infile.name)

Add “console_script” to project
We can make our project create a console script for us when a user runs pip install. This is similar to the way that we made executables before, except we don’t need to manually do the work. To do this, we need to add an entry point in our setup.py:
setup.py (partial)
install_requires=['boto3'],
entry_points={
    'console_scripts': [
        'pgbackup=pgbackup.cli:main',
    ],
}
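For context, here’s roughly how the full setup.py might look with these additions in place. This is only a sketch; the metadata values are placeholders, and you should keep whatever name, version, and description you already have from earlier in the course:

from setuptools import setup, find_packages

setup(
    name='pgbackup',
    version='0.1.0',
    description='A utility for backing up PostgreSQL databases',
    packages=find_packages('src'),   # our code lives under src/
    package_dir={'': 'src'},
    install_requires=['boto3'],
    entry_points={
        'console_scripts': [
            'pgbackup=pgbackup.cli:main',  # pip generates a pgbackup executable
        ],
    },
)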
Notice that we’re referencing our cli module, followed by a colon and main. That main is the function that we need to create now.
Wiring The Units Together
Our main function is going to go in the cli module, and it needs to do the following:
1. Import the boto3 package.
2. Import our pgdump and storage modules.
3. Create a parser and parse the arguments.
4. Fetch the database dump.
5. Depending on the driver type do one of the following:
◦ create a boto3 S3 client and use storage.s3 or
◦ open a local file and use storage.local
src/pgbackup/cli.py
def main():
    import boto3
    from pgbackup import pgdump, storage

    args = create_parser().parse_args()
    dump = pgdump.dump(args.url)
    if args.driver == 's3':
        client = boto3.client('s3')
        # TODO: create a better name based on the database name and the date
        storage.s3(client, dump.stdout, args.destination, 'example.sql')
    else:
        outfile = open(args.destination, 'wb')
        storage.local(dump.stdout, outfile)

Let’s test it out:
$ pipenv shell
(pgbackup-E7nj_BsO) $ pip install -e .
(pgbackup-E7nj_BsO) $ pgbackup --driver local ./local-dump.sql postgres://demo:password@example.com:80/sample
(pgbackup-E7nj_BsO) $ pgbackup --driver s3 pyscripting-db-backups postgres://demo:password@example.com:80/sample
Reviewing the Experience
It worked! That doesn’t mean there aren’t things to improve though. Here are some things we should fix:
• Generate a good file name for S3
• Create some output while the writing is happening
• Create a shorthand switch for --driver (-d)
Generating a Dump File Name
For generating our file name, let’s put all database URL interactions in the pgdump module, in a function named dump_file_name. This is a pure function that takes an input and produces an output, so it’s a prime candidate for us to unit test. Let’s write our tests now:
tests/test_pgdump.py (partial)
def test_dump_file_name_without_timestamp():
    """
    pgdump.dump_file_name returns the name of the database
    """
    assert pgdump.dump_file_name(url) == "db_one.sql"

def test_dump_file_name_with_timestamp():
    """
    pgdump.dump_file_name returns the name of the database with a timestamp
    """
    timestamp = "2017-12-03T13:14:10"
    assert pgdump.dump_file_name(url, timestamp) == "db_one-2017-12-03T13:14:10.sql"
We want the file name returned to be based on the database name, and it should also accept an optional timestamp. Let’s work on the implementation now:
src/pgbackup/pgdump.py (partial)
def dump_file_name(url, timestamp=None):
    db_name = url.split("/")[-1]
    db_name = db_name.split("?")[0]
    if timestamp:
        return f"{db_name}-{timestamp}.sql"
    else:
        return f"{db_name}.sql"
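The second split is there because a database URL can carry query parameters (for example, a hypothetical ?sslmode=require) that shouldn’t end up in the file name. A quick REPL check of both branches:

>>> dump_file_name("postgres://bob@example.com:5432/db_one?sslmode=require")
'db_one.sql'
>>> dump_file_name("postgres://bob@example.com:5432/db_one", "2017-12-03T13:14:10")
'db_one-2017-12-03T13:14:10.sql'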
Improving the CLI and Main Function
We want to add a shorthand -d flag for the driver argument, so let’s add that to the create_parser function:
src/pgbackup/cli.py (partial)
def create_parser():
    parser = ArgumentParser(description="""
    Back up PostgreSQL databases locally or to AWS S3.
    """)
    parser.add_argument("url", help="URL of database to backup")
    parser.add_argument("--driver", "-d",
            help="how & where to store backup",
            nargs=2,
            metavar=("DRIVER", "DESTINATION"),
            action=DriverAction,
            required=True)
    return parser
Lastly, let’s print a timestamp with time.strftime, generate a database file name, and print what we’re doing as we upload/write files.
src/pgbackup/cli.py (partial)
def main():
    import time
    import boto3
    from pgbackup import pgdump, storage

    args = create_parser().parse_args()
    dump = pgdump.dump(args.url)

    if args.driver == 's3':
        client = boto3.client('s3')
        timestamp = time.strftime("%Y-%m-%dT%H:%M", time.localtime())
        file_name = pgdump.dump_file_name(args.url, timestamp)
        print(f"Backing database up to {args.destination} in S3 as {file_name}")
        storage.s3(client,
                dump.stdout,
                args.destination,
                file_name)
    else:
        outfile = open(args.destination, 'wb')
        print(f"Backing database up locally to {outfile.name}")
        storage.local(dump.stdout, outfile)

Feel free to test the CLI’s modifications, for instance with the new shorthand flag shown below, and commit these changes.
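For example, exercising the new shorthand flag (reusing the placeholder database URL from before):

(pgbackup-E7nj_BsO) $ pgbackup -d local ./local-dump.sql postgres://demo:password@example.com:80/sample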

Adding a setup.cfg
Before we can generate our wheel, we’re going to want to configure setuptools to not build the wheel for Python 2. We can’t build for Python 2 because we used f-string interpolation, which is only available in Python 3.6 and later. We’ll put this configuration in a setup.cfg:
setup.cfg
[bdist_wheel]
python-tag = py36
Now we can run the following command to build our wheel:
(pgbackup-E7nj_BsO) $ python setup.py bdist_wheel
Next, let’s uninstall and re-install our package using the wheel file:
(pgbackup-E7nj_BsO) $ pip uninstall pgbackup
(pgbackup-E7nj_BsO) $ pip install dist/pgbackup-0.1.0-py36-none-any.whl
Install a Wheel From Remote Source (S3)
We can use pip to install wheels from a local path, but it can also install from a remote source over HTTP. Let’s upload our wheel to S3 and then install the tool outside of our virtualenv from S3:
(pgbackup-E7nj_BsO) $ python

>>> import boto3

>>> f = open('dist/pgbackup-0.1.0-py36-none-any.whl', 'rb')

>>> client = boto3.client('s3')

>>> client.upload_fileobj(f, 'pyscripting-db-backups', 'pgbackup-0.1.0-py36-none-any.whl')

>>> exit()

We’ll need to go into the S3 console and make this file public so that we can download it to install.

Let’s exit our virtualenv and install pgbackup as a user package:

(pgbackup-E7nj_BsO) $ exit

$ pip3.6 install --user https://s3.amazonaws.com/pyscripting-db-backups/pgbackup-0.1.0-py36-none-any.whl

$ pgbackup --help

fibonacci_series

def fibo(n):
    a = 0
    b = 1
    for x in range(n):
        # update both values in one step; doing a = b and then
        # b = a + b separately would just keep doubling b
        a, b = b, a + b
        print(a)
    return b

num = int(input("enter the value of N : "))
print(fibo(num))

Quantity

for quant in range(50, 0, -1):
    if quant > 1:
        print(quant, "bottles of beer on the wall,", quant, "bottles of beer")
        if quant > 2:
            suffix = str(quant - 1) + " bottles of beer on the wall"
        else:
            suffix = "1 bottle of beer on the wall"
    elif quant == 1:
        print("1 bottle of beer on the wall, 1 bottle of beer")
        suffix = "no more beer on the wall"
    print("take one down and pass it around,", suffix)
    print("--------")

CREATE AWS-EC2 INSTANCE

import boto3

ec2 = boto3.resource("ec2")

# create_instances returns a list of Instance objects
instances = ec2.create_instances(
    ImageId="AWS_AMI_ID",   # the AMI ID to launch from
    MinCount=1,             # minimum no. of instances
    MaxCount=1,             # maximum no. of instances
    InstanceType="t2.micro"
)

print(instances[0].id)

"""
### to terminate or stop the instance

instance_id = "EC2_ID"
instance = ec2.Instance(instance_id)
response = instance.terminate()
print(response)
"""

Create_S3_bucket

import boto3

s3 = boto3.client("s3")

bucket_name = "surya-bucket"  # S3 bucket names may not contain underscores

try:
    response = s3.create_bucket(Bucket=bucket_name,
                                CreateBucketConfiguration={"LocationConstraint": "us-east-2"})
    print(response)
except Exception as error:
    print(error)