Thursday, April 4, 2019

IVR Cloud Migration Project

INTRODUCTION

The primary objective of the IVR Cloud Migration Project is to lift and shift the IVR applications into the AWS Cloud environment. The lift and shift of the IVR applications is recommended to use automation, with the least amount of human interaction, to build and deploy onto AWS Cloud. This document gives a step-by-step process to carry out the task of automating the installation and maintenance of the applications.

REQUIREMENTS

The IVR applications require the following resources to replicate and automate the on-premise environment on AWS Cloud.

In the automation process, the requirement is to have minimal human interaction and an automation pipeline that runs from creating a build for the application through creating, deploying, and configuring, until a running application instance is set up.

The tools that are required are as follows:
AWS EC2 Instances
WebSphere Liberty Profile
Jenkins Pipeline
CyberArk Authentication
Ansible Tower
AWS CloudFormation
AWS Elastic Load Balancers
AWS S3 Bucket

ELASTIC COMPUTE CLOUD (EC2)

Elastic Compute Cloud (EC2) is a virtual computing environment that provides users a platform to create applications and allows them to scale those applications by providing Infrastructure as a Service.

Key concepts associated with EC2 are:
Virtual computing environments, known as instances.
Preconfigured templates for your instances, known as Amazon Machine Images (AMIs), that package the bits you need for your server (including the operating system and additional software).
Various configurations of CPU, memory, storage, and networking capacity for your instances, known as instance types.
Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key in a secure place).
Storage volumes for temporary data that is deleted when you stop or terminate your instance, known as instance store volumes.
Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes.
Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known as Regions and Availability Zones.
A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances, known as security groups.
Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses.
Metadata, known as tags, that you can create and assign to your Amazon EC2 resources.
Virtual networks you can create that are logically isolated from the rest of the AWS Cloud, and that you can optionally connect to your own network, known as Virtual Private Clouds (VPCs).

WEBSPHERE LIBERTY PROFILE

IBM WebSphere Application Server V8.5 Liberty Profile is a composable, dynamic application server environment that supports development and testing of web applications.

The Liberty profile is a simplified, lightweight development and application runtime environment that has the following characteristics:
Simple to configure. Configuration is read from an XML file with text-editor friendly syntax.
Dynamic and flexible. The run time loads only what your application needs and recomposes the run time in response to configuration changes.
Fast. The server starts in under 5 seconds with a basic web application.
Extensible. The Liberty profile provides support for user and product extensions, which can use System Programming Interfaces (SPIs) to extend the run time.

JENKINS

Jenkins is a self-contained, open source automation server which can be used to automate all sorts of tasks such as building, testing, and deploying software.
Jenkins can be installed through native system packages, Docker, or even run standalone on any machine with the Java Runtime Environment installed.

Jenkins Pipeline is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines as code.

A Jenkinsfile, which is a text file that contains the definition of a Jenkins Pipeline, is checked into source control. This is the foundation of Pipeline-as-Code: treating the continuous delivery pipeline as part of the application, to be versioned and reviewed like any other code.

REQUIREMENTS

The requirements for a Jenkins server include the following:
For sizing a Jenkins instance there is no one-size-fits-all answer; the exact hardware specifications you will need depend heavily on your organization's needs.
The Jenkins master runs on Java and requires the OpenJDK to be installed on the instance with the JAVA_HOME path set.
Jenkins runs on a local web server such as Tomcat, which needs to be configured.
RAM allotted for it can range from 200 MB for a small installation to 70+ GB for a single, massive Jenkins master. However, you should be able to estimate the RAM required based on your project build needs.
Each build node connection takes 2-3 threads, which equals about 2 MB or more of memory. You will also need to factor in CPU overhead for Jenkins if many users will be accessing the Jenkins user interface.
The more automated the environment configuration is, the easier it is to replicate a configuration onto a new agent machine. Tools for configuration management or a pre-baked image can be excellent solutions to this end.
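As a sketch of the Jenkinsfile-based pipeline described above (the stage names and shell commands here are hypothetical placeholders, not the project's actual pipeline):

```groovy
// Declarative Jenkinsfile sketch: build, test, and deploy stages.
// The shell commands are placeholders for the project's real steps.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'   // hypothetical build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'    // hypothetical test command
            }
        }
        stage('Deploy') {
            steps {
                // e.g., trigger the Ansible playbook that provisions AWS
                sh 'ansible-playbook -i inventory site.yml'
            }
        }
    }
}
```

Because this file lives in source control next to the application code, every change to the delivery process is versioned and reviewed like any other change.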
Containers and virtualization are also popular tools for creating generic agent environments.

JENKINS FILE STRUCTURE

Jenkins automates the non-human part of the whole software development process, with now-common practices like continuous integration, and further empowers teams to implement the technical parts of Continuous Delivery. Its home directory is laid out as follows.

.jenkins — The default Jenkins home directory.
fingerprints — This directory is used by Jenkins to keep track of artifact fingerprints.
jobs — This directory contains configuration details about the build jobs that Jenkins manages, as well as the artifacts and data resulting from those builds.
plugins — This directory contains any plugins that you have installed. Plugins allow you to extend Jenkins by adding extra features. Note: except for the Jenkins core plugins (subversion, cvs, ssh-slaves, maven, and scid-ad), no plugins are stored with the Jenkins executable or the expanded web application directory.
updates — An internal directory used by Jenkins to store information about available plugin updates.
userContent — You can use this directory to place your own custom content onto your Jenkins server. You can access files in this directory at http://myserver/userContent (stand-alone).
users — If you are using the native Jenkins user database, user accounts are stored in this directory.
war — This directory contains the expanded web application. When you start Jenkins as a stand-alone application, it extracts the web application into this directory.

JENKINS SETUP

Jenkins setup is carried out on a managing server which has access to all your remote servers or nodes. The process can be demonstrated with a few simple steps.

Jenkins has native integrations with different operating systems. The operating systems that support Jenkins are:
Solaris 10
Ubuntu
Red Hat distributions
Windows
UNIX daemon
Docker

JENKINS CONFIGURATION

The configuration file for Jenkins is used to make changes to the default configuration. Configuration changes are applied by Jenkins in the following order:
Jenkins will be launched as a daemon on startup. See /etc/init.d/jenkins for more details.
The jenkins user is created to run this service. If you change this to a different user via the config file, you must change the owner of /var/log/jenkins, /var/lib/jenkins, and /var/cache/jenkins.
The log file will be placed in /var/log/jenkins/jenkins.log. Check this file if you are troubleshooting Jenkins.
/etc/sysconfig/jenkins captures configuration parameters for the launch.
By default, Jenkins listens on port 8080. Access this port with your browser to start configuration. Note that the built-in firewall may have to be opened to access this port from other computers.
A Jenkins RPM repository is added in /etc/yum.repos.d/jenkins.repo.

CREATING A JENKINS PIPELINE

The requirement for creating a pipeline is to have a repository with the Jenkinsfile, which holds the definition of the pipeline.

STEP 1. Select New Item from the Jenkins dashboard.
STEP 2. Enter a name for the pipeline and select Pipeline from the list of options. Click OK.
STEP 3. Toggle through the tabs to customize the pipeline, then click Apply.
STEP 4. To build the job, click Build Now on the dashboard to run the pipeline.

ANSIBLE

Ansible Tower is the automation tool used in this project and is a simple tool to manage multiple nodes. Ansible is recommended to automate the deployment and configuration management of the system and its applications.

Ansible automation can be set up on any machine as it does not require a daemon or database. It begins with the assigned user connecting over SSH to the hosts listed in a host file.
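The /etc/sysconfig/jenkins file mentioned above holds the launch parameters as shell-style variables; a sketch with common settings (the values shown are illustrative defaults, not this project's configuration):

```shell
# Sketch of /etc/sysconfig/jenkins launch parameters (illustrative values)
JENKINS_HOME="/var/lib/jenkins"   # Jenkins home directory
JENKINS_USER="jenkins"            # service account that runs Jenkins
JENKINS_PORT="8080"               # HTTP port Jenkins listens on
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true"
```

Changing JENKINS_USER here is what requires the matching ownership change on /var/log/jenkins, /var/lib/jenkins, and /var/cache/jenkins noted above.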
This allows the user to run the Ansible script to execute the roles, which run the various tasks defined.

NOTE: In the scope of the IVR applications, the Ansible script executes multiple roles for the creation of EC2 instances and the installation of WebSphere applications. Each of these roles has its own YAML script to create and populate the instance.

REQUIREMENTS

The requirements for the Ansible server include the following:
Ansible Tower setup requires a Linux instance (CentOS or RHEL).
Linux setup for some basic services, including Git, Python, and OpenSSL.
Some additional requirements:
Jinja2 — A modern, fast, and easy-to-use stand-alone template engine for Python.
PyYAML — A YAML parser and emitter for the Python programming language.
Paramiko — A native Python SSHv2 channel library.
Httplib2 — A comprehensive HTTP client library.
SSHPass — A non-interactive SSH password authentication tool.

ANSIBLE FILE STRUCTURE

An Ansible playbook is a model of a configuration or a process which contains a number of plays. Each play maps a group of hosts to some well-defined roles, which are represented by Ansible tasks.

Master Playbook: The master playbook file contains the information of the rest of the playbooks. The master playbook for the project has been named Site.yml. This YAML script is used to define the roles to execute.
NOTE: The roles in the master playbook are invoked to perform their respective tasks.
Path = /ivr/aws_env/playbooks/ivr/SITE.YML

Inventory: Ansible keeps information about the hosts and groups of hosts to be managed in the hosts file. This is also called an inventory file.
Path = /ivr/aws_env/playbooks/ivr/inventory

Group Variables and Host Variables: Similar to the hosts inventory file, you can also keep host and group configuration variables in separate configuration folders such as group_vars and host_vars. These can include configuration parameters, whether on the application or operating system level, which may not be valid for all groups or hosts. This is where having multiple files can be useful: inside group_vars or host_vars, you can define a group or host in more than one way, allowing you to define specific configuration parameters.

Roles: Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions; they allow you to focus more on the big picture and only define the details when needed. To correctly use roles with Ansible, you need to create a roles directory in your working Ansible directory, and then any necessary sub-directories.

ANSIBLE SETUP

Ansible setup is carried out on a managing server which has access to all your remote servers or nodes. The process can be demonstrated with a few simple steps.

Step I. Log in as the root user on the instance where Ansible needs to be installed.
Use the sudo apt-get install ansible -y command to install the package on an Ubuntu/Debian system.
Use the sudo yum install ansible -y command to install the package on a CentOS/RHEL/Fedora system.

Step II. The Ansible system can connect to any remote server using SSH by authenticating the request.
NOTE: Ansible can use ssh-keygen to create an RSA key pair and copy the public key to the remote server, to connect over SSH without password authentication.

Step III. Create an inventory file, which is used to work against multiple systems across the infrastructure at the same time.
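A sketch of what the master playbook described above could look like; the role names are assumptions patterned on the roles this document mentions, not the project's actual Site.yml:

```yaml
# Site.yml (sketch): maps the IVR host group to the roles this
# document describes; role names here are hypothetical.
- hosts: IVR
  roles:
    - cloudformation   # creates the EC2 stack from a template
    - websphere        # installs OpenJDK and WebSphere Liberty Profile
    - s3_deploy        # fetches EAR files, configs, and certificates
```

Each role listed here corresponds to a roles/<name>/ sub-directory containing its own tasks, files, and variables.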
This is executed by targeting portions of the systems listed in the inventory file. The default path for the inventory file is /etc/ansible/hosts.
NOTE: This path can be changed by using the -i option, which is recommended depending on the project requirement.

There can be more than one inventory file, and they can be used at the same time. The inventory file holds the group names which define the groups of servers that are maintained together. The inventory file needs to be populated with the host IP addresses that are to be accessed.

The inventory file is as follows:
Path = /ivr/aws_env/playbooks/ivr/inventory

The IVR in the brackets indicates a group name. Group names are used to classify systems and to determine which systems you are going to control, at what times, and for what purpose. The group name can be used to interact with all the hosts in the group using the different modules (-m) defined in Ansible.
Example: ansible -m ping IVR

ANSIBLE CONFIGURATION

The configuration file for Ansible is used to make changes to the default configuration. Configuration changes are searched for by Ansible in a defined order.
Path = /ivr/aws_env/playbooks/ivr/etc/ansible.cfg is the path set up for Ansible configuration changes.

CLOUDFORMATION

AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; AWS CloudFormation handles all of that.

CloudFormation Template

CloudFormation templates are created for the service or application architectures you want, and AWS CloudFormation uses those templates for quick and reliable provisioning of the services or applications (called stacks). You can also easily update or replicate the stacks as needed.

STEPS TO LAUNCH A CLOUDFORMATION STACK

Sign in to the AWS Management Console and open the CloudFormation console at https://console.aws.amazon.com/cloudformation/
From the navigation bar, select the region for the instance.
Click Create a New Stack.
Choose an option: a sample template, upload a template to S3, or an S3 template URL.
Using a template to build an EC2 instance, enter a stack name and provide the key pair to SSH into the instance.
Add tags to the instance; this also helps organize your instances into application-specific or team-specific groups.
Review and create the stack.
CloudFormation then starts building the stack using the template.

In the scope of this project, the IVR application instances are built using a CloudFormation template and triggered using an Ansible role.

SIMPLE STORAGE SERVICE (S3)

ELASTIC LOAD BALANCER (ELB)

A load balancer serves as a single point of contact for clients, which increases the availability of your application. You can add and remove instances from your load balancer as your needs change, without disrupting the overall flow of requests to your application. Elastic Load Balancing scales your load balancer as traffic to your application changes over time, and can scale to the vast majority of workloads automatically.

You can configure health checks, which are used to monitor the health of the registered instances so that the load balancer sends requests only to the healthy instances.
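A minimal CloudFormation template of the kind used in the steps above, launching a single EC2 instance, might look like the following (the AMI ID, instance type, and tag values are placeholders, not the project's actual values):

```yaml
# Sketch of a CloudFormation template that launches one EC2 instance.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal IVR EC2 instance stack (illustrative)
Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Key pair used to SSH into the instance
Resources:
  IvrInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-00000000          # placeholder AMI ID
      InstanceType: t2.medium        # placeholder instance type
      KeyName: !Ref KeyName
      Tags:
        - Key: Application
          Value: IVR
```

Declaring the key pair as a Parameter lets the same template be reused across environments, which is what makes the stack easy to update or replicate.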
You can also offload the work of encryption and decryption to your load balancer so that your instances can focus on their main work.

Setting Up an Elastic Load Balancer

Step 1: Select a Load Balancer Type
Elastic Load Balancing supports two types of load balancers: Application Load Balancers and Classic Load Balancers. To create an Elastic Load Balancer, open the Amazon EC2 console and choose Load Balancers in the navigation pane.

Step 2: Configure Your Load Balancer and Listener
On the Configure Load Balancer page, complete the following procedure.
To configure your load balancer and listener:
1. For Name, type a name for your load balancer. The name of your Application Load Balancer must be unique within your set of Application Load Balancers for the region, can have a maximum of 32 characters, can contain only alphanumeric characters and hyphens, and must not begin or end with a hyphen.
2. For Scheme, keep the default value, internet-facing.
3. For IP address type, select ipv4 if your instances support IPv4 addresses, or dualstack if they support IPv4 and IPv6 addresses.
4. For Listeners, keep the default, which is a listener that accepts HTTP traffic on port 80.
5. For Availability Zones, select the VPC that you used for your EC2 instances. For each of the two Availability Zones that contain your EC2 instances, select the Availability Zone and then select the public subnet for that Availability Zone.
6. Choose Next: Configure Security Settings.

Step 3: Configure a Security Group for Your Load Balancer
The security group for your load balancer must allow it to communicate with registered targets on both the listener port and the health check port. The console can create security groups for your load balancer on your behalf, with rules that specify the correct protocols and ports.
Note: If you prefer, you can create and select your own security group instead. For more information, see Recommended Rules in the Application Load Balancer Guide.
On the Configure Security Groups page, complete the following procedure to have Elastic Load Balancing create a security group for your load balancer on your behalf.
To configure a security group for your load balancer:
1. Choose Create a new security group.
2. Type a name and description for the security group, or keep the default name and description. This new security group contains a rule that allows traffic to the load balancer listener port that you selected on the Configure Load Balancer page.
3. Choose Next: Configure Routing.

Step 4: Configure Your Target Group
Create a target group, which is used in request routing. The default rule for your listener routes requests to the registered targets in this target group. The load balancer checks the health of targets in this target group using the health check settings defined for the target group. On the Configure Routing page, complete the following procedure.
To configure your target group:
1. For Target group, keep the default, New target group.
2. For Name, type a name for the new target group.
3. Keep Protocol as HTTP and Port as 80.
4. For Health checks, keep the default protocol and ping path.
5. Choose Next: Register Targets.

Step 5: Register Targets with Your Target Group
On the Register Targets page, complete the following procedure.
To register targets with the target group:
1. For Instances, select one or more instances.
2. Keep the default port, 80, and choose Add to registered.
3. If you need to remove an instance that you selected, for Registered instances, select the instance and then choose Remove.
4. When you have finished selecting instances, choose Next: Review.

Step 6: Create and Test Your Load Balancer
Before creating the load balancer, review the settings that you selected. After creating the load balancer, verify that it's sending traffic to your EC2 instances.
To create and test your load balancer:
1. On the Review page, choose Create.
2. After you are notified that your load balancer was created successfully, choose Close.
3. In the navigation pane, under LOAD BALANCING, choose Target Groups.
4. Select the newly created target group.
5. On the Targets tab, verify that your instances are ready. If the status of an instance is initial, it's probably because the instance is still in the process of being registered, or it has not yet passed the health checks.

AUTOMATION

OVERVIEW

There are two parts to the automation process:
To create a custom AMI for all IVR applications.
To create instances for each application using the custom AMI.

STEPS TO CREATE THE CUSTOM AMI

The process of automating this environment starts with creating a Jenkins pipeline for code deploy for the application that needs to be built. The pipeline also needs integration with CyberArk for authentication and for registering the service account required for the automation. The following process is triggered as part of the Ansible playbook, where it performs multiple roles to complete automation of the application.

The Ansible role first calls a CloudFormation template. A CloudFormation template is used to build the required stack (EC2 instance).
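The Ansible step that calls the CloudFormation template could be sketched with Ansible's cloudformation module; the stack name, region, paths, and parameters below are hypothetical, not the project's actual values:

```yaml
# Sketch of an Ansible task that creates the EC2 stack from a
# CloudFormation template; names and paths are assumptions.
- name: Create IVR EC2 stack from CloudFormation template
  cloudformation:
    stack_name: ivr-app-stack
    state: present
    region: us-east-1
    template: roles/cloudformation/files/ec2_stack.yml
    template_parameters:
      KeyName: ivr-keypair
```

Running the stack creation as a task inside a role is what lets the playbook chain it with the WebSphere and S3 roles that follow.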
This template is given the AMI ID of the Verizon standard image. After the creation of the instance, the CloudFormation template triggers a WebSphere role from Ansible that installs the OpenJDK and WebSphere Liberty Profile, creates a WLP user, and adds the necessary netgroups for the application. An AMI of the instance at this point is created.

STEPS TO CREATE THE APPLICATION INSTANCES

The process of automating this environment starts with creating a Jenkins pipeline for code deploy for the application that needs to be built. The pipeline also needs integration with CyberArk for authentication and for registering the service account required for the automation. The following process is triggered as part of the Ansible playbook, where it performs multiple roles to complete automation of the application.

The Ansible role first calls a CloudFormation template. A CloudFormation template is used to build the required stack (EC2 instance). This template is given the custom AMI created for IVR. After the creation of the instance, an S3 role is triggered from Ansible. The S3 role performs the Ansible role based on the application instance.

NOTE: An S3 bucket with a folder structure for each application is maintained to keep the updated code and certificates, along with other required installation files.

IVR Touch Point: The S3 role fetches the EAR files, configuration files, and certificates in the IVR-TP folder of the S3 bucket and installs them on the instance created by the CloudFormation role.
IVR Middleware: The S3 role fetches the EAR files, configuration files, and certificates in the IVR-MW folder of the S3 bucket and installs them on the instance created by the CloudFormation role.
IVR Activations: The S3 role fetches the EAR files, configuration files, and certificates in the IVR-Activations folder of the S3 bucket and installs them on the instance created by the CloudFormation role.
IVR CTI: The S3 role fetches the IBM eXtreme Scale Grid installation followed by the SiteMinder SSO installation. After the application requirements are fulfilled, the EAR files, configuration files, and certificates in the IVR-CTI folder of the S3 bucket are deployed on the instance.
IVR Work Hub: The S3 role fetches the IBM eXtreme Scale Grid installation followed by the SiteMinder SSO installation. After the application requirements are fulfilled, the EAR files, configuration files, and certificates for IVR Work Hub are deployed from the S3 bucket on the instance.
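The S3 role's fetch step could be sketched with Ansible's aws_s3 module; the bucket, object, and destination names below are placeholders patterned on the per-application folder structure described above:

```yaml
# Sketch of an S3 role task that downloads an application's EAR file
# from its folder in the S3 bucket; names are illustrative.
- name: Fetch IVR Touch Point EAR file from S3
  aws_s3:
    bucket: ivr-artifacts        # hypothetical bucket name
    object: IVR-TP/application.ear
    dest: /opt/ivr/application.ear
    mode: get
```

A task like this would be repeated (or parameterized per application) for the configuration files and certificates in each application's folder.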
