Saturday, October 12, 2019

MongoDB Getting Started and Setup Cluster

In the past couple of days I have changed my job and started working on a serverless model. None of the data is stored in the traditional way in an RDBMS; in fact, all of the application logs are stored in the DB.

So we have started using MongoDB as our document database. A few of the advantages of MongoDB:
  • Document Oriented Storage − Data is stored in the form of JSON style documents.
  • Index on any attribute
  • Replication and high availability
  • Auto-sharding
  • Rich queries
  • Fast in-place update
  • Professional support by MongoDB.
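To make the first two points concrete, here is a plain-Python illustration (my sketch, not the MongoDB API) of what a JSON-style document looks like and how a query can match on any attribute, including nested ones:

```python
# Illustration only (plain Python, not MongoDB): each "document" is a
# JSON-style structure; fields can be nested and need not share a schema.
logs = [
    {'_id': 1, 'app': 'billing', 'log': {'level': 'ERROR', 'msg': 'timeout'}},
    {'_id': 2, 'app': 'billing', 'log': {'level': 'INFO', 'msg': 'ok'}},
    {'_id': 3, 'app': 'auth', 'log': {'level': 'ERROR', 'msg': 'bad token'}},
]

def find(docs, level):
    # Query on an arbitrary nested attribute, the way MongoDB lets you
    # filter (and index) on a field like 'log.level'.
    return [d for d in docs if d['log']['level'] == level]

errors = find(logs, 'ERROR')
```

In MongoDB the same query would be expressed as a filter document, and an index could be created on `log.level` to speed it up.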

    Now let's start building the MongoDB cluster with 1 master (primary), 1 slave (secondary), and 1 arbiter.

    Pre-requisites.
  • 3 Centos machines (in cloud or on premises)
mongo0 192.168.56.131
mongo1 192.168.56.132
mongo2 192.168.56.133
  • 1 Windows host for testing using Studio 3T

Log in on all 3 VMs:
  1. Disable the firewalld service
# systemctl disable firewalld
  2. Stop the firewalld service
# systemctl stop firewalld
  3. Disable SELinux
To permanently disable SELinux, use your favorite text editor to open the file /etc/sysconfig/selinux
Then change the directive SELINUX=enforcing to SELINUX=disabled
  4. Update all packages and install the EPEL release repo
# yum update -y && yum install epel-release
  5. Edit /etc/hostname and set the hostname on each machine (mongo0, mongo1, mongo2 respectively)
  6. Add the below entries to /etc/hosts
192.168.56.131 mongo0
192.168.56.132 mongo1
192.168.56.133 mongo2
  7. Reboot the system
# reboot


Configure the package management system (yum).
Create a /etc/yum.repos.d/mongodb-org-4.2.repo file so that you can install MongoDB directly using yum:
[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc
Install the MongoDB packages.
To install the latest stable version of MongoDB, issue the following command:
sudo yum install -y mongodb-org
Alternatively, to install a specific release of MongoDB, specify each component package individually and append the version number to the package name, as in the following example:
sudo yum install -y mongodb-org-4.2.0 mongodb-org-server-4.2.0 mongodb-org-shell-4.2.0 mongodb-org-mongos-4.2.0 mongodb-org-tools-4.2.0
You can specify any available version of MongoDB. However, yum upgrades the packages when a newer version becomes available. To prevent unintended upgrades, pin the packages by adding the following exclude directive to your /etc/yum.conf file:
exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools

Start one member (master) of the replica set.

This mongod should not enable auth.

Create administrative users.

The following operations will create two users: a user administrator that will be able to create and modify users (siteUserAdmin), and a root user (siteRootAdmin) that you will use to complete the remainder of the tutorial:
use admin
db.createUser( {
  user: "siteUserAdmin",
  pwd: "",
  roles: [ "userAdminAnyDatabase" ]
});
db.createUser( {
  user: "siteRootAdmin",
  pwd: "",
  roles: [ "userAdminAnyDatabase",
           "readWriteAnyDatabase",
           "dbAdminAnyDatabase",
           "clusterAdmin" ]
});

Stop the mongod instance.

Create the key file to be used by each member of the replica set.

Create the key file your deployment will use to authenticate servers to each other.
To generate pseudo-random data to use for a keyfile, issue the following openssl command:
mkdir -p /etc/mongod
openssl rand -base64 741 > /etc/mongod/mongodb-keyfile
chown -R mongod:mongod /etc/mongod
chmod 600 /etc/mongod/mongodb-keyfile
You may generate a key file using any method you choose. Always ensure that the password stored in the key file is both long and contains a high amount of entropy. Using openssl in this manner helps generate such a key.
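If openssl isn't handy, equivalent keyfile content can be generated in Python — a sketch mirroring the openssl command above (741 random bytes, base64-encoded):

```python
import base64
import os

def make_keyfile_content(nbytes=741):
    # Equivalent in spirit to `openssl rand -base64 741`: cryptographically
    # random bytes from the OS, base64-encoded for use as a MongoDB keyfile.
    return base64.b64encode(os.urandom(nbytes)).decode('ascii')

content = make_keyfile_content()
```

Write the result to /etc/mongod/mongodb-keyfile and apply the same chown/chmod steps shown above.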

Copy the key file to each member of the replica set.

Copy the mongodb-keyfile to all hosts where components of a MongoDB deployment run. Set the permissions of these files to 600 so that only the owner of the file can read or write this file to prevent other users on the system from accessing the shared secret.
mkdir -p /etc/mongod
chown -R mongod:mongod /etc/mongod
chmod 600 /etc/mongod/mongodb-keyfile
On the master:
Execute the scp command for all the slaves, one by one.
scp /etc/mongod/mongodb-keyfile user@slave:/etc/mongod/mongodb-keyfile

A sample mongod.conf file looks something like this:



# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

replication:
  replSetName: "rs0"

security:
  keyFile: /etc/mongod/mongodb-keyfile

#operationProfiling:
#sharding:

## Enterprise-Only Options
#auditLog:
#snmp:

Make sure the net, replication, and security settings shown above are present in your conf file.

Connect to the member of the replica set where you created the administrative users.

Connect to the replica set member you started and authenticate as the siteRootAdmin user. From the mongo shell, use the following operation to authenticate:
use admin
db.auth("siteRootAdmin", "");

Initiate the replica set.

Use rs.initiate() on the replica set member:
rs.initiate()
MongoDB initiates a set that consists of the current member and that uses the default replica set configuration.

Verify the initial replica set configuration.

rs.conf()
The replica set configuration object resembles the following:
{
  "_id" : "rs0",
  "version" : 1,
  "members" : [
    {
      "_id" : 1,
      "host" : "mongo0:27017"
    }
  ]
}

Add the remaining members to the replica set.

Add the remaining members with the rs.add() method.
The following example adds two members:
rs.add("mongo1")
rs.addArb("mongo2")
When complete, you have a fully functional replica set. The new replica set will elect a primary.

Check the status of the replica set.

Use the rs.status() operation:
rs.status()
On secondary (slave) execute
rs.slaveOk()

Tuesday, July 31, 2018

Getting Started with Docker Management

Docker concepts

Docker is a platform for developers and sysadmins to develop, deploy, and run applications with containers. The use of Linux containers to deploy applications is called containerization. Containers are not new, but their use for easily deploying applications is.
Containerization is increasingly popular because containers are:
  • Flexible: Even the most complex applications can be containerized.
  • Lightweight: Containers leverage and share the host kernel.
  • Interchangeable: You can deploy updates and upgrades on-the-fly.
  • Portable: You can build locally, deploy to the cloud, and run anywhere.
  • Scalable: You can increase and automatically distribute container replicas.
  • Stackable: You can stack services vertically and on-the-fly.
As it's a trending technology, I also tried to learn it, but I couldn't get far on my own, so I was looking for something that could help me with Docker management so that I could focus on writing containers and manage them from a GUI. After a search on Google I came across a tool called Portainer, which is itself deployed on Docker but can be a great help in managing Docker containers, so I started to evaluate it. It's really easy to install: just install Docker on the host using the commands below.


# sudo apt-get install docker.io

# docker volume create portainer_data




# docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer




and Portainer is up and running in no time; you can access it in the browser at
http://<server-ip>:9000





You can do everything from building containers to deploying it.

Friday, July 27, 2018

AWS Lambda to Start and Stop Instances as per the Schedule Tag

Hello Guys,

I have recently joined a new company where the development team works India working hours, so the servers on AWS sit idle the rest of the day. They wanted to reduce cost, so they came to me and asked for a strategy; I suggested shutting down the machines/servers during non-working hours, which they agreed to, but they wanted a governing mechanism to enforce it. So I came up with a Lambda function which lets them customize the startup and stop times, and they don't need to start and stop the instances manually. As I needed to implement the same functionality on 25 accounts, I wrote the Lambda function, put it in a CloudFormation template, gave it to them, and asked them to execute it in each account. Then they just add the tag name "Schedule" with the start and stop times in UTC as the value (the Lambda runs on UTC), and the Lambda function takes care of the rest.
To further reduce costs of our EC2 instances, we determined some instances that are running 24/7 unnecessarily. An automated process to schedule stop and start instances would greatly help cutting costs. Of course, the solution itself should not add an extra instance to our infrastructure. The solution must be an example of serverless computing.
Automatically stop EC2 instances that are running 24/7 unnecessarily
We created a lambda function that scans all instances for a specific tag. The tag we use is named ‘Schedule’ and contains the desired ‘runtime’ for the specific instance. Instances without a Schedule tag will not be affected. We support the following content format in the Schedule tag:
08:00-19:15   start the instance at 08:00, stop it at 19:15
21:00-07:45   start the instance at 21:00, stop it at 07:45 the next day
-17:45        stop this instance at 17:45 today
-16:30T       terminate this instance at 16:30 today
# whatever    anything starting with a # is totally ignored
NOTE: All times are UTC times!
Invalid tag content is automatically prefixed with a # so that it is ignored in future invocations of the lambda function. Also, instances with only an end-time will have their tag rewritten to avoid restarts at 00:00.
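The tag rules above can be sketched in pure Python — my illustration of the decision logic, not the exact code of the lambda (times are datetime.time values in UTC):

```python
from datetime import time
import re

def should_run(tag, now):
    """Decide whether an instance with this Schedule tag should run at `now`.

    Returns True/False, or None when the tag should be ignored
    ('#...' comments and invalid content), i.e. leave the instance alone.
    """
    if tag.startswith('#'):
        return None
    if tag.startswith('-'):
        # End-time only, e.g. '-17:45' or '-16:30T': stop (or terminate)
        # once the end time has passed today.
        end = time(int(tag[1:3]), int(tag[4:6]))
        return now <= end
    if not re.match(r'\d{2}:\d{2}-\d{2}:\d{2}', tag):
        return None  # invalid content: the real function rewrites it with '#'
    start = time(int(tag[0:2]), int(tag[3:5]))
    end = time(int(tag[6:8]), int(tag[9:11]))
    if start < end:
        return start <= now <= end      # same-day window
    return now >= start or now <= end   # window carries over past midnight
```

For example, `should_run('21:00-07:45', time(3, 0))` is True because the window carries over past midnight.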
Our lambda function is written in python using boto3 for AWS integration. The function requires a role to be able to interact with EC2. The lambda function has to be able to read and change tags, stop and start instances and even terminate them!
To further automate the process, we need to automatically execute the lambda function, which can be achieved by creating a CloudWatch Event Rule. It is a bit of a funny place for a schedule, but that’s where you configure it. The event will be used as a trigger to start our lambda function.
The scheduler we create in the stack runs the lambda function every 10 minutes, giving your EC2 scheduler a granularity of 6 runs per hour.
As shown in the screenshot below, one needs only to add the "Schedule" tag with the required timing against it, and the function will start and stop the instance at the given times.



Below is the attached cloud formation template .
{
  "AWSTemplateFormatVersion": "2010-09-09",
  
  "Description" : "[INFRA] Lambda code to start/stop EC2 instances based on tag Schedule. Examples of valid content for Schedule tag are: 08:00-18:00 EC2 instance will run from 08:00 till 18:00 every day. -17:00 wil stop the instance today at 17:00 and rewrite the tag to #-17:00. -17:00T will TERMINATE the instance at 17:00. Any invalid content will be rewritten to start with a #. Any content starting with # will be ignored",
  "Resources" : {
  
    "EC2SchedulerLambda": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Handler": "index.EC2Scheduler",
        "FunctionName" : "EC2Scheduler",
"Description" : "[INFRA] Lambda code to start/stop EC2 instances based on tag Schedule. NOTE times are UTC. See stack description for further information",
        "Role": { "Fn::GetAtt" : ["EC2SchedulerRole", "Arn"] },
        "Code": {
          "ZipFile":  { "Fn::Join": ["\n", [
"from datetime import datetime, time",
"import boto3",
"import re",
"",
" theTag='Schedule'",
"",
"doLog = True",
"",
"def EC2Scheduler(event, context):",
"",
"  now = datetime.now().replace(second=0, microsecond=0)",
"",
"  if doLog:",
"    print 'Reference time is ', now.time()",
"",
"  ec2 = boto3.resource('ec2')",
"",
"  allInstances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running', 'stopped']}])",
"",
"  for instance in allInstances:",
"    if instance.tags :",
"      for tag in instance.tags:",
"        if tag['Key'] == theTag :",
"          Range=tag['Value']",
"",
"          if doLog :",
"            print instance.instance_id, ",
"",
"          shouldrun=shouldRunNow(instance, now, Range)",
"",
"          if doLog :",
"            print 'should %s' %( 'run' if shouldrun else 'NOT run'),",
"",
"          alignInstance(instance, shouldrun)",
"",
"def shouldRunNow(instance, now, tRange):",
"",
"  currentState = instance.state['Code'] == 16",
"",
"  # is the tRange commented out",
"",
"  if tRange[0:1] == '#':",
"    if doLog:",
"      print 'Range starts with # -- no changes',",
"    return currentState",
"",
"  # does the time indicate an end-time only [eg -13:15]",
"",
"  if tRange[0:1] == '-':",
"    tEnd=time(int(tRange[1:3]),int(tRange[4:6]))",
"",
"    if doLog:",
"      print 'End time:', tEnd,",
"",
"    if now.time() > tEnd:",
"",
"      msg='stop'",
"      if tRange[6:7] == 'T':",
"",
"        terminateAllowed=instance.describe_attribute(Attribute='disableApiTermination')['DisableApiTermination']['Value'] == False",
"",
"        if terminateAllowed:",
"          instance.modify_attribute( Attribute='instanceInitiatedShutdownBehavior', Value='terminate' )",
"          msg='terminate'",
"        else:",
"          msg='stop [terminate not allowed]'",
"",
"      # re-tag",
"",
"      instance.delete_tags(Tags=[{'Key': theTag, 'Value': tRange}])",
"      instance.create_tags(Tags=[{'Key': theTag, 'Value': '#'+msg+'@'+tRange}])",
"",
"      if doLog:",
"        print 'time to ', msg, ",
"",
"      return False",
"",
"    else:",
"      # leave instance in current state",
"      if doLog:",
"        print 'NO time to stop (yet) ',",
"      return currentState",
"",
"  # some simple checks for tRange",
"",
"  if not re.match('\\d{2}:\\d{2}-\\d{2}:\\d{2}',tRange):",
"    if doLog:",
"      print 'error in format of tag: >', tRange, '< -- no changes required ',",
"",
"    instance.delete_tags(Tags=[{'Key': theTag, 'Value': tRange}])",
"    instance.create_tags(Tags=[{'Key': theTag, 'Value': '# Err: '+tRange}])",
"",
"    return currentState",
"",
"  tStart=time(int(tRange[0:2]),int(tRange[3:5]))",
"  tEnd=time(int(tRange[6:8]),int(tRange[9:11]))",
"",
"  inInterval = False",
"    ",
"  # first case, start < end --> same day",
"",
"  if tStart < tEnd:",
"    if tStart <= now.time() <= tEnd:        ",
"      inInterval = True ",
"",
"  # second case, end < start --> carry over to next day",
"",
"  else:",
"    if tStart <= now.time() <= time(23,59) or time(0,0) <= now.time() <= tEnd:",
"      inInterval = True",
"",
"  if doLog :",
"    print 'Ref time is %s interval' %( 'in' if inInterval else 'NOT in'), tRange, ",
"",
"  return inInterval",
"",
"def alignInstance(inst, requiredOn):",
"",
"  actualOn = inst.state['Code'] == 16 ",
"  msg='is compliant'",
"",
"  if actualOn != requiredOn:",
"    if requiredOn == True:",
"      msg='starting'",
"      inst.start()",
"    else:",
"      termReq=inst.describe_attribute(Attribute='instanceInitiatedShutdownBehavior')['InstanceInitiatedShutdownBehavior']['Value'] == 'terminate'",
"",
"      if termReq:",
"        msg='terminating'",
"        inst.terminate()",
"      else:",
"        msg='stopping'",
"        inst.stop()",
"",
"  if doLog:",
"    print '-->', msg",
""
  ]]}
        },
        "Runtime": "python2.7"
      }
    },
    
    "EC2SchedulerTick": {
      "Type": "AWS::Events::Rule",
      "Properties": {
"Description": "ScheduledRule for EC2Scheduler",
"ScheduleExpression": "rate(10 minutes)",
        "Name" : "EC2SchedulerTick",
"State": "ENABLED",
"Targets": [{
  "Arn": { "Fn::GetAtt": ["EC2SchedulerLambda", "Arn"] },
  "Id": "TargetFunctionV1"
}]
      }
    },
    "PermissionForEventsToInvokeLambda": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
"FunctionName": { "Ref": "EC2SchedulerLambda" },
"Action": "lambda:InvokeFunction",
"Principal": "events.amazonaws.com",
"SourceArn": { "Fn::GetAtt": ["EC2SchedulerTick", "Arn"] }
      }
    },
    "EC2SchedulerRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{ "Effect": "Allow", "Principal": {"Service": ["lambda.amazonaws.com"]}, "Action": ["sts:AssumeRole"] }]
        },
        "Path": "/",
        "RoleName" : "EC2SchedulerRole",
        "Policies": [{
          "PolicyName": "EC2SchedulerPolicy",
          "PolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
    {
"Effect": "Allow",
"Action": ["logs:*"], 
"Resource": "arn:aws:logs:*:*:*" 
    },
    {
"Effect": "Allow",
"Action": ["ec2:*"], 
"Resource": "*"
    }]
          }
        }]
      }
    }
  }

}
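A note on the Code/ZipFile trick used above: the Python source is embedded as a JSON array of lines, and Fn::Join glues them together with newlines before Lambda treats the result as the function's source file. Simplified, CloudFormation does the equivalent of (a sketch with made-up lines, not the template's actual code):

```python
# What Fn::Join ["\n", [...]] does with the ZipFile lines (simplified):
# the array of source lines becomes one newline-joined string that Lambda
# deploys as the function's source.
lines = [
    "import re",
    "",
    "def EC2Scheduler(event, context):",
    "    return 'ok'",
]
source = "\n".join(lines)
```

This is why every statement of the lambda appears as its own quoted string in the template.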

That's it. Enjoy...

Saturday, February 24, 2018

Enforce Tagging Using AWS Lambda Part-2

Hello Guys,

As we have already completed the prerequisites in Part 1, let's get going with the Lambda function.

Go to https://aws.amazon.com, log in to the console, and click on Lambda.
Then click on Create Function --> Author from Scratch.

When you click on Author from Scratch, scroll down and you will find a form; fill it in as shown in the snapshot.

I have used Python 2.7 as the runtime environment, as I have written the code in Python 2.7.
In the Role section, select "Choose an existing role", and in the drop-down box select our role "lambda_basic_execution".
Then click on Create function.

A nice screen will pop up.

On the left side of the function (i.e., what will trigger it), add the CloudWatch Event rule which we created in Part 1.


Its configuration will look something like this at the bottom of the function.


Now click on the function and copy-paste the code below.

import boto3
import logging

#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

TopicArn = 'Write the Topic ARN for notification here'

#define the connection
ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances.
    filters = [
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
   
    #filter the instances
    instances = ec2.instances.filter(Filters=filters)

    #locate instances missing any of the required tag keys
    untaggedInstances = [instance.id for instance in instances if 'Name' not in [t['Key'] for t in instance.tags]]
    #the below line was added for debugging
    print untaggedInstances
    ptower_untaggedInstances = [instance.id for instance in instances if 'Product Tower' not in [t['Key'] for t in instance.tags]]
    app_untaggedInstances = [instance.id for instance in instances if 'Application' not in [t['Key'] for t in instance.tags]]
    scon_untaggedInstances = [instance.id for instance in instances if 'Support Contact' not in [t['Key'] for t in instance.tags]]
    appown_untaggedInstances = [instance.id for instance in instances if 'Application Owner' not in [t['Key'] for t in instance.tags]]
    dom_untaggedInstances = [instance.id for instance in instances if 'Domain' not in [t['Key'] for t in instance.tags]]
    #combine the lists and de-duplicate the instance ids
    untaggedInstances = list(set(untaggedInstances + ptower_untaggedInstances + app_untaggedInstances + scon_untaggedInstances + appown_untaggedInstances + dom_untaggedInstances))
    print untaggedInstances
    #print the instances for logging purposes
    #print untaggedInstances
   
    #make sure there are actually instances to shut down
    if len(untaggedInstances) > 0:
        #perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=untaggedInstances).stop()
        #publish_to_sns(shuttingDown)
        print shuttingDown
    else:
        print "Nothing to see here"



def publish_to_sns(message):
    sns = boto3.client('sns')
    sns_message = "We have shut down the instances and the instance IDs are..." + str(message)
    response = sns.publish(TopicArn=TopicArn, Message=sns_message)
After pasting it into the Function code section, it will look like this:


We are mostly done.

Now, if you create an instance that is missing any of the following tags:

  • Name
  • Product Tower
  • Application
  • Support Contact
  • Application Owner
  • Domain
then the instance will be shut down. Before enabling the script, also make sure your existing instances have these tags, otherwise it will stop them too....
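For clarity, the tag check the function performs can be reduced to this small sketch (pure Python over boto3-style tag lists; the function and variable names here are mine, not from the original code):

```python
REQUIRED_TAGS = ['Name', 'Product Tower', 'Application',
                 'Support Contact', 'Application Owner', 'Domain']

def missing_tags(instance_tags):
    # `instance_tags` is the boto3-style list of {'Key': ..., 'Value': ...}
    # dicts. An instance with any required key missing ends up in the
    # shutdown list the lambda builds.
    keys = [t['Key'] for t in (instance_tags or [])]
    return [k for k in REQUIRED_TAGS if k not in keys]
```

An instance tagged only with Name would come back with the other five keys listed, so the lambda would stop it.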

Let me know if you face any issues while implementing this.... I am happy to help you....
And believe me guys, you won't find a better way to do this: if you have multiple accounts with different tagging requirements, this is a portable and simple solution....Enjoy...


Enforce Tagging Using AWS Lambda Part-1

Hello Guys,

I worked on a very interesting assignment recently in my office: the requirement was to shut down all instances in the AWS account that are not tagged with the tag 'Owner'. So I worked on it for a couple of days and created a simple Lambda function.

So lets start with the implementation
1. Create an IAM role which the Lambda service will use to execute the Lambda function.
So I created a role named lambda_basic_execution and attached two policies: one is an inline policy which looks something like this, and the other is the EC2 full admin policy available in AWS.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]

}
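As a sanity check, the inline policy above is ordinary JSON and can be parsed and inspected like any JSON document (illustration only):

```python
import json

# The inline policy from above, verbatim.
policy_json = """{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}"""

policy = json.loads(policy_json)
allowed_actions = policy['Statement'][0]['Action']
```

These CloudWatch Logs actions are what let the Lambda function write its print output to a log group.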

So my IAM role looks something like the snapshot below.


Cloudwatch Rule

Then I created a CloudWatch rule which we will use to trigger the function.
This is my rule, which will fire the event whenever an instance changes its state:
{ "source": [ "aws.ec2" ], "detail-type": [ "EC2 Instance State-change Notification" ] }
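To see what that pattern does, here is a tiny sketch (mine, a simplification, not the AWS implementation) of how an event rule pattern matches incoming events: every field listed in the pattern must contain the event's value.

```python
# Simplified CloudWatch Events pattern matching: each pattern field
# lists the values it accepts; an event matches when all fields do.
PATTERN = {
    'source': ['aws.ec2'],
    'detail-type': ['EC2 Instance State-change Notification'],
}

def rule_matches(event, pattern=PATTERN):
    return all(event.get(field) in allowed
               for field, allowed in pattern.items())
```

So an EC2 state-change event triggers the Lambda, while events from other services are ignored.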


We are almost done with our preparations. We will continue with the Lambda function in the next part...

Tuesday, January 30, 2018

Install Soundwave and AWS inventory

Hello Guys,
Welcome back. Recently at my office I needed to work on a searchable inventory solution for AWS, so that I can get the list of AWS instances in CSV format. I googled around for a solution suitable for my requirement and chose Soundwave. So let's not waste time and get started with the installation and some sample queries.

So let's install an Ubuntu server on VirtualBox if you don't want to spend money on an AWS server; I installed it on my local server.

I installed Ubuntu Server, installed git on it, and also installed Terraform.

#sudo apt-get update
#sudo apt-get upgrade -y
#sudo apt-get install git

Install Oracle JDK 1.8 on the Ubuntu server as a prerequisite. For that you need to install the PPA repo using the following commands:

#sudo apt-add-repository ppa:webupd8team/java
#sudo apt-get update
#sudo apt-get install oracle-java8-installer

Now lets install terraform

#wget https://releases.hashicorp.com/terraform/0.9.8/terraform_0.9.8_linux_amd64.zip
#unzip terraform_0.9.8_linux_amd64.zip
#sudo mv terraform /usr/local/bin/

Now let's confirm the terraform binary is accessible:
#terraform --version

Now let's install Docker on the server as well; you can take reference from Here and install Docker on your server.

Also keep the access key and secret key of your AWS account handy; you can get them from the AWS console.

After this you can follow the instructions from here. Once it's all done, you can check that it is installed and working: take the public IP (or the IP you use to SSH to the server)
and put it in the browser.

Tuesday, July 18, 2017

Sonar Analysis with Pull request Raise part 2

Hello Guys,
Part - 2
Follow the link https://www.digitalocean.com/community/tutorials/how-to-install-jenkins-on-ubuntu-16-04
This link will help you in the installation of Jenkins on the server.
Part -3
Also follow the below link This will help you in the installation of SonarQube.
http://rathinavneet.blogspot.in/2017/05/install-sonarqube-on-ubuntu-part-1.html

Install the GitHub plugin in SonarQube, and install the SonarQube Scanner plugin in Jenkins.


Open Jenkins, go to Manage Jenkins, and configure the SonarQube server as shown in the figure.



Once SonarQube is configured, start configuring the GitHub server in Jenkins as shown in the figure.

Now let's configure the GitHub Pull Request Builder in Jenkins.


And that's it, we are done with the integration of GitHub with Jenkins. Now we can configure the job,
which will be automatically triggered when you raise a PR (pull request).
Optionally you can also configure email alerts.
That I will cover in the next article.


Wednesday, July 12, 2017

Sonar Analysis with Pull request Raise part 1.

Hello Guys,

In recent days one of my friends asked me whether it is possible to check code automatically whenever someone raises a pull request in a GitHub repo.
e.g. Assume I have a GitHub repo called newrepo. Now, whenever someone pushes his code into my repo, I need to validate that code against some quality benchmark, make sure the code is of good quality, and if it has any defects, I should get those as GitHub comments on the pull request.
It has to be done in steps:

  1. Create a GitHub webhook
  2. Generate a personal access token with appropriate permissions
  3. Create a Jenkins job
  4. Set up SonarQube
  5. Set up a quality gate for the project (optional)
1. Create Github webhook.

In My case go to URL https://github.com/navneet-rathi/newrepo

Open your repository, click on Settings, and in Settings click on Webhooks, or go to the URL:

https://github.com/navneet-rathi/newrepo/settings/hooks.

Replace newrepo with the name of your project repository.


Click on Add webhook and fill in the details of your Jenkins server as shown in the picture.




This webhook will send information about the pull request. To trigger a Jenkins job we need to create one more webhook, as shown in the screenshot below.



And that's it, we are done with part 1.

In the next part I will set up Jenkins, the remaining configuration, and the Sonar changes.

Sunday, June 4, 2017

Getting started with Sonar part 3

Hello Guys,

This is the 3rd and last part of the Sonar series, in which I will explain how you can detect whether a pull request raised on GitHub is safe to merge into your main or master branch. It will also comment on GitHub with the reasons why it is not safe. So let's get started.
So first things first:

1. You have a git repository set up
2. A standard Sonar setup

Now, how can we achieve it?

So for testing we have a test repository:
https://github.com/navneet-rathi/sample-code-java.git

We have a pull request raised on this repository, and we need to know the pull request number, which we can find on the repository's pull request page.

From this I come to know that the pull request number is 2.

You need to give Sonar access so that it can write to your GitHub. This is possible using a personal access token, which you generate on GitHub with the proper permissions.

I have given the access below; GitHub will give me an access token, please note it down.

Now let's start the work.

I will use Maven to run the check. Go to the project repository and fire the below command:

mvn -Dsonar.analysis.mode=preview \
              -Dsonar.github.pullRequest=2 \
              -Dsonar.github.repository=navneet-rathi/sample-code-java \
              -Dsonar.github.oauth=<This token which you have previously noted down> \
              -Dsonar.host.url=http://localhost:9000 \
              -Dsonar.login=admin \
              -Dsonar.password=<your sonar password>

And that's it. After this you will see whether it is feasible to merge the request or not, and if we merge, what the consequences will be.

In my case it failed, and below is the reason.
  
Oh yes guys, below is the general command:

mvn -Dsonar.analysis.mode=preview \
              -Dsonar.github.pullRequest=$PULL_REQUEST_ID \
              -Dsonar.github.repository=myOrganisation/myProject \
              -Dsonar.github.oauth=$GITHUB_ACCESS_TOKEN \
              -Dsonar.host.url=https://server/sonarqube \
              -Dsonar.login=$SONARQUBE_ACCESS_TOKEN or your Sonar username \
              -Dsonar.password=$SONAR_PASSWORD
 
So that's it guys, let me know if you want to know anything more about this.