Tuesday, January 28, 2014

Create Your Own Cloud with OpenStack

               Creating Your Own Cloud System with CentOS 6.5

Hello guys,
This is Navneet again. Today's blog will help you build your own cloud with CentOS and OpenStack.
Building it is not rocket science; it's simple, just follow the steps below.

I was looking for an easy way to quickly deploy OpenStack in my CentOS environment. There are many tools available to accomplish this on Ubuntu, but very few for CentOS. Then I found the packstack project: packstack is a utility that uses Puppet modules to deploy the various parts of OpenStack on multiple pre-installed servers over SSH, automatically, using Python. Red Hat has started contributing to packstack, and they have some very good documentation on how to get going quickly. For example, if you are installing an all-in-one configuration, meaning all the OpenStack modules on one server, you only need to run three commands to get the environment up and running. As of the date of this blog, packstack is only supported on Fedora, Red Hat Enterprise Linux (RHEL) and compatible derivatives of both.

Before installing, disable SELinux in /etc/selinux/config (a reboot is needed for this to take full effect):

[root@hq-openstack-control ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 
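The edit above can also be done non-interactively. This is a sketch that works on a throwaway copy of the file, so it is safe to try anywhere; on a real node you would point it at /etc/selinux/config and reboot (or run setenforce 0 for the current session):

```shell
# Demo on a copy; point cfg at /etc/selinux/config on a real node
cfg=/tmp/selinux_config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Flip the SELINUX= line to disabled, whatever it was set to before
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # SELINUX=disabled
```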

First add the Fedora repo – you can also install from GitHub – and install packstack using yum:
[root@hq-openstack-control ~]# yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly-2.noarch.rpm
[root@hq-openstack-control ~]# yum install -y openstack-packstack
Configure NTP on the nodes. This is not required, but it is good to have: install ntp and sync up with your NTP server. I used a public NTP server in my configuration.
[root@hq-openstack-control ~]# yum install ntp
[root@hq-openstack-control ~]# chkconfig ntpd on
[root@hq-openstack-control ~]# ntpdate pool.ntp.org
[root@hq-openstack-control ~]# /etc/init.d/ntpd start
Now, since I wanted to make modifications to the default packstack install, I generated a file that contains the install configuration. This file is called the "answer file", and I put my configuration preferences in it. I then told packstack to use that file to do the install. The answer file is also used when you need to make changes to the OpenStack cluster, for example to add a node: you change the answer file to reflect that a new node has been added, then run packstack again and point it at the modified answer file.
I generated the answer file:
[root@hq-openstack-control ~]# packstack --gen-answer-file=/root/grizzly_openstack.cfg
[root@hq-openstack-control ~]# vi grizzly_openstack.cfg
The answer file defaults to putting all the openstack modules in one node.
I made changes to ensure that my swift node was installed in my compute node running on a UCS B200 blade. I left the swift proxy node in the control node. Node 172.17.100.71 is my compute node and node 172.17.100.72 is my control node.
# A comma separated list of IP addresses on which to install the
# Swift Storage services, each entry should take the format
# [/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device(packstack does not create the
# filesystem, you must do this first), if /dev is omitted Packstack
# will create a loopback device for a test setup
CONFIG_SWIFT_STORAGE_HOSTS=172.17.100.71

#The IP address on which to install the Swift proxy service
CONFIG_SWIFT_PROXY_HOSTS=172.17.100.72
I installed Cinder in my compute node.
# The IP address of the server on which to install Cinder
CONFIG_CINDER_HOST=172.17.100.71
I installed nova compute in my compute node. If at a later date I wanted to add a second compute node, I would come back and make the changes here.
# A comma separated list of IP addresses on which to install the Nova
# Compute services
CONFIG_NOVA_COMPUTE_HOSTS=172.17.100.71
I also set the public interface for Nova Network on the control node to eth0:
# Public interface on the Nova network server
CONFIG_NOVA_NETWORK_PUBIF=eth0
And I set the private interface for Nova Network's flat DHCP on the control node, and the private interface on the Nova compute servers:
# Private interface for Flat DHCP on the Nova network server
CONFIG_NOVA_NETWORK_PRIVIF=eth1


# Private interface for Flat DHCP on the Nova compute servers
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
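If you script your deployments, the same answer-file edits can be applied with sed instead of vi. The sketch below works on a throwaway copy so it can be run safely; on the real system you would point cfg at /root/grizzly_openstack.cfg:

```shell
# Demo on a throwaway copy; use /root/grizzly_openstack.cfg on a real run
cfg=/tmp/grizzly_openstack.cfg
cat > "$cfg" <<'EOF'
CONFIG_SWIFT_STORAGE_HOSTS=127.0.0.1
CONFIG_SWIFT_PROXY_HOSTS=127.0.0.1
CONFIG_CINDER_HOST=127.0.0.1
CONFIG_NOVA_COMPUTE_HOSTS=127.0.0.1
EOF

# Swift storage, Cinder and Nova compute go to the compute node (.71);
# the Swift proxy stays on the control node (.72)
sed -i \
  -e 's/^CONFIG_SWIFT_STORAGE_HOSTS=.*/CONFIG_SWIFT_STORAGE_HOSTS=172.17.100.71/' \
  -e 's/^CONFIG_SWIFT_PROXY_HOSTS=.*/CONFIG_SWIFT_PROXY_HOSTS=172.17.100.72/' \
  -e 's/^CONFIG_CINDER_HOST=.*/CONFIG_CINDER_HOST=172.17.100.71/' \
  -e 's/^CONFIG_NOVA_COMPUTE_HOSTS=.*/CONFIG_NOVA_COMPUTE_HOSTS=172.17.100.71/' \
  "$cfg"
grep -c '172\.17\.100\.71' "$cfg"   # 3
```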
At this point I was done and ready to start the install. During the install, you will be prompted for the compute node's root password.
[root@hq-openstack-control ~]# packstack --answer-file=/root/grizzly_openstack.cfg
Welcome to Installer setup utility
Packstack changed given value  to required value /root/.ssh/id_rsa.pub
Installing:
Clean Up...                                              [ DONE ]
Adding pre install manifest entries...                   [ DONE ]
Setting up ssh keys...root@172.17.100.72's password:
..
172.17.100.72_swift.pp :                                             [ DONE ]
172.17.100.72_nagios.pp :                                            [ DONE ]
172.17.100.72_nagios_nrpe.pp :                                       [ DONE ]
Applying 172.17.100.72_postscript.pp  [ DONE ] 
172.17.100.72_postscript.pp :                                        [ DONE ]
Five to fifteen minutes later, the install will complete.
[root@hq-openstack-control ~]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth hq-openstack-control                 internal         enabled    :-)   2013-05-08 16:32:10
nova-cert        hq-openstack-control                 internal         enabled    :-)   2013-05-08 16:32:10
nova-conductor   hq-openstack-control                 internal         enabled    :-)   2013-05-08 16:32:10
nova-scheduler   hq-openstack-control                 internal         enabled    :-)   2013-05-08 16:32:10
nova-network     hq-openstack-control                 internal         enabled    :-)   2013-05-07 20:56:47
nova-compute     hq-ucs-openstack-compute-node-01            enabled    :-)   2013-05-08 16:32:19
To log in, navigate to the OpenStack dashboard in your browser at http://<controller_node_ip>. The username is admin. The password was auto-generated for you when you created the answer file; you can get it by grepping for the keystone admin password:
[root@hq-openstack-control ~]# grep -i CONFIG_KEYSTONE_ADMIN_PW /root/grizzly_openstack.cfg
CONFIG_KEYSTONE_ADMIN_PW=2asdaf559d32asdfasdfa234bd1
And that's it.

From another system, open a web browser and type the IP address of the control node, e.g. 172.17.100.72.

You will see the login screen; enter your credentials and enjoy:
user: admin, password (in my case): 2asdaf559d32asdfasdfa234bd1

Friday, January 24, 2014

Create blog using ghost on Ubuntu 12.04



You can download and install Ghost on your local machine or on an existing web server hosted in the cloud.

First you need to download the Ghost source code from the Ghost site; in my case I used curl:
$ curl -L https://ghost.org/zip/ghost-latest.zip -o ghost.zip
then unzip the code using the following command:
$ unzip -uo ghost.zip -d ghost
After you have successfully extracted Ghost, open a terminal (if you haven't already), then:
  • Change into the directory you extracted Ghost to with the following command:
    $ cd /path/to/ghost
  • To install Ghost type:
    npm install --production
    note the two dashes
  • When npm is finished installing, type the following to start Ghost in development mode:
    $ npm start
  • Ghost will now be running on 127.0.0.1:2368
    You can adjust the IP-address and port in config.js
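By default Ghost listens only on 127.0.0.1. To reach it from other machines directly (or to bind it for a proxy on another host), change the server section of config.js. The keys below follow Ghost 0.x's sample config, so treat the exact layout as an assumption and compare with your own file:

```javascript
// config.js — bind on all interfaces instead of localhost only
// (key names follow the Ghost 0.x sample config; verify against yours)
server: {
    host: '0.0.0.0',
    port: '2368'
}
```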

and we are done.

For production deployment you can use forever to run Ghost as a background task. forever will also look after your Ghost installation and restart the node process if it crashes.
  • To install forever type npm install forever -g
  • To start Ghost using forever from the Ghost installation directory type NODE_ENV=production forever start index.js
  • To stop Ghost type forever stop index.js
  • To check if Ghost is currently running type forever list
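forever itself will not survive a server reboot; one lightweight way to bring Ghost back up at boot is a cron @reboot entry. The paths below are placeholders: use your actual Ghost directory, and check `which forever` for where npm installed it.

```
# crontab -e   (paths are placeholders; check `which forever` for yours)
@reboot cd /path/to/ghost && NODE_ENV=production /usr/local/bin/forever start index.js
```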
And most important: you need reverse proxy settings, for example with Apache:

NameVirtualHost *:80

<VirtualHost *:80>
ServerName blog.nrathi.com
ProxyPass / http://127.0.0.1:2368/
ProxyPassReverse / http://127.0.0.1:2368/
#RequestHeader set "X-Forwarded-Proto" "https"

# Fix IE problem (http error 408/409)
#SetEnv proxy-nokeepalive 1

ProxyPreserveHost on
</VirtualHost>
and we are done: we can access the blog at the hostname or IP address where we hosted it. Note that Apache's proxy modules must be enabled first (sudo a2enmod proxy proxy_http) and Apache reloaded.

Wednesday, January 22, 2014

Automate Backup Process and Store on AWS S3


Hello Guys,

This is in fact a common problem for most system administrators: they want to automate the backup process and store the backups in a reliable, highly available location, one from which data can be retrieved at very high speed if needed (say, after a server crash where the production copy is lost), and they need to store many GBs of data.

In my case I have to back up around 100 GB of data from MySQL daily and store it,
so I used a trick: cheap but highly reliable storage.
I came to know about Amazon S3; it is very cost effective and highly available throughout the world.

The second problem I faced was interfacing the two systems to work together: automated backups should be taken, stored in a particular folder, and after a certain time the older ones should be deleted, like a backup rotation.

So I had to write a script which creates a backup and stores it on S3, and at the same time deletes the backup created 7 days back, as it is not of much use to me.

I used s3cmd as the interface between my backup script and Amazon S3.
Installation is simple, no rocket science involved. As I am using Ubuntu 12.04:

sudo apt-get install s3cmd

I created an Amazon account, generated access credentials, and ran s3cmd --configure to store them.

After that I wrote a script to take a backup of all Postgres databases.

#!/bin/sh
#### BEGIN CONFIGURATION ####
# set dates for backup rotation
NOWDATE=`date +%Y-%m-%d`
LASTDATE=$(date +%Y-%m-%d --date='1 week ago')
# set backup directory variables
SRCDIR='/tmp/s3backups'
DESTDIR='path/to/s3folder'
BUCKET='s3bucket'
# database access details
HOST='127.0.0.1'
PORT='5432'
USER='user'
#### END CONFIGURATION ####
# make the temp directory if it doesn't exist
mkdir -p $SRCDIR
# get the list of databases (skip psql's decoration lines and system databases)
DBLIST=`psql -l -h$HOST -p$PORT -U$USER \
| awk '{print $1}' | grep -v "+" | grep -v "Name" | \
grep -v "List" | grep -v "(" | grep -v "template" | \
grep -v "postgres" | grep -v "root" | grep -v "|"`
# dump each database to its own sql file
for DB in ${DBLIST}
do
pg_dump -h$HOST -p$PORT -U$USER $DB -f $SRCDIR/$DB.sql
done
# tar all the databases into $NOWDATE-backups.tar.gz
cd $SRCDIR
tar -czPf $NOWDATE-backup.tar.gz *.sql
# upload backup to s3
/usr/bin/s3cmd put $SRCDIR/$NOWDATE-backup.tar.gz s3://$BUCKET/$DESTDIR/
# delete old backups from s3
/usr/bin/s3cmd del --recursive s3://$BUCKET/$DESTDIR/$LASTDATE-backup.tar.gz
# remove all files in our source directory
cd
rm -f $SRCDIR/*
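The NOWDATE/LASTDATE pair is what implements the seven-day rotation: today's tarball is uploaded and the one from exactly one week ago is deleted from the bucket. A quick sanity check of that date arithmetic (GNU date, as used in the script):

```shell
# Same date logic as the script: upload today's file, delete last week's
NOWDATE=$(date +%Y-%m-%d)
LASTDATE=$(date +%Y-%m-%d --date='1 week ago')
echo "upload: ${NOWDATE}-backup.tar.gz"
echo "delete: ${LASTDATE}-backup.tar.gz"
```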

The same way, I have a script for MySQL too.

#!/bin/sh
#### BEGIN CONFIGURATION ####
# set dates for backup rotation
NOWDATE=`date +%Y-%m-%d`
LASTDATE=$(date +%Y-%m-%d --date='1 week ago')
# set backup directory variables
SRCDIR='/tmp/s3backups'
DESTDIR='path/to/s3folder'
BUCKET='s3bucket'
# database access details
HOST='127.0.0.1'
PORT='3306'
USER='user'
PASS='pass'
#### END CONFIGURATION ####
# make the temp directory if it doesn't exist
mkdir -p $SRCDIR
# repair, optimize, and dump each database to its own sql file
for DB in $(mysql -h$HOST -P$PORT -u$USER -p$PASS -BNe 'show databases' | grep -Ev 'mysql|information_schema|performance_schema')
do
mysqldump -h$HOST -P$PORT -u$USER -p$PASS --quote-names --create-options --force $DB > $SRCDIR/$DB.sql
mysqlcheck -h$HOST -P$PORT -u$USER -p$PASS --auto-repair --optimize $DB
done
# tar all the databases into $NOWDATE-backups.tar.gz
cd $SRCDIR
tar -czPf $NOWDATE-backup.tar.gz *.sql
# upload backup to s3
/usr/bin/s3cmd put $SRCDIR/$NOWDATE-backup.tar.gz s3://$BUCKET/$DESTDIR/
# delete old backups from s3
/usr/bin/s3cmd del --recursive s3://$BUCKET/$DESTDIR/$LASTDATE-backup.tar.gz
# remove all files in our source directory
cd
rm -f $SRCDIR/*
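To actually automate the rotation, schedule both scripts with cron. The script paths, log files, and times below are placeholders; adjust them to wherever you saved the scripts:

```
# /etc/cron.d/s3-db-backup   (paths and times are placeholders)
0 2 * * * root /usr/local/bin/pg_backup_s3.sh >> /var/log/pg_backup_s3.log 2>&1
30 2 * * * root /usr/local/bin/mysql_backup_s3.sh >> /var/log/mysql_backup_s3.log 2>&1
```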

The same scripts can be reused with small variations. Enjoy!


 

 

Friday, January 17, 2014

Installing GitLab on Ubuntu 12.04

Creating your own private, small GitHub.

To create a small, local github.com for highly confidential projects, or for people who can't afford
to host their projects on github.com.

Prerequisite:-
You should have a good machine. It can be virtually hosted, but it should have a large hard disk (50 GB or 100 GB) and an average amount of RAM to serve the content (512 to 1024 MB will be great), with a Linux OS (Ubuntu 12.04 server) on top. Most importantly, for the installation process to work, the Linux box needs an internet connection.
 
Installation process :-

To install GitLab you should have either of the following command-line tools available on your Linux box.

  1. sudo apt-get install curl  # curl
  2. sudo apt-get install wget  # wget
I will use curl in my case.

curl https://raw.github.com/gitlabhq/gitlab-recipes/master/install/ubuntu/ubuntu_server_1204.sh | sudo domain_var=gitlab.nrathi.com sh

Once you have fired this, all the rest of the stuff is handled by the script.

On the machine from which you want to access GitLab: if you want to use it across the network, make an entry in your DNS/router; for a single machine you can use a hosts entry.

If you are using Linux as the client machine, then add to /etc/hosts:
<ip of linux box> gitlab.nrathi.com

Open a browser, type gitlab.nrathi.com, and you will see something like this.





Tuesday, December 17, 2013

Syncing code on multiple servers in real time 2



I have used 0.0.0.0 so that the web UI can be accessed from any IP address available on the server; for example, a server having 2 IPs plus, of course, loopback:
127.0.0.1
172.16.5.135
192.168.1.191

so it will be accessible from all three IPs.

Then set the username and password and we are all set.

Then, in a browser, enter the IP address on which we have set up the torrent sync.



In the above screenshot you can see the secret. Keep it handy; install BTSync on the other server, select the folder, paste the secret, and you are done: within a few minutes data starts syncing.

Heavy problems, simple solutions. If you like it, let me know.

Syncing code on multiple servers in real time 1

A few days back my boss came up with a requirement of syncing data across servers in real time.
It may happen that the servers' IP addresses change, as they were hosted on Amazon.

As all of us know, by default Amazon doesn't give you a fixed white (public) IP address, so I started searching, quite worried, as the syncing had to be real time and I didn't know which server the data would need to sync to.

I googled a lot and then read about BitTorrent Sync. It is very good software with great functionality; I really love it.

So lets start with installation.

Install

As it is not directly available in the Ubuntu repositories, we need to add a PPA first and then install it. As usual, setting up the Windows and Android clients is a fairly brainless process; setting up a Linux client is a little more exciting. Here's what I did on my Ubuntu 12.04 server:

Set Up: We don’t set things up manually in Ubuntu, son!

Use the PPA:

$ sudo add-apt-repository ppa:tuxpoldo/btsync
$ sudo apt-get update
$ sudo apt-get install btsync

If you’re on a version of Ubuntu that didn’t come with add-apt-repository, you can grab it from the software-properties-common package.

When you do this you will go through the following setup cycle; when you are done, repeat the same steps on the other machines and data syncing starts automatically.




For me, I chose 12345 as the sync port. You can pick any, but it is recommended to use a port above 1024, as ports below 1024 are the well-known ports.






Tuesday, November 26, 2013

Use windows tools (IE) on Ubuntu

You need to install curl first:
sudo apt-get install curl

wget -q "http://deb.playonlinux.com/public.gpg" -O- | sudo apt-key add -
sudo wget http://deb.playonlinux.com/playonlinux_precise.list -O /etc/apt/sources.list.d/playonlinux.list

sudo apt-get update
sudo apt-get install playonlinux



Now Wine will get updated automatically.

Open PlayonLinux and install IE6/IE7/IE8

Then click, click, click and done. Enjoy....

Wednesday, September 4, 2013

Building A Continuous Integration Server java Project part 1

Building A Continuous Integration Server java Project

For building a continuous Integration Sever we will use Jenkins and supported java libraries.
 Setup Jenkins on ubuntu 12.04 server 

sudo apt-get install default-jdk ant

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

and to upgrade the existing setup

sudo apt-get update
sudo apt-get install jenkins
 
Now, after the installation, Jenkins is online on port 8080. You can access it in a web browser at the machine's IP address, e.g. 172.16.16.10:8080.

You can also put it behind a proper URL; for that you need to install Apache on the machine:
sudo apt-get install apache2

and create a file called jenkins.conf:
nano /etc/apache2/sites-available/jenkins.conf
# and add the following lines to it
 
<VirtualHost *:80>
 ServerAdmin yourname@company.com
 ServerName ci.company.com
 ServerAlias ci
 ProxyRequests Off
 <Proxy *>
  Order deny,allow
  Allow from all
 </Proxy>
 ProxyPreserveHost on
 ProxyPass / http://localhost:8080/
</VirtualHost>
 
Enable the proxy modules and the jenkins site (sudo a2enmod proxy proxy_http, then a2ensite and an Apache reload), make the DNS settings or a hosts entry on the system from which you want to access
the server, and we are all set with the basic setup of Jenkins.
 

Friday, August 23, 2013

Fixing MySQL Replication with minimum downtime

It is a very critical task for a system admin/DBA to fix broken MySQL replication when the data in the system is crucial and cannot afford any kind of data loss.


Before starting the following tutorial, please fire "stop slave" on both servers if you have master-master replication, or on the slave server if you have master-slave replication, to avoid any kind of data loss.

mysql> stop slave;

If you are using InnoDB as the storage engine, fixing the replication carefully is even more critical, because even if you drop a database from the server, InnoDB will never release the hard disk space acquired by it.






Note: before doing anything, take a backup of all the databases in MySQL.

ibdata stores all of the UNDO LOGS and thus grows due to deletes; the space is never reclaimed.

To regain the hard disk space you need to delete the ibdata file and log files of MySQL, which live in /var/lib/mysql.

Note: for convenience we can keep a backup of the ibdata and iblog files.

cp -ar /var/lib/mysql/ib* /tmp/
rm /var/lib/mysql/ib*
service mysqld restart

This process will regenerate MySQL with shrunk ibdata and log files, and MySQL will come up again; then you can restore all the other databases which are not under replication.

Then take a consistent MySQL snapshot from the other server (the one having the latest data).
Open two terminals; from one, log into MySQL and
select the database which is under replication:

mysql> use <database name>
mysql> flush tables with read lock;
mysql> show master status;

Note down the master log file and log position of the master, or take a screenshot of the terminal.

On the other terminal, start taking the backup using mysqldump:

#mysqldump -u root -p <databasename> > <databasename.sql>


A SQL dump file gets generated, and we can copy it to the other (affected) server:

#scp <databasename.sql> <user>@<serverip>:/tmp

After the dump completes, fire on the first terminal:
mysql> unlock tables;
and close the terminal on the master.

On the affected server, open MySQL and fire:
mysql> SET sql_log_bin = 0;

This ignores the binary log during the restore, so you are safe: the restored data does not get replicated from one server to the other.
Then use the restore command, which can be done in two ways:
from the shell, or from within MySQL.

#mysql -u root -p <databasename> < <databasename>.sql
or 
#mysql -u root -p 
mysql> use <databasename>;
mysql> source  <path/to/databasename.sql> 

Note: if my file is in /tmp and the dump file is named newdb.sql, then
mysql> source /tmp/newdb.sql

After the restore, note down the server's master log position using the command:

mysql> show master status;
mysql> SET sql_log_bin = 1; # re-enable the bin log on the affected server

Then use the CHANGE MASTER TO command to re-point the servers:

CHANGE MASTER TO
  MASTER_HOST='<ip address of the master>',
  MASTER_USER='<user having the REPLICATION SLAVE privilege>',
  MASTER_PASSWORD='<user password>',
  MASTER_LOG_FILE='<master log file we noted earlier>',
  MASTER_LOG_POS=<log position we noted earlier>;
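After the CHANGE MASTER TO above, start the slave threads and verify replication is healthy. This is the standard MySQL check, not anything specific to this setup:

```sql
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both say "Yes",
-- and Seconds_Behind_Master should fall towards 0.
```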


And we are done fixing the server with minimal downtime.