Thursday, May 8, 2014

Install Syslog Server (Syslog-ng) on CentOS

Syslog-NG and CentOS 6.x


Requirements:

  • CentOS 6.x (or other Redhat based flavor)
  • Internet Connectivity
  • chkconfig (yum install chkconfig)
  • wget (yum install wget)

Installation:

Install the prerequisites first:
 # yum install chkconfig wget

Install EPEL Repositories:
  1. Login to your server as root (or su root)
  2. Type: cd /root
  3. Type (Current link as of this post):
# wget http://dl.fedoraproject.org/pub/epel/6Server/i386/epel-release-6-8.noarch.rpm
# rpm -Uvh /root/epel-release-6-8.noarch.rpm
# yum repolist
Install Syslog-NG:
Run an update check to see whether this will impact any other software on your system:
# yum check-update
Check the availability of Syslog-NG by typing:
# yum list *syslog-ng*
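The post does not show the installation command itself; assuming the syslog-ng package from the EPEL repository added above, installing it is simply:

# yum install syslog-ng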

Configure CentOS Services, Stop Rsyslog, and Start Syslog-NG:
Disable rsyslog at boot:
# chkconfig rsyslog off
Enable syslog-ng at boot:
# chkconfig syslog-ng on
Then switch the running services:
# service rsyslog stop
# service syslog-ng start
  1. Example Configuration for Syslog-NG:
    1. Add the following to the END of /etc/syslog-ng/syslog-ng.conf:
      # My Switches
      source s_navneet { 
              udp(ip(0.0.0.0) port(514));
              tcp(ip(0.0.0.0) port(514)); 
      };
      
      destination d_navneet {
              file(
                      "/var/log/navneet/$HOST-$YEAR$MONTH$DAY.log"
                      perm(644)
                      create_dirs(yes)
              );
      };
      
       
      log { source(s_navneet); destination(d_navneet); };
This will basically take ALL syslog data arriving on UDP/TCP port 514 (bound to 0.0.0.0) and place it into /var/log/navneet. The file names are based on the host name and date. For example, if you have a switch named MYSWITCH and the current date is May 8th, 2014… the full path and file name would be: /var/log/navneet/MYSWITCH-20140508.log
    2. *** DO NOT modify any other portion of the file unless you are certain you know what you are doing!
    3. Restart the syslog-ng service to implement changes:
      [root@myserver syslog-ng]# service syslog-ng restart
      Stopping syslog-ng:                                        [  OK  ]
      Starting syslog-ng:                                        [  OK  ]
    4. Delete Old Syslog-NG Files:
    5. Login as root and type:
      crontab -e
    6. Add the following to your crontab file:
# Delete Old Syslog Files
# 3 AM, Every Sunday
0 3 * * 0 /usr/bin/find /var/log/navneet -maxdepth 1 -mtime +90 -name "*.log" -exec rm {} \;
Change the "90" to your desired number of days (files older than that many days are removed).

ClamAV Installation and Scanning on CentOS

  ClamAV open source antivirus engine


ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats. It is the de facto standard for mail gateway scanning. It provides a high-performance multi-threaded scanning daemon, command line utilities for on-demand file scanning, and an intelligent tool for automatic signature updates.

A. Install ClamAV

1. Install the EPEL repository

CentOS 6 – 32-bit
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

 CentOS 6 – 64-bit

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

 CentOS 5 – 32-bit

rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

 CentOS 5 – 64-bit

rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
After running the above commands for your relevant CentOS version, the following file is created:
/etc/yum.repos.d/epel.repo
The above file can be edited directly to enable or disable the EPEL repo.

2. Install required ClamAV packages
yum install clamav clamd

3. Start the clamd service and set it to auto-start

chkconfig clamd on
/etc/init.d/clamd start

4. Update ClamAV’s signatures

/usr/bin/freshclam
Note: ClamAV will update automatically, as part of /etc/cron.daily/freshclam.
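As a quick sanity check (not part of the original post), you can print the engine and signature database versions after the update; the exact output format varies by ClamAV release:

clamscan --version
# prints something like: ClamAV <engine version>/<signature db version>/<db build date>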

B. Configure Daily Scan

In this example, we will configure a cronjob to scan the /home/ directory every day:

1. Create cron file:

vim /etc/cron.daily/manual_clamscan
Add the following to the file above. Be sure to change SCAN_DIR to the directory that you want to scan:
#!/bin/bash
# Directory to scan and where to log the results
SCAN_DIR="/home"
LOG_FILE="/var/log/clamav/manual_clamscan.log"
# Make sure the log directory exists, then run a recursive scan,
# reporting only infected files (-i)
mkdir -p "$(dirname "$LOG_FILE")"
/usr/bin/clamscan -i -r "$SCAN_DIR" >> "$LOG_FILE"
Give our cron script executable permissions:
chmod +x /etc/cron.daily/manual_clamscan
You can even run the above script manually to ensure that it works correctly.
And you’re done! That is the minimum required for a working daily scan.
If you prefer an explicit crontab entry for signature updates instead of relying on /etc/cron.daily/freshclam, you can add the following to /etc/crontab:
00 10 * * * root freshclam
This entry will update the ClamAV database at 10 AM every day.
Good Luck ....!!!

Saturday, March 8, 2014

Install Mysql 5.5 from source on Centos

To install MySQL from source you need to install some dependencies and tools to compile it from source.

MySQL released version 5.5 in December 2010. This version brings improvements such as better scalability on multi-core CPUs, InnoDB as the default storage engine for new tables, and integrated semisynchronous replication, and it is recommended for production environments. It is worth it for administrators to think about upgrading: many systems still run version 5.1, and the upgrade can sometimes be scary... Here we are going to see how to install version 5.5 on a new system. First we need to install some packages that MySQL requires, so install them (or make sure they are already installed): bison, bzr, cmake, gcc-c++ and ncurses-devel.

# yum groupinstall 'Development Tools'

yum install -y bison bzr cmake gcc-c++ ncurses-devel

Then add new mysql account and group: 

# groupadd mysql
# useradd -r -g mysql mysql

Now we need to download the latest MySQL 5.5 tar.gz archive; choose a mirror directly on the MySQL website. 

wget http://URL_OF_MIRROR/mysql-5.5.16.tar.gz

Extract the tar.gz archive: 

tar -xvzf /downloads/mysql-5.5.16.tar.gz

Now go into the extracted directory and execute cmake: 

cd /downloads/mysql-5.5.16/
cmake . -DCMAKE_INSTALL_PREFIX=/opt/mysql \
  -DMYSQL_DATADIR=/var/lib/mysql \
  -DSYSCONFDIR=/etc \
  -DINSTALL_PLUGINDIR=/opt/mysql/lib/mysql/plugin

Note that I use /opt/mysql for basedir and /var/lib/mysql for datadir; you can use other directories by specifying them with the '-DCMAKE_INSTALL_PREFIX' and '-DMYSQL_DATADIR' options. When cmake finishes, we can launch make. 

make

Next, if we don't encounter errors, we launch the install: 

make install

Now, we create symbolic links to have mysql commands in shell: 

ln -s /opt/mysql/bin/* /usr/bin/

Assign owner and group:
 
cd /opt/mysql/
chgrp -R mysql .
chown -R root .
mkdir -p /var/lib/mysql
chown -R mysql /var/lib/mysql

Default database installation: 

scripts/mysql_install_db --user=mysql --basedir=/opt/mysql --datadir=/var/lib/mysql/

Copy a mysql default configuration file: 

cp support-files/my-medium.cnf /etc/my.cnf

Copy mysql init.d script and make it executable: 

cp support-files/mysql.server /etc/init.d/mysqld
chmod 755 /etc/init.d/mysqld

Now edit this init.d script to customize both the basedir and datadir paths: 

vim /etc/init.d/mysqld
## replace
basedir=
datadir=

## by
basedir=/opt/mysql
datadir=/var/lib/mysql

Finally, we can launch the MySQL server and begin to use it: 

/etc/init.d/mysqld start

And we are done.
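Two optional follow-ups not covered above: set a root password and register the init script so MySQL starts at boot. A minimal sketch (the password is a placeholder):

# set a password for the MySQL root user
/opt/mysql/bin/mysqladmin -u root password 'YourNewRootPassword'
# register the init script and enable it at boot
chkconfig --add mysqld
chkconfig mysqld on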

Wednesday, February 12, 2014

Install HAproxy for high performance and HA

Install HAProxy as a load balancer for web servers


Hello Guys, 

Welcome back again on my rathinavneet.blogspot.in.

 I will tell you how you can set up HAProxy in 10 minutes for high availability, so that if one of your web servers goes down it won't affect you much: the other servers keep working just fine and your company's or client's business is uninterrupted.

          HAProxy works at the transport layer, so the same proxy can be used for multiple applications.
For example, the same HAProxy instance can be used in front of a MySQL server as well as Apache, nginx, Tomcat and many more. You also get real-time statistics of the traffic; look at the image below.


 In the image you can see that working web servers are shown in green, and stopped or non-responding web servers in red.

HAProxy works as a proxy server as well as a load balancer.

Let's start with installation and configuration. I am using Ubuntu 12.04 as my host machine, so:

apt-get install haproxy

This will do the trick and install HAProxy for me.
We also need to enable HAProxy in its init script, so edit the following file:

nano /etc/default/haproxy

ENABLED=1
 
Then start with the configuration. First of all, back up your existing haproxy.cfg file, then create a new one:

mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.back

global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        maxconn 4096
        user haproxy
        group haproxy
 
 
The log directive specifies a syslog server to which log messages will be sent. On Ubuntu, rsyslog is already installed and running but it doesn't listen on any IP address by default; a sketch of the rsyslog change is shown just below.
The maxconn directive specifies the maximum number of concurrent connections on the frontend. The default value is 2000 and should be tuned according to your VPS' configuration.
The user and group directives change the HAProxy process to the specified user/group. These shouldn't be changed.
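The original post never returns to the rsyslog change, so here is a minimal sketch of what it usually looks like: enable UDP reception on the loopback address and route HAProxy's local0 facility to its own file (the file name /etc/rsyslog.d/haproxy.conf is just an illustrative choice):

# /etc/rsyslog.d/haproxy.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# send everything logged to the local0 facility to a dedicated file
local0.*    /var/log/haproxy.log

Restart rsyslog afterwards with: service rsyslog restart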

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option redispatch
    maxconn 4096 
    timeout connect  5000
    timeout client  50000
    timeout server  50000
 
We're specifying default values in this section. The values to modify are the various timeout directives (given here in milliseconds). The timeout connect directive specifies the maximum time to wait for a connection attempt to a VPS to succeed.
The client and server timeouts apply when the client or server is expected to acknowledge or send data during the TCP exchange. HAProxy recommends setting the client and server timeouts to the same value.
The retries directive sets the number of retries to perform on a VPS after a connection failure.
The option redispatch directive enables session redistribution in case of connection failures, so session stickiness is overridden if a VPS goes down.

On Web-server 1 & 2
Create a file called check.txt on each web server; we will configure HAProxy to check for this particular file. If the file is found when the server is polled, the server is considered operational and HAProxy will show it as green in the panel.
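A minimal sketch of that step, assuming the default Apache document root of /var/www on each web server (adjust the path to your own docroot):

# on webserver1 (192.168.1.102) and webserver2 (192.168.1.103)
touch /var/www/check.txt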

listen ApplicationCluster 192.168.1.100:80
       mode http
       stats enable
       stats auth nrathi:nrathi123
       balance roundrobin
       cookie JSESSIONID prefix
       option httpclose
       option forwardfor
       option httpchk HEAD /check.txt HTTP/1.0
 
       server webserver1 192.168.1.102:80 cookie A check
       server webserver2 192.168.1.103:80 cookie B check
 
 
This contains configuration for both the frontend and the backend. We are configuring HAProxy to listen on port 80 for ApplicationCluster, which is just a name identifying the application. The stats directives enable the connection statistics page and protect it with HTTP Basic authentication using the credentials specified by the stats auth directive.
This page can be viewed at the stats URI (the default is /haproxy?stats), so in this case it is http://192.168.1.100/haproxy?stats


The balance directive specifies the load balancing algorithm to use. Options available are Round Robin (roundrobin), Static Round Robin (static-rr), Least Connections (leastconn), Source (source), URI (uri) and URL parameter (url_param).
Information about each algorithm can be obtained from the official documentation.
The server directive declares a backend server, the syntax is:
server <name> <address>[:port] [param*] 
  The name we mention here will appear in logs and alerts. There are many parameters supported by this directive; we'll be using the check and cookie parameters in this article. The check option enables health checks on the VPS; otherwise, the VPS is always considered available.
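Before starting the service you can ask HAProxy to validate the configuration file; this catches syntax errors without touching anything that is running:

haproxy -f /etc/haproxy/haproxy.cfg -c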
Once you're done configuring, start the HAProxy service:
 
sudo service haproxy start
 
So the full sample configuration file is as follows:
 
 
global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        #log loghost    local0 info
        maxconn 4096
        #debug
        #quiet
        user haproxy
        group haproxy

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
 
    option redispatch
    maxconn 4096 
    timeout connect  5000
    timeout client  50000
    timeout server  50000
 

listen webfarm 192.168.1.100:80
       mode http
       stats enable
       stats auth someuser:somepassword
       balance roundrobin
       cookie JSESSIONID prefix
       option httpclose
       option forwardfor
       option httpchk HEAD /check.txt HTTP/1.0
       server webA 192.168.1.102:80 cookie A check
       server webB 192.168.1.103:80 cookie B check 
 
 
# and that's it, we are ready to go. 
 




Tuesday, January 28, 2014

Create Your Own Cloud with OpenStack

               Creating Your Own Cloud System with CentOS 6.5

Hello Guys,
This is Navneet again. Today this blog will help you build your own cloud with CentOS and OpenStack.
It is not rocket science to build it; simply follow the steps below.

I was looking for an easy way to quickly deploy Openstack on my CentOS environment, I found that there are many tools available to accomplish this in Ubuntu but very few for CentOS. Then I found this project packstack, packstack is a utility that uses Puppet modules to deploy various parts of OpenStack on multiple pre-installed servers over SSH automatically using python. I found that Redhat has started contributing to packstack and they have some very good documentation on how to quickly get going. For example, if you are installing an all in one configuration, meaning installing all the openstack modules in one server, you only need to run three commands to get the environment up and running. As of the date of this blog, packstack is only supported on Fedora, Red Hat Enterprise Linux (RHEL) and compatible derivatives of both.

Before installing, make sure SELinux is disabled (a reboot is needed for the change in /etc/selinux/config to take effect):

[root@hq-openstack-control ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 
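If SELinux was previously enforcing and you do not want to reboot immediately, you can also switch it off for the running system (an extra step on my part, not from the original post):

setenforce 0      # put SELinux in permissive mode for the current boot
getenforce        # should now report Permissive (or Disabled)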

First add the Fedora RDO repo – you can also install from GitHub – and install packstack using yum:
[root@hq-openstack-control ~]# yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly-2.noarch.rpm
[root@hq-openstack-control ~]# yum install -y openstack-packstack
Configure NTP on the nodes. This is not a must, but it is good to have. Install ntp and sync up with your NTP server; I used a public NTP server in my configuration.
[root@hq-openstack-control ~]# yum install ntp
[root@hq-openstack-control ~]# chkconfig ntpd on
[root@hq-openstack-control ~]# ntpdate pool.ntp.org
[root@hq-openstack-control ~]# /etc/init.d/ntpd start
Now, since I wanted to make modifications to the default packstack install, I generated a file that contains the install configuration. This file is called the "answer file", and I put my configuration preferences in it, then told packstack to use that file to do the install. This answer file is also used if you need to make changes to the OpenStack cluster later, for example to add a node: you would use the same process, change the answer file to reflect that a new node has been added, and run packstack again, pointing it at the modified answer file.
I generated the answer file
[root@hq-openstack-control ~]# packstack --gen-answer-file=/root/grizzly_openstack.cfg
[root@hq-openstack-control ~]# vi grizzly_openstack.cfg
The answer file defaults to putting all the openstack modules in one node.
I made changes to ensure that my swift node was installed in my compute node running on a UCS B200 blade. I left the swift proxy node in the control node. Node 172.17.100.71 is my compute node and node 172.17.100.72 is my control node.
# A comma separated list of IP addresses on which to install the
# Swift Storage services, each entry should take the format
# <ipaddress>[/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device (packstack does not create the
# filesystem, you must do this first), if /dev is omitted Packstack
# will create a loopback device for a test setup
CONFIG_SWIFT_STORAGE_HOSTS=172.17.100.71

#The IP address on which to install the Swift proxy service
CONFIG_SWIFT_PROXY_HOSTS=172.17.100.72
I installed Cinder in my compute node.
# The IP address of the server on which to install Cinder
CONFIG_CINDER_HOST=172.17.100.71
I installed nova compute in my compute node. If at a later date I wanted to add a second compute node, I would come back and make the changes here.
# A comma separated list of IP addresses on which to install the Nova
# Compute services
CONFIG_NOVA_COMPUTE_HOSTS=172.17.100.71
I also set public interface for Nova Network on the control node to be eth0
# Public interface on the Nova network server
CONFIG_NOVA_NETWORK_PUBIF=eth0
And I set the private interface for Nova Network Flat DHCP on the control node, as well as the private interface of the Nova compute servers, to eth1:
# Private interface for Flat DHCP on the Nova network server
CONFIG_NOVA_NETWORK_PRIVIF=eth1


# Private interface for Flat DHCP on the Nova compute servers
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
At this point I was done and was ready to start the install. During the install, you will be prompted for the compute node's root password.
[root@hq-openstack-control ~]# sudo packstack --answer-file=/root/grizzly_openstack.cfg
Welcome to Installer setup utility
Packstack changed given value  to required value /root/.ssh/id_rsa.pub
Installing:
Clean Up...                                              [ DONE ]
Adding pre install manifest entries...                   [ DONE ]
Setting up ssh keys...root@172.17.100.72's password:
..
172.17.100.72_swift.pp :                                             [ DONE ]
172.17.100.72_nagios.pp :                                            [ DONE ]
172.17.100.72_nagios_nrpe.pp :                                       [ DONE ]
Applying 172.17.100.72_postscript.pp  [ DONE ] 
172.17.100.72_postscript.pp :                                        [ DONE ]
Some 5-15 minutes later, the install will complete.
[root@hq-openstack-control ~]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth hq-openstack-control                 internal         enabled    :-)   2013-05-08 16:32:10
nova-cert        hq-openstack-control                 internal         enabled    :-)   2013-05-08 16:32:10
nova-conductor   hq-openstack-control                 internal         enabled    :-)   2013-05-08 16:32:10
nova-scheduler   hq-openstack-control                 internal         enabled    :-)   2013-05-08 16:32:10
nova-network     hq-openstack-control                 internal         enabled    :-)   2013-05-07 20:56:47
nova-compute     hq-ucs-openstack-compute-node-01            enabled    :-)   2013-05-08 16:32:19
To log in, navigate to the Horizon dashboard in your browser at http://controller_node_ip. The username is admin. The password was auto-generated for you when you created the answer file; you can get it by grepping for the Keystone admin password:
[root@hq-openstack-control ~]# cat /root/grizzly_openstack.cfg  | grep -i CONFIG_KEYSTONE_ADMIN_PW
CONFIG_KEYSTONE_ADMIN_PW=2asdaf559d32asdfasdfa234bd1
And that's it.
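Packstack also writes an RC file with these admin credentials (typically /root/keystonerc_admin on the control node); sourcing it lets you verify the cloud from the command line before touching the dashboard. A minimal sketch:

source /root/keystonerc_admin   # loads OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, etc.
keystone user-list              # list Keystone users
nova list                       # list running instances (empty on a fresh install)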

Now go to another system, open a web browser, and type the IP address of the control node, e.g. http://172.17.100.72.

You will see the dashboard login page; enter your credentials and enjoy. In my case the user is admin and the password is
2asdaf559d32asdfasdfa234bd1 

Friday, January 24, 2014

Create blog using ghost on Ubuntu 12.04

Create blog using ghost on Ubuntu 12.04


You can download and install Ghost on your local machine or on your existing web server hosted in the cloud.

First you need to download the Ghost source code from the Ghost site (Node.js and npm must already be installed); in my case I used curl:
$ curl -L https://ghost.org/zip/ghost-latest.zip -o ghost.zip
then unzip the code using the following command:
$ unzip -uo ghost.zip -d ghost
After you have successfully extracted Ghost, open a terminal (if you haven't already), then:
  • Change into the directory you extracted Ghost to with the following command:
    $ cd /path/to/ghost
  • To install Ghost type:
    npm install --production
    note the two dashes
  • When npm is finished installing, type the following to start Ghost in development mode:
    $ npm start
  • Ghost will now be running on 127.0.0.1:2368
    You can adjust the IP-address and port in config.js

and we are done.

For a production deployment we can use forever.
You can use forever to run Ghost as a background task; forever will also take care of your Ghost installation and restart the node process if it crashes.
  • To install forever type npm install forever -g
  • To start Ghost using forever from the Ghost installation directory type NODE_ENV=production forever start index.js
  • To stop Ghost type forever stop index.js
  • To check if Ghost is currently running type forever list
And most importantly, to serve the blog on port 80 you need reverse proxy settings in Apache:

NameVirtualHost *:80

<VirtualHost *:80>
ServerName blog.nrathi.com
ProxyPass / http://127.0.0.1:2368/
ProxyPassReverse / http://127.0.0.1:2368/
#RequestHeader set "X-Forwarded-Proto" "https"

# Fix IE problem (http error 408/409)
#SetEnv proxy-nokeepalive 1

ProxyPreserveHost on
</VirtualHost>
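For this vhost to work, the Apache proxy modules have to be enabled and Apache restarted; a minimal sketch on Ubuntu (assuming the stock Apache packages):

sudo a2enmod proxy proxy_http    # enable mod_proxy and the HTTP proxy backend
sudo service apache2 restart     # reload Apache with the new modules and vhost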
And we are done; we can now access the blog at the hostname (or IP address) on which we have hosted it.

Wednesday, January 22, 2014

Automate Backup Process and Store on AWS S3

  Automate Backup Process and Store on AWS S3

Hello Guys,

This is in fact a common problem for most system administrators: they want to automate the backup process and store the backups in a reliable and highly available location.
It should be a location from which data can be retrieved at very high speed if required, for example after a server crash when the production copy is lost, and it should be able to hold data in the range of gigabytes.

In my case I have to back up around 100 GB of data from MySQL daily and store it,
so I used a trick and went for storage that is cheap but highly reliable:
I came to know about Amazon S3 storage; it is very cost effective and highly available throughout the world.

The second problem I faced was interfacing the two systems so that the automated backup runs, gets stored in a particular folder, and after a certain time the older backups are deleted, like a backup rotation.

So I had to write a script which creates the backup and stores it on S3, and at the same time deletes the backup created 7 days earlier, since it is no longer of much use to me.

So I used s3cmd as the interface between my backup script and Amazon S3.
Installation is simple, no rocket science involved. As I am using Ubuntu 12.04:

sudo apt-get install s3cmd

I created an Amazon account and generated access credentials (access key and secret key).
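The post does not show the s3cmd setup itself; a minimal sketch (the bucket name is just an example, pick your own):

s3cmd --configure                 # interactive prompt for the access key and secret key
s3cmd mb s3://my-backup-bucket    # create the bucket referenced by the scripts below
s3cmd ls                          # quick check that the credentials work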

After that I wrote a script to take a backup of all PostgreSQL databases.

#!/bin/sh
#### BEGIN CONFIGURATION ####
# set dates for backup rotation
NOWDATE=`date +%Y-%m-%d`
LASTDATE=$(date +%Y-%m-%d --date='1 week ago')
# set backup directory variables
SRCDIR='/tmp/s3backups'
DESTDIR='path/to/s3folder'
BUCKET='s3bucket'
# database access details
HOST='127.0.0.1'
PORT='5432'
USER='user'
#### END CONFIGURATION ####
# make the temp directory if it doesn't exist
mkdir -p $SRCDIR
# get the list of databases, filtering out headers, templates and system entries
DBLIST=`psql -l -h$HOST -p$PORT -U$USER \
| awk '{print $1}' | grep -v "+" | grep -v "Name" | \
grep -v "List" | grep -v "(" | grep -v "template" | \
grep -v "postgres" | grep -v "root" | grep -v "|"`
# dump each database to its own sql file
for DB in ${DBLIST}
do
pg_dump -h$HOST -p$PORT -U$USER $DB -f $SRCDIR/$DB.sql
done
# tar all the databases into $NOWDATE-backups.tar.gz
cd $SRCDIR
tar -czPf $NOWDATE-backup.tar.gz *.sql
# upload backup to s3
/usr/bin/s3cmd put $SRCDIR/$NOWDATE-backup.tar.gz s3://$BUCKET/$DESTDIR/
# delete old backups from s3
/usr/bin/s3cmd del --recursive s3://$BUCKET/$DESTDIR/$LASTDATE-backup.tar.gz
# remove all files in our source directory
cd
rm -f $SRCDIR/*

The same way, I have a script for MySQL too.

#!/bin/sh
#### BEGIN CONFIGURATION ####
# set dates for backup rotation
NOWDATE=`date +%Y-%m-%d`
LASTDATE=$(date +%Y-%m-%d --date='1 week ago')
# set backup directory variables
SRCDIR='/tmp/s3backups'
DESTDIR='path/to/s3folder'
BUCKET='s3bucket'
# database access details
HOST='127.0.0.1'
PORT='3306'
USER='user'
PASS='pass'
#### END CONFIGURATION ####
# make the temp directory if it doesn't exist
mkdir -p $SRCDIR
# repair, optimize, and dump each database to its own sql file
for DB in $(mysql -h$HOST -P$PORT -u$USER -p$PASS -BNe 'show databases' | grep -Ev 'mysql|information_schema|performance_schema')
do
mysqldump -h$HOST -P$PORT -u$USER -p$PASS --quote-names --create-options --force $DB > $SRCDIR/$DB.sql
mysqlcheck -h$HOST -P$PORT -u$USER -p$PASS --auto-repair --optimize $DB
done
# tar all the databases into $NOWDATE-backups.tar.gz
cd $SRCDIR
tar -czPf $NOWDATE-backup.tar.gz *.sql
# upload backup to s3
/usr/bin/s3cmd put $SRCDIR/$NOWDATE-backup.tar.gz s3://$BUCKET/$DESTDIR/
# delete old backups from s3
/usr/bin/s3cmd del --recursive s3://$BUCKET/$DESTDIR/$LASTDATE-backup.tar.gz
# remove all files in our source directory
cd
rm -f $SRCDIR/*

# The same script can be used with variations; I hope you get the idea. Enjoy!
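To actually automate these backups, each script still needs a crontab entry; a minimal sketch, assuming you saved them as /root/scripts/pg_backup_s3.sh and /root/scripts/mysql_backup_s3.sh (paths are illustrative):

# run the PostgreSQL backup every night at 2:00 AM and the MySQL backup at 2:30 AM
0 2 * * * /root/scripts/pg_backup_s3.sh
30 2 * * * /root/scripts/mysql_backup_s3.sh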


 

 

Friday, January 17, 2014

Installing GitLab on Ubuntu 12.04

Creating your own private small GitHub.

To create a small local github.com for highly confidential projects, or for people who can't afford
to host their projects on github.com.

Prerequisite:-
You should have a good machine; it can be virtually hosted, but it should have a large hard disk (50 GB or 100 GB) and a reasonable amount of RAM to serve the content (512 to 1024 MB will be great), with a Linux OS (Ubuntu 12.04 server) on top of it. Most importantly, for the installation process to work, the Linux box needs an Internet connection.
 
Installation process :-

 To install GitLab you should have either of the following command-line tools available on your Linux box:

  1.  sudo apt-get install curl  # curl
  2. sudo apt-get install wget #wget
I will use curl in my case. 

curl https://raw.github.com/gitlabhq/gitlab-recipes/master/install/ubuntu/ubuntu_server_1204.sh | sudo domain_var=gitlab.nrathi.com sh

Once you have fired this, the rest of the work is handled by the script.  

On the machine from which you want to access GitLab: if you want to use it across the network, add a DNS entry on your router/DNS server, or for a single user you can use a hosts-file entry.

If you are using Linux as the client machine, add the following to /etc/hosts:

<ip of linux box> gitlab.nrathi.com


Open a browser, type gitlab.nrathi.com, and you should see the GitLab login page.