Friday, June 20, 2014

How to Set Up The Ampache Streaming Music Server In Ubuntu 12.04 LTS

Arguably, transcoding is simultaneously the most powerful and the most frustrating feature of an ampache install, but please bear with me because you'll be thankful you got it to work. I'm going to explain a little about transcoding, then I'm going to give MY version of the configuration file lines as they are in my file, and the commands you need to run to install the additional transcoding software that's required.
The way transcoding works is that ampache takes the original file, which could be in any format, and feeds it to a transcoding program such as avconv (which needs to be installed separately), along with a selection of program arguments that depend on how the transcoding needs to be done and what format the final file needs to be in. The transcoding program then attempts to convert the file from one format to the other using the appropriate codec, which also needs to be installed separately.
Since the ampache team has no control over the programs and codecs that carry this out, there is only minimal help available for the correct syntax of the transcoding commands. I think the idea is that you should read the manual that comes with each program... Anyway, if any of your music is stored in anything other than mp3, you will need to get transcoding to work. Firstly, the transcode command tells ampache the main command to use to start the transcode process. Next, the encode args are what ampache adds to the main transcode command in order to modify the output to suit the requested output format. With the new HTML5 player, different browsers will request output in different formats: for example, Firefox will request ogg and Chrome will request mp3, hence your encode args lines must be correct for both requested formats if you expect the player to work on different systems. At the time of writing the latest version of Internet Explorer was not capable of using HTML5 players; the bug is at their end, but hopefully they will fix it soon. Until then you will need to use Firefox or Chrome to access the HTML5 player.
Here are the config lines for my transcode settings. You will need to find these lines in the config file, as before, and edit them to look as follows:
max_bit_rate = 576
min_bit_rate = 48
transcode_flac = required
transcode_mp3 = allowed
encode_target = mp3
transcode_cmd = "avconv -i %FILE%"
encode_args_ogg = "-vn -b:a max\(%SAMPLE%K\,49K\) -acodec libvorbis -vcodec libtheora -f ogg pipe:1"
encode_args_mp3 = "-vn -b:a %SAMPLE%K -acodec libmp3lame -f mp3 pipe:1"
encode_args_ogv = "-vcodec libtheora -acodec libvorbis -ar 44100 -f ogv pipe:1"
encode_args_mp4 = "-profile:0 baseline -frag_duration 2 -ar 44100 -f mp4 pipe:1"
I may come back and edit these later to improve video transcoding but for now they'll do for music at least.
You may note the strange encode_args_ogg line. This is because, at the time of writing, the libvorbis library can't handle output bit rates less than about 42K. For this reason I've added a bit to the line to force a minimum bit rate.
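To make the config lines less mysterious, here is a sketch of how I understand ampache builds the final command: it appends the matching encode_args line to transcode_cmd and substitutes the placeholders. The file path and bit rate below are made-up examples, not real values from any install.

```shell
# Sketch of the substitution ampache performs before running the
# transcoder. File path and bit rate are hypothetical examples.
transcode_cmd='avconv -i %FILE%'
encode_args_mp3='-vn -b:a %SAMPLE%K -acodec libmp3lame -f mp3 pipe:1'

src=/music/song.flac         # hypothetical source file
rate=128                     # requested bit rate in kbit/s

full_cmd="$transcode_cmd $encode_args_mp3"
full_cmd=${full_cmd//%FILE%/$src}      # %FILE% -> source path
full_cmd=${full_cmd//%SAMPLE%/$rate}   # %SAMPLE% -> bit rate
echo "$full_cmd"
# avconv -i /music/song.flac -vn -b:a 128K -acodec libmp3lame -f mp3 pipe:1
```

So if an encode_args line is wrong, the assembled command above is effectively what fails, which is why testing the full avconv line by hand is the quickest way to debug it.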
OK save and exit the config file and we will install the missing programs and codec libraries.
If at any time you think you've ruined a particular config parameter, you can open and check the distribution config file.
Do this with the command
sudo nano /usr/local/src/www/ampache-3.6-alpha6/config/ampache.cfg.php.dist

Transcoding software

Add the repository of multimedia libraries and programs:
sudo nano  /etc/apt/sources.list
Add the following two lines to your sources file:
deb http://download.videolan.org/pub/debian/stable/ /
deb-src http://download.videolan.org/pub/debian/stable/ /
Save and quit (Ctrl-X, then hit y).
Add the repository:
wget -O - http://download.videolan.org/pub/debian/videolan-apt.asc | sudo apt-key add -
Install the compiler and package programs:
sudo apt-get -y install build-essential checkinstall cvs subversion git git-core mercurial pkg-config apt-file
Install the audio libraries required to build the avconv audio transcoding:
sudo apt-get -y install lame libvorbis-dev vorbis-tools flac libmp3lame-dev libavcodec-extra*
Install the video libraries required for avconv video transcoding:
sudo apt-get install libfaac-dev libtheora-dev libvpx-dev
At this stage support for video cataloging and streaming in ampache is limited, so I have focused mostly on audio transcoding options, with the view to incorporating video transcoding when support within ampache is better. I've included some video components as I get them to work.
Now we're going to try to install the actual transcoding binary, avconv, which is preferred over ffmpeg in Ubuntu.
This part will seem pretty deep and complicated but the processes are typical for installing programs from source files in ubuntu.
First you create a directory to store the source files, then you download the source files from the internet, then you configure the compiler with some settings, then you make the program and its libraries into a package, then you install the package.
Note that the "make" process is very time consuming (and scary looking), particularly for avconv. Just grab a cup of coffee and watch the fun unfold until the command line appears, waiting for input from you again.
Note also that some online guides suggest "make install" rather than "checkinstall". In Ubuntu I prefer checkinstall, as this generates a simple uninstall process and optionally lets you specify the package dependencies if you so choose, enabling you to share the packages with others.

(required) Get the architecture optimised compiler yasm

sudo mkdir /usr/local/src/yasm
cd ~/downloads
wget http://www.tortall.net/projects/yasm/releases/yasm-1.2.0.tar.gz  
You could also go to http://www.tortall.net/projects/yasm/releases/ and check whether there is a version newer than 1.2.0; if so, replace the link above with its link.
Unzip, configure, build, and install yasm:
tar xzvf yasm-1.2.0.tar.gz
sudo mv yasm-1.2.0 /usr/local/src/yasm/
cd /usr/local/src/yasm/yasm-1.2.0
sudo ./configure
sudo make
sudo checkinstall
Hit enter to select y to create the default docs.
Enter yasm when requested as the name for the installed program package.
Hit enter again and then again to start the process.

(optional) Get the x264 codec - required to enable on-the-fly video transcoding

sudo mkdir /usr/local/src/x264
cd /usr/local/src/
sudo git clone git://git.videolan.org/x264.git x264
cd /usr/local/src/x264
sudo ./configure --enable-shared --libdir=/usr/local/lib/
sudo make
sudo checkinstall
Hit enter to select y and create the default docs.
Enter x264 when requested as the name for the install.
Hit enter to get to the values edit part.
Select 3 to change the version.
Call it version 1.
Then hit enter again and again to start the process.

(Required) Get the avconv transcoding program

cd /usr/local/src/
sudo git clone git://git.libav.org/libav.git avconv
cd /usr/local/src/avconv
sudo LD_LIBRARY_PATH=/usr/local/lib/ ./configure --enable-gpl --enable-libx264 --enable-nonfree --enable-shared --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libfaac --enable-libvpx > ~/avconv_configuration.txt
Note the bunch of settings we have set in build configuration - the most important for audio transcoding are --enable-libmp3lame --enable-libvorbis
libmp3lame allows us to transcode to mp3 and libvorbis allows us to transcode to ogg, which is required for the firefox HTML5 player.
Now we'll build avconv, which takes ages. I've added the switch -j14 to run multiple jobs during make. You may or may not have better luck with different values after the j, depending on your own CPU architecture; 14 was best for me on my dual-core hyperthreaded 8GB RAM machine.
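If you're not sure what -j value suits your machine, one common rule of thumb is one job per available core; `nproc` reports the core count, so you can derive the flag from it. This snippet just prints the resulting command rather than running make:

```shell
# Derive a make -j value from the number of available CPU cores
jobs=$(nproc)
echo "sudo make -j$jobs"
```

Values somewhat above the core count (as with -j14 below) can still help when jobs stall on disk I/O, which is likely why a higher number worked well for me.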
sudo make -j14
sudo checkinstall
Hit enter to select y to create the default docs.
Enter avconv when requested as the name for the installed program package.
Hit enter again and then again to start the process.
sudo ldconfig
sudo /etc/init.d/apache2 restart

Set up API/RPC access for external client program access

Next, in order for your iPhone, Android phone, or pimpache device to access the music, we need to set up API/RPC and streaming permissions. NB: pimpache is not a typo but a "Raspberry Pi ampache client" project. It's currently under construction and located on GitHub under "PiPro".
Go to a browser and login to ampache.
First we'll create an external user. In the web interface click the gray "admin" filing cabinet on the left.
Click the "add user" link.
Create a username such as <external user>.
Give it a GOOD password <external user password> but also remember that you may well be entering the password for it on your phone.
Set user access to "user".
Click "add user".
Click continue.
You can create users for different situations as well as different people. For example I have a "work" user with a lower default transcoding bit rate and a home user with a very high transcoding default bit rate.
Set the transcoding bitrate for a given user by going to the admin section (gray filing cabinet) then clicking "browse users" then the "preferences" icon.
The default transcoding bitrate is then set in the "transcode bitrate" section.
Logged-in users may also set their own default transcode bit rate by clicking the "preferences" icon in the menu tab, then "streaming".
Now we'll add the API/RPC so the external user can get at the catalog.
In the web interface click the gray "admin" filing cabinet on the left.
Click the "show acls" link.
Now click the "add API/RPC host" link.
Give it an appropriate name such as "external" (no quotes).
  • level "read/write"
  • user external
  • acl type API/RPC
  • start 1.1.1.1 end 255.255.255.255
Click create ACL.
Click continue.
Now click the "add API/RPC host" link AGAIN.
Give it an appropriate name such as "external" (no quotes).
  • level "read/write"
  • user external
  • acl type API/RPC + Stream access
  • start 1.1.1.1 end 255.255.255.255
Click create ACL.
You may get the message "DUPLICATE ACL DEFINED".
That's ok.
Now click show ACLS and you should see two listed called external with the settings above - one for API/RPC and one for streaming.
You may see 4; that's OK as long as external (or all) can get streaming and API access.
Now you can install and use many and varied clients to attach to your ampache server and start enjoying its musical goodness.
NB: The phones, Viridian clients, etc. will ask for a username, password, and ampache URL. Above, you set up an external username and password to use in just such a situation, and the URL is the URL to your server from the outside world + /ampache.
eg: http://mypersonalampachesite.no-ip.biz/ampache
If you want to use an internal network address you may need to specify the ip address rather than the server host name depending on your router DNS system.

Catalog  setup

Finally after all that your music will hopefully have finished copying over and you can create your ampache music catalog.
Click on the gray "admin" filing cabinet in your ampache website.
Click "add a catalog".
Enter a simple name for your catalog eg "home".
Enter the path to the music folder you've been copying your files to, i.e. if you created a shared folder and copied your music to it as described above, the path is: "/home/<ubuntu username>/music"
(watch out for capital letters here, swap out the <ubuntu username> for the username you used, and don't type the quotes " ).
Set the catalog type to "local", but note that you could potentially chain other catalogs to yours - how cool is that?
If you keep your music organised like I do then leave the filename and folder patterns as is.
Click to "gather album art" and "build playlists" then click "add catalog".

Final security tidyup

In the PuTTY terminal, we'll restrict permissions on the extracted ampache folder to protect it from malicious software/people.
sudo chown www-data:www-data /usr/local/src/www/ampache-3.6-alpha6
sudo chmod -R 0700 /usr/local/src/www/ampache-3.6-alpha6/
That should do it.
If you need to move around in that directory again for some reason you will need to make the permissions more relaxed.
You can do this with
sudo chmod -R 0777 /usr/local/src/www/ampache-3.6-alpha6/
Don't forget to do
sudo chmod -R 0700 /usr/local/src/www/ampache-3.6-alpha6/
after to tighten it up again.

Now, go and amaze your friends and family by streaming YOUR music to THEIR PC or phone.

Troubleshooting

The best help I can offer is to look inside the log file. If you followed the how-to above, you can find the log files at /var/log/ampache/.
cd /var/log/ampache
dir
Note the latest file name then access the log file with (for example):
 sudo nano /var/log/ampache/yyyymmdd.ampache.log
Please, if you have any advice re: transcoding commands feel free to leave helpful comments. I think help on this is really hard to come by.
For starters the best info I can find for avconv in general is at http://libav.org/avconv.html.
If you get permission errors when trying to copy to the music folder try again to relax the permissions on this folder with
sudo chmod 777 ~/music

Messed it up and want to start again from scratch?

Instead of reinstalling Ubuntu, LAMP, and Samba, you can delete ampache and its database with:
sudo mysql -u root -p
Enter your mysql password:
drop database ampache;
quit
sudo rm -R /usr/local/src/www/ampache-3.6-alpha6
cd ~/downloads
sudo tar zxvf ampache.tar.gz -C /usr/local/src/www
sudo chmod -R 0777 /usr/local/src/www/ampache-3.6-alpha6
That should get you reset with all the default settings, ready to try again from the initial web logon.

How To Configure OCSP Stapling on Apache and Nginx

Introduction

OCSP stapling is a TLS/SSL extension which aims to improve the performance of SSL negotiation while maintaining visitor privacy. Before going ahead with the configuration, here is a short overview of how certificate revocation works. This article uses free certificates issued by StartSSL to demonstrate.
This tutorial will use the base configuration for Apache and Nginx outlined below:

About OCSP

OCSP (Online Certificate Status Protocol) is a protocol for checking whether an SSL certificate has been revoked. It was created as an alternative to CRLs, to reduce SSL negotiation time. With a CRL (Certificate Revocation List), the browser downloads a list of revoked certificate serial numbers and verifies the current certificate against it, which increases SSL negotiation time. With OCSP, the browser sends a request to an OCSP URL and receives a response containing the validity status of the certificate. The following screenshot shows the OCSP URI of digitalocean.com.
OCSP URI
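You can also read the OCSP URI out of a certificate with openssl rather than a browser. As a self-contained demo (assuming OpenSSL 1.1.1+ for the -addext flag), this generates a throwaway self-signed certificate that embeds a hypothetical responder URI and reads it back; on a real site you would run the final command against the server's actual certificate file.

```shell
# Generate a throwaway self-signed cert carrying a hypothetical
# OCSP URI (ocsp.example.com is not a real responder) ...
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo.example.com" \
    -addext "authorityInfoAccess = OCSP;URI:http://ocsp.example.com" \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# ... then read the OCSP URI back out of the certificate
openssl x509 -in /tmp/demo.crt -noout -ocsp_uri
# http://ocsp.example.com
```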

About OCSP stapling

OCSP has two major issues: privacy and heavy load on the CA's servers.
Since OCSP requires the browser to contact the CA to confirm certificate validity, it compromises privacy: the CA knows what website is being accessed and who accessed it.
If a HTTPS website gets lots of visitors the CA's OCSP server has to handle all the OCSP requests made by the visitors.
When OCSP stapling is implemented, the certificate holder (read: web server) queries the OCSP server itself and caches the response. This response is "stapled" to the TLS/SSL handshake via the Certificate Status Request extension. As a result, the CA's servers are not burdened with requests, and browsers no longer need to disclose users' browsing habits to any third party.

Check for OCSP stapling support

OCSP stapling is supported on
  • Apache HTTP Server (>=2.3.3)
  • Nginx (>=1.3.7)
Please check the version of your installation with the following commands before proceeding.
Apache:
apache2 -v
Nginx:
nginx -v
CentOS/Fedora users replace apache2 with httpd.

Retrieve the CA bundle

Retrieve the root CA and intermediate CA certificates in PEM format and save them in a single file. The following is for StartSSL's Root and Intermediate CA certificates.
cd /etc/ssl
wget -O - https://www.startssl.com/certs/ca.pem https://www.startssl.com/certs/sub.class1.server.ca.pem | tee -a ca-certs.pem > /dev/null
If your CA provides certificates in DER format, convert them to PEM. For example, DigiCert provides certificates in DER format. To download and convert them to PEM, use the following commands:
cd /etc/ssl
wget -O - https://www.digicert.com/CACerts/DigiCertHighAssuranceEVRootCA.crt | openssl x509 -inform DER -outform PEM | tee -a ca-certs.pem > /dev/null
wget -O - https://www.digicert.com/CACerts/DigiCertHighAssuranceEVCA-1.crt | openssl x509 -inform DER -outform PEM | tee -a ca-certs.pem > /dev/null
Both sets of commands use tee to write to the file, so you can use sudo tee if logged in as a non-root user.
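A truncated download here will break stapling later, so a quick sanity check is worthwhile: each certificate in the bundle contributes one BEGIN line, and for the two StartSSL certificates above the count should be 2. The demo below runs the check against a dummy two-entry bundle in /tmp; point the same grep at /etc/ssl/ca-certs.pem on your server.

```shell
# Build a dummy two-entry bundle (stand-in for /etc/ssl/ca-certs.pem)
printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n'  > /tmp/ca-certs.pem
printf -- '-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' >> /tmp/ca-certs.pem

# Count the certificates in the bundle
grep -c 'BEGIN CERTIFICATE' /tmp/ca-certs.pem
# 2
```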

Configuring OCSP Stapling on Apache

Edit the SSL virtual hosts file and place these lines inside the <VirtualHost></VirtualHost> directive.
sudo nano /etc/apache2/sites-enabled/example.com-ssl.conf
SSLCACertificateFile /etc/ssl/ca-certs.pem
SSLUseStapling on
A cache location has to be specified outside <VirtualHost></VirtualHost>.
sudo nano /etc/apache2/sites-enabled/example.com-ssl.conf
SSLStaplingCache shmcb:/tmp/stapling_cache(128000)
If you followed this article to set up SSL sites on Apache, the virtual host file will look like this:
/etc/apache2/sites-enabled/example.com-ssl.conf
<IfModule mod_ssl.c>
    SSLStaplingCache shmcb:/tmp/stapling_cache(128000)
    <VirtualHost *:443>

            ServerAdmin webmaster@localhost
            ServerName example.com
            DocumentRoot /var/www

            SSLEngine on

            SSLCertificateFile /etc/apache2/ssl/example.com/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/example.com/apache.key

            SSLCACertificateFile /etc/ssl/ca-certs.pem
            SSLUseStapling on
    </VirtualHost>
</IfModule>
Do a configtest to check for errors.
apachectl -t
Reload if Syntax OK is displayed.
service apache2 reload
Access the website on IE (on Vista and above) or Firefox 26+ and check the error log.
tail /var/log/apache2/error.log
If the file defined in the SSLCACertificateFile directive is missing a certificate, an error similar to the following is displayed:
[Fri May 09 23:36:44.055900 2014] [ssl:error] [pid 1491:tid 139921007208320] AH02217: ssl_stapling_init_cert: Can't retrieve issuer certificate!
[Fri May 09 23:36:44.056018 2014] [ssl:error] [pid 1491:tid 139921007208320] AH02235: Unable to configure server certificate for stapling
If no such errors are displayed proceed to the final step.

Configuring OCSP stapling on Nginx

Edit the SSL virtual hosts file and place the following directives inside the server {} section.
sudo nano /etc/nginx/sites-enabled/example.com.ssl
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/private/ca-certs.pem;
If you followed this article to set up SSL hosts on Nginx, the complete virtual host file will look like this:
/etc/nginx/sites-enabled/example.com.ssl
server {

        listen   443;
        server_name example.org;

        root /usr/share/nginx/www;
        index index.html index.htm;

        ssl on;
        ssl_certificate /etc/nginx/ssl/example.org/server.crt;
        ssl_certificate_key /etc/nginx/ssl/example.org/server.key;

        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/ssl/private/ca-certs.pem;
}
Do a configtest to see if everything is correct.
service nginx configtest
Then reload the nginx service.
service nginx reload
Access the website on IE (on Vista and above) or Firefox 26+ and check the error log.
tail /var/log/nginx/error.log
If the file defined in ssl_trusted_certificate is missing a certificate, an error similar to the following is displayed:
2014/05/09 17:38:16 [error] 1580#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: ocsp.startssl.com
If no such errors are displayed proceed to the next step.

Testing OCSP Stapling

Two methods of testing whether OCSP stapling is working will be explained: the openssl command-line tool and the SSL test at Qualys.

The OpenSSL command

This command's output includes a section that shows whether your web server responded with OCSP data. We grep that particular section and display it.
echo QUIT | openssl s_client -connect www.digitalocean.com:443 -status 2> /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update'
Replace www.digitalocean.com with your domain name. If OCSP stapling is working properly the following output is displayed.
OCSP response:
======================================
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: 4C58CB25F0414F52F428C881439BA6A8A0E692E5
    Produced At: May  9 08:45:00 2014 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: B8A299F09D061DD5C1588F76CC89FF57092B94DD
      Issuer Key Hash: 4C58CB25F0414F52F428C881439BA6A8A0E692E5
      Serial Number: 0161FF00CCBFF6C07D2D3BB4D8340A23
    Cert Status: good
    This Update: May  9 08:45:00 2014 GMT
    Next Update: May 16 09:00:00 2014 GMT
No output is displayed if OCSP stapling is not working.

Qualys online SSL test

To check this online go to this website and enter your domain name. Once testing completes check under the Protocol Details section.
Qualys SSL report


How To Use Icinga To Monitor Your Servers and Services On Ubuntu 14.04

Prerequisites

To complete this tutorial, you will require root access to an Ubuntu 14.04 VPS. Instructions to set that up can be found here (steps 3 and 4): Initial Server Setup with Ubuntu 14.04.
Also, if you want to set up the mail notification feature, you will need to properly configure Postfix. Instructions to do that can be found here: How To Install and Setup Postfix on Ubuntu 14.04. Postfix is installed along with the Icinga packages, but it can be configured after Icinga is set up.

Install Icinga

We will install Icinga using packages. Also, we will use MySQL as our DBMS (PostgreSQL, SQLite, and Oracle are the other supported options).
Run the following command to add the Icinga PPA to your package manager:
sudo add-apt-repository ppa:formorer/icinga
Then update your apt package database:
sudo apt update
Now install Icinga and MySQL with apt:
sudo apt install icinga icinga-doc icinga-idoutils mysql-server libdbd-mysql mysql-client
Now you will be presented with a series of prompts regarding your Icinga installation. Here is a list of the prompts, and how you should answer them:
  • MySQL Configuration: Enter a new MySQL root user password
  • PostFix Configuration: Select "Internet Site"
  • PostFix Configuration: Enter your Fully Qualified Domain Name (example.com, for example)
  • Configuring icinga-cgi: Enter "icingaadmin" user's password (login to access Icinga).
  • Configuring icinga-common: Enter "No" to enabling external commands
  • Configuring icinga-idoutils: Enter "Yes" to configuring database for icinga-idoutils with dbconfig-common
  • Configuring icinga-idoutils: Select "mysql" as the database type
  • Configuring icinga-idoutils: Enter MySQL root password (that you just assigned above)
  • Configuring icinga-idoutils: Enter a new icinga-idoutils database user password
Icinga is now installed, but we still need to configure a few things before we can start it. Note that Apache HTTP server and Postfix were installed as part of that process.
Add Apache user (www-data) to nagios group:
sudo usermod -a -G nagios www-data
Enable the ido2db daemon, which stores Icinga events and configurations in the database, to start on boot. Edit the Icinga default configuration:
sudo vi /etc/default/icinga
Change the value of IDO2DB to yes, so it looks like the following:
IDO2DB=yes
Save and quit. Now start the ido2db service:
sudo service ido2db start
Enable idomod module by copying the sample idoutils.cfg file to Icinga's active configuration:
sudo cp /usr/share/doc/icinga-idoutils/examples/idoutils.cfg-sample /etc/icinga/modules/idoutils.cfg
Now Icinga is configured and ready to be started:
sudo service icinga restart
Let's try out the Icinga user interface.

Accessing the Icinga User Interface

Go to http://yourhost/icinga, and log in using the icingaadmin login that you set up during the Icinga installation.
You should see that Icinga is monitoring one host, localhost (your Icinga server), and seven services, like this:
Icinga Initial Overview
The top row shows that the single monitored host is "Up", and the bottom row shows that there are seven "OK" monitored services.
If the status of localhost is "Down", you might need to change the permissions of your ping command. Run the following command to allow the nagios user to use the ping command:
sudo chmod u+s `which ping`
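You can verify the setuid bit took effect by looking for an `s` in the owner-execute position of the mode string (e.g. -rwsr-xr-x). The demo below applies the same u+s change to a scratch file just to show the effect; on your server, check ping itself with `ls -l $(which ping)`.

```shell
# Demonstrate what `chmod u+s` does to a file's mode string
touch /tmp/setuid-demo
chmod 0755 /tmp/setuid-demo
chmod u+s /tmp/setuid-demo
stat -c '%A' /tmp/setuid-demo
# -rwsr-xr-x
```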
Let's add more hosts and services to be monitored!

Ways To Monitor With Icinga

There are two main ways to monitor hosts and services with Icinga:
  1. Monitoring "publicly available services"
  2. Monitoring via an agent that is installed on a remote host to gather and send data to Icinga
Icinga Monitoring Methods
With the first method, publicly available services refers to services that are accessible across the local network or the Internet. Common examples include HTTP, mail, SSH, and ICMP ping. This method is useful for monitoring systems that you can't (or don't want to) install an agent on, and also for monitoring user facing network interfaces.
To implement the second method, we will install NRPE as an agent on remote hosts to monitor their local resources. This will allow Icinga to monitor things like disk usage, running processes, and other system stats that the first method can't achieve.

Method 1: Monitoring Publicly Available Services

Because the first method simply monitors listening services, the configuration for this method is done entirely on the Icinga server. Several things can be monitored with this method, so we will demonstrate how to monitor the public interface of a web server.
Create a file with the name of your host, with this command (substitute yourhost with your own hostname):
sudo vi /etc/icinga/objects/yourhost.cfg
Now add the following, replacing the values of host_name with your own hostname (in both places), alias with a description of the host, and address with the value of your host's public IP address:
define host {
        use                     generic-host
        host_name               web-1
        alias                   A Web Server
        address                 107.170.xxx.xxx
}

define service {
        use                     generic-service
        host_name               web-1
        service_description     HTTP
        check_command           check_http
}
Now save and quit. Reload your Icinga configuration to put any changes into effect:
sudo service icinga reload

Method 2: Monitoring Via an Agent

As mentioned earlier, we will be using NRPE as our agent to gather remote host data for Icinga. This means that NRPE must be installed on all hosts that will be monitored with this method, and the Icinga server also needs to be configured to receive data for each host.
Let's go over installing NRPE.

Installing NRPE On a Remote Host

On a host that you want to monitor, update apt:
sudo apt update
Now install NRPE and Nagios Plugins:
sudo apt install nagios-plugins nagios-nrpe-server
Look up the name of your root filesystem (because it is one of the items we want to monitor):
df -h /
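If you just want the device name on its own (to paste into the config in a moment), awk can pull the first field of the data row; the -P flag keeps df's output on one line per filesystem:

```shell
# Print only the filesystem/device that backs the root mount
df -P / | awk 'NR==2 {print $1}'
```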
We will be using the filesystem name in the NRPE configuration to monitor your disk usage (it is probably /dev/vda). Now open nrpe.cfg for editing:
sudo vi /etc/nagios/nrpe.cfg
The NRPE configuration file is very long and full of comments. There are a few lines that you will need to find and modify:
  • server_address: Set to the private IP address of this host
  • allowed_hosts: Set to the private IP address of your Icinga server
  • command[check_hda1]: Change /dev/hda1 to whatever your root filesystem is called
The three aforementioned lines should look like this (substitute the appropriate values):
server_address=client_private_IP
allowed_hosts=nagios_server_private_IP
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/vda
Note that there are several other "commands" defined in this file that will run if the Icinga server is configured to use them. Also note that NRPE will be listening on port 5666 because server_port=5666 is set. If you have any firewalls blocking that port, be sure to open it to your Icinga server.
Save and quit. Then restart NRPE to put the changes into effect:
sudo service nagios-nrpe-server restart
Once you are done installing and configuring NRPE on the hosts that you want to monitor, you will have to add these hosts to your Icinga server configuration before it will start monitoring them.

Add Remote Host To Icinga Server Configuration

On your Icinga server, create a new configuration file for each of the remote hosts that you want to monitor in /etc/icinga/objects. Replace yourhost with the name of your host:
sudo vi /etc/icinga/objects/yourhost.cfg
Add in the following host definition, replacing the host_name value with your remote hostname (I used "wordpress-1" in my example), the alias value with a description of the host, and the address value with the private IP address of the remote host:
define host {
        use                     generic-host
        host_name               wordpress-1
        alias                   My first wordpress server
        address                 10.128.xxx.xxx
        }
Then add any of these service blocks for services you want to monitor. Note that the value of check_command determines what will be monitored, including status threshold values. Here are some examples that you can add to your host's configuration file:
Ping:
define service {
        use                             generic-service
        host_name                       wordpress-1
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
        }
SSH (notifications_enabled set to 0 disables notifications for a service):
define service {
        use                             generic-service
        host_name                       wordpress-1
        service_description             SSH
        check_command                   check_ssh
        notifications_enabled           0
        }
Load:
define service {
        use                             generic-service
        host_name                       wordpress-1
        service_description             Current Load
        check_command                   check_load!5.0!4.0!3.0!10.0!6.0!4.0
        }
Current Users:
define service {
        use                             generic-service
        host_name                       wordpress-1
        service_description             Current Users
        check_command                   check_users!20!50
        }
Disk Space:
define service {
        use                             generic-service
        host_name                       wordpress-1
        service_description             Disk Space
        check_command                   check_all_disks!20%!10%
        }
If you're wondering what use generic-service means, it is simply inheriting the values of a service template called "generic-service" that is defined by default.
Now save and quit. Reload your Icinga configuration to put any changes into effect:
sudo service icinga reload
Once you are done configuring Icinga to monitor all of your remote hosts, let's check out the user interface.

User Interface Example

After setting up monitoring on a few hosts with either monitoring method, go to your Icinga user interface (http://youricingaserver.com/icinga, icingaadmin login), then click on the Service Detail link. You should see a list of all of the services that you set up monitoring for.
As an example, here are two hosts that are being monitored using the configuration files described above. The web-1 HTTP service is being monitored via its normal HTTP port, indicating that its web server is responding with an OK status, and wordpress-1 is showing that all its monitored services are OK.
Icinga User Interface Example
Icinga has a plethora of features, so feel free to browse the interface to see what you can discover about your hosts and services.

Conclusion

Now that you are monitoring your hosts and some of their services, you might want to spend some time figuring out which services are critical to you, so you can start monitoring those. You may also want to set up notifications so that, for example, you receive an email when your disk utilization reaches a warning or critical threshold, or when your main website is down, letting you resolve the situation promptly or even before a problem occurs.
Good luck!

How to Configure Remote Backups Using Bacula in an Ubuntu 12.04 VPS

Client Installation

No local backups will be stored on the remote client machine, so not all of the bacula components need to be installed.
Install bacula-fd (the file daemon) and bconsole (the bacula console) on this machine with apt-get using the bacula-client metapackage:
sudo apt-get update
sudo apt-get install bacula-client
The necessary components are now installed and ready to be configured.

Client Machine Configuration

The configuration of the client environment is relatively straightforward. We will only be editing the bacula file daemon configuration file. Open the file with root privileges with the following command:
sudo nano /etc/bacula/bacula-fd.conf
We need to change a few items and save some information that we will need for our server configuration. Begin by finding the Director section.
The bacula director is located on the backup VPS. Change the "Name" parameter to the hostname of your backup server followed by "-dir".
You also need to copy the password that bacula generated for your client file daemon to a place you'll have available when you configure your backup server settings:
Director {
  Name = BackupServer-dir
  Password = "u2LK-yBrQzfiEsc6NWftHEhymmdPWsklN"  # Copy this password for later reference!
}
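Rather than copying the password by eye, you can pull it out with grep. The snippet below demonstrates the idea on a sample file; on a real client, point the grep at /etc/bacula/bacula-fd.conf instead (you may need sudo to read it):

```shell
# Write a sample Director block to a temp file for demonstration purposes.
cat > /tmp/bacula-fd-sample.conf <<'EOF'
Director {
  Name = BackupServer-dir
  Password = "u2LK-yBrQzfiEsc6NWftHEhymmdPWsklN"
}
EOF

# Extract the quoted password from the first Password line.
grep -m1 'Password' /tmp/bacula-fd-sample.conf | cut -d'"' -f2
```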
Next, we need to adjust one parameter in the FileDaemon section. We will change the "FDAddress" parameter to match the IP address or domain name of our client machine. The "Name" parameter should already be populated correctly with the client file daemon name:
FileDaemon {                          # this is me
  Name = ClientMachine-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run/bacula
  Maximum Concurrent Jobs = 20
  FDAddress = ClientMachine.DomainName.com
}
We also need to configure this daemon to pass its log messages to the backup cloud server. Find the Messages section and change the "director" parameter to match your backup cloud server's name.
Messages {
  Name = Standard
  director =  BackupServer-dir = all, !skipped, !restored
}
Save the file and exit.
Check that your configuration file has the correct syntax with the following command:
sudo bacula-fd -tc /etc/bacula/bacula-fd.conf
If the command returns no output, the configuration file has valid syntax. Restart the file daemon to use the new settings:
sudo service bacula-fd restart
The client machine is now correctly configured.
In this example, we would like to restore to a folder on this same machine. Create the file structure and lock down the permissions and ownership for security with the following commands:
sudo mkdir -p /bacula/restore
sudo chown -R bacula:bacula /bacula
sudo chmod -R 700 /bacula
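You can sanity-check the resulting mode afterwards. The demo below uses a temp directory so it can run anywhere; on the client the path would be /bacula, and the chown to the bacula user applies there as shown above:

```shell
# Create a demo restore tree and lock it down the same way as /bacula.
mkdir -p /tmp/bacula-demo/restore
chmod -R 700 /tmp/bacula-demo

# Mode 700 means only the owner (the bacula user, on a real client)
# can enter or read the directory.
stat -c '%a' /tmp/bacula-demo
```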
The client machine is now configured correctly. Next, we will configure the backup cloud server to pull the file data from the client.

Backup Server Configuration

Log into the backup cloud server to complete this stage of the configuration.
The bulk of the configuration is actually done on the backup server. That is because the bacula "director" manages all other bacula processes and must be able to communicate correctly with a number of different components.
To start, open the "bacula-dir.conf" file with administrator privileges:
sudo nano /etc/bacula/bacula-dir.conf

Job Configuration

Begin by finding the Job Section. The current configuration is named "BackupClient1" and is used for the backup server's local backup. We need to change the name to reflect this:
Job {
  Name = "LocalBackup"
  JobDefs = "DefaultJob"
}
Now that we have identified the first job as backing up the local machine, we want to create a similar job for backing up our remote client. To do this, copy and paste the job definition below the one you just modified.
Change the name to reflect the fact that this is a remote backup scenario. The "Client" parameter identifies our remote client file daemon as the target for our backup. The "Pool" parameter lets bacula store its remote backups separately from our local backups. We will define the pool we're referencing later in the file:
Job {
  Name = "RemoteBackup"
  JobDefs = "DefaultJob"
  Client = ClientMachine-fd
  Pool = RemoteFile
}
Next, define a place for the remote backups to restore. We will use the directory that we created on the client machine to restore remote backups.
Find the "RestoreFiles" job definition. Copy the current entry and paste it below. We will then modify some entries to label it accurately and to work with the client machine:
Job {
  Name = "RestoreRemote"
  Type = Restore
  Client=ClientMachine-fd
  FileSet="Full Set"
  Storage = File     
  Pool = Default
  Messages = Standard
  Where = /bacula/restore
}

Client Configuration

Find the Client definition. We will change the "Address" parameter to reflect our actual backup cloud server IP address instead of using localhost. The password should already be set correctly for the local machine.
Client {
  Name = BackupServer-fd
  Address = BackupServer.DomainName.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "CRQF7PW-mJumFtENX2lqGvJ6gixPTyRQp"          # password for Local FileDaemon
  File Retention = 30 days            # 30 days
  Job Retention = 6 months            # six months
  AutoPrune = yes                     # Prune expired Jobs/Files
}
The next step is to actually define the client machine that we've been referencing in our configuration. Copy the Client entry we just modified and paste it below the current definition. This new definition will be for the remote machine that we are backing up.
Match the name to your client machine's hostname followed by "-fd". The "Address" line needs to match the client machine's IP address or domain name as well.
Finally, this is where you enter the password that you copied from the remote client's file daemon configuration file. Make sure that you modify this password value, or else bacula will not function correctly.
Client {
  Name = ClientMachine-fd
  Address = ClientMachine.DomainName.com
  FDPort = 9102 
  Catalog = MyCatalog
  Password = "u2LK-yBrQzfiEsc6NWftHEhymmdPWsklN"          # password for Remote FileDaemon
  File Retention = 30 days            # 30 days
  Job Retention = 6 months            # six months
  AutoPrune = yes                     # Prune expired Jobs/Files
}

Storage Configuration

Next, change the "Address" parameter in the Storage section to the IP address or domain name of the backup VPS. Once again, the password should already be correct here:
Storage {
  Name = File
# Do not use "localhost" here   
  Address = BackupServer.DomainName.com                # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "097dnj3jw1Yynpz2AC38luKjy5QTnGoxS"
  Device = FileStorage
  Media Type = File
}

Pool Configuration

Find the Pool definitions section. We will first add a parameter to the "File" pool definition. Add the "Label Format" parameter to the definition and choose a prefix to name local file backups. For this guide, local backups will have "Local-" as a prefix.
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  Label Format = Local-
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # one year
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
}
Next, we need to copy the section we just modified and paste it below the current entry. This will be set up for remote backup storage.
Change the name of the new pool to reflect its job of storing remote backups. Also, change the prefix by adjusting the "Label Format" parameter to "Remote-":
Pool { 
  Name = RemoteFile
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  Label Format = Remote-
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # one year
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
}
Save and close the file.

Editing bacula-sd.conf

Next, open the "bacula-sd.conf" file with root privileges:
sudo nano /etc/bacula/bacula-sd.conf
Change the "SDAddress" parameter to reflect the backup server's IP address or domain name:
Storage {                             # definition of myself
  Name = BackupServer-sd
  SDPort = 9103                  # Director's port
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  SDAddress = BackupServer.DomainName.com
}
Save and close the file.

Checking the Configuration and Restarting Services

Check the configuration with the following commands:
sudo bacula-dir -tc /etc/bacula/bacula-dir.conf
sudo bacula-sd -tc /etc/bacula/bacula-sd.conf
If no output is returned, the configuration files have valid syntax. If this is the case, restart the daemons to use the new settings:
sudo service bacula-director restart
sudo service bacula-sd restart

Testing Remote Backups

Log into the bacula console to test the backup functionality.
sudo bconsole
Test that the bacula director can connect to the remote machine by typing the following:
status
Status available for:
     1: Director
     2: Storage
     3: Client
     4: All
Select daemon type for status (1-4): 
Choose #3 to check on the client connection and then select the remote machine:
3: Client
2: ClientMachine-fd
It should return a summary with some statistics, confirming that we can connect to the remote file daemon.
Run a test backup of the remote system by typing the following command:
run
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
A job name must be specified.
The defined Job resources are:
     1: LocalBackup
     2: RemoteBackup
     3: BackupCatalog
     4: RestoreFiles
     5: RestoreRemote
Select Job resource (1-5): 
Select the "RemoteBackup" option to run a backup of the remote machine. Type "yes" to begin the backup:
2: RemoteBackup
The director will send the backup task to the remote file daemon, which will pass its data to the backup server's storage daemon. You can check the status of the job using the "status" command as we did above. You should also check the messages using the "messages" command.
messages
If you continue to check messages, eventually you will receive a summary of the backup operation. It should contain the line "Termination: Backup OK" if everything went as expected.
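Once you trust the job, you don't have to drive bconsole interactively; its commands can be piped in on standard input, which is handy for scripting or cron. A sketch, using the job name defined above:

```shell
# Queue the remote backup job non-interactively, then show any messages.
printf 'run job=RemoteBackup yes\nmessages\nquit\n' | sudo bconsole
```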

Testing Remote Restore

Now, test the restore functionality:
restore all
Choose the "Select the most recent backup for a client" option. Select the remote client that we have just backed up:
5: Select the most recent backup for a client
2: ClientMachine-fd
You will be dropped into a file tree where you are able to select the files you would like to restore with the "mark" and "unmark" commands.
We have chosen to restore everything, so we can just type "done" to move on. Select the job that we defined for remote restoration and type "yes" to run the restoration:
done
2: RestoreRemote
Again, you can check the restoration with the "status" and "messages" commands. You should eventually get a summary in the messages that contains the line "Termination: Restore OK". This means the restoration was successful. Type "exit" to leave the bacula console.
exit

Checking the Filesystem

We can check that our remote backup volume has the correct name format with the following command:
sudo ls /bacula/backup
LocalBackup   Remote-0002
As you can see, our backup file for the remote system has adopted the naming convention we supplied. The local backup is not named according to our convention because it was created before our changes.
If we log into our remote client machine, we can check our restore with the following line:
sudo ls /bacula/restore
bin   dev  home        lib         media  opt   run   selinux  sys  var
boot  etc  initrd.img  lost+found  mnt    root  sbin  srv      usr  vmlinuz
As you can see, we have restored the filesystem to this folder correctly.

How To Use BackupPC to Create a Backup Server on an Ubuntu 12.04 VPS


Thursday, June 19, 2014

Ubuntu root sound problem

The sound service does not start when you log in as the root user, so sound is muted whenever you are logged in as root. We need to add the sound service to the startup applications so that it starts automatically when you log in as root.
To enable sound, click on the Dash home button, search for "startup applications", and click on the Startup Applications icon.
Click on the Add button and fill in the following:
Name: Audio
Command: pulseaudio --start --log-target=syslog
Comment: start audio service
Then click on Close.
Reboot the system, then log in again from the root account and check the sound.
We have now enabled the sound service for the root account.
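You can also check from a terminal whether the PulseAudio daemon is actually running for the current user; pulseaudio --check exits with status 0 when a daemon is up:

```shell
# --check returns success if a PulseAudio daemon is running for this user.
pulseaudio --check && echo "PulseAudio is running" || echo "PulseAudio is NOT running"
```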

Install Google Chrome Browser Command

Step #1: Download Google Chrome

You need to visit the following URL to grab the .deb file. Make sure you download the 32-bit or 64-bit .deb version to match your system:
=> Download Chrome Browser
Fig.01: Download Google Chrome 32/64 bit deb package for Ubuntu Linux

Download 32 bit version

Open a terminal (press CTRL-ALT-T) and type the following wget command to grab the .deb file:
$ cd /tmp
$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_i386.deb

Download 64 bit version using command line

Open a terminal (press CTRL-ALT-T) and type the following wget command to grab the .deb file:
$ cd /tmp
$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb

Sample outputs:
--2013-05-07 15:19:24--  https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
Resolving dl.google.com (dl.google.com)... 173.194.36.39, 173.194.36.35, 173.194.36.40, ...
Connecting to dl.google.com (dl.google.com)|173.194.36.39|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 42434828 (40M) [application/x-debian-package]
Saving to: `google-chrome-stable_current_amd64.deb'
 
100%[======================================>] 4,24,34,828  466K/s   in 89s
 
2013-05-07 15:20:53 (466 KB/s) - `google-chrome-stable_current_amd64.deb' saved [42434828/42434828]
 

Step #2: Install .deb file

Type the following command to install 32 bit version:
$ sudo dpkg -i google-chrome-stable_current_i386.deb
Type the following command to install 64 bit version:
$ sudo dpkg -i google-chrome-stable_current_amd64.deb
Sample outputs:
Fig.02: Errors were encountered while processing

Step #3: Fixing "errors were encountered while processing" error

Type the following command to fix the error and install Chrome:
$ sudo apt-get -f install
Sample outputs:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  libnss3-1d libxss1
The following NEW packages will be installed:
  libnss3-1d libxss1
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 22.0 kB of archives.
After this operation, 162 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://in.archive.ubuntu.com/ubuntu/ precise-updates/main libnss3-1d amd64 3.14.3-0ubuntu0.12.04.1 [13.4 kB]
Get:2 http://in.archive.ubuntu.com/ubuntu/ precise/main libxss1 amd64 1:1.2.1-2 [8,646 B]
Fetched 22.0 kB in 0s (23.6 kB/s)
Selecting previously unselected package libnss3-1d.
(Reading database ... 197880 files and directories currently installed.)
Unpacking libnss3-1d (from .../libnss3-1d_3.14.3-0ubuntu0.12.04.1_amd64.deb) ...
Selecting previously unselected package libxss1.
Unpacking libxss1 (from .../libxss1_1%3a1.2.1-2_amd64.deb) ...
Setting up libnss3-1d (3.14.3-0ubuntu0.12.04.1) ...
Setting up libxss1 (1:1.2.1-2) ...
Setting up google-chrome-stable (26.0.1410.63-r192696) ...
update-alternatives: using /usr/bin/google-chrome to provide /usr/bin/x-www-browser (x-www-browser) in auto mode.
update-alternatives: using /usr/bin/google-chrome to provide /usr/bin/gnome-www-browser (gnome-www-browser) in auto mode.
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
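Putting steps 1 through 3 together, the whole 64-bit install can be scripted with the same URL and commands as above; dpkg's dependency failure, if any, is repaired by apt-get -f install:

```shell
# Download and install Google Chrome (64-bit), fixing dependencies if needed.
cd /tmp
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb || sudo apt-get -f install
```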

How do I use Google Chrome?

Simply type the following command:
google-chrome
google-chrome http://www.cyberciti.biz/

Or use the GUI: press the Windows key, then type Google or Chrome in the Unity search bar:
Fig.03: Starting Google Chrome browser
So there you have it: the fast Google Chrome browser running on Ubuntu Linux:
Fig.04: Google Chrome in action

How to enable root in Ubuntu 12.04

root is the top administrator account in Linux. You can do whatever you want while logged in as root, which poses a security risk: even a small mistake in the file system can crash the entire OS, and you could type a command incorrectly and destroy the system.
So the Ubuntu developers made a conscientious decision to disable the administrative root account by default in all Ubuntu installations. This does not mean that the root account has been deleted or that it cannot be accessed. It has merely been given a password which matches no possible encrypted value, and therefore cannot log in directly by itself.
As a network administrator you may need to enable root account in several circumstances. In this tutorial we would enable root account in Ubuntu 12.04.
During the installation of Ubuntu we need to create one user account. By default this account has permission to run the sudo command using its own password. Log in from this account.
We need a terminal to execute commands. Click on the Ubuntu button, type terminal in the search box, and click on the Terminal icon. Alternatively, you can launch a terminal directly by pressing the CTRL+ALT+T key combination.
Set a password for the root account by running the following command:
sudo passwd root
Just remember: when sudo asks for a password, it needs YOUR USER password. And when it says "Enter new UNIX password", enter the password you want to set for the root account.
The root account is now enabled. Next we need to enable the root login option on the login screen. Run the following command:
sudo sh -c 'echo "greeter-show-manual-login=true" >> /etc/lightdm/lightdm.conf'
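If you are curious what that sh -c line does, it simply appends one key to lightdm.conf. The demo below reproduces the append on a temp copy so nothing on your system is touched; the real target is /etc/lightdm/lightdm.conf, which is why sudo is needed there:

```shell
# Start from a minimal lightdm.conf-style file.
conf=/tmp/lightdm-demo.conf
printf '[SeatDefaults]\n' > "$conf"

# Same append as the tutorial command, minus sudo (the temp file is ours).
sh -c "echo \"greeter-show-manual-login=true\" >> $conf"

# Confirm the key is present exactly once.
grep -c 'greeter-show-manual-login=true' "$conf"
```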
Reboot the system and log in as root.
When you are logged in as root in Ubuntu 12.04, the username will show as Guest in the notification area.
You can confirm from the terminal that you are logged in as root. Open a terminal and run the who am i command.
While logged in as root, several services, such as Google Chrome and sound, will not run. Please check our other articles in Ubuntu 12.04 tips and tricks to enable these services for the root account.
If you wish to disable the root account again, follow these steps.
Reboot the system and log in from the account which you created during the installation process.
Run the following command:
sudo passwd -dl root
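For reference, the two flags do different work: -d deletes root's password and -l locks the account; together they return root to its default locked state. You can confirm the lock by inspecting /etc/shadow, where the second colon-separated field of root's entry shows a lock marker such as "!" instead of a password hash:

```shell
# Inspect root's entry in /etc/shadow; after passwd -dl the second
# field is a lock marker rather than a usable password hash.
sudo grep '^root:' /etc/shadow
```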
Reboot the system again and try to log in as root.
Enabling the root account is rarely necessary. Almost everything you need to do as administrator of an Ubuntu system can be done via sudo or gksudo. We suggest you check our other articles to learn how to perform administrative tasks without enabling the root account.

Wednesday, June 18, 2014

Install MySQL Workbench 5.2.44 in Ubuntu 12.10 or 12.04

1. Open a terminal window.
2. Type in the following commands then hit Enter after each.
sudo add-apt-repository ppa:olivier-berten/misc
sudo apt-get update
sudo apt-get install mysql-workbench
3. To start the application, use this command.
mysql-workbench &

Installing default JRE/JDK

This is the recommended and easiest option. It will install OpenJDK 6 on Ubuntu 12.04 and earlier; on 12.10+ it will install OpenJDK 7.
Installing Java with apt-get is easy. First, update the package index:
sudo apt-get update
Then, check whether Java is already installed:
java -version
If it returns "The program java can be found in the following packages", Java hasn't been installed yet, so execute the following command:
sudo apt-get install default-jre
This will install the Java Runtime Environment (JRE). If you instead need the Java Development Kit (JDK), which is usually needed to compile Java applications (for example Apache Ant, Apache Maven, Eclipse and IntelliJ IDEA), execute the following command:
sudo apt-get install default-jdk
That is everything that is needed to install Java.
All other steps are optional and must only be executed when needed.

Installing OpenJDK 7

To install OpenJDK 7, execute the following command:
sudo apt-get install openjdk-7-jre 
This will install the Java Runtime Environment (JRE). If you instead need the Java Development Kit (JDK), execute the following command:
sudo apt-get install openjdk-7-jdk

Installing Oracle JDK

The Oracle JDK is the official JDK; however, it is no longer provided by Oracle as a default installation for Ubuntu.
You can still install it using apt-get. To install any version, first execute the following commands:
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
Then, depending on the version you want to install, execute one of the following commands:

Oracle JDK 6

This is an old version but still in use.
sudo apt-get install oracle-java6-installer

Oracle JDK 7

This is the latest stable version.
sudo apt-get install oracle-java7-installer

Oracle JDK 8

This is a developer preview; the general release is scheduled for March 2014. This external article about Java 8 may help you understand what it's all about.
sudo apt-get install oracle-java8-installer

Managing Java (optional)

When there are multiple Java installations on your Droplet, you can choose which Java version to use as the default. To do this, execute the following command:
sudo update-alternatives --config java
It will usually return something like this if you have 2 installations (if you have more, it will of course return more):
There are 2 choices for the alternative java (providing /usr/bin/java).

Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-7-oracle/jre/bin/java          1062      auto mode
  1            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   1061      manual mode
  2            /usr/lib/jvm/java-7-oracle/jre/bin/java          1062      manual mode

Press enter to keep the current choice[*], or type selection number:
You can now choose the number to use as default. This can also be done for the Java compiler (javac):
sudo update-alternatives --config javac
It is the same selection screen as the previous command and should be used in the same way. This command can be executed for every other tool that has multiple installations; in Java, this includes but is not limited to keytool, javadoc and jarsigner.
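If you already know which installation you want, the choice can also be made non-interactively with --set, passing the full path of an alternative from the selection list (the path below is one of the samples shown above):

```shell
# Set the default java alternative without the interactive menu.
sudo update-alternatives --set java /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java
```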

Setting the "JAVA_HOME" environment variable

To set the JAVA_HOME environment variable, which is needed for some programs, first find out the path of your Java installation:
sudo update-alternatives --config java
It returns something like:
There are 2 choices for the alternative java (providing /usr/bin/java).

Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-7-oracle/jre/bin/java          1062      auto mode
  1            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   1061      manual mode
  2            /usr/lib/jvm/java-7-oracle/jre/bin/java          1062      manual mode

Press enter to keep the current choice[*], or type selection number:
The installation path for each selection is:
  0. /usr/lib/jvm/java-7-oracle
  1. /usr/lib/jvm/java-6-openjdk-amd64
  2. /usr/lib/jvm/java-7-oracle
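Note that the selection list shows the path of the java binary, while JAVA_HOME should point at the installation root. A small sketch of stripping the trailing jre/bin/java with shell parameter expansion (the sample path is taken from the listing above, not queried live):

```shell
# Strip the /jre/bin/java suffix to get the JAVA_HOME prefix.
java_bin="/usr/lib/jvm/java-7-oracle/jre/bin/java"
java_home="${java_bin%/jre/bin/java}"
echo "$java_home"   # prints /usr/lib/jvm/java-7-oracle
```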
Copy the path from your preferred installation and then edit the file /etc/environment:
sudo nano /etc/environment
In this file, add the following line (replacing YOUR_PATH with the path you just copied):
JAVA_HOME="YOUR_PATH"
That should be enough to set the environment variable. Now reload this file:
source /etc/environment
Test it by executing:
echo $JAVA_HOME
If it returns the path you just set, the environment variable has been set successfully. If it doesn't, please make sure you followed all steps correctly.