In some cases, one of my servers has problems delivering static content due to low disk I/O capacity. So we need something to help our web service deliver and cache static content (mostly pictures and HTML) to reduce the load on the main server.
You can refer to the diagram below to get a clearer picture:
As usual, I will be using CentOS with Varnish cache in front of the web servers. Varnish will also act as the failover if one of the web servers goes down. All servers behind the cache server will communicate using internal IPs, so the web servers are not exposed to the outside world.
Variables are as below:
OS: CentOS 6.2 64bit
Web1: 192.168.100.11
Web2: 192.168.100.12
Cache Server RAM installed: 16GB
Domain: supremedex.org
1. Installing Varnish is super easy:
$ rpm --nosignature -i http://repo.varnish-cache.org/redhat/varnish-3.0/el5/noarch/varnish-release-3.0-1.noarch.rpm
$ yum install varnish
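If you want to double-check which Varnish version yum pulled in before going further, a quick optional query is:
$ rpm -q varnish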
2. We need to tell Varnish how it should start. Open /etc/sysconfig/varnish and make sure the following lines are uncommented and have the correct values:
-----------------------------------------------------------------------------------------------------------------------------------
NFILES=131072
MEMLOCK=82000
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80
VARNISH_ADMIN_LISTEN_PORT=8888
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=2
VARNISH_MAX_THREADS=1000
VARNISH_THREAD_TIMEOUT=120
VARNISH_CACHE_SIZE=12G
VARNISH_CACHE="malloc,${VARNISH_CACHE_SIZE}"
VARNISH_TTL=120
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -u varnish -g varnish \
             -S ${VARNISH_SECRET_FILE} \
             -s ${VARNISH_CACHE}"
-----------------------------------------------------------------------------------------------------------------------------------
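Since VARNISH_LISTEN_PORT is set to 80, nothing else on the cache server (for example a local Apache) should already be bound to that port, or Varnish will fail to start later. A quick way to check:
$ netstat -tulpn | grep ":80 "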
3. Let's configure Varnish. Open /etc/varnish/default.vcl and make sure it contains the following:
---------------------------------------------------------------------------------------------------------------------------
# Define the internal network subnet.
acl internal {
    "192.168.100.0"/24;
}

# Define the list of web servers
# Port 80 Backend Servers
backend web1 {
    .host = "192.168.100.11";
    .probe = {
        .url = "/server_status.php";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

backend web2 {
    .host = "192.168.100.12";
    .probe = {
        .url = "/server_status.php";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

# Define the director that determines how to distribute incoming requests.
director web_director round-robin {
    { .backend = web1; }
    { .backend = web2; }
}

# Respond to incoming requests.
sub vcl_recv {
    # Set the director to cycle between web servers.
    set req.backend = web_director;

    if (req.url ~ "^/server_status\.php$") {
        return (pass);
    }

    # Pipe these paths directly to Apache for streaming.
    if (req.url ~ "^/backup") {
        return (pipe);
    }

    # Always cache the following file types for all users.
    if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm)(\?[a-z0-9]+)?$") {
        unset req.http.Cookie;
    }
}

sub vcl_hash {
}

# Code determining what to do when serving items from the Apache servers.
sub vcl_fetch {
    # Don't allow static files to set cookies.
    if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm)(\?[a-z0-9]+)?$") {
        # beresp == Back-end response from the web server.
        unset beresp.http.set-cookie;
    }

    # Allow items to be stale if needed.
    set beresp.grace = 6h;
}

# In the event of an error, show friendlier messages.
sub vcl_error {
    # Redirect to some other URL in the case of a homepage failure.
    if (req.url ~ "^/?$") {
        set obj.status = 302;
        set obj.http.Location = "http://dl.dropbox.com/u/68546782/maintenance.jpg";
    }
}
---------------------------------------------------------------------------------------------------------------------------
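Before starting or restarting Varnish, you can ask varnishd to compile the VCL file; if there is a syntax error it will be reported, otherwise the generated C code is printed and it exits cleanly:
$ varnishd -C -f /etc/varnish/default.vcl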
4. Now we need to create a PHP file on each backend server so Varnish can make sure the PHP and Apache services are running well. This is for Varnish's health monitoring. Create a new file called server_status.php under the Apache document root (in my case /var/www/html):
$ touch /var/www/html/server_status.php
$ chown nobody.nobody /var/www/html/server_status.php
And add the following line:
<?php echo "Status: OK"; ?>
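You can then test the health-check URL from the cache server itself (assuming curl is installed and the internal network is reachable). The probe defined in default.vcl requests this path every 5 seconds and expects an HTTP 200, so each backend should simply return the text we just added:
$ curl -s http://192.168.100.11/server_status.php
Status: OK
$ curl -s http://192.168.100.12/server_status.php
Status: OK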
5. Now we can start Varnish using the following command:
$ service varnish start
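If you also want Varnish to come back automatically after a reboot, you can enable its init script (a standard CentOS 6 step):
$ chkconfig varnish on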
6. Let's check whether Varnish is working fine:
$ netstat -tulpn | grep varnish
tcp        0      0 0.0.0.0:80        0.0.0.0:*        LISTEN      10200/varnishd
tcp        0      0 0.0.0.0:8888      0.0.0.0:*        LISTEN      10199/varnishd
tcp        0      0 :::80             :::*             LISTEN      10200/varnishd
tcp        0      0 :::8888           :::*             LISTEN      10199/varnishd
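For a further optional check that requests are really being answered by Varnish and cached, you can look at the response headers on the cache server (Varnish adds Via and X-Varnish headers, and a non-zero Age on a repeat request for a static file means it was served from cache) and at the hit/miss counters:
$ curl -sI http://127.0.0.1/ | grep -Ei "via|x-varnish|age"
$ varnishstat -1 | grep -E "cache_hit|cache_miss"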
7. Point the domain itself to the cache server as below:
supremedex.org    A        202.133.14.80
www               CNAME    supremedex.org
8. Once DNS propagation has completed, you should be able to access the website directly in a web browser at http://www.supremedex.org/.
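To confirm from your side that the records have propagated and that the live site is being served through Varnish (assuming dig from bind-utils and curl are available), the output should look something like this:
$ dig +short www.supremedex.org
supremedex.org.
202.133.14.80
$ curl -sI http://www.supremedex.org/ | grep -i via
Via: 1.1 varnish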