
Sunday, March 31, 2013

Setting Up munin on Ubuntu Server 13.04 with nginx

This post explains how to configure munin on a single Ubuntu 13.04 server that uses nginx.

Monitoring is supposed to save time by aiding debugging and by predicting and avoiding catastrophes. However, setting up munin on Ubuntu was a time-consuming trial-and-error process for me. The official instructions and various blog posts that cover this topic skip important steps, such as having munin's FastCGI processes start automatically at boot time. I have documented the setup that worked for me, hoping that others can reuse my work.

Ubuntu packages

Run the following command to install the packages needed for munin.
sudo apt-get install munin munin-node spawn-fcgi libcgi-fast-perl

The following sections configure the munin packages.

munin configuration

Write the munin configuration below to /etc/munin/munin-conf.d/90-fcgi

graph_strategy cgi
html_strategy cgi
cgiurl_graph /munin/munin-cgi-graph

nginx configuration

Write the nginx configuration below to /etc/nginx/sites-enabled/munin.conf

server {
  listen 443 ssl;
  listen 80;
  charset utf-8;
  server_name munin.your-domain.com;

  location ~ ^/munin/munin-cgi-graph/ {
    fastcgi_split_path_info ^(/munin/munin-cgi-graph)(.*);
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass unix:/var/run/munin/fastcgi-graph.sock;
    include fastcgi_params;
  }

  location /munin/static/ {
    alias /etc/munin/static/;
    expires modified +1w;
  }

  location /munin/ {
    fastcgi_split_path_info ^(/munin)(.*);
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_pass unix:/var/run/munin/fastcgi-html.sock;
    include fastcgi_params;
  }

  location / {
    rewrite ^/$ /munin/ redirect;
  }
}

This configuration assumes that you have a DNS entry set aside for reaching the munin pages. I have separate DNS entries for all my applications, and they're all CNAMEs for the (same) machine that they run on.
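One caveat about the listen 443 ssl line: nginx will refuse to serve HTTPS unless it knows which certificate to use. If you don't already configure one at the http level, add directives along these lines inside the server block (the paths below are placeholders for your own files):

```nginx
# Placeholders -- point these at your own certificate bundle and key.
ssl_certificate     /path/to/your-certificate.crt;
ssl_certificate_key /path/to/your-private-key.pem;
```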

Once you're done tweaking the configuration, reload nginx.
sudo /etc/init.d/nginx reload

FastCGI daemons

Write the script below to /etc/init.d/munin-fcgi

#!/bin/bash

### BEGIN INIT INFO
# Provides:          munin-fcgi
# Required-Start:    $remote_fs $syslog $network
# Required-Stop:     $remote_fs $syslog $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start munin FCGI processes at boot time
# Description:       Start the FCGI processes behind http://munin.*/
### END INIT INFO

graph_pidfile="/var/run/munin/fcgi_graph.pid"
# Ubuntu 12.10: /usr/lib/cgi-bin/munin-cgi-graph
graph_cgi="/usr/lib/munin/cgi/munin-cgi-graph"
html_pidfile="/var/run/munin/fcgi_html.pid"
# Ubuntu 12.10: /usr/lib/cgi-bin/munin-cgi-html
html_cgi="/usr/lib/munin/cgi/munin-cgi-html"

retval=0

. /lib/lsb/init-functions

start() {
  echo -n "Starting munin graph FastCGI: "
  start_daemon -p ${graph_pidfile} /usr/bin/spawn-fcgi -u munin -g munin \
      -s /var/run/munin/fastcgi-graph.sock -U www-data ${graph_cgi}
  retval=$?
  echo
  echo -n "Starting munin html FastCGI: "
  start_daemon -p ${html_pidfile} /usr/bin/spawn-fcgi -u munin -g munin \
      -s /var/run/munin/fastcgi-html.sock -U www-data ${html_cgi}
  [ $? -ne 0 ] && retval=1
  echo
}
stop() {
  echo -n "Stopping munin graph FastCGI: "
  killproc -p ${graph_pidfile} ${graph_cgi} -QUIT
  retval=$?
  echo
  echo -n "Stopping munin html FastCGI: "
  killproc -p ${html_pidfile} ${html_cgi} -QUIT
  [ $? -ne 0 ] && retval=1
  echo
}

case "$1" in
  start)
    start
  ;;
  stop)
    stop
  ;;
  restart)
    stop
    start
  ;;
  *)
    echo "Usage: munin-fcgi {start|stop|restart}"
    exit 1
  ;;
esac
exit $retval

Make the script executable, and fix some permissions while you're at it.

sudo chmod +x /etc/init.d/munin-fcgi
sudo chown munin /var/log/munin/munin-cgi-*

Now that the init.d script is in place, start it and have it run on every boot.

sudo /etc/init.d/munin-fcgi start
sudo update-rc.d munin-fcgi defaults

Debugging

You should be able to point your browser to http://munin.your-domain.com and see many graphs. If that doesn't work out, the logs below should give you a clue as to what went wrong.

  • /var/log/nginx/error.log
  • /var/log/munin/munin-cgi-graph.log
  • /var/log/munin/munin-cgi-html.log

Conclusion

I hope that you have found my post useful, and that it helped you get your munin monitoring setup up and running in a matter of minutes. I look forward to your comments and feedback!

Sunday, December 16, 2012

Free SSL for Your Web Application


This post documents my experience adding SSL support to my Web applications. Most of the steps here are applicable even if you choose a different SSL certificate provider and/or if you host your Web application on Heroku or Google App Engine.

Sign up with StartSSL

StartSSL offers free personal SSL certificates. These are not 30-day trials. The certificates will show your name, not a company name, but do give you all the technical benefits of SSL, such as encryption, the ability to run a SPDY server, and the ability to host background pages in Chrome.

https://www.startssl.com/?app=1

The Web UI leaves a lot to be desired, but their pricing can't be beat. You need to perform the following steps.

  • Create an account
  • Verify your e-mail address
  • Get a client certificate
  • Export your client certificate somewhere safe
  • Verify your domain
  • Generate a CSR (details below)
  • Obtain an SSL certificate for your server
  • Build an SSL certificate chain (details below)

Generate a CSR

Most online tutorials have you generate a password-protected RSA key, which cannot be used automatically. Most deployment tools, as well as Heroku and Google Apps, require un-encrypted RSA keys behind your certificates. Use the command below to generate a CSR with an un-encrypted key.

openssl req -new -newkey rsa:2048 -sha256 -nodes -keyout server_name.pem -out server_name.csr -batch

Replace server_name in the command with your server's name. If you want to generate CSRs for other providers, leave out the -batch at the end of the command and carefully answer the openssl prompts.

At the time of this writing, the StartSSL Web UI expects the CSR in a textarea input field, so open server_name.csr and copy-paste its content into the provided textarea.

After the CSR is provided to StartSSL, the .csr file can be deleted. However, hang on to the .pem file storing your server's private key!
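If you want to double-check the pairing before deleting anything, note that the CSR and the private key must share the same RSA modulus; a mismatch means the issued certificate will not work with your key. This check reuses the file names from the openssl command above:

```shell
# The two digests printed below must be identical.
openssl rsa -noout -modulus -in server_name.pem | openssl md5
openssl req -noout -modulus -in server_name.csr | openssl md5
```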

Build a certificate chain

StartSSL's Web UI currently provides the server certificate in a textarea on a page that also points to the relevant CA certificates. However, most server software expects all the certificates to be bundled in a single file. Follow these steps to put together the certificate bundle.

First, open server_name.cer in a text editor and copy-paste the StartSSL certificate text from the textarea.

gedit server_name.cer

Download StartSSL's root CA and intermediate CA certificates.

curl https://www.startssl.com/certs/ca.pem > ca.pem
curl https://www.startssl.com/certs/sub.class1.server.ca.pem > ca2.pem

Last, put together the certificate bundle.

cat server_name.cer ca2.pem ca.pem > server_name.crt
rm server_name.cer ca.pem ca2.pem

For consistency, replace server_name in the command above with the same name that you used for the .pem file.
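To make sure the copy-pasting didn't mangle the bundle, you can ask openssl to decode the first certificate in it (the same server_name convention as above); it should show your server's name and a sensible validity period.

```shell
# Print the subject and validity dates of the first certificate
# in the bundle -- that one should be yours, not a CA certificate.
openssl x509 -in server_name.crt -noout -subject -dates
```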

Set up your server for SSL

If you use nginx as your front-end server, you're in luck. Merge the bits below into your configuration file, and your application should support SSL without any further changes.

server {
  listen 443;
  ssl on;
  ssl_certificate /path/to/server_name.crt;
  ssl_certificate_key /path/to/server_name.pem;

  location / {
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
  }
}
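If you'd also like plain-HTTP visitors pushed to the SSL version of your site, a separate server block along the lines below does it. This is a sketch; www.something.com stands in for your real server_name:

```nginx
server {
  listen 80;
  server_name www.something.com;
  # Send every plain-HTTP request to the HTTPS version of the same URL.
  rewrite ^ https://$host$request_uri? permanent;
}
```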

Thursday, April 29, 2010

Ubuntu 10.04 on Mac Mini

This post describes a short procedure for installing Ubuntu 10.04 on a Mac mini. The instructions were initially written (and should still work) for older Ubuntu versions, down to 8.10. However, I haven't tested the old versions for a long time.

The process is roughly the same as my previous instructions for Ubuntu 8.04, but the write-up is more detailed, to address the questions I received last time. So don't let the post length scare you!

Outline
The article starts with a warning describing the shortcomings of installing Ubuntu on the newest Mac Mini model. That is followed by a step-by-step guide for installing Ubuntu which should work on any Mac (tested only on Minis though). The post ends with instructions on how to restore your dual-boot solution if Mac updates break it, and a list of (community-contributed) tricks for getting some hardware to work on the latest Mini.

Warning (skip if you're sure you want Ubuntu)
Ubuntu will not work seamlessly on the new Mac mini (model MacMini3,1 with 5 USB ports). This is based on the release version of Ubuntu 10.04.

Wireless does not work right after installation, so Ethernet is needed, at least for bootstrapping. Once the system gains Internet access, it offers to install a Broadcom STA wireless driver, which performs well. Grub 2 will get stuck and not show the boot menu about one time in five. This is uncomfortable if you're planning to use your Mini as a server that sits somewhere far away. For desktop usage, you'll probably want to install the nVidia proprietary drivers. With the open-source nouveau drivers, my screen doesn't get recognized when connected via Mini DisplayPort, but it works reasonably well when connected via the mini-DVI port. Sound doesn't work out of the box, and you'll need to hack your configuration files to get it to work (see below).

Method
  1. Use a Leopard or Snow Leopard (Desktop or Server) install disc to get your Mac in pristine form. This step is intended to undo anything that might have gone wrong in your previous attempts. You can skip it if you have a fresh install.
  2. Use Software Update to ensure you have all your updates installed. Update and reboot until there are no updates left.
  3. Start up Boot Camp Assistant (press Apple+Space to open Spotlight, then start typing the name until the application appears) and create a "Windows" partition.
  4. Do not let Boot Camp Assistant reboot your computer. Use Apple+Q to quit once it's done partitioning.
  5. Download and install the latest version of rEFIt (0.14 at the time of this writing) from http://refit.sourceforge.net/
  6. Open up Terminal (use Spotlight if you don't have it on your dock) and type the following commands:
    cd /efi/refit
    ./enable-always.sh
  7. Insert your Ubuntu CD, and shut down the computer, then power it back up.
  8. You should see the rEFIt boot screen.
  9. Select the Ubuntu CD (it should have a penguin on it) and go through the normal installation process. If rEFIt doesn't recognize the Ubuntu CD, power-cycle your Mac, and use Bootcamp to boot the Ubuntu CD - press and hold the Alt key as soon as the computer starts up, until the BootCamp screen shows up; select the CD image.
  10. When you have to do partitioning, choose Manual. Remove the Windows partition (the big FAT32 partition at the end). Create an ext4 partition (be sure to allow for swap space) and set it to mount to /, then create a swap partition. If you're unfamiliar with partitioning a Linux system, read the recommendations below.
    • Click on the FAT32 partition, then click the Delete Partition button.
    • Click on the free space entry at the bottom, then click the New partition button. Select Ext4 journaling file system under Use as:, check the Format the partition: box and select / as the Mount point:. Now subtract twice your RAM size from the partition size. For example, if your partition size is 53575 MB and you have 1 GB of RAM, you would write in 51527, which is 53575 - 2*1024. Press OK when you're done.
    • Click on the free space entry at the bottom, then click the New partition button. Select swap area under Use as: then press OK.
  11. Unless you know what you're doing, do not change the Advanced settings on the last installation screen. Specifically, don't change the default Grub installation location (hd0).
  12. When the installation is done, the Mac will reboot (if you're lucky) or beep multiple times. If it beeps, turn it off (press the power button for 4 seconds) then turn it back on.
  13. When you get the rEFIt boot screen, go to Partitioning Tool (bottom row, second icon from the left). It will prompt you if you want to update the MBR to reflect the GPT. Press Y, and watch the system reboot.
  14. Power down the system by pulling the power cord. Then power up again.
  15. Select Macintosh HD, and make sure you can boot into OSX. If it doesn't boot after 2 minutes, power cycle (see previous step) again.
  16. Optionally, switch the boot default to Linux. Open up /efi/refit/refit.conf in TextEdit, and uncomment the line saying #legacyfirst (at the bottom of the file).
  17. Reboot your Mac mini, and enjoy choice!
OSX software updates might remove your ability to dual-boot. The colorful boot menu will disappear, and your computer will boot into Mac OSX. If that happens, you can restore dual-booting with the following commands (step 6 from the guide above):
  • Open up Terminal (use Spotlight if you don't have it on your dock) and type the following commands:
    cd /efi/refit
    ./enable-always.sh
Hint: you can bookmark this post to have the solution handy. I won't mind :)

Getting Hardware to Work (community-contributed)
The tips here should help if you want to go beyond Ubuntu's out-of-the-box hardware support. These were contributed by others, and I just put them together on one page.
Sound (by nonspeaking) - not needed after Ubuntu 9.10 Beta
To get the sound working, add the following line to /etc/modprobe.d/alsa-base.conf
options snd-hda-intel model=imac24

Motivation
If you're curious, the following reasons led me to write this post:
  • My readers commented on my installation method for Ubuntu 8.04, and said it doesn't work for 8.10. Takeaway: please do comment! I listen :)
  • I got a new Mac Mini (MB464LL/A, 5 USB ports) to replace the one that was stolen from me.
Enjoy Ubuntu on your Mac!
Please leave a comment if you find a shorter way, or if something is not working for you.

Sunday, March 22, 2009

Your Web Server and Dynamic IPs

This post describes the techniques I'm using to host my application from a server whose IP changes over time. The post assumes the server's IP only changes when the server is not in use, and therefore I do not address servicing requests during the IP change. Instead, I am concerned with restoring the mapping between the server's DNS entries and its IP in an automated and reasonably quick manner.

Overview
I signed up for dynamic DNS service. This gives me a DNS name that points to any IP I want, and some software that I install on my server to automatically change the DNS name. Then I set the user-visible DNS hostname (www.something.com) as a CNAME pointing to the dynamic DNS hostname.

The technique generalizes to serving multiple applications (with separate domains) from a single server. The DNS entries for all the applications are set as CNAMEs pointing to the server's dynamic DNS entry. The HTTP port on the server is owned by a reverse proxy and load balancer dispatching requests to each application's backends based on the Host: header in the HTTP request.

Dynamic DNS Service
You can get dynamic DNS for free. I use dyndns.com's service, and it worked for me. If you want to shop around, here's a list of providers, courtesy of Google Search.

Once you sign up for service, you should get a hostname (like victor.dyndns.com) that you can point to any IP. This host name will be transparent to your users, so you don't need to worry about branding when choosing it. Your only worry is having to remember it.

The important decision you have to make here is the TTL (time-to-live) of your entry. This is the time it takes to propagate an IP change. Shorter values have the advantage that your server can be accessed quickly after it is moved. Longer values mean the IP address stays longer in the users' browser cache, so they have to do DNS queries less often. This matters because the dynamic DNS adds an extra DNS query that users' browsers must perform before accessing your site, which in turn adds up in the perceived latency of your site. Your TTL choice will be a compromise between availability after a move and the average latency increase caused by the extra DNS lookup.

Dynamic DNS Updater
To make the most out of your dynamic DNS service, you need software that updates the IP associated with the DNS hostname.

My Rails deployment script automatically configures the updater for me (source code here). I use ddclient, because it's recommended by my dynamic DNS service provider.

In order to use DynDNS on Ubuntu:
  1. sudo apt-get install ddclient
  2. Edit /etc/init.d/ddclient and replace run_daemon=false with run_daemon=true
  3. Use the following configuration in your /etc/ddclient.conf
pid=/var/run/ddclient.pid
use=web, web=checkip.dyndns.com/, web-skip='IP Address'
protocol=dyndns2
server=members.dyndns.org
login=dyndns_username
password='dyndns_password'
dyndns_hostname

The updater will start on reboot. If you want to start it right away,
sudo /etc/init.d/ddclient start


Other Options
If you use DynDNS, but don't run Linux, they have clients for Windows and OSX. If you don't use DynDNS, this Google search might be a good start.

My home router (running dd-wrt) uses inadyn. I don't like that on my server, because it takes my password on the command-line, so anyone that can run ps will see my password.


Application DNS Setup
Having done all the hard work, you close the loop by setting up a CNAME mapping your application's pretty DNS name to the dynamic DNS hostname. If you don't want to pay for a domain, you can give out the dynamic DNS hostname to your users... but it's probably not as pretty.

The process for setting up the CNAME mapping depends on your domain name provider (who sold you www.something.com). The best source of instructions I know is the Google Apps Help. If you use that, remember to replace ghs.google.com with your dynamic DNS hostname.

Debugging
Chances are, your setup will not come out the first time. Even if that doesn't happen, your setup might break at some point. Your best aid in debugging the DNS setup is dig, which comes pre-installed on Mac OSX and most Linux distributions.

Run dig www.something.com, and you'll get an output that looks like this:
moonstone:~ victor$ dig www.mymovienights.com
(irrelevant header, removed)
;; QUESTION SECTION:
;www.mymovienights.com.        IN    A

;; ANSWER SECTION:
www.mymovienights.com.    1742    IN    CNAME    chubby.kicks-ass.net.
chubby.kicks-ass.net.    2    IN    A    18.242.5.133

;; Query time: 211 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)

(irrelevant footer, removed)
I removed the parts that are completely uninteresting. The answer section shows a DNS chain built following this post. If your chain doesn't look like this, you know where to fix the error. If everything looks good here, but you still can't reach your server, the problem is either at the networking layer (can you ping the server?) or at the application layer (your load balancer or application server is misconfigured).

Another interesting result you get from dig is the query time, which shows the latency introduced by DNS to the users who visit your site for the first time. Unfortunately, this doesn't give accurate numbers if dig's answer is in some DNS cache, so be sure to account for that in some way when measuring latency.

Monitoring
I use Google's Webmaster Tools because they provide free monitoring. The overview is sufficient to see if the site is up or down. If you have a Gmail account and use it frequently, you can embed a gadget showing your site's status into your Gmail view.

Multiple Applications
I use the same server for multiple Web applications. I have a separate DNS hostname for each application, and they all point to the same dynamic DNS hostname via CNAMEs.

On the server, I use nginx as my reverse proxy because it is fast and it can be reconfigured with no downtime, as it's serving user requests. You can use apache if you prefer, using these instructions.

My reverse proxy setup is done automatically by my Rails deployment script (source code here). Here's how you can get a similar configuration:
  1. sudo apt-get install nginx
  2. For each application, create a file in /etc/nginx/sites-enabled/ with the following configuration
  upstream application_name {
    server 127.0.0.1:8080;
  }

  server {
    listen 80;
    server_name www.something.com;
    root /path/to/your/application/html/files;
    client_max_body_size 48M;
    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $host;
      proxy_redirect false;
      proxy_connect_timeout 2;
      proxy_read_timeout 86400;

      if (-f $request_filename) {
        break;
      }

      if (-f $request_filename/index.html) {
        rewrite (.*) $1/index.html break;
      }
      if (-f $request_filename.html) {
        rewrite (.*) $1.html break;
      }
      if (!-f $request_filename) {
        proxy_pass http://application_name;
        break;
      }
    }
  }
This configuration handles requests for www.something.com by serving static files directly through nginx when they are available, and by forwarding the HTTP requests to your application server at port 8080 otherwise. If you do not want to serve static files from nginx, remove the root clause, and all the if clauses. Tweak any other numbers as needed.
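You can exercise the name-based dispatch without touching DNS by supplying the Host header yourself. Run this on the server itself; www.something.com stands in for one of your application names:

```shell
# Pretend to be a browser asking for www.something.com; nginx should
# route the request to the matching server block.
curl -s -H "Host: www.something.com" http://127.0.0.1/ | head
```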

Of course, you cannot use port 80 for any of your application servers.

The server will start on reboot. If you want to start it right away,
sudo /etc/init.d/nginx start

DNS Prefetching
If you're worried about the latency added by the extra layer of DNS, you can use prefetching to work around this limitation. DNS prefetching is a fancy name for tricking the user into doing a DNS lookup for your hostname before he/she interacts with your application.


If you're wondering whether this prefetching thing actually matters, know that Google uses DNS prefetching in Chrome. Sadly, most Web developers don't have enough leverage over their users to convince them to install custom software.

Firefox supports link prefetching, and you can find it useful if your users install a widget / gadget that's served from a CDN (e.g. Google Gadgets).

You can also be more creative by looking at the bigger picture. For instance, if your users install an application of yours on their mobile phones, those phones will likely do DNS queries using your users' home routers. So, if your mobile application synchronizes with the server using a sync interval that's smaller than the TTL on your DNS entries... you've killed most of the latency.

Motivation
My servers have been hosted in random places. I've had my application server in my dorm room, in my friends' dorm rooms, and in random labs around MIT.

Given that my servers travel so much, I like to keep them light (Mac Mini or Dell Studio Hybrid), and I want to be able to move them without any manual configuration change. This means the servers can be headless, and that my friends can move the servers for me, without needing any training.

Conclusion
Thanks for reading, and I hope you found this post useful. Please leave a comment if you have any suggestion for an easier or better setup.

Saturday, April 26, 2008

Ubuntu 8.04 on Mac Mini

Update: please read my new post if you want to install Ubuntu 8.10 or 9.04.

So, Ubuntu 8.04 came out yesterday. I promptly got my Mini to not boot by trying to install it using the same procedure I used for 7.10. One day later, I figured out a (small) sequence of steps that yields a working dual-boot of Leopard and Ubuntu 8.04. Here's what I did:

  1. Use the Leopard (Desktop or Server) install disc to re-partition the disk to 1 partition, then install Leopard.
  2. Install Software Updates. This needs to be done twice.
  3. Start up Boot Camp, and resize your OSX partition. Quit Boot Camp when it offers to start the Windows installation.
  4. Download and install rEFIt: http://refit.sourceforge.net/
  5. Make rEFIt take over the boot process (it should do that by itself but that doesn't work for me):

    • Open Terminal
    • cd /efi/refit
    • ./enable-always.sh

  6. Optionally, switch the boot default to Linux: open /efi/refit/refit.conf in TextEdit, and uncomment the line saying #legacyfirst (at the very bottom)
  7. Reboot and insert your Ubuntu 8.04 disc. The rEFIT screen should pop up. Don't worry if you've never seen it before. You'll notice the difference.
  8. Start installing Ubuntu. Choose manual partitioning, delete the big FAT32 partition that Boot Camp created (leave the EFI partition alone though), and create the root and swap partitions in the free space.
  9. On the last install screen, click Advanced, and replace (hd0) with (hd0,2). This is necessary so that Grub installs in the right place.
  10. Upon rebooting, go to Partitioning Tool (second icon in the bottom row) in rEFIt. It will offer to update the MBR to reflect the EFI partition table. Accept. Then the Mini will reboot again.
  11. Power off the Mini. Then power it back on. Now you can boot Ubuntu or Leopard.
I know that the last steps look like black magic (pulled out of my behind). I didn't get it to work otherwise. These steps work. Please comment if you find a shorter sequence.

Aside from getting Ubuntu to boot, rEFIt is handy because it rescues you in case something blows up. When I bricked the Mini, I was running on Apple's boot loader. rEFIt was still able to boot OSX, even when I messed up the grub install options.

Enjoy your Mini servers :)