Last year at EU and Asia MozCamps, it was our pleasure to listen and talk to our community members about an area commonly overlooked: IT. We had great workshops, we got to know some awesome people and we listened to your needs.
Clearly, building a stronger Mozilla community needs a stronger IT presence, and we were delighted to take the first steps towards this: Mozilla began offering virtual private servers (VPS) to qualifying communities. Why was that necessary? We found out that most of our communities run their own projects and local websites on shared platforms, old hardware and with very few resources. Nowadays many communities have their own VPS, without sharing resources with anyone.
A new question arises though: how does one manage the VPS? Not everyone is a Linux sysadmin, and the internet has an overwhelming amount of information about “best practices”. Sometimes you can even find contradictory advice. We figured we could share some of the practices we, Mozilla IT, have developed and follow. This post is not intended to be a complete “Linux system administration” guide but merely a look at what you can do to secure and better manage your server without being a full-time sysadmin.
Choosing the “perfect” Linux distribution
You have the VPS, and you have to install an operating system on it. Fedora? Debian? Ubuntu? Slackware? There are hundreds of distributions and since developers and potential systems administrators come from different environments, it can be hard to find a distro everybody is happy with. Your focus should be on a distribution that:
- has easy to maintain updates. You don’t want to spend days compiling and patching. Even if you insist on compiling from source you should still use a package manager.
- uses signed packages
- is updated regularly and provides a mailing list (or equivalent) for security-related updates
Who makes the final decision? The people who will actually perform sysadmin tasks on the server should decide (all opinions count, of course, but let’s make it easier for them). At Mozilla, we mainly use Red Hat on our servers, so why not give CentOS or Fedora a try?
Got root? (securing your system)
Your root password
Change the default root password. A password can also be a passphrase: easy to type in a console, and it doesn’t have to l00k l1k3 th1s. Be sure to remember it or store it in a safe place.
Files and processes
Check for running processes and open ports when you’re done setting things up (nmap, netstat and lsof are your friends). Kill anything that you do not need (and make sure it doesn’t start at the next boot). If in doubt, consult other people.
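For example, a quick audit could look like this (assuming the iproute2 ss tool, which ships with most modern distros; the nmap and lsof lines only apply if those packages are installed):

```shell
# List listening TCP/UDP sockets and the processes that own them
# (run as root so the process column is filled in; "netstat -tulpn"
# is the older equivalent).
ss -tulpn
# Cross-check from the network side if nmap is installed:
#   nmap -sT -p- localhost    # scan every TCP port
# Per-process open sockets, if lsof is installed:
#   lsof -i -P -n
```

Anything listening that you can’t name is something to investigate.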
Check for world writable files and setuid binaries. Chmod 777 is never the solution, especially when running web applications that have a history of vulnerabilities. Consider running Bastille, a sanity-checker for file permissions and common configurations. You can use tools (like samhain, OSSEC, tripwire) to monitor the checksums of system files. Store the checksum db on a secure host.
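A rough sketch of such a permissions sweep (the function name is ours; point it at / on a real server):

```shell
# Audit a tree for world-writable files and setuid/setgid binaries.
# Takes the directory to scan as an argument so you can try it on a
# small tree first; errors from unreadable paths are silenced.
audit_perms() {
  root=$1
  echo "== world-writable files =="
  find "$root" -xdev -type f -perm -0002 2>/dev/null
  echo "== setuid/setgid binaries =="
  find "$root" -xdev -type f \( -perm -4000 -o -perm -2000 \) 2>/dev/null
}
```

Run `audit_perms /` as root and review every line it prints: each world-writable file and each setuid binary should have a reason to exist.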
Do not allow services or scripts/cron/etc. to run as root. It’s not hard. Run them as ‘nobody’ or, better, as a user created for the task (or group of tasks). If root is needed, use sudo with the NOPASSWD option, and allow the user to run *only* the very specific commands the script needs (man sudoers contains detailed examples of how this can be achieved).
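As an illustration (the user and script names are made up), a drop-in sudoers entry granting exactly one command might look like this; always edit it with visudo -f, never directly:

```
# /etc/sudoers.d/backupjob -- let the "backup" user run one specific
# command as root, with no password prompt
backup ALL=(root) NOPASSWD: /usr/local/bin/backup.sh
```

Anything outside that one command still requires a password (or is denied outright).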
Have security in mind with everything you do. Ask yourself: is this secure to the best of my knowledge? Just by using common sense, you will avoid many fundamental design mistakes:
- Who has access? Identify the exposed interfaces (who is allowed to enter data in my login forms, prompts, etc.? Is this restricted or open to anyone?).
- Is my code/script secure? What side effects could it have if someone discovers a bug in it?
- Should I password protect this?
Always be a minimalist. Start with a simple framework and add only necessary parts. Customization != INSTALL ALL THE THINGS. Many distros offer the *base* package set on install. Even the base package set can afford to lose a few services. Bluetooth for example…not on our servers.
This goes for kernel modules too. Kernel modules are most often drivers, and that’s where a lot of the kernel exploits are found. Don’t load modules you don’t need. Newer kernels auto-load modules only when necessary, but it is a good idea to unload any you do not need.
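For instance, check what is loaded with lsmod and then blacklist the leftovers; the module names below are just examples of things a server rarely needs:

```
# /etc/modprobe.d/blacklist-local.conf -- keep unneeded drivers from
# ever loading (check "lsmod" first to see what you actually have)
blacklist bluetooth
blacklist pcspkr
```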
Disable root logins. Disable clear-text passwords. Enable public-key authentication (instruct your users to set a passphrase on their keys and never make their private key public!). Don’t allow remote passwords at all. Enforcing keys comes with the added bonus of easier shell scripting. The sshd_config man page is a good place to start looking into these settings.
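The relevant sshd_config directives boil down to a few lines (restart sshd after changing them, and keep a working session open while you test, in case you lock yourself out):

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
```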
Don’t use SSH agent forwarding unless necessary (don’t leave sessions open with the agent forwarded, don’t enable agent forwarding by default; use -A only when you need it). Anyone who is root on the system you SSH into can trivially use your forwarded agent to access other hosts (even though they can’t steal the key itself, it is usable until you disconnect). Disallow tunneling/port forwarding if not necessary.
Disabling SSH root logins doesn’t mean less power. Using sudo you have more granularity for granting permissions and access to your server.
But…what is sudo? sudo allows users to run programs with the security privileges of another user (normally the superuser, or root). Its name is a concatenation of the su command (which grants the user a shell as another user, normally the superuser) and “do”, or take action.
Sometimes it’s better to have a password-less sudo for a specific group: you don’t want to type your company SSO password in the clear on a compromised host.
Using sudo also allows better auditing: you have a trace of who did what, even when using sudo su - or sudo -s. Kernel auditing can reveal which user issued a command (auditd, for example, can do this for you).
There’s also an advantage in encouraging users to type “sudo” before every command they want to execute as root, instead of working in a root shell and accidentally doing damage.
Keep your system up to date. System vulnerabilities can be fatal. Always better to prevent than to repair.
Keep your applications up to date. Anything that’s not installed via the OS package manager needs manual updating. Web applications are the most vulnerable to attacks because of their public-facing nature. Check regularly for updates to your Drupal/WordPress/blog app. Don’t forget the plugins.
Join the mailing list for software you depend on (Linux kernel, Apache, MySQL, etc) and keep an ear to the ground for security vulnerabilities.
Define package update policies. What do you upgrade and how often? Did a new version break your application? Bite the bullet sooner rather than later.
Disasters do happen. Systems go down, and usually at a bad time. Having a disaster recovery plan saves you a lot of time and rage when the site is down. If possible, have a backup VPS with your downtime message ready. If you can flip DNS to your backup site, your visitors will at least have some information instead of raging at a browser timeout. Don’t forget to set up the glue records for your DR nameserver with your registrar.
Have backups: check whether your hosting provider offers free backups (if it’s a VM, most of them do daily snapshots). If not, you should at least create database dumps and filesystem backups of your most important files. Ideally, save them outside of your VPS. Remember: RAID 5 is not a backup or a disaster recovery plan.
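A minimal sketch of the filesystem half (all paths are examples; in real use you would also dump your database, e.g. with mysqldump, and copy the archives off the VPS with scp):

```shell
# Archive a directory into a dated tarball under a backup directory.
backup_dir() {
  src=$1    # directory to back up, e.g. /var/www
  dest=$2   # where the archives accumulate, e.g. /var/backups
  mkdir -p "$dest"
  stamp=$(date +%F)   # e.g. 2013-05-14
  tar czf "$dest/backup-$stamp.tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
}
```

Drop a call like backup_dir /var/www /var/backups into a nightly cron job, and periodically test that you can actually restore from the archives.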
Backups require a two-pronged approach: protection against accidental file erasure or corruption, and protection against total data loss in a disaster.
Monitoring your server
It’s better for you to be the first to know when your website goes down, instead of finding out from Twitter or other channels. There are services like pingdom.com that offer one free service check. Keep in mind that ping is never a good monitoring tool. Instead, you should use an HTTP check for a 200 status, combined with several keywords found on your main page.
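One way to sketch that check in shell (the function and keyword are our invention; on your monitoring host the status code and page body would come from something like curl -s -o page -w '%{http_code}' http://example.org/):

```shell
# Succeed only if the HTTP status is 200 AND the page body contains
# the expected keyword -- a blank error page returning 200 won't pass.
page_ok() {
  status=$1    # HTTP status code, e.g. from curl -w '%{http_code}'
  body=$2      # file holding the fetched page
  keyword=$3   # a string that should appear on your main page
  [ "$status" = "200" ] && grep -q "$keyword" "$body"
}
```

Wire its exit status to whatever alert channel reaches you fastest.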
Use a friend and trade monitoring. Some webmasters have a monitoring ring comprised of a dozen or so separate sites.
Look into monitoring the performance of your services. Use tools that can monitor and measure page response time for example. This is especially important if you advertise or experience regular traffic spikes.
People will come and go, and your system will have new admins working on it. You should keep in mind a few rules:
- Simple tasks that you do rarely: keep doing them manually (don’t overcomplicate things).
- Hard things that you do rarely: document what you do.
- Easy tasks that you do often: automate. BE LAZY. Automate all the things!
- Hard things that you do often: get more help, and try to reach the point where you can document and automate them.
Ask yourself this question: how will I scale my services if the traffic grows and my current server won’t be able to handle it? Having things automated makes deployments and service expansion easier.
Use a version control system to keep track of your code changes and to make rollbacks easy. SVN/Git are super easy to use and implement. Enforce versioning, and *OMG it’s broken* becomes *No problem, revert*. Document your commits (write real commit messages, no exceptions!).
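The whole workflow fits in a handful of commands (shown here in a scratch directory; on a server you would run this in your web root, with a remote to push to):

```shell
site=$(mktemp -d)            # stand-in for e.g. /var/www/site
cd "$site"
git init -q
git config user.email "admin@example.org"   # example identity
git config user.name  "VPS Admin"
echo "hello" > index.html
git add -A && git commit -qm "Initial import of site files"
echo "oops" > index.html
git commit -aqm "Bad change"                # ...OMG it's broken...
git revert --no-edit HEAD                   # ...no problem, revert
```

After the revert, the working tree is back to the known-good state, and the history records both the mistake and the fix.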
Use an automation software solution. We use multiple tools in Mozilla IT: Puppet, svn/git/hg, pxe, and various shell scripts. A proper puppet configuration ensures your system will stay at a desired state at all times (the SSH configurations we talked about earlier, user management, sudo, system updates etc). Alongside puppet we use version control so we can control and track changes to the infrastructure configurations.
Automation doesn’t have to be elaborate or costly. It can be as simple as shell scripts and cron jobs that keep a system up to date, although we do not recommend this for a large system given modern tools like Puppet. With Puppet, a single admin has great power to control a large environment. Deploying packages across your infrastructure should be easy to accomplish whether you are using a robust system like Puppet or simple scripts (and SSH keys!).
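For example, a single cron entry can keep a CentOS box patched (a sketch; on Debian/Ubuntu look at the unattended-upgrades package instead, and note that older CentOS releases need the yum security plugin for the --security flag):

```
# /etc/cron.d/security-updates -- apply security updates nightly at 03:00
0 3 * * * root yum -y --security update >> /var/log/security-updates.log 2>&1
```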
Use the right tool for the job. We all love Apache, but depending on your needs, nginx or lighttpd might suit you better. Scaling isn’t simply throwing servers at traffic. It’s planning, proper software and development choices, tuning, and the right tools that allow us to react quickly to traffic and usage requirements. Growing a system to meet demand without reinventing the wheel every time you add a rack: that is scaling.
phpmyadmin: a lot of people use and love it, but it’s a constant target for web exploits. Combine that with weak MySQL passwords and it can prove fatal to your databases. Our advice is: don’t use phpmyadmin in the first place. But if you really need to, take the following precautions:
- Set up Apache basic authentication in front of it.
- Use strong MySQL passwords.
- Put it behind SSL. If you can’t get a certificate, use a self-signed one.
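Put together, an Apache configuration sketch might look like this (paths and names are examples; create the password file first with htpasswd -c /etc/httpd/phpmyadmin.htpasswd admin):

```
# Require both SSL and a valid basic-auth login before phpmyadmin
<Directory /usr/share/phpmyadmin>
    SSLRequireSSL
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /etc/httpd/phpmyadmin.htpasswd
    Require valid-user
</Directory>
```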
FTP is evil and must die. Seriously. FTP was specified in an RFC back in 1985. Among other flaws, it sends your username and password in clear text to the server. Pretty neat, huh? Use SFTP instead. SFTP is the SSH File Transfer Protocol: it was designed by the Internet Engineering Task Force (IETF) as an extension of the Secure Shell protocol (SSH) version 2.0 to provide secure file transfer capability. There are SFTP clients for every operating system: on Windows my favorite is WinSCP, and on Linux and Mac you can use scp/sftp directly from the command line.
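Getting started takes one command: generate a key pair, install the public half on the server, and forget FTP ever existed (the host name below is a placeholder; in real life keep the key in ~/.ssh):

```shell
d=$(mktemp -d)   # stand-in for ~/.ssh in this sketch
# Empty passphrase here only so the sketch runs unattended; set a real
# passphrase on your own keys, as advised above.
ssh-keygen -t rsa -b 4096 -f "$d/id_rsa" -N "" -q
# Then, from your workstation:
#   ssh-copy-id -i "$d/id_rsa.pub" user@vps.example.org
#   sftp user@vps.example.org
```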
We hope that applying some of these tips will enhance the security of your server and give you a better night’s sleep. We are looking forward to your comments here, or on our community IT mailing list. You are also more than welcome in our public #it channel on irc.mozilla.org!