For my latest project, Auto Swatch, I needed a process management system to handle the availability of critical services, like nginx, memcached, PostgreSQL, and the application's Gunicorn processes. We use Supervisor at Dwaiter, so this was my natural choice.
"Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems."
Steve Losh said that the trick to getting anything working with Supervisor is to make sure it doesn't daemonize. But, alas, there are other tricks.
This blog post concerns getting PostgreSQL working with Supervisor.
First, install PostgreSQL normally. On Ubuntu 10.10, the default way of
running PostgreSQL is via Unix socket. This socket lives at
/var/run/postgresql. Now, believe it or not, this creates a problem
when trying to run PostgreSQL in foreground mode, as required by
Supervisor. More on that later.
To understand the problem, you have to understand how PostgreSQL runs
normally. Typically, you run PostgreSQL via init script, like
/etc/init.d/postgresql start. This script runs a bunch of code, some
of which actually creates the dir /var/run/postgresql and sets its
ownership and permissions so PostgreSQL can open its socket there.
Problem: when you run PostgreSQL in foreground mode with Supervisor, it
tries to open the socket in
/var/run/postgresql. This will work if
you've previously run PostgreSQL with the init script, because the init
script created and set the permissions for that dir. However, after
reboot, that dir will be gone, and PostgreSQL won't be able to open the
socket there, resulting in failure. So, we either need to move the
socket (meaning a config change with your app), or make PostgreSQL run
on a TCP port instead. TCP port it is.
To get things running with Supervisor:
- Stop PostgreSQL:
sudo /etc/init.d/postgresql stop
- Move the init script somewhere safe (so PostgreSQL won't start up in daemon mode on boot):
sudo mv /etc/init.d/postgresql ~/somewhere-safe
- Edit the PostgreSQL config:
sudo vim /etc/postgresql/8.4/main/postgresql.conf
- Line 49: comment out external_pid_file (not needed for TCP mode)
- Line 63: comment out port (unless you want to change the default port of 5432)
- Line 68: change
With this config, PostgreSQL will default to using TCP port 5432 instead of a Unix socket.
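Once it's running, a quick way to confirm the TCP side is working (user and database are whatever your setup uses):
psql -h 127.0.0.1 -p 5432 -U postgres -c 'SELECT 1;'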
Here's what my Supervisor conf looks like for PostgreSQL:
[program:postgresql]
user=postgres
command=/usr/lib/postgresql/8.4/bin/postmaster -D "/var/lib/postgresql/8.4/main"
process_name=%(program_name)s
stopsignal=INT
autostart=true
autorestart=true
redirect_stderr=true
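With that saved under Supervisor's config directory (typically /etc/supervisor/conf.d/ on Ubuntu), a reasonably recent supervisorctl can pick it up and start it:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status postgresql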
Now, you might also get an error about PostgreSQL not being able to read
the config file at /var/lib/postgresql/8.4/main/postgresql.conf.
I had to symlink:
sudo ln -s /etc/postgresql/8.4/main/postgresql.conf /var/lib/postgresql/8.4/main/postgresql.conf
Your mileage may vary. Good luck.
“Using PostgreSQL with Supervisor on Ubuntu 10.10”
Yeah, the title is pretty long. I really couldn't come up with anything better. Anyways, after the recent switch to Media Temple for all of my sites as a result of this debacle, I needed some peace of mind with regards to file changes and database backups for each of my sites. I also thought it'd be cool if I didn't have to do anything and would get an email every night with the 'status' of the server and the script's results.
So, I wrote a bash script. After writing it, I realized I probably could have / should have written the script in either Python or Perl, but learning some shell scripting was neat, too.
The script does four specific things to fit my four specific needs (I'm greedy):
- Run 'svn status' on multiple checked-out projects from my SVN repository to see if there have been any local modifications to the file structure (such as a user uploading a file).
- Run 'mysqldump' on several databases my sites use.
- Transfer an archive of all of those MySQL dumps to an external server, for backup purposes.
- Email me the statuses of each of these processes.
So how does it work? Well, it's simple:
First, define a temp file we'll use for the email message we'll send out at the end of the script, and set a variable to the current date and time:
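Something like this (the variable names here are mine; the original listing didn't survive):
#!/bin/bash
# Temp file that collects the report we'll email at the end.
REPORT=/tmp/backup-report.txt
> "$REPORT"
# Timestamp used in the email subject and the archive name.
NOW=$(date +%Y-%m-%d_%H%M)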
Next, set the 'IFS' to a new-line character. We want to separate our list of SVN directories and MySQL connection strings in a line-delimited format.
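In bash:
# Split our lists on newlines only, not on spaces.
IFS=$'\n'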
Define the directories you'd like to run 'svn status' on:
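For example (paths are placeholders):
# Working copies to check for local modifications, one per line.
SVN_DIRS="/var/www/site1.com/httpdocs
/var/www/site2.com/httpdocs"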
Next, define the MySQL databases you'd like to dump:
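I'm assuming a simple name:user:password format here; the original format wasn't preserved:
# One database per line: name:user:password.
DATABASES="site1_db:site1_user:secret1
site2_db:site2_user:secret2"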
Now, define the directory where you want to put the database dumps, and eventually the archive of all of them:
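Along these lines (again, paths are placeholders):
# Where the individual dumps land, and the combined archive we'll ship off.
BACKUP_DIR=/var/backups/mysql
ARCHIVE=/tmp/db-backup-$NOW.tar.gz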
After we put each of the individual SQL dumps into an archive, we'll want to transfer the file to an external server for backup purposes. We'll use FTP, but you can use SCP if you'd like. For FTP, define your host, user, pass and directory:
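For example:
# FTP credentials for the off-site backup host (placeholders, obviously).
FTP_HOST=backup.example.com
FTP_USER=backups
FTP_PASS=secret
FTP_DIR=/nightly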
After the script has run, it will send off an email with the report of its successes or failures. Set your email preferences at the bottom of the script:
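Something like:
# Where the nightly report goes.
MAIL_TO=you@example.com
MAIL_SUBJECT="Nightly backup report: $NOW"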
That's about it for setting up the script variables. The rest of the script essentially does what it's supposed to do, and is pretty self-explanatory.
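The working half of the script wasn't preserved here; under the assumptions above it would look roughly like this (a sketch, not the author's script):
# 1. Report local modifications in each working copy.
for DIR in $SVN_DIRS; do
    echo "== svn status: $DIR ==" >> "$REPORT"
    svn status "$DIR" >> "$REPORT" 2>&1
done

# 2. Dump each database, noting successes and failures.
mkdir -p "$BACKUP_DIR"
for ENTRY in $DATABASES; do
    DB=$(echo "$ENTRY" | cut -d: -f1)
    DB_USER=$(echo "$ENTRY" | cut -d: -f2)
    DB_PASS=$(echo "$ENTRY" | cut -d: -f3)
    if mysqldump -u "$DB_USER" -p"$DB_PASS" "$DB" > "$BACKUP_DIR/$DB.sql"; then
        echo "dumped $DB" >> "$REPORT"
    else
        echo "FAILED to dump $DB" >> "$REPORT"
    fi
done

# 3. Archive the dumps and FTP the archive off-site.
tar -czf "$ARCHIVE" -C "$BACKUP_DIR" .
ftp -n "$FTP_HOST" >> "$REPORT" 2>&1 <<EOF
user $FTP_USER $FTP_PASS
binary
cd $FTP_DIR
put $ARCHIVE $(basename $ARCHIVE)
bye
EOF

# 4. Mail the report.
mail -s "$MAIL_SUBJECT" "$MAIL_TO" < "$REPORT"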
As always, feel free to leave any questions or suggestions for improvement of the script.
“A quick shell script for backing up databases, FTPing them to a remote server, and notifying me of any file changes”
There has been a boatload of discussion amongst the Drupal community regarding best practices for managing development, staging and production environments with a Drupal codebase. The reason this is usually a sore subject for many Drupalers lies in Drupal's heavily database-dependent site configuration and management. Thus, it becomes more difficult to manage Drupal sites across different development environments with the tools typically used for this.
Software to help us
Many developers are used to managing software codebases with either CVS or Subversion (both are revision control systems). These systems make it easy to manage file-based software releases, rollbacks, development branches, etc. However, because of Drupal's database-usage strategy, revision control alone is not as beneficial for Drupal codebases as it is for other software.
So what exactly are the options we have for managing our sites? Well, I'm going to run through the process by which we are currently managing a live site between development and production (sans-staging). I've also linked to many other articles and theories on this topic at the end of this post.
A few months ago, I wrote a blog post on Painless Drupal revision control with CVS and Subversion on a shared host. That post is a good read for those interested in simply getting up and running with a Drupal codebase utilizing CVS and Subversion for local revision control, as well as easy upgrades from Drupal.org's CVS. While that post focuses on simply getting set up, this post will be geared more towards the issues we currently face with that setup, the proposed workarounds, and the strategies we personally implement.
A sample scenario
I'll start off with the site that we're actively developing and have already launched. The site is currently sneaking by under the radar, and we're going to keep it that way for a while, so we'll refer to it as 'Project X'.
Project X began life as a simple CVS checkout from Drupal.org onto my local machine. At that same time, I also ran CVS checkouts of all the modules that I knew I would need for the project. When I had that fairly set, I imported the entire project into our corporate Subversion repository. I then deleted the codebase from my local machine, and checked out a working copy of the project back onto my machine. The project is being developed by myself and one other developer, so he checked out a copy to his local machine as well.
I went about installing Drupal as normal, knowing that I'd be storing development connection settings in our /sites/default/settings.php. This is so that, when we release the software, we can define a more specific settings.php at /sites/projectx.com/settings.php. With that setup, we can retain the same codebase for both development and production environments. Drupal will look for 'projectx.com' on both servers (dev and prod), but since the development 'servers' are simply our local machines, it will fall back to /sites/default. Within our /sites/default/settings.php, we pointed the database to a MySQL server we run in-house that we can both connect to.
At this point, it should be noted that the codebase we both have checked out is from 'trunk'. We always develop on trunk. That is, of course, until we have a reason to branch off into separate branches. This is a smaller project, however, so we simply build on trunk for now.
Drupal configuration and customization
I go about building the theme within my trunk checkout, committing changes, adding files, etc. Our other developer, we'll call him 'Pete', is hacking away at a new module we're building to take care of some special functionality. He's committing his changes, too. Every once in a while, we'll tell each other to run updates to grab the latest code from trunk. This is especially important when adding new modules. If you need to add a new module into trunk, download (or CVS checkout) the module into your codebase, then add and commit to trunk. Before you enable the module, tell Pete to run an update on his codebase (he'll probably have no clue what you're talking about). We don't want to go about enabling a module, resulting in the database change, and have Mr. Pete access the site with the now-enabled module in the database, but no module files to support it. In fact, I'm not really sure what would happen, perhaps a black hole, probably nothing. Either way, I'll leave it to someone else to find out.
That's pretty much it for developing pre-production. We make our changes, have our fun, build some stuff, etc. The fun really starts when we want to launch the site on our production server.
Before you release your first version, you'll want to set up your settings.php for the production site. Create the directory /sites/projectx.com and copy settings.php from /sites/default into it. Modify the projectx.com settings.php, specifically the $db_url (line 93 for Drupal 6). Set the correct DB connection here to your production database. That'll be it for the settings.php file.
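For Drupal 6 that line looks something like this (credentials and database name are placeholders):
$db_url = 'mysqli://produser:prodpass@localhost/projectx';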
Now you'll need to dump and import the development database into the production database that you've setup. Since this is the first release, you don't need to worry about overwriting anything.
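Something along these lines, with hosts and names swapped for your own:
mysqldump -u devuser -p projectx_dev > projectx.sql
mysql -h prod-db.example.com -u produser -p projectx_prod < projectx.sql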
Tagging our first release
Once we've finished developing on our local machines, have duplicated the development database to the production database, and have finished our final commits from both machines to the repo, we're ready to checkout a copy onto the production server. However, before we do that, we should keep in mind our future development patterns. We will surely want to be able to continue developing on trunk while not having to worry about our production codebase. For that, we use 'tags'. Each time we have a software release we feel is ready for production, we release a new version, and switch the production version to use the latest release.
The quickest way to do this is to SSH to the server that hosts your repositories. The following command (svn copy) will copy your current trunk build to your very first tag:
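The snippet didn't survive extraction here, but it would look roughly like this (the repository path is a placeholder; the REL-1-0-0 tag name matches the ones used later in this post):
svn copy file:///var/svn/projectx/trunk \
    file:///var/svn/projectx/tags/REL-1-0-0 \
    -m "Tagging the 1.0.0 release."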
Once that's set, you're ready to check out the tagged release to your production server. Head over to the server, and check out the 1.0.0 release:
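For example (repository URL and docroot are placeholders):
svn checkout http://svn.example.com/projectx/tags/REL-1-0-0 /var/www/projectx.com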
If you've set up settings.php correctly, the site should be good to go. That's it for the initial launch. The site's done, right? Wrong.
Now that the site is live and accumulating data, we need to change our development habits. The development database is no longer the 'master', as there have been changes to the production database that we don't want to overwrite with development data. While we haven't devised a brilliant solution for merging development and production data, we've realized that we don't really need to.
When we're ready to begin a new 'development cycle', we clone the production database, and completely dump and rebuild the development database from it. I wrote a stupid-quick production-to-development bash script to handle this for us. Much easier than doing it manually, anyways. This is by no means a cutting-edge development process, but it seems the most logical for us. This is a fairly small project that doesn't really warrant some of the more in-depth development environments that I've linked to at the end of this article.
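That script boils down to something like this (hosts, names and credentials are placeholders):
# Dump production remotely, then rebuild the development database from it.
ssh produser@prod.example.com "mysqldump -u dbuser -pPASS projectx" > /tmp/projectx.sql
mysqladmin -f drop projectx_dev
mysqladmin create projectx_dev
mysql projectx_dev < /tmp/projectx.sql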
So now that we've cloned the production DB to the development DB, we've got all of the content available to us for testing with. The majority of our development is done in two areas:
- Theme development
- Custom module development
Theme development is heavily (if not entirely) file-based, so this development strategy caters well to that. Custom module development is heavily file-based too, but can also be heavily database-dependent. We find that, even without a solid development-to-production database migration process, manually setting up the module in production really isn't that much work. When I first delved into this problem, I wanted a solid, complete and foolproof solution to migrating development database changes to production. Unfortunately, that just isn't available, and once I came to terms with that, I realized I'm not all that upset about it.
If you develop often, and release often, you'll probably agree with me. Surely, if you're building 4 new themes, 20 new modules, installing 6 contrib modules, and expecting to not have to do any work when migrating to production, you're in for a treat. If you're doing that, however, shouldn't you have rolled that into your initial release?
Ah, I digress. So that's our general strategy. So what happens when we're ready to release our new-fangled changes on development?
When we release upgrades to the software, we simply create a new tag. When you're ready to tag the current trunk build as a new release, simply:
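It's the same svn copy as before, with the new tag name:
svn copy file:///var/svn/projectx/trunk \
    file:///var/svn/projectx/tags/REL-1-0-1 \
    -m "Tagging the 1.0.1 release."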
Once you've done that, you're ready to upgrade your production checkout to the latest release. But, how?
We use the 'svn switch' method. Essentially, we're switching the current working copy to a new Subversion URL, and Subversion takes care of the changes between those URLs. When ready to release 1.0.1, we head to the production server working copy and:
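From the root of the production working copy (URL is a placeholder):
svn switch http://svn.example.com/projectx/tags/REL-1-0-1 .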
Subversion makes the appropriate changes to the working copy to reflect changes made from REL-1-0-0 and REL-1-0-1. Win.
Managing production filesystem changes
So now we're done, right? Not really. What happens if there are changes to the filesystem on your production server, such as user file uploads, pictures, etc? Let's say Jon uploads a picture of a drunk cat for his profile picture. We want those file changes to be stored in our repository, as well. You might not want to, and if that's the case, you can skip this part. If you do, that's where 'svn merge' comes in handy. The merge command will essentially 'merge' differences between two sources into a working copy.
Before you can merge the changes, you need to commit the appropriate changes you want merged to your tagged release. From the production checkout, run 'svn add' on the files that were added. Then, commit your changes. Be careful to not commit file modifications that you did not specifically want merged.
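For example (the file path is hypothetical):
svn add files/pictures/drunk-cat.jpg
svn commit -m "Adding user uploads from production."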
You'll need to run the merge from a trunk checkout, since you want to merge the changes from a tagged release into trunk. From a working copy of trunk:
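The original command was lost here; one way to do it is to merge the difference between trunk and the latest tag into the trunk working copy (URLs are placeholders):
svn merge --dry-run http://svn.example.com/projectx/trunk \
    http://svn.example.com/projectx/tags/REL-1-0-1 .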
You'll note the use of '--dry-run'. Run the command once as a 'dry run' to see the changes before you actually do them. This is very useful. When you're satisfied with the file changes, remove the '--dry-run' and re-run.
With an 'svn status', you'll see the local file modifications to your trunk checkout. If you're still happy, commit the changes to trunk, and you're done.
I always do the merge after (and only directly after) I upgrade the production copy to the latest tagged release. That way, the changes from tag to trunk only include the file changes or additions that occurred on production, and not file changes on trunk.
So that's about it for our entire development lifecycle.
The above solution will probably only suffice for small-scale Drupal productions. It may or may not be what you're looking for. Fortunately, there are many brilliant minds in the Drupal community, and quite a few alternative 'development to staging to production lifecycle' solutions out there.
“My thoughts on small-scale Drupal development to production environments with CVS and Subversion”
Twenty-four hours ago, I was deploying applications from development to production environments in about 5-6 steps. Today, tomorrow and every day in the future, I'm doing it in 1 step, with Fabric.
...a simple pythonic remote deployment tool. It is designed to upload files to, and run shell commands on, a number
of servers in parallel or serially. These commands are grouped in tasks (regular python functions) and specified in a 'fabfile.' It is a bit like a dumbed down Capistrano, except it's in Python, doesn't expect you to be deploying Rails applications, and the 'put' command works. Unlike Capistrano, Fabric wants to stay small, light, easy to change and not bound to any specific framework.
It is awesome. But don't let me tell you, let me show you.
We have lots of projects floating all over the place, all neat and tidy in our Subversion repository. When I'm ready to start building a new feature or fix a bug, I like to have a copy of the production database for that application on my local machine for development. Most of the time, these applications are Drupal or Django based. When I'm ready to start building, I run a little per-project shell script, which essentially does this (sketched after the list):
- log into the remote production server
- take a snapshot of the production database
- save the dumpfile
- log out of the server
- transfer the file from the production server to my workstation
- remove the existing development database from my local MySQL installation
- create a new database to contain the production database
- import the production database into the new database
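A sketch of such a script (hosts, names and credentials are placeholders, not the originals):
# Dump production remotely and pull the file down.
ssh deploy@prod.example.com "mysqldump -u dbuser -pPASS projectx > /tmp/projectx.sql"
scp deploy@prod.example.com:/tmp/projectx.sql /tmp/projectx.sql
# Rebuild the local development database from the dump.
mysqladmin -f drop projectx_dev
mysqladmin create projectx_dev
mysql projectx_dev < /tmp/projectx.sql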
It's nice that the script takes care of all of that for me — but, you see, there are 10-20 of those files (one for each project). This becomes an enormous headache when we change servers or add a new feature to the script. We needed a better solution.
Enter Fabric. I got it up and running in about 5 minutes, and began migrating that shell script into one 'fabfile'. However, since I needed exactly the same functions for every project, we needed a way to share them across all projects. Fabric makes this easy.
In each project root, there is a file named 'fabfile.py', with the following contents:
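The original listing was lost in extraction; a rough reconstruction, with every name below a placeholder, would be something like:
# fabfile.py -- per-project settings; the shared commands live in fabric_global.py.
from fabric_global import *

# Server, SSH user and MySQL credentials for this project (all placeholders).
config.fab_hosts = ['projectx.example.com']
config.fab_user = 'deploy'
config.mysql_user = 'projectx'
config.mysql_pass = 'secret'
config.mysql_db = 'projectx'
config.remote_root = '/var/www/projectx.com'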
Important note: The reason I use config.fab_hosts instead of 'set(fab_hosts = ['...'])' is because I've built my Fabric installation from the git master branch. If you've downloaded the 0.0.9 package, use:
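That is, with the host as a placeholder:
set(fab_hosts = ['projectx.example.com'])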
The fabfile sets some basic variables for the server we're connecting to, the user we connect with, and MySQL credentials. The very first thing the file does is import a file named 'fabric_global.py', which contains the following:
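That listing didn't survive either. Here's a rough sketch of the two commands it defined; I'm guessing at the 0.x-era operations (run() executes remotely, local() on the workstation) and at every name, so treat this as illustrative only:
# fabric_global.py -- shared Fabric commands; a hypothetical reconstruction.

def update_db():
    "Rebuild the local development database from a fresh production dump."
    # Dump production on the remote server, then pull the file down.
    run('mysqldump -u %s -p%s %s > /tmp/%s.sql'
        % (config.mysql_user, config.mysql_pass, config.mysql_db, config.mysql_db))
    local('scp %s@%s:/tmp/%s.sql /tmp/'
        % (config.fab_user, config.fab_hosts[0], config.mysql_db))
    # Drop, recreate and reload the development copy.
    local('mysqladmin -f drop %s && mysqladmin create %s'
        % (config.mysql_db, config.mysql_db))
    local('mysql %s < /tmp/%s.sql' % (config.mysql_db, config.mysql_db))

def deploy():
    "Commit local changes, then update the checkout on the production server."
    local('svn commit -m "Deploying latest changes."')
    run('svn update %s' % config.remote_root)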
At first glance, it may look a little confusing, but this code is something like 10% of what it would be if it were duplicated for each project (and with all the additional commands).
The 'fabric_global.py' file defines two functions that we can run on a codebase. One is for updating the development database with the production database, and the other is for simply committing file changes and updating them on the server.
Now, when I need to grab the production database at the beginning of a project, I simply do this:
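With a command named as in the sketch above:
fab update_db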
Well crap, that's a lot easier.
When it's time to deploy my code changes, I simply do:
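Again using the hypothetical command name from the sketch:
fab deploy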
If I had already committed my changes and simply wanted to update them on the production server, I would just do:
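With the two-command sketch above that's still just deploy; with nothing left to commit, the svn commit is a no-op and only the server-side update runs:
fab deploy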
Quite amazing, if you ask me.
Also, Fabric can do a whole lot more than what I just demonstrated, so check out the docs.
I'd like to thank the Fabric team for probably preserving a few years of my lifespan. Also, it should be noted that the current version of Fabric is 0.0.9, which should give you an idea of the amount of awesomeness yet to come in newer releases.