The idea couldn't be simpler. Choose an interesting landscape, compose an image of it, then take a photograph every day for an entire year, capturing the weather, the changing seasons, the ephemeral comings and goings of people, animals, and vehicles, and whatever secular changes might occur as the Earth traced its orbit around the Sun. I have been fascinated by time-lapse photography ever since I saw some of the Disney nature films as a child. More recently, I've assembled time-lapse animations of solar and lunar eclipses, and the transit of Venus in June 2004. I have also experimented with automated generation of time-lapse movies of clouds by remotely controlling a digital camera connected to a computer's USB port with the Picture Transfer Protocol (PTP). The experience gained and the tools developed in these projects made me confident enough to undertake what amounted to a year-long photo shoot, something I've hankered to try since I first saw Dennis di Cicco's famous analemma photo, a print of which has hung on my wall for more than twenty years.
What motivated me to finally undertake the project was the knowledge that the large open field to the east of Fourmilab, which has been a hay-mow since time immemorial, was soon to be built up with three houses. This would wreck the open view in that direction (one of the few which isn't blocked by trees), and render a photo shoot done after all the building was complete distinctly uninteresting. However, a time-lapse movie which showed the construction in progress would have its own intrinsic interest, documenting how Swiss residential construction was done at the start of the twenty-first century and illustrating the relentless advance of urbanisation, in this case gobbling up land which had been devoted to agriculture since the days of the Roman Empire.
You can view a medium resolution version of the movie, hosted on YouTube, with the player below.
If you aren't able to use the above player, or wish to download the movie to view offline, in higher definition, or in another format, you can do so from the following links.
These are big files, and depending on the speed of your Internet connection, load on the hosting site, and congestion along the way, may take a long time to download. It is, therefore, an excellent idea to save the file to your local hard drive (with most browsers, by right-clicking the link and then choosing the appropriate menu item), then play the local copy. That way, if you wish to play the movie several times, you needn't endure (nor burden the Internet with) a lengthy download every time. The Windows Media Player .avi edition is encoded with the Microsoft MPEG-4 version 2 video CODEC and MP3 audio. It should play on almost any Windows system with Windows Media Player, and on Linux/Unix systems with applications such as MPlayer and Totem. The QuickTime movies were created with Apple QuickTime 7 Pro on Windows XP using the H.264 video CODEC and AAC 128 Kb audio, and may be played on Linux with MPlayer or Totem.
There is a tremendous amount of detail in this movie: little things you may miss as the frames advance one day per second. Many film buffs long dreamt of the opportunity of scrutinising cinema classics on a Moviola, and rejoiced when laserdiscs and later digital video easily permitted frame by frame examination. Present-day computer media players are surprisingly primitive in this regard; it seems ridiculously difficult to single-frame step and jog/shuttle movies with them. (Of course, I'm not using fancy commercial video production software, but then neither are the vast majority of the audience.)
Those interested in perusing the daily details in this film are invited to visit:
where every frame of the movie is accessible, in either sequential or random order, as high-quality JPEG images more detailed than video still frames, with additional information about the circumstances of each picture and the occasional “Did you notice?” item pointed out.
The balance of this document describes in detail the production of the movie, from taking the individual daily photographs through the final assembly of the compressed audio/video file. With the exception of the Apple QuickTime Pro package (used only to encode the QuickTime editions of the movie), no proprietary commercial software products were used in this project: only freely available open source packages, running under the GNU/Linux operating system. Several custom tools were developed to automate the production process; they are described below and may be downloaded from a link at the end of this section.
For the photographic vantage point, there was really only one choice: the same window in the Fourmilab conference room whence I photographed the first part of the transit of Venus; it is the only east-facing window whose view isn't obstructed by a tree or some other obstacle.
For the photography, I decided to use my trusty Nikon D70 digital SLR with its NIKKOR AF 28–80 mm zoom lens at the minimum focal length of 28 mm, equivalent to a 42 mm (slightly wider than normal) lens on a 35 mm camera. This nicely framed the image, avoided the distortion of wider lenses, and kept some confusing obstacles on either side out of the picture. All images were shot with fixed focus at infinity, since nothing in the frame is close enough to be out of focus with an infinity setting even at the maximum aperture. All pictures were taken with fully automatic aperture and shutter speed; since almost all of the pictures were taken in mid-afternoon when the Sun would be in the sky behind the camera, typical exposures were 1/400 second at f/10, with larger aperture and longer exposure time when cloud cover and/or fog reduced the available light.
The camera was mounted on a Gitzo G1228/G1270M tripod, with the legs and centre post at maximum extension and the camera mounting leveled with a spirit level. To preserve image registration from day to day, I aligned the camera on the tripod so that the desired framing was obtained after pushing two of the tripod legs up against a steel radiator in front of the window. Objects in the frame are all sufficiently distant that the perspective shift from small differences in side-to-side placement is imperceptible.
This photographic project did not require me to dedicate either the camera or the tripod for the entire year! When I needed to use the camera for something else, I simply unscrewed it from the tripod, changed the memory card, did whatever I wanted with it, and then when I was done, I reattached the camera to the tripod, replaced the original memory card, and then restored the left-to-right alignment of the camera on the mounting plate by repeatedly “shooting in”—taking images, comparing them to the last one I captured before removing and replacing the camera, and twiddling the camera alignment until the “before” and “after” images lined up properly on the camera's LCD display (i.e., no apparent jump when toggling back and forth). This sounds tedious, but in fact it generally only took around five or ten trial shots and a minute or so to get it right. As it happens, I never did need to use the tripod for another project, but if I had, it wouldn't have been difficult to realign it, since all of the settings were the easily-reproduced case of maximum extension.
All pictures were shot at the camera's maximum 3008×2000 pixel resolution and stored on a 2 GB CompactFlash memory card with JPEG compression and image data recorded in EXIF format. An entire year's images do not even half fill the memory card. In the interest of sleeping soundly, every couple of weeks I copied new images from the memory card to a computer whose file system is backed up nightly by Bacula, but since space wasn't a problem, I also left the complete set of images on the memory card. You can't be too careful.
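A quick back-of-envelope check bears this out. The average JPEG size below is an assumption for illustration (the D70's fine-quality JPEGs at full resolution run a few megabytes each), not a figure from the project:

```python
# Rough storage estimate: does a year of daily full-resolution JPEGs
# fit in half of a 2 GB CompactFlash card?
AVG_JPEG_MB = 2.5      # assumed mean size of a 3008x2000 JPEG (illustrative)
DAYS = 366             # a full year, leap-year worst case
CARD_MB = 2 * 1024     # nominal 2 GB card

total_mb = AVG_JPEG_MB * DAYS
print(f"{total_mb:.0f} MB of {CARD_MB} MB used "
      f"({100 * total_mb / CARD_MB:.0f}% of the card)")
```

Under that assumption the year's images come to roughly 900 MB, comfortably under half the card's capacity.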
Every day, usually in the mid-afternoon, as that's when the Sun is behind the camera position and high enough in the sky to minimise shadows (on those rare occasions it isn't cloudy here), I simply opened the window, positioned the camera with the two front tripod legs touching the radiator, verified that the lens was at minimum zoom and infinity focus, and made the exposure. When cloud cover was extremely dense or thick ground fog was present, the little flash on the top of the viewfinder would pop up idiotically, trying to flash-fill the great outdoors. On such occasions I would push it back down, disable flash, and proceed with the shot.
After taking each daily photo, I would check it on the camera's LCD viewfinder, flipping back and forth between it and the previous day's exposure. This is an excellent way to discover if you've accidentally bumped the lens and changed the zoom or focus settings, mispositioned the tripod, or shifted the camera azimuth, resulting in misalignment. In these cases, I would delete the most recent image from the memory card, remedy the problem, and re-shoot until I was happy with the registration.
The alignment procedures described above and the daily check that each new image is registered with the previous one guarantee that the images are more or less aligned; there are no gross differences in the framing among all the images. However, perfect alignment is simply impossible: there will always be slight differences due to flexure in the tripod, how the tripod legs sit upon the carpet, and innumerable other factors. Further, even if the images were in perfect alignment when viewed on the camera's LCD monitor, that doesn't mean they'll line up at full resolution or even at the reduced scale used in the final movie. If you simply stitch together the raw daily images, there's a disturbing jitter, both up-and-down and side-to-side, which makes the movie look like it was shot whilst operating a jackhammer.
When I undertook the project, one of my interests, springing from an entirely different venture, was extraction of three-dimensional information from successive images taken from a moving vehicle: “motion stereo”. I had some ideas about clever ways to do this, and I figured that registering slightly misaligned images of a slowly changing scene would be an easier problem on which to test them. The basic concept was to pick one image as a reference, then for each of the other images, apply an edge detection filter to both it and the reference, subtract the two, compute an energy metric of the difference, and then iterate over lateral and vertical shifts (and possibly scale factors and rotations, although I didn't think this would be necessary in this case) so as to minimise the energy. This would probably be an expensive process computationally, especially since I intended to kludge it together with Netpbm command line tools glued together with a Perl script, but there was no real-time constraint in processing these images and, hey, if it came to that, I have no aversion to multi-month computations!
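To make the idea concrete, here is a toy Python sketch of that scheme, working on tiny grey-level images represented as lists of lists. The actual attempt used Netpbm tools driven from Perl; the crude gradient filter and brute-force shift search below are illustrative stand-ins, not the code that was run:

```python
# Edge-filter two frames, then search over (dx, dy) shifts for the one
# minimising the energy of the difference between the edge images.

def edges(img):
    """Crude gradient-magnitude edge filter on a 2-D list of grey levels."""
    h, w = len(img), len(img[0])
    return [[abs(img[y][x] - img[y][x - 1]) + abs(img[y][x] - img[y - 1][x])
             for x in range(1, w)] for y in range(1, h)]

def energy(a, b, dx, dy):
    """Mean squared difference of b shifted by (dx, dy) against a."""
    h, w = len(a), len(a[0])
    e, n = 0, 0
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                e += (a[y][x] - b[sy][sx]) ** 2
                n += 1
    return e / n if n else float("inf")

def register(ref, img, search=3):
    """Return the (dx, dy) shift of img that best matches ref."""
    ea, eb = edges(ref), edges(img)
    return min(((dx, dy) for dx in range(-search, search + 1)
                         for dy in range(-search, search + 1)),
               key=lambda s: energy(ea, eb, *s))
```

Even this toy version hints at the cost: the search is quadratic in the shift range, with a full image comparison at each candidate.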
Well, like many clever ideas (or at least most of mine), this worked perfectly for all the easy cases and fell flat on its face for the difficult ones: frames with obscuration of distant objects by fog, lots of spurious edges generated by falling snow, phony edges created by shadows in late-afternoon clear sky images or receding patches of snow, etc. After trying various stratagems to ameliorate these problems, I implemented a scheme which permitted me to supply “hints” for images the automatic process stumbled on, but the fussier I became about perfect alignment, the more images I had to hint. Anyway, this is where things stood at the start of January 2006 when I found myself in an airport departure lounge waiting for a flight which was delayed three hours from its scheduled time. There wasn't Internet access there, but I had my trusty development machine with all the images and tools on it, and thinking things over I concluded that it would actually be less work to simply measure registration points in all of the images than to carefully examine the results of automatic registration and provide hints where it goofed. With lots of time and little else to do, I started measuring images and noting the pixel locations of two objects which were easily identified in almost all of the images at the top left and top right of the frame: a tile at the front peak of a house at the left and the rightmost top of a conifer tree at the right. Having two points would permit me to correct not just horizontal and vertical registration, but also rotation and scale should that prove necessary (it didn't, but measuring a second point took little additional time and also permitted sanity checks to find typos in the co-ordinates I entered). The result of this process was a file which begins like this:
dsc_0013 99 547 2076 719
dsc_0014 79 561 2055 732
dsc_0015 86 559 2050 730
The first field is the name of the image copied from the camera's memory card, less the “.jpg” extension, and the two pairs of numbers give the column and row pixel co-ordinates of the two reference points in the image. I was rather sloppy in the visual alignment of these early images; the bulk of the images are consistent within about five pixels from frame to frame, including the one or two pixel judgement call in deciding where to mark the point in the zoomed-in image. The images were loaded, zoomed, and measured using The GIMP.
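The bookkeeping this file drives is simple: take one frame as the reference and express every other image's correction as the mean displacement of its two measured points from the reference's. A sketch of that computation in Python, using the sample records above (the actual work was done by the Perl program regmanual.pl):

```python
# Parse registration records and compute each frame's (dx, dy) shift
# relative to a chosen reference frame.  The sample data are the first
# three records of the registration file shown in the text.

RECORDS = """\
dsc_0013 99 547 2076 719
dsc_0014 79 561 2055 732
dsc_0015 86 559 2050 730"""

def parse(text):
    frames = {}
    for line in text.splitlines():
        name, *coords = line.split()
        frames[name] = tuple(map(int, coords))   # (x1, y1, x2, y2)
    return frames

def offsets(frames, ref):
    rx1, ry1, rx2, ry2 = frames[ref]
    return {name: (((rx1 - x1) + (rx2 - x2)) / 2,    # horizontal shift
                   ((ry1 - y1) + (ry2 - y2)) / 2)    # vertical shift
            for name, (x1, y1, x2, y2) in frames.items()}

shifts = offsets(parse(RECORDS), "dsc_0013")
```

Averaging the two points' displacements also smooths out the pixel-or-two measurement noise mentioned above; if rotation or scale correction had proven necessary, the same two points would have supplied it.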
Once all of the images had been measured, a simple Perl program, regmanual.pl, salvaged from the wreckage of the automatic registration scheme, was used to align the images, crop them to a consistent size, then scale them to the chosen movie frame size of 800×549 pixels. The registration program extracts the date and time when the image was taken from the EXIF information saved by the camera and uses Netpbm utilities to insert the image time and date in the upper left of the frame. To avoid generation losses due to compression, the individual registered frames are saved as lossless full-colour PNG files.
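The cropping step follows from the shifts: once every frame is translated into the reference frame's coordinate system, only the region covered by all of the shifted frames can be kept. The exact rule regmanual.pl uses isn't spelled out above; a natural one, sketched here as an assumption, is the intersection of the shifted frame rectangles:

```python
# Common crop rectangle for a set of registered frames: the intersection,
# in reference-frame coordinates, of every frame's shifted extent.
# (Illustrative; regmanual.pl combines the shift, crop, and scale steps.)

def common_crop(offsets, width, height):
    """offsets: iterable of (dx, dy) shifts; returns (left, top, right, bottom)."""
    dxs = [dx for dx, dy in offsets]
    dys = [dy for dx, dy in offsets]
    left, top = max(dxs), max(dys)
    right, bottom = min(dxs) + width, min(dys) + height
    return left, top, right, bottom
```

Every registered frame then contains valid pixels throughout this rectangle, so cropping all frames to it (and scaling the result to 800×549) yields a jitter-free sequence with no blank borders.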
From the inception of the project, I knew I wanted to set the movie to Antonio Vivaldi's Le quattro stagioni (The Four Seasons), op. 8 (1723). Early in the project, I scoured the Web for MIDI transcriptions of this work. There are many, of widely differing quality. I settled on one of the larger and more complex arrangements which I found on an individual's MIDI sequence collection page. Unfortunately, neither this page nor the meta-data in the sequence files identifies the person who created this version. Whoever you are, thank you very much—you did a great job!
Since I was looking at a six minute target running time for the movie at one second per day plus title and end credits, it was clear that I'd have to excerpt Vivaldi's work to fit. I was originally going to do this by editing the MIDI sequence, but upon auditioning it, I discovered that the excerpts I wanted to use could be extracted and composited just as well working with the audio files created from the sequence, so that's how I decided to proceed. I synthesised the MIDI sequences to a .wav audio file using the TiMidity++ program, producing CD quality 44.1 kHz stereo PCM audio files for each season. These files were then loaded into the Audacity audio editor, where the selected passages were extracted, fade-outs and fade-ins applied at transitions, and small pitch-preserving tempo adjustments performed so the run time for each season's music was the same as that of the corresponding video frames. Finally, silent passages were added at the start and end to play during the title and end credits. All of these editing steps were done using lossless encoding formats, yielding a 61 megabyte .wav file for the final soundtrack.
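The tempo adjustment for each season is simple arithmetic: at one frame per day and one second per frame, a season's target duration in seconds equals its frame count, and Audacity's pitch-preserving “Change Tempo” effect takes the required change as a percentage. The durations below are made up for illustration, not taken from the project:

```python
# Percent tempo change needed so a season's audio exactly covers its
# video frames (1 frame = 1 second at fps=1).

def tempo_change_percent(audio_seconds, n_frames, fps=1):
    target = n_frames / fps          # seconds of video this season spans
    return (audio_seconds / target - 1) * 100

# e.g. 95 s of synthesised music covering 92 daily frames must be
# played about 3.26% faster.
```

A positive result means the music must be sped up, a negative one slowed down; keeping the changes small is what makes them inaudible as tempo distortion.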
The title and end credit images were created manually with The GIMP. The background image used in the title was taken in 1998 from the same window as the movie, long before any construction began in that area. The rainbow in the image is genuine, as are the cows; additional details about this image are available.
The frame images and soundtrack were assembled into the final movie file using the MEncoder component of the MPlayer software suite. The precise command used to create the movie was:
mencoder "mf://registered/*.png" \
    -audiofile music/seasons/seasons.wav -oac mp3lame \
    -mf fps=1 -ovc lavc -lavcopts vcodec=msmpeg4v2 \
    -info 'name=Les Quatre Saisons:artist=John Walker:copyright=Public domain' \
    -o movie.avi
This command was actually run from a Perl program called makemovie.pl which can do a number of other things associated with automatic time-lapse movies, none of which were used here. The “-info” specifications are somewhat more long-winded in the original; they have been abbreviated above to better fit on the page. All of the functional and format options are identical to those used to produce the final movie. The registered directory contains the images for the registered frames, the title, and the end credits. The file names in this directory are craftily chosen so that when MEncoder sorts them alphabetically the frames will be shown in the intended order.
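One simple convention that achieves this sorting, sketched here as an assumption since the actual naming scheme isn't described, is a zero-padded sequence-number prefix: title frames first, then the dated daily frames, then the credits (the dates shown are illustrative):

```python
# File names whose alphabetical order is the intended playback order,
# as required by MEncoder's "mf://registered/*.png" input.

def frame_name(seq, label):
    return f"{seq:04d}_{label}.png"

names = [frame_name(0, "title"),
         frame_name(1, "2004-12-01"),
         frame_name(2, "2004-12-02"),
         frame_name(3, "credits")]
```

With four digits of padding, up to 10,000 frames sort correctly; unpadded numbers would interleave frame 10 between frames 1 and 2.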
The CODEC options compress the audio to an MP3 stream and the video with Microsoft MPEG-4 version 2, which may seem an eccentric choice, especially since, notwithstanding its name, this is not actually even an MPEG-4 CODEC. In its favour, however, is the fact that movies encoded in this format play on Windows 2000 and Windows XP machines with Windows Media Player “out of the box”; on other Windows platforms at most a free CODEC download from Microsoft is required. Most Linux and other Unix-based media players such as MPlayer/GMPlayer and Totem play files in this format without difficulty. If you use the more standard mpeg4 video encoding, Windows users cannot play the movie without installing an auxiliary CODEC, which is an insuperable barrier for many people. As for QuickTime, MEncoder is supposed to be able to produce QuickTime movies, but every time I try it, I end up with a memory dump instead of a movie. In order to make QuickTime editions available, I used the Apple QuickTime 7 Pro encoder on Windows XP, starting with the original PNG frames and the soundtrack in wav format. I selected the H.264 video and AAC 128 Kb audio CODECs for the QuickTime editions; these movies play on my Linux system with MPlayer or Totem. The iPod edition was re-sampled to 320×220 pixels to accommodate the small screen on that device.
The frame by frame Web tree is generated automatically by a Perl program named makejpj.pl from a frame database file, framelist.csv. It accesses Web resources to obtain the companion images, invokes Netpbm utilities to prepare them in the format presented on the page, and generates all the index document structure and cross-links. The main index document for these pages was hand generated by modifying one of the automatically produced frame pages.
If you are interested in embarking on a project like this yourself, you're welcome to download the custom Perl programs I used in producing the movie and frame by frame Web tree:
The registration and frame information files used to produce the movie are included to illustrate the format of the data expected by the programs. These programs are all one-off hacks intended for this specific job. They are sensitively dependent on the environment of the system on which they are run, and all will have to be modified for use in other projects. These programs are all in the public domain and may be used in any way you like without any restrictions whatsoever, but they are utterly unsupported—you are entirely on your own.
The following open source packages were used in the production of the movie; the custom programs you can download above are “glue” which invoke these tools to do the real work.
This document is in the public domain.