DIY 35mm Film Scanner

Raspberry Pi-based

Liam Davies

Issue 61, August 2022

This article includes additional downloadable resources.

Get back into classic film photography, or reminisce on old photo negatives with this Raspberry Pi-powered film scanner!

Over the past couple of years, the art of film photography has undergone a massive resurgence - and ‘massive’ is almost an understatement. Film cameras from the 1960s now sell for thousands of dollars, and all sorts of niche film camera shops are popping up all over Sydney (and, we would expect, many other cities too). A roll of film and its development will now easily set you back the cost of a good restaurant dinner.

There are some technical aspects of film that digital cameras simply cannot capture, such as how highlights are handled, the colour rendering and the grain of the film crystals.

But most people probably couldn’t care less about the nitty-gritty - part of the new-found attraction of film photography is the nostalgic feel of the photos it produces. Instead of taking 200 photos on a DSLR in a few minutes, a film camera requires creative restraint in choosing what to (and what not to) photograph - unless you’ve got a few million bucks lying around.

In any case, there are two steps between getting a film roll out of the camera and having a digital or physical copy of it. The first is the chemical development process itself - done in a darkroom with developer, stop and fixer solutions. While it’s a delicate and sophisticated process, it’s not the focus of this project for two reasons: a) we avoid teaching chemistry at DIYODE, and b) we didn’t want the chemicals to stink out our workshop!

We’re instead focusing on the process of scanning, which is typically an added cost on top of development at most film labs. Commercial film scanners usually range in the hundreds to thousands of dollars, and while they’re usually a good deal for film photographers who are serious about shooting, the cost is hard to justify for casual shooters.

If you have a smartphone or DSLR/mirrorless camera, this project is a dead-simple way of converting both old and new film negatives to digital images, preserving them forever.

How It Works

Scanning a film negative is conceptually a dead simple process. All we need to do is photograph the negative, backlit by a white light.

This process can’t be done on a standard home or office flatbed scanner, since the film needs to be backlit, not lit from the scanning side. There are some 3D printed accessories available online that use two sets of mirrors to redirect the scanner’s flatbed light to the rear of the film negative, but they don’t provide enough resolution to be greatly useful.

While we’ve made makeshift backlights before with laptop screens and a camera on a tripod, it’s not quick to set up or easy to scan a bunch of negatives in a row. We want to automate the film scanning process in this project, using a Raspberry Pi and stepper motors to scan each photo effortlessly.

To actually digitise the negative, we’re using a DSLR to take a photo of the negative and then processing the image on the Raspberry Pi to make the finished scan. Sure, it’s a bit ironic using a modern digital camera to photograph something that came out of an analogue camera. Regardless, authentically reproducing a film negative in high resolution for a much lower price tag than a commercial scanner is something that we found interesting, which is why we started this project.

Scanner Controls

While we could put all of the interface elements in the Raspberry Pi software directly, we think it’s a better idea to include the most important controls on the film scanner itself.

We are using a rocker switch that returns to centre once released, set up to move the film spool slider left and right.

There are also four buttons on the film scanner. The first moves the film slider along to the next exposure, the second fires the shutter on the camera and downloads the image, and the third activates automatic mode, where this process is repeated until the scanner reaches the end of the roll. This way, the user has complete control over how individual negatives are scanned but, once tuned, the scanner can do the rest automatically. The fourth button toggles black and white mode, where the scanner will convert any input images to black and white.

Since we are using stepper motors to move the film negatives along, we can also easily feed film from reel to reel. Long rolls of 36 exposures can be scanned with the press of a button, or shorter cut strips of negatives (as many film labs supply) can be inserted individually.

What Camera To Use?

We obviously need a camera to photograph the backlit negatives before we download them to our Raspberry Pi. The short answer is that you can probably use the one you already have! You could use a DSLR, mirrorless camera, or even your smartphone. However, higher resolution and magnification configurations will increase the quality of the scanned negatives. A macro lens and mirrorless/DSLR camera comes to mind here.

We used a Canon EOS 6D and a 50mm f1.8 lens (the “nifty fifty”, as it’s known by many photographers). To increase the magnification, we used a set of cheap extension tubes between the body and lens mount. These are great because they do not affect the optical performance of your camera - they don’t contain any glass.

The main side effect is that they reduce both the minimum and maximum focus distance, so you can’t keep them on your camera all the time. You can also use a dedicated macro lens, but these are often prohibitively expensive for the casual photographer. We tried out a Canon 100mm f2.8 Macro lens, and while it worked great, it costs as much as a commercial film scanner, so there isn’t much point in using it.

Increasing magnification also has the side effect of making the depth of field significantly shallower, which puts more of the image out of focus. We need to stop down the camera (i.e. make the lens’ aperture smaller) to make sure the entire negative is in focus. Stopping most lenses down to f/9 or f/11 also provides the best sharpness and vignetting performance, at the expense of longer shutter speeds - usually up to a second or two.

For this project, we’ll be leaving our camera on the JPEG output option. This is probably considered blasphemy to any photographers reading, but there are a few reasons why we are choosing to do this.

First of all, the RAW data normally used by professional cameras requires processing and development in software such as Adobe Lightroom before it can be exported as a JPEG. While there is probably a way of doing RAW development directly on a Raspberry Pi, it adds another step to the process that doesn’t need it. Second, most cameras include inbuilt lens correction and white balance settings that generate very high quality JPEGs straight out of the camera. Distortion in camera lenses is a big problem for film scanning, because it makes a flat, rectangular film negative appear to bulge at the corners or edges of the photo.

The Prototype

We’re creating a temporary setup for film scanning, which should make testing our hardware and software a breeze. There won’t be any 3D printing involved in this part of the build; we’re essentially just experimenting.

Parts Required:                            Jaycar   Altronics   Pakronics
1 x 12V White LED Strip (12-15 LEDs)       ZD0570   X3194A      ADA887
1 x Raspberry Pi Zero W                    -        Z6303A      -
1 x Micro-USB OTG Cable                    WC7725   P1921A      ADA1099
1 x Micro-USB to USB-A Data Cable*         WC7724   P1897A      DF-FIT0265
1 x Acrylic Sheet                          HM9509   H0725       -
Diffusing Sheet                            -        -           -
DSLR, Mirrorless Camera or Smartphone      -        -           -

* This connects your camera to the Raspberry Pi. Substitute it with a suitable cable for your camera.

The Scanning Setup

As we mentioned in the introduction, you need a good backlight to make film scanning work properly. LEDs are a great option for this as they are low-power, keep relatively cool and have very predictable colour temperature. However, there are a variety of sources that could be used for this.

A very small LCD tablet or laptop screen that would otherwise be heading to the trash is a great option for the backlight, although some disassembly and reverse-engineering is required. We gave this a go, removing the broken LCD layer of an old laptop and finding the probe points on the control board to activate the backlight. The main issue we found is that the laptop screen was way too big to fit into any practical design. If you’re going this route, do NOT use an OLED screen - its pixels generate their own light, as opposed to a white backlight panel illuminating through a liquid-crystal layer. Note also that some older laptops use CCFL backlights, which run at dangerously high voltages.

A far simpler route is to use off-the-shelf LEDs and make your own diffusing screen, since bare LEDs alone won’t illuminate a film negative evenly. There are limitless ways to make this screen, and we experimented with a few.

We first tried to use a backlight unit that is typically used to illuminate 16x2 LCD character screens, however, it used only one LED and was much dimmer than anticipated. We also blew it up within a few minutes because we read the datasheet wrong!

After that, we tried using a 3mm clear acrylic sheet which we sanded with an orbital sander to give it a frosty appearance. We then propped it about 50mm off the surface of the bench, and popped a 12V white LED strip underneath. This helped to diffuse the light, however, we ran into another problem - uneven patches of light transmission across the acrylic. If we tried to scan a negative on it, there would be small blotches on the image.

We used a few different thin materials to help smooth out the light. Anything that had paper in it just made the problem worse thanks to the uneven grains inside.

But not all was lost, we pulled the diffuser plastic off the front of the backlight unit we first tried and taped it to another clear sheet of acrylic. This worked wonders and was a super smooth backlight!

A critical element of making a good backlight is spacing the LEDs at the right distance from the screen. Too close and the individual LEDs create uneven lighting (and additionally warm up the acrylic, potentially causing it to warp). Too far and the light is too weak and the scanner is needlessly big. We found a distance of 50mm-60mm works well. You can see in the photo below the LEDs are too close to the acrylic panel.

Of course, this experimentation could be avoided if you have a salvageable backlight from some appliance or device. We encourage you to get creative!

Now that we have a backlight, it’s time to set up your camera to scan the negatives. The short version is that you want the camera directly above the negatives, as close as the lens can be manually focused. We used a cheap tripod for this, with one leg shortened to ‘lean’ it over our backlight. To take a test photo, set your aperture to f/9-f/13, your ISO to 100, and your shutter speed to expose appropriately (assuming your camera lets you select these settings). This keeps the noise level low and the image as sharp as possible. As the aperture gets smaller, more of the image is in focus and most lenses get sharper. We don’t recommend going beyond f/13, because most lenses lose resolution past that point thanks to the physical diffraction limit.

To keep the whole negative in focus, you may need to weigh down the sides to keep it flat. Some cameras have inbuilt editing programs to invert and adjust the image, and there are multiple apps available for smartphones that will do the process in real-time. However, taking photos on the camera and editing them manually is tedious - we have a mini processing powerhouse at our disposal.

Here’s a clue: it rhymes with Raspberry Pi Zero!

Setting Up Raspberry Pi

There are a couple of pieces of software to set up before we can get our Raspberry Pi running with our DSLR.


gPhoto2 is a camera control program for UNIX-based systems like the Raspberry Pi, with a comparatively simple set of commands for downloading images from a camera. Together with its associated library, libgphoto2, it can also be driven programmatically, offering the same functionality as the command line interface.

We recommend checking out the “What You Need” page at the end of this project to see the technical requirements of gPhoto2 and libgphoto2.

To install it, open your Raspberry Pi’s terminal and type these commands:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git make autoconf libltdl-dev libusb-dev libexif-dev libpopt-dev libxml2-dev libjpeg-dev libgd-dev gettext autopoint
git clone https://github.com/gphoto/libgphoto2.git
git clone https://github.com/gphoto/gphoto2.git

We’ve now installed the prerequisite libraries and cloned the libgphoto2 and gphoto2 repositories to our Pi. Next, we need to build both of them. This can take a while, so be patient. If you run into errors, try rerunning the commands in order. If you get a permission denied error, try adding ‘sudo’ to the beginning of the offending command. To make your life easier, we’ve also provided the commands we’re running here in an ‘.sh’ shell script, so you can just run it with the ‘bash’ command and it’ll all be done for you.

cd ~/libgphoto2
autoreconf --install --symlink
./configure
make
sudo make install
cd ~/gphoto2
autoreconf --install --symlink
./configure
make
sudo make install

Next, we need to make a couple of tweaks to get the software to play nice. These commands are adapted from a PiMyLifeUp tutorial. The link can be found in the project resources.

if ! grep -q '/usr/local/lib' /etc/ld.so.conf; then
  echo '/usr/local/lib' | sudo tee -a /etc/ld.so.conf
fi
sudo ldconfig
/usr/local/lib/libgphoto2/print-camera-list udev-rules version 201 group plugdev mode 0660 | sudo tee /etc/udev/rules.d/90-libgphoto2.rules
/usr/local/lib/libgphoto2/print-camera-list hwdb | sudo tee /etc/udev/hwdb.d/20-gphoto.hwdb

These commands look like gibberish, but all they’re really doing is setting up the gPhoto2 software to talk to libgphoto2 and to your camera. The last two commands add udev rules, which tell the system which hardware identifiers are supported by the gPhoto2 program. Again, all of this can be executed in one go with the script we have included.

Once gPhoto2 is finished installing, you can test it by running:

gphoto2 --version

If the program’s version appears, it’s all good to go. Next, we need to plug in our camera. Since the Pi Zero W doesn’t include any full-size USB type-A ports, we need to use a Micro-USB to USB-A OTG adapter to break it out. These are quite affordable and handy to have. Make sure it is plugged into the Data USB port of the Pi, not the Power USB port.

Then, use whatever data cable that is suitable for your camera to plug it into the Pi. Older cameras usually use Mini-USB or Micro-USB, while newer ones have USB-C. Make sure to turn off WiFi in your camera’s settings as some won’t open their USB interface until it’s disabled. We initially tried to use a USB hub between the Pi and the camera to allow the use of a USB flash drive to save the resulting images onto, but we found the cheap USB hub did not maintain a stable connection to our camera so we did away with it.

gPhoto2 supports a huge variety of cameras, with varying degrees of functionality. When we tried our Canon EOS R with this software, we found that the internal data buffer of the camera prevented image files over 5MB from transferring to the Pi. This is probably due to an internal bug in gPhoto2, which limited us to transferring Medium-size JPEGs to the Pi. Once we switched to our Canon 6D, we had no trouble using the program. Check gPhoto2’s supported cameras list to see if your camera is supported.

To test to see if your camera is recognised, type ‘lsusb’ into the terminal and you should see the terminal window look like this:

pi@scanner:~/gphoto2 $ lsusb
Bus 001 Device 002: ID 04a9:3250 Canon, Inc. EOS 6D
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
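If you’d rather check for the camera from Python, the lsusb output is easy to parse. Here’s a hypothetical helper (Canon’s USB vendor ID, 04a9, is visible in the output above); in practice you’d feed it the text from subprocess.check_output(['lsusb'], text=True):

```python
def find_camera(lsusb_output, vendor_id="04a9"):
    """Return the description of the first device matching vendor_id, or None."""
    for line in lsusb_output.splitlines():
        if "ID " + vendor_id + ":" in line:
            # Everything after the "ID vvvv:pppp" token is the device name.
            return line.split(":", 2)[-1].split(" ", 1)[-1].strip()
    return None
```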

We’ll create a new directory for our film scans and do a quick photo test to ensure everything is working.

mkdir FilmScanner
cd FilmScanner
mkdir photos
cd photos
gphoto2 --capture-image-and-download

The last command should signal your camera to take a photo with the currently selected settings. You may want to switch your camera to Manual Focus and Manual mode for testing, so autofocus and metering doesn’t slow down the process.

After it takes the photo, the image will be downloaded to the FilmScanner/photos/ directory. You can also add the ‘--filename="test.jpeg"’ parameter to the end of the gphoto2 command to output to a specific filename.
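Later, we’ll call the camera through gphoto2’s Python bindings, but you can also script the CLI directly from Python with subprocess. A small hedged sketch that builds the command shown above:

```python
def build_capture_command(filename=None):
    """Build the gphoto2 CLI invocation as a list of arguments.
    If filename is given, the image is saved under that name."""
    cmd = ["gphoto2", "--capture-image-and-download"]
    if filename is not None:
        cmd.append("--filename=" + filename)
    return cmd
```

You’d then run it with something like subprocess.run(build_capture_command("frame_001.jpeg"), check=True).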

You can also change photo settings by running this command:

gphoto2 --list-config

This will list all currently available configuration options for your camera, including settings such as Aperture, Shutter Speed, Image Format and Quality. For example, we can set the image quality to Large Fine JPEG like this:

gphoto2 --set-config imageformat=10
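When scripting several of these settings at once, a small helper can turn a dict of settings into the matching arguments. This is just a sketch - the config names (‘iso’, ‘aperture’, etc.) and their valid values vary between camera makes, so check your own --list-config output:

```python
def set_config_args(settings):
    """Turn {"iso": "100", ...} into gphoto2 --set-config arguments."""
    args = []
    for name, value in settings.items():
        args += ["--set-config", f"{name}={value}"]
    return args
```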

For the prototype build, we’re just using the command line interface. Later, we’ll write a full Python-based program to control the motors, register button presses and trigger the camera. For now, though, this is enough to test that our project will work properly.

The Main Build

With the experimentation done, it’s time to put together the final build! Most of the software is installed, so all we have to do is some assembly and write some code.

Additional Parts Required:                           Jaycar   Altronics   Pakronics
1 x DPDT Switch Centre Off Spring Return             SK0980   S3243       -
2 x IP67 Rated Dome Pushbutton Switch                SP0657   S0960       SS110990055
1 x Pushbutton Push-on Momentary                     SP0716   S1080       -
1 x Red LED Illuminated Switch                       SP0706   S0920       -
1 x 56Ω Resistor*                                    RR0542   R0528       DF-FIT0119
2 x Arduino Compatible 5V Stepper Motor              XC4458   Z6330       ADA858
1 x ULN2003 16-Pin Darlington Transistor Array IC^   ZK8855   Z3000       -
1 x 12V-5V DC-DC Buck Converter%                     -        Z6334       ADA2190
12 x 4mm x M3 Brass Inserts*                         -        -           ADA4255
1 x 2.1mm Barrel Jack                                PS0519   P0621A      -
1 x 20x2 Female Header Pins or equivalent*           HM3230   P5387       ADA1979
2 x 5-pin JST XH Female Connector*                   PT4457   P5745       DF-FIT0255
2 x 4-pin JST XH Female Connector*                   PT4457   P5744       DF-FIT0255
2 x 4-pin JST XH Male Connector*                     PT4457   P5754       DF-FIT0255
12 x M3 10mm Screws*                                 HP0403   H3120A      DF-FIT0280

* Quantity used, item may only be available in packs.

^ This IC can also be removed from the driver board bundled with the XC4458 stepper motor.

% In our experimentation, the Raspberry Pi and stepper motors draw 600mA max. We recommend 1A or more for the converter.

3D Printing

We’ve spent a bit of time modelling some 3D printed parts that make it easy to build a reliable film scanner. There are a few parts that need printing, but the result is reliable at keeping your film flat as the two stepper motors feed the negatives through.

We printed our parts in yellow and black filament, however, you may wish to experiment with other colours for a different look. In retrospect, we should have printed the parts in neutral colours to reduce colour casts on the film scans.

The yellow light box will hold our three sets of LED strips, and will also be home to the Raspberry Pi on top. It bolts to the side brackets, which hold the two stepper motors and their film spools. The spools themselves slot directly onto the 5mm shafts of the stepper motors and sit on a ball bearing at the bottom of each bracket.

There are a fair number of brass inserts to add to the prints. We used M3 4mm inserts and pushed them in with a fat-tipped soldering iron. There are 12 brass inserts in the light box and six in each of the spool holders. We found it helpful to partially assemble the scanner to hold the inserts in place.

At this point, it’s worth sitting the front cover on the scanner to check the film negative can smoothly slide under it. When we finish assembling the scanner, we can adjust how much clearance the film spool has by adding washers to the screws that fix the front cover.

We found that the film wasn’t sitting flat on the backlight, so we added some double-sided tape to the underside of the front cover. We left the backing on the outward-facing sticky side so it acts as a low-friction surface for the film to slide along.

The film was also getting stuck on the 3D printed lip next to the acrylic, so we added some additional electrical tape to smooth it out.

We then added the bearings to the spool holders. These are flanged bearings, so they have a small lip on the top side to prevent them falling through the print. They are just press-fitted into the plastic.

The physical film slider is done. Now time to work on the control board!

Control Board

We’re making a “hat” that will sit on top of the Raspberry Pi Zero, in the same way an Arduino shield extends the GPIO pins. We’re using a double-sided prototype board for this, which we soldered two banks of 20 female headers onto. It’s worth soldering these while they’re inserted into the Pi to keep them straight.

Next, we soldered a 16-pin DIP socket into the board, two rows away from the header rows. This will house the ULN2003 driver. Four GPIO pins will be used to drive the stepper motors via the ULN2003’s internal Darlington transistor array. Note that there is more detailed wiring information in the Fritzing and schematic for this project.

We then soldered the 5-pin JST connectors to the outputs of the ULN2003. These connect the coils of the two stepper motors, which are wired in parallel - since we don’t need to control each motor individually, we can drive both the same way. The 28BYJ-48 stepper motors we are using are very common and include a pre-soldered JST connector, which makes connecting them to our circuit a breeze. The only quirk with these motors is that they are unipolar, as opposed to bipolar like many smaller NEMA-sized motors.

The upshot of this is that there are five wires for the four coils in each motor. One wire is connected to power (5V) and acts as a common centre tap for the coils; our ULN2003 can then sink current through each coil in turn. When we get to programming, we need to set up control for these motors from within the Raspberry Pi.

We then soldered in two sets of four-pin JST connectors for the button box interface. There are eight connections in total for the button box - six buttons, one LED and a ground wire. Since we don’t have any eight-way JST connectors, we split them up and used two sets of four pins instead. We carefully positioned ours to match up with the GPIO pins we’re using. After adding a 56Ω resistor to limit current to the LED in the button box, we can move onto the power wiring.
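The 56Ω value can be sanity-checked with Ohm’s law. Assuming the LED is driven from a 3.3V GPIO pin and drops a typical 2V for a red LED (both assumed figures - check your switch’s datasheet):

```python
supply_v = 3.3       # GPIO high level on the Raspberry Pi
led_vf = 2.0         # typical red LED forward voltage (assumed)
resistor_ohms = 56   # from the parts list

# Ohm's law: the resistor drops the remaining voltage, setting the current.
led_current_ma = (supply_v - led_vf) / resistor_ohms * 1000
print(f"LED current: {led_current_ma:.1f} mA")  # prints "LED current: 23.2 mA"
```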

When designing this project, we juggled around a few different solutions for providing power to the Raspberry Pi, the control board, the backlight LEDs and the stepper motors all in one power cable. We eventually settled on using a 12V supply from a 2.1mm barrel jack. These are very common connectors, and the classic ‘wall-wart’ adapters can be picked up very cheaply.

However, we can’t feed 12V directly into the Raspberry Pi, as it’d kill it very quickly. In fact, our stepper motors probably can’t handle 12V either - while we tested it with good results for torque and speed, they were getting noticeably toasty. A candle that burns twice as bright burns half as long!

In any case, the main part of our project that needs 12V is the backlight LEDs. We opted to use a buck converter to step the 12V down to 5V for the remaining components in the circuit. The board can be soldered flat onto the prototyping board thanks to the castellated holes on the side of the buck converter.

We can now start connecting all the components together. We added black and red power wires between the 12V socket and the buck converter, then between the buck converter and the ULN2003. Note that when making this, we ran out of the yellow wire we usually use for 12V lines. The Fritzing shows a clearer version of which voltages are connected where. While we were at it, we also added the 2-pin JST socket for the light box backlight in the bottom-right corner.

All that we have left to do on this board is to connect the power to the Raspberry Pi. Since we’re not powering it via the USB port, we can directly connect the buck converter’s 5V output to the 5V on the Raspberry Pi. To finish things off, we connected the 12V input line to the backlight connector.

Before plugging it into your Raspberry Pi, we highly recommend pausing and doing a sanity check on the circuit. The Raspberry Pi’s GPIO pins are not as tolerant as Arduino pins, as they are 3.3V only, not 5V. Check that there are no shorts on the board, and once a 12V barrel jack is connected, check that 5V appears on the output of the buck converter.

Now, just push it into the header pins on the Raspberry Pi! Be sure to push it straight down, as we bent the pins countless times by carelessly inserting and removing our newly-made shield.

Button Box

As we mentioned earlier on, we’re building a button box that will control all aspects of the scanning process. This way, the scanner can still be used when no terminal or monitor is attached to it. It has small icons across the top of the panel that show what each button is used for. After printing, we weren’t happy with the low contrast of the indented icons, so we tried the melted crayon trick.

We have used these in projects like the USB Rubber Ducky in Issue 18 - simply heat a crayon above the area in which you wish to fill in with a colour. It’s a little messy but a fairly simple way of creating multi-colour prints. We found that if the crayon overflows onto the face of the model, it tends to seep into the print lines and can’t be removed without some very vigorous sanding. So, we used masking tape and some careful stencilling with a craft knife to create an area in which the crayon can’t flow.

As shown in the following image, it does overflow quickly. We found holding the crayon 20-30cm off the work surface while melting it helped to fill the whole mold area as it dripped down.

After peeling back the tape, we cleaned up some of the messier areas with a craft knife. You can see the technique struggles with detail in text fonts and areas with a point (e.g. arrows). In any case, we popped the rocker switch and four buttons into the panel. They have crush washers to help secure the buttons in place, so remember to slide them on as the buttons are inserted!

Wiring is fairly simple in the button box. There are six button inputs (the rocker switch counts as two), each of which needs a wire soldered to it. On the opposite terminal of each button we added a brown wire - the ground line - connected such that pressing the button connects its sensing wire to ground. The LED on the Auto Mode button is connected in the same way, with its negative side to ground - just remember that, unlike the buttons, the LED is polarised. Check the schematic if the photos are difficult to follow.
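In software, this active-low wiring means the Pi enables its internal pull-up resistors and watches for each line being pulled to ground. The press-detection logic boils down to spotting a 1-to-0 (falling) edge, sketched here independently of the GPIO library:

```python
def falling_edges(samples):
    """Given successive GPIO readings (1 = idle/pulled-up, 0 = pressed),
    return the indices at which a new press begins."""
    presses = []
    previous = 1  # assume the line starts idle (pulled high)
    for i, level in enumerate(samples):
        if previous == 1 and level == 0:
            presses.append(i)  # a high-to-low transition is a new press
        previous = level
    return presses
```

On the Pi itself, RPi.GPIO’s event detection does the same job in hardware-interrupt form.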

Finally, we crimped two sets of four-pin JST connectors onto the button box outputs. If we ever need to disconnect the button box, it’s easy to do so with these connectors. With the physical build done, we did a quick short test with a multimeter, and plugged everything in for a test run!

Headless Development

There are a few different ways of developing code on the Raspberry Pi Zero W without a directly connected monitor and peripherals. While it can be done over SSH (check out the AI-Powered Dog Trainer in Issues 49 and 50 for more information), we ended up trying a new method for writing Python code on the Zero.

Newer Python programmers may be familiar with the simple but sophisticated IDE Thonny - but it turns out it has a unique feature that even experienced tinkerers will love: an inbuilt remote development system that lets you write code on a Windows, Linux or Mac machine and instantly execute it on a remote Raspberry Pi. This is a good alternative to terminal-based editors like Nano or the remote development extensions in Visual Studio Code - the latter doesn’t support the Zero W.

Regardless of whether you’re planning on using the remote development module in Thonny, you will need some way of accessing the Pi Zero’s terminal - we used an SSH connection since we already had it set up. You may wish to change the Pi’s hostname to make it easier to access on the local network. You can do this by typing the following into the terminal:

sudo raspi-config

Then, navigate to System Settings > Hostname with your arrow keys and select Hostname. After entering a new hostname and restarting your Pi, you should be able to see it on your WiFi network under that name.

To connect to the Pi using Thonny, you’ll first need to install it. Head to the Thonny website to download and run the installer for your platform.

Once it’s all set up, head to Tools > Options > Interpreter and select "Remote Python 3 (SSH)". At this point, you'll be asked to provide a host and username, which we entered as 'scanner' and 'pi' respectively.

After you click OK, you'll be asked to provide your Pi's password. Once you've authenticated and logged in, you now have access to the Pi's filesystem. When opening a file, you have access to either the local computer or the remote Pi. We were impressed by the fluidity of the Thonny editor when it comes to editing remotely, as it allows instant execution and debugging.

Now that the Raspberry Pi is connected through Thonny, we can write code as naturally as we would locally. This significantly sped up our development time as we could modify code and run it immediately, without having to save, exit and run a Python file from the terminal each time.

The Code

Without the code, this project is a plastic paperweight. Let’s dig in! While it’s not the most scalable solution in the world, our code is included all in one Python file for easy execution. If you’re not interested in the nuts and bolts of this code, or you want to dig in further, feel free to download it in the project files.

import sys
import time
import RPi.GPIO as GPIO
import signal
import logging
import os
import subprocess
from PIL import Image
import PIL.ImageOps
import gphoto2 as gp

First up, we have a bunch of different libraries to import. Many of these are already system libraries, such as ‘sys’, ‘os’ and ‘time’. We are also importing PIL (Python Imaging Library) to invert and process the output images, as well as gphoto2 for interfacing directly with our camera.

phase_pins = [5, 6, 13, 26]
steps = [[1, 0, 0, 1],
         [1, 0, 0, 0],
         [1, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 0],
         [0, 0, 1, 1],
         [0, 0, 0, 1]]

These variables control the two stepper motors through the ULN2003 driver. ‘phase_pins’ holds the GPIO pin assignments for the four phase wires, so we can energise a phase by driving its GPIO pin high. The ‘steps’ variable holds the sequence of coil states that we energise in order. Unlike regular DC motors, stepper motors require this sequence to accurately move the motor, one step at a time.

We’re using ‘half’ stepping in this project so we essentially double the precision we have with positioning the motors. Although it’s not the focus of this project, if you’re interested we documented plenty of information about stepper motors in the L293D Motor Driver guide in Issue 31.
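The motor-driving loop itself simply walks this table forwards or backwards, wrapping around at the ends. Here’s a GPIO-free sketch of that indexing logic (on the Pi, you’d write each returned row to the ‘phase_pins’ with GPIO.output and a short delay between steps):

```python
def half_step_sequence(steps, start_index, count, direction=1):
    """Return the rows of the 8-entry half-step table to energise for
    `count` steps starting from `start_index`; direction is +1 or -1."""
    states = []
    index = start_index
    for _ in range(count):
        index = (index + direction) % len(steps)  # wrap around the table
        states.append(steps[index])
    return states
```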

button_inputs = [17, # Rocker Left Switch
     27, # Rocker Right Switch
     22, # Next Photo Button
     25, # Capture Button
     24, # Automatic Mode Button
     23]  # Black and White Mode Button
led_output = 12 #Automatic Mode LED
#Whether we have enabled auto mode or BW mode.
bw_mode = False
auto_mode = False
#How many steps are desired for each press of the 'next photo' advance button
picture_steps = 1000

If you’re only interested in running this code for your project, you shouldn’t need to read much further than the variables above. These are essentially parameters that change the functionality of the scanner depending on their assignments. ‘button_inputs’, for example, should be changed to match your GPIO assignments. The code beyond this is what actually makes the scanner work!

def capture_image():
  callback_obj = gp.check_result(gp.use_python_logging())
  camera = gp.Camera()
  print('Capturing image')
  file_path = camera.capture(gp.GP_CAPTURE_IMAGE)
  # -- code omitted (downloads the capture from the camera to target_path) --
  print(f'Opening image {file_path.name}...')
  image = Image.open(target_path)
  print(f'Inverting image {file_path.name}...')
  image = PIL.ImageOps.invert(image)
  print(f'Equalizing image {file_path.name}...')
  image = PIL.ImageOps.equalize(image)
  if bw_mode:
    print(f"Converting image {file_path.name} to grayscale...")
    image = PIL.ImageOps.grayscale(image)
  print("Finished processing image.")
  export_name = "scan_" + file_path.name
  # output_dir is set from the directory prompt elsewhere in the full code.
  image.save(os.path.join(output_dir, export_name), quality=95)
  print("Saved " + export_name)

This is where the camera is actually called to capture the image! We will call this function later, but you can see we’re interfacing with the gPhoto2 library, telling it to fire the camera’s shutter and download the image to the Pi.

We then open that image (i.e. load it into the Pi’s RAM) and can start processing it! Of course, you could omit this step and process the images manually, but that defeats the point of writing Python code to control our film scanner. The first thing we do is invert the image, as any film scanner would. Note that to produce better results, you should set your camera’s white balance to give as neutral a tone as possible with the backlight illuminated.

We then equalise the image, which enhances contrast and removes colour casts. This works by analysing the histogram of each of the image’s colour channels (i.e. how common each brightness level is) and “stretching” it so the full tonal range is used. Since it operates on each colour channel independently, the blue cast that typically results from inverting an orange negative magically disappears!
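PIL does all of this for us, but the underlying idea is simple enough to sketch by hand. The helper below is our own illustrative re-implementation of histogram equalisation for a single 8-bit channel - the scanner itself just calls `PIL.ImageOps.equalize()`.

```python
def equalize_channel(pixels):
    """Equalise one 8-bit colour channel (a list of 0-255 values).

    Illustrative re-implementation of histogram equalisation -
    the scanner code simply calls PIL.ImageOps.equalize().
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    # Build the cumulative distribution function (CDF).
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:  # flat channel - nothing to stretch
        return list(pixels)
    # Map each pixel through the normalised CDF - the "stretching" step.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * 255) for p in pixels]
```

A dull channel like `[100, 150, 200]` maps to `[0, 128, 255]` - the tones are stretched to fill the whole 0-255 range, and doing this per channel is what removes the colour cast.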

[Images: Photo from Camera → Inverted Image → Equalized Image]

However, we did find that the process can blow out highlights significantly if done on images with very high contrast. The process can be commented out if it isn’t desired. We then check if the black and white mode is enabled, in which case we convert the image to grayscale.

Finally, we export the image and save it onto the Raspberry Pi.

def scanner_loop():
  global current_step
  start = time.process_time()
  while True:
    for i, b in enumerate(button_inputs):
      if GPIO.event_detected(b):
        # -- button handling code omitted (see the full downloadable code) --
        pass
    if auto_mode:
      # Blink the Auto Mode LED roughly once per second.
      # (Auto capture and film advance are handled in the full code.)
      GPIO.output(led_output, int(time.process_time() - start) % 2 == 1)
    else:
      GPIO.output(led_output, GPIO.LOW)
    # Rocker Loop Switch
    if not GPIO.input(button_inputs[0]) or not GPIO.input(button_inputs[1]):
      current_step = int((time.process_time() - start) * 600)
      current_step %= len(steps)
      if not GPIO.input(button_inputs[0]):
        current_step = len(steps) - 1 - current_step

This is the main loop of the program. This program essentially operates as a loop, so that negatives can be scanned until the user has finished a reel. In every loop, it first checks if there are any buttons that have triggered events.

This is detailed more in the full code, but all it’s essentially doing is polling for FALLING events on each button. This behaviour is inverted because we’re using the Raspberry Pi’s internal pull-up resistors - i.e. all buttons are active-low.

If we’re in Auto Mode, we need to loop blinking the Auto Mode LED, capturing an image and moving the slider forward. This variable is toggled elsewhere in the code. Finally, we poll the rocker switch and check if the film spools need to be moved. We can regulate how fast the film moves by changing the ‘600’ number.
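The speed regulation can be looked at in isolation with a small, hardware-free helper. This function is our own illustration (it isn’t in the scanner’s source, which writes the resulting step straight to the GPIO pins): elapsed time is simply scaled by the steps-per-second figure into an index in the 8-row half-step table.

```python
def rocker_step_index(elapsed_s, steps_per_second=600, seq_len=8, reverse=False):
    """Map elapsed time to an index into the half-step table.

    Raising steps_per_second makes the film move faster, and `reverse`
    mirrors the sequence for the opposite rocker direction.
    (Illustrative helper - not part of the scanner's source.)
    """
    index = int(elapsed_s * steps_per_second) % seq_len
    return (seq_len - 1 - index) if reverse else index
```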

After saving and uploading through Thonny, just enter in the directory where you want files to be saved and start using the scanner!

----- DIYODE Film Scanner -----
Setting up GPIO...
Please enter your output directory. (Just press enter for default location.)
Default Directory: /home/pi/FilmScanner/photos/
Directory > 
Using default directory: /home/pi/FilmScanner/photos/
Scanner Ready! Please insert negatives!
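The prompt’s fallback behaviour is simple but worth noting: an empty entry selects the default location. A sketch of that logic (the function name is ours; the scanner’s prompt behaves the same way but may be structured differently):

```python
DEFAULT_DIR = "/home/pi/FilmScanner/photos/"

def choose_output_dir(user_input, default=DEFAULT_DIR):
    """Return the user's chosen directory, or the default on empty input.

    Illustrative helper - not a verbatim excerpt of the scanner code.
    """
    cleaned = user_input.strip()
    return cleaned if cleaned else default
```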

To get started, insert your chosen negatives into the left side of the scanner. Be sure to line up the perforations on the spool with those on your negatives. Then, use the right-facing rocker switch to move the film along until it reaches the centre of your camera's frame.

If you’re finding the film getting stuck on the spools, you may wish to use side cutters to cut off the offending plastic. Since it’s a very quick 3D print, we made a few different versions of these spools to get the film to move smoothly.

This is a good opportunity to reposition or focus your camera if need be. Once it’s positioned correctly, hit the capture button on the button box and check your camera fires. Due to the slow nature of PIL (Python Image Library), processing takes a few seconds, but after it inverts, equalises and saves, the resulting image should be visible on the Raspberry Pi’s filesystem.

You can also change the movement step variable ‘picture_steps’ in the scanner Python file to customise how far the stepper reels should move when the ‘advance’ or ‘auto mode’ buttons are pressed.
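If you’d rather derive a starting value for ‘picture_steps’ than find it by trial and error, a rough back-of-envelope calculation helps. Everything here is an assumption to measure on your own build: a 28BYJ-48-style geared stepper (roughly 4076 half-steps per revolution, a common figure for these motors) and the effective diameter of the spool where the film wraps.

```python
import math

def steps_for_advance(advance_mm, spool_diameter_mm=20.0, half_steps_per_rev=4076):
    """Estimate half-steps needed to advance the film by advance_mm.

    ASSUMPTIONS (measure for your own build): spool_diameter_mm is the
    effective diameter where the film wraps, and half_steps_per_rev
    matches your motor (~4076 for a 28BYJ-48-style geared stepper).
    """
    circumference = math.pi * spool_diameter_mm
    return round(advance_mm / circumference * half_steps_per_rev)
```

A 35mm frame plus its gap is about 38mm of film, so `steps_for_advance(38)` gives a ballpark starting point for ‘picture_steps’, which you then fine-tune by eye.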

If you’re scanning black-and-white negatives, press the BW button to engage black and white mode. This will take a grayscale photo of the negatives, which will remove any colour casts associated with photographing them.

We found ourselves using the Auto Mode quite a lot once the movement variables were set correctly. It’s not an unattended process though - film negatives can sometimes get kinked going into the scanner, and over multiple scans the predetermined movement steps tend to drift, causing the photos to get increasingly cut off.

It’s also important to compare the difference between scanning results of a commercial lab versus our DIY scanner. We got two rolls of film developed and requested the reels uncut - that way, we can scan one roll without constantly reloading.

It may seem like an uphill battle to win against a commercial lab, but most film labs return fairly low resolution image files unless professional TIFF quality is requested. That costs a lot more, so it’s not worth it for most people. The 20MP camera we used for our film scans captures far more detail than the 2MP files returned by the lab.

Looking closely, the colour quality is significantly better on the lab’s scanner. Our basic image equalising technique has its limits. While our resolution is better at the centre of the image, using a camera lens to project a perfectly flat object onto a camera sensor doesn’t fare well for distortion and other optical problems. A proper scanner has no ‘distortion’ in this sense as it doesn’t have a focal length.

In any case, our scanner does quite well comparatively thanks to its even backlight and 3D printed scanner holding the negatives flat. We’re quite happy with its performance!

DIY Scanner
Commercial Film Lab

Where To From Here?

We hope this project was an interesting read on mixing modern tech with old-fashioned film photography. Film photos will mean something different to everyone - it could mean making new memories with film cameras, or it could mean finding boxes of decades-old negatives in the attic.

For that reason, this project has many different avenues depending on your interests and the resources available. We were originally inspired to make this project after seeing a Super8 movie film scanner online, powered by a Raspberry Pi. This would allow digitisation of very old movies, not just individual 35mm negatives. Slides, medium format or even large format negatives could be digitised in a similar way.

We’ve left multiple brass insert points on the bottom of the light box to be mounted to any permanent or semi-permanent camera setup you might have. For example, you may wish to make a photography jig out of timber or 3D printed parts that holds your DSLR at the perfect distance and angle - no tripod needed!

If you don’t have a DSLR, or don’t want to use one because of the distortion or quality losses involved, there are other ways of accomplishing this project too. It may be possible to modify a flatbed scanner to scan in higher resolution, and implement a backlight system. Add an automated slide roller system similar to ours, and voilà - a DIY film scanner that operates the same way as a commercial one!

If you’re happy with the physical construction of our project, there are other tweaks on the software side that could be made to improve it. We were originally planning to add Google Photos integration into this project, however, we needed to do a lot of fiddling around with API keys to accomplish something we weren’t prioritising.

It would also be interesting to see a full GUI run by the Raspberry Pi Zero. It could render directly on the Pi’s graphics output, so a HDMI display could be plugged in to monitor progress. Alternatively, a web interface could be set up and accessed from a smartphone or computer. That way, the project is completely wireless and can be used anywhere in the house. In either case, automatic cropping, parameter fine-tuning and live image review could all be done in real-time.

Be sure to tag us with your DIY film photography creations at @diyodemag