Projects

DIY Night Vision Goggles using Raspberry Pi

Liam Davies

Issue 71, June 2023


Build your own Raspberry Pi powered night vision goggles! Complete with an Iron Man style HUD and headset!

When we introduce projects, we usually spend a bit of time delving into the history, uses and justifications for what we build.

This Night Vision project is a little different - we’re building it just because we can, and it’s awesome. This is definitely one of those projects that warrants a bit of boasting to anybody who spots it on a shelf in your house.

You may have seen similar projects online, however, in this project, we’re going a little further and adding an Iron-Man style HUD to our display. We chose a bunch of sensors that we thought were cool and added their data outputs in real time onto the screen. Temperature data, compass direction, and gyroscope data are all shown on the VR headset as you explore at night. We’re also tying off our project with an awesome 3D printed enclosure to help manage the cable mess we usually see with night vision projects.

Believe it or not, we can actually think of a few genuinely useful applications for this project! You can observe wildlife at night without scaring it off with flashlights, and navigate dark rooms without disturbing the rest of your house. However, the biggest reason for building this project is, of course, the coolness factor. Who wouldn’t want their own pair of customised night-vision goggles?

By the way, if you have a legitimate need for night vision, chances are, you’re not going to be building one yourself out of a VR headset and a Raspberry Pi. In any case, don’t use this for illegal trespassing - it won’t make you invisible on CCTV surveillance, for example. In fact, we’d wager it makes you even more visible thanks to the extremely bright Infrared LEDs mounted on the front of the unit. Be responsible!

How It Works

Okay, we can’t resist talking a bit about NVGs and the history behind them. They’re just so cool! Night vision optics, more commonly seen in their head-mounted form as NVGs (Night Vision Goggles), can work in a few different ways. Traditionally, NVGs are ‘image intensifying’ devices that take the limited ambient light that’s already there and amplify it into a much brighter image. The image intensifier tube is responsible for this process - a vacuum tube consisting of a photocathode, a microchannel plate and a phosphor screen. This process doesn’t require any electronics besides a high-voltage supply for the tube itself, so it’s simple. No illumination or microcontrollers needed!

The alternative method for NVGs is Active Illumination, where an infrared-sensitive imaging device is combined with a corresponding illumination source. Before CMOS (complementary metal-oxide-semiconductor) cameras became commonplace, this was a job typically reserved for power-hungry and low-resolution CCD (charge-coupled device) camera sensors. However, as CMOS sensors became lower-cost and more sensitive (a measurement camera nerds like to call Quantum Efficiency, or QE), many imaging applications switched over. CMOS sensors also have dramatically faster readout times, which is good news for the framerate and smoothness of NVGs.

On the topic of framerate, we need to be aware of how it - and other visual properties - affect our project. If we were making a trail camera, or some other night vision monitoring device, framerate is a low priority because it’s not absolutely crucial that the image is displayed in real-time. However, our Night Vision Goggles are a headset - the vast majority of the user’s visual input is from our night vision. Therefore, whatever is shown on the screen directly affects the user’s coordination and motor skills. A slow framerate might not only be frustrating or difficult to use, but also sickening. However, framerate is only part of making an immersive and comfortable experience.

A great example of this is the original Oculus Rift Dev Kit released almost exactly 10 years ago at the time of writing. VR is a well-developed technology now, but at its commercial inception, it was plagued with a number of technical issues. Framerate was limited to 60Hz and the resolution was only 1280 x 800 - spoiler alert, that’s still better than the specs of this project! Regardless, many users complained about motion sickness and the apparent “stutter” of the unit. Latency, framerate, resolution, motion tracking sample rate, and many other factors affected the quality of the VR experience.

Bottom line, the human visual system is extremely well-tuned for detecting inaccuracies, delays, and stuttering in visual input, which informed quite a few of our development decisions during programming and part selection. We’ll talk more about this in the “Choosing a screen” section!

Image Credit: Sebastian Stabinger - Wikipedia

Choosing A Camera

Image Credit: Pi My Life Up

CMOS cameras have been getting a reputation for awesome performance in a tiny package. We had a few different options for this project, including the modern Raspberry Pi HQ camera. However, we didn’t choose it because the HQ camera, by default, includes an IR filter that needs to be removed forcibly. If you want to make the HQ camera IR-sensitive, you’ll need to disassemble it and remove the blue-tinted IR filter with a knife. The HQ camera is also somewhat pricey, so we opted to stick with an older Raspberry Pi camera.

You’ll often find these cameras labelled, confusingly, as “NoIR” cameras. This isn’t referring to a lack of IR sensitivity; it’s referring to the lack of the aforementioned IR filter. We picked a camera with a few extra goodies, too! Our camera has two high-power infrared LEDs, which is good news for our active illumination approach. It also has an automatically switching IR filter, which means the camera can flip between IR sensitivity and regular daytime viewing.

By the way, if you’re wondering why this filter is necessary at all, it turns out daylight has a lot of infrared light. Who knew a giant ball of flaming plasma in the sky emits a lot of IR radiation? Jokes aside, this means that your images will be washed out with a pink tint and a lot of haziness if your camera is sensitive to all frequencies.

The camera we picked automatically recognises the presence of visible light with some LDRs (Light-Dependent Resistors) and enables the IR filter. Awesome!

The camera sensor itself is the classic OV5647 sensor, which isn’t the greatest performing sensor in the world, but it does have 720p 60Hz video, which is the maximum resolution our screen can support anyway. As you’ll see in our code section, we’ll have to do a bit of clever resolution trickery to make the VR resolution work properly.

Choosing A Screen

Image Credit: Literary Hub

Choosing a suitable screen is just as important as choosing a suitable camera. There are a few options.

We first tried to use our phone as a display - we wanted to look into this as it’s affordable (everyone already has one!) and should, ideally, have great resolution and a vivid display.

To do this, we connected a USB capture card to our phone via a USB-C adapter, and fed the capture card the HDMI output of a Windows computer - the computer stands in for the Raspberry Pi outputting our camera feed to the screen. We installed a USB camera app on the phone and tested the performance.

Our concern with this method was the input latency. Unfortunately, our concerns were quite valid - a consistent lag of 100ms-150ms was present when testing it with an online latency tester. This may not sound like much, but human vision is extremely sensitive to delays in visual response time. It would almost certainly make our users sick or disoriented!

We instead looked at purpose-built Raspberry Pi displays. Since the VR headset we’re using is designed for 5-7” phones, we looked for a display in this size range too. Resolution is an important factor when considering what display to use. Virtual Reality fans will know all too well the frustration of the “screen door” effect, where users struggle to see detail in far-away areas thanks to an insufficient number of pixels. We recommend using a screen of at least 1000 pixels in at least one dimension.

We ended up choosing a 7 inch 1024x600 HDMI-based touchscreen. We won’t really be using the touchscreen aspect of the screen, but the large size should help with immersion. Resolution isn’t the greatest, but we should be able to make it work. We’re prioritising framerate and smoothness in this project.

We’ll also need somewhere to mount this screen! We could make our own VR headset to mount it to, but luckily for us, there is an abundance of ‘BYO’ VR headsets that are popular with the Google Cardboard platform. They’re designed to insert your phone and use its high-resolution screen as a VR display. We picked one up second-hand for a grand total of $25. It’s a decent-quality unit too, with adjustable straps and optics. Nice!

Fundamental Build:

Parts Required: (Jaycar codes unless otherwise noted)
Raspberry Pi 5MP Night Vision Camera Board: -
Raspberry Pi 3/4B: XC9100
1024x600 HDMI 7in Screen with USB Capacitive Touch: XC9026
I2C Temperature Sensor: -
I2C Compass/Gyroscope Sensor: -
Micro USB Cable: WC7757
DIY HDMI - 10cm Cable: Little Bird Electronics AF-3560
DIY HDMI - Right-Angle Mini HDMI Connector: Little Bird Electronics AF-3558
DIY HDMI - Straight HDMI Connector: Little Bird Electronics AF-3548

For our fundamental build, we’re setting up our software and checking that everything we’re designing is going to work properly. Let’s dig in!

Supply Shortages

Unfortunately, Raspberry Pis are still in short supply. While Raspberry Pi claims their stock shortage will subside this year, it is frustrating not having access to a convenient electronics platform with a Linux development environment. Rather than just say “Good luck!”, we thought we’d provide some practical ways to get your hands on one of these little powerhouses.

If your local stores and trusted suppliers are out of stock, we’d suggest looking at the second-hand market. Many makers resell their Raspberry Pis, although the prices are usually quite high. Check out whether your local electronics store has any demo or returned units in stock - a few of our readers have had some luck getting them that way.

Finally, if no official Raspberry Pis are available, a good option would be to look into alternative third-party models. We can’t speak for their compatibility with official Raspberry Pi accessories, but they’re definitely worth a shot if you’re willing to do a bit of tinkering.

Connections

First, let’s connect everything up. We’re using a Raspberry Pi 4B, so it’ll require a USB-C cable for power. Use a USB 2A supply if you can, as the Night Vision camera will pull a bit of power. We’ll be using a USB-C powerbank in our final build for the power requirements.

The display needs two cables - an HDMI cable for the display signal and a Micro-USB cable for power and the touchscreen data. Note that the 4B requires a micro-HDMI cable, so either pick up an appropriate cable, or do what we did - use Adafruit’s “DIY HDMI” cable solution!

Essentially, they offer a bunch of different HDMI connectors and ribbon cables, so you can make your own cable. It’s not EMI shielded, so it’s not a great idea for long cables, but for short runs in tight spaces - like our project requires - we can make our own cable that won’t make the project bulky! We picked a right-angled micro-HDMI, a 20cm HDMI ribbon cable, and a full size HDMI connector for the display. Just slot it all together and connect it!

Finally, we can connect the night vision camera with its included camera ribbon cable. Note that you’ll need to appropriately configure your Raspberry Pi to talk to the display. Some displays such as the official Raspberry Pi 7” won’t require any setup, while our HDMI 7” display required a custom driver download. Refer to your display’s online documentation to get everything working properly.

After a quick power-on test to make sure everything powers up correctly, we then started putting together a breadboard circuit to test our sensors.

Sensors

There is a massive variety of sensors available for use with this project. We picked a high-accuracy temperature sensor from Adafruit, and a gyroscope/compass module from DFRobot. We were originally going to use a Bluetooth Low Energy heart rate sensor too - for “vitals monitoring” - but, unfortunately, we ran out of time to implement this.

Our selected sensors run on I2C and 3.3V, which is very handy for us - they can both be connected directly to the Raspberry Pi with no additional circuitry. I2C is great because, if we choose to add more sensors in future, we just pop them in parallel with the existing ones. We can connect up to 127 different devices on the same bus, provided that they all have unique addresses. Awesome!

We first inserted our sensors into the breadboard and connected them to the 3.3V and Ground lines. Connect their SDA and SCL lines together, and finally, use four header cables to connect them to the Raspberry Pi. That’s it!

Libraries

We’ll need to install some libraries to make all of our programming work. The first thing we’ll need to do is switch our Raspberry Pi to the Legacy Camera stack - the default libcamera library wasn’t happy.

To do this, type this into your console:

sudo raspi-config

Head to the Interface settings and enable the Legacy Camera stack.

We also disabled the Raspberry Pi desktop by navigating to System > Boot / Auto Login and selecting Console Autologin. We need to do this so that we can overwrite the frame buffer in our program without the Pi’s desktop trying to interfere.

Next up, let’s get the core libraries we’ll need for interfacing with the camera installed. rpi-userland provides the Pi’s low-level VideoCore libraries that our camera code relies on, so run the commands below to get started:

sudo apt update
sudo apt install snapd
sudo reboot
sudo snap install core
sudo snap install rpi-userland --edge

We’ll also need CMake so we can compile our programs without typing a huge command each time. We then clone and build the userland repository:

sudo apt install cmake
git clone https://github.com/raspberrypi/userland.git
cd userland
./buildme

Ok, one more! We’re using a simple 8x8 bitmap font library for displaying our text. There are more advanced methods of doing this, such as using the FreeType library, but we’re choosing to write our own simple font renderer. You can install this font library by downloading/cloning the following Git repository to your Pi:

git clone https://github.com/dhepper/font8x8.git

Framebuffer Experiments

We challenged ourselves to do the software aspect of this project a little differently than we’ve seen online. Normally, many makers will use libraries such as Libcamera in Python to access data from the camera and process it in Python. While this is effective, it’s not super fast, and we’ve seen various projects have trouble with framerate and performance when this method is used.

For this project, we’re writing our entire camera processing code in C. This is a daunting task, but, if we can pull it off, it will let us reap massive performance gains. We’ll be operating close to the hardware, and won’t have to worry about the performance overhead that Python adds on top.

We’re prioritising framerate in this project because a laggy or stuttering night vision display is to be avoided at all costs. There is a reason why most new VR gaming systems use 90Hz or more screens to reduce motion sickness! Our headset should feel as close to reality as possible!

Essentially, we’ll be solely responsible for writing to the framebuffer - the region of memory the Raspberry Pi reads from when it draws an image to the screen. We’ll have to manage the pixel data ourselves, and ensure that any data we receive from the camera is correctly parsed. We’ll also have to write our received sensor data to the screen as it arrives. Because we’re dealing with raw pixel data, it isn’t as simple as doing something like this:

screen.writeText("Hello There!");

We instead have to write the individual pixels of this text to the screen. To demonstrate this, if you have a Linux machine or Raspberry Pi with a screen attached lying around, run this command:

sudo cat /dev/urandom > /dev/fb0

This results in a noisy mess of colours being written to the screen. What’s going on here? We’re quite literally writing directly to the graphics framebuffer! By grabbing a random stream of bytes from /dev/urandom and redirecting it to /dev/fb0, we’re just putting random bytes on the screen. It’s that easy to put stuff on the screen in Linux.

When we first started learning Linux/C, this was a bit of an “Oh, wow!” moment for us. Computers aren’t really that complicated - it’s the tasks we do on them that are. Of course, we’re only writing to the framebuffer once. As soon as something else writes to the framebuffer - such as text in the console - our beautiful mess of colours is overwritten. Our Night Vision camera code will need to continuously write to the framebuffer so that new frames are shown as often as possible.
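To make that concrete, here’s a minimal sketch of the approach our C code takes - open the framebuffer device, ask the kernel for its resolution, memory-map it, and write raw RGB565 pixels. This is a stripped-down illustration (no error handling), not our full main.c:

#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fb = open("/dev/fb0", O_RDWR);        // The framebuffer device
    struct fb_var_screeninfo vinfo;
    ioctl(fb, FBIOGET_VSCREENINFO, &vinfo);   // Ask for resolution and bit depth

    // Total framebuffer size in bytes (2 bytes per pixel in RGB565 mode).
    size_t fb_size = vinfo.xres * vinfo.yres * (vinfo.bits_per_pixel / 8);

    // Map the framebuffer into our address space so we can write pixels directly.
    uint8_t *fbp = mmap(0, fb_size, PROT_READ | PROT_WRITE, MAP_SHARED, fb, 0);

    // Fill the whole screen with a single RGB565 colour.
    const uint16_t colour = 0x07E0;           // Pure green
    for (size_t i = 0; i < fb_size; i += 2) {
        memcpy(fbp + i, &colour, 2);
    }

    munmap(fbp, fb_size);
    close(fb);
    return 0;
}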

C Code

Let’s get to writing some C! To write the code itself, we opted to connect Visual Studio Code on our desktop to the Raspberry Pi over SSH with a remote development plugin. We have a link in our resources section if you’d like to learn how to do this.

The bread and butter of our project is main.c, which you can download in its entirety from our Project Files. It’s a heavily modified version of GitHub user Damdoy’s raspberry-pi-minimal-camera-example code - you can view the original version in the resources section.

Our version is nearly twice as long as the original code, so we won’t be showing how it all works here. We also have our Makefile, which runs whenever we type ‘make’ into the terminal - this compiles and runs our project automatically, although you can configure it to do any development task you’d like. The Makefile builds the camera executable, so aside from running make, the program can also be started by typing ‘./camera’ into the terminal.

Keen-eyed readers may spot a Python file or two here! We’ll come back to this once we get to programming our sensors.

Colour Format

During the development of this project, we were faced with an interesting problem - our camera outputs RGB888 format colour, while our screen uses RGB565 colour, at least in the configuration we had it set to. For those unfamiliar, RGB888 is a format where 24 bits are dedicated to representing a colour on screen - 8 bits each for the Red, Green and Blue channels. It’s typically stored in 32 bits, with the extra 8 bits used for “alpha”, or transparency, information. RGB565 is the same idea, but with only 16 bits: 5 bits each for the Red and Blue channels, and 6 for Green - the human eye is more sensitive to green wavelengths, hence the extra bit.

We adapted an algorithm we found online for converting RGB888 to RGB565 as follows:

uint8_t r_in = buffer->data[camera_idx+0];
uint8_t g_in = buffer->data[camera_idx+1];
uint8_t b_in = buffer->data[camera_idx+2];
uint16_t b = (b_in >> 3) & 0x1f;
uint16_t g = ((g_in >> 2) & 0x3f) << 5;
uint16_t r = ((r_in >> 3) & 0x1f) << 11;
col = (r | g | b);

This code reads from the camera buffer, and uses some clever bit-shifts to convert the 8-bit colour into 565 colour. The output variable, ‘col’, is a 16-bit number that gets written to our screen. To fill the entire screen with mostly green and a little bit of blue (R = 0, G = 63, B = 5), we can use the following code:

const uint16_t colour = 0x07e5;
// One iteration per pixel, writing two bytes each time.
for(int i = 0; i < screen_size_x*screen_size_y; i++) {
    memcpy(fbp + i*2, &colour, 2);
}

We wrote a routine that runs at 60Hz to continuously grab data from the camera, resize the image and then convert the image to RGB565. We then have to split the image and write a copy to each half of the screen.

Note that because we are only using one camera, we don’t have the ability to simulate stereoscopic (3D) vision here. It’s really just a 2D image, projected twice onto the display.
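The per-pixel part of that routine is essentially the conversion code above, wrapped in a loop that writes each pixel twice. A simplified sketch - reusing the buffer, fbp and screen_size variables from our main.c, with no text overlay or bounds checking - looks like this:

// For every pixel in the left half of the screen...
for (uint32_t y = 0; y < screen_size_y; y++) {
    for (uint32_t x = 0; x < screen_size_x / 2; x++) {
        // Read the matching RGB888 pixel from the camera buffer.
        uint32_t camera_idx = (y * (screen_size_x / 2) + x) * 3;
        uint8_t r_in = buffer->data[camera_idx + 0];
        uint8_t g_in = buffer->data[camera_idx + 1];
        uint8_t b_in = buffer->data[camera_idx + 2];

        // Convert to RGB565, as shown earlier.
        uint16_t col = ((r_in >> 3) << 11) | ((g_in >> 2) << 5) | (b_in >> 3);

        // Write it to the left half, then again half a row along for the right half.
        uint32_t idx = (y * screen_size_x + x) * 2;
        memcpy(&fbp[idx], &col, 2);
        memcpy(&fbp[idx + screen_size_x], &col, 2);
    }
}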

If we compile and run our program, we can now see a real-time video feed from our camera pop up on the screen. It’s very smooth with almost no latency; however, it does have some visual tearing and the resolution isn’t the greatest.

Drawing Text

Remember when we installed that font library earlier? It’s going to come in handy! Our font library is quite simple - it’s just a big array of numbers, where each bit corresponds to whether a “pixel” of a letter should be filled in or not.


{ 0x0C, 0x1E, 0x33, 0x33, 0x3F, 0x33, 0x33, 0x00},   // U+0041 (A)
{ 0x3F, 0x66, 0x66, 0x3E, 0x66, 0x66, 0x3F, 0x00},   // U+0042 (B)
{ 0x3C, 0x66, 0x03, 0x03, 0x03, 0x66, 0x3C, 0x00},   // U+0043 (C)

This means that our text isn’t ‘vectorised’ - so scaling the text up won’t make it smoother. However, we quite like the retro look of this font.

To write the font to the screen, we simply have to decide whether our letter of choice should be drawn at a particular pixel. The example code below looks quite complex, but in reality, it just writes white wherever the font has a pixel set, and otherwise writes the camera’s data.

if(y/FONT_SCALE < 8 && font8x8_basic[(int)'D'][y/FONT_SCALE] >> x/FONT_SCALE & 0x1) {
    col = 0xFFFF;
} else {
    //Write the camera data as usual.
    uint32_t camera_idx = (y*(screen_size_x/2)+x)*3;
    if(camera_idx > CAMERA_RESOLUTION_X * CAMERA_RESOLUTION_Y * 3) {
        break;
    }
    // … RGB888 to RGB565 code here
}
//Copy to first half.
memcpy(&fbp[idx], &col, 2);
//Copy to last half.
memcpy(&fbp[idx+screen_size_x], &col, 2);

When running this code, we can see our cute little letter ‘D’ in the top-left corner of each screen half:

Awesome! We then wrote a rather large function that can handle font spacing, scaling, strings, and text colours. It’s quite a handy function! We can call it like this:

drawScreenText("DIYODE Test", 100, 100, 2, 2, 0xFFFF);
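We won’t reproduce the full function here, but a cut-down sketch shows the general shape. The parameters mirror the call above - string, position, X and Y scale, then colour - though this simplified version skips clipping and blending, and assumes the fbp, screen_size_x and font8x8_basic variables from our main.c:

void drawScreenText(const char *text, int pos_x, int pos_y,
                    int scale_x, int scale_y, uint16_t colour) {
    for (int c = 0; text[c] != '\0'; c++) {
        const char *glyph = font8x8_basic[(int)text[c]];
        for (int y = 0; y < 8 * scale_y; y++) {
            for (int x = 0; x < 8 * scale_x; x++) {
                // Only draw where the font bitmap has this bit set.
                if ((glyph[y / scale_y] >> (x / scale_x)) & 0x1) {
                    int px = pos_x + c * 8 * scale_x + x;
                    int py = pos_y + y;
                    memcpy(&fbp[(py * screen_size_x + px) * 2], &colour, 2);
                }
            }
        }
    }
}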

Static text is boring. Let’s hook up our sensors and make our HUD complete!

Sensor Setup

If your sensors are connected, it’s worth checking whether they can be detected using the i2cdetect command.
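The i2cdetect tool comes as part of the i2c-tools package (a quick ‘sudo apt install i2c-tools’ will grab it if it’s missing). On recent Raspberry Pi models, the sensors sit on I2C bus 1, so the command looks like this:

sudo i2cdetect -y 1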

You should see that the addresses of your sensors appear in the grid.

To install the Python library for the temperature sensor, we can run the following commands:

cd ~
git clone https://github.com/adafruit/Adafruit_Python_MCP9808.git
cd Adafruit_Python_MCP9808
sudo python setup.py install

Wait a minute, Python? Aren’t we using C? We opted to use Python for this part of the project thanks to an abundance of easy-to-use libraries for our sensors. Interfacing with our sensors in C is most definitely possible, but it’s an effort to get everything working.

To install the libraries for our Gyroscope and Compass module, we completed the same process with the adafruit_lsm303_accel and adafruit_lsm303dlh_mag modules - you can see the full command list on Adafruit’s website.

The Python code is much simpler than our C code, and simply grabs our sensor data and writes it out to a text file.

while True:
    s_temp = temperature.readTempC()
    s_accel = accel.acceleration
    s_mag = mag.magnetic
    #print(s_temp)
    mx, my, mz = s_mag
    heading = -math.atan2(my, mx) * 180 / math.pi
    if heading < 0:
        heading = 180 - heading
    s_abs_accel = math.sqrt(sum(pow(dim, 2) for dim in s_accel)) / 9.8
    print(f"Temperature: {s_temp:.2f}°C \t Gravity: {s_abs_accel:.2f}G \t Bearing: {heading:.0f}°")
    with open(file_name, 'w') as file:
        # Write the content to the file
        file.write(f"{s_temp:.2f}\r\n{s_abs_accel:.2f}\r\n{heading:.0f}")
    time.sleep(0.3)

We’re getting the temperature data, the heading from our compass module (the maths is likely not reliable - we’ll improve this), and the total Gs of acceleration, and writing it all out to a .txt file. We can then read the data in C like this:

get_data(); // This function uses fscanf() to read the values back out of the .txt file.
char screenPrint[32];
sprintf(screenPrint, "%.2fC  %.2fG  %.0fdeg", temp_c, gravity, bearing);
drawScreenText(screenPrint, 50, 460, 2, 2, 0x07EC);
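For reference, get_data() itself is only a few lines. A minimal sketch - assuming float globals for the three values, and a hypothetical sensor_data.txt as the file the Python script writes to - might look like this:

#include <stdio.h>

float temp_c, gravity, bearing;

void get_data(void) {
    // Open the text file that the Python sensor script keeps overwriting.
    FILE *f = fopen("sensor_data.txt", "r");
    if (f == NULL) {
        return; // Keep the previous values if the file isn't ready yet.
    }
    // The file holds three numbers, one per line: temperature, G-force and bearing.
    fscanf(f, "%f %f %f", &temp_c, &gravity, &bearing);
    fclose(f);
}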

After some experimentation with the placement and colours, we ended up with this overlay:

You may notice that some readings don’t quite make sense - it’s cold in Australia at time of writing, but not THAT cold. We should also be seeing a G-force reading of ~1G thanks to the force of gravity. We’re not sure exactly what is going on here, but we suspect something on the I2C bus is interfering with the signal. Unfortunately, we didn’t get time to fully debug this issue. Regardless, it looks absolutely awesome in the headset view! It really feels like there is a HUD hovering in front of you!

Main Build:

Additional Parts Required: (Jaycar codes unless otherwise noted)
6x 3W 16mm 850nm LEDs: LEDsales 3W_850NM_16MM-30
6x 20mm x 20mm Heatsinks: LEDsales 20X20MM_FAN_HS
2x LED Drivers: LEDsales CN5711_PCB
Small Heatsink for LED Drivers: LEDsales HS18X13X6
USB-C Cable: WC7755
Thermal Tape: NM2790
Prototyping Board: HP9554
Female Header Strip: HM3230
Female-Female Dupont Cables: WC6026
Mobile Phone VR Headset: -
Double Sided Tape: -

Great, we know all of the technical stuff is working! Let’s finish off the project by assembling our final build.

3D Printing

We have two 3D printed parts for our build. We recommend printing them in black or dark grey. While the parts are designed around the specific VR headset we’re using, they can be adapted to fit whatever headset you have. As long as you’re comfortable with a little elbow grease and some side cutters, it should work fine!

Assembling

Let’s put together the build. All of the connections are exactly the same as the fundamental build, just in a much more compact space - with the exception of some more powerful infrared LEDs.

To begin with, we first stuck our camera to the front recess of the enclosure. The signal cable has a slot that it can be slid through to save space. Double sided tape is your friend in this project!

Next up, we’re adding some more infrared LEDs to get a stronger video signal. These LEDs are no joke for their size - rated at 1A, they can handle around 50 times the current of an average 5mm infrared LED! With a 3W power rating, we’ll need to make sure we’re running them with heatsinks.

We picked 6 20mm radial heatsinks. To insert them into the circular slots in the front of the enclosure, we used a bench vice to press them in. Be very careful doing this! It’s very easy to destroy the plastic if too much force is applied in the wrong place. We recommend using another heatsink on the other side of the plastic while clamping down. You may have to do this a few times to get the angle right.

To apply the LEDs themselves, we’ll need some way of both adhering the LEDs to the heatsinks and providing a thermally conductive bond. We’re using thermal double-sided tape here - the same type used for cooling VRMs on desktop graphics cards. We cut a 1cm x 1cm square for each heatsink and placed the LED units on top.

We then ran some white wires between the LEDs so that each side is connected in series - three LEDs in each string. To test that everything was working, we connected one side at a time to our lab bench power supply. At 5V, our LEDs draw about 0.83A and get warm to the touch.

The visual appearance of infrared LEDs is a really interesting topic. Infrared LEDs that are strong enough to emit some residual visible light will show a dim red glow when seen in person. However, most digital cameras are still sensitive to infrared - even with the IR filter we talked about in a previous section - so photos of infrared LEDs show a pink glow instead!

We’ll also need to provide a constant current driver so that our LEDs do not overheat. We’re using a small 1.5A adjustable driver, which can be powered on 5V easily. Nice!

We connected 5V and ground to the input, and the 3 series IR LEDs to the output. After repeating the soldering process for both sides, we then mounted the drivers to heatsinks and affixed everything to the enclosure with double-sided tape. Bear in mind that the drivers we’re using are linear drivers, so they’ll heat up more than you may expect, especially if they are forced to drop a large voltage across them.
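To put a rough number on that: the heat in a linear driver is simply the voltage it drops multiplied by the LED current. Assuming, say, the three series LEDs drop around 4.5V in total, running from our 5V supply at 0.83A leaves the driver dissipating only about (5 - 4.5) x 0.83 ≈ 0.4W, which a small heatsink handles easily. Feed the same string from a 9V supply, though, and the driver would have to burn off (9 - 4.5) x 0.83 ≈ 3.7W - far more than a heatsink this size will comfortably shed. Those forward voltage figures are ballpark assumptions, so check your LED datasheet.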

Finally, we can stick our Raspberry Pi down using - you guessed it - more double-sided tape! Make sure to connect all USB and display cables before sticking it down, because they’re a pain to connect after the fact.

Let’s solder up our sensors. We’re not planning to re-use these in a different project, so we’re soldering them directly - not using headers. We’ve soldered these in quite a compact configuration. If you’re using additional or different sensors, we suggest choosing a layout that will allow everything to fit in the enclosure.

We then connected four wires - 5V, Ground, SDA and SCL. All are soldered in parallel with each sensor, and lead to a four-pin male header. After chopping down the board, we stuck it in the enclosure with yet more double-sided tape.

Let’s close everything up! Make sure there is nothing that could potentially short out with the underside of the display, as it’ll be pretty cosy inside the enclosure.

Four screws hold everything together, although we found we could only use two - the other two were unfortunately misaligned. Anyway, with a bit of poking and prodding, our electronics sandwich is ready!

To attach our sandwich to the headset, we’ll first need to push out the pin that acts as the hinge for the phone holder. We used a fine-tipped screwdriver and a hammer.

We then reinserted the pin into our 3D printed enclosure. It should be nice and tight, and shouldn’t move at all.

To hold the top on, we zip-tied the top of the frame to the headset. It makes accessing the screen a little annoying, but it ensures that it won’t jiggle around when in use.

We’re done! All we have to do now is pop a capable power bank in our pocket (preferably capable of 2A+), connect our USB cables, and find a suitably dark backyard or garage! Or, y’know, wait until nighttime.

IR LEDs

A quick word of warning on IR LEDs (and any non-visible form of radiation) - your eyes do not respond to IR light. Normally, your pupils would constrict when you look at a bright light, and you may experience pain when looking directly into it. Because the IR light from LEDs is not visible, you will not realise that extremely bright light is entering your eyes - especially if you look directly into our Night Vision headset. You won’t have a blink reflex, and your pupils won’t constrict, which increases the chance of eye damage.

What is the relative danger of our project versus, say, the IR floodlights used in many CCTV installations? Probably not much. But, when building and testing this project, you’re likely to be looking at these LEDs a lot, so try to minimise your exposure when you can.

Testing

This is the fun part! After putting everything together and running our C program, we put on the headset for the first time and tried walking around. The first thing we noticed was that the FOV (Field Of View) was quite limiting. It felt as though we were permanently looking through a telescope, which makes it difficult to coordinate walking and judge distance. However, our code is not using the camera’s full field of view, so when we update our code in future, we should be able to reduce this effect substantially.

Once we got used to that though, we realised just how incredible the IR illumination is. The VR headset really is immersive, and combined with a 60Hz frame rate and almost no latency, it’s an amazing experience. We could see objects about 4-5m away with relative clarity in pitch blackness, and once we enabled the additional infrared power LEDs, we doubled that distance up to around 10m.

Something else awesome about this project is that it lets us get a glimpse of the world in - literally - a new light. For example, the focusing light that smartphones use is very apparent if shone directly at our headset.

The actual brightness of the LEDs on our headset blew us away. We held up a glossy computer monitor to reflect the six LEDs back to the camera. The image immediately shows 6 blindingly-bright “white” light sources - it looks like something straight out of a sci-fi movie. It’s actually quite haunting seeing the headset’s reflection in something reflective on the other side of the room.

After a few minutes of walking around, we found that the headset gets heavy very quickly. The screen, Raspberry Pi, night vision camera, and circuitry all add up in terms of weight, which causes it to sit uncomfortably on the user's nose. If we revisit this project in future, we’ll definitely move most of the processing and sensors to somewhere else that isn’t the user’s head.

However, the project, even in its current state, is one of the most successful projects we’ve done to date. We’re very pleased with the result, and we’re glad we wrote everything in C - the performance has paid off.

Where To From Here?

Ok, we’ll admit - Night Vision isn’t the most useful thing in the world. A simple torch is much more effective if you don’t mind throwing visible light around. However, this is a project with a ton of novelty appeal - with uses in everything from nighttime Airsoft matches to “exoskeleton” type projects. We’re quite interested in investigating the latter - essentially, how many equippable gadgets can we give the user to augment their awareness and control of the environment? Night vision, physical strength augmentation with stepper motors and enhanced hearing (see the Ultrasonic Microphone project in Issues 68 and 69!) sounds like an awesome project to be fully prepared for the ChatGPT robot uprising.

Alright, that last part is probably not likely - yet. Regardless, it’s a cool idea! There are also a number of quality-of-life upgrades that could be made to improve this project. Using a better camera, screen, LEDs and code would all be good news for the performance of our night vision, but there is only so much we can do in a month! We’d love to see how far this project can go, so be sure to clue us in with @diyodemag if you come up with anything cool. ■
