Monitor the weather, capture meteors, and create awesome time-lapses of the night sky with this All-Sky Camera! It even has an inbuilt weather station and a Home Assistant dashboard for smart automation! - By Liam Davies
BUILD TIME: A Weekend
DIFFICULTY RATING: Intermediate
Ah yes, the humble weather station. Some makers call it a rite of passage for new electronics enthusiasts, and evidently so - there are literally thousands of designs online for 3D-printed weather stations and their electronics.
But this month, we went one step further by using an upwards-facing camera to monitor the sky in addition to the regular environmental sensors found in most mainstream weather stations. The idea of using a camera is that it is both functionally and aesthetically useful, producing practical information about the conditions overhead as well as incredible images of the sky.
During the day, the camera can photograph cloud patterns, the path of the sun, overhead planes and rain. At night, it’s able to take awesome long-exposure photos of the Milky Way, along with satellites, thunderstorms and meteor showers. We can also stitch the photos we gather into a mesmerising timelapse over many hours (or days) if desired.
Besides making cool photos, what use does an All-Sky Camera actually have? The most common use among hobbyists is in the astronomy and photography community, where clear conditions - free of clouds and rain - are required for observing or photographing the night sky. However, they are also very popular in research, where they provide a visual indication of current weather and sky conditions for applications such as solar power forecasting, atmospheric transparency measurement and defence monitoring.
Any camera can be used to take photos of the sky, but we’re looking for a specific setup to capture the entire sky at once - hence the “All” part of All Sky Camera! You could, of course, use a bunch of cameras facing different directions to create panoramas of the night sky, but that’s not automatic or easy to set up.
The results from an All-Sky Camera under dark skies are stunning!
We’re using a strong fisheye lens to gather a whopping 180-degree field-of-view for our camera, which lets us see the whole sky in one photo! It heavily distorts the edges of the photo, so it’s not useful for seeing high-detail objects on the horizon, but objects directly in front of the camera can be resolved without much trouble.
In this project, we were interested in measuring the current cloud cover of the location where the All-Sky Camera is installed. Many weather stations don’t include a sensor for this, as it’s typically reserved for applications in astronomy such as at observatories where a clear sky is essential. We’re mainly installing this sensor as it provides quantifiable data about how clear it is outside, without looking at the camera’s images.
So, how do we measure the cloud cover above us right now? We could, of course, use our All-Sky Camera in conjunction with some clever code (or even AI) to figure out if there is currently cloud overhead. This could be as simple as checking whether we can see the stars or the sun; however, this approach is prone to false positives (e.g. a circular, bright cloud might register as the sun).
Instead, we’re using a dedicated sensor for this, which may come from an unlikely source - an infrared thermometer! This may seem like a strange sensor to do this job, but it’s actually extremely effective.
If you own a non-contact infrared thermometer (typically with a red laser dot for aiming), walk outside on a sunny day and point it at the sky. The result may surprise you!
Yes, that’s nearly 33°C below freezing! Now point it at some clouds, and notice that the temperature has shot up again.
If you try this experiment again at night, you may see even more dramatic changes in temperature. What’s going on here? Is the thermometer malfunctioning?
When measuring a clear sky, we’re actually measuring the temperature of the Earth’s upper atmosphere. That is much, much colder than the surface of the Earth, and the only reason we’re not seeing the temperature of space is the low (but not negligible) IR radiation of atmospheric gases. Depending on the water vapour in the atmosphere, there may also be large changes in the temperature, as the amount of IR transmitted varies with humidity. Pointing the thermometer closer to the horizon on a clear day will read higher temperatures, as we’re hitting more atmospheric gases before eventually reaching the upper atmosphere.
However, low clouds and rain clouds are almost completely opaque in IR, so their temperature is directly read on the IR thermometer. Since they’re at a much lower altitude than the upper atmosphere, the temperature is much more comparable to the ambient temperature on ground level. High clouds are slightly warmer than the upper atmosphere, so we’re still able to observe a small temperature difference.
So, that pretty much sums it up! We compare the sky’s temperature with the ambient temperature. If there is a large difference, we can conclude that there are very few clouds in the sky, and if they are almost the same, we know that it is currently overcast.
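If you want to experiment with this logic before wiring anything up, here’s a minimal sketch in Python. The -5°C and 0°C thresholds are just illustrative starting points (they mirror the ones in our full Python script) and are worth tuning for your local climate:

```python
def cloud_status(sky_temp_c, ambient_temp_c):
    """Classify cloud cover from the sky/ambient temperature difference.

    A large negative difference means we are 'seeing' the cold upper
    atmosphere (clear sky); a small difference means low, warm cloud.
    The -5 and 0 degree thresholds are illustrative - tune for your site.
    """
    diff = sky_temp_c - ambient_temp_c
    if diff <= -5:
        return "CLEAR"
    elif diff <= 0:
        return "CLOUDY"
    return "OVERCAST"

print(cloud_status(-25.0, 18.0))  # a cold sky on a mild day -> CLEAR
```

The same comparison, with some scaling, can also be turned into a rough percentage cloud-cover figure, which is what we do later.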
The module we’re using for this is the MLX90614 IR thermometer, which is very similar to the units that are used in non-contact thermometers, and ours may even be the same internal component. It’s typically sold as GY906 in most electronics retailers and can be found mounted on a breakout board or by itself. It also includes an ambient temperature sensor which is very handy!
We’re also using a few other sensors to complement the sky-monitoring components. In addition to a good quality BME280 sensor for measuring temperature, humidity and air pressure, we’re also using a VEML7700 light sensor. Both of these sensors are fairly accurate, with enough resolution to measure down to a couple of decimal places.
|Parts Required:||Jaycar||Altronics||Pakronics|
|1x Raspberry Pi#||XC9104||Z6302H||PAKR-A0288|
|1x Raspberry Pi HQ Camera||-||Z6423||PAKR-A0362|
|1x VEML7700 Digital Light Sensor||-||-||ADA4162^|
|1x BME280 Environmental Sensor||-||Z6581^||DF-SEN0236|
|1x GY906 Infrared Thermometer and Ambient Sensor||-||-||DF-SEN0206^|
|1x PoE Ethernet Adapter for Raspberry Pi||-||Z6425 ^||ADA3848|
|1x PoE Injector Adapter or Switch||YN8040||D4229||-|
1x 1.55mm 180° Lens and M12 Adapter PH-LN031
Breadboarding & Prototyping Hardware Also Required. ^May not have the same dimensions as the part we used. #Significant stock shortages are affecting the availability of the Raspberry Pi 4. You should be able to find them as a kit from some suppliers, or on the second-hand market.
Our prototype build mostly revolves around checking that our All Sky Camera will work, and doing some experimentation with software and hardware.
First up, we’re making a simple circuit on a breadboard for connecting all of our sensors together. Luckily, all of our sensors are connected on the I2C bus, which makes life incredibly simple when it comes to connecting them all together.
We put all of the sensors into the breadboard, with the exception of the BME280 weather sensor. Because this has an inbuilt 4-pin gravity connector, we have to break it out to header connectors.
We then connected the power wires. Each module needs 3.3V power and a Ground connection, which we use the breadboard rails for. The two white wires are for the I2C lines, which are the SDA and SCL lines. These are all connected in parallel, so every SDA line connects to all of the others.
Finally, we connected the Raspberry Pi via the GPIO pins. This uses just four pins total, which is impressive for three sensors! Refer to the Fritzing diagram for a complete layout.
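One thing that makes this parallel wiring workable is that each I2C device responds to its own unique address on the shared bus. The defaults below are the usual ones for these breakouts (some BME280 boards ship strapped to 0x76 instead - run `sudo i2cdetect -y 1` on the Pi to confirm what’s actually on your bus):

```python
# Typical default I2C addresses for the sensors in this build.
# (Your BME280 may be strapped to 0x76 instead - check with i2cdetect.)
SENSORS = {
    "VEML7700 light sensor": 0x10,
    "MLX90614 IR thermometer": 0x5A,
    "BME280 environmental sensor": 0x77,
}

# Sharing SDA/SCL is only safe if no two devices clash:
assert len(set(SENSORS.values())) == len(SENSORS)

for name, addr in SENSORS.items():
    print(f"{name}: 0x{addr:02X}")
```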
We looked into a lot of different options for choosing a camera for this project, which led us down a rabbit hole of camera research and learning how they work. We want to be able to see the whole sky at once with decent quality, which is a harder task than it may seem. Even if you aren’t interested in building an All-Sky Camera, we hope this exploration will be an interesting insight into how digital cameras work.
There are two primary components of any camera - the sensor and the optical elements in front of it. The sensor itself is much more important than you may expect. Taking photos in daylight is not much trouble for most camera sensors - there is plenty of light, so only very short exposures need to be taken for a suitable photograph. Taking photos at night is a different ball game - a moonless night can be at least a hundred million times darker than direct sunlight. Long exposures are required, often at a higher gain than normal to get an appropriately bright photo.
To take awesome photos, a capable sensor is needed. A ‘capable’ camera is often associated with a high megapixel count, but there are actually instances where lower megapixel counts are better!
Let’s imagine we have a grid of 3x3 pixels and a grid of 2x2 pixels as an example, both of which have the same total area. These aren’t just conceptual numbers in an image, these are actual photodiodes that sit on a sensor and collect light to capture an image.
The photodiodes can be thought of as light ‘buckets’ that collect photons, which are converted into electrons for our camera's circuitry to turn into a digital number.
Having more light buckets (i.e. resolution) means we can resolve more details in whatever we are photographing, but there is a tradeoff.
No camera sensor is perfectly efficient - readout and thermal noise add spurious electrons to the signal, which makes it harder to discern what is noise and what is signal. Similar to audio and RF electronics, high-performance camera applications often talk about the ‘Signal to Noise’ ratio.
The key point is that the noise of a particular pixel is usually fairly constant, regardless of its physical size. Going back to our light-bucket analogy, let’s drill the same sized hole in each of our light buckets to represent the signal we lose thanks to the noise. It should now be obvious that the grid of 2x2 pixels will fill up much faster with light, because they are roughly two times bigger than the buckets in the grid of 3x3 pixels!
Mind you, fewer than 10 pixels won’t exactly result in a crystal-clear image, considering most modern cameras have at least 5-10 million times that - but the analogy still applies! The technical term for the physical size of each pixel is the ‘Pixel Pitch’, which, given the absolutely tiny pixels in most modern cameras, is anywhere from 1μm to 5μm. For reference, the new Samsung S22 Ultra has a 108MP camera with 0.7μm pixels, far smaller than our professional Canon 6D camera with 6.5μm pixels. The 6D would blow the S22 Ultra out of the water in terms of low-light performance, despite the much lower 20MP resolution.
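To put some numbers on that comparison, the light gathered per pixel scales with the square of the pixel pitch:

```python
# Light collected per pixel scales with its area (pitch squared).
s22_pitch_um = 0.7    # Samsung S22 Ultra pixel pitch
canon_pitch_um = 6.5  # Canon 6D pixel pitch

area_ratio = (canon_pitch_um / s22_pitch_um) ** 2
print(f"Each 6D pixel gathers ~{area_ratio:.0f}x the light of an S22 pixel")
```

With roughly constant per-pixel noise, that’s an enormous head start for the bigger pixels before any clever processing happens.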
Anyway, enough nerding out! Long story short, we actually want a lower megapixel count for this project. So long as we can see sufficient details in the sky, stars and clouds, the reduction in image noise will be significant, and the overall clarity will be great. As a bonus, it also reduces the file size!
We have a few options. The first option is an astronomy camera, like many other makers seem to be using for AllSky cameras. This is quite a niche market, but they are extremely well built and can be used for many different imaging applications. Brands such as ZWO, QHY and Player One are quite prominent, with ZWO being the most popular.
If you’d like to go this route, the ZWO 224MC is probably the best choice. Despite having just 1.2MP, it has awesome performance and even includes a lens specifically for use as an All-Sky Camera! You can also get monochrome (black and white) versions of these cameras which have extremely high sensitivity, great for projects where colour isn’t important.
Unfortunately, it was out of stock in Australia at time of writing, so we ended up forgoing the ZWO camera and went with our second option - the Raspberry Pi HQ camera. It’s a 12 megapixel camera, and has a number of possible lens attachments and mounting solutions.
It will still provide excellent results, however, it won’t be as rugged or fast as the ZWO camera.
Lens choice can be limited depending on your location, but here in Australia we managed to find a 180 degree lens for our Raspberry Pi Camera with a 1.55mm focal length. It’s not professional quality, so don’t expect spotless images - this is another reason why high resolution isn’t always the answer to everything, sometimes the lens is the limiting factor.
These cheaper Raspberry Pi lenses typically have poor colour correction, so blue halos are seen around lights and focusing can sometimes be tricky. In any case, it’s very simple to install - just screw in the lens until focus is reached. You may have to use an M12 nut to secure the focus position, as it can shake and lose focus over time.
Power Over Ethernet
Power Over Ethernet (PoE) is a clever way of combining power and network connectivity into one cable, which the Raspberry Pi can leverage with additional hardware. VoIP (Voice over IP) phones, security cameras and WiFi access points all commonly use PoE, so it’s a very handy technique for transmitting power and high-speed data. There are actually a number of reasons to use it beyond just compactness.
The main reason we’re using PoE here is to avoid the need to install power in a high or roof-mounted location (given that the All-Sky Camera needs to be clear of trees and other obstacles). 230V outlets and rain don’t mix very well!
It also simplifies dealing with the cables entering and exiting the All-Sky Camera. With this setup, the total number of cables that need to be water-tight is a grand total of one - the Ethernet cable itself!
We’re using a module that can supply up to 1.8A at 5V, which is also isolated! This may be useful if you’re doing precise analog measurements. Although 2.5A is typically recommended for Raspberry Pis, this is only necessary when using other power-hungry components too, such as a screen and peripherals. While we do have some additional components, such as the camera and weather sensors, in practice we probably won’t draw much over 1.2-1.3A in regular use.
Many PoE modules are sold as HATs that mount over the top of the Pi, however, the one we’re using attaches directly to the PoE and GPIO pins.
The four PoE pins are connected to the module with two pairs, providing a differential 48V to the PoE module. The module then regulates that down to 5V, which we can provide back to the Raspberry Pi via the GPIO pins.
Note: Providing power directly to the Raspberry Pi via the GPIO 5V pins isn’t a bad idea by any means, but you should be aware that by doing this, the 5V fuse is no longer wired in series with the rest of the Pi. This means that if you short the Pi’s GPIO pins, the PoE module (or whatever is providing 5V) will be directly shorted, potentially damaging it and the Pi. Alternatively, you can use the test pads on the underside of the Raspberry Pi to connect to 5V in series with the fuse.
Ethernet ports don’t usually have a 48V line unless you directly inject the power, so you’ll need a PoE injector such as this one from TP-Link:
Most decent injectors will come with a wall wart adapter that provides the power, which can be connected inside your house, safely away from rain and the nasty outside world. The input side of the injector needs an ethernet cable which will be connected to your internet router, as any other device would be. The output side has another female Ethernet socket that has the PoE line on it. You can connect a long cable to run outside to the Pi here, which we’ll do in the main build.
Our All-Sky Camera is interfaced through Home Assistant, an IoT-friendly platform for home data and automation. Our IoT Motor Control project in Issue 56 used it, but in this article, we’ll be going a bit further into its usage and capabilities. Since there will be a full Home Assistant server running on the Raspberry Pi, it can interface with almost any type of device in your house using the data sourced from our ASC. For example, if we wanted to turn on our house’s smart lights once the reading from the Lux sensor falls below typical twilight levels outside, that's easily possible with Home Assistant. You could even set it up to email you photos of the sunrise or Milky Way if desired!
We’re going to be installing Home Assistant on our Raspberry Pi by using a Docker container. Normally, Home Assistant runs as a standalone operating system, however, with a Docker container, we’re able to virtualise its functionality and run other programs simultaneously. To install Docker, run the following commands:
sudo apt update
sudo apt upgrade
curl -sSL https://get.docker.com | sh
sudo usermod -aG docker pi
This will update your Raspberry Pi and get the install script for Docker. You’ll also need to provide the necessary permissions to run Docker on your account.
After this, we can install Home Assistant in a new docker container. To do this, we’re setting up a Docker Compose configuration that will start the container with the settings we choose. This can be applicable to any Docker Container that you run on your Raspberry Pi, not necessarily just Home Assistant. Docker Compose makes it easy to run Docker containers by putting in the configuration just once, and it also supports auto-restarting if the container crashes.
Create the compose file like this:
sudo nano compose.yml
And enter the code below into it:
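For reference, a typical Home Assistant Container entry looks something like this - the image tag and volume paths are just a common setup (we map /homeassistant on the Pi to the container’s /config folder, which is where the configuration.yaml we edit later lives):

```yaml
services:
  homeassistant:
    container_name: homeassistant
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - /homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
    privileged: true
    network_mode: host
```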
Press Ctrl+O on your keyboard to save the file, press Enter to confirm, then press Ctrl+X to exit. Then, just run:
docker compose up -d
This will install and run the Home Assistant image, which doesn’t require any compilation or messing about! The newly-running Home Assistant is accessible from any device on your network by navigating to http://[hostname]:8123 in your web browser. We changed our hostname to allsky, which we recommend doing as the AllSky software we’re using later will do it anyway.
You’ll be asked to register an account, and make sure to remember your password as you’ll need it to login when working with Home Assistant!
After you’ve registered and set your location, you’ll be taken to the Home Assistant dashboard where you can work with many different integrations to create a super simple but aesthetically attractive dashboard front-end.
Our AllSky camera will send its sensor data through an MQTT server, which makes it incredibly easy to send and receive data between Internet of Things programs. We can set up an MQTT server by installing the Mosquitto server as another Docker Container, in the same compose.yml file we were using before:
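For reference, a minimal Mosquitto service entry looks something like this (added under the same services: key; port 1883 is MQTT’s default, and the local volume path is our own choice). Note that Mosquitto 2.x may also need a small mosquitto.conf containing 'listener 1883' and 'allow_anonymous true' before other devices can connect:

```yaml
  mosquitto:
    container_name: mosquitto
    image: eclipse-mosquitto
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto:/mosquitto
    restart: unless-stopped
```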
This code is also inserted under the services section of the same compose.yml, after which the MQTT server should be set up for you. To integrate it into Home Assistant, you’ll need to add the MQTT Integration under the Settings panel.
While we’re here, we can also add the ‘Generic Camera’ integration to convert the AllSky camera into a basic RTSP security camera! Once added, we can configure it as follows:
You can replace the URL with any camera image currently displayed.
Going back to the MQTT integration, we can then add new topics in the configuration.yaml file within Home Assistant. To get to the file in our case, we ran this command:
sudo nano /homeassistant/configuration.yaml
This will open the config file and allow us to make changes to the integrations and devices running within Home Assistant. The code below shows some of the snippets we inserted to support the sensors we’ll be hooking up shortly with a Python script. The most important part to get right is the state_topic entry, which is how data is sent and received through the MQTT server. Additional formatting can be applied to source data from any sensor you’re interested in using.
- name: "Ambient Temperature (IR)"
- name: "Cloud Cover"
- name: "Cloud Cover Status"
… (more sensors omitted)
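Fleshed out, those entries look roughly like this - the topic names and units here are our own convention, and must match whatever topics your Python script actually publishes to:

```yaml
mqtt:
  sensor:
    - name: "Ambient Temperature (IR)"
      state_topic: "allsky/ambient_temperature"
      unit_of_measurement: "°C"
    - name: "Cloud Cover"
      state_topic: "allsky/cloud_cover"
      unit_of_measurement: "%"
    - name: "Cloud Cover Status"
      state_topic: "allsky/cloud_cover_status"
```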
For the prototype build, we wanted to have a crack at writing our own software for collecting weather data and relaying it to the Home Assistant server. We wrote our MQTT client in Python, which queries and submits data from the sensors to the relevant MQTT topics.
Below are the libraries we installed:
sudo pip install paho-mqtt
sudo pip install adafruit-circuitpython-veml7700
sudo pip install RPi.bme280
sudo pip install PyMLX90614
We can use ‘sudo nano send_data.py’ to start a new script. We wrote quite a lot of code for experimentation, so we’ll only show what we think is the most interesting from a functionality perspective. The rest can be found in the project files.
import paho.mqtt.client as mqtt
from smbus2 import SMBus
from mlx90614 import MLX90614
import bme280
import board
import adafruit_veml7700
from picamera import PiCamera
from PIL import Image
from statistics import median, mean
import numpy as np
We’re using quite a few libraries to make everything work, some of which you may have to install if you haven’t used them on your Raspberry Pi before.
port = 1
bus = SMBus(port)

# BME280 Code
address = 0x77
calibration_params = bme280.load_calibration_params(bus, address)

# MLX90614 Code
sensor = MLX90614(bus, address=0x5A)

# VEML7700 Code
i2c = board.I2C()  # uses board.SCL and board.SDA
veml7700 = adafruit_veml7700.VEML7700(i2c)

client = mqtt.Client("P1")   # create new instance
client.connect("localhost")  # connect to the local broker
This code sets up the process of talking to our sensors, which all run on the SDA and SCL lines. We also need to connect to our MQTT broker, which is running locally - hence we’re using localhost as the hostname.
def map_val(x, in_min, in_max, out_min, out_max):
    # Linearly map x from one range to another
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

amb_temp = sensor.get_amb_temp()
sky_temp = sensor.get_obj_temp()
diff_temp = sky_temp - amb_temp
cloud_cover = map_val(diff_temp, -10, 30, 0, 100)

status = ""
if diff_temp <= -5:
    status = "CLEAR"
elif diff_temp <= 0:
    status = "CLOUDY"
else:
    status = "OVERCAST"
b_data = bme280.sample(bus, address, calibration_params)

light_rounded = veml7700.lux
if light_rounded > 100:
    light_rounded = round(light_rounded, 2)
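With the readings in hand, each value is published to its own MQTT topic for Home Assistant to pick up. A sketch of how we batch them up - the topic names are our own convention, and just have to match the state_topic entries in configuration.yaml:

```python
def build_payloads(amb_temp, sky_temp, cloud_cover, status,
                   pressure, humidity, lux):
    """Pair each reading with its MQTT topic, formatted as strings."""
    return [
        ("allsky/ambient_temperature", f"{amb_temp:.2f}"),
        ("allsky/sky_temperature", f"{sky_temp:.2f}"),
        ("allsky/cloud_cover", f"{cloud_cover:.1f}"),
        ("allsky/cloud_cover_status", status),
        ("allsky/pressure", f"{pressure:.1f}"),
        ("allsky/humidity", f"{humidity:.1f}"),
        ("allsky/lux", f"{lux:.2f}"),
    ]

# Each pair is then sent with the paho client set up earlier:
# for topic, payload in build_payloads(...):
#     client.publish(topic, payload)
```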
While we’re managing the sensors, we also wanted to operate the PiCamera and calculate the correct exposure for the scene the camera is pointing at. Rather than just changing the brightness of the output image, the ideal way to increase exposure is to increase the shutter speed - i.e. how long we expose the image for. At its most basic, auto exposure is simply a matter of looking at the histogram of the image - a tally of how common each brightness value is among the pixels.
If the peak of the histogram is in the centre, we know we’re exposed correctly - this prevents lights from becoming too bright or dark areas becoming devoid of detail. If the histogram is to the left or right, we know we need to change our exposure.
Our first thought was to calculate a new exposure based on the current histogram peak. For example, if the peak was located 75% of the way along, we could reduce the exposure by ⅓ to get it to 50%.
camera = PiCamera(framerate=0.5)  # a low framerate permits long exposures
camera.iso = 100
camera.shutter_speed = 2000000    # 2 second exposure, in microseconds
camera.exposure_mode = 'off'      # disable the built-in auto exposure
output_image = Image.open(path)
h = output_image.histogram()
target = 0.5
med_val = h.index(np.percentile(h, 50, method='nearest'))
possible_vals = len(h)
target_val = target * possible_vals
shutter_multiplier = target_val / med_val
new_shutter_speed = shutter_multiplier * camera.shutter_speed
The image histogram is calculated using the PIL library and stored in the 'h' variable. This gives us an array that corresponds to how common each brightness value is in the image.
We then calculate the median brightness value of the image by taking the 50th percentile value from the histogram and storing it in 'med_val'. We're trying to calculate a new shutter speed based on the brightness of the image we just took, so we need to set a target of how bright we want it.
We just left it at 0.5 as it shouldn't under or overexpose anything too much. Using some basic multiplication and division, we can estimate what shutter speed we need based on the current brightness and target brightness.
One problem we found with this approach was when the image was overly dark. This can often result in a median pixel value of 0 (i.e. completely black), which will crash the program as we're dividing by zero. Sure, we could approximate black to 0.0001 or some other very small value, but this would cause a huge increase in shutter speed.
But the main problem we found is that reducing the exposure like this tends to produce erroneous results and causes the camera to jump back and forth between two intermediate shutter speeds.
A much better (if slower) approach is to use small increments or decrements of shutter speed until we get the target histogram we’re looking for. We did end up implementing this, however, it wasn’t anywhere close to the performance of the software we worked with in the Main Build so we ended up not using it.
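A sketch of that incremental approach is below - the step size and tolerance are placeholders, and measure_median stands in for re-taking a photo and returning its median brightness as a 0-1 value:

```python
def settle_exposure(measure_median, shutter_us, target=0.5, step=1.25,
                    tolerance=0.05, max_iters=50):
    """Nudge the shutter speed up or down until the median brightness
    of the captured image lands near the target. Small multiplicative
    steps avoid the back-and-forth jumping of the one-shot approach."""
    for _ in range(max_iters):
        median = measure_median(shutter_us)
        if abs(median - target) <= tolerance:
            break  # close enough - stop adjusting
        if median < target:
            shutter_us *= step   # too dark, expose for longer
        else:
            shutter_us /= step   # too bright, expose for less time
    return shutter_us

# Stand-in scene: brightness grows linearly with shutter time, clipped at 1.0
fake_scene = lambda s: min(1.0, s * 5e-7)
print(settle_exposure(fake_scene, 100_000))
```

Because each step is small and bounded, a zero-black image simply keeps stepping the shutter upwards rather than crashing with a division by zero.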
The Main Build:
After our experimentation with the Prototype Build, we’re adding a tough enclosure and 3D printed support frame, as well as using some more software to add a bunch more features to our All Sky Camera.
|Additional Parts Required:||Jaycar||Altronics||Pakronics|
|1x Sealed Polycarbonate Enclosure 171x121x80mm||HB6223||-||-|
|2x 5-12V 50mm Case Fans||YX2504||F1046||DF-FIT0240|
|1x Acrylic Dome^||LA5332||-||-|
|12x M3 x 6mm Bolt*||HP0404^||H3115A||-|
|14x M3 x 2.5mm Nut*||HP0425||H3175||-|
|8x M3 x 20mm Bolt*||HP0410||H3155A^||-|
|1x Breadboard Layout Perfboard||HP9570||H0718^||ADA1609|
|1x 1/4" x 20mm Bolt*||-||-||-|
|1x Ethernet Grommet and Passthrough||-||-||-|
* Quantity shown, may be sold in packs. ^ May not have the same dimensions as the part we used.
Our control board has exactly the same wiring as the original prototype build, with the addition of a place for the PoE module to mount, which helps clean up the wiring and keeps it compact inside the main build enclosure.
We’re once again using the breadboard layout perfboards, as they’re very easy to understand and build on.
We started by adding headers for all the components we’ll be using. Unlike the first build, we’re not mounting the sensors directly on the board and instead breaking them out to get the best positions in the enclosure. All are male with the exception of the female headers for the PoE module. To get the spacing right, refer to the Fritzing diagram.
We then added wires for the SDA and SCL lines, the same way we did in the prototype build. Note that each sensor has a different layout for the data and power pins, so don’t connect the first pin across all sensors to 5V, for example.
We then added power to each of the sensors using short jumper cables. Since the perfboard has traces on the underside linking these, we don’t need to do any unnecessary soldering.
A ground and 5V wire needs to be added to bridge the power wires. The right-side male header goes back to the Raspberry Pi, and only needs five pins.
One last thing! We need to power the fans inside the enclosure that we’ll be adding shortly, so we added two 5V headers on the power lines. These are just connected with male headers and provide our 50mm fans with power. Note that these fans are 12V, so they will spin slower than they are designed for. This isn’t really a problem considering that we do not need significant airflow to keep the components inside cool, and we do have a current limit when sourcing 5V from the PoE module.
This project is designed to be outside for many months at a time, so we need to make sure anything that isn’t supposed to be inside the chassis doesn’t get in. That mainly pertains to rain and insects. However, we also want fresh air and an unobstructed view of the sky for the weather sensors and camera.
The enclosure we’ve chosen is a transparent-lid polycarbonate enclosure from Jaycar, which includes a rubber gasket for protecting against dust and water ingress on the side of the panel. We will have to see how this enclosure performs long-term, as a transparent lid may introduce its own problems. The greenhouse effect will cause the inside of the enclosure to heat up more than if it was opaque, which will hopefully be minimised by the addition of small chassis fans for air circulation.
Besides just looking cool, the transparent lid is also handy since one of our sensors, the light sensor, doesn’t need to be mounted externally or in a dedicated cutout.
The same can’t be said for the IR sensor, which needs an unobstructed view of the sky to work properly. Polycarbonate blocks IR at the wavelengths the sensor operates at (5.5μm to 14μm), so we need to expose it directly to the sky.
We thought of a few different strategies for protecting the IR sensor from the weather, such as using an angled mirror to reflect IR or using IR-passing plastic; however, all of these solutions were much more complex than simply exposing the sensor directly to the sky. We’ve seen other cloud-detection systems expose the sensor, so hopefully, the sensor is relatively waterproof!
To get fresh air, we’re using two custom-designed Stevenson screens. You’ve no doubt seen these before on weather stations - they are basically an array of downwards facing vents to prevent rain falling directly into the enclosure. These 3D printed models will require removal of material on the front and rear panels of the enclosure - we used a Dremel with a rotary cutting bit to do this. It’s definitely not the cleanest solution in the world, however, the bevels built into the model will help cover the scuffs we make with the Dremel.
In addition to the enclosure, we’re using a 3D printed bracket to organise the electronics to both improve performance and keep the entire build clean. Since the fans are helping regulate the temperature inside the enclosure, a clear airflow will be beneficial to cooling the Raspberry Pi and the PoE module, both of which can get pretty toasty, especially under direct sunlight.
The light sensor is friction fit and is pressed in place, with five male headers attached to the underside. The environmental sensor sits behind the path of the fans secured with M3 bolts and has a pre-soldered gravity connector for simple connections to the main board.
The Infrared Thermometer won’t be mounted onto the 3D printed frame, and instead will be mounted directly to the underside of the enclosure. It will still connect to the main board with its SDA, SCL and power pins.
The 50mm fans are mounted to the front of the frame with 8x 20mm M3 bolts. You may wish to use rubber washers if you find they produce vibration or unwanted noise.
The Raspberry Pi is mounted vertically to save space with 4x 5mm or 6mm M3 bolts. Unfortunately, Raspberry Pis come with M2.5 mounting holes, so as long as you’re willing to commit blasphemy and drill out the holes with a hand drill, this step is fairly simple. If not, double-sided tape does wonders.
Speaking of double-sided tape, that’s exactly what we used to secure everything in the case that needs to stay put, including the main control board. The cables are still a bit of a mess, so feel free to add cable ties to keep everything in place.
A hole needs to be drilled for the Ethernet passthrough, which will ensure no water makes its way in through the Ethernet cable. The rubber sealing grommet is placed on the outside, while the female-female adapter is screwed in from the inside.
We need somewhere for our camera to see the outside world! Since our camera can see 180 degrees at once, we need to make sure we have visible clearance across the entire top half of the box. A dome is the best option for this as it prevents most optical distortion and protects the camera from rain and dust.
Sourcing a dome was much harder than we first thought! Acrylic domes are common, but usually aren’t spherical and don’t have a mounting flange. One possible solution is to use an acrylic dome usually used as a pet viewing dome, where dogs can look outside of a fence at any angle. However, these are often at least six inches in size, far too big for our use.
We experimented with using a fake security camera from Jaycar with a dome underside. After pulling it apart and removing the plastic dome, we found it was poor quality. There were scratches and very uneven deposited areas of acrylic, leading to reduced clarity when we put the camera underneath. In fairness, there is no reason for it to be high quality given that the original product has no camera inside!
Unfortunately, many Australian sources for high-quality domes are either unsuitable in size, clarity or lead times. We ended up ordering a suitable dome from the US, but it won’t be here before this project is featured! In the meantime, we will temporarily mount the security camera dome.
We drilled a 55mm hole in the base for the camera’s lens to protrude through. We purposely made the diameter smaller than the dome so we could easily add a dew heater or other electronics on the ‘shelf’ created. In addition, we also drilled a small 8.5mm hole for the Infrared Thermometer to sit in.
To tidy things up, a custom ring was 3D printed to sit around the dome. This can be secured with silicone sealant, or Blu Tack for our temporary installation (since we are changing out the dome later). Obviously, Blu Tack won’t be particularly weather resistant, so it’s not a permanent setup.
The ring is sized so it also protects the IR sensor; however, we suspect it may accumulate water if not allowed to drain.
The prototype build was a test to see how viable writing our own software was for the Raspberry Pi HQ camera in Python. While it worked, we’re switching to a better piece of software specifically meant for All Sky cameras. Thomas Jacquin has authored an open-source All Sky Camera library, mostly written in C for high performance, with a dedicated Web GUI and many awesome time-lapse features. You can check out his awesome repository here: https://github.com/thomasjacquin/allsky
There are excellent installation instructions there - we installed the ‘allsky’ and ‘allsky-gui’ packages for both core functionality and an inbuilt web interface for easy browsing of captured media. Note that they can take some time to download the prerequisite packages - we had to wait about half an hour before ours was finished.
The installer will give you an IP address to visit once it has finished. Most networks will be able to access the camera by simply typing ‘allsky/’ into the address bar of your browser. The web GUI for this software is excellent, and will show you all of the images the camera has taken, timelapses and other media that we’ll show shortly.
You will need to configure the camera settings to suit your specific camera, lens and mounting configuration. In particular, check that the maximum exposure time suits the location where your camera is installed.
The SD card in our Raspberry Pi is a measly 16GB, which is only enough to hold a few days’ worth of data before filling up. Of course, we could change the interval between capturing images, but that’s just pushing the problem back another step.
We set up our camera to upload all images, timelapses and startrails to a local network storage server, which will be able to hold much more data, as well as host a website to display the All Sky live view from anywhere in the world. There is more detail on how to do this on the AllSky GitHub page.
We’re not completely ditching our code from the prototype build, as we’ll still be using the MQTT server for relaying the sensor data to Home Assistant. We just won’t be handling any of the camera-related functionality, as AllSky will take care of it. The only remaining code is for collecting data from the sensors. If you’re interested in this part of the code, you can download the updated code in the Project Resources.
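To give an idea of what this part of the code involves, here is a minimal sketch of a sensor-to-MQTT relay using the paho-mqtt Python library. The broker address, topic name and sensor read functions are placeholders - swap in the drivers for whatever sensors you have wired up; only the payload formatting and publishing loop are shown concretely.

```python
import json


def read_environment():
    """Placeholder for your environmental sensor driver (e.g. temperature/humidity)."""
    return 21.3, 45.0  # ambient °C, relative humidity %


def read_sky_temp():
    """Placeholder for the infrared sky thermometer driver."""
    return -4.2  # °C


def build_payload(ambient_c, humidity, sky_c):
    """Bundle one round of readings into a JSON string for Home Assistant."""
    return json.dumps({
        "ambient_temperature": round(ambient_c, 1),
        "humidity": round(humidity, 1),
        "sky_temperature": round(sky_c, 1),
    })


if __name__ == "__main__":
    import time
    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    client = mqtt.Client()
    client.connect("homeassistant.local")  # hypothetical broker address
    client.loop_start()
    while True:
        ambient, humidity = read_environment()
        # "allsky/sensors" is a hypothetical topic - match it to your dashboard config
        client.publish("allsky/sensors", build_payload(ambient, humidity, read_sky_temp()))
        time.sleep(60)
```

Home Assistant’s MQTT integration can then subscribe to the topic and expose each JSON field as a sensor entity.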
Our sensor data gathering script won’t run at startup by default. We recommend adding an entry to your local user’s crontab file, which can run scripts and actions at a given time or frequency. Type the following in your terminal to enter the crontab editor:
crontab -e
Then, add the line below to the file. Just save, exit and restart your Raspberry Pi to check everything is working properly.
@reboot sleep 60 && /home/pi/sensors/start_data.sh &
This entry automatically starts our Python script 60 seconds after booting - the delay ensures the Docker containers have enough time to start.
When accessing the Home Assistant app, data from the sensors should now automatically appear on the dashboard in whatever cards you have set up.
Bravo 6, Going Dark
When the Raspberry Pi is taking long exposures at night, you may be surprised to learn just how much light the Raspberry Pi HQ camera can actually pick up. The power and status LEDs on both the Raspberry Pi and the attached sensor modules significantly affect the quality of the exposures, so we need to disable them.
Disabling the LEDs on the Raspberry Pi itself is actually fairly straightforward. Use this command to enter the Boot config file:
sudo nano /boot/config.txt
Then, enter the following options at the bottom:
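The exact dtparam names vary between Pi models, so check the /boot/overlays/README file on your Pi; the set below is commonly used on a Raspberry Pi 4 to turn off the activity, power and Ethernet LEDs.

```
# Disable the activity (ACT) LED
dtparam=act_led_trigger=none
dtparam=act_led_activelow=off

# Disable the power (PWR) LED
dtparam=pwr_led_trigger=none
dtparam=pwr_led_activelow=off

# Disable the Ethernet jack LEDs (Pi 4)
dtparam=eth_led0=4
dtparam=eth_led1=4
```

After saving, reboot for the changes to take effect.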
This will disable the Ethernet and status LEDs on the Pi. Disabling the LEDs on the sensor modules is a much more low-tech process, coming down to a choice of either covering them with Blu Tack or desoldering them entirely - we opted for the former. In any case, your enclosure should be as dark as possible to remove stray light reflections.
One other thing worth doing is adding some form of anti-reflective material around your camera. The plastic dome will reflect the camera and nearby electronics into the image, so black felt or a similar material can be used to stop this.
There are a selection of cool data types to generate with the AllSky software.
AllSky generates timelapses as MP4 files and, by default, creates one covering the sky every day. It’s fully automatic, so there’s no configuration needed! We’ll be posting these to our social media, so stay tuned! Depending on the framerate and image interval selected, these timelapses usually run 1 to 2 minutes for a full day.
You’ve probably seen star trails before, compiled from long time lapses of the night sky with a DSLR. However, we can also do it with our All Sky Camera, fully automatically! All images taken each night are stacked into a ‘master’ file that keeps the brightest pixels from each frame, overlaying each captured star on top of the previous image. The result is an awesome star trail image that appears to rotate around the celestial pole in the sky. If you’re in the southern hemisphere, this points directly south! If the automatic star trail generation isn’t good enough, you can use software like Sequator to manually generate star trail photos.
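AllSky’s startrails tool does this stacking in C, but the core idea can be sketched in a few lines of Python with NumPy: merge each new frame into a running ‘master’ image by keeping the brightest value ever seen at each pixel, so the stars smear into trails as the sky rotates.

```python
import numpy as np


def stack_brightest(frames):
    """Combine an iterable of same-sized image arrays into one star-trail
    image by taking the per-pixel maximum across all frames."""
    master = None
    for frame in frames:
        arr = np.asarray(frame)
        master = arr.copy() if master is None else np.maximum(master, arr)
    return master
```

With Pillow, you could feed this function each night’s JPEGs via np.array(Image.open(path)) and save the result as the star trail image.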
On a side note, we left our All Sky Camera running in the office while testing and accidentally generated this cool-looking ‘star’ trail. It obviously used images from all of the ‘experiments’ we were running that day…
Lasers may or may not have been involved.
Keograms, like timelapses, are automatically generated by the AllSky software at the end of each day. Keograms were originally created for the viewing and analysis of auroras, and are almost like a timelapse formatted as an image. For each image taken that day or night, a column of pixels stretching from North to South is taken and added to the Keogram.
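The assembly step is simple enough to sketch in Python: take the middle column of pixels from each frame (the north-south line through the zenith) and lay the columns side by side, turning a whole night of images into a single time-versus-sky strip.

```python
import numpy as np


def build_keogram(frames):
    """Stack the centre column of each same-sized image array into a single
    keogram, one column per frame (time runs left to right)."""
    columns = []
    for frame in frames:
        arr = np.asarray(frame)
        columns.append(arr[:, arr.shape[1] // 2])
    return np.stack(columns, axis=1)
```

The same function works for colour frames (H x W x 3 arrays); the result simply keeps the extra channel axis.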
Unfortunately, we didn’t have enough clear sky when testing this project to make any interesting-looking Keograms, so here is a result from another maker:
This is an awesome scientific tool and can be used to visualise some incredible data. Twitter user Cees Bassa (@cgbassa) compiled their Keograms into a year-long graph to create this graphic.
The varying day and night lengths can be seen clearly throughout the year in the hourglass shape, and it’s easy to tell when there are short breaks of clear sky for a few days. You can even spot the moon changing positions and cycles in the sky throughout the year thanks to the slanted white lines! This graphic would look more like a diamond than an hourglass if we ran this experiment in Australia (because the seasons are flipped), but unfortunately, we don’t have a year to wait.
To make sure everything was working before we sealed everything up, we left one of the Stevenson screens off the camera and put it in our office car park for a day. The AllSky software automatically starts up and begins capturing images, and together with the Home Assistant dashboard, we got a lot of awesome information about the current weather conditions outside the office.
The cloud cover measurement strategy we are using works really well - during small breaks of clear sky, the sky temperature measurement from the infrared thermometer drops significantly. You can see below at around 2:30pm that the ambient temperature rises (presumably due to the sun coming out) and the sky temperature drops to about 5°C.
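The logic behind this measurement can be sketched in a few lines: clouds radiate heat back down, so the infrared sky temperature sits close to the ambient temperature when it’s overcast and plummets when the sky is clear. The thresholds below are illustrative guesses - tune them for your climate and sensor.

```python
# Illustrative thresholds - adjust these for your local conditions.
CLEAR_DELTA = 20.0     # sky at least this many °C below ambient -> clear
OVERCAST_DELTA = 5.0   # sky within this many °C of ambient -> overcast


def sky_condition(ambient_c, sky_c):
    """Classify the sky from ambient and infrared sky temperatures."""
    delta = ambient_c - sky_c
    if delta >= CLEAR_DELTA:
        return "clear"
    if delta <= OVERCAST_DELTA:
        return "overcast"
    return "partly cloudy"
```

Feeding this classification into Home Assistant alongside the raw temperatures makes it easy to automate on “clear sky” conditions.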
We tested our All Sky Camera by leaving it outside on a clear night and letting the AllSky software work its magic. Obviously, you’ll want to put it under a sky with minimal surrounding trees and houses; if that isn’t possible, mounting it as high as possible is ideal. You can also leave it on the ground like we did to see what’s going on in your backyard too! It’s best to orient the camera so that North faces upwards in the image, as this makes image processing quite a bit easier if you want to add overlays such as the constellations and star positions.
It’s awesome seeing the night sky overhead, and on a moonless night, you can see a surprising amount of detail in the Milky Way and stars. When the moon is up, it looks as though the photo is taken during the day!
Where To From Here?
We hope you’ve learned something from our exploration into making an All Sky Camera! Whether you’re keen to make your own, or just want to use specific parts of it to improve your own projects, there are still a ton of improvements we could make to this.
There are a number of sensors that would be handy additions to the weather station component of this project. An optical rain sensor, a wind speed and direction sensor, and even a UV index sensor would all be worthwhile.
Keep in mind that it’s possible to fully automate many actions within the Home Assistant app, as it supports hundreds of integrations with existing devices. It would be dead simple to set up a trigger to close a window when it starts to rain, or water your plants when humidity levels are very low. All you’d need is smart devices on the other end - perhaps a DIY smart relay or a pre-made solution.
Of course, the camera itself could be used for a variety of purposes too. It doesn’t have to be pointed at the sky - why not use it as a security camera with a live video feed? You can run this command to instantly broadcast a TCP video feed from your Raspberry Pi’s camera:
libcamera-vid -t 0 --codec h264 --inline --listen -o tcp://0.0.0.0:8888
The 180 degree camera would be great for monitoring a huge area outside a home or office, although it could be difficult to make out objects near the extremes of the lens’ image circle. The Raspberry Pi HQ camera supports C and CS mount lenses, so there is a huge variety of super wide or even telephoto lenses that could be used for almost any purpose.
There is a ton to experiment with in this project, so if you make anything cool, be sure to tag us at @diyodemag