Artificial Intelligence

Machine Learning for Makers - Part 4

Set up a Raspberry Pi with trained AI Image Recognition

Rob Bell

Issue 33, April 2020

This article includes additional downloadable resources.

Let's train an "exit" routine to improve our classifications!


As we explored in Part 3, Machine Learning is a powerful tool, yet it only knows what we train it to know.

If you tell it to classify between an apple and an orange (or as in our example, an Arduino or a Raspberry Pi), it’s naturally going to try to find the best fit, even if we show it something which is entirely outside of its trained data.

This is funny when you first toy with Machine Learning to say "I'm an Arduino", but the reality is it can hamper your results.

It's much like when we humans try a new food we haven't had before: "it kinda tastes like chicken" is a common response, because chicken is a fairly low-feature food, so it's easy to liken many new things to it.

We’re going to continue using Teachable Machine but we’ll try training it with an “exit”.

That is, a final class of images representing all sorts of everyday objects and people, to give our system an alternative classification to fall back on when neither of our primary classifications makes sense. Let's consider the decision-making that's essentially going on here.

var input = classify_my_image();
if (input == 'arduino') :
  print 'an arduino!';
else :
  print 'an rpi!';

This code is essentially what our Machine Learning classifier is trying to do. However, as you might have noticed, we don't give it an "out". What if you don't have an Arduino or a Pi? The code doesn't have a choice; it MUST output SOMETHING, because that's what you've told it to do!

Now, it's worth noting that saying the example above is "essentially what our classifier is doing" is an oversimplification with one major omission: Machine Learning derives its answer in a far less linear fashion.

With our code example, unless you show it an Arduino, you're probably going to see it classified as a Raspberry Pi. So a plane, a person, or a computer will all fall into the last classification because they don't explicitly match "arduino".

If we were to represent this more accurately in pseudocode, we'd probably go for something like:

var input = classify_my_image();
if (input [is_more_like_arduino_than_rpi] 'arduino') :
  print 'an arduino!';
else :
  print 'an rpi!';

Now, our code is still going to favour the RPi because of the flow of the question. Another comparison can be drawn with switch statements, which take more of an "is it true" approach than an "either/or" approach, which is useful for some things.

switch (input) :
  case 'arduino' :
    print 'an arduino!';
    break;
  case 'rpi' :
    print 'an rpi!';
    break;
  default :
    print 'not sure what this is';

This is arguably better than an IF statement due to the "is it true" style matching; however, there's still an order bias to the result. That is, it still works through the cases in order and exits as soon as it finds a suitable match.

However, you probably noticed the "default" line near the end. This gives us an "if no match, then do this" option within the statement.

With Machine Learning, a classifier doesn't typically have a default; however, there are two ways to achieve one. You can either include a class for everything that is NOT a suitable classification, or you can look at the confidence level of the result and handle things accordingly. Which solution is better for you will be determined by your particular problem, the training data you have, and other factors too.
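As a rough sketch of the second approach (the function and threshold here are our own, not part of Teachable Machine's snippet), the decision could look like this, assuming the classifier hands us an array of class/probability pairs:

```javascript
// Pick the most likely class, but fall back to 'Unclassified' when the
// classifier isn't confident enough in ANY of its answers.
// 'predictions' is assumed to be an array of { className, probability }.
function decide(predictions, threshold) {
  var best = predictions[0];
  for (var i = 1; i < predictions.length; i++) {
    if (predictions[i].probability > best.probability) {
      best = predictions[i];
    }
  }
  // Below the threshold, we don't trust the match at all.
  return best.probability >= threshold ? best.className : 'Unclassified';
}

// A confident match comes straight through...
console.log(decide([
  { className: 'arduino', probability: 0.94 },
  { className: 'rpi', probability: 0.06 }
], 0.8)); // 'arduino'

// ...while a low-confidence result falls through to our "exit".
console.log(decide([
  { className: 'arduino', probability: 0.55 },
  { className: 'rpi', probability: 0.45 }
], 0.8)); // 'Unclassified'
```

The threshold (0.8 here) is a knob you'd tune for your own problem: too high and real boards get rejected, too low and the "exit" never fires.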


You may remember last month, when I simply ran the webcam live into Teachable Machine, it was a little confused. Now we're going to fix that in the crudest way possible.

In order to train our model with our primary classes (Arduino and Raspberry Pi), we’re going to use the “Arduino General” and “Raspberry Pi General” image collections that we created last time. We’ve included the images in this month’s downloads in case you don’t already have them.

However, before we add our "something else" class, we need images of everyday objects and scenarios. You could grab your phone or camera and start snapping around your home or office to easily compile this "alternative" group of images. Though for simplicity, shareability, and a little bit of laziness, we've grabbed 100 images from a free stock image website.

It doesn't particularly matter where the images come from, as long as you're allowed to download them and aren't breaking any laws. In this case, it also means we can share the images with you.

This approach to gathering the images also gave us a varied mix of workplaces, homes, different technology, and things like that. It would have been difficult for us to gain this variety of images, including images of people, in many other ways. We've specifically targeted rooms and people, as well as other things we might find around our workbench, such as TVs and laptops.

Essentially, what we're trying to do here is train the classifier to understand which images are NOT in our target classifications. While the process is precisely the same as adding images we DO want to classify, it naturally develops an exit, or "Unclassified", outcome. We actually classify it as unclassified, which is an oxymoron, but it becomes useful for identifying when an image or stream doesn't actually include what we were specifically looking for.

To get this going, we’ll create a third class here called “Unclassified”, and upload the 100 images located in the resources or that you’ve collected yourself. Train your model with the default classifications.

PRESTO! I’m no longer classified as part Arduino, part Raspberry Pi! Yet we can still identify our boards when we want to!

You can also see the results that various tests with "other" objects yield.

We can even grab a Pycom board and, because it’s not part of the training data for Arduino or Raspberry Pi, it is still “Unclassified”.


It’s important to recognise that with Machine Learning, there are almost as many variables as there are stars in the sky.

While there are a handful of tunable settings available to us in Teachable Machine, this is an incredibly simplified way to train an image classifier.

With more data (such as more attributes against each image), Tensorflow can create deeper links between like images. Rather than our Raspberry Pi images having simply one data point (that is, the class name) in addition to the image, we can add attributes.
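To illustrate the idea (these field names are purely hypothetical, not part of Teachable Machine or Tensorflow), a richer training record might look something like this:

```javascript
// Today, each training image effectively carries just one data point
// beyond the image itself: its class name.
var simpleRecord = { image: 'rpi_042.jpg', className: 'rpi' };

// With extra attributes (all invented here for illustration), a model
// has far more to work with when forming links between like images.
var richerRecord = {
  image: 'rpi_042.jpg',
  className: 'rpi',
  attributes: {
    boardColour: 'green',
    hasGPIOHeader: true,
    capturedAngle: 'top-down'
  }
};
```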


Teachable Machine really is the simplest way to train and implement a simple Tensorflow-based image classifier without having to mess with Tensorflow itself. This is more than enough for simple classification tasks.

We will dive further into raw Tensorflow soon, but for now, let's try to do something practical with it.

One of the best features of Teachable Machine is the ability to download usable code with just a few clicks.

The code you're provided is nearly ready to go, depending on your platform, which is pretty handy indeed.

For our purposes, we want to use this on a web server with a browser, so we'll select Tensorflow.js and download the code. The other options are primarily for use server-side with Tensorflow itself, or with Tensorflow Lite for mobile/edge computing.

This process of exporting the model is what fundamentally allows trained Machine Learning to be deployed to other hardware, including embedded devices such as smart cameras.

The heavy lifting is done during training, to keep the deployment as light and versatile as possible.

We've deployed it to the DIYODE site, and you can see it in action there!

But we want to do much more than see it in action - we already did that on Teachable Machine. So let's deploy the trained model onto a Raspberry Pi by creating a web server, so our Tensorflow model is fully operational outside of Teachable Machine. (Note that the snippet we'll be using requires Internet access to run, so ensure your Pi has Internet access too. It is possible to localise it entirely, and we'll look at doing that later.)

CREATING A WEB SERVER WITH NODE.JS & EXPRESS.JS

Let's create a web server. For this we're going to use Node.js, a lightweight and powerful JavaScript runtime we can build a web server with. In fact, to make life easier for ourselves, we're actually going to use Express.js, which is a lightweight web framework for Node.js.

One of the major advantages of using Node.js on a Raspberry Pi compared to say, a traditional Apache server, is that we have powerful options to interact with and control the GPIO. Express.js gives us a framework to handle much of the common functionality we'll need.

Firstly, you need to update your system to ensure the latest packages are available. It's worth noting, though, that if you're using Debian Buster (i.e. the latest version), NodeJS is already installed.

In order to confirm that Node.js is successfully installed, run the following command in terminal:

node -v

You should get something like this:


This demonstrates that Node.js is installed, and we're ready to move on to creating our server. As long as you have version 10 or later, you should be good to go without any other major changes, and you can skip ahead to installing ExpressJS.

If you get a message such as "package not found", you'll have to install NodeJS yourself. First update your system.

sudo apt-get update
sudo apt-get upgrade

Then install NodeJS.

sudo apt-get install -y nodejs

Unless done already, you'll need to install ExpressJS, which simplifies our interface with NodeJS.

sudo apt-get install -y node-express

If your system can't find the node-express package, you can instead install Express using npm, Node's own package manager, from within your project folder:

npm install express

FOLDER STRUCTURE FOR THE SERVER AND TENSORFLOW.JS

You can download the entire server's configuration files including the Tensorflow content, from the digital resources. But we'll take you through things here too.

You can create your working files for the web server wherever you like, it's fairly inconsequential overall. For ease of location, we'll place ours on the desktop and call it "server". The path for this, we'll assume, is /home/pi/Desktop/server. If you have configured a different user for your Raspberry Pi you'll need to update paths.

Within the "server" folder, you'll need to create another folder called "myfiles". Next you'll create the Node.js configuration file. Node is very lightweight and basic configuration only takes one file.

var express = require('express');
var app = express();
app.use('/files', express.static(__dirname + '/myfiles'));
app.get('/', function (req, res) {
  res.sendFile(__dirname + '/index.html');
});
var server = app.listen(5000);

We've provided this file in the digital resources as nodeserver.js, which needs to go into the "server" folder. Essentially, all we're doing here is setting up some very basic functionality: a route to all the static files (this can be CSS, JS, as well as the Tensorflow files we've downloaded from Teachable Machine).

We also create a route for the main request (that is a forward slash "/") to route to our primary HTML file. This HTML file includes the content that came from the Teachable Machine snippet, as well as some of our own embellishments.

Naturally, that index.html file needs to be placed in the "server" folder too (grab it from the resources; we'll go through its contents another time). You should have a "server" folder looking something like this.

Next, we need to copy over the files from Teachable Machine. Use our files from the digital resources, or use the files Teachable Machine downloads if you've trained your own. Naturally, these files are easily replaced if you re-train Tensorflow for any reason.
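Assuming the standard Teachable Machine Tensorflow.js export filenames (a model.json, metadata.json, and weights.bin file), your finished "server" folder should look roughly like this:

```
server/
  nodeserver.js    (the Node.js configuration file)
  index.html       (the main page, from the resources)
  myfiles/
    model.json     (the Teachable Machine model)
    metadata.json  (class names and model details)
    weights.bin    (the trained weights)
```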


With everything else in place, you're now ready to fire up your Node server. You'll need your Terminal application. Navigate to the folder location where you've put your files.

cd /home/pi/Desktop/server

Then we start the Node.js instance with our desired configuration file (we only have one at this stage anyway).

node nodeserver.js

You won't get a new prompt; the server is running as a foreground process right now, so it'll only run until you exit the terminal or quit the process. Provided there are no errors, you can navigate to your server using Chromium or another installed browser. Simply go to http://localhost:5000 on your Pi.

You should be greeted with a fairly simple page. If so, click the start button. You may be prompted to "allow camera access"; you'll need to click "Allow". Wait a few seconds and you should be greeted with an image from your Pi camera.

You should now see your webcam stream live, and be able to test images by holding Arduino or Raspberry Pi boards to the camera.

If you don't have any boards handy, you can even pull up images of them on your phone and hold it up to the camera - that will work too!

This is loads of fun, but what's next is to create tangible outputs we can use, not just numbers on a screen. That is where the real fun starts, and our AI journey really begins!







Rob Bell

Editor in Chief, Maker and Programmer