Projects

Google Assistant In Practice

Tim Blythman

Issue 5, November 2017


Making practical use of the Voice Control technology.

I was impressed how a bundle of wires connected to a Raspberry Pi could listen and respond to my voice commands, but my wife was not. Such is the life of a maker! So in this project, I’m going to make my Pi Assistant look a bit nicer, and do a few more useful things.

The Raspberry Pi Google Assistant build from Issue 4 was very much a proof of concept, which showed that building a custom version of the Google AIY could work. It did, however, result in a setup with wires going everywhere, so it can’t really be put on the table or enhance the aesthetics of the room. Also, using a very big computer a long way away (Google’s servers) just to turn the lights on and off is a bit underwhelming.

To improve the aesthetics, I’m going to shrink the hardware by creating a custom “hat” for the Raspberry Pi, and then neatly enclose it in a 3D printed case. After that, I’m going to look at some of the other things we can do on the software side to make the Pi Assistant more useful, by leveraging the GPIO to add other features directly to the hardware. I’ll also show you how to add a lamp attachment, which is one thing that Google Home doesn’t have! The build in this article depends a lot on knowledge and construction from Issue 4, so I’d recommend making sure you’re across that project before immersing yourself in this one.

OVERVIEW

The custom hat for the Pi board is not much more than a few components soldered onto a prototyping board and attached to the Raspberry Pi with a stackable header strip. There’s a small amplifier module connected directly to a speaker, plus some other breakouts for things like the pushbutton and LED. This also means it’s easier to remove the hardware and put it aside if you want to use the Raspberry Pi for some other project. I also added an Arduino LED module to the hat as a lamp, and have set that up to be controlled from a GPIO pin, which is easy to add to the Python code on the Pi to make it capable of being voice controlled.

The custom case had to come after everything else, as I had to know how much I could shrink the Pi Assistant before I could design the case. There’s not much to it – it just needs to keep everything tidy and look nice, so it’s essentially a hollow box with holes in the right places for things like the power lead to fit through. I designed the case using OpenSCAD, which is a mathematical-based modelling program. If you enjoy working with formulas in Excel spreadsheets, then I’d recommend checking it out.

HOW IT WORKS

Looking at the October 2017 issue build of the Pi Assistant, the speakers are easily the largest element, so they will be the first to shrink. This is achieved by moving the speaker and its amplifier to the new hat (we’ll still take the audio feed from the sound card, because it is needed for the microphone). The other thing the hat does is break out the GPIO pins to spade terminals, for the arcade button and its LED to connect to. Later on, we can attach our “lamp” (a 3W Arduino LED module) to the Pi’s GPIO pins, which are also on the hat.

[Schematic 1]

THE BUILD

As you might expect, the hat consists of a few prototyping components, combined with some other basic parts to hold everything together. On top of the pieces from the original build, we will use the following:

Parts Required:            Jaycar    Altronics
1 x Prototyping Board      HP9550    H0714
1 x Stackable Headers      HM3208    P5384
1 x 3.5mm Stereo Plug      PP0130    P0030
1 x Amplifier Module       AA0373    -
1 x Speaker                AS3000    C0604B
1 x Arcade Pushbutton      SP0666    -
1 x LED Module             XC4468    Z6376
1 x Spade Terminals        PT4630    H1806

I also used some offcut resistor legs as standoffs and some Arduino-style plug-plug jumper leads for wires, as they have solid ends that don’t need stripping, and are easy to solder to since they’re pre-tinned. I used plug-socket jumper leads for the LED module, as it makes it easy to attach, detach and extend it as necessary, or even change to a different module to add different functionality. There’s also a 3D printed case, so you will need some filament of your choice – I ended up using most of a 250g roll of PLA.

CONSTRUCTION

I did the construction of the hat in stages – mostly so I could check the hardware was still working at each step of the way. The first step was to integrate the speaker and push button into the hat, so I could determine how big the enclosure would need to be.

The construction starts by placing the stackable header strip in position and then laying the prototyping board over it to determine the layout. I thought about using a double-row header strip, but I found it was a bit more difficult to remove from the Raspberry Pi. Given that I would probably be taking it off and on a few times while I reviewed the design, I realised that all the GPIOs and power pins I needed were in one row, so a single row header strip would suffice. The piece I ended up using was an offcut about 17 holes long – a full length piece would be 20 holes on a Pi 3B. The pins that the Google Assistant software uses are GPIO 23 and 25, plus two GNDs; I use a 5V and GND to power the amplifier module, so even a row of 11 pins would be enough if that’s all you have lying around. I let the prototyping board rest gently on the USB sockets on the Pi, figuring that it would probably try to bend that way under the weight anyway.

Solder the pins in place and trim to length.

The next step is to mount the amplifier module onto the hat. I removed the amplifier from its housing and soldered the stereo plug to the input connection. I soldered the black wire to the large tab, and the red wire to one of the smaller tabs (it doesn’t matter which as we’re taking one of two identical mono channels). Remember to thread the wire through the backshell of the stereo plug, as it’s hard to get on otherwise!

[Image: the stereo plug wired to the amplifier input]

Unsolder the other wires, so that you’re left with this.

[Image: the amplifier module with the other wires unsoldered]

Then solder some component leg offcuts onto the board, leaving a bit of leg exposed on the other side, to give us something to solder onto later. Note that there are two GNDs at one end, but because they are connected together, we only need one. Keeping a short length away from the board, bend the legs outwards; this will make it easier to solder to the proto board.

[Image: the proto board]

The standoffs are now soldered onto the proto board. I placed the module away from the GPIOs so that the board wouldn’t be too crowded. If necessary trim the standoffs.

[Image: standoffs soldered onto the proto board]

The next step is to wire the parts together. The speaker can be connected using scraps of the previously removed wires, and the power to the module is taken from the 5V and GND pins. The pushbutton contacts go to GND and GPIO 23 (7th and 8th from the end), and the LED in the pushbutton goes to GND and GPIO 25 (10th and 11th from the end). I found I could not easily work out which way the LED was wired, so I had to test it, and then reverse it when it didn’t light up. With all the parts attached to the hat, it looked like this.

[Image: all the parts attached to the hat]

Finally, I tested that everything worked like the previous hardware. The stereo plug goes into the sound card headphone socket, the microphone stays attached to the microphone socket, and the hat is attached to the Pi. As I noted above, I had to reverse the LED spade connections because I could not easily tell which way it was wired; but otherwise, the hat performed as expected, taking the place of the speakers and temporary push buttons.

[Image: stereo plug in the sound card headphone socket, microphone in the microphone socket, and the hat attached to the Pi]

To add the LED lamp, I added a breakout to 5V, GND and GPIO18 pins using plug-socket jumper leads; the LED module is then simply plugged into the end of this. I find that using the brown/red/orange group of jumper leads means that because most modules have the same pinout, the wires are in the correct order to plug straight into the module. Be careful using this setup with an input type module, as there is nothing to prevent 5V being fed back into the 3.3V GPIO pins. In this case with the LED, we need 5V to power the LED properly. If in doubt, rearrange the wiring to take power from a 3.3V pin rather than a 5V pin.
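
Before wiring the lamp into the voice software, it’s worth a quick check that the module responds on GPIO 18. Here’s a minimal standalone test sketch of mine (not part of the supplied code) using the RPi.GPIO library:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)       # use Broadcom pin numbering, as action.py does
GPIO.setup(18, GPIO.OUT)     # GPIO 18 drives the LED module's signal pin

try:
    for _ in range(5):       # blink a few times to prove the wiring
        GPIO.output(18, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(18, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()           # release the pin when done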

As for the 3D printed case, there’s not much more to it than outputting it via a 3D printer. In reality, there is almost no limit to the shape and style of the case design. We have provided one for you, but you can easily experiment with a different design.

[Image: 3D printed case]

There is a slot in the bottom of the base to ensure the Pi does not rest on the MicroSD card. Feed the USB cable through the hole in the side, plug it into the Pi, and then fit the various components to the lid. The wires for the LED lamp protrude through the hole, and the microphone is a simple press-fit into its hole. I was going to hot glue the speaker in place, until I realised I did not have a hot glue gun! But with a bit of careful positioning, I was able to use the nozzle of my 3D printer to glue the speaker in place by hand-feeding the filament.

[Image: the speaker glued in place]

The lid is a tight press fit into the base as it relies on the layer grooves to hold everything together. The tightness of the fit will also depend a little on the settings of your 3D printer.

Flexibility is difficult to build into a 3D printed piece, so I’ve added a socket into the lid to hold a piece of flexible wire that can act as the arm for the lamp. I used a short piece of brass rod, but even something like the wire from a coat hanger should work.

When assembled, the Pi Assistant is probably a bit larger than a Google Home or similar device, but is starting to look the part. The next thing I’m going to add is some rubber feet, so it doesn’t slide around too much.

[Image: the completed Pi Assistant]

The Code/Setup

If you have already done the hard work of getting the Pi Assistant set up and generally working based on the instructions from Issue 4, you will have seen that most of the customisation of the voice interface – such as custom voice commands – is done by editing a file called “action.py”. There are a heap of other files needed to make the voice control work, but action.py has been designed to be modified. A good precaution is to make a backup of this file before you modify it – just in case you make a change you can’t fix.

It also pays to shut down the voice-recogniser service and run it manually from the dev terminal, as the program needs to be restarted after any change before it will take effect.

sudo systemctl stop voice-recognizer
sudo systemctl start voice-recognizer

The dev terminal will also provide useful error messages, which are handy if the Python code needs debugging.

I’m no Python expert (most of the coding I’ve done has been Arduino/C), but I found creating the Python code quite straightforward, simply by copying other parts of the file. The thing I found most important when working with Python was knowing that statement blocks are separated by changes in indentation, rather than brackets. Other things are slightly different, like the use of “import” instead of “#include”. My approach was to copy some existing code and make small changes to see the effect. Searching online is also a great way to find code snippets to work with.
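
As a tiny illustration of that indentation rule (a toy example of my own, not from action.py):

def describe(hour):
    if hour < 12:            # the two indented lines belong to the if
        print("Good morning")
        print("Kettle time")
    print("Done")            # back one level: always runs

describe(9)    # prints all three lines
describe(15)   # prints only "Done"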

The easiest commands to add are those that respond to a fixed phrase and run a system command. These can be added as a single line of “actor” code in the section “Makers! Add your own voice commands here” and above the “return actor” line.

Some examples that can be useful to add are:

actor.add_keyword(_('system reboot'),SpeakShellCommandOutput(say,"sudo reboot",_("Reboot Failed")))
actor.add_keyword(_('system shutdown'),SpeakShellCommandOutput(say,"sudo shutdown -h now",_("Shutdown Failed")))

Remember the indenting – the actor.add_keyword should start four spaces from the margin at the first indent level. These commands work by looking for the keywords in the first part, and then executing the shell command in the second part. The last phrase on each line is an error message, which is spoken if the command fails. Multiple commands can be tied together using the && operator, and because SpeakShellCommandOutput actually speaks its output, we can use the echo command to give it something specific to say; for example, to acknowledge a particular command.

actor.add_keyword(_('say something'),SpeakShellCommandOutput(say,"date '+%A %B %e' && echo ok",_("fail")))

This example speaks the date and then says “OK” to finish. Keep in mind that if a command has some text output, then this will be spoken as well.
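
Following the same pattern, here’s one more that can be handy; the trigger phrase is my own choice, and any shell command with short output could be substituted:

actor.add_keyword(_('system uptime'),SpeakShellCommandOutput(say,"uptime",_("fail")))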

To add control of the LED lamp module, we can borrow some of the code from the Issue 4 project, and tweak it to use GPIO 18 instead of 24 by changing the 24s to 18s in the SetPinOn and SetPinOff classes (only SetPinOn is shown below; SetPinOff is virtually a mirror of it – see the provided code file for full details of this section).

class SetPinOn(object):
  def __init__(self,say):
    self.say = say
    GPIO.setmode(GPIO.BCM)      # use Broadcom pin numbering
    GPIO.setup(18, GPIO.OUT)    # GPIO 18 drives the lamp
  def run(self, voice_command):
    GPIO.output(18,GPIO.HIGH)   # turn the lamp on
    self.say(voice_command)     # acknowledge by repeating the command

We also need to add the “import” and “actor.add_keyword” sections of code. If you want to control multiple pins using these functions, you will have to duplicate the classes and give them unique names for separate pins.
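
If duplicating classes feels clumsy, one alternative (a sketch of my own, not part of the supplied code; it assumes the same RPi.GPIO import as SetPinOn) is to pass the pin number and level in as parameters, so a single class serves any output pin:

class SetPin(object):
  """Drives a GPIO output pin to a fixed level when triggered."""
  def __init__(self, say, pin, level):
    self.say = say
    self.pin = pin
    self.level = level              # GPIO.HIGH or GPIO.LOW
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
  def run(self, voice_command):
    GPIO.output(self.pin, self.level)
    self.say(voice_command)

The matching actor lines would then look like this (the phrases are just examples):

actor.add_keyword(_('lamp on'),SetPin(say,18,GPIO.HIGH))
actor.add_keyword(_('lamp off'),SetPin(say,18,GPIO.LOW))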

An alternative is to create a command that can respond to part of a command, and take the rest of the command as input. This allows us to name specific files as part of our command, for example, to open a website by name. The first step is to edit the actionbase.py file, which is in the same folder as action.py. Replace the “handle” function (which is everything from “def handle” to the end of the file) with the following:

def handle(self, command):
  if "*" in self.keyword:
    # a keyword containing a wildcard is treated as a regular expression
    match = re.match(self.keyword.lower(), command.lower())
    if match:
      param = match.group(1)   # the text captured by (.*)
      self.action.run(param)
      return True
    return False
  if self.keyword in command.lower():
    self.action.run(command)
    return True
  return False

The “match” function also needs a library imported, so at the top of the file, add the following line:

import re

This is the only change we need to make to actionbase.py, so once these changes are applied, you can close it. The other changes are made in action.py. In the SpeakAction class, change the run function from:

def run(self, voice_command):
    self.say(self.words)

to:

def run(self, voice_command):
    newwords=self.words.replace("$1",voice_command)
    self.say(newwords)

This means that if we add a keyword which includes a “*”, then the recogniser will respond to the keyword and pass the rest of the text as a parameter to the command. For example, add the following line to the voice command section of action.py:

actor.add_keyword(_('speak (.*)'),SpeakAction(say,"speaking $1"))

Now when the keyword “speak” is detected, the rest of the phrase will replace the $1 placeholder, so that saying “speak the truth”, will cause the Pi Assistant to respond with “speaking the truth”.

A useful variant of this is being able to speak a command and have it executed by the Pi Assistant, as though it were being typed into the console; although it can also be very dangerous! In the class section of action.py, add the following definition:

class SpeakShellCommandOutputByText(object):
  """Speaks out the output of a shell command received as voice command."""
  def __init__(self, say, failure_text):
    self.say = say
    self.failure_text = failure_text
  def run(self,voice_command):
    print(voice_command)
    self.shell_command = voice_command
    output = False
    try:
      output = subprocess.check_output(self.shell_command, shell=True).strip()
    except:
      print('Command Error')
    if output:
      self.say(output)
    elif self.failure_text:
      self.say(self.failure_text)

Then in the actor section of action.py, add this line:

actor.add_keyword(_('command (.*)'),SpeakShellCommandOutputByText(say,_("fail")))

After restarting the voice recogniser, you can use the keyword “command”, followed by a shell command to execute that command. For example “command date” will run the date command, which will then read out the date as it would be displayed on the screen by that command. You can also do a reboot by saying “command reboot”. Be careful of adding this functionality though, as it can be dangerous in two ways: firstly, you can run any command, and you don’t even have the chance to see if it’s been entered correctly before it’s executed. Secondly (and perhaps more annoyingly), if the output of the command is very long, the Pi Assistant might start speaking screenfuls of text. This may well be why we don’t have voice-controlled computers yet!

A more entertaining feature is being able to play some music by voice command. A simple way to do this is using the built-in media player program “omxplayer”. By tweaking our last command a little, we can pass the spoken text to a command which plays a file that matches the text. For the following to work, you will need to have some MP3 files in your /home/pi/Music folder. Add this code to the class section of the action.py file:

class PlayByName(object):
  """Searches for file with name matching voice command and plays it"""
  def __init__(self, say, failure_text):
    self.say = say
    self.failure_text = failure_text
  def run(self,voice_command):
    print(voice_command)
    self.shell_command = 'omxplayer -o alsa /home/pi/Music/*' + voice_command + '*'
    print(self.shell_command)
    output = False
    try:
      output = subprocess.check_output(self.shell_command, shell=True).strip()
    except:
      print('Command Error')
    if output:
      self.say('Have a nice day')
    elif self.failure_text:
      self.say(self.failure_text)

To make use of this class with the “music” keyword, add this line to the actor section:

actor.add_keyword(_('music (.*)'),PlayByName(say,_("sorry")))

The command that is passed when the music keyword is spoken (“omxplayer -o alsa /home/pi/Music/*text*”) tells the omxplayer program to play the first file whose name has the spoken text in it, which may not even be a match for a complete word. For example, when I said “music is”, the Pi Assistant responded with “Dizzy Miss Lizzy”, matching the two letters in the middle of “Miss”. If you like your music played as though by a slightly eccentric DJ, then this isn’t too bad, and it’s a good way of hearing songs you may not have heard in a while. The one issue I found was that the spoken text is usually converted to lowercase, and if your files have uppercase names, they will not be matched properly. I found the easiest way to fix this was to convert the filenames to all lowercase by running the following commands in the music folder:

cd /home/pi/Music
rename 'y/A-Z/a-z/' *
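
If you’d rather do the same from Python, this small sketch of mine has the same effect as the rename command above:

import os

folder = '/home/pi/Music'
for name in os.listdir(folder):
    lower = name.lower()
    if lower != name:   # only rename files that contain uppercase letters
        os.rename(os.path.join(folder, name), os.path.join(folder, lower))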

I hope these additions give you some idea of the potential of the Pi Assistant. One more thing I have done is to turn on SSH and VNC (Menu > Preferences > Raspberry Pi Configuration, click on the Interfaces tab, enable SSH and VNC, then click OK). This will let you access the Pi remotely, and configure settings without having to attach a monitor and keyboard.

WHERE TO NEXT?

The music command is a great example of being able to respond to a specific command with a prompt. If you did have a monitor attached, you could possibly use a similar mechanism to search for videos, or even dictate text to a file. The GPIO output is quite flexible too, so the same wiring can be used to control something like an Arduino-compatible single-relay module, provided a 5V supply is all that is required. Because the Google Voice library is based around Python, extensions can be added in Python code as well. This is a great way to interact with more complex hardware interfaces like SPI, as Python libraries such as spidev provide ready-made support for many of these interfaces.
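
As a taste of what that looks like, here’s a minimal SPI sketch using the spidev library; the bus, device and bytes sent are placeholders that depend entirely on the hardware attached (and SPI must be enabled on the Pi first):

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                          # SPI bus 0, chip select 0 (the Pi's CE0 pin)
spi.max_speed_hz = 500000
reply = spi.xfer2([0x01, 0x80, 0x00])   # clock three bytes out, read three back
print(reply)
spi.close()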

Another option is to connect an Arduino to the serial port (either the TTL UART brought out on the GPIOs, or even via USB), and have the Pi pass commands to the Arduino for processing.
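
The Pi side of that conversation could be as simple as this sketch using the pyserial library (the port name and message are assumptions; an Arduino on USB usually shows up as /dev/ttyACM0 or /dev/ttyUSB0):

import serial

# open the Arduino's serial port; the baud rate must match the Arduino sketch
ser = serial.Serial('/dev/ttyACM0', 9600, timeout=2)
ser.write(b'lamp on\n')       # the Arduino decides what to do with this
print(ser.readline())         # read back any acknowledgement
ser.close()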

As I noted before, the case itself could do with a few feet to stop it sliding around, and I would probably also like to find a way to bring a USB socket to the outside of the case, so that I could connect a USB stick to add files (like extra music) to the Pi’s card for it to use.