DIY Big Brother - Part 2

Build the perfect surveillance system!

Rob Bell

Issue 59, June 2022

This multi-part series explores DIY surveillance, so you can create your own powerful (or tiny) surveillance solution.

In the first installment of this series, we explored various software surveillance solutions available, and installed AgentDVR onto a virtual machine. In this installment, we’re going to provision our own hardware server, again using AgentDVR, and get a full system running.


As discussed in Part 1, the key hardware considerations deserve some thought. However, you don’t need a particularly new computer to run a quality surveillance system. Even an old PC that’s perhaps 10 years old will do the trick.

Modern operating systems such as Windows and macOS use substantial amounts of RAM and CPU to do their thing. When you’re dedicating hardware to a (relatively) simple application such as a surveillance server, you can get away with a lot.

There are obviously limits… a Pentium 75 from 1995 with 8MB of RAM and a few hundred megabytes of IDE hard drive isn’t quite going to cut it. The Pentium 75 was, as the name suggests, running at 75MHz. Even a Raspberry Pi runs at well over a gigahertz now!

You really want a 64-bit processor, something at least around an Intel i7 (an i5 will probably work too). Anything less may struggle, particularly if you’re doing advanced monitoring or any AI on the feeds. Ultimately you can get set up with whatever is available, and see how it goes. There are ways to reduce CPU usage too.

If you have an old laptop lying around whose battery barely lasts the boot sequence, and which has therefore been retired to paper-weight status, it will probably do the job too.


Now you might be thinking - aren’t Mac Pros high spec and expensive? Well, the answer is yes. And they typically hold their value for much longer than the average PC. But they all ultimately get to a point where their original purpose is no longer viable.

We have a few of these heavy metal-bodied Mac Pros kicking around the office now unused, purely because Apple no longer supports the hardware with their recent operating system releases. Not an issue in its own right, but we’ve all been there… need to upgrade the OS to upgrade software we need to use, and so on.

As a result, these have been retired, but they’re perfectly good computers for many purposes and they have not been sold / recycled because we recognise their ability to perform various tasks. They’re simply no longer able to be used with a current Apple operating system required for graphics software and such.

This particular machine is a 2008 model Mac Pro. In computer years, that’s quite old. However, they’re built exceptionally well, and this one still houses two 2.8GHz Quad-Core Intel Xeon processors. Our collection of these retired machines varies a little, but this one has 8GB of DDR2 RAM. Importantly, they were upgraded a little while ago with NVIDIA GeForce GTX 980 graphics cards, which may come in handy for video processing later too.

8GB of RAM isn’t terribly high by modern standards, however, unless you’re running loads of cameras, it will be more than enough - particularly when running a Linux operating system.


Now don’t be deterred if you’re using an old PC. Our guidance here will work essentially the same way for any PC too. As this generation of Mac Pro uses Intel processors, there are very few differences, if any.

Since we’re using Ubuntu (as we did during our virtual machine installation in part 1) you shouldn’t have too many issues regardless of the hardware you’re using.


Perhaps the biggest potential issue you should consider is the age and speed of the hard drive. For many years, computers could be given a new lease on life with a RAM upgrade (either replacement or simply expansion), and a new hard drive.

If your old computer has a traditional mechanical drive and not an SSD installed, this is probably something to consider upgrading before you commission your old hardware as a surveillance server.

The surveillance server itself doesn’t need ultra-fast hard drive access to work well, unlike your standard everyday computer, so an SSD isn’t strictly required. However, mechanical drives (particularly consumer ones) are arguably the biggest failure point of a regular PC. They’re also the biggest headache to resolve if they do fail.

If your RAM goes faulty, your power supply bites the dust, or even the motherboard decides to stop functioning entirely, all of those things can be replaced without going through a full reinstallation - which you will have to do if your hard drive fails.

Naturally, for any mission-critical system, you should even consider redundancy in the hard drives, but if it’s mission-critical you’re probably not going to start with an old recycled PC for this project anyway.

That said, upgrading to an SSD is very cheap. Linux distributions are very lean on their required disk space (particularly compared to Windows, or any modern operating system running complex applications). Even a 64GB SSD, which isn’t nearly enough to run Windows, could serve your needs here very well. If you’ve been upgrading your own workstation over the years, you might even have an old one lying around.

While recycling an SSD can still theoretically increase the likelihood of failure, SSDs by nature are far less prone to failure than mechanical drives.


One caveat here is that some old PCs, depending on their BIOS, won’t detect some of these new ultra high capacity drives. This is generally due to firmware available at the time, since drive capacities were much smaller years ago.

The same is also true for external enclosures. Many external disk arrays, even some you can purchase brand new today, still use cheap or out of date SATA controllers and may have limits on drive capacity they’ll work with.

We have seen quite a few docks currently available, even using USB Type-C and appearing to be “the latest and greatest”, which only support up to 6TB disks.

That’s still plenty of capacity, just do your own cross-referencing of compatibility before you purchase ultra high capacity drives, only to find you can’t use them (and may not be able to return drives that have been opened or installed).


Naturally, you’ll need to load the Ubuntu operating system onto something bootable. This can be a USB drive, or a DVD. If you’re going down the USB drive route, which is generally more popular, you’ll need to search for how to make a bootable USB with Ubuntu from your current operating system. It’s a fairly straightforward process.
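Whichever medium you choose, it’s worth verifying the image you downloaded before writing it, since a corrupt ISO produces baffling installer failures. A minimal sketch of the check - the filename here is a stand-in for your real Ubuntu image, and we generate the checksum file ourselves so the sketch is self-contained (normally you’d download SHA256SUMS from the Ubuntu release page):

```shell
# Stand-in for the real ISO - substitute your downloaded Ubuntu image.
echo "pretend this is the Ubuntu installer image" > ubuntu-install.iso

# Normally SHA256SUMS comes from the Ubuntu release page; here we
# generate it from our stand-in file so the example is runnable.
sha256sum ubuntu-install.iso > SHA256SUMS

# -c re-hashes the file and compares it against the recorded checksum.
sha256sum -c SHA256SUMS
```

If the check reports OK, a tool like balenaEtcher (or dd on Linux) can then write the verified image to your USB stick.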

We already have a burnt Ubuntu DVD, created for a server we were repurposing recently, so we’re going to use that. Mac Pros from this generation had built-in DVD drives. Naturally, if your computer doesn’t have one, it’s probably best to take the USB method.


With any old computer, even without a hard drive installed, you want to power it up and check it boots.

You may not get too far if there’s no operating system, but you’ll at least enable the BIOS to run system checks and ensure that all the required firmware is functioning correctly to allow you to use it. Depending on the quality of the hardware inside, it will generally warn you or halt if it detects major RAM issues, CPU issues, or any other critical malfunction that might mean it’s better to use another computer.

Without a bootable hard drive installed (or with a freshly formatted hard disk), you’ll generally receive a message saying there’s no bootable disk installed. On our old Mac, we get a very stylish Apple-style folder with a question mark. Installation from a DVD is generally a little slower than from a USB drive due to read speed, but it can be more reliable, as it’s simpler hardware that’s been around for longer. Some servers we’ve provisioned have great difficulty with USB media for installing operating systems, but all have a DVD drive, so sometimes it’s just simpler.

Our old Mac was perfectly operational when it was last switched off, running the last-supported release of macOS for this hardware, High Sierra. However, it no longer supports modern apps that we need, so it’s essentially dead in the water in terms of usability for us.

All we need to do is to restart the computer and hold down the ALT key.

Mac boot sequences work a little differently to PCs, and this will bring up the option of booting from the DVD drive. If you’re using a PC, you’ll either have a hotkey you can use, or you may need to go into the BIOS and shuffle the boot order to allow it to boot your media. Of course, if there’s no bootable installation on the computer other than your installation media, it should boot the installer automatically with no user intervention.


The installation process from here is essentially the same as we described in Part 1, so we’re not going to cover it in detail. Simply go back to Part 1 if you need step by step guidance.

A few notes:

Drive formatting. If you’re using an existing Mac or PC, it’s best to start fresh with a clean format, which can be done during installation. Just make sure you haven’t left anything important on the drive, as it’ll be gone forever (well, at least not recoverable without expensive data recovery techniques, if at all). If you’re using a brand new drive, you’ll need to format it anyway.

Follow the prompts, but actually read them. We see so many people get a little click-happy during installation in the impatience to move on, ultimately making poor selections or using defaults they don’t want.

Ensure your computer is connected to the internet (LAN preferable) so Ubuntu can automatically install updates during the process. It saves you another step, and also allows some simple functions like syncing the clock which will help ensure the required certificates are valid. An out of sync clock can cause so many issues and cryptic errors these days.

If you want to do so, select drive encryption. This is a great security feature to stop direct access of the disk by connecting it directly to another computer. Keep in mind however that you’ll need to enter a key/password on every boot.

This can be an issue if you're, say, offsite, and the power goes out. Sure, a UPS can help avoid restarts, but it’s just another point to consider. Weigh up the risks of someone obtaining and accessing the physical drive (note this is just the operating system, not the footage which we’ll store elsewhere), vs the ability for the machine to boot / reboot without intervention.

Once you’ve followed the prompts, you’ll be asked to restart the system to complete installation. You’ll also need to (and be prompted to) remove the installation media.

All things going well, you should soon then be greeted with a login window with your username you set up during the installation process.

Now you can get on with installing AgentDVR, following the same process described in Part 1 (Page 25 - Installing AgentDVR).

Note that during this installation, we also had to complete the manual FFmpeg installation section under “Installation Fails” too.


Now that we’re using actual hardware rather than a virtual environment as in Part 1, we’re going to try something different here with the physical setup.

Rather than congest our regular network with all the data from the camera streams, we’re going to configure a physically separate network for our cameras to connect to.

Naturally, this is entirely optional. Your own network setup and hardware will determine whether this is worthwhile, and even of any use at all. Particularly if you’re using WiFi cameras, it would require another WiFi router on the separate network too.

Now the Mac Pro we’re using has dual ethernet built in already. If your PC doesn’t have dual Ethernet, you can purchase a PCIe ethernet card for around $20, probably even less for a used one, to obtain this functionality.

You’ll notice that Ubuntu has detected both connections. However, their status is very different: one is connected to our regular network and can receive an IP address from the DHCP server / router, whereas the other is connected to a plain network switch with nothing else attached.

In fact, since DHCP is the default setting for most network adapters, Ubuntu is regularly refreshing the second port trying to find a network. As a result, you’ll often get a notice like this:


You may, after completing this configuration, realise that you no longer have internet access.

The operating system may try and use your separate / static ethernet connection for internet connectivity, which naturally isn’t going to work.

On Mac and Windows machines, the operating system will often work around this, or particularly in the case of macOS, you can easily reorder the priority of your connections to use your main network for internet access.

Ubuntu makes it a little more challenging, so it’s best to ensure the first network interface is being used for your main network (with access to the internet), and your second is used for the camera network (with a manual address / DHCP server etc).


Dynamic Host Configuration Protocol (DHCP) plays a huge role in the plug and play nature of networking. It’s not actually required, but generally makes life MUCH simpler when it comes to configuring devices to use a network.

Particularly since most surveillance cameras are configured to use DHCP out of the box, it’ll make life much simpler for you when adding new cameras.

In order to have Ubuntu allocate IP addresses to your cameras, you need to configure a DHCP server to run on your second ethernet port. Fortunately, this is relatively straightforward.

First, however, you’ll need to decide on a manual IP range for your separate network. Within that range, we’ll often pick a low number for the host connection, and then set the DHCP pool (for the cameras) to a higher block of addresses. It doesn’t really matter exactly what these are, as long as your ethernet connection’s address isn’t part of the allocation pool, as that can create conflicts and things won’t work properly.

Since you have two ethernet ports in the computer, you’ll also have to tell the DHCP server which ethernet port to run on, so it doesn’t try to assign IP addresses on the regular network to which the computer is also connected.

You can find the ethernet port’s ID in the Network settings. Just make sure you grab the correct network port ID.
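If you’d rather not hunt through the GUI, the kernel also lists every interface it knows about under /sys/class/net; the enp-style names mentioned below are typical examples, though yours will differ:

```shell
# Each entry here is a network interface: 'lo' is the loopback,
# and entries like enp0s25 or eth0 are your physical ethernet ports.
ls /sys/class/net
```

Unplugging one cable at a time while watching which interface loses its link is a reliable way to match names to physical ports.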

To get the DHCP server running, you’ll need to install the ISC DHCP package. It’s quick, lightweight, and won’t add much to your system.

sudo apt install isc-dhcp-server

Follow the prompts and approve the installation.

Now we need to adjust the configuration file, replacing the values we’ve entered with the IP range you’ve decided to use.

sudo nano /etc/dhcp/dhcpd.conf

Comment out the default option lines; since we’re not using this network for internet access, they’re not required.

Then add your configuration as below.

This is the code we've added:

subnet netmask {
      range;        # address pool for the cameras (example values)
      option routers;       # the server’s static address on this port
}

Essentially what we’ve said here is that we want to create a network in the 192.168.2.x range, with a block of addresses within that range (the range line) set aside for assignment to our cameras.

We've also added a router line, with the static IP address on the server we're about to set. This field is generally used for the internet gateway. While we're not providing internet access to cameras specifically, and it's not technically required, we've seen issues with connectivity when there's no router line included.

Now you’ll need to modify one more file to ensure the DHCP server runs from the correct interface.

sudo nano /etc/default/isc-dhcp-server

Naturally, replace the interface name with the interface name shown in your network configuration. Ensure it's the one connected to the separate network for the cameras, not the main network. Make note of the "ethernet priorities" section on the previous page here too.
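For reference, the relevant lines in /etc/default/isc-dhcp-server look something like the following - the interface name enp0s2 here is purely an assumed example:

```
# Serve DHCP only on the camera-network port (example name - use yours).
INTERFACESv4="enp0s2"
INTERFACESv6=""
```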

While you’re in there, add a manual IPv4 address to the network interface. Since the DHCP server is running on this port, it can’t self-assign the IP address.

The gateway address can match your computer’s IP on the main network (note that ours later changed due to a lease renewal).

DNS fields aren’t really required here and can be left blank.
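If you prefer the command line to the GUI, the same static setup can be expressed as a netplan fragment (e.g. /etc/netplan/60-cameras.yaml); the interface name and address below are assumptions for illustration, following the 192.168.2.x scheme:

```yaml
network:
  version: 2
  ethernets:
    enp0s2:                  # assumed name of the second ethernet port
      dhcp4: false
      addresses: []   # example "low number" host address
```

Apply it with sudo netplan apply. Note that desktop Ubuntu manages interfaces with NetworkManager by default, so the GUI route is often simpler there.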

With all that done, let’s restart the DHCP service so our new settings take effect.

sudo systemctl restart isc-dhcp-server.service

Now all that’s left is to connect a camera and see if your DHCP works!


There are a few ways to test your DHCP server.

Perhaps the most visual is to connect a spare computer (not the surveillance server) to the ethernet switch and see if it receives an IP address. You can use the operating system’s network information utilities to see what’s happening with a high degree of clarity - something surveillance cameras don’t provide you with.

However you can also go ahead and connect a camera or two, then check the IP leases for the DHCP server.

Now in our example, we’ve specified a DHCP pool within the 192.168.2.x network, and our server’s manually allocated address sits outside that pool.

With a camera or two connected, run the following command in terminal:

dhcp-lease-list --lease /var/lib/dhcp/dhcpd.leases

You can obviously just view the file itself, but this command digests the contents into far more usable output.
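If dhcp-lease-list isn’t available on your system, the leases file is plain text and easy enough to pick apart yourself. A rough sketch - run here against a fabricated sample file so it’s self-contained; point it at /var/lib/dhcp/dhcpd.leases for real data:

```shell
# The leases file is a series of "lease <ip> { ... }" blocks.
# Fabricated sample data standing in for /var/lib/dhcp/dhcpd.leases:
cat > sample.leases <<'EOF'
lease {
  hardware ethernet aa:bb:cc:dd:ee:01;
  binding state active;
}
lease {
  hardware ethernet aa:bb:cc:dd:ee:02;
  binding state active;
}
EOF

# Print each leased IP alongside the client's MAC address.
awk '/^lease/ {ip=$2} /hardware ethernet/ {mac=$3; gsub(/;/,"",mac); print ip, mac}' sample.leases
```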

You should see an output something similar to the image below:

From this we can confirm that two devices are active on the network, and have each been assigned an IP address from the pool.

If you do - you’re all set! IP addresses are generally allocated sequentially the first time around. If your first camera takes the first address in the pool, you can be fairly certain that the second one will obtain the next address, the third the one after that, and so on.

The DHCP server will usually reallocate the same IP address to the same device too. Often if this doesn’t happen, it’s because the device was switched off when the lease expired, and some other device was allocated the IP address. If your cameras are left running, it’s unlikely the IP addresses would ever change for a specific camera.

Regardless, you can always get a list of current IP leases using the command previously used to figure out what’s going on if something does appear to have changed.


It’s still entirely possible to create a private network with no dynamic IP addresses. Simply configure all devices on the network (surveillance server / cameras) with manual IP addresses.

This process varies from camera to camera, but essentially you’ll have to put the camera onto the regular network (allowing an IP address to be assigned), access the camera’s admin interface, and change the network settings to use your manual network settings (with a different IP address for each device). Then disconnect the camera and reconnect on your separate network.

You’re essentially doing the same thing as a DHCP server does in the blink of an eye. In my view, it’s not really worthwhile.


If you haven’t been able to get DHCP leases to assign to your cameras, there are a few things you can check.

  • Check that your server’s ethernet port is connected to your separate network switch and that the ethernet activity lights are active.
  • Check that you have allocated a suitable static IP address to your ethernet port.

  • Confirm that you have configured the DHCP server to use the correct ethernet interface, and not the one connected to your regular network.
  • Check that your cameras are connected to your separate network switch.
  • If you’re using PoE to power your cameras, ensure the PoE activity light is showing active. If you’re using separate power for your cameras, ensure it’s plugged in and switched on.

If all else fails, give the entire server a restart and retry. If it still won’t work, try re-tracing the network installation and configuration steps.


This process is now essentially the same as was described in Part 1, however your IP addresses will be in the private/separate network range you've configured.

Using the information from your IP leases, you can now start to add cameras. You’ll need to determine the username and password for your brand of cameras (often listed in the manual). You may even find that the cameras are auto-discovered by AgentDVR during setup, which can make things even easier. However it won't always work with the dual-network configuration, as it'll often search the main network.


It’s very likely that you don’t want to record footage to the operating system’s disk. Depending on what you intend to do with the system, how many cameras, and how long you want to store your footage for, it’s good to give some consideration to storage.

Similarly to the SSD, there are some considerations here regarding how critical you consider your system to be. This will determine any redundancy considerations in your storage.

The reality is that the chances of an event you need footage for (evidence of a break-in, for instance) coinciding with the failure of a single hard drive are fairly slim, given that each of these events is relatively unlikely in its own right.

If you’re happy to accept potential drive failure and go for a single hard drive, then all that’s left to consider is capacity. I would recommend a new hard drive here, if it’s an option. Some hard drives are tagged as suitable for surveillance use, or Network Attached Storage (NAS) devices. These tend to be more reliable when being used for extended periods (since your surveillance system is likely to be always-on). That said, you may decide that it’s not worth the extra cost.

A single drive could cost as low as $50 for a 1TB drive from a reputable brand and retailer, but even if you want plenty of storage, you can pick up even 10TB drives for a few hundred dollars now. That’s significant storage capacity for relatively low cost, should you need it. Just remember that it’s not redundant.
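To put numbers on “capacity”, a back-of-envelope estimate from per-camera bitrate is useful. The figures below (four cameras recording continuously at 2Mbps each, retained for 14 days) are assumptions - substitute your own:

```shell
CAMERAS=4    # number of cameras (assumed)
MBPS=2       # per-camera stream bitrate, megabits per second (assumed)
DAYS=14      # how long footage is retained (assumed)

# Megabytes per camera per day: (Mbps / 8 bits-per-byte) * 86400 seconds
MB_PER_DAY=$(( MBPS * 86400 / 8 ))

# Total across all cameras for the retention period, in gigabytes
TOTAL_GB=$(( MB_PER_DAY * CAMERAS * DAYS / 1000 ))
echo "approx ${TOTAL_GB} GB of storage needed"
```

Motion-triggered recording typically slashes this by an order of magnitude, which is why a single 1-2TB drive goes a long way for most home setups.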


Your surveillance system will also work perfectly well with external storage such as a USB enclosure. Ubuntu can mount external storage just like any other operating system, and AgentDVR can be configured to store your footage on any available drive.

If you’re using a multi-drive enclosure, there’s usually support for redundant configurations.

The only real consideration here is that USB technology 10 years ago wasn’t nearly as quick as it is now. However, if it’s USB 2.0 or better, you should have enough bandwidth available.

For this test build, we’re going to use a cheap 2TB external drive. It’s an SSD, so it’s really fast, but it doesn’t have to be. We only selected it because we had it on hand.


It’s generally best to use native disk formats unless you have a specific need to use the drive on another system (such as Windows). Fortunately, formatting your media in Ubuntu is straightforward.

Once you connect it, it should pop up in the sidebar of the file navigator, or as a popup that you can click to open the drive.

To format it, right click on the drive in the sidebar and select Format.

You can name the drive whatever you like, and select either Ext4 (the preferred format for Ubuntu), or NTFS / FAT if you want to be able to use the drive on a regular PC too.

You can also make it a password protected volume if you wish. Use the same security considerations discussed previously. Unless there’s a likelihood of your drive being stolen, or a security issue with someone seeing whatever you’ve been recording, there’s generally no need.

Once you’re happy to proceed click the red “Format” button.

Once formatting has finished, it will open up the empty drive.


Once you have added your storage to your system (and formatted it if required), you now have to update the configuration in AgentDVR to use that storage, rather than the default location.

Now, we must preface this by saying we had some difficulty doing this, thanks to what appears to be a bug in the user interface. There are some reports of it online, but not many, and we suspect it’ll be fixed in a future version.


Find the hamburger menu top left of AgentDVR, then click on the Settings button.

From the blue dropdown top right of the Settings popup, click on Storage towards the bottom of the list.

Now this is where you should be able to update the storage path.

We’ve tried adding new ones, updating the default one, but we’re always met with a cryptic message of “Sequence contains no matching element”.

To us, this sounds like a small piece of broken code, a mis-typed keyword, or something else small that’s stopping this from working. We might investigate more deeply another time.


Fortunately, Linux has a filesystem feature called symlinks. These are a little like shortcuts on Windows: they let you navigate the same file structure from another filesystem location.

There is one caveat to this method… AgentDVR will poll the main disk for available storage monitoring, not the external disk. This may mean that file archiving / rolling deletions to avoid the disk filling up completely don’t work properly.

There’s not really any way to tell in advance, but worth consideration anyway.

Creating a symlink is simple. Essentially we’re going to remove the “Media” folder and create a symlink in its place to our external drive. Only a few commands are needed in Terminal to achieve this.

First, you need to confirm the full file paths for the original media location, as well as the mount point for the external storage. You can easily find these by opening up “properties” on the drive itself. Naturally, the original storage location for yours will probably be the same as ours, but with your username instead of DIYODE.

Our external drive path is /media/diyode/recordings

The default media location is /home/diyode/AgentDVR/Media/WebServerRoot/Media

First, you’ll need to remove the existing Media directory, since you want the symlink to replace that folder (and the symlink command won’t overwrite a real folder).

rm -R /home/diyode/AgentDVR/Media/WebServerRoot/Media

Symlinks are created with the following format:

ln -s /path_of_source /path_for_link

It’s fairly straightforward, however trailing slashes change the meaning of commands substantially on Linux, so you need to be careful.
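Before touching the real folders, you can rehearse the whole swap with throwaway directories; everything below uses temporary stand-in paths, not the actual AgentDVR ones:

```shell
# Throwaway sandbox standing in for the real filesystem layout.
work=$(mktemp -d)
mkdir -p "$work/external-drive/recordings"       # stands in for the USB drive
mkdir -p "$work/AgentDVR/WebServerRoot/Media"    # stands in for the Media folder

# ln -s won't replace a real directory, so remove it first...
rm -r "$work/AgentDVR/WebServerRoot/Media"
# ...then link the external location into its place.
ln -s "$work/external-drive/recordings" "$work/AgentDVR/WebServerRoot/Media"

# Anything written "into Media" now lands on the external drive.
touch "$work/AgentDVR/WebServerRoot/Media/clip.mp4"
ls "$work/external-drive/recordings"
```

Once the mechanics make sense here, repeat the rm and ln -s steps with your real paths.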

Using the path data we’ve obtained, it’s a single command to create the symlink.

ln -s /media/diyode/recordings/ /home/diyode/AgentDVR/Media/WebServerRoot/Media

If you list the directory contents of the WebServerRoot folder, Terminal should confirm that the symlink is there, and where it’s pointing to.

Now, the ultimate test… set at least one of your cameras to recording in AgentDVR. Navigate back to your external drive. You should see some oddly-named folders. Click in there, and you should see video files being created.

Success! You’re now recording to your external drive!

We’re going to reach out to AgentDVR about the bug in the user interface, as it would be better to have it adjusted properly in the system. However this certainly works around it for now.


AgentDVR provides a really great way of reviewing footage, particularly when you’re using motion-activated recording.

You can do this from any computer on the network too. Simply use the main IP address of the server (not the static one configured on the camera network).

In our case, we simply pull up the server’s main-network IP address and AgentDVR’s port in a browser.


The default screen is the Live Video screen; we’re going to explore the two primary review modes. Click on the camera icon and select “Timeline”.

You should be presented with a view something like this:

The great thing about this view is that it very readily shows you record times for each of your configured cameras. Naturally this is only useful for trigger-recording setups. If you have it recording constantly then you’re not going to gain much of an insight into what’s going on. However for overnight triggered recording in particular, it will rapidly highlight any areas of potential interest or concern.


Time Machine is a slightly more functional version of the Timeline. It allows you to scrub through the recorded timeline, and instantly play back all cameras at that point in time. It’s very easy to use. A great pinch-zoom style interface allows you to scrub the timeline very quickly.


This is a very crude option but has its purpose. Essentially it’s just a file grid, not too dissimilar to pulling up the recording folder in your computer’s file explorer and taking a look. However, it gives you single-click review and works very well.

We’ll soon take a look through the floor plan functionality too. It allows you to map your cameras to a floor plan which can be helpful when reviewing footage of a person who has moved through a particular space.


Naturally, while monitoring from your in-house network is handy, remote monitoring is arguably far more practical for most setups, since you’re unlikely to be where your system is when an event happens.

AgentDVR really makes remote monitoring simple, with none of the common port-forwarding and firewall trickery to make it work.

Commercially there’s been a huge move towards this type of technology too, since it often needs to be installable by someone with little technical expertise. Many home routers have ultra-basic firmware on them too, so even if you know what you’re doing, it can be a frustrating experience getting it sorted out.

The magic in AgentDVR comes via a cloud login, though this is where costs start to come in - they help cover development and the cloud infrastructure that makes life simple.


We’re not entirely sure if doing this is within the usage terms of AgentDVR or not, and we’ll reach out for clarification, but we can’t find anything against it, only documentation to encourage subscription.

However, it's important to note that existing technology can enable access for you with relative ease.

It all comes down to your technical know-how for networking and firewall configuration. There are two approaches to theoretically get it to work:

  • Port forwarding
  • VPN access to the network


There’s something of a conundrum with the port forwarding option: securing AgentDVR.

AgentDVR is geared towards a cloud subscription to secure access. There doesn’t appear to be another way of doing so without reworking the code for the web server to handle authentication.

So port forwarding for access from a mobile device isn’t a great option, since you’d have to expose your AgentDVR server to the world, with no real way of locking it down. If someone stumbled across your open port, they could not only view your surveillance cameras, but adjust settings, delete footage, and more.

None of these are acceptable, obviously. And while you may think “nobody will stumble on my IP address and the correct port”, there are hordes of bots out there doing precisely that. Scanning IP addresses, looking for any open ports they can exploit.

The one potential exception to this is if you’re remote monitoring from another fixed connection, say for monitoring the office or workshop while you’re at home.

As long as your router supports the functionality, you should be able to port-forward the server, while only accepting connections on that port from a specific IP address.

Now I should add that this isn’t perfectly foolproof either, as IP addresses can be spoofed, but it would still greatly reduce the likelihood of having your system remotely accessed without authorisation.


This is undoubtedly the best way to access an internal network from anywhere in the world. The VPN does all the tunnelling through the firewall in a secure way, and (as long as it’s configured correctly) does all the heavy lifting for you.

With a proper VPN running, it’s as though you’re physically inside the building you’ve VPN’d into. So as far as AgentDVR is concerned, it’s no different to accessing it from the local network, because you actually are!

The only trouble is that a secure VPN can overwhelm a consumer router fairly quickly - if the router supports running one at all, which isn’t guaranteed.

We covered how to provision your own router/firewall way back in Issue #015, by installing pfSense onto a computer or device. It gives you all the flexibility and routing power you’ll ever need. The combination of pfSense running an OpenVPN server with a quality VPN client on your computer is fast, secure, and very handy. Go and check that out if it’s of interest to you.

Sadly, pfSense runs as an operating system, so you can’t simply install it onto the same Ubuntu server and use it. It’s absolutely possible to run it as a virtual machine, but for something as critical as an internet connection, I would recommend using bare-metal hardware. System requirements are fairly lean, so you can run it on an even lesser computer than your surveillance server. The only real requirement is having two network interfaces - no big deal!

We did test the functionality of accessing AgentDVR across our VPN from an external location, and everything worked perfectly.


If you want to avoid all the hassles, this is the way to go. Subscriptions start at US$7.95/mth, so it’s certainly not expensive. It only takes a few clicks too.

You can always test it out with the 7-day free trial and decide which option to progress with!

To get started, click the little person icon top right of the AgentDVR screen, and click Account Settings in the popup menu.

You’ll be asked to connect your iSpy account (you can create one in the next step if you don’t already have one).

Log in or create your new account. You can sign in with a Google account too (always handy).

I’ve never been a fan of “usernames” as it seems to be one more thing to remember, but fill it out as required. Your mobile number is for using SMS alerts (which require purchase of credits).

Add your server. The claim code links your server to your account. It’s possible to add multiple servers to the same account too.

This claim code didn’t appear to work the first time we clicked OK, but a second try got it through.

PRESTO! Once complete, you now have access to your server with the same interface and functionality as you have on your local network!

One key difference with this experience is that the stream has to go out to the broader internet and back, so video quality won’t necessarily be the same.

All in all however, it’s a solid experience and made really simple, and definitely gives you the ability to see what’s going on, even if you need to review the high quality footage when you’re back on your local network.

All functionality such as configuring cameras and system settings is available to you just as it is locally.

NEXT MONTH: Virtual reality review, exploring AI image processing and advanced functions


Rob Bell