Moonshots

Feeding Artificial Intelligence

Rob Bell

Issue 30, January 2020

Has AI already outgrown modern processing capabilities?

It’s no secret that Artificial Intelligence uses huge amounts of computing power to draw increasingly complex links between data points. Especially in a neural network or deep learning scenario, it’s difficult to predict how best to engineer hardware for the task. As with Machine Learning software, we allow the system to create its own links between the data, after all.

One thing we do know, however, is that it’s all mathematics. The issue is that the numbers we’re crunching are getting more complex, the speed at which we expect calculations to complete is getting faster, and the volume of data bandwidth required is growing exponentially as we feed more and more data into our systems. As such, the hardware we use to process this data must evolve.

THE RISE OF THE GPU

For years, the Central Processing Unit (CPU) has been considered the “brains” of a computer. After all, a typical personal computer can’t do much without one. Whether it’s an 18-core, 36-thread hyperthreaded CPU or the single-core, single-thread microcontroller in an Arduino, the basic theory of how they do their thing is fairly consistent.

It didn’t take long before the Graphics Processing Unit (GPU) of a computer offered raw computational power that outstripped the humble CPU, but it’s not necessarily feasible to compare the two directly. While a CPU is really good at performing a versatile array of simple and complex tasks, to a degree, it trades raw performance for that versatility.

"A jack of all is a master of none, but oftentimes better than a master of one."

A GPU is specifically engineered to process graphics. When it comes to 3D graphics in particular, this typically involves geometry calculations, and lots of them. So a GPU is GREAT at crunching numbers.

This is why gaming computers often have as much value invested in their graphics card as they do in everything else. Sure, everything else matters, but all the CPU power, RAM, and market-leading hard drives are going to be left behind if the GPU isn’t up to the task. This is especially true for high performance gaming, which involves 3D game visualisations at ultra high resolutions and frame rates.

The GPU is similar to a CPU in that it runs multiple cores. However, while a CPU might have 4, 8, or even a few dozen in the latest processors, a GPU has hundreds, often thousands of cores. This makes them exceptionally good at crunching lots of numbers, really fast. The cores themselves are engineered specifically to handle floating point calculations, which geometric work relies on heavily.

Think about any sort of 3D game. The GPU needs to create a human-recognisable world. While imperfect textures and such may be forgiven, characteristics such as perspective, reflections, shadows, and other features which help us understand the world we see need to be replicated. All of these characteristics take loads of geometric mathematics to create something that our brain can interpret properly.
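To give a sense of the maths involved, here’s a minimal sketch in Python (the focal length, screen size, and sample vertices are illustrative values we’ve chosen for the example, not taken from any real engine) of the perspective projection a renderer performs for every vertex, every frame. A GPU does this, plus lighting and shading, for millions of vertices in parallel.

# A minimal perspective projection: map 3D points onto a 2D screen.
# Focal length, screen size, and the sample vertices are illustrative only.
FOCAL_LENGTH = 1.0              # distance from camera to projection plane
SCREEN_W, SCREEN_H = 1920, 1080

def project(x, y, z):
    """Project a 3D point (camera space, z > 0) to pixel coordinates."""
    # The perspective divide: points further away (larger z) shrink
    # towards the centre of the screen.
    ndc_x = (FOCAL_LENGTH * x) / z
    ndc_y = (FOCAL_LENGTH * y) / z
    # Map from normalised device coordinates (-1..1) to pixel positions.
    return (ndc_x + 1) * 0.5 * SCREEN_W, (1 - ndc_y) * 0.5 * SCREEN_H

# A tiny "mesh" of three vertices; a real scene has millions,
# and the GPU transforms them all in parallel, every frame.
for vertex in [(0.0, 1.0, 2.0), (-1.0, -1.0, 2.0), (1.0, -1.0, 4.0)]:
    print(vertex, "->", project(*vertex))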

Naturally, in just a few decades we’ve gone from exceptionally low-quality 3D images to graphics so good, they’re barely distinguishable from a movie. Take it from full HD to 4K, add stereoscopic 3D or Virtual Reality, and you can see how the demands on computation have increased so drastically.
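Some quick back-of-the-envelope arithmetic shows why. Assuming a typical 60 frames per second (our assumption for the sake of the example), here’s roughly how many pixels have to be produced each second as the resolution climbs:

# Rough pixel throughput at 60 frames per second (assumed for this example).
FPS = 60
resolutions = {
    "Full HD (1920 x 1080)": 1920 * 1080,
    "4K UHD (3840 x 2160)": 3840 * 2160,
}
for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels/frame, {pixels * FPS:,} pixels/second")

# Stereoscopic 3D and VR effectively double the work again,
# since a separate image is rendered for each eye.

That’s roughly 124 million pixels per second for full HD, and nearly 500 million for 4K, before any of the 3D geometry behind those pixels is even considered.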

This is why GPUs are now often used for blockchain work (such as Bitcoin mining), because they can perform the hashing algorithms much faster than a traditional CPU.

BLOCKCHAIN MINING

Whether you’re a cryptocurrency fan or not, you’ve probably heard of blockchain, and indeed “cryptocurrency mining”. This is essentially the process of performing huge amounts of computation to validate the authenticity of data.

Due to the complexity (and ever-increasing difficulty) of the numbers involved, these validations have to be undertaken by suitably capable hardware in order to complete this computational work.
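To make that a little more concrete, here’s a heavily simplified Python sketch in the style of Bitcoin’s proof-of-work: hash the data over and over with a changing nonce until the result meets a difficulty target. The block data and difficulty here are made up for illustration; real mining double-hashes a block header with SHA-256 against a vastly harder target.

import hashlib

# Heavily simplified proof-of-work sketch. The block data and difficulty
# (number of leading zero hex digits) are illustrative only; real Bitcoin
# mining double-hashes an 80-byte block header against a target so hard
# that trillions upon trillions of attempts are needed.
block_data = b"example block data"
difficulty = 4  # leading zero hex digits required

nonce = 0
while True:
    digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
    if digest.startswith("0" * difficulty):
        break
    nonce += 1

print(f"Found nonce {nonce} -> {digest}")

Each extra digit of difficulty multiplies the average number of attempts by 16, which is why the hardware doing this brute-force, repetitive number crunching matters so much.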

In the early days of Bitcoin, things were relatively straightforward, and you could set up a Raspberry Pi to mine Bitcoin with a degree of success.

However, miners quickly realised that high-output GPUs were exceptionally good at cryptocurrency mining. So much so that dedicated mining hardware is now effectively a bank of GPUs and a huge power supply. Sure, there’s still a CPU in there, but it’s not doing any of the mining.

WHERE DO FPGAs FIT IN?

Field Programmable Gate Arrays (FPGAs) are an important piece of hardware in the quest for the perfect AI chip. They’re a special breed of hardware whose internal logic can be reconfigured to suit the task at hand, enabling hardware-level optimisation for the code being run. We’ll be exploring FPGAs shortly, especially Arduino’s FPGA boards such as the MKR Vidor 4000, which make getting started with FPGAs much simpler.

CREATING FLEXIBLE HARDWARE

AI itself is already being used to optimise hardware. Machine Learning can already test the speed and efficiency of different hardware configurations, even when that hardware is effectively destined for its own use.

It’s entirely conceivable that chip development could take on a life of its own, reaching far beyond what we can currently imagine. Such chips could outperform and evolve beyond current technology at a blistering pace!

Naturally, this is still rooted in binary approaches, and Quantum computing changes this game even further.

Rob Bell

Editor in Chief, Maker and Programmer