Moonshots

Is Artificial Intelligence Fallible?

Rob Bell

Issue 21, April 2019

As artificial intelligence grows in its uses, and the decisions it makes become ever more complex, do we need to become more understanding of the results?

TYPICAL COMPUTING

Typical computing is very binary. It’s either on or off, and decisions are exceptionally predictable.

Even an extensive decision process using a seemingly complex collection of switch statements really boils down to just a collection of binary problems. It either matches a case (or multiple cases), or it doesn’t. There’s nothing “fuzzy” about the outcome. Run the problem 10,000 times and, assuming the same input, you’ll get precisely the same result each time.
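To make that concrete, here is a minimal sketch in Java (the language and the example are mine, not the article’s): a decision built from a switch statement which, given the same input, gives the same answer on every single run.

// A deterministic decision: same input, same output, every time.
public class SwitchDemo {
    static String decide(int option) {
        switch (option) {
            case 1:  return "steak";
            case 2:  return "jellied moose nose";
            default: return "no meal";
        }
    }

    public static void main(String[] args) {
        // Run the same "decision" 10,000 times; the result never varies.
        for (int i = 0; i < 10_000; i++) {
            if (!decide(1).equals("steak")) {
                throw new IllegalStateException("non-deterministic!");
            }
        }
        System.out.println("10,000 runs, one answer: " + decide(1));
    }
}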

This makes general computing exceptionally reliable. If you ask a calculator for the square root of 354,025, no matter how many times you run the numbers, the result will be 595. That’s a stunning level of reliability, because the answer will never change. It doesn’t depend on external variables either, because the maths behind it never changes. There are no external factors which can change the fact that 595² is always 354,025, that y = x² draws a beautiful parabolic curve, or that dividing by zero (at least with integers) throws an exception in code.
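A similarly minimal sketch (again in Java, and again my own illustration rather than the article’s) shows the same certainty in arithmetic: the square root of 354,025 comes back as 595 on every run, 595 squared is always 354,025, and integer division by zero reliably throws an exception.

public class MathDemo {
    public static void main(String[] args) {
        // The square root of 354,025 is 595, no matter how often we ask.
        for (int i = 0; i < 10_000; i++) {
            if (Math.sqrt(354_025) != 595.0) {
                throw new IllegalStateException("mathematics has changed!");
            }
        }
        System.out.println("595 squared = " + (595 * 595)); // always 354025

        // Integer division by zero reliably throws ArithmeticException.
        try {
            int impossible = 42 / 0;
        } catch (ArithmeticException e) {
            System.out.println("Division by zero: " + e.getMessage());
        }
    }
}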

These things are infinitely predictable, reliable, and repeatable, and will stay that way unless we rewrite the entire foundation of mathematics. Since we’ve convinced ourselves that even aliens would be able to understand our mathematics, I don’t think anyone’s hunting for an alternative either.

THE HUMAN DECISION MODEL

When people make errors, we accept and understand that as part of how people make decisions. We’ll also often use the phrase “we’re only human”, which is really a scapegoat for the truth that we are imperfect in our actions and make errors of judgment.

With so many inputs feeding our brains, from temperature and weather, to whether we’re tired or hungry, to the notifications arriving on our phones and the numerous bells and whistles sounding from the gadgets around us, it is easy to make poor decisions.

Consider going to a restaurant and having two choices for your meal: a grass-fed scotch fillet steak or a jellied moose nose (yes, that’s a thing, a unique Canadian dish). I would suggest that all but the most adventurous of you will choose the steak. You may very much enjoy some jellied moose nose, but based on your preferences and experience, you know that the steak has a higher probability of being enjoyed.

This is just one example of how the thousands of decisions we make on a daily basis gravitate towards what’s familiar, safer, and more reliable.

WHEN AI HAS A BAD DAY

Imagine for a moment a potential scenario in the future. You get into your self-driving car, and instead of it doing precisely what you expected, the voice synthesis inside politely advises you that it doesn’t feel like driving you to work. It has decided that it’s a lovely spring day and is instead going to head for the beach.

Many of us have seen interesting scenarios like this as the basis of science fiction movies, where the AI engine determines that in order to protect civilisation it must annihilate 95% of the planet’s population. Or that, in order to provide a safe working environment, all staff must be confined to their sleeping quarters while the machines do all the work (though that one doesn’t sound so bad sometimes).

While these apocalyptic scenarios are demonstrated to us in some entertaining ways, there are a number of less dramatic and more realistic ways this could play out.

IT MAY NEVER HAPPEN

Fortunately (or unfortunately), we may never have to ask our vehicle nicely to drive us to work, or whisper sweet nothings to our coffee machine for a cup of sweet caffeinated nectar.

However, we must consider... if we’re driving more and more decisions with Artificial Intelligence, should we become more accepting of the outcome?

There are very real and meaningful discussions under way about how AI makes its decisions. Ultimately, some of those decisions are amongst the toughest decisions humans ever have to make. If the AI of a self-driving vehicle suffers a braking failure, it has to choose what to do: it can either crash through a group of people with potentially fatal consequences, or it can steer into a nearby wall, potentially killing the occupants. These decisions have no “correct” answer.

When we start asking machines to calculate these complex answers, should we be remarking “it’s only AI” when we disagree with the outcome?

If a human driver decided to save themselves, it would be labelled “human nature”. If they deliberately crashed the vehicle to save the crowd, they’d be labelled a hero. However, recently reported incidents involving self-driving cars point the finger at the “heartless computer”, while we naturally forgive the thousands of incidents (minor and major) which happen on our roads every single day.

It is curious, then, that as humans we forgive our own shortcomings, but don’t allow the same of our AI systems. Perhaps it’s just that we’re not used to the complexity AI systems are dealing with, and are more accustomed to the binary nature of typical programming. But these AI systems are so different...

After all, if the problem isn’t complex enough to require a nuanced decision, weighing many different factors, why is it using Artificial Intelligence at all?