On the inevitability (or otherwise) of robots taking our jobs

September 16, 2020

Image: “Blue Robot” by peyri. Creative Commons license CC BY-ND

This article was inspired by a Kevin Kelly article in Wired magazine: Better Than Human: Why Robots Will — And Must — Take Our Jobs. It’s a little old now (2012), and seems unable to escape the idea of having jobs in the first place, but it’s a little more nuanced than a lot of the robotic inevitabilism prevalent in Silicon Valley.

Kelly sees us as inventing new jobs for robots to do, while they do the old jobs. Our ‘job’ would be to come up with new tasks that would eventually be automated. Just gotta stay one step ahead of the robots!

My own thoughts are that there are, right now, many tasks in life that we will always want humans to do. So once we automate the dull stuff we will free our time to concentrate more on those, rather than find more dull things we can’t wait to offload.

Robot overlords

Now, I know Kelly isn’t suggesting we delegate all responsibility to robots, much less let them rule our lives. Rather, he seems to confuse productivity with achievement.

These are two types of work being discussed as if they were the same thing. Under ‘productivity’ you might put car assembly, dish washing, and parcel delivery. There is a right way to do them, and an end product or state that we can all agree on. There could be an element of satisfaction in a job well done, and exercise, which mean nothing to a robot (and in fact might represent inefficiency), but generally these things can be done satisfactorily by an automated machine. We’d benefit from passing the load onto the robot.

The second type of work is ‘achievement’. In its simplest sense this is also getting something done. But it encompasses work that we might not all agree on how to do – but we have a strong idea that our way is preferable, at least to us. We might disagree on what the end product looks like, or even demand that the end products all look slightly different.

We might also feel a strong need to have gone through the process ourselves, or know that another human did. It encompasses, if not actively embraces, mistakes. Sometimes a ‘mistake’ by a human is not a mistake; it’s a difference of opinion. Sometimes a change in our own opinion can be fruitful or wonderful. That’s one of the reasons why presents are so exciting, and why a friend’s recommendation is even more satisfying when “I’d never have chosen that myself”.

Achievement can also include ‘societal’ moments, including everything from civil rights movements to jury decisions. How do you feel when a jury gets the verdict ‘wrong’? How about if a computer had made the same decision? Under which circumstance do you feel hope for restitution? The point of a jury is not to be flawless, but for justice to be seen to be done, and fairly.

Rage against the machines

This isn’t just a matter of fairness, but of success too. I want robots to be doing all the useless toil that I get no pleasure from but have to do – but I also want to do some of those jobs myself on occasion.

I want an oven to cook my tea nine times out of ten, but I want a professional to do it every couple of months. I might want a driver to take me on my commute, but I might want to get behind the wheel for a leisurely drive around the mountains of Wales (ok, I hate driving altogether, but you get my point).

As I mentioned with the jury example above, mistakes or inaccuracies by humans are often acceptable, part of the system (whether you agree with them or not). Robotic judgements are different. For a start, they’re not ‘judgements’; they’re decisions. Same data in, same decision out. And if you put machines to work on more than just routine jobs then their mistakes – their unacceptable decisions – are going to produce a backlash, which we don’t want.

Artificially Intelligent

Of course, much will be made of computers’ increasing abilities in artificial intelligence. One day, we hear, the computers will think like us, and then we can trust them with judgements. On the contrary, once they ‘think like us’ we will no longer know how they think, and we can trust them all the less.

There would always be the nagging doubt that there was a bug in the machine; a bug you couldn’t ever find, nor question. You couldn’t even ask the robot why it decided in the way it did if it was thinking in a way we couldn’t distinguish from human. It wouldn’t know.

The thing I think Kelly gets wrong is that the ever receding limitations of computers are not the limitations we need, or would want, to remove. Sometimes the limitations are part of the system.

Already automated

There are already many things in this world that are automated, or guided by computers, for the worse. These are things which were once done by a human, slowly, but with greater knowledge, care and expertise. Today we accept that they are not done as well as they once were, but that the workload is too great to go back to the old ways, if only for the sake of affordability. Examples include:

  • Mortgage appraisals: we used to know our bank managers, who could judge whether we were credit-worthy. Now computers decide how trustworthy you are through the numbers punched into a terminal.
  • Spellcheck: a useful tool, I’ll grant you, but can it ever be perfect? Even another human would occasionally need to ask you whether you meant to put a comma there, or use their instead of they’re. There’s no like-for-like substitute for human intervention.
  • Phone based help systems: do I need to convince anyone that a properly staffed human phone system is preferable to an automated attempt? Doubly so when your aims directly conflict with those of the company you’re calling, like trying to close an account.
  • Search engines: why does a search engine return a page of results, rather than just send you straight to where it thinks you most likely need to go? If nothing else it’s because you, the searcher, are human, and even you’re not sure. But I hear that Google and pals will keep trying to decide the answer that you’re looking for, so that you don’t have to work it out yourself, and that way madness lies.
  • Poker: a proper game of poker isn’t gambling. A group of skilled players are playing each other, not the deck or the luck of the draw. That’s why the best players keep on winning, and you can win with a bad hand. But a computer opponent is a different opponent. You can’t read them; they can’t read you. A computer poker opponent is useful, but not the same.

I’m sure you can think of other places where, however good robot automation becomes, a human will always be preferable. We should avoid those of Kelly’s ideas that smack of ‘faster horses’, like creating automated delivery trucks when we should be moving freight onto rail.

Instead we should be chasing those tasks we love to do, where human ‘fuzziness’ is part of the task. And then we can hand the dull stuff over to the robots.