Pragmatism About the Future of Technology


There is a great deal of fear and cynicism about the continued acceleration of technology, accompanied by endless dystopian predictions of our demise. I sometimes imagine travelling through time to see the future for myself. I think making decisions from this perspective might prove helpful for future generations — playing on our own pride and selfishness to make life better for those to come. Realistically, we don’t need to visit the future to predict the consequences of our actions, but our certainty in any such prediction becomes vanishingly small after just a few short weeks.

We have an abundance of data — from sources ranging from historical science fiction to economic modelling — showing precisely how inaccurate our projections tend to be. It is difficult for me, as an admittedly biased member of present-day humanity, to believe that the world today might seem like some horrible dystopia to those of the near past. I think the fear and cynicism many espouse may be useful for charting our course wisely, but ultimately we tend to reach a more palatable outcome than many envision. And while I acknowledge the unfathomable suffering of many in the present — war, poverty, and oppression are still undeniable fixtures of human existence — on the whole I consider our lives better than those of our ancestors.

Technology has played a starring role in the betterment of our lives: where starvation was once a constant threat, now we can readily have delicious and, for the weird ones so inclined, nutritious food delivered to our doors. Over the past few centuries we have transitioned from battling Mother Nature to exploiting her at will. The only real threat to humanity now is ourselves — slaughtering one another over matters of opinion, while destroying our host planet like an overly ambitious virus. Admittedly, technology may accelerate our downfall, but it may equally be our saviour. Either way, it will not be technology itself at fault, but the way in which we wield it.

As an engineer, I have tacitly dedicated my life to the continued advancement of technology. I am cognisant of this bias in general, but I am often oblivious to its various manifestations. I want to encourage and understand the opinions of others, but I acknowledge that I will probably be burdened with a love of technology for life. It is hard to predict where we will be after even the few more decades I expect to live, but I am nonetheless looking forward to watching the story unfold.

There are a great number of technological hurdles left to surmount. Personally, I find the low baud rate of the human-machine interface to be a constant nuisance, calling either for some invasive augmentation of humanity or, more probably, dramatic humanisation of machines. Human-emulating machines such as androids are fascinating and compelling because they fire many of our pre-programmed neurological responses, but pragmatism calls for application-specific machines built with application-specific shapes; where Mother Nature shaped us to be generalists, the real strength of automation is built-to-purpose specialists that exceed our own abilities. Rather than build a person-shaped robot to wield a broom or wash dishes in the way a person might, it is far more practical to build a Roomba and dishwasher and celebrate their robodiversity¹. With this sort of specificity, we can rapidly meet and exceed the capabilities of humans. Unfortunately, machines that exceed our capabilities make some people, shall we say, uncomfortable.

Sensor technology is fast approaching — and by some metrics exceeding — the limits of humans. With specialisation, we can generally do better: single-photon avalanche detectors and image processing can see what the human eye cannot; ultrasonic microphones and signal processing can hear what the human ear cannot; chemical sensors can taste and smell what humans cannot. By dramatically limiting the scope of what these sensors must do, we can exceed human-level performance. Similarly, it is trivial to construct stronger or more powerful machines than humans — in fact, we’ve been doing it for centuries. We are decreasingly limited by the ways we sense or affect our environment. However, humans are still regularly included in industrial processes, typically as a means of control when decisions need to be made.

We have been removing humans from control loops since James Watt’s centrifugal governor, and we have been aggressively iterating on the concept ever since. We are making machines increasingly useful to humanity through automation, and gradually doing more useful things ourselves as we hand over control.
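The principle behind the governor, and behind most of the control loops that followed, is simple negative feedback: measure the output, compare it to a setpoint, and adjust the input to shrink the error. Here is a minimal sketch of that loop in Python; the one-line engine model, the gain, and the time step are all invented for illustration, not drawn from any real system.

```python
# Toy negative-feedback loop in the spirit of a centrifugal governor:
# keep an engine's speed near a setpoint by adjusting its throttle.
# The one-line "engine" model and every constant are illustrative only.

SETPOINT_RPM = 1000.0  # desired speed
KP = 0.002             # proportional gain, hand-tuned for this toy model
DT = 0.1               # simulation time step in seconds


def engine_step(rpm: float, throttle: float) -> float:
    """Crude engine model: throttle adds torque, friction bleeds speed."""
    return rpm + (2000.0 * throttle - 0.5 * rpm) * DT


rpm, throttle = 0.0, 0.0
for _ in range(200):
    error = SETPOINT_RPM - rpm                 # how far from the target?
    throttle = min(max(KP * error, 0.0), 1.0)  # proportional correction, clamped to [0, 1]
    rpm = engine_step(rpm, throttle)

print(f"steady-state speed: {rpm:.0f} rpm")  # settles a little below the setpoint
```

That persistent shortfall below the setpoint is the ‘droop’ that proportional-only control is known for; real governors exhibit it too, which is part of why later controllers added integral action. But the point is the shape of the loop itself: sense, compare, correct, repeat; no human required.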

Fear of machines stealing our jobs is individually heart-wrenching but generally irrational — mechanisation and automation have continually proven to increase productivity, in turn improving our standard of living without the burdensome undertaking of birthing endlessly more people to add to the workforce. Fear of our eventual obsolescence is marginally more rational, but it requires projecting so extraordinarily far into the future — beyond the predicted end of Moore’s law — that the probability of any specific projection coming to fruition is infinitesimally small. However, a large fraction of humanity believes that we will eventually create an intelligence that rivals or exceeds our own, and I am among them.

If we were to create some generalised form of artificial intelligence, and if its intelligence were to continue growing without bound, even sub-linearly — two enormous challenges, but plausible on an infinite time scale — then human-level artificial intelligence seems inevitable. The idea that this intelligence would develop without our knowledge, or outside of our control, seems extremely remote. We can be relatively sure that such an intelligence would not be burdened by human instincts and values, and ipso facto that it would not act in our same irrational ways. Some might consider this possibility terrifying. Personally, I think it extremely unlikely that such an intelligence would view us as a threat and decide to eradicate us, let alone do so in any of the myriad dystopian ways foretold in our beloved sci-fi classics.
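To spell out the step that makes this feel inevitable, in notation of my own rather than the essay’s: what matters is not the growth rate but that the growth is unbounded, because any unbounded increase eventually crosses any fixed threshold.

```latex
% I(t): machine intelligence at time t; H: "human level".
% Both symbols are my own illustration, not the essay's.
\[
  \text{If } I(t) \text{ is increasing and unbounded, then }
  \forall H \; \exists T : I(T) > H.
\]
% Even strongly sub-linear growth suffices, e.g.
\[
  I(t) = \log t \quad\Rightarrow\quad I(T) > H \text{ for every } T > e^{H}.
\]
```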

It may come as no surprise to you, given that I am kind of a robot nut, that I really loved the Terminator movies as a kid — specifically Terminator 2 — but the entire premise of this dystopian universe is irrational. It would be inefficient for machines to fire individual bullets at each person, or wipe out entire cities using nuclear weapons — the energy expenditure and collateral damage alone are illogical — unless such weapons were so readily available that it just proved convenient. It would be much easier to eliminate our species in the same way that humans might eradicate snakes or spiders — with a targeted virus. Rather than eliminate a species, however, there is a stronger argument for genetic modification — making the species more compelling or useful — in the same way that we have domesticated dogs and cattle. Not as some form of energy source, harvested in endless fields, as The Matrix suggests — that’s patently stupid. You know what’s better at generating energy than ‘body heat, combined with a form of fusion’? Fusion. What an utterly inane premise. The more likely scenario, if this imagined AI with insurmountable intellect somehow turned out to be adversarial, is that it would defeat humanity with knowledge and actions as far beyond our comprehension and perception as a virus is beyond a spider’s or a snake’s.
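A napkin-level check makes the point; every figure below is a round-number assumption of mine rather than anything canonical.

```python
# Humans-as-batteries, checked on a napkin (all figures rough assumptions).
human_heat_w = 100.0  # a resting human dissipates roughly 100 W
food_input_w = 120.0  # sustaining that takes more chemical energy in than heat out
net_w = human_heat_w - food_input_w
print(f"net power per 'human battery': {net_w:+.0f} W")  # negative: a drain, not a source

# Even ignoring that loss, a single ~1 GW fusion plant of the kind the
# machines apparently already possess would replace an absurd number of pods.
fusion_plant_w = 1e9
print(f"humans needed to match one plant: {fusion_plant_w / human_heat_w:,.0f}")
```

Whatever the machines were farming, it wasn’t watts.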

Unfortunately, to predict whether or how humanity might be a threat to such an all-powerful machine, we would first need to know its purpose. Why does a sentient being choose to exist? Why would it choose for another species to cease existing? Humans, on average, just seem to follow their genetic programming: we seek pleasure within the confines of some (varied) human-centric value system, occasionally procreate, then die. Lather, rinse, repeat. Thus the chain of humanity unfolds, and yet we haven’t reached a unanimous conclusion about the point of the whole exercise. What about a being that could literally rewrite its own programming? Would such a creation be a net positive to the universe? Is it even possible for humanity to comprehend, let alone welcome, its own obsolescence?

The idea that machines would be interested in dominating Earth seems unrealistic — with all its inconvenient gravity and atmosphere making movement inefficient, and unnecessary weather always making things wet, not to mention the finite resources of iron and copper. The universe is a big place. Surely there would be somewhere more suited to the purposes of our AI progeny. Perhaps it would be more convenient to simply orbit the sun — collecting abundant energy to power its circuits — if indeed its purpose were as basic as continued existence.

Ultimately, I find most of our human-centric fears about our human-centric future utterly baffling: the catastrophes they imagine are wildly improbable, and there is little evidence for them beyond our own capacity to fear change. Certainly, we will make many missteps as technology advances. Perhaps, yes, billions could die at the hands of technologies we create if all goes catastrophically wrong. But perhaps technology will just slowly and surely improve the quality of human life and incrementally help us to understand why we are here. It’s impossible to predict, but I’m hoping for the latter. Knowing as little as I know, I think it’s worth the risk.


  1. See Robot or Not for a light-hearted discussion about what constitutes a robot.