At first blush, inequality and artificial intelligence (AI) don’t seem to have much in common, but scratch the surface and they begin to look like twins separated at birth. Artificial intelligence is the creation of software that can improve itself, to the point where machines might become sufficiently human-like for us to think of them as intelligent, potentially even conscious, and therefore deserving of, if not ‘human rights’, then at least human-like rights. Ultimately, humanity might be nothing more than “the biological boot loader for digital superintelligence”, as Elon Musk put it. That is, the whole 3.4 billion years of biological evolution on planet Earth, ending with us, will give way to much more rapid evolution in the software we humans create. Just as the canals were used to transport the steel for the railways that then made the canals unviable, so we may be the last step in the biological chain.
How much of this happens, and how fast, is the subject of many books, but we have already taken many of the early steps along the way, in software, computing power, robotics, and so on. These developments have wide-ranging social impacts, many of them bearing on social inclusion, fairness and equality. And the more we move towards full AI, the more these intervening developments will impact on social justice.
For those who understand these new AI technologies, and can fund them and profit from them, there will be untold wealth. We ain’t seen nuthin yet. Imagine how much money a company like Uber would make if it used driverless cars, or how much Apple would save if it needed not factories full of people but only machines. Imagine the same happening to, say, law or medicine, when one company and its robots can handle your house sale better than any human solicitor, or diagnose illnesses better than any GP. Some will get stupendously wealthy; most of us will be out of work.
The central issue posed by AI is not “Humans vs Machines” – it’s “Humans who build and own machines vs Humans who don’t”.
We have not yet begun to take seriously the issues we confront: the meaning of human life, who deserves what rights, how the bounty of our environment is to be shared and protected, and many other huge questions. But we can start by educating ourselves. Dr Michael Wilby (Course Leader and Senior Lecturer in Philosophy at Anglia Ruskin University) is holding a series of lectures on the Philosophy of AI this December and next year. These are:
Thurs 6th December 2018: Dr Rupert Read (UEA) – AI, The Future of Work and Sustainability
Thurs 13th December 2018: Dr Kanta Dihal (Cambridge) – AI Narratives
Thurs 28th March 2019: Dr Sam Coleman (Herts) – AI and Consciousness
Thurs 2nd May 2019: Dr Karina Vold (Cambridge) – AI and Ethics (topic tbc)
Should be fascinating!