Why AI singularity just doesn’t exist1


And if it does, why we shouldn’t care

    © Christian Müller 2018  
    Compared to the standards of the 1980s, almost everyone now carries around a supercomputer capable of all sorts of things far beyond anyone’s imagination thirty years ago. The development of information technology at large has not only scaled computers down in size and up in power; it has been, and still is, mind-boggling in a great many respects.  
    Today, much attention is paid to big data, machine learning, smart networks, quantum computing, robots and other buzzwords that promise either utopia or dystopia as the outcome of accelerating technological progress.  
    The highest pinnacle of all, it seems, is artificial intelligence (AI). As the term suggests, artificial intelligence is set to replicate a human capability so far considered unique among all matter, dead or alive. On the face of it, machines have in fact come extremely close to human intelligence, as our species’ world champions of chess and Go have learned the hard way. Machines are so close now that some already believe the moment of AI singularity is within reach. But is it really?  
    Though definitions vary in their details, AI singularity is generally taken to be the moment at which computer intelligence matches human intelligence to the extent that machines become self-conscious and indistinguishable from humans in all their mental capabilities and skills.2 If this moment arrives, the logic goes, humans would become dispensable because there would no longer be any task or role left for us.  
    So while technological progress at large enjoys the benefit of the doubt that it may yield more good than bad, AI singularity would be certain death by success for humans. Sounds frightening? Rest assured: the moment of AI singularity will either never come or, if it does, we need not care.  
    Why will AI singularity never arrive? For a start, let us think through the implications of the AI singularity hypothesis. If we seriously entertain the idea of AI singularity, we should ask ourselves what the new, self-conscious, intelligent race would think about itself. It would certainly first of all recognise its self-consciousness, its intelligence and its unique position on Earth, to say the least. Sounds familiar? It should, because that is more or less what we think of ourselves.  
    In short, if we merely assume that anything like the AI singularity moment may exist, we must accept that this moment may well have happened already. We would have to accept that there is no way of proving that we ourselves are not the result of some AI singularity that occurred ages ago, perhaps as the result of someone running an experiment in which machines replicate themselves. If so, everything we see, feel, think and so on would simply be the product of some supremely clever algorithm that we will never be able to decipher, because we are inside the box with no chance of ever looking at ourselves from the outside.  
    Therefore, simply hypothesising AI singularity means accepting that AI singularity has, in fact, already happened. But if it has already happened, then we also have to live with the fact that humankind, too, is simply a form of created machine, irrespective of our definitions of living and dead matter and of everything else we think makes us special.  
    There is, logically, only one way to escape this conclusion: we simply have to do away with the idea of AI singularity altogether. AI singularity just does not exist.  
    Now suppose, however, that AI singularity does exist. If it does, we must return to our earlier conclusion that we are the product of some earlier AI singularity and that we, therefore, live inside some very clever machine that stealthily rules our lives. What we believe to be our faculty of free will, our self-determination, or whatever we want to call it, would hence be nothing but a perfect illusion. All this being an illusion, our capacity to postpone or advance the moment of AI singularity would be an illusion as well. Therefore, if against all odds AI singularity did exist, we would not need to care, because we could not do anything about it anyway.  
    Having established that AI singularity either does not exist or, if it does, that we do not have to care, is AI still worth discussing? The answer is an obvious “yes”.  
    The answer is yes because AI singularity should make us think about what it really means to be human. Technological progress has demonstrated that a great many superficial human skills, from solving higher-order differential equations and learning to play chess to outer-space exploration and burger flipping, can all be done much, much better by machines than by us. None of this, therefore, can be the essence of being human.  
    So what is left for us? While machines are apparently very helpful for all sorts of troubleshooting, it is still we who cause all the trouble and create all the interesting problems to be solved. In fact, human creativity, feeling and compassion are the features that machines lack and, at the same time, the things that make our lives worth living.  
    The conclusion to be drawn from this is yet more proof that reality always bends a little towards the paradoxical and the ironic. Creating the incredible powers of machines teaches us not to try to keep up with our technologies’ troubleshooting might but to focus on developing our human creativity instead. Therefore the arts, the humanities and care for our fellow humans should, and hopefully will, assume a much stronger role in education and public discourse.  
    The bottom line, then, is this: let us continue to push the limits of technological progress and enjoy the benefits of ever better, though never perfect, machines. At the same time, however, let us also push the limits of our human core. If we do so, we will always be the ones asking the questions, and we will let the machines work out the answers.  
1 See: Uncertainty and Economics: A Paradigmatic Perspective (Routledge, 2019), https://t.co/HI2RyNu7GJ .
2 For an early mention, see Vernor Vinge: The Coming Technological Singularity: How to Survive in the Post-Human Era, Department of Mathematical Sciences, San Diego State University, 1993.
    Jacobs University Bremen